Steven Cherry: Hi, this is Steven Cherry for IEEE Spectrum’s “Techwise Conversations.”
A few million people probably first thought about the security of pacemakers and other implantable medical devices last December when watching the TV show “Homeland.” The character of Nick Brody contributes to an electronic attack on the pacemaker of the U.S. vice president. The pacemaker is made to fail once the attackers get some key security information from Brody.
Certainly, ever since the first pacemaker implants of the 1960s, biomedical engineers have made remarkable strides with implantable medical devices. IMDs, as they’re called, are delivering painkillers and insulin at proper rates; they’re measuring our vital signs and reporting them to doctors and nurses; and, of course, they’re still making sure our hearts beat as steadily as metronomes.
But these devices are essentially embedded computers, and with computers come questions of hardware and software security.
Five years ago, my guest today considered the very sort of hack rather sensationally depicted in the Showtime TV show, in a paper demonstrating a vulnerability in the security of pacemakers.
Kevin Fu is an associate professor of electrical engineering and computer science at the University of Michigan, where he’s currently teaching the first graduate course in the U.S. on medical device security. He joins us by phone.
Kevin, welcome to the podcast.
Kevin Fu: Thank you. Glad to be here.
Steven Cherry: Your 2008 attack took about [US] $30 000 in gear. It required what The New York Times called “a sustained effort by a team of specialists.” And the pacemaker had to be centimeters away from your equipment. Would it be easier or harder today?
Kevin Fu: That’s a good question. Analyzing the security of a system never gets—very rarely gets—harder. Now, it’s unknown, I think, how much it would actually cost to carry out today, so there have been some others who have reproduced the work lately. I saw one team of MIT graduate students who reproduced the work in, I think, about a weekend. But I think the important question is, What are the defenses being put in place?
Steven Cherry: Yeah, let’s talk about those.
Kevin Fu: Sure. So, there’s research defenses, and then there’s sort of the practical defenses, and in our world, in the university world, we think about a decade or two out. Some of the research is looking at, for instance, how to do “friendly jamming,” that is, how to stop the insecure transmissions in a way that does not require surgical intervention.
But on the practical side, in the manufacturing world, there’s quite a bit of navel-gazing on just how the heck do we get security thinking into the manufacturing process. Because those who have been in the security business long enough know that it’s very difficult to integrate security after the fact, to bolt it on. It really needs to be designed in from the beginning. And in the medical-device world, that’s even more so. It’s much harder to change a medical device after the fact, for various practical reasons.
And fortunately, we’re starting to see some glimmers of hope at some of the industrial workshops—some early standardization of security for medical devices. We’re starting to hear, more often, companies talking about security during the concept phase. That is, before a single line of code is written, before a single piece of the design has been set, security has already made its way into the process because it is now something important, along with the safety and effectiveness.
Steven Cherry: Kevin, I’m not going to ask you to comment on the “Homeland” plot unless you want to, but it seems they got one thing right: The point of entry, so to speak, was a function that doctors use to make changes to the pacemaker settings, and pacemakers do, in fact, have that kind of back door built into them. Isn’t that just asking for trouble?
Kevin Fu: So, you have a couple of questions there. I think the technically interesting question is, Why the heck is there a back door in a cardiac device? And there is actually a good clinical reason. The reason is, imagine you get a new device and it gets implanted. How do you do an end-to-end test to verify that the device has been implanted properly and can detect a cardiac arrhythmia? Well, you need a function to induce a condition that can then be restored, and that condition would be a particular kind of fatal heart rhythm.
So this is built into the devices, at least the ones I work with. I do think there should be better authentication for that function, but patients are much better off leaving the clinic knowing that the device actually works, rather than leaving just sort of thinking maybe it works. So that’s sort of answer one to “Why the heck is it there?”
Of course, unauthenticated seems a little bit suspect to me. [It] may have been fine back in the day, when these devices weren’t using radio communication. But the moment you start adding networking and wireless communication, it changes the threat model. It changes the threat landscape, unfortunately. The good news is, as far as we know, there aren’t any attacks in the wild. The patients are much better off with these devices than without, given their preexisting risks.
Steven Cherry: Yeah, I wanted to point that out myself, that other than on the Showtime TV network, there haven’t been any hacks of IMDs. Do you—outside of a Mission: Impossible setup (or the “Homeland” setup), where you’re going after one particular person and have almost unlimited resources, because it’s sort of like an act of war or something—do you think this sort of thing could happen and would happen? How serious are the risks right now?
Kevin Fu: Well, perhaps I’m in a different world, but I don’t usually think about “could it happen,” because anything can happen. I can get hit by a bus while walking across the street. All these devices today are becoming wireless and networked, so in my mind I don’t dwell too much on the drama of “could it happen,” because I know it could happen—anything could happen. So we like to think a little more on the, well, less-sexy material, just the “how do we make these things more secure.” And that does not usually lend itself to a plot in a thriller.
Steven Cherry: You mentioned friendly jamming before, and you’ve designed a security device for implantable medical devices, and you call it a shield. And I guess it intercepts calls to the IMD and then decides whether to pass them on? How does it work?
Kevin Fu: You’re referring to a project that was led by Shyam Gollakota, at the University of Washington, and Dina Katabi, at MIT, and the interesting thing about that work is they discovered a way to create a little external device that would jam the insecure communication to and from an implanted device and would essentially overlay a new secure wireless communication mechanism.
And the beauty is that this technique paradoxically allows you to add security after the fact; it’s pretty rare that you can actually do that effectively. So they were able to design a technique that can take an insecure implant and make it more secure. The technically hard part of the approach is the difficulty of jamming at such close proximity. It’s very hard to jam and then listen at the same time, because you might be jamming yourself. And so the innovative part of their work was to make the jamming effective while still being able to listen to the signal itself, jamming only the undesirable parts.
Steven Cherry: You mentioned the reason there are back doors in pacemakers, and, you know, there sort of is a good reason in the first place. There are very good reasons to have cellular or Internet connectivity in an implantable medical device. Maybe you could just talk about that for a minute.
Kevin Fu: One of the great reasons for having wireless communication in an implant is that you increase the sterile zone. Now, this is a very foreign concept to the typical computer scientist, who thinks of viruses as digital as opposed to biological. So, one of the common problems during surgery is infection. About 1 percent of surgical implantations have major complications, and of those, about 1 percent are fatal.
So one way to reduce this morbidity and mortality is to increase the sterile zone. What the wireless communication does is move computer equipment away from the patient, reducing the number of foreign objects that come near the patient. So that was one of the big reasons for the use of wireless communication: a safety reason.
And a second reason for the wireless communication was simply more information. Now you can collect information that previously required physical intervention to collect. In the old days, when you’d reprogram a pacemaker—let’s say you wanted to change the settings a bit—what you’d do is take a needle, plunge it through the armpit of the patient, and twist. And that would effectively twist a little implanted dial to change the settings. So, of course, wireless or radio-based communication is much more desirable than those kinds of physical interventions.
Steven Cherry: Kevin, the iPod was originally a humble little device that ran just enough software that you could copy songs to it and play them. Now we have iOS—it’s up to version 6—and it runs smartphones and tablets too. Are implantable medical devices going to have operating systems?
Kevin Fu: Well, implantable medical devices today—well, if you’ve seen one, you’ve seen one. They’re all going to be a little bit different. But the ones we’ve seen are mostly custom operating systems—usually no operating system at all. They’re very, very resource-constrained. They’re much closer to what we’d call a “microcontroller design” as opposed to a microprocessor. Now, the external devices, you’ll start to see some operating systems much more often, but the implants today, I haven’t seen any.
Steven Cherry: You teach a course now on medical device security. Why is that necessary?
Kevin Fu: So, I’m teaching a course on medical device security because I find that it’s very hard for manufacturers to find students who’ve been exposed to the interdisciplinary problems and approaches to solve those problems for medical device security.
Literally, we have companies saying, “Hey, do you know anyone we can hire?” And I say, “No. Nobody’s really training these people. There are some good computer scientists and some good biomedical engineers. But these are completely different cultures.”
So part of the course is about exposing the students from different walks of life, from different disciplines, to the computer science problems, to the medical and human factors problems, to the regulatory issues as well.
Steven Cherry: I guess a course like that is a necessary step down the road toward really securing these devices. But won’t a lot of information eventually find its way to the black hats as well?
Kevin Fu: Well, I think your question is sort of saying, “If you’re analyzing systems, couldn’t that help the bad guys?” I have a couple of answers to that. One is, in our course, we don’t do a whole lot of analysis; we mostly do constructive work. Again, this is the kind of work that wouldn’t make headlines for any kind of television drama. For instance, “requirements specification.” That wakes every student up, when we start talking about requirements and specifications.
But it’s a very important and fundamental process that goes into medical device manufacturing to increase its safety, its effectiveness, and also its security. There’s quite a few people doing analysis. I think you’ll always see that. And I think it is an important element to gaining new knowledge.
In my personal opinion, it’s much better to know about a problem than to pretend it doesn’t exist and then be confronted with it in a few years. So I think analysis serves a purpose, but it’s important to have a balanced approach and make sure that there are also some solutions being explored, because it’s often much harder to find solutions than it is to find problems. There are plenty of problems out there; they’re just extremely difficult to solve.
Steven Cherry: You had a paper at the 2012 Design Automation Conference titled “Design Challenges for Secure Implantable Medical Devices.” If you could address one design challenge and get manufacturers to concentrate on that, what would it be?
Kevin Fu: That’s a good question. I’d say one of the biggest design challenges for medical devices in general, not just implants, but medical devices, is how to incorporate the security life cycle into the product design.
So security, unlike most features, is not actually a thing. It’s sort of a process, sort of a philosophy, sort of a property. It’s much more like insurance. And I think it’s very difficult right now: manufacturers haven’t quite figured out what the security thing is and how to reason about it.
I’ll just give you one example: Very often you’ll find medical-device manufacturers disallowing hospitals from applying software updates to the operating system on a bedside medical device. And this leads to problems, because, of course, if your operating system isn’t patched, it’s much more likely to be infected by malware. But there’s a trade-off going on: they haven’t quite figured out how to increase security while also making it easy for the manufacturer to validate safety and effectiveness and to manage changes to the software.
Steven Cherry: So it’s possible to go into a hospital right now and come down with a virus, and a computer virus?
Kevin Fu: Oh, yes. It’s possible. In fact, I’d be really surprised to walk into a hospital and not find malware, just because all hospitals are purchasing from some of the same places, with the same challenges. Hospitals are having a really hard time managing medical devices, because the change management is just extremely challenging.
Steven Cherry: Well, Kevin, this is all just another way in which my grandparents would be completely lost in the modern age, but, of course, would have happily enjoyed the benefits of IMDs. Thanks for your work, and thanks for joining us today.
Kevin Fu: Sure. And if there’s any last word, I think patients are still much better off with these devices than without, and I still see it as a very optimistic world where we’ll have patients with better health and longer, healthier lives.
Steven Cherry: We’ve been speaking with University of Michigan professor Kevin Fu about hacking pacemakers and other implantable medical devices.
For IEEE Spectrum’s “Techwise Conversations,” I’m Steven Cherry.
Image: Max Santos/iStockphoto
This interview was recorded Tuesday, 5 March 2013.
Segment producer: Eliza Strickland; audio engineer: Francesco Ferorelli
Read more “Techwise Conversations,” find us in iTunes, or follow us on Twitter.
NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s audio programming is the audio version.