Robotic systems are, at last, beginning to take over some of the burden of managing the fluctuations in blood glucose in patients with Type 1 diabetes. But a new report warns that as the systems get adopted more widely, the risk of criminal eavesdropping and sabotage will also increase.
The report, by Yogish C. Kudva and colleagues at the Mayo Clinic in Rochester, Minn., and at the University of Virginia in Charlottesville, appears in Diabetes Technology & Therapeutics.
“Deliberately wrong (high) glucose data sent to an unprotected mobile computing platform may cause the algorithm to deliver excessive insulin, whereas incorrect low glucose values could cause it to deliver too little,” the researchers write.
Make the machine administer too little insulin, and the blood-glucose level may rise high enough to send the patient into a ketoacidotic coma. Make it administer too much, and the glucose falls until the brain falters, causing the patient to faint or even die. To a bad actor, it might look like the way to commit the perfect murder.
Patients are particularly vulnerable to low blood glucose when sleeping. In fact, heading off such nighttime episodes is a chief selling point for the most advanced commercial artificial pancreas, the MiniMed 640G, which was recently approved in Australia and Europe. If its algorithm predicts that a sleeping patient’s blood glucose is about to fall too far, the machine will sound an alarm; if the patient still doesn’t respond, the machine will stop the flow of insulin.
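That alarm-then-suspend decision logic can be sketched roughly in Python. To be clear, this is an illustration only, not Medtronic’s actual algorithm: the 70 mg/dL threshold, the 30-minute horizon, and the simple linear forecast are all assumptions.

```python
LOW_THRESHOLD_MG_DL = 70   # assumed hypoglycemia floor
FORECAST_MINUTES = 30      # assumed look-ahead horizon
SAMPLE_INTERVAL_MIN = 5    # typical continuous-glucose-monitor cadence

def predicted_glucose(readings):
    """Linearly extrapolate the last two sensor readings forward in time."""
    if len(readings) < 2:
        return readings[-1]
    rate = (readings[-1] - readings[-2]) / SAMPLE_INTERVAL_MIN  # mg/dL per minute
    return readings[-1] + rate * FORECAST_MINUTES

def pump_action(readings, alarm_already_sounded, patient_responded):
    """Alarm first on a predicted low; suspend insulin only if no response."""
    if predicted_glucose(readings) >= LOW_THRESHOLD_MG_DL:
        return "normal"
    if not alarm_already_sounded:
        return "alarm"
    if not patient_responded:
        return "suspend"
    return "normal"
```

A gently falling trace (say, 120 then 118 mg/dL) projects well above the floor and delivery continues; a steep fall (90 then 70 mg/dL) first triggers the alarm, and only an unanswered alarm suspends the pump.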
The researchers note that standards are already being developed to ensure that all the parts of the artificial pancreas—the glucose sensors, the insulin pump, and the computer—are interoperable. They say these standards also ought to include provisions for encryption and other security measures. They also suggest that the system seek a second opinion by submitting its operations to the inspection of “intelligent safety algorithms, informed by additional data such as insulin delivery history.”
Such a safety algorithm might suspect foul play if something extraordinary seemed to happen—for instance, if the sensors reported a sudden rise in blood sugar in the middle of the night, long after the patient’s final meal of the day. This could raise a red flag, inducing the artificial pancreas to wake the patient up to make an independent test of his blood-glucose level.
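One way such a check might look, sketched in Python: flag any glucose jump that is physiologically implausible for the middle of the night, hours after the last meal. The rise-rate limit, the night window, and the six-hour fasting cutoff here are my assumptions for illustration, not anything specified in the paper.

```python
from datetime import datetime

MAX_PLAUSIBLE_RISE_MG_DL_PER_MIN = 4   # assumed physiological rise limit
NIGHT_HOURS = range(0, 5)              # assumed "long after the last meal" window

def suspicious_reading(prev_mg_dl, curr_mg_dl, minutes_elapsed,
                       when, hours_since_meal):
    """Return True if a nocturnal glucose jump looks implausible enough
    to wake the patient for an independent fingerstick test."""
    rise_rate = (curr_mg_dl - prev_mg_dl) / minutes_elapsed
    nocturnal_fast = when.hour in NIGHT_HOURS and hours_since_meal > 6
    return nocturnal_fast and rise_rate > MAX_PLAUSIBLE_RISE_MG_DL_PER_MIN
```

Under these assumptions, a sensor that suddenly reports a leap from 110 to 250 mg/dL at 2:30 a.m., eight hours after dinner, raises the red flag; the same jump in mid-afternoon, when a meal could explain it, does not.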
The artificial pancreas is the culmination of a 50-year slog in bioengineering—one that is finally paying off because of improvements in insulin, sensors, and algorithms. Read all about it in the upcoming June issue of IEEE Spectrum, which is devoted to a single topic: “Hacking the Human OS.”
Philip E. Ross is a senior editor at IEEE Spectrum. His interests include transportation, energy storage, AI, and the economic aspects of technology. He has a master's degree in international affairs from Columbia University and another, in journalism, from the University of Michigan.