AI Makes Anthrax Bioterror Detection Easier

Deep learning AI can identify individual anthrax spores in seconds within special microscope images

Anthrax under the microscope
Image: Chris Bickel/AAAS

In the wake of the 9/11 terrorist attacks, separate terror incidents involving letters laced with anthrax killed five Americans and sickened 17 in what the FBI describes as the worst biological attacks in U.S. history. Detection of such lethal anthrax spores could get a speed boost from artificial intelligence that has learned to identify the telltale patterns of the dangerous bacteria within microscope images.

The method for swiftly detecting a deadly biological agent with a mortality rate of more than 80 percent relies on two key components: holographic microscopes, which measure the unique patterns of light scattering and refraction as light passes through the cell structures of single-celled organisms; and artificial intelligence (AI) based on deep learning, trained to distinguish Bacillus anthracis from other Bacillus bacteria.

“This study showed that holographic imaging and deep learning can identify anthrax in a few seconds,” says YongKeun “Paul” Park, associate professor of physics at the Korea Advanced Institute of Science and Technology (KAIST). “Conventional approaches such as bacterial culture or gene sequencing would take several hours to a day.”

The research published by Park and his colleagues in the 3 August 2017 online issue of the journal Science Advances may have implications beyond just identifying anthrax. Their HoloConvNet system is a multilayered neural network, loosely modeled on biological brains, that can use deep learning to identify any single-celled organism, as long as the network is first trained on datasets of images of that particular organism.
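HoloConvNet is a convolutional neural network, meaning it slides small learned filters across an image, pools the strongest responses, and maps the result to a class prediction. The article doesn't spell out the network's architecture, so the sketch below is purely illustrative: the image size, filter counts, layer arrangement, and class names are assumptions, not details from the paper. It shows a single untrained forward pass in plain NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels):
    """Valid-mode 2-D convolution of one image with a stack of kernels."""
    kh, kw = kernels.shape[1:]
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((len(kernels), oh, ow))
    for k, kern in enumerate(kernels):
        for i in range(oh):
            for j in range(ow):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def maxpool(maps, size=2):
    """Downsample each feature map by taking the max over size x size tiles."""
    c, h, w = maps.shape
    h, w = h // size * size, w // size * size
    tiles = maps[:, :h, :w].reshape(c, h // size, size, w // size, size)
    return tiles.max(axis=(2, 4))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Mock 32x32 phase image standing in for a holographic measurement.
hologram = rng.standard_normal((32, 32))

kernels = rng.standard_normal((4, 5, 5)) * 0.1          # 4 learned 5x5 filters
features = np.maximum(conv2d(hologram, kernels), 0.0)   # convolution + ReLU
pooled = maxpool(features)                              # downsampled feature maps

classes = ["B. anthracis", "other Bacillus", "non-Bacillus"]  # illustrative
weights = rng.standard_normal((len(classes), pooled.size)) * 0.01
probs = softmax(weights @ pooled.ravel())               # final classification layer
print({c: round(float(p), 3) for c, p in zip(classes, probs)})
```

With random weights the output probabilities are meaningless; training would adjust `kernels` and `weights` by backpropagation over many labeled holograms until the network reliably separates the classes.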

For example, the Korean researchers showed the broader potential of deep learning and holographic imaging by having HoloConvNet identify the Listeria monocytogenes bacteria responsible for food-transmitted infections in pregnant women, newborns, and the elderly. That could provide physicians and researchers with a tool for much faster diagnosis of the microscopic critters responsible for a number of different illnesses such as sepsis.

Park also pointed to the possibility of swiftly identifying dangerous bacteria responsible for food poisoning, which is no small thing considering that food-borne pathogens sicken about 50 million people and kill 351,000 each year.

Still, the new potential to quickly identify individual anthrax spores may have special resonance given that aerosolized anthrax capable of being spread over large areas and inhaled by hundreds or thousands of people remains a bioterrorism nightmare for experts. Anthrax is the main culprit in a suspected bioweapons production mishap that killed about 100 Soviet citizens in 1979. Then there were the anthrax-laced letters that killed a handful of Americans and sparked a massive FBI investigation in 2001. Countries considered hostile to the United States, such as North Korea, have attempted to develop weaponized anthrax spores in past decades.

The possibility of totalitarian North Korea developing weapons of mass destruction has been particularly concerning for democratic South Korea, given that the two countries share one of the most heavily fortified borders in the world and technically remain in a state of war following a ceasefire in July 1953 that brought the Korean War’s fighting to a close. 

“For decades, many studies have tried to establish effective early warning systems for anthrax attacks,” Park says. “However, the limited sensitivity of conventional biochemical methods essentially requires preprocessing steps, and thus the consequent limitations in detection speed hamper their use in the realistic setting of biological warfare.”

It was not easy for the Korean researchers to more rapidly identify anthrax. They first tried using conventional supervised machine learning algorithms to identify the anthrax spores, but those could pin down only the genus of the samples (the taxonomic rank one step above species). Success came only after the researchers turned to deep learning.
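The article doesn't say which supervised algorithms the team tried first, but a typical classical pipeline reduces each image to a few hand-crafted summary statistics and feeds them to a simple classifier, such as the nearest-centroid sketch below. Everything here is illustrative: the features, the synthetic "holograms," and the class labels are assumptions, and the coarse features are exactly the kind that can separate broad groups while leaving closely related species indistinguishable.

```python
import numpy as np

rng = np.random.default_rng(1)

def extract_features(img):
    """Hand-crafted summary statistics, the kind a classical pipeline uses."""
    return np.array([img.mean(), img.std(),
                     np.abs(np.diff(img, axis=0)).mean()])

# Synthetic stand-ins for two hypothetical classes; real data would come
# from the holographic microscope, and these labels are illustrative only.
def make_class(mean, n=50):
    return [rng.normal(mean, 1.0, (16, 16)) for _ in range(n)]

train = {"genus A": make_class(0.0), "genus B": make_class(3.0)}
centroids = {label: np.mean([extract_features(im) for im in imgs], axis=0)
             for label, imgs in train.items()}

def classify(img):
    """Assign the label whose feature centroid is nearest to the image."""
    f = extract_features(img)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

print(classify(rng.normal(3.0, 1.0, (16, 16))))  # → genus B
```

A classifier like this separates groups whose summary statistics differ strongly, but two species of the same genus produce nearly identical feature vectors; a deep network that learns its own features from the raw hologram can pick up subtler structure.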

Getting clearance to conduct this latest experiment also took almost a year because they needed access to a biosafety level 3 (BSL-3) laboratory at South Korea’s Agency for Defense Development. Sangjin Park, one of the coauthors of the published research, had access to the lab as an employee of the Agency for Defense Development. But he had to wear the usual protective clothing while handling the anthrax samples and using the holographic microscope inside the lab.

Holographic microscopes hold the promise of “label-free quantitative imaging of live cells and tissues” that does not rely on adding fluorescent dye or proteins to the samples in order to better see the internal structures of cells. But it’s still a relatively young technology that needs more widespread use so that researchers have access to a bigger database of holographic images that can train deep learning AI to identify different pathogens, Park says. He serves as chief technology officer and cofounder of Tomocube, Inc., a startup that aims to eventually offer a service combining holographic microscopy and deep learning AI.

“One of the big challenges is to pile up ‘big data’ of holographic images for more precise and efficient diagnosis of pathogens or diseases,” Park says. “It cannot be done by a few research groups, and this is why we have commercialized holography microscopy so that it can be used in numerous research laboratories and hospitals as a platform device.”
