A Brainy Approach to Image Sorting

DARPA project reads the brain waves of image analysts to speed up intelligence triage

PHOTO: Jonathan Nourok/Getty Images

3 April 2008--We may need computers to tell us the square root of 529 679, but for now, at least, they still need us to recognize a kitten napping in a box of yarn. The point goes to the humans for our keen sense of the relationships between objects, our eye for texture, and our understanding of emotional relevance, but we don't wield these abilities with great speed. This slowness, unfortunately, has caused intelligence agencies a good deal of distress. They collect surveillance images from satellites, infrared sensors, and aerial-mounted cameras so quickly that analysts struggle to keep up.

But what if we could combine the speed of a computer with the sensitivity of the human brain? Teams of researchers at Honeywell, Teledyne Scientific and Imaging, and Columbia University are busy hooking image analysts up to EEG machines, reading their brain activity, and speeding up data sorting sixfold. Their research is for a Defense Advanced Research Projects Agency (DARPA) program called Neurotechnology for Intelligence Analysts, which began its second of three phases this year. Each phase whittles down the number of participating research teams, and by the end, DARPA expects to have one team with a superior system.

“This [system] could be used for searching for desired images in a large database of images. It would be faster than a manual search,” says Deniz Erdogmus, a computer science professor at Oregon Health & Science University, in Portland, who collaborates with the group at Honeywell. Erdogmus presented an EEG approach to image triage on 2 April at the IEEE International Conference on Acoustics, Speech, and Signal Processing, in Las Vegas.

Erdogmus explains that it takes humans about 300 milliseconds to consciously recognize specific information in a picture--an adult face among children, for example. It takes another 200 ms for the person to react physically, say, by pushing a button as an analyst would do. But even before a person is conscious of what he or she is seeing--about 150 ms after being shown an image--the electrical activity in the brain's visual cortex has already spiked. The activity is called an event-related potential, or ERP.
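The arithmetic behind those timings is worth spelling out. A rough back-of-the-envelope sketch (illustrative only, using the figures quoted above):

```python
# Illustrative arithmetic from the timings Erdogmus cites.
conscious_recognition = 0.300   # s: conscious recognition of the target
motor_response = 0.200          # s: additional time to push a button
erp_onset = 0.150               # s: visual-cortex spike after image onset

button_press_total = conscious_recognition + motor_response   # 0.500 s
print(button_press_total / erp_onset)   # the ERP arrives ~3.3x sooner
```

In other words, the brain signal is available more than three times earlier than a button press, which is what makes presenting several images per second feasible at all.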

In Erdogmus's experiments, which DARPA funded, six professional image analysts watched as aerial photographs flashed on a computer screen, more than five of them per second. The analysts were told to search the terrain for large targets, such as golf courses. Meanwhile, a 32-electrode EEG cap, plastered to the analysts' heads, detected brain activity that was then recorded in a separate computer. After the experiment, Erdogmus ran the recordings through a program that flagged any pictures whose appearance coincided with an ERP. While his analysis pulled out many false targets, it rarely missed a real one. Even if it were used to isolate candidate targets for another analyst to scrutinize more closely, the technique could save a lot of time, says Erdogmus. For the system to meet DARPA standards, the analysis will have to happen concurrently with the recordings. The research team at Columbia University, in New York City, has already shown that it can analyze its data in real time, says Paul Sajda, an associate professor of biomedical engineering and the project leader at Columbia.
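The offline flagging step can be pictured as matching each detected ERP back to the image shown roughly 150 ms earlier. A minimal sketch of that idea, with hypothetical timings, tolerance, and function names (not the teams' actual code):

```python
# Hypothetical sketch: match detected ERP times back to the images
# shown ~150 ms earlier, at a presentation rate of 5+ images per second.

ERP_LATENCY = 0.150   # s from image onset to visual-cortex spike (from the article)
TOLERANCE = 0.050     # matching window in seconds (assumed value)

def flag_images(image_onsets, erp_times):
    """Return indices of images whose onset precedes an ERP by ~150 ms."""
    flagged = []
    for i, onset in enumerate(image_onsets):
        expected_erp = onset + ERP_LATENCY
        if any(abs(t - expected_erp) <= TOLERANCE for t in erp_times):
            flagged.append(i)
    return flagged

# Ten images shown every 200 ms (five per second); two ERPs detected.
onsets = [i * 0.200 for i in range(10)]
erps = [0.350, 1.152]
print(flag_images(onsets, erps))   # [1, 5]
```

At five images per second the onsets are only 200 ms apart, narrower than the brain's own response, which is why the matching window matters and why overlapping responses (discussed below in the article) become a problem.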

One main challenge in using the technique has been clearly detecting a signal against the background of normal brain activity. The Oregon lab uses a commercial EEG electrode cap that detects and evenly weights signals from all parts of the brain. The baseline hum of activity in the human brain produces a voltage signal of 10 to 100 microvolts, while the ERP signal has an amplitude of only 1 to 10 microvolts.
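Those numbers make clear why single-trial detection is hard: the signal can be an order of magnitude smaller than the noise. The classic remedy in ERP research is averaging aligned trials, which shrinks random noise by roughly the square root of the trial count. A toy simulation of that effect (a generic illustration, not necessarily what the DARPA teams do for real-time, single-image detection):

```python
# Toy illustration: a ~5 uV ERP buried in ~50 uV background noise.
# Averaging N aligned trials cuts the noise std by about sqrt(N).
import random

random.seed(0)
N_TRIALS = 400
ERP_AMPLITUDE = 5.0    # microvolts (within the 1-10 uV range cited)
NOISE_RMS = 50.0       # microvolts (background activity)

def trial_sample():
    # One sample at the ERP peak: signal plus Gaussian background noise.
    return ERP_AMPLITUDE + random.gauss(0.0, NOISE_RMS)

avg = sum(trial_sample() for _ in range(N_TRIALS)) / N_TRIALS
print(round(avg, 1))   # close to 5.0; residual noise std is ~50/sqrt(400) = 2.5 uV
```

The catch, of course, is that image triage cannot average 400 presentations of the same picture, which is exactly why pulling the ERP out of a single trial is the hard part.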

Another problem is that the brain continues to respond electrically even after the image disappears, which makes it difficult to match signals with the pictures that evoked them. In an effort to get around that problem, Erdogmus has been refining a strategy to calibrate the system for each new user. During a training period, images are presented in controlled sequence so that the responding brain signals won't overlap. In these trials, the analyst must push a button in response to target pictures. This gives the computer a clear indication of what each person's ERP looks like so that it can better sort out overlapping ones.
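One generic way to use such a calibration phase is template matching: average the labeled training trials into a per-user ERP "template," then score new trials by how well they correlate with it. The sketch below is a hypothetical illustration of that general approach, not the teams' actual algorithm:

```python
# Hypothetical per-user calibration sketch: build an ERP template from
# labeled (button-press) training trials, then score new trials by
# Pearson correlation with the template.
import math

def mean_waveform(trials):
    """Average several equal-length trials into one template waveform."""
    n = len(trials[0])
    return [sum(t[i] for t in trials) / len(trials) for i in range(n)]

def correlate(a, b):
    """Pearson correlation between two equal-length waveforms."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

# Toy training data: target trials carry a bump; a flat trial does not.
target_trials = [[0, 1, 4, 9, 4, 1, 0], [0, 2, 5, 8, 5, 2, 0]]
template = mean_waveform(target_trials)

new_trial = [0, 1, 5, 9, 5, 1, 0]      # resembles the learned template
flat_trial = [1, 1, 1, 2, 1, 1, 1]     # mostly background
print(correlate(template, new_trial) > correlate(template, flat_trial))  # True
```

A per-user template of this kind also gives the system a known waveform shape to search for when responses to back-to-back images overlap.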

The question remains whether watching images in rapid sequence will tire analysts out faster and ultimately make them less efficient. Catherine Huang, a graduate student in the Erdogmus lab who has tried the procedure, says it's essential to take small breaks between chunks of images but that even after an hour of watching satellite images flash past, she didn't feel tired. “Each block is only 5 seconds, and you can take a break for as long as you want,” she says. Honeywell has reported the same feedback from the subjects in its in-house experiments. Teledyne could not be reached for comment.

The real difficulty could be in making the system user-friendly. “Even though our system is faster, we still need to hook up the electrode to the head. So we are not sure if the user will accept this,” says Huang. Securing an electrical connection between the ERP cap and the analyst's head usually requires dousing the scalp in a conductive gel, and with all the necessary wires, the user must sit there looking like a futuristic Medusa.

About the Author

Morgen E. Peck is a freelance writer and former neuroscience researcher. She wrote about algorithms for brain machine interfaces in the April 2008 issue of IEEE Spectrum and a controversial breast cancer screening technology in the February 2008 issue.
