Imagine being blindfolded and having an array of electrodes sitting on your tongue. Around your neck hangs a flat box containing a microprocessor, and your lips close around a long cord that connects to the box and a camera. The cord dangles halfway down your chest, as if you’re drooling electronics. While you tentatively direct the camera, you feel your tongue vibrate at tiny discrete points that form a circle. The electronics in your mouth are telling you that you are facing a round object. It might be a tennis ball right in front of you. But then again, it might be a hot-air balloon a kilometer away. You really can’t tell.
The main idea of the BrainPort is to help blind people by translating visual information into tactile cues. A video feed is reduced to simple shapes, which are then drawn on the tongue by activating certain electrodes, each of which applies a small voltage that lightly tingles the tissue. As you turn the camera to explore an area, the electrodes respond with different patterns of mild zaps to indicate the shapes of objects in the camera’s field of view. The sensory experience of the BrainPort, in visual terms, is a flat world rendered in blurry, monochromatic silhouettes.
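The processing the article describes — a video frame reduced to simple shapes and drawn on the tongue as a pattern of active electrodes — can be sketched in a few lines. Wicab's actual image processing is proprietary; everything below (the function name, the fixed threshold, the use of the 10-by-10 grid mentioned later in the article) is illustrative, not the company's design.

```python
# Illustrative sketch of a camera-to-tongue pipeline: downsample a
# grayscale frame to a small grid of on/off electrode states.
# All names and parameters here are assumptions, not Wicab's design.

def frame_to_electrodes(frame, grid=10, threshold=128):
    """Reduce a grayscale frame (list of rows of 0-255 pixels)
    to a grid x grid pattern of on/off electrode states."""
    rows, cols = len(frame), len(frame[0])
    pattern = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            # Average the block of pixels that maps to this electrode.
            y0, y1 = gy * rows // grid, (gy + 1) * rows // grid
            x0, x1 = gx * cols // grid, (gx + 1) * cols // grid
            block = [frame[y][x]
                     for y in range(y0, y1) for x in range(x0, x1)]
            avg = sum(block) / len(block)
            # A bright region activates the electrode (a mild "zap").
            row.append(avg >= threshold)
        pattern.append(row)
    return pattern

# A toy 20x20 frame: a bright square on a dark background.
frame = [[255 if 5 <= x < 15 and 5 <= y < 15 else 0 for x in range(20)]
         for y in range(20)]
pattern = frame_to_electrodes(frame)
```

In this sketch, the bright square shows up as a cluster of active electrodes near the center of the grid — the kind of coarse silhouette the article describes.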
Critics say most blind people would find information at such a low level of detail, transmitted through the tongue, to be of limited value in decoding the clutter and chaos of everyday life. And given the BrainPort's estimated price tag of US $5000, doubts about its utility are compounded by questions of whether low-vision consumers will prefer it to other, cheaper assistive technologies.
Developed and marketed by Wicab, a small company in Middleton, Wis., the BrainPort is an offshoot of vision-loss research done by Paul Bach-y-Rita and colleagues in neighboring Madison at the University of Wisconsin’s biomedical engineering department. The device’s cord is an awkward, potentially embarrassing feature that was roundly criticized by everyone contacted for this article. Even Wicab’s chief executive, Robert Beckman, says, “To be commercially feasible, we need to be in a wireless format, where the array is mounted on a tray that is fixed on your upper teeth.”
“We’ve made an array that mounts that way,” he adds. But the wireless aspect is still to come.
Although those changes would definitely improve the device’s cosmetic appeal, the inherent shortcomings of the sense of touch impose a low ceiling on the ambitions of any tactile vision device. Not only is the BrainPort limited by the size of its array, which consists of 100 electrodes in a 10-by-10 grid, but it is also hampered by the simple fact that the human brain has not evolved to process large amounts of tactile information simultaneously.
Even if the tongue and the array were able to provide the canvas for a detailed, pointillist masterpiece and a palette of strong and weak buzzes were calibrated to mimic gray scale, the result would present a serious cognitive challenge. The patterns would dissolve into nonsense, the signal lost in an onslaught of zaps. “It’s too hard to learn to integrate all this information—it’s not natural,” says Hong Tan, an electrical engineering professor at Purdue University, in West Lafayette, Ind., who works on tactile human-machine interfaces.
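The gray-scale idea entertained above — a palette of strong and weak buzzes standing in for brightness — amounts to quantizing each electrode's brightness into a few intensity levels rather than a simple on/off state. A minimal sketch, with level names and count chosen purely for illustration:

```python
# Sketch of mapping brightness to buzz intensity. The number of
# levels and their names are illustrative assumptions, not a
# description of the BrainPort's actual calibration.

LEVELS = ["off", "weak", "medium", "strong"]

def brightness_to_buzz(value):
    """Quantize a 0-255 brightness value into a buzz intensity."""
    idx = min(value * len(LEVELS) // 256, len(LEVELS) - 1)
    return LEVELS[idx]
```

Even this modest four-level palette multiplies the information arriving on a 100-electrode grid, which is exactly the cognitive overload Tan describes.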
A camera–tongue display combination would not be able to convey enough detail to let a sightless person get around safely, so a user would still need to rely on a Seeing Eye dog or a cane to contend with such obstacles as a drop-off at a stairway or a curb, or a glass door. “I think it’s easy to recognize fairly simple, high-contrast objects on a black or a white background—that can be done,” says Richard Normann, a bioengineering professor working on the University of Utah’s visual neuroprosthesis project. “But our world is not that kind of visual world.”
The fact that the system can only supplement other aids raises a logistical issue: the camera would have to be attached discreetly to the user’s head, so that his or her hands are free to, for example, hold a cane or a dog’s harness.