Wearable Computers Will Transform Language

Smart clothes and accessories will let us share thoughts and sensations as well as words

Illustration: MCKIBILLO

Photo: Adam Voorhes; Prop Stylist: Robin Finlay

In August of 1961, fabled mathematicians Edward O. Thorp and Claude Shannon, of MIT, walked into a Las Vegas casino. They intended to try their luck at roulette, a game in which players bet on where a whirling ball will land after falling from an outer stationary track onto an inner spinning wheel. But they weren’t typical gamblers.

They worked as a team. Shannon watched the wheel, clandestinely clocking the speeds of the rotor and the ball by flipping microswitches in his shoe with his big toe. The signals coursed through wires that ran up his pant leg to a small computer strapped to his waist. The machine calculated the ball’s final resting position and then transmitted this prediction wirelessly to a receiver under Thorp’s shirt. Through a tiny speaker in his ear, Thorp heard one of eight distinct tones that advised him on how to bet. To his and Shannon’s delight, he reported years later, this newfound faculty increased the duo’s odds of winning by 44 percent, and they “often turned a few dimes into a pile.”
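Thorp later documented the essence of the prediction: time the ball and the rotor as they pass a fixed mark, assume the ball slows at a roughly constant rate, and forecast which eighth of the wheel it will drop into, with one tone per octant. The original machine did all this in a hand-soldered analog circuit; the minimal digital sketch below reconstructs only the arithmetic, with hypothetical timing inputs and a made-up drop_speed calibration constant.

```python
# A toy reconstruction of the octant prediction, not Thorp and Shannon's
# actual circuit. Assumes constant deceleration and ignores the ball's
# bounce after leaving the track, which the real system handled
# statistically.

OCTANTS = 8  # the wheel was divided into eight groups of pockets

def predict_octant(ball_times, rotor_times, drop_speed):
    """Guess which eighth of the wheel the ball will land in.

    ball_times:  timestamps (s) of the ball passing a fixed reference mark
    rotor_times: timestamps (s) of the zero pocket passing the same mark
    drop_speed:  angular speed (rev/s) at which the ball falls off the
                 outer track, a constant calibrated in advance
    """
    t0, t1, t2 = ball_times[-3:]
    v1 = 1.0 / (t1 - t0)                  # average speed over the first lap
    v2 = 1.0 / (t2 - t1)                  # ...and over the second
    decel = (v1 - v2) / ((t2 - t0) / 2)   # rev/s^2, from lap midpoints

    # Time until the ball slows to drop_speed, and the angle it covers.
    t_drop = (v2 - drop_speed) / decel
    ball_angle = v2 * t_drop - 0.5 * decel * t_drop**2    # revolutions

    # The rotor turns at nearly constant speed; find where zero will be.
    rotor_speed = 1.0 / (rotor_times[-1] - rotor_times[-2])
    rotor_angle = rotor_speed * ((t2 - rotor_times[-1]) + t_drop)

    # Landing spot relative to the zero pocket, folded into one turn.
    # (On a real wheel the ball and rotor counter-rotate, flipping a sign.)
    offset = (ball_angle - rotor_angle) % 1.0
    return int(offset * OCTANTS)          # 0-7: one of the eight tones
```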


Engineers widely regard this invention as the first wearable computer—an early glimpse at today’s fitness trackers, smart watches, and augmented-reality eyewear, and their possible descendants: electronic contact lenses, haptic undergarments, brain-reading caps, body-monitoring tattoos, gesture-recognizing rings, speech-detecting tongue piercings, and touch-sensitive sleeves, pleats, buttons, and zippers. Compared with today’s powerful all-purpose processors, the MIT mathematicians’ machine wasn’t much—just 12 transistors hand-soldered in an analog circuit. Yet the impact on its wearers was profound—foretelling, perhaps, a future when we depend on our electronic devices to experience life as much as we rely on our eyes, ears, and skin.


“We will get to a point when we stop thinking of technologies as external to our bodies,” says Desney Tan, an expert in computer interfaces at Microsoft Research, in Redmond, Wash. Wearables will always be on and immediately accessible. Donning them, he forecasts, will allow us to “sense and capture many more things about the world” and to communicate those sensations in new ways. For example, on-body displays could let people use images to express ideas in face-to-face conversations. Brain-activity monitors could capture emotions and add them automatically to text messages. Haptic fabrics could let a father-to-be experience the kicking of his unborn baby.

Electric circuits are creeping ever closer to us—from our desktops to our laps, our pockets, and now our faces and wrists. And many engineers predict the trend will continue: In the coming decades, computers will be seamlessly woven into our clothing, fashioned into our jewelry, and painted on our skin. Eventually, we may not be able to distinguish their capabilities from our own.

Electronic communication has long been limited not by computing power but by the ability of machines to understand us. The human and the computer possess “enormously powerful information-processing capabilities,” wrote Edward Tufte, a data-visualization pioneer now at Yale University, in 1989. “Yet communication between the two must pass through the low-resolution, low-information, narrow-band user interface of a video display terminal.” Your phone, in other words, may be able to send a message in a split second, but you can type only so fast.

At first glance, it seems the problem gets worse with wearables. “We can put a really powerful computer in a smart watch, or into Google Glass, or whatever the next form factor will be,” says Chris Harrison, an assistant professor of human-computer interaction at Carnegie Mellon University, in Pittsburgh. “So the question is, How do you get input into something so small?”

In the coming era of wearable computing, as Harrison foresees it, we will manipulate digital bits the same way we do real objects—with our hands. “Touch is a really powerful, wonderfully intuitive thing,” he says, adding that although the touch screens on today’s smartphones, tablets, and watches are a step in the right direction, “they’re pretty lame.” The gestures they recognize are disappointingly few, including one- and multifinger taps, swipes, and pinches. “We don’t do things in the real world based on the number of fingers we poke at things,” he quips. “There are all these other, really rich dimensions of touch that touch screens ignore”—such as pressure, contact area, the shape of your hand, and whether you use a pad, knuckle, or nail.

To explore more diverse types of virtual touch, Harrison does what he calls “time-machine research.” In a bright, airy laboratory that he has adorned with obsolete PCs and hand-welded sculptures made of discarded cameras and cellphones, he and his students build prototypes of possible future interfaces by hacking or cobbling together existing technologies. From a table cluttered with to-go cups, cables, laptops, watch parts, and mannequin hands, he produces an iPad.

“Think about the natural ways you use your hands,” he begins as he launches a homespun app called Touch Tools. “Grasps, for example, are really iconic. How I hold a pen is very different from how I hold a dry eraser.” Pinching his middle and index fingers and thumb together as if clutching an invisible stylus, he touches the screen with his fingertips. The gesture brings up an image of a ballpoint pen, which he uses to draw a doodle. He then widens his grip about an inch, summoning a pink rubber eraser. He erases his scribble. “So I don’t need tool bars—I just change the shape of my hand,” he says. He conjures a tape measure, using his other hand to extend the tape. Then comes a magnifying glass, a mouse, a camera.
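Harrison's demo keeps its matching rules under the hood, but the basic recipe for grasp recognition is easy to sketch: boil each hand pose down to a few contact features, then pick the closest registered tool. In the toy version below, the template values and distance weights are invented for illustration.

```python
import math

# Hypothetical grasp templates: (contact count, spread in mm, mean
# contact area in mm^2), as if measured once per tool during calibration.
TOOL_TEMPLATES = {
    "pen":       (3, 18.0, 40.0),   # tight three-finger pinch
    "eraser":    (3, 45.0, 90.0),   # the same fingers, spread wider
    "magnifier": (5, 60.0, 55.0),   # a whole-hand grasp
}

def spread(points):
    """Mean distance of each contact from the centroid, in mm."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)

def classify_grasp(contacts):
    """Pick the tool whose template best matches the current hand pose.

    contacts: list of (x_mm, y_mm, area_mm2) touch points from the screen.
    """
    points = [(x, y) for x, y, _ in contacts]
    features = (len(contacts),
                spread(points),
                sum(a for _, _, a in contacts) / len(contacts))

    def distance(template):
        # Crude weighted distance; a real system would learn the weights.
        n, s, a = template
        return (abs(features[0] - n) * 50
                + abs(features[1] - s)
                + abs(features[2] - a) * 0.5)

    return min(TOOL_TEMPLATES, key=lambda name: distance(TOOL_TEMPLATES[name]))
```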

Video: Ariel Bleicher and Celia Gorman

You might think that touch gestures won’t be very useful when people stop carrying around screens and sleek wee wearables become the norm. Not so. “If we’re clever about how we use our fingers, we can actually make interactions on even something as small as a smart watch really quite powerful,” Harrison says. With Gierad Laput, a Ph.D. student, he has built a smart watch that users can manipulate mechanically. Four rice-grain-size sensors, which click and pivot like miniature joysticks, detect when you shift, tilt, twist, or depress the face, providing “all these different degrees of freedom that let you interact with the watch in more expressive ways than just using a touch screen,” Laput says. To allow for more expansive interactions, he suggests, future engineers could put light projectors into watches or other wearables such as lapel pins, which would let people use the space around these devices.
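Laput's prototype decodes those four sensor readings in firmware, and none of that code is public. Still, a toy decoder, assuming each sensor reports a small (dx, dy, dz) displacement in millimeters and using invented thresholds, conveys the idea:

```python
PRESS_MM = 0.3   # invented thresholds; a real device tunes these
MOVE_MM = 0.2

def decode_gesture(n, e, s, w):
    """Map displacements of four sensors (north, east, south, west
    corners of the watch face) to one of the watch's gestures.
    The checks are ordered naively; real firmware would be subtler."""
    sensors = [n, e, s, w]

    # Depress: all four sensors pushed down together.
    if all(dz < -PRESS_MM for _, _, dz in sensors):
        return "press"

    # Tilt: opposite edges move vertically in opposite directions.
    if abs(n[2] - s[2]) > MOVE_MM or abs(e[2] - w[2]) > MOVE_MM:
        return "tilt"

    # Twist: lateral motion is tangential, e.g. the north edge slides
    # east while the south edge slides west.
    if abs(n[0] - s[0]) > MOVE_MM and abs(w[1] - e[1]) > MOVE_MM:
        return "twist"

    # Shift: the whole face slides in one direction.
    avg_dx = sum(dx for dx, _, _ in sensors) / 4
    avg_dy = sum(dy for _, dy, _ in sensors) / 4
    if abs(avg_dx) > MOVE_MM or abs(avg_dy) > MOVE_MM:
        return "shift"

    return "idle"
```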

“Could you turn your skin into a touch screen?” Harrison interjects. “Could you turn this wall?”

As if on cue, Ph.D. student Robert Xiao presents a textbook-size box on a tripod. The contraption, called WorldKit, contains a digital projector and a Microsoft Kinect depth camera wired to a laptop. Working with Harrison and computer-science professor Scott Hudson, Xiao programmed the system to “explode information on every surface of the environment,” he explains. He swipes his palm across the wall, triggering a soft yellow glow that trails his hand. “When I lift my hand up, it’s actually a multitouch surface.”

“We’re painting interfaces onto the world,” Harrison says. “I could say, ‘I want a thermostat here and a calendar there,’ or ‘I want this whole wall to be blue.’ ”

“You want a light switch?” Xiao asks, and one appears next to Harrison’s office door. “We actually had this controlling the lights.” Xiao envisions a future in which disc jockeys queue tracks on their forearms and architects revise blueprints on the walls of their clients’ homes. “In 20 years, maybe people will go to Home Depot and instead of buying lightbulbs, they’ll be buying information bulbs.”
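The trick at WorldKit's core is using the depth camera to tell fingers from the surface behind them. A minimal sketch of the idea (not the team's actual code) models the empty wall once, then flags pixels that sit a centimeter or two in front of it:

```python
import numpy as np
from scipy import ndimage

TOUCH_MIN_MM = 5    # assumed: closer than this and it's depth noise
TOUCH_MAX_MM = 25   # ...farther than this and the finger is hovering

def build_background(depth_frames):
    """Median depth per pixel, in mm, over frames of the empty surface."""
    return np.median(np.stack(depth_frames), axis=0)

def find_touches(depth, background, min_blob_px=30):
    """Return rough (row, col) centroids of fingers touching the surface.

    depth, background: H x W arrays of distance from the camera in mm.
    A touching finger shows up as a small region slightly closer to the
    camera than the wall behind it.
    """
    height = background - depth                    # mm in front of the wall
    mask = (height > TOUCH_MIN_MM) & (height < TOUCH_MAX_MM)

    labels, count = ndimage.label(mask)            # connected blobs
    touches = []
    for i in range(1, count + 1):
        ys, xs = np.nonzero(labels == i)
        if len(ys) >= min_blob_px:                 # ignore speckle noise
            touches.append((ys.mean(), xs.mean()))
    return touches
```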

In fact, future consumers may choose from many different kinds of interfaces, mixing and matching to satisfy their style and fancy. Other researchers are now investigating smart glasses that track eye movements, wristbands that capture hand gestures, tongue implants that recognize whispers, even neck tattoos that pick up subvocalizations—the subtle vibrations your throat muscles make when you imagine how a word sounds.

At the Georgia Institute of Technology, in Atlanta, Clint Zeagler is designing interfaces for smart clothing, such as embroidered patterns sewn with conductive thread, which users can navigate by touch rather than sight. “Why can’t my zipper be a scroll?” he asks. “Why can’t my button for my pants be a button? Why can’t it turn on my pants?”

The gadgets we’ll wear won’t just be handier than those we now have to carry. Wearables will be truly revolutionary, some experts say, because they’ll learn things about us that we never tell them or that we might not think to find out for ourselves.

The seeds of such a future are already here. Google’s digital personal assistant, Now, has the smarts to figure out the locations of your home and workplace just by tracking your movements during the day. Because it knows this information, it can, for instance, display traffic updates on your phone, unprompted, 5 minutes before you begin your commute.
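Google doesn't publish Now's exact method, but the inference is simple in principle: bucket your location pings by time of day and see where you linger. A toy version, with an invented grid size and hour ranges, might read:

```python
from collections import Counter

def infer_places(pings, cell_size=0.002):
    """Guess home and work from (latitude, longitude, hour) pings.

    Snaps each ping to a coarse grid cell (roughly 200 meters) and
    assumes the cell occupied most overnight is home and the cell
    occupied most at midday is work. Real systems are subtler.
    """
    def cell(lat, lon):
        return (round(lat / cell_size), round(lon / cell_size))

    night = Counter(cell(la, lo) for la, lo, h in pings if h < 6 or h >= 22)
    day = Counter(cell(la, lo) for la, lo, h in pings if 9 <= h < 17)

    home = night.most_common(1)[0][0] if night else None
    work = day.most_common(1)[0][0] if day else None
    return home, work
```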

In February, Yahoo announced a five-year US $10 million partnership with Carnegie Mellon to create the “next next next-generation personal assistant,” says Justine Cassell, a computer-science professor and codirector of the project. The software sidekick she has in mind “will constantly be learning about you and becoming more personalized.” It will know—without your input—what cuisines you like, who your friends are, and that you might enjoy that new shop across the street.

Your phone already collects a good deal of intelligence on you—through your GPS trail, browsing history, texts, calendars, e-mails, and online social networks. But wearables will be able to gather much more. Sensors exist or are in the works that can observe your breath, sweat, gaze, facial expressions, heart rate, muscle movements, and sleep patterns, as well as the ambient temperature and air quality. And now scientists are starting to develop wearable technologies that can eavesdrop on the brain.


Known as brain-computer interfaces, or BCIs, these systems are found mostly in research laboratories. They tend to be bulky and uncomfortable, and the information they can extract from the brain is still crude. The most common approach is called electroencephalography, or EEG. A typical setup consists of a headband or cap lined with anywhere from a few to a few hundred electrodes, which measure voltage fluctuations produced by electrically active brain cells, or neurons. By the time these signals reach the scalp, however, they are weak, jumbled, and incomplete, making it difficult to pick out subtle events, such as thoughts or emotions, from the noise made by other brain processes, facial movements, and nearby electronics.

But engineers are beginning to overcome these challenges. Using machine-learning techniques, they have trained computers to recognize certain general states of mind from EEG recordings, including attention, arousal, fatigue, stress, and cognitive load. Early prototype BCIs can also detect when a person decides to act, identifies something he or she is looking for, or becomes fully focused on a task—a mental state known as flow, or being “in the zone.” Their inventors are confident that these technologies will one day be commonplace.
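The usual recipe behind such results, sketched here with assumed parameters rather than any particular lab's pipeline, is to summarize each window of EEG as power in a few frequency bands and hand those features to an off-the-shelf classifier:

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 256  # assumed sampling rate, in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """Log power per frequency band, per channel, for one EEG window.

    epoch: channels x samples array of voltages.
    """
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)   # psd: channels x freqs
    feats = []
    for lo, hi in BANDS.values():
        sel = (freqs >= lo) & (freqs < hi)
        feats.extend(psd[:, sel].mean(axis=1))
    return np.log(feats)    # log power is better behaved for classifiers

def train_state_detector(epochs, labels):
    """Fit a detector on labeled windows, e.g. 1 = high cognitive load,
    0 = rest, and return it for use on the wearer's live data."""
    X = np.array([band_powers(e) for e in epochs])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```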

“Say you’re walking down the street,” imagines Paul Sajda, a BCI expert at Columbia University, in New York City. “You’ve got your Google Glass on, with EEG sensors embedded in the frames.” You notice things—a pretty bird, a flashy car—and your future Glass notices you noticing. If it also tracks your gaze, it can identify what you’re looking at. Maybe your Glass tells you something useful—the species of that bird, the make of that car. Maybe it takes a picture. “A very simple app might be a stuff-I-thought-was-interesting-today scrapbook,” Sajda proposes.

For many researchers who work in wearable computing, the ultimate goal is to design machines that use data from the brain and body to understand the world in human terms. For example, Kaleb McDowell and Brent Lance at the U.S. Army Research Laboratory, in Aberdeen, Md., and their colleagues are developing a BCI that categorizes images based on how a person’s brain responds to them.

Mind-Reading Machine: A volunteer tests a brain-computer interface being developed at the U.S. Army Research Laboratory. The system can identify when a user spots a potential threat, such as a person with a gun, simply by observing brain activity. Photo: Ron Carty/U.S. Army Research Laboratory

In the system’s current iteration, a volunteer wears a cap studded with 64 electrodes while watching a series of scenes—buildings, streets, fields. She is told to look for threats, such as people with guns. At five images per second, the pictures flash by too fast for her to react physically to each one, even to press a button. All the while, the BCI monitors electrical pulses from her brain, and when it sees signals suggesting she has spotted a shooter, it tags those images “threat.” The scientists have found that the computer identifies these targets correctly at least 85 percent of the time.
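What the system listens for is chiefly the P300, a positive voltage deflection that appears roughly 300 milliseconds after a person sees something significant. The lab trains classifiers on these responses; the bare-bones scorer below, with an assumed sampling rate and analysis window, only illustrates the shape of the computation:

```python
import numpy as np

FS = 512                   # assumed sampling rate, in Hz
P300_WINDOW = (0.3, 0.5)   # seconds after image onset

def epoch(eeg, onset_sample):
    """Slice one second of data (channels x samples) after an image onset."""
    return eeg[:, onset_sample:onset_sample + FS]

def p300_score(ep):
    """Mean amplitude across channels in the P300 window. A real system
    weights channels and trains a classifier instead of thresholding."""
    a, b = int(P300_WINDOW[0] * FS), int(P300_WINDOW[1] * FS)
    return ep[:, a:b].mean()

def tag_threats(eeg, onsets, threshold):
    """Return the indices of images whose evoked response crosses the
    threshold, i.e. the pictures to tag as containing a possible threat."""
    return [i for i, t in enumerate(onsets)
            if p300_score(epoch(eeg, t)) > threshold]
```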

In just a few years, McDowell and Lance speculate, BCIs could be used commercially to train computer-vision systems or help military analysts sift through aerial footage from satellites and drones. In a few decades? Okay, we’ll dream big, they say: Brain-sensing and brain-modeling technologies will advance enough to be able to observe emotions, desires, and comprehension. Other researchers fantasize that BCIs might someday record perceptions, dreams, and maybe even internal speech—the voice you hear in your head when you talk to yourself.

In the future, people will wear BCIs constantly while also sporting cameras, microphones, and other sensors that will simultaneously record everything they see, hear, and feel. The network of computers on a person’s body will then use these data to build digital representations of how he or she perceives the world. For instance, your wearables might track every gesture that calms you, every sound that surprises you, every remark that makes you swear under your breath. Over time, these machines will learn to anticipate your actions and emotions and will automatically tailor your environment to suit your needs—muting your alerts when you’re in an important meeting, dimming the lights when you drift off to sleep. They may even recognize subtle cues you don’t notice or can’t sense, such as early signs of depression or the onset of a migraine.

And when our computers know us better than we know ourselves, they will help us to communicate better with one another. They will monitor our conversations and inform us when others are bored, inspired, hurt, grateful, or just not on the same page. They will encourage us to speak up when we are shy, stop us from sending e-mails when we are angry, and remind us of what makes us happier when we are sad.

Thad Starner, a professor at Georgia Tech and a technical lead for Google Glass, has worn some version of a computerized head-up display for 21 years. He uses a one-handed keyboard to take notes and browse the Web. One day, he’s sure, he will no longer need to search for pertinent information because his Glass will listen to his conversations and intuitively fetch the data for him. It will appear before his eyes as quickly and unobtrusively as if he were recalling a memory. “Your computer will act as an extension of your own mind,” he prophesies.

Sooner or later, our electronic gizmos will no longer be possessions we take or leave behind. They will be us.

What Could Possibly Go Wrong?

Your Body, Broadcasting Live

Wearable sensors could spill your innermost secrets

Illustration: MCKIBILLO

It’s already an all-too-familiar fear. You buy a smart gizmo or download an Internet app in hopes it will entertain you or make you healthier or your life easier. But then you wonder: Are criminals hacking my accounts? Are companies profiting from a knowledge of my habits? Is the government collecting my texts and e-mails?

And if you fret about the fate of data being gathered by the smartphone in your pocket, you’ll shudder at the thought of what could leak from hardware in your clothes or on your skin. Wearables will likely record not just what you do and whom you talk to but also the states of your mind and body, including your heart rate, blood pressure, and brain activity—information you probably don’t want shared too widely. What if your boss could measure how focused you are at work? What if your spouse could know whom else you found attractive?

Without reliable security, clear privacy laws, and simple user controls, the wearables generations might have few secrets left to keep. People might give up data unwittingly, lured by cheap deals and ignorant of the fine print of privacy policies, says Jason Hong, a privacy and security expert at Carnegie Mellon. Smartphone users, he points out, are often surprised that many free apps keep close tabs on them. He fingers a few notorious snoops: the game Angry Birds, Bible App, and Brightest Flashlight Free. “People don’t expect these apps to collect location data,” he says, but they do. “They send it out to advertisers.”

Records from wearables such as brain sensors could also be used in criminal investigations, says Nita Farahany, who studies the legal implications of emerging technologies at Duke University, in Durham, N.C. Under U.S. law, she explains, “you can’t be forced to testify against yourself, but that doesn’t mean your body can’t be used against you.” If prosecutors can use fingerprints and DNA to get a conviction, what’s to stop them from using scans of a suspect’s thoughts or emotional reactions?

The most vexing spies, however, may be wearers themselves. Wearables could allow you to watch yourself so closely that you will stop just being you. “Say I go out to dinner, and I have a great time,” imagines Stephen Fairclough, a psychophysiologist at Liverpool John Moores University, in England. “Then I get home and my smart watch tells me I was stressed. I think, ‘Was I?’ I start second-guessing myself, and it changes the way I perceive the experience.” If people rely too readily on computerized data to analyze their behavior and make sense of the world, he worries, “we may stop trusting our own instincts and our own feelings.”

Many engineers believe that the technical challenges of building tiny wearable machines will be easily overcome. “The last 50 years have been about evolving technology,” says Desney Tan at Microsoft Research, referring to developments in electronics that could one day allow wearables to vastly extend the abilities of Homo technologicus. “The next 50 will be about inventing the human being.”

So let’s make sure we get it right. —A.B.

This article originally appeared in print as “Beyond Words.”

For more on the future of wearable computers see “Learn New Skills With Superhuman Speed.”
