Why Smart Glasses Might Not Make You Smarter

A Q&A with wearable-computer pioneer Steve Mann

Photo: Steve Mann

Steve Mann built his first smart eyeglasses when he was still in high school and has continued to improve on his designs ever since—as a graduate student at MIT and now as a professor of electrical and computer engineering at the University of Toronto. The author of Cyborg: Digital Destiny and Human Potential in the Age of the Wearable Computer, he’s considered one of the world’s foremost experts on how the use of computer prostheses can extend human abilities. He’s also been wearing one version or another of his smart eyeglasses for more than three decades. In June, he’ll be hosting the 2013 IEEE International Symposium on Technology and Society in Toronto, which will focus on wearable computing and the rise of augmented and mediated reality.

Mann recently spoke with IEEE Spectrum about the new wave of head-up displays coming on the market, such as Google Glass. He talked about the benefits they might offer as well as possible design flaws and the potential for abuse.

IEEE Spectrum: You’ve spent decades using wearable computers to augment your own abilities. Now, millions of people are poised to do the same, with products like Google Glass. Is there anything they should know before donning their first pair of smart glasses?

Steve Mann: There are different kinds of effects that smart eyeglasses, which I call “glass,” can have over the long term. I think the mistake a lot of people are making now is that they think that by augmenting they are making things better, and the problem is you often get information overload. You have to be able to mediate—you have to be able to take [information] away, otherwise you’ll drive yourself crazy. I don’t think anyone has realized that yet because I don’t know anyone else who has lived long-term in an “augmediated-reality” world.

IEEE Spectrum: One of the things people are excited about is that products like Google Glass will make them smarter, that they’ll function like a second brain. Is this realistic?

Steve Mann: It depends on how you define “smart.” I would say the effects of glass have to be carefully thought out, whether they make you smarter or dumber or different. For example, in 1978 I came up with what I call generation-one glass. It included a camera and a display, similar to the products you are seeing come on the market now. I found that design created a lot of strange effects. In effect, it took your eye out of the eye socket and moved it to one side a little bit. That doesn’t make you smarter. It makes you dizzier and more confused, and it makes you trip and fall, and it gives you strange unpleasant flashbacks when you take it off. It will make you stupid.

I overcame that by creating generation-two glass, which causes the eye itself to become the camera. That is to say, if a person is wearing generation two or higher, when you look them in the eye it looks like they have a glass eye. We called this “digital eyeglass” back in the ’70s and ’80s.

Generation-two glass can make you more situationally aware. However, there were still problems with focusing and depth, so we came up with generation three and generation four, where the display is a laser device that causes the eye itself to become both a camera and a display with infinite depth of focus.

IEEE Spectrum: What do you mean by “more situationally aware”?

Steve Mann: If it’s done right, it’ll make you smarter and more situationally aware, in the sense that you see things clearer. I’ll give you some examples. About 15 years ago, I created a wearable face recognizer that automatically identified faces and put virtual name tags superimposed on top of reality. Another thing I did was create glasses that allowed me to see in different spectral bands. I could see where people had recently walked because the ground was still warm. I could look into a parking lot and see which cars had recently been parked because the engine was still hot. I had made the infrared visible—I saw in a broader spectrum.
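
(As a rough sketch of the idea, not Mann's actual system: the stock face detector that ships with OpenCV can be paired with a recognizer to paint such virtual name tags onto a video frame. The lookup_name function below is a hypothetical stand-in for whatever recognition database a real wearable would query.)

```python
import cv2

# Stock Haar-cascade face detector bundled with OpenCV; `lookup_name` is a
# hypothetical placeholder for the recognition step Mann describes.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def annotate(frame, lookup_name):
    """Overlay a virtual name tag on each face detected in a video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.3,
                                                  minNeighbors=5):
        name = lookup_name(frame[y:y + h, x:x + w])   # who is this face?
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, name, (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame
```

A head-worn device would render the labels into the eyeglass display rather than back onto the camera frame, but the pipeline is the same: detect, identify, superimpose.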

I also invented something called high-dynamic-range, or HDR, imaging, which combines dark, medium, and light exposures and increases what you can see. For example, if you’re walking down an alley late at night and there is a car with headlights shining in your face, you can still read the license plate and recognize the driver.
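
(To make the merging step concrete, here is a minimal sketch of exposure fusion, assuming three bracketed grayscale frames already aligned to one another. Each pixel is weighted by how well exposed it is, so the dark frame contributes the region around the headlights and the bright frame contributes the shadows. It is an illustration of the general technique, not Mann's implementation.)

```python
import numpy as np

def fuse_exposures(frames):
    """Blend dark, medium, and bright exposures of the same scene.

    frames: list of grayscale images as float arrays in [0, 1], same shape.
    Pixels near mid-gray are considered well exposed and get the most weight.
    """
    stack = np.stack(frames)                                   # (n, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))   # well-exposedness
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8       # normalize per pixel
    return (weights * stack).sum(axis=0)                       # fused image
```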

IEEE Spectrum: Do you see any flaws in the current design of Google Glass and the smart glasses Vuzix recently released?

Steve Mann: The biggest flaw is that it’s generation one. It should be generation four or generation five. In the same sense that looking at a computer screen all day can give you eyestrain, this can ruin your eyes.

IEEE Spectrum: That sounds serious.

Steve Mann: I’ve done all kinds of experiments. I’ve effectively taken my eyes out of the sockets and put them on the end of a long meter stick. I’ve also experimented with different spacing between the eyes. It turns out that taking one eye out of the socket and moving it to the right is worse than taking both eyes out and putting them on the sides of your head. It will drive you nuts.

IEEE Spectrum: What do you mean by “drive you nuts”?

Steve Mann: You can get visual flashbacks after you take off the glasses and have the experience where something isn’t where it should be in space. For example, a ball is coming toward you and you jump the wrong way to dodge it. Or you go through a door, and you bang your head on the frame.

IEEE Spectrum: And this is happening because of the placement of the display?

Steve Mann: When we change our vision by mediating it in various ways, we are reconfiguring our visual system. We are remapping the brain. The biggest side effect would be damage to the visual cortex—brain damage. The second biggest side effect is eyestrain and long-term damage to the way the vision system focuses.

IEEE Spectrum: What about potential benefits?

Steve Mann: Potential benefits are immense if these products are done right. I’ve had a person who is legally blind put on generation-four glass, and she could read with it.

I originally built my first prototypes as a seeing aid. The vision I had was that you would download your prescription over the Internet into the glass. The prescription would automatically get stronger if your eyes were tired. If you were reading a newspaper, it would adapt to your field of view. Today, I can focus in on something that is less than a centimeter away from my face. I can pick up microchips and see the numbers written on them.

IEEE Spectrum: In the past, you’ve written extensively about the potential for abuse that’s presented by everything from surveillance cameras to corporate-endorsed smart suits. As these technologies have evolved and become increasingly mainstream, have you become more or less concerned?

Steve Mann: I think the potential for abuse in a surveillance-only society is immense. Surveillance is putting cameras on property, like land or buildings. It comes from a French word meaning “oversight.” However, by putting cameras on people, we can have what I call “sousveillance,” or undersight. Sousveillance is distributed. Surveillance cameras are not going to go away, but sousveillance gives us a new way of collecting data. We’ll be discussing the relationship between surveillance and sousveillance at the upcoming 2013 IEEE International Symposium on Technology and Society in June.

IEEE Spectrum: You’ve also been sounding the alarm about McVeillance, one-sided surveillance by powerful corporate entities that simultaneously forbid people from using their own cameras. Does Google Glass (and similar products) have the potential to counterbalance this?

Steve Mann: I think so. Suppose that you invite me over to your house, and you say, “I am going to keep a recording of everything you do.” That’s called surveillance. And you also forbid me from having a camera myself. But if I am wearing my glass, I also have a way to collect data. We can have veillance from all different directions.

For more about Google Glass, see “Google Gets in Your Face.”
