Google Glass

This wearable computer augments the self, not reality

Google Glass is a polarizing device. Depending on whom you ask, it represents either the best of near-future technology or the worst. Positive opinions often focus on the possibilities for augmented reality (AR), while negative ones tend to focus on the ramifications for privacy. But I’m going to focus instead on what’s possible with the current Glass hardware—dubbed the XE or Explorer Edition—what makes it groundbreaking, and what we can expect from Glass-like devices in the near future.

First, it’s important to dispel a few misconceptions about Glass XE, which is not meant to be a consumer product. At its core, XE is an experimental, voice-activated wearable computer equipped with a camera and a head-up display. In its current iteration, despite the early hype of Glass as an augmented reality device, the small screen can overlay information on only a portion of your field of view. In addition, even though the onboard 1-gigahertz dual-core processor is reasonably powerful, it’s held back by a relatively small battery and a limited thermal budget. These two issues make true AR applications impractical on current hardware.

The privacy issues of the current hardware are also overblown: While Glass can stream live video, it can do so for only short periods of time because of its limited battery life. It is also unable to do so stealthily; the screen illuminates when you record video, tipping off bystanders.

However, despite its small size, Glass’s head-up display is impressive, surprisingly clear, and oddly immersive. To make the most of its limited battery life, Glass spends most of its time in standby with the screen off (and therefore transparent), so you can easily forget you’re wearing it. When Glass wants your attention, it notifies you through an integrated bone-conduction speaker. If you choose to engage, simply tilting your head 30 degrees upward activates the screen, a frictionless process that feels natural. Most of the time you’ll interact with Glass either through its touch pad, immediately familiar to anyone accustomed to touch screens, or by uttering one of several voice commands.
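For a concrete sense of how such a tilt trigger could work, here is a minimal sketch in Python. It is a hypothetical illustration, not Glass’s actual firmware: it assumes a three-axis accelerometer and a simplified axis convention, derives a head-pitch angle from the gravity vector, and fires a wake event past a configurable threshold.

```python
import math

# Hypothetical illustration; this is not Glass firmware.
WAKE_ANGLE_DEG = 30.0  # wake threshold (Glass lets users adjust theirs)

def pitch_degrees(ay: float, az: float) -> float:
    """Estimate head pitch from accelerometer readings (units of g).

    Simplified model: with the head level, gravity sits on the z axis;
    tilting the head back rotates part of it onto the y axis.
    """
    return math.degrees(math.atan2(ay, az))

def should_wake(ay: float, az: float) -> bool:
    """Fire a wake event once the wearer looks up past the threshold."""
    return pitch_degrees(ay, az) >= WAKE_ANGLE_DEG

# Level head vs. head tilted about 35 degrees upward:
print(should_wake(0.0, 1.0))    # False -- looking straight ahead
print(should_wake(0.57, 0.82))  # True  -- sin(35 deg) ~ 0.57, cos(35 deg) ~ 0.82
```

A real implementation would also smooth the sensor stream and debounce the trigger so ordinary head movement doesn’t wake the screen, but the core idea is this simple.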

The features currently available cover a significant portion of the tasks you would normally use your smartphone for. You can receive notifications (from Gmail or SMS, for instance) and reply using voice commands, search using Google, get directions, and start and receive calls (including video calls). Its point-of-view camera allows you to take pictures or shoot videos with an ease that makes Glass a great tool for lifestreaming (creating an automated electronic diary) and fulfills the basic requirements for its use as an external memory device, a point I’ll return to later.

But Glass shouldn’t be defined by its current software package. Its brilliance lies in its form factor and multimodal natural interface, which translates into a user experience fundamentally different from anything I’ve experienced before. Wearing and interacting with Glass doesn’t feel anything like using a conventional mobile device. Rather, for lack of a better metaphor, it feels more like having a computer inside your head.

Consider that at any time you can simply look up—thereby activating Glass—and ask Google a question. It’s surprising how often you get just the answer you need, and the whole process is significantly faster than reaching into your pocket, unlocking your phone, and typing in a search query. Similarly, receiving a text message and replying to it using Google’s competent speech-to-text engine while you walk down the street feels a lot like a poor man’s telepathy.

The bottom line is that interacting with Glass often feels so natural that you’ll eventually be able to use it without thinking about it, which is a big deal from a cognitive standpoint. Perhaps the best description of my experience with Glass is that it feels like having artificial senses spliced into my existing ones.

As I mentioned in my previous article about wearable computers [see “Build Your Own Google Glass,” IEEE Spectrum, January 2013], our information-hungry brains are eager to incorporate new streams of information into our mental models of the world (one fascinating example involves wearing an ankle bracelet that vibrates to indicate north). Past an initial period of adaptation, these new streams of information fade into the background of our minds as conscious attention is replaced with mostly automatic behavior.

So it’s natural that a device like Glass would quickly be assimilated as an extension of the self. It took my brain roughly two weeks to fully incorporate Glass into my model of the world; I’m now able to use most of its features without thinking. The effect is so strong that I often find myself tilting my head upward to activate Glass even when I’m not wearing it.

External Memories

Moving into the speculative, what’s the near-future potential of wearable point-of-view computers? Future versions of Glass will enable a wide range of augmented cognition applications—combining the natural strengths of the human brain, the massive computational power of the cloud, cheap storage, and developments in machine learning.

For example, once we deal with the (admittedly nontrivial) privacy constraints around continuously recording video with Glass, hardware iterations with improved battery life could record everything you see and hear and upload it to the cloud. There, machine-learning algorithms would sift through the data, extract salient features, and generate transcripts, making your audiovisual memory searchable. Imagine being able to search through and summarize every conversation you’ve ever had, or to extract meaningful statistics about your life from aggregated visual, aural, and location data.
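To make the indexing step concrete, here is a toy sketch in Python. Every name in it is hypothetical, and it stands in for what would really be a cloud-scale system: timestamped transcript snippets (produced upstream by a speech-to-text stage) go into an inverted index, so a keyword query can recall the moments when something was said.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Snippet:
    timestamp: str  # when the moment was recorded
    text: str       # transcript produced upstream by speech-to-text

class MemoryIndex:
    """Toy inverted index over lifelog transcripts: word -> snippets."""

    def __init__(self) -> None:
        self._index = defaultdict(list)

    def add(self, snippet: Snippet) -> None:
        # Index each distinct word in the snippet, case-insensitively.
        for word in set(snippet.text.lower().split()):
            self._index[word].append(snippet)

    def search(self, word: str) -> list:
        return self._index.get(word.lower(), [])

# Index two "memories," then recall one by keyword.
index = MemoryIndex()
index.add(Snippet("2013-08-02 14:05", "meet Ana at the lab about the demo"))
index.add(Snippet("2013-08-03 09:30", "long battery life discussion with Joe"))

for hit in index.search("battery"):
    print(hit.timestamp, "-", hit.text)
```

A real system would add ranking, fuzzy matching, and links back to the original video, but the principle is the same: transcription turns raw memories into a searchable structure.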

Given enough time, those digital memory constructs will evolve into what can be loosely described as our external brains in the cloud: imagine a semiautonomous process that knows enough about you to act on your behalf in a limited fashion.

There are significant challenges ahead in creating such external brains. Still, it’s hard to imagine a future in which they don’t arrive, given that the required technological foundations are either already in place or expected to become available in the near future.

To wrap up with an anecdote: A couple of days ago I was stopped by a stranger who asked me, “What can you see through Google Glass?” To which I replied, only partly tongue in cheek, “I can see the future.”

A condensed and edited version of this article appeared in the October issue.

About the Author

An artificial intelligence researcher and investor, Rod Furlan can be found online at BitCortex. In January, while waiting for Google to release the developer edition of Glass, he wrote for IEEE Spectrum about building his own version.
