Coming Soon: Augmented Reality Glasses for the Masses

Seven years ago, Google tried and failed to find a market for its AR glasses. But the technology has evolved, says ARM’s Nandan Nayampally


Futuristic image of a woman with augmented reality glasses
Photo-illustration: Jonathan Kitchen/Getty Images

A lot can change in seven years. Google Glass, a wearable display with a camera and other tools that feed wearers information and allow them to capture photos and videos, began shipping to selected developers in 2013. It was released as a more open beta test in 2014. Then, in early 2015, Google withdrew the product. It has since reemerged, along with a variety of competitors, as a specialized product for use in industry—often for training or displaying diagrams or other information during specific tasks.

As a consumer product, though, the technology stalled.

Until now, that is. Facebook last month confirmed that it’s building augmented reality (AR) glasses. Apple is rumored to be getting ready to release its own version of AR glasses next year.

But are AR glasses finally ready for prime time?

I asked Nandan Nayampally, vice president and general manager of ARM’s Immersive Experience Group, to consider whether the technology—and consumers—are ready for AR glasses. Here’s what he had to say.

Nandan Nayampally. Photo: Tekla Perry

IEEE Spectrum: Why don’t we have AR glasses for consumers yet?

Nayampally: It’s an application that has very high performance requirements but with a lot of constraints, so many key technology pieces had to be right. For example, it requires very high levels of specialized computation that fit within certain envelopes of power and size.

Computational power and other technologies have advanced rapidly in the past seven years; much of what's needed just wasn't available when Google launched Glass. Take the displays—microLED, liquid crystal on silicon (LCOS), and waveguide technology have come a long way. The human interface has also evolved; we've learned how to better use audio and have improved voice recognition. In the first iteration in 2013, the interface was not intuitive, the use cases weren't clear, and the technology couldn't cope. Today's products are taking a new approach to how you interface with your devices in a more human way.

[We’re getting close, but] there are plenty of challenges that still exist, so we will go through various form factors and improvements, particularly as designers begin understanding use cases.

IEEE Spectrum: The use case for Google Glass was essentially just taking videos.

Nayampally: Yes, that's what most people remember, and it was painful for people at the time. But now the world has moved on to a point where you can assume there is a camera on you everywhere, so it won't be as much of an issue.

The biggest emerging use cases incorporate virtual social interaction—letting one be somewhere with someone without actually being in the same physical place. I’ll be able to have an immersive experience of being at a game or an event with avatars of friends—who may not be in the same place, but are effectively together. This also sets up a better way to have meetings for teams that work in remote locations.

IEEE Spectrum: What are the remaining technical challenges?

Nayampally: There is a great deal of computation that needs to be done in a very small footprint at low power to deliver a compelling visual, audio, and haptic experience while maintaining a form factor for the glasses that is fashionable, light, and practical—so you don't have to charge [the device] every two hours. There still are big challenges on power, energy consumption, thermals, and form factors, [as well as a] continuing need for improved display and battery technology.

IEEE Spectrum: How will we get there?

Nayampally: With VR headsets today, you see the possibility of making augmented reality compelling. However, a large part of what has been happening with VR has relied on general-purpose, high-performance processing, using a lot of software to run the algorithms. That is not optimal for a smaller form factor. The key algorithms—like vision, gesture recognition, and hand tracking—are going to have to go into hardware: programmable, but hardware-accelerated implementations. The computation platforms also need to be designed with use cases and workloads in mind, and the software applications and development environments need to be further optimized. And all these things are beginning to happen.
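To make the "programmable, but hardware-accelerated" idea concrete, here is a minimal, hypothetical Python sketch using TensorFlow Lite: it tries to load a vendor accelerator delegate for a hand-tracking model and falls back to plain CPU execution if no accelerator is available. The model file and delegate library names are placeholders for illustration only; they are not products or APIs named in the interview.

# Minimal sketch (not any vendor's actual stack): prefer a hardware delegate
# for a hand-tracking model, fall back to general-purpose CPU execution.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

MODEL_PATH = "hand_tracking.tflite"          # hypothetical model file
DELEGATE_LIB = "libvendor_npu_delegate.so"   # hypothetical accelerator delegate

def make_interpreter():
    try:
        # Offload supported ops to the accelerator; unsupported ops stay on the CPU.
        delegate = load_delegate(DELEGATE_LIB)
        return Interpreter(model_path=MODEL_PATH,
                           experimental_delegates=[delegate])
    except (ValueError, OSError):
        # No accelerator present: pure software path, at a much higher power cost.
        return Interpreter(model_path=MODEL_PATH)

interpreter = make_interpreter()
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
frame = np.zeros(inp["shape"], dtype=inp["dtype"])   # stand-in for a camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
landmarks = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])

The same application code runs either way; only the execution target changes, which is the power-and-thermal trade-off Nayampally describes for glasses-sized devices.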

IEEE Spectrum: Who will be first to get AR glasses out there?

Nayampally: There already are a number of products in the market that are taking big strides toward untethered head-mounted displays—companies like Microsoft, Magic Leap, and Nreal, just to name a few. I'm not going to make a guess on who delivers a compelling "fashionable," "all-day-wear" smartglass that supports advanced mixed reality features. But you will see more announcements in the near future like the one you saw from Facebook in September. Within the next year or so, you will also see products that support hand tracking, which is important for more general use because, while serious gamers love their specialized controllers, those end up being too cumbersome for the average user. Amazon also recently announced Echo Frames with Alexa—that's AR audio, but still AR.
