Next-Gen AR Glasses Will Require New Chip Designs

A Facebook executive challenges the Arm processor community to create tech building blocks for augmented reality glasses

Sha Rabii, Facebook’s head of silicon and technology engineering, speaks at ARM TechCon 2019.
Photo: Tekla Perry

What seems like a simple task—building a useful form of augmented reality into comfortable, reasonably stylish eyeglasses—is going to need significant technology advances on many fronts, including displays, graphics, gesture tracking, and low-power processor design.

That was the message of Sha Rabii, Facebook’s head of silicon and technology engineering. Rabii, speaking at Arm TechCon 2019 in San Jose, Calif., on Tuesday, described a future with AR glasses that enable wearers to see at night, improve overall eyesight, translate signs on the fly, prompt wearers with the names of people they meet, create shared whiteboards, encourage healthy food choices, and allow selective hearing in crowded rooms. This type of AR will be, he said, “an assistant, connected to the Internet, sitting on your shoulders, and feeding you useful information to your ears and eyes when you need it.”

This vision, he indicated, isn’t arriving anytime soon, but it is achievable. The biggest roadblock, he said, is lowering the energy consumption of the hardware, along with reducing the heat that today’s processors emit.

“The low-power design community is uniquely positioned to take the mantle and create the tools that let us realize this vision,” he said.

Rabii had a couple of suggestions for approaches that chip developers could take. For one, he said, chips have to be better tailored to their intended uses.

“Our use case,” he said, speaking about AR glasses, “is moderate performance but high-power efficiency, form factors that support stylish and lightweight designs, [and chip designs that are] mindful of temperature for user comfort.”

Designers also need to think more realistically about how chips use energy. Energy consumption, he said, is mostly “determined by memory access and data movement. Data transfer is far more expensive than compute.” For example, he indicated, “fetching 1 byte from DRAM takes 12,000 times more energy than performing an 8-bit addition; sending 1 byte wirelessly takes 300,000 times more energy.”
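To make those ratios concrete, here is a back-of-the-envelope sketch using only the multipliers Rabii quoted, with the energy of one 8-bit addition normalized to 1 unit; the workload and function names are illustrative, not from the talk.

```python
# Energy costs, in add-equivalents, using the ratios quoted in the talk.
ADD_8BIT = 1                                 # baseline: one 8-bit addition
DRAM_FETCH_PER_BYTE = 12_000 * ADD_8BIT      # fetching 1 byte from DRAM
WIRELESS_SEND_PER_BYTE = 300_000 * ADD_8BIT  # sending 1 byte over radio

def energy_units(n_bytes, adds_per_byte=1, dram_fetches=0, wireless_sends=0):
    """Total energy, in add-equivalents, for a hypothetical workload."""
    return (n_bytes * adds_per_byte * ADD_8BIT
            + dram_fetches * DRAM_FETCH_PER_BYTE
            + wireless_sends * WIRELESS_SEND_PER_BYTE)

# Adding 1 KB held in local registers vs. first fetching it from DRAM:
local = energy_units(1024)
from_dram = energy_units(1024, dram_fetches=1024)
print(from_dram / local)  # 12001.0 — data movement dominates compute
```

Even this toy model shows why moving data, not arithmetic, sets the power budget: a single round trip to DRAM swamps the cost of the computation itself by four orders of magnitude.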

Hardware designers need to keep these differences in mind in the way they implement AI, Rabii said. “The prevalent model is to have a monolithic accelerator as a discrete compute element, with all AI workloads transferred to this element,” he said. “But this is a data transfer intensive architecture, which has implications for power consumption.”

Better, he suggested, would be to “treat AI as a deeply embedded function and distribute it across all the compute” in a system. This type of architecture, he said, brings compute to data, so data doesn’t have to move around as much, dramatically saving power.

There are other ways AI can be designed to use less energy, Rabii said. “Not every AI function needs the same precision,” he said. “A large percentage of the computational effort is required for the last percents of accuracy,” so breaking up workloads and reducing precision when possible can make AI systems far more efficient.
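The precision trade-off Rabii described can be sketched with a toy quantization step: storing values as 8-bit integers instead of 32-bit floats quarters the data that must be moved, at the cost of a bounded rounding error. The values and helper functions below are made up for illustration.

```python
# Minimal sketch of reduced-precision storage: map floats in [-1, 1]
# onto 8-bit integer levels (-127..127), then recover approximations.
def quantize_int8(values):
    """Round each float in [-1, 1] to the nearest of 255 int8 levels."""
    return [round(v * 127) for v in values]

def dequantize_int8(qvalues):
    """Map int8 levels back to approximate floats."""
    return [q / 127 for q in qvalues]

weights = [0.5, -0.25, 0.875, -1.0, 0.0]
q = quantize_int8(weights)          # 1 byte per value instead of 4
restored = dequantize_int8(q)

max_error = max(abs(w - r) for w, r in zip(weights, restored))
print(q)          # integer levels, e.g. [64, -32, 111, -127, 0]
print(max_error)  # at most half a quantization step (1/254)
```

A system that reserves full precision for the functions that truly need it, and runs the rest at 8 bits or less, moves far fewer bytes—which, per the energy ratios above, is where the power goes.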

That’s what designers can do now. In the future, he said, Facebook is looking forward to improvements in semiconductor process technologies that will lead to better performance per watt, as well as specialized accelerators that focus on specific types of AI for higher performance and better energy efficiency. Some of those advances, he hopes, will come from Arm Holdings and the Arm ecosystem.


Two Startups Are Bringing Fiber to the Processor

Avicena’s blue microLEDs are the dark horse in a race with Ayar Labs’ laser-based system


Avicena’s microLED chiplets could one day link all the CPUs in a computer cluster together.

Avicena

If a CPU in Seoul sends a byte of data to a processor in Prague, the information covers most of the distance as light, zipping along with no resistance. But put both those processors on the same motherboard, and they’ll need to communicate over energy-sapping copper, which limits the communication speeds possible within computers. Two Silicon Valley startups, Avicena and Ayar Labs, are doing something about that longstanding limit. If they succeed in their attempts to finally bring optical fiber all the way to the processor, it might not just accelerate computing—it might also remake it.

Both companies are developing fiber-connected chiplets, small chips meant to share a high-bandwidth connection with CPUs and other data-hungry silicon in a shared package. They are each ramping up production in 2023, though it may be a couple of years before we see a computer on the market with either product.
