Where Do AI Gadgets Go From Here?

Smartphones can rest easy. Apps? Not so much.

Close-up view of a small white box on the shoulder of a person's white hooded sweatshirt.

Humane’s AI Pin may be an example of a device released too early, before advancements in on-device AI change the face of consumer tech.

Humane

This spring, hype has been swirling around two AI-powered gadgets: the Humane AI Pin and Rabbit R1. Both promised AI automation and seamless conversation with an ever-present, always-helpful AI assistant.

They failed. Prominent tech reviewer Marques Brownlee called the Humane AI Pin the “worst product I’ve ever reviewed,” while the Rabbit R1 received the somewhat kinder verdict of “barely reviewable.”

“I don’t think it will be more than six months before we see actual [AI] applications that are run on PCs, or even mobile devices.” —Dwith Chenna, AMD

Dr. John Pagonis, principal UX researcher at Zanshin Labs, observed that any new consumer device needs to prove it's better than what's already available, a test both AI gadgets failed. "What is the problem [these devices] solve? What is the need that they cover? That is not obvious."

So, that’s a wrap, right?

Not quite. While Humane and Rabbit failed, solutions to the problems that stymied these newcomers are right around the corner—and they could change consumer tech forever.

ChatGPT, are you there? Hello? Hello...?

Today's best AI large language models (LLMs) face a common foe: latency. People expect an immediate reaction when they tap or talk, but the best LLMs reside in data centers, and every round trip to the cloud adds delay. That lag was core to Humane's and Rabbit's woes. Reviewers complained the gadgets were slow to respond and useless when Internet access was unreliable or unavailable.

Yet there's a solution: Put the LLM on the device. I reported on this possibility for IEEE Spectrum in December, and a lot has happened since then. Meta's Llama 3, Microsoft's Phi-3, and Apple's OpenELM—all announced in April of 2024—brought big gains in the quality of small AI models. Chipmakers like Apple, Intel, AMD, and Qualcomm are tackling the problem, too, boosting the performance of the AI coprocessors in laptops, tablets, and smartphones.

“All these apps, all these different notifications. It’s too much. It’s exhausting. There’s a lot of research that this shouldn’t be the way we interact with technology.” —Patricia Reiners, UX designer

Dwith Chenna, an engineer at AMD who specializes in AI inference, said these improvements make LLMs possible without the cloud. The Humane AI Pin and Rabbit R1 simply released too early to take advantage of these advancements.

A diagram of the Apple M4 chip pointing out the location of its 16-core Neural Engine, which the company describes as "faster and more efficient" than the Neural Engine in prior Apple chips.

Apple's M4, which debuted in the new iPad Pro, has, the company says, "Apple's fastest Neural Engine ever."

Apple

“There’s a lot of focus on trying to squeeze [large language] models, to run them on devices like PCs and mobile phones,” Chenna said. “I don’t think it will be more than six months before we see actual [AI] applications that are run on PCs, or even mobile devices.”

Bringing LLMs to consumer tech will also tackle another key problem: privacy.

Rabbit’s R1 uses a Large Action Model (LAM) to automate apps and services, but some reviewers expressed discomfort with the idea. The LAM requires personal information, including logins and passwords, to act on a user’s behalf. Rabbit promised to handle the information securely, but since the AI model it uses is hosted in the cloud, data is inevitably sent off-device.

“I think one of the primary concerns is privacy and security. Not everyone is comfortable sharing their information with the cloud,” said Chenna. Pagonis agreed, and noted the largest tech companies are already maneuvering to address the problem. “At Google I/O, they referred to running Gemini Nano on-device for better privacy. And that is a strategy I’m sure Apple will follow.”

Design stumbles sour AI gadgets

Smaller, quicker LLMs that run on-device could solve the latency problem, but they won't instantly redeem Humane's and Rabbit's gadgets. Both made severe design mistakes that hamper their promised ease of use.

“I think the technology is absolutely fascinating, and also... super revolutionary,” said Patricia Reiners, a freelance UX designer and host of the Future of UX podcast. “But, it needs to work. Basic usability issues shouldn’t be the case.”

Reiners explained that Humane's and Rabbit's woes could have been avoided if the companies had stepped back from their AI ambitions to think about how people use technology in the real world. The Humane AI Pin can overheat with frequent use and relies on a projector to display information, a problem when using the Pin outdoors. Rabbit, meanwhile, disables the R1's touchscreen in some menus but not others, confusing users.

A person holds a hand in front of the projector on the Humane AI Pin, which projects a display that includes information about a solar eclipse.

The Humane AI Pin uses a projector to display information. The projected images are also reportedly hard to see outdoors.

Humane

“I think this is super important for people who are reading your article,” said Reiners. “Test devices or products early, from the first idea. Start with the prototypes.”

Reiners and Pagonis had different opinions on Humane's and Rabbit's fates. Reiners doubts Humane's design problems are fixable but said the Rabbit R1's "half-baked" features might be addressed with updates. Pagonis was more skeptical. "[Humane and Rabbit] have failed, in my view, the fundamental exercise of product discovery—which is to figure out the utility of a product, and then make it easy," he said.

Could AI kill the app?

But they did agree on one thing: the failure of early AI gadgets leaves the arena wide open for Apple, Google, and Microsoft.

“Apple and Google, they’re not sleeping,” said Reiners. “That’s the reason why the Rabbit R1 and the Humane AI Pin rushed so much to ship. They know the big players are working on it.”

Pagonis went a step further and predicted a clean sweep for big tech. “Who is going to win? Companies that have your data, like Google. That controls the user experience, like Apple. And have a relationship with you, like Apple and Google and, of course, Microsoft.”

That might sound disappointing. It implies future AI-enabled devices will maintain the status quo: They’ll look, feel, and function just like the smartphones we’re used to.

Google spoke about how it's bringing LLMs to its smartphones at Google I/O 2024.

CNET

Yet Reiners thinks that’s not the end of the story. AI might not reinvent the look and feel of tomorrow’s consumer tech, but it could power a reinvention of the software we use on our computers, tablets, and smartphones.

“When you open your phone, there’s so much going on. All these apps, all these different notifications. It’s too much. It’s exhausting,” said Reiners. “There’s a lot of research that this shouldn’t be the way we interact with technology.”

Reiners believes companies like Apple and Google will attempt to fulfill the promises made by Humane and Rabbit with simplified AI operating systems that predict what users need and automate common tasks. She notes that smartphones would prove easier to use if they presented users with fewer options after they unlock the device. Phones may even replace apps with automations controlled by an on-device AI agent.

“The user doesn’t really need apps,” said Reiners. “They have a goal, what they want to do, and want to get it done. So, as designers and people who work in tech, we need to rethink how we interact with users.”
