Let’s Bring Rosie Home: 5 Challenges We Need to Solve for Home Robots

What problems do engineers need to crack before they can deliver the proverbial Rosie the Robot?

Image: United Archives/Alamy

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Science fiction authors love the robot sidekick. R2-D2, Commander Data, and KITT—just to name a few—defined “Star Wars,” “Star Trek,” and “Knight Rider,” respectively, just as much as their human actors. While science has brought us many of the inventions dreamed of in sci-fi shows, one major human activity has remained low tech and a huge source of frustration: household chores. Why can't we have more robots helping us with our domestic tasks? That's a question that many roboticists and investors (myself included) have long been asking ourselves. Recently, we've seen some promising developments in the home robotics space, including Jibo's successful financing and SoftBank's introduction of Pepper. Still, a capable, affordable robotic helper—like Rosie, the robot maid from “The Jetsons”—remains a big technical and commercial challenge. Should robot makers focus on designs that are extensions of our smartphones (as Jibo seems to be doing), or do we need a clean-sheet approach toward building these elusive bots?

Take a look at the machines in your home. If you remove the bells and whistles, home automation hasn't dramatically changed since the post–World War II era. Appliances such as washing machines, dishwashers, and air conditioners seemed magical after WWII. Composed primarily of pumps, motors, and plumbing, they were simply extensions of innovations born of the industrial revolution. It probably comes as no surprise that industrial behemoths such as GE, Westinghouse, and AEG (now Electrolux) shepherded miniature versions of factory machines into suburban homes. At the time, putting dirty clothes and dishes into a box from which they emerged clean was rather remarkable. To this day, the fundamental experience remains the same, with improvements revolving around reliability and efficiency. Features enabled by Internet-of-Things technologies are marginal at best, e.g., being able to check your refrigerator or thermostat from your phone.

But before wondering when we'll have home robots, it might be fair to ask: Do we even need them? Consider what you can already do just by tapping on your phone, thanks to a host of on-demand service startups. Instacart brings home the groceries; Handy and Super send professionals to fix or clean your home; Pager brings primary care, while HomeTeam does elderly care. (Disclosure: my company, Lux Capital, is an investor in Super, Pager, and HomeTeam.) So, again, why do we need robots to perform these services when humans seem to be doing them just fine? I don't think anyone has a compelling answer to that question today, and home robots will probably evolve and transform themselves over and over until they find their way into our homes. Indeed, it took decades of automobile development before the Model T was born. The Apple IIs and PC clones of the early 1980s had difficulty justifying their lofty price tags to anyone who wasn't wealthy or a programmer. We should expect the same of our first home bots.

So it might be helpful to examine what problems engineers need to crack before they can attempt to build something like Rosie the robot. Below I discuss five areas that I believe need significant advances if we want to move the whole home robot field forward.

1. We Need Machine-Human Interfaces

Joaquin Phoenix waits for his new AI assistant to boot up in the movie “Her.” Image: Warner Bros. Ent.

Siri and Amazon's Alexa demonstrate how far speech recognition and natural language processing have come. Unfortunately, they are no more than human-machine interfaces, designed to displace the keyboard and mouse. What we need is a machine-human interface. Where is the distinction? A machine-human interface starts with understanding people, rather than aggregating data and using statistical patterns to make inferences. It would understand our moods and emotional context, as a true artificial intelligence would. Humans do not interact with one another through a series of commands (well, maybe some do); they establish a connection, and once a computer can take on that role, we will have a true machine-human interface. Scientists are starting to tackle this by applying concepts from programming to establish rules for robot-human conversation, but we'll need much more if we want engaging AI assistants like the one in the movie “Her.”

2. Cheap Sensors Need to Get Cheaper

Driverless cars will generate hard cash for their operators, so forking over thousands of dollars for an array of lidar, radar, ultrasound, and cameras is a no-brainer. Home robots, however, must fit the ever-discretionary consumer budget. The array of sensors a robot would need to properly perceive its environment could render it cost prohibitive unless those sensors cost pennies, as they do in mobile phones. MEMS technology dramatically lowered the cost of inertial sensors, which previously cost thousands of dollars and were relegated to aircraft and spacecraft. Can computer vision applied to an array of cheap cameras and infrared sensors provide adequate sensing capability? And can we expect lidar to come down in price, or do we need a whole new sensing technology? A startup called Dual Aperture has added a second, infrared aperture to a conventional camera, making it possible to infer short distances. Meanwhile, DARPA is funding research on chip-based lidar, and Quanergy expects to launch a solid-state optical phased array, eliminating the mechanical components that drive up lidar's cost. We can expect engineers to find creative ways to reduce the cost of existing sensing technologies, to obviate others altogether, and, we hope, to make them as cheap as the sensors in our phones today.

3. Manipulators Need to Get a Grip

SRI's underactuated manipulator relies on a cable-driven tendon system that reduces complexity while maintaining positional accuracy. Image: SRI International

Loose objects find their way into our homes because they are easy to manipulate with our hands. If we expect a robot to be able to clean and organize these objects as efficiently as humans do, it needs manipulators that are at least as effective as the human hand. Companies such as Robotiq, Right Hand Robotics, and Soft Robotics, among others, have designed efficient and reliable manipulators. Though air-powered inflatable grippers have the advantage of being soft and lightweight, they do require a pump, which is not very practical for a mobile robot. Efforts funded by DARPA at iRobot, SRI, and other labs and companies seem to be taking us in the right direction, helping robots get a grip.

4. Robots Need to Handle Arbitrary Objects

Opening doors, flipping switches, and cleaning up scattered toys are simple tasks for us humans, but computationally intensive for today's machines. A robot like the Roomba performs two tasks: running a suction motor and generating a path along what's expected to be a flat surface with rigid obstacles. How about washing dishes or folding laundry? These tasks require a suite of capabilities: recognizing objects, identifying grasp points, understanding how an object will interact with other objects, and even predicting the consequences of being wrong. DARPA, NSF, NASA, and European Union science funding agencies are sponsoring much-needed research in this area, but “solving manipulation” will probably require leveraging a number of different technologies, including cloud robotics and deep learning.
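To make the decomposition concrete, here is a minimal, purely illustrative sketch of how such a chore might break down into the stages named above. Every function name and the toy scene representation are invented for this example; a real system would replace each stub with a vision model, a grasp planner, and a learned dynamics model.

```python
# Hypothetical pipeline for a tidying-up chore, decomposed into the stages
# discussed above. All names and data structures are illustrative only.

def recognize_objects(scene):
    # Stand-in for object recognition: here the scene is already labeled.
    return [obj for obj in scene if obj.get("movable")]

def choose_grasp_point(obj):
    # Stand-in for grasp planning over object geometry.
    return {"cup": "handle", "plate": "rim", "sock": "center"}.get(obj["kind"], "center")

def predict_outcome(obj, grasp):
    # Stand-in for predicting consequences (e.g., dropping something fragile).
    fragile = obj["kind"] in ("cup", "plate")
    return "careful" if fragile else "normal"

def plan_chore(scene):
    # Chain the stages: perceive -> plan grasp -> predict -> act.
    plan = []
    for obj in recognize_objects(scene):
        grasp = choose_grasp_point(obj)
        mode = predict_outcome(obj, grasp)
        plan.append((obj["kind"], grasp, mode))
    return plan

scene = [
    {"kind": "cup", "movable": True},
    {"kind": "table", "movable": False},
    {"kind": "sock", "movable": True},
]
print(plan_chore(scene))
# → [('cup', 'handle', 'careful'), ('sock', 'center', 'normal')]
```

The point of the sketch is that even a trivial chore forces the robot to thread several distinct competencies together, and an error at any stage (a misrecognized object, a bad grasp point, a wrong prediction) propagates to everything downstream.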

5. Navigating Unstructured Environments Needs to Become Routine

Anyone who saw this year's DARPA Robotics Challenge will appreciate how difficult a problem it is to navigate and manipulate in an unstructured, unknown environment. Those robots were slow. Though driverless cars pose a formidable challenge, that problem has proved more tractable. Deep learning techniques can help robots distinguish soft objects from hard obstructions, and human assistants may be able to “teach” robots until the algorithms take over. Like the manipulation problem, real-time navigation requires robots to quickly sense, perceive, and act, probably several orders of magnitude faster than the DRC-winning team's robot can today. Full autonomy won't happen overnight, but that isn't a problem: humans can help robots get out of a bind. Willow Garage was a pioneer with its Heaphy Project, which crowdsourced robot assistance to remote operators. More robotics and industrial automation companies are embracing the notion of humans overseeing robots, with the expectation of going from the (superfluous) 1:1 human-robot ratio to a single operator overseeing and assisting many robots.
