In the first “Matrix” movie, there’s a scene where Neo points to a helicopter on a rooftop and asks Trinity, “Can you fly that thing?” Her answer: “Not yet.” Then she gets a “pilot program” uploaded to her brain and they fly away.
For us humans, with our non-upgradeable, offline meat brains, the possibility of acquiring new skills by connecting our heads to a computer network is still science fiction. Not so for robots.
Several research groups are exploring the idea of robots that rely on cloud-computing infrastructure to access vast amounts of processing power and data. This approach, which some are calling "cloud robotics," would allow robots to offload compute-intensive tasks like image processing and voice recognition and even download new skills instantly, Matrix-style.
Imagine a robot that finds an object that it's never seen or used before—say, a plastic cup. The robot could simply send an image of the cup to the cloud and receive back the object’s name, a 3-D model, and instructions on how to use it, says James Kuffner, a professor at Carnegie Mellon currently working at Google who coined the term “cloud robotics.”
Kuffner described the possibilities of cloud robotics at the IEEE International Conference on Humanoid Robots, in Nashville, Tenn., this past December. Embracing the cloud could make robots “lighter, cheaper, and smarter,” he said in his talk, which created much buzz among attendees.
For conventional robots, every task—moving a foot, grasping an object, recognizing a face—requires a significant amount of processing and preprogrammed information. As a result, sophisticated systems like humanoid robots need to carry powerful computers and large batteries to power them.
According to Kuffner, robots could offload CPU-heavy tasks to remote servers, relying on smaller and less power-hungry onboard computers. Even more promising, the robots could turn to cloud-based services to expand their capabilities.
As an example, he mentioned the Google service known as Google Goggles. You snap a picture of a painting at a museum or a public landmark and Google sends you information about it. Now imagine a “Robot Goggles” application, Kuffner suggested; a robot would send images of what it is seeing to the cloud, receiving in return detailed information about the environment and objects in it.
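A minimal sketch of such a round-trip, assuming a hypothetical recognition endpoint ("Robot Goggles" is a thought experiment, so the URL and the response fields below are invented for illustration):

```python
import base64
import json
import urllib.request

# Hypothetical endpoint -- "Robot Goggles" is not a real Google service;
# this URL and the shape of its reply are invented for this sketch.
RECOGNIZE_URL = "https://cloud.example.com/robot-goggles/recognize"

def encode_frame(image_bytes):
    """Package a camera frame as a JSON-friendly request body."""
    return {"image": base64.b64encode(image_bytes).decode("ascii")}

def recognize(image_bytes):
    """Ship a frame to the cloud; expect back a name, a 3-D model, and usage info."""
    payload = json.dumps(encode_frame(image_bytes)).encode("utf-8")
    request = urllib.request.Request(
        RECOGNIZE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=2.0) as response:
        return json.load(response)
```

The point is less the plumbing than the division of labor: the robot only encodes and transmits a frame, while all the heavy recognition work happens on the server side.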
Using the cloud, a robot could improve capabilities such as speech recognition, language translation, path planning, and 3-D mapping.
CLOUD ROBOTICS ORIGINS
The idea of connecting a robot to an external computer is not new. Back in the 1990s, Masayuki Inaba at the University of Tokyo explored the concept of a “remote brain,” as he called it, physically separating sensors and motors from high-level “reasoning” software.
Now cloud robotics seeks to push that idea to the next level, exploiting the cheap computing power and ubiquitous Net connectivity available today.
Kuffner, who currently works on Google’s self-driving car project, realized that running computing tasks in the cloud is often much more effective than doing them locally. Why couldn’t robots do the same?
As a side project, he's now exploring a variety of cloud robotics ideas at Google, including "using small mobile devices as Net-enabled brains for robots,” he told me. "There is an active group of researchers here at Google who are interested in cloud robotics," he says.
Last month, some of his colleagues unveiled their Android-powered robot software and a small mobile robot dubbed the cellbot [see image above]. The software allows an Android phone to control robots based on Lego Mindstorms and other platforms.
APP STORE FOR ROBOTS
But cloud robotics is not limited to smartphone robots. It could apply to any kind of robot, large or small, humanoid or not. Eventually, some of these robots could converge on common platforms, standardized or de facto, making it easier to share applications among them. Then, Kuffner suggested, something even more interesting could emerge: an app store for robots.
The app paradigm is one of the crucial factors behind the success of Apple’s iPhone and Google’s Android. Applications that are easy to develop, install, and use are transforming personal computing. What could they do for robotics?
It’s too early to say. But at the Nashville gathering, attendees received Kuffner’s idea with enthusiasm.
“The next generation of robots needs to understand not only the environment they are in but also what objects exist and how to operate them,” says Kazuhito Yokoi, head of the Humanoid Research Group at Japan's National Institute of Advanced Industrial Science and Technology (AIST). “Cloud robotics could make that possible by expanding a robot’s knowledge beyond its physical body.”
“Coupling robotics and distributed computing could bring about big changes in robot autonomy,” says Jean-Paul Laumond, director of research at France’s Laboratory of Analysis and Architecture of Systems (LAAS), in Toulouse. He says that it’s not surprising that a company like Google, which develops core cloud technologies and services, is pushing the idea of cloud robotics.
But Laumond and others note that cloud robotics is no panacea. In particular, controlling a robot’s motion—which relies heavily on sensors and feedback—won’t benefit much from the cloud. “Tasks that involve real-time execution require onboard processing,” he says.
Stefan Schaal, a robotics professor at the University of Southern California, says that a robot may solve a complex path planning problem in the cloud, or possibly other optimization problems that do not require strict real-time performance, "but it will have to react to the world, balance on its feet, perceive, and control mostly out of local computation."
And there are other challenges. As any Internet user knows, cloud-based applications can get slow or simply become unavailable. A robot that relies too much on the cloud could be left "brainless" by a network outage or a server failure.
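One common way to hedge against that failure mode is graceful degradation: ask the cloud first, but fall back to a cruder onboard answer if the network is slow or down. A minimal sketch, assuming the robot carries a local backup model (both classifier functions here are placeholders):

```python
import concurrent.futures

def classify_with_fallback(image, cloud_classify, local_classify, timeout_s=0.5):
    """Prefer the cloud's answer, but never wait longer than timeout_s.

    cloud_classify and local_classify are placeholder callables standing in
    for a remote service call and an onboard model, respectively.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(cloud_classify, image)
    try:
        return future.result(timeout=timeout_s)
    except (concurrent.futures.TimeoutError, OSError):
        # Network slow or unreachable: degrade to the onboard model.
        return local_classify(image)
    finally:
        pool.shutdown(wait=False)
```

The design choice here mirrors the researchers' point above: anything safety- or balance-critical stays onboard, and the cloud is treated as a best-effort accelerator rather than a dependency.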
Kuffner is optimistic that new advances will make cloud robotics a reality for many robots. He envisions a future when robots will feed data into a "knowledge database," where they'll share their interactions with the world and learn about new objects, places, and behaviors.
Maybe they'll even be able to download a helicopter pilot program?
MORE CLOUD ROBOTICS PROJECTS
• Researchers at Singapore's ASORO laboratory have built a cloud-computing infrastructure to generate 3-D models of environments, allowing robots to perform simultaneous localization and mapping, or SLAM, much faster than by relying on their onboard computers. The back-end system consists of a Hadoop distributed file system that can store data from laser scanners, odometry, and camera images or video streams. The researchers hope that, in addition to SLAM, the cluster could also perform sensor fusion and other computationally intensive algorithms.
• At LAAS, Florent Lamiraux, Jean-Paul Laumond, and colleagues are creating object databases for robots to simplify the planning of manipulation tasks like opening a door. The idea is to develop a software framework in which objects come with a "user manual" telling the robot how to manipulate them. This manual would specify, for example, the position from which the robot should manipulate the object. The approach tries to break down the computational complexity of manipulation into simpler, decoupled parts: a simplified manipulation problem based on the object's "user manual," and whole-body motion generation by an inverse kinematics solver, which the robot's computer can solve in real time.
• Gostai, a French robotics firm, has built a cloud robotics infrastructure called GostaiNet, which allows a robot to perform speech recognition, face detection, and other tasks remotely. The small humanoid Nao by Aldebaran Robotics will use GostaiNet to improve its interactions with children as part of a research project at a hospital in Italy. And Gostai's Jazz telepresence robot uses the cloud for video recording and voice synthesis.
• At present the iCub humanoid project doesn't rely on "cloud robotics," but Giulio Sandini, a robotics professor at the Italian Institute of Technology and one of the project's leaders, says it's "a precursor of the idea." The iCub, an open child-sized humanoid platform, works as a "container of behaviors," Sandini says. "Today we share simple behaviors, but in the same way we could develop more complex ones like a pizza making behavior, and our French collaborators could develop a crepes making behavior." In principle, you'd just upload a "behavior app" to the robot and it would cook you pizzas or crepes.
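The LAAS "user manual" idea above lends itself to a simple data record that a cloud database could serve up alongside a recognized object. This is an illustrative guess, not LAAS's actual schema; the field names, units, and the door example are all invented:

```python
from dataclasses import dataclass

# Illustrative guess at what an object's "user manual" might record; the
# field names, units (meters/radians), and the door example are invented.
@dataclass
class ObjectManual:
    name: str
    approach_position: tuple   # (x, y) where the robot should stand, in meters
    grasp_pose: tuple          # (x, y, z, roll, pitch, yaw) of the handle grasp
    motion_primitive: str      # e.g. "pull-rotate" for a hinged door

# The cloud would serve records like this; the robot's onboard
# inverse-kinematics solver then turns the grasp pose into whole-body motion.
door = ObjectManual(
    name="office-door",
    approach_position=(0.3, -0.4),
    grasp_pose=(0.85, 0.0, 1.02, 0.0, 0.0, 1.57),
    motion_primitive="pull-rotate",
)
```

Splitting the problem this way keeps the hard real-time part (whole-body motion) on the robot while the slowly changing, shareable knowledge lives in the cloud.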
[If you know of other cloud robotics projects, let me know.]
Kuffner's PowerPoint presentation from the conference is available on Scribd.