As part of the European project RoboEarth, I am currently one of about 30 people working towards building an Internet for robots: a worldwide, open-source platform that allows any robot with a network connection to generate, share, and reuse data. The project is set up to deliver a proof of concept to show two things:
- RoboEarth greatly speeds up robot learning and adaptation in complex tasks.
- Robots using RoboEarth can execute tasks that were not explicitly planned for at design time.
The vision behind RoboEarth is much larger: to allow robots to encode, exchange, and reuse knowledge to help each other accomplish complex tasks. This goes beyond merely letting robots communicate via the Internet, outsource computation to the cloud, or publish linked data.
But before you yell "Skynet!," think again. While the closest things science fiction writers have imagined may well be the artificial intelligences in Terminator, the Space Odyssey series, or the Ender saga, I think those analogies are flawed. RoboEarth is about building a knowledge base, and while it may include intelligent Web services or a robot app store, it will probably be about as self-aware as Wikipedia.
That said, my colleagues and I believe that if robots are to move out of the factories and work alongside humans, they will need to systematically share data and build on each other’s experience.
Imagine the following scenario: A service robot like the one in the hospital room [photo, top] is pre-programmed to serve a drink to a patient. A simple program might include: Locate the drink, navigate to its position, grasp it, pick it up, locate the patient in the bed, navigate to the patient, and finally hand over the drink.
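That fixed action sequence can be written down as an ordered plan of primitive steps. The sketch below is purely illustrative: the step names and data structures are my own, not RoboEarth's actual task representation.

```python
# Hypothetical sketch of the pre-programmed "serve a drink" task as an
# ordered list of primitive actions. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class Step:
    action: str        # primitive the robot executes
    target: str        # object or location the action applies to
    done: bool = False

def serve_drink_plan():
    """Return the fixed action sequence from the scenario."""
    return [
        Step("locate", "drink"),
        Step("navigate", "drink"),
        Step("grasp", "drink"),
        Step("pick_up", "drink"),
        Step("locate", "patient"),
        Step("navigate", "patient"),
        Step("hand_over", "drink"),
    ]

def execute(plan, log):
    """Run each step, recording the outcome in a log the robot could later share."""
    for step in plan:
        step.done = True              # stand-in for real perception and actuation
        log.append((step.action, step.target, step.done))

log = []
execute(serve_drink_plan(), log)
print(len(log))  # prints 7
```

The point of logging each step, rather than just running it, is that the log itself becomes shareable data, which is what the next part of the scenario builds on.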
Now imagine that during task execution this robot monitors and logs its progress and continuously updates and extends its rudimentary, pre-programmed world model with additional information. It updates and adds the position of detected objects, it evaluates the correspondence of its map with its actual perception, and it logs successful and unsuccessful attempts during its task performance. If the robot is not able to fulfill a task, it asks a person for help and stores any newly learned knowledge. At the end of its task performance, the robot shares its acquired knowledge by uploading it to a Web-style database.
Some time later, the same task is to be performed by a second robot that has no prior knowledge of how to execute it. This second robot queries the database for relevant information and downloads the knowledge previously collected by other robots. Differences between the two robots (e.g., due to wear and tear or different hardware) and their environments (e.g., changed object locations or a different hospital room) may mean that the downloaded information is not sufficient for this robot to simply repeat a previously successful task. Nevertheless, it can provide a useful starting point.
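The upload-then-query exchange can be sketched as follows. This is a minimal sketch assuming a hypothetical key-value interface; `shared_db`, `upload`, and `query` are names I made up for illustration, not RoboEarth's actual API.

```python
# Hypothetical sketch of two robots sharing task knowledge through a
# Web-style database. The interface below is assumed, not RoboEarth's.
shared_db = {}  # stands in for the shared online database

def upload(robot_id, task, knowledge):
    """Store knowledge a robot acquired while performing a task."""
    shared_db.setdefault(task, []).append({"by": robot_id, **knowledge})

def query(task):
    """Return the most recent knowledge for a task, or None if nothing is stored."""
    entries = shared_db.get(task)
    return entries[-1] if entries else None

# Robot A performs the task and shares what it learned.
upload("robot_a", "serve_drink",
       {"cup_location": (2.1, 0.4), "route": ["door", "around_bed"]})

# Robot B, with no prior knowledge, starts from Robot A's experience.
prior = query("serve_drink")
start = prior["cup_location"] if prior else None  # fall back to a blind search
```

Note that Robot B treats the downloaded data only as a starting hypothesis to verify against its own perception, since the cup may have moved or the room may differ.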
Recognized objects, such as the bed, can now provide occupancy information even for areas not directly observed. Detailed object models (e.g., of a cup) can increase the speed and reliability of the robot's interactions. Task descriptions of previously successful actions (e.g., driving around the bed) can provide guidance on how the robot may be able to successfully perform its task.
This and other prior information (e.g., the previous location of the cup, the likely place to find the patient) can guide this second robot’s search and execution strategy. In addition, as the two robots continue to perform their tasks and pool their data, the quality of prior information will improve and begin to reveal underlying patterns and correlations about the robots and their environment.
As you can see in the video above, RoboEarth has a way to go. One year into the project, we can download task descriptions from RoboEarth and execute a simple task. We can also upload simple things, like an improved map of the environment. But we are still far from creating or using the rich prior information described in the scenario above, and from addressing the potential safety and legal challenges.
I think that the availability of such prior information is a necessary condition for robots to operate in more complex, unstructured environments. The people working on RoboEarth -- myself included -- believe that, ultimately, the nuanced and complicated nature of human spaces can't be captured in a limited set of specifications. A World Wide Web for robots will allow them to perform successfully in increasingly complex tasks and environments.
For more information, have a look at the RoboEarth website.