At the South by Southwest festival last week, XPRIZE announced the launch of the $10 million ANA Avatar XPRIZE, “a four-year global competition to develop real-life avatars.” The idea is to bring together many different kinds of remote robotic technologies to create an easy-to-use, effective, and immersive remote experience. Challenges like these have proved very effective in the past—DARPA in particular has sponsored massive advances in both self-driving cars and humanoid robots.
Part of the reason DARPA was able to do this is that it carefully asked for what is almost, but not quite, impossible, or what former DARPA program manager Gill Pratt used to call “DARPA hard.” XPRIZE, on the other hand, seems to be asking for the definitely possible, and in some cases, the already achieved.
The $10,000,000 ANA Avatar XPRIZE is a four-year global competition focused on accelerating the integration of several emerging and exponential technologies into a multipurpose avatar system that will enable us to see, hear, touch and interact with physical environments and other people through an integrated robotic device.
The ANA Avatar XPRIZE will incentivize innovators around the world to tackle the grand challenge of overcoming the limits of physical transportation. Teams will collaborate to allow us to more rapidly and efficiently distribute skills and expertise to geographic locations where and when they are needed, bridging the gap between distance, time, and cultures.
The winning team will combine state-of-the-art technologies to demonstrate a robotic avatar that allows an untrained operator to complete a diverse series of tasks, from simple to complex, in a physical environment at least 100 km away.
The competition itself will consist of sets of tasks within three scenarios, each of which must be completed in 20 minutes or less. We should emphasize that these are only representative scenarios, as XPRIZE says: “These scenarios are for illustration only, and the actual scenarios may be quite different in detail.”
My initial reaction to this is that it needs to be significantly more difficult. I’m looking at these tasks in the context of the DARPA Robotics Challenge, because the DRC was essentially a challenge for teleoperated robots. There was some assistive autonomy because of the bandwidth restrictions in place, but all of the robots depended on having a well-trained operator interpreting sensor data, making key decisions, and sending commands to the robot.
The XPRIZE competition is different from the DRC in several important ways. First, XPRIZE will use a simulated “high quality” network connection where “reliability, bandwidth, latency, and jitter will be representative of the best available public Internet service.” It’s hard to say how much more effective DRC teams would have been if they could have used the best available public Internet service, but my guess is “lots.”
Second, there’s more of an emphasis on the robot as an avatar rather than as a tool. Operators will need to be able to identify textures being touched by their robots, and identify where their robots are being touched. They’ll also need to handle things like temperature sensing and sound localization. These capabilities are certainly useful, but I’m pretty sure they’ve been solved problems for a while now.
Third, as you can see, the tasks are far more general, and not just disaster focused. With a few exceptions, they all seem significantly easier than the DRC tasks, all of which were performed (successfully, by several robots) under much more difficult circumstances. Here’s what I think might be a bit challenging:
- Take a blanket from the wheelchair, fold it, and put it on a shelf.
- Use a shovel to load 20 kg of debris into a wheelbarrow.
- Throw the weighted end of [a] rope towards [a] sound.
Not very challenging, but at least somewhat challenging: folding things can be tricky even for humans, throwing heavy objects often requires specialized hardware, and moving that much weight under a time limit with a shovel might be difficult for a robot that isn’t statically stable. And speaking of static stability, the rules mention that the robot should “walk” from place to place but don’t actually specify that legs (or two legs, for that matter) are required.
Lastly, and this is probably the biggest distinction of the XPRIZE at this point, the robots and teleop systems will use third-party operators, and teams will only have 20 minutes to train them before the competition. This is in sharp contrast to the DRC, where the robot operators were highly trained experts. We’re looking forward to seeing lots of useful innovation here, with a focus on interfaces, assistive autonomy, and ease of use.
It’s also worth mentioning that the RoboCup@Home competition has been doing tasks very similar to these for years, except with the robots acting autonomously. In other words, that competition is already solving what seems to be a much harder problem.
The upshot is that I don’t have a problem with the Avatar XPRIZE—I just feel that, as is, it’s not being XPRIZE-y enough. The prize is significant, and the vision that XPRIZE presents is long-term impactful, but I don’t think the actual challenge itself does nearly enough to advance the state of the art, especially in the context of other XPRIZE challenges and robotics challenges that we’ve already seen.
So how could the competition be made better? Fundamentally, I think there are several different ways of looking at robotic avatars—you could consider them to be remote humans, with the goal of creating an experience for the user that’s as immersive and as much like “being there” as possible, or you could consider them to be one half of a robot-human team, where the robot and the human augment each other’s strengths while compensating for each other’s weaknesses.
First let’s look at the idea of an immersive experience avatar. Ideally, you’d be able to create a sensory experience that doesn’t feel restrictive in any way. You might not be able to do everything you would in person, but you’d be able to participate and enjoy most activities, and not feel constantly constrained by the remote robotic system. Here are some suggestions of tasks that might be a useful challenge:
- Identify materials with different textures while blindfolded.
- Throw a frisbee back and forth with a remote user.
- Play a game of hide-and-seek in a house-like environment.
An avatar designed to be part of a more practical robot-human team might look quite different, depending on what the system is designed to do. To use disaster relief as an example, you might not want the robotic system to be humanoid at all. You’d want the remote hardware to be capable of extreme mobility, to be very strong and durable, and to incorporate a suite of sensors that outclasses anything a human has to offer, while relying on the human to make sense of the data and provide high-level control of the robot. Some challenging tasks that would take advantage of this might be:
- Perform a dexterous manipulation task in a smoke-filled environment.
- Locate an unconscious human trapped under rubble, and rescue them.
- Collaborate with other humans to unload heavy supplies from a truck.
Or, in the context of providing medical care:
- Perform (simulated) surgery in an ambulance in heavy traffic.
- Remotely diagnose an uncommon ailment.
- Perform a surgical task that normally requires several assistants using only the robotic system.
In all of these examples, the robotic system would not only act as a remote avatar; it would also significantly enhance the capabilities of the human using it, making it possible to do things that weren’t possible before. Perhaps that’s by using sensors to collect more data than our own senses can, or by providing enhanced strength or dexterity, but either way it makes remote presence potentially more effective (or effective in different ways) than if the human were there in person.
I think these example tasks, or tasks like them, would be a substantial challenge and would significantly advance the state of the art. I don’t think that they’d be impossible—we’ve seen examples of many aspects of what would be required tackled individually, from the likes of Willow Garage, MIT, Kindred, Sarcos, and HaptX, just to name a few.
The $10 million prize is a powerful motivator, especially considering that many of these emerging technologies will likely have immediate commercial applications. There’s a real opportunity here to advance robotics, and I just hope that XPRIZE is able to take advantage of it.
[ Avatar XPRIZE ]
Evan Ackerman is a senior editor at IEEE Spectrum. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.