AAAI Video Highlights: Drones Navigating Forests and Robot Boat Swarms

We take you through two of the most impressive robot videos submitted to the AAAI Video Competition

Photo: University Institute of Lisbon

Last Friday, we posted a bunch of videos from the AAAI Video Competition. There are lots of good videos (really, they’re all good), and we didn’t want to play favorites or otherwise influence your votes, so we didn’t add much in the way of commentary or anything like that. But it’s been almost a week, and a few of those videos are certainly worth taking a closer look at. 

First, we have a video accompanying “Evolution of Collective Behaviors for a Real Swarm of Aquatic Surface Robots,” by Miguel Duarte, Vasco Costa, Jorge Gomes, Tiago Rodrigues, Fernando Silva, Sancho Moura Oliveira, and Anders Lyhne Christensen, from the BioMachines Lab and Institute of Telecommunications, in Lisbon, Portugal. This video is fantastic because, among other reasons, I HAD THAT EXACT SAME PLAYMOBIL PIRATE SHIP WHEN I WAS A KID.

There are numerous advantages to using numerous robots. Swarms are robust and flexible, and they can be much cheaper and more effective than a handful of large robots, provided that you can get them all coordinated on doing what you want them to do. This last bit isn’t easy, which is where the artificial neural networks and evolutionary algorithms come into play. By letting simulated robot boats execute simulated missions and learn from their simulated successes and simulated failures, the researchers can evolve controllers that accomplish swarm-level goals even though each controller is only ever driving an individual boat.
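
If “evolving a controller” sounds abstract, here’s a deliberately tiny sketch of the general recipe: every simulated boat runs a copy of the same small neural network, an evolutionary loop perturbs the network’s weights, and the weight vectors that make the whole swarm score well on a mission get to reproduce. Everything in it (the toy homing mission, the network size, the selection scheme) is a stand-in I made up for illustration, not the BioMachines Lab pipeline.

```python
# A toy sketch of neuroevolution for a homogeneous swarm, NOT the BioMachines
# Lab code: every simulated boat runs a copy of the same tiny neural network,
# and we keep the weight vectors that make the swarm as a whole do well.
# The mission (everyone converge on a goal point), the network size, and the
# selection scheme are all stand-ins invented for this illustration.
import numpy as np

N_BOATS, N_STEPS, POP_SIZE, N_GENERATIONS = 10, 200, 20, 30
N_IN, N_HIDDEN, N_OUT = 4, 8, 2                 # in: vector to goal + vector to nearest neighbor
N_WEIGHTS = N_IN * N_HIDDEN + N_HIDDEN * N_OUT  # out: forward speed, turn rate

def controller(weights, obs):
    """Tiny feedforward network shared by every boat: observation -> (speed, turn)."""
    w1 = weights[:N_IN * N_HIDDEN].reshape(N_IN, N_HIDDEN)
    w2 = weights[N_IN * N_HIDDEN:].reshape(N_HIDDEN, N_OUT)
    return np.tanh(np.tanh(obs @ w1) @ w2)

def evaluate(weights, rng):
    """Simulated mission: fitness is how close the whole swarm ends up to the goal."""
    pos = rng.uniform(-10, 10, size=(N_BOATS, 2))
    heading = rng.uniform(-np.pi, np.pi, size=N_BOATS)
    goal = np.zeros(2)
    for _ in range(N_STEPS):
        for i in range(N_BOATS):
            nearest = np.argsort(np.linalg.norm(pos - pos[i], axis=1))[1]
            obs = np.concatenate([goal - pos[i], pos[nearest] - pos[i]]) / 10.0
            speed, turn = controller(weights, obs)
            heading[i] += 0.2 * turn
            pos[i] += 0.1 * (speed + 1.0) * np.array([np.cos(heading[i]), np.sin(heading[i])])
    return -np.mean(np.linalg.norm(pos - goal, axis=1))    # higher (less negative) is better

rng = np.random.default_rng(0)
population = rng.normal(0.0, 1.0, size=(POP_SIZE, N_WEIGHTS))
for generation in range(N_GENERATIONS):
    fitness = np.array([evaluate(individual, rng) for individual in population])
    elite = population[np.argsort(fitness)[-POP_SIZE // 4:]]   # keep the best quarter
    parents = elite[rng.integers(len(elite), size=POP_SIZE)]   # resample parents from the elite
    population = parents + rng.normal(0.0, 0.1, size=(POP_SIZE, N_WEIGHTS))  # mutate
    print(f"generation {generation}: best fitness {fitness.max():.2f}")
```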

As most roboticists have by now realized, the way that robots perform in simulation is vastly different from how they perform out of simulation. This is called the “reality gap,” and there’s also another gap that happens when you go from in-lab reality to in-world reality, but I don’t know what the catchphrase is for that one. What’s especially impressive about this research is that it worked outside, in what (to a tiny robot boat) is basically an ocean, complete with wind and waves and sunlight and possibly sharks.

Running a swarm of autonomous boats, even if they’re well Tupperware’d and Ziploc’d and otherwise waterproofed, will inevitably result in a significant number of hardware issues, and this swarm was no exception. The motors were temperamental, the GPS receivers didn’t always work that well, and decreasing battery power affected the speed of the robots. The simulations didn’t incorporate any of this, but the overall performance of the swarm wasn’t compromised at all. Resilience like this is one of the great things about swarms: with so many robots all working together, having one or two or even a bunch crap out on you just doesn’t matter all that much.

To actually make this swarm useful in real-world applications, it’s going to need more skills: how to integrate mission-specific sensors, how to tolerate hardware faults (beyond the swarm’s inherent adaptability), and how not to get run over by whatever other boats might be operating in the area. Not all of these behaviors can be evolved, but the researchers are hoping to figure out ways of combining behavioral building blocks to generate useful new behaviors on the fly.

[ Paper ] via [ BioMachines Lab ]

Thanks Anders!

Second is this video accompanying “A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots,” by Alessandro Giusti, Jérôme Guzzi, Dan C. Ciresan, Fang-Lin He, Juan P. Rodríguez, Flavio Fontana, Matthias Faessler, Christian Forster, Jürgen Schmidhuber, Gianni Di Caro, Davide Scaramuzza, and Luca M. Gambardella from IDSIA and the University of Zurich. This video is fantastic because, among other reasons, I HAVE THAT EXACT SAME DRONE IN MY CLOSET but I can’t fly it because I haven’t registered it with the FAA:

In many ways, what’s most impressive here is the way that the classifier was trained, since training classifiers usually requires a Mechanical Turk level of human time and energy, with people going through and labeling features in an enormous dataset while slowly turning into brain-dead zombies. Using a human hiker with three cameras to automatically classify a massive amount of imagery into “center of trail,” “left of trail,” and “right of trail” is simple, efficient, and quite clever.
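
The self-labeling trick is simple enough to show in a few lines. Here’s a hypothetical sketch (my own folder names and layout, not IDSIA’s actual dataset tooling): since we know which way each of the hiker’s three cameras was pointing, every frame effectively labels itself.

```python
# A hypothetical sketch of the self-labeling idea (my own folder layout, not
# IDSIA's dataset tooling): because we know which way each of the hiker's three
# cameras was pointing, every frame gets its class label for free.
from pathlib import Path

CAMERA_TO_LABEL = {
    "camera_left": "left of trail",      # camera aimed left of the hiker's direction of travel
    "camera_center": "center of trail",  # camera aimed straight down the trail
    "camera_right": "right of trail",    # camera aimed right of the hiker's direction of travel
}

def build_labeled_dataset(root):
    """Return (image_path, label) pairs derived purely from which camera shot each frame."""
    samples = []
    for camera_dir, label in CAMERA_TO_LABEL.items():
        for image_path in sorted(Path(root, camera_dir).glob("*.jpg")):
            samples.append((image_path, label))
    return samples

# Hypothetical usage: one folder of frames per head-mounted camera.
for path, label in build_labeled_dataset("hike_footage")[:5]:
    print(label, path)
```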

Once you’ve got all of those classified images (17,119 of them, in this case), a deep neural network ponders them for three days on a workstation with a beefy graphics card, looking for consistencies between images of a given class. These aren’t consistencies that a human would necessarily notice, but it doesn’t matter: the neural network doesn’t care what humans think are the defining characteristics of trails; it’s just fine on its own, thank you very much. And it really is just fine: the neural network does as well as (and in some cases better than) a human at classifying trail images, with an accuracy of around 85 percent. It’s always interesting to look at failure cases, where the neural network gets it wrong. Take a look at these examples; see if you can tell how the classifier got fooled:

Image: Examples of trail views that fooled the classifier.

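As for the training step itself, here’s a rough sketch of what a stripped-down version might look like with an off-the-shelf framework (PyTorch). To be clear, this is not the network or the GPU code IDSIA used; the layer sizes, folder layout, and training schedule below are placeholders.

```python
# A hedged sketch of the training step with an off-the-shelf framework (PyTorch),
# NOT the network or GPU implementation IDSIA actually used. The layer sizes,
# folder layout, and schedule below are placeholders for illustration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Assumes the self-labeled frames have been sorted into one folder per class, e.g.
# trail_dataset/{left_of_trail,center_of_trail,right_of_trail}/*.jpg
transform = transforms.Compose([transforms.Resize((101, 101)), transforms.ToTensor()])
dataset = datasets.ImageFolder("trail_dataset", transform=transform)
loader = DataLoader(dataset, batch_size=64, shuffle=True)

model = nn.Sequential(                       # far smaller than the paper's network
    nn.Conv2d(3, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 22 * 22, 128), nn.ReLU(),
    nn.Linear(128, 3),                       # one output per trail class
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                      # the real training ran for days, not minutes
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```
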
Once you have a working classifier, getting a robot to autonomously pilot itself along a trail is, if not trivial, at least reliably possible. All you have to do is get it to turn right when it sees a “left of trail” image, turn left when it sees a “right of trail” image, and keep going when it’s looking at the center of the trail (there’s a sketch of this mapping after the quote below). The processing required is light enough to run onboard a small quadrotor with a camera, although if the robot’s camera isn’t as good as the camera used to capture the training images, things might not go as well as you’d like. There are other tricky things, too:

“The quadrotor is often unable to negotiate trails if there is not enough free space besides the trail centerline: in fact, if the yaw is roughly correct, the classifier compensates a lateral shift only when the quadrotor is about one meter off the centerline; because the current pipeline does not implement explicit obstacle detection and avoidance, this results in frequent crashes on trails with narrow free space. On wide trails with even lighting conditions, the robot was able to successfully follow the trail for a few hundreds of meters.”
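
Crashes aside, the class-to-command mapping itself really is as simple as it sounds. Here’s a minimal sketch of one way to write it down, with my own function names, gains, and sign conventions rather than the authors’ controller:

```python
# A minimal sketch of the class-to-command mapping, with my own function names,
# gain, and sign convention (positive yaw = turn right); this is NOT the
# authors' actual control pipeline.
CLASSES = ("left of trail", "center of trail", "right of trail")  # order of the probabilities below

def steering_command(class_probabilities, yaw_gain=0.5, forward_speed=1.0):
    """Map the classifier's three softmax outputs to (forward speed, yaw rate)."""
    p_left, p_center, p_right = class_probabilities
    # If the view resembles the hiker's LEFT camera, the trail runs off to the
    # robot's right, so yaw right (positive); symmetric for the right-camera class.
    yaw_rate = yaw_gain * (p_left - p_right)
    return forward_speed, yaw_rate

# Example: the classifier is fairly sure we're looking left of the trail.
print(steering_command((0.7, 0.2, 0.1)))   # -> keep flying forward, yaw right
```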

Still, doing this onboard in real time on real trails that the robot has never seen before is very, very cool.

If I can digress from this research for a minute to rope in the whole urban delivery drones thing: urban areas have a lot in common with forests, in that there are well-defined paths with hazards on either side. If you want a sense of what the state of the art is in negotiating environments like this using onboard hardware, as far as we know, you’ve just seen it. And equally as far as we know, neither Google nor Amazon nor anyone else has demonstrated anything like this at all, and until they do, autonomous delivery drones aren’t happening.

Anyway, we should get more details in a few months at ICRA in Stockholm: hopefully this method will work in other environments as well, leading to inexpensive and intelligent autonomous drones.

[ Paper ] via [ Alessandro Giusti ]

Thanks Davide!
