Drones That Smash Into Obstacles Can Be a Good and Useful Thing
Just like bees, these microdrones can bump into things, including each other, and continue flying without a problem.
Image: Vijay Kumar Lab/UPenn

A little over a year ago, we wrote about some clumsy-looking but really very clever research from Vijay Kumar’s lab at the University of Pennsylvania. That project showed how small drones with just protective cages and simple sensors can handle obstacles by simply running into them, bouncing around a bit, and then moving on. The idea is that you don’t have to bother with complex sensors when hitting obstacles just doesn’t matter, which bees figured out about a hundred million years ago.

Over the past year, Yash Mulgaonkar, Anurag Makineni, and Luis Guerrero-Bonilla (all in Kumar’s lab) have come up with a bunch of different ways in which smashing into obstacles can actually be a good and useful thing. From making maps to boosting agility to (mostly) deliberate payload deployment, running into stuff and bouncing off again can somehow do it all.

You can read more about the non-collision avoidance that these drones have going on in our previous article, but it’s essentially as simple as ignoring collisions while relying on a roll cage (made out of heat-cured carbon fiber yarn) modeled after the general shape of a gömböc, which is maybe my favorite shape ever. 

Anyway, the video above highlights the three main improvements:

  1. Bumbling drones can map obstacles: Imagine (if you haven’t already) that every time one of these drones runs into something it says, “Ow.” A whole swarm of them wandering through an area full of obstacles will be going “ow ow ow ow ow” the whole time, and if you keep track of exactly where you hear each “ow,” you can build up a picture of where each collision takes place, and eventually, you’ll get a (sparse) map of where all the obstacles are. While you can do this sort of thing with drones equipped with stereo cameras and lidar and whatnot, that means a big, heavy, expensive, fragile drone. Using a large number of inexpensive robots equipped with basic sensors is potentially more reliable and cost-effective.
  2. Bumbling drones can change direction: All drones can reverse the direction they’re flying in, but they have to do it by decelerating to a complete stop and then accelerating again. If you’re near an obstacle and your drone can handle the impact, it’s sometimes much faster to smash headlong into that obstacle and use the collision to change direction in a fraction of the time.

Drone deploys lights by colliding with wall. Image: Vijay Kumar Lab/UPenn

  3. Bumbling drones can deploy payloads: Yash Mulgaonkar, first author on the paper, explains the last one: “Given that our robots can sustain collisions, we can fly in dark, unstructured environments and deploy small payloads by colliding. We demonstrate it here by flying in a dark basement and deploying small magnetic LED flares to illuminate the environment. This payload can be substituted with other small sensors as well, like those for measuring air quality, radiation, chemical contamination, etc.”
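The mapping idea in the first item above can be sketched in a few lines. To be clear, this is not the authors' code, just a minimal illustration under one assumption: each collision ("ow") event is reported as an (x, y) position from whatever localization is available. Binning those positions into a grid yields a sparse map in which heavily hit cells likely contain obstacle boundaries.

```python
# Minimal sketch (not the authors' implementation) of building a sparse
# obstacle map from collision events reported as (x, y) positions.
from collections import defaultdict

def build_collision_map(collision_points, cell_size=0.25):
    """Bin collision positions into a sparse occupancy grid.

    Cells that accumulate more hits are more likely to contain an
    obstacle boundary; cells with no hits are simply absent (sparse).
    """
    grid = defaultdict(int)
    for x, y in collision_points:
        cell = (int(x // cell_size), int(y // cell_size))
        grid[cell] += 1
    return dict(grid)

# Example: three collisions near the same wall segment, one elsewhere.
hits = [(1.02, 2.10), (1.08, 2.20), (1.11, 2.05), (4.70, 0.30)]
occupancy = build_collision_map(hits)
# occupancy now holds two cells: one with 3 hits, one with 1.
```

The map stays sparse by construction, which is what makes the many-cheap-robots approach tractable: only cells that were actually hit consume memory.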
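As for the second item, some back-of-the-envelope kinematics (illustrative numbers, not figures from the paper) shows why bouncing wins on time: braking from speed +v to -v at constant acceleration a takes 2v/a seconds, while a near-elastic bounce reverses velocity during a wall contact lasting on the order of tens of milliseconds.

```python
# Illustrative comparison of the two ways a drone can reverse direction.
# The 0.02 s contact time is an assumed ballpark, not a measured value.

def reversal_time_braking(v, a):
    """Time to go from +v to -v at constant deceleration a (m/s^2)."""
    return 2 * v / a

def reversal_time_bounce(contact_time=0.02):
    """A near-elastic bounce reverses velocity during the brief contact."""
    return contact_time

# E.g. at 2 m/s with 4 m/s^2 of thrust authority:
t_brake = reversal_time_braking(2.0, 4.0)   # 1.0 s to brake and re-accelerate
t_bounce = reversal_time_bounce()           # ~0.02 s of wall contact
```

With these assumed numbers the bounce is roughly 50 times faster, which is the "fraction of the time" the item above is getting at.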

Right now, this all requires an external motion-capture system to work, and the computation isn’t done on the robots either, meaning that it’ll work in a comfortably equipped robotics lab but not anywhere particularly useful. The good news is that the researchers are working on on-board localization and visual odometry, and we’re pretty sure that they’ll make it happen. They’re pretty sure too, and the paper promises that “the ideas described in this paper can be realized on independent robots with cameras and IMUs within the next year or two.”

“Robust Aerial Robot Swarms Without Collision Avoidance,” by Yash Mulgaonkar, Anurag Makineni, Luis Guerrero-Bonilla, and Vijay Kumar from the University of Pennsylvania, appears in the January 2018 issue of IEEE Robotics and Automation Letters.


How the U.S. Army Is Turning Robots Into Team Players

Engineers battle the limits of deep learning for battlefield bots


RoMan, the Army Research Laboratory's robotic manipulator, considers the best way to grasp and move a tree branch at the Adelphi Laboratory Center, in Maryland.

Evan Ackerman

“I should probably not be standing this close,” I think to myself, as the robot slowly approaches a large tree branch on the floor in front of me. It's not the size of the branch that makes me nervous—it's that the robot is operating autonomously, and that while I know what it's supposed to do, I'm not entirely sure what it will do. If everything works the way the roboticists at the U.S. Army Research Laboratory (ARL) in Adelphi, Md., expect, the robot will identify the branch, grasp it, and drag it out of the way. These folks know what they're doing, but I've spent enough time around robots that I take a small step backwards anyway.

This article is part of our special report on AI, “The Great AI Reckoning.”

The robot, named RoMan, for Robotic Manipulator, is about the size of a large lawn mower, with a tracked base that helps it handle most kinds of terrain. At the front, it has a squat torso equipped with cameras and depth sensors, as well as a pair of arms that were harvested from a prototype disaster-response robot originally developed at NASA's Jet Propulsion Laboratory for a DARPA robotics competition. RoMan's job today is roadway clearing, a multistep task that ARL wants the robot to complete as autonomously as possible. Instead of instructing the robot to grasp specific objects in specific ways and move them to specific places, the operators tell RoMan to “go clear a path.” It's then up to the robot to make all the decisions necessary to achieve that objective.
