Insect-Inspired Vision System Helps Drones Pass Through Small Gaps

Drones that mimic insect behavior negotiate gaps with just a monocular camera

The modified drone used in UMD’s experiments includes an Nvidia TX2 module mounted at the top. For sensing, the drone uses its front-facing camera and a downward-facing optical-flow sensor, which combines a camera plus sonar.
Photo: UMD

Insects are quite good at not running into things, and just as good at running into things and surviving, but targeted, precise flight is much more difficult for them. As cute as insects like bees are, there just isn’t enough space in their fuzzy little noggins for fancy sensing and computing systems. Despite their small size, though, bees are able to perform precise flight maneuvers, and it’s a good thing, too, since their homes are often on the other side of holes not much bigger than they are.

Bees make this work through a sort of minimalist brute-force approach to the problem: They fly up to a small hole or gap, hover, wander back and forth a little bit to collect visual information about where the edges of the gap are, and then steer themselves through. It’s not fast, and it’s not particularly elegant, but it’s reliable and doesn’t take much to execute.

Reliable and not taking much to execute is one way to summarize the focus of the next generation of practical robotics—in other words, robotic platforms that offer affordable real-world autonomy. The University of Maryland’s Perception and Robotics Group has been working on a system that allows a drone to fly through very small and completely unknown gaps using just a single camera and onboard processing. And it’s based on a bee-inspired strategy that yields a success rate of 85 percent.

We’ve posted before about autonomous drones flying through small gaps, but the big difference here is that in this case, the drone has no information about the location or size of the gap in advance. It doesn’t need to build up any kind of 3D map of its environment or model of the gap, which is good because that would be annoying to do with a monocular camera. Instead, UMD’s strategy is to “recover a minimal amount of information that is sufficient to complete the task under consideration.”

To detect where the gap is, the drone uses an optical-flow technique. It takes a picture, moves a little bit, and then takes another picture. It identifies similar features in each picture, and thanks to parallax, the farther-away features behind the gap will appear to have moved less than the closer features around the gap. The edges of the gap are the places where you’ve got the biggest difference between the amount that features appear to have moved. And now that you know where all those things are, you can just zip right through!
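
To make the idea concrete, here is a minimal sketch of parallax-based gap detection using OpenCV’s off-the-shelf Farneback dense optical flow. This is an illustration of the general principle, not the UMD pipeline (the helper name, gains, and thresholds are assumptions for the example): between two frames taken from slightly different positions, the nearby wall around the gap appears to move more than the distant scene visible through it, so sharp spatial changes in flow magnitude mark candidate gap edges.

```python
# Illustration of parallax-based gap detection (not the UMD pipeline):
# regions where optical-flow magnitude changes sharply between two frames
# taken from slightly different positions are candidate gap edges.
import cv2
import numpy as np

def gap_edge_mask(frame_a, frame_b, edge_percentile=95):
    """Return a binary mask of likely gap edges from two frames
    captured a small translation apart."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Dense optical flow: foreground texture (the wall around the gap)
    # moves more than the distant texture seen through the gap.
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)

    # Gap edges show up as sharp spatial changes in flow magnitude.
    grad_x = cv2.Sobel(magnitude, cv2.CV_32F, 1, 0)
    grad_y = cv2.Sobel(magnitude, cv2.CV_32F, 0, 1)
    edge_strength = np.hypot(grad_x, grad_y)

    threshold = np.percentile(edge_strength, edge_percentile)
    return edge_strength > threshold
```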

Or, almost. The other piece of this is using visual servoing to pass through the gap. Visual servoing is just using visual feedback to control motion: The drone takes a picture of the gap, moves forward, takes another picture, and then adjusts its movement to make sure that its position relative to the gap is still what it wants. This is different from a preplanned approach, where the drone figures out in advance the entire path that it wants to take and then follows it—visual servoing is more on the fly. Or, you know, on the bee.
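
As a rough sketch of what visual servoing means in practice (again, not the researchers’ actual controller; the drone interface, gains, and speeds below are hypothetical), a simple proportional scheme just keeps nudging the drone so the detected gap center stays at the image center while it moves forward:

```python
# Minimal visual-servoing sketch with a hypothetical drone interface:
# keep the detected gap centered in the image while moving forward.
K_LATERAL = 0.002    # gain, m/s per pixel of horizontal error (assumed)
K_VERTICAL = 0.002   # gain, m/s per pixel of vertical error (assumed)
FORWARD_SPEED = 0.5  # constant forward speed, m/s (assumed)

def servo_step(gap_center_px, image_size_px):
    """Compute one velocity command from the latest gap detection.

    gap_center_px: (u, v) pixel coordinates of the gap's centroid
    image_size_px: (width, height) of the image
    Returns (forward, lateral, vertical) velocities in m/s.
    """
    error_u = gap_center_px[0] - image_size_px[0] / 2.0
    error_v = gap_center_px[1] - image_size_px[1] / 2.0

    lateral = -K_LATERAL * error_u    # steer left/right toward the gap
    vertical = -K_VERTICAL * error_v  # climb/descend toward the gap
    return FORWARD_SPEED, lateral, vertical

# Control loop: detect the gap in the newest frame, call servo_step(),
# send the command to the drone, and repeat until it has passed through.
```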

The UMD researchers tested this out with a Bebop 2 drone packing an Nvidia Jetson TX2 GPU. A variety of gaps of different sizes and shapes were cut in a foreground wall, which was covered in newspapers to give it some extra texture, and this is where we’re obligated to point out that this technique probably won’t work if you’re trying to fly through a gap in one white wall with another white wall on the other side. Anyway, as long as you’ve got newspapered walls, this system works quite well, the researchers say: “We achieved a remarkable success rate of 85 percent over 150 trials for different arbitrary shaped windows under a wide range of conditions which includes a window with a minimum tolerance of just 5 cm.”

The maximum speed that the drone was able to achieve while passing through the gap was 2.5 meters per second, primarily constrained by the rolling shutter camera (which could mess up the optical flow at higher speeds), but again, this method isn’t really intended for high-performance drones. Having said that, the researchers do mention in the conclusion of their paper that “IMU data can be coupled with the monocular camera to get a scale of the window and plan for aggressive maneuvers.” So, hopefully we’ll be seeing some of that in the near future.

“GapFlyt: Active Vision Based Minimalist Structure-Less Gap Detection For Quadrotor Flight,” by Nitin J. Sanket, Chahat Deep Singh, Kanishka Ganguly, Cornelia Fermüller, and Yiannis Aloimonos from the University of Maryland, is published in IEEE Robotics & Automation Letters.

[ UMD ]
