RoboBee Can Now Pivot on a Dime

Optimized yaw torque provides RoboBee with precision flight capabilities

[Image: The RoboBee robot, shown next to a penny for scale. Harvard Microrobotics Laboratory]

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Since becoming the first insect-inspired robot to take flight, Harvard University’s famous little robotic bee, dubbed RoboBee, has achieved novel perching, swimming, and sensing capabilities, among others.

More recently RoboBee has hit another milestone: precision control over its heading and lateral movement, making it much more adept at maneuvering. As a result, RoboBee can hover and pivot better in midair, more similarly to its biological inspiration, the bumblebee. The advancement is described in a study published this past December in IEEE Robotics and Automation Letters.

This higher level of flight control will be beneficial in a wide range of scenarios where precision flight is needed. For instance, RoboBee could be well suited to exploring sensitive or dangerous areas, or to navigating alongside many other flying robots in a swarm.

“One particularly exciting area is in assisted agriculture, as we look ahead towards applying these vehicles in tasks such as pollination, attempting to achieve the feats of biological insects and birds,” explains Rebecca McGill, a Ph.D. candidate in materials science and mechanical engineering at Harvard, who helped codesign the new RoboBee flight model.

But achieving precision control with a flapping-wing robot has proven challenging, and for good reason. Helicopters, multirotor drones, and fixed-wing aircraft can tilt their rotors and blades, or use rudders and tail rotors, to change their heading and lateral movement. Flapping robots, on the other hand, must move their wings up and down at different speeds in order to rotate while staying upright in midair. This rotational effect about the vertical axis is called yaw torque.

However, flapping-wing micro-aerial vehicles (FWMAVs) such as RoboBee have to precisely balance the upstroke and downstroke speeds within a single fast flapping cycle to generate the desired yaw torque to turn the body. “This makes yaw torque difficult to achieve in FWMAVs,” explains McGill.
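To see why asymmetric up- and downstrokes produce a net turning effect, consider a simple quasi-steady sketch: wing drag grows with the square of stroke speed and opposes the wing's motion, so a "split-cycle" flap with a faster upstroke than downstroke leaves a nonzero drag torque about the yaw axis when averaged over one wingbeat. This is only an illustrative toy model with made-up coefficients, not the model from the paper.

```python
# Illustrative quasi-steady sketch of split-cycle yaw torque.
# k_drag and the split-cycle parameterization are assumptions for
# illustration, not values or equations from the RoboBee study.

def cycle_average_yaw_torque(amplitude, f_up, f_down, k_drag=1.0):
    """Average yaw torque over one split-cycle wingbeat.

    amplitude : stroke amplitude (rad)
    f_up, f_down : effective flapping frequencies of the up- and
                   downstroke half-cycles (Hz)
    k_drag : lumped drag coefficient (assumed)
    """
    t_up, t_down = 1 / (2 * f_up), 1 / (2 * f_down)      # half-stroke durations
    v_up, v_down = amplitude / t_up, amplitude / t_down  # mean stroke speeds
    # Drag torque opposes wing motion: +k*v^2 during the upstroke,
    # -k*v^2 during the downstroke, each acting for its own duration.
    torque_integral = k_drag * (v_up**2 * t_up - v_down**2 * t_down)
    return torque_integral / (t_up + t_down)

# Symmetric flapping cancels over the cycle; asymmetry leaves a net torque.
print(cycle_average_yaw_torque(1.0, 150, 150))  # 0.0
print(cycle_average_yaw_torque(1.0, 170, 130))  # nonzero (positive)
```

The catch the researchers point to is that this balancing act has to happen within every wingbeat, at flapping frequencies far faster than the body's rotation, which is what makes clean yaw control hard in practice.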

To address this issue, McGill and her team developed a new model that analytically maps out how the different flapping signals associated with flight affect forces and torques, determining the best combination for yaw torque (along with thrust, roll torque, and pitch torque) in real time.
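In control terms, a model like this supports an allocation step: given a desired combination of thrust and torques, invert the map to find the flapping-signal parameters that produce it. The sketch below shows that idea with an assumed locally linear map and invented coefficients; the actual model in the paper is analytical and more detailed.

```python
import numpy as np

# Hypothetical control-allocation sketch (not the paper's actual model):
# assume a locally linear map B from wing-signal parameters u
# (left amplitude, right amplitude, mean stroke offset, split-cycle
# asymmetry) to the body wrench w = [thrust, roll, pitch, yaw torque].
B = np.array([
    [1.0,  1.0, 0.0, 0.0],   # thrust: both wing amplitudes add
    [0.5, -0.5, 0.0, 0.0],   # roll torque: amplitude difference
    [0.0,  0.0, 0.8, 0.0],   # pitch torque: mean stroke offset
    [0.0,  0.0, 0.0, 0.3],   # yaw torque: split-cycle asymmetry
])  # made-up coefficients for illustration

def allocate(wrench_desired):
    """Solve B @ u = w for the flapping-signal parameters u."""
    u, *_ = np.linalg.lstsq(B, wrench_desired, rcond=None)
    return u

# Hover thrust plus a small commanded yaw torque.
u = allocate(np.array([1.0, 0.0, 0.0, 0.05]))
```

Running this allocation inside the flight controller, once per control update, is what "in real time" amounts to here: the controller continuously picks the signal combination that delivers the commanded forces and torques.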

“The model improves our understanding of how much yaw torque is produced by different flapping signals, giving better, controllable yaw performance in flight,” explains McGill.

In the team’s study, they tested the new model through 40 different flight scenarios with RoboBee, while varying the control inputs and observing the thrust and torque response for each flight. With its new model, the RoboBee was able to fly in a circle while keeping its gaze focused on the center point, mimicking a scenario in which the vehicle focuses a camera on a target while circling around it.
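The geometry of that gaze-holding maneuver is simple to state: at every point on the circle, the commanded yaw angle points the body at the circle's center. A minimal sketch of that heading command (my own illustration, not code from the study):

```python
import math

# Illustrative geometry for the circling test: the commanded yaw angle
# at position (x, y) points the body toward the circle's center (cx, cy).
def yaw_command(x, y, cx=0.0, cy=0.0):
    return math.atan2(cy - y, cx - x)

# Sample a few waypoints on a 0.5-meter circle around the origin and
# compute the heading that keeps the "camera" aimed at the center.
r = 0.5
headings = []
for theta in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2):
    x, y = r * math.cos(theta), r * math.sin(theta)
    headings.append(yaw_command(x, y))
```

Tracking this continuously varying yaw setpoint while also holding position on the circle is exactly the kind of maneuver that demands the precise yaw torque authority the new model provides.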

“Our experimental results revealed that yaw torque filtering can be mitigated sufficiently…to achieve full control authority in flight,” says Nak-seung Patrick Hyun, a postdoctoral fellow at Harvard who was also involved in the study. “This opens the door to new maneuvers and greater stability, while also allowing utility for onboard sensors.”

McGill and Hyun note that advances like this will not only help robots in the field with tasks such as pollination and emergency response, but also provide more insights into biology. “Flapping-wing robots are exciting because they give us the chance to explore and learn about insect and bird-flight mechanisms through imitation, creating a ‘two-way’ path of discovery towards both robotics and biology,” explains Hyun, noting that the team is interested in studying aggressive aerial maneuvers through their new RoboBee flight model.


How the U.S. Army Is Turning Robots Into Team Players

Engineers battle the limits of deep learning for battlefield bots

[Image: RoMan, the Army Research Laboratory's robotic manipulator, considers the best way to grasp and move a tree branch at the Adelphi Laboratory Center, in Maryland. Evan Ackerman]

“I should probably not be standing this close,” I think to myself, as the robot slowly approaches a large tree branch on the floor in front of me. It's not the size of the branch that makes me nervous—it's that the robot is operating autonomously, and that while I know what it's supposed to do, I'm not entirely sure what it will do. If everything works the way the roboticists at the U.S. Army Research Laboratory (ARL) in Adelphi, Md., expect, the robot will identify the branch, grasp it, and drag it out of the way. These folks know what they're doing, but I've spent enough time around robots that I take a small step backwards anyway.

This article is part of our special report on AI, “The Great AI Reckoning.”

The robot, named RoMan, for Robotic Manipulator, is about the size of a large lawn mower, with a tracked base that helps it handle most kinds of terrain. At the front, it has a squat torso equipped with cameras and depth sensors, as well as a pair of arms that were harvested from a prototype disaster-response robot originally developed at NASA's Jet Propulsion Laboratory for a DARPA robotics competition. RoMan's job today is roadway clearing, a multistep task that ARL wants the robot to complete as autonomously as possible. Instead of instructing the robot to grasp specific objects in specific ways and move them to specific places, the operators tell RoMan to "go clear a path." It's then up to the robot to make all the decisions necessary to achieve that objective.
