Quadrupeds Are Learning to Dribble, Catch, and Balance

Upcoming ICRA papers show off a trio of talented quadrupeds

[Photos: quadrupedal robots catching a ball, dribbling a soccer ball, and crossing a balance beam]

Now that anyone, anywhere can get themselves a quadrupedal robot without having to apply for a major research grant, we’re seeing all kinds of fun research being done with our four-legged electromechanical friends. And by “fun research” I mean very serious research that is making valuable contributions toward practical robotics. But seriously, there are lots of important robotics problems that can be solved in fun and interesting ways; don’t let anyone tell you different, especially not the current United States ambassador to Turkey.

At the 2023 International Conference on Robotics and Automation (ICRA), slated to take place in London next month, three papers will be presented that report on the talents of quadrupedal robots and the researchers who teach them new things, including dribbling, catching, and traversing a balance beam.

MIT’s dribbling quadruped

Quadrupedal soccer robots have a long and noble history; for years, Sony Aibos were the standard platform at RoboCup. But quadrupeds have made some enormous four-legged strides since the late 1990s and early 2000s. Now that basic quadrupedal mobility has been pretty well figured out, it's time to get these robots doing fun stuff. In an upcoming ICRA paper, roboticists from MIT describe how they have taught a quadruped to dribble a soccer ball across rough terrain, which, as anyone who has tried to do this themselves will appreciate, is actually really impressive.

Let’s just get this out of the way: For most of the world, we’re talking about football here. But the paper calls it soccer, so I’m going to call it soccer too. Whatever you call it, it’s the one with the round ball where most of the time a game is actually being played instead of the one with the pointy ball where most of the time people are just standing around not doing anything.

DribbleBot (a name the paper derives from "Dexterous Ball Manipulation with a Legged Robot") is a Unitree Go1 that can dribble a soccer ball under the same real-world conditions as humans who don't have access to an actual soccer field. For those of us who have experience playing zero-budget pick-up soccer wherever we won't get yelled at, flat and smooth grass is often an unattainable luxury. The real world is unfortunately full of tree roots and rocks and gravel and snow and all kinds of other things that make soccer balls behave unpredictably (and give me knee problems). This is the kind of terrain that DribbleBot is learning to handle.

The robot uses only onboard sensing and computation for this task, and it was first trained extensively through reinforcement learning in simulation. There's actually a lot going on with dribbling: As the paper says, "successful dribbling involves adjusting the leg swings to apply targeted forces while the robot moves, balances itself, and orients its position relative to a moving ball." But if you can look past the soccer-specific aspect, the real problem being solved here is legged locomotion while manipulating an occasionally adversarial object in the real world. This obviously opens up other potential applications. Even if soccer were the only application, though, I'd totally pick DribbleBot for my team.
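
The paper's actual training setup is far more elaborate, but as a rough illustration of how a reward for a sim-trained dribbling policy might be shaped, here's a minimal Python sketch. Everything here (function name, weights, inputs) is hypothetical and not taken from the paper:

```python
import numpy as np

def dribbling_reward(ball_vel, cmd_vel, ball_pos, robot_pos,
                     w_track=1.0, w_near=0.2):
    """Toy shaping reward for a simulated dribbling policy (hypothetical).

    Rewards the policy for making the ball's planar velocity track a
    commanded velocity, while keeping the robot close enough to the
    ball to keep nudging it along.
    """
    tracking = -np.linalg.norm(ball_vel - cmd_vel)     # velocity-tracking term
    proximity = -np.linalg.norm(ball_pos - robot_pos)  # stay-near-the-ball term
    return w_track * tracking + w_near * proximity
```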

DribbleBot: Dynamic Legged Manipulation in the Wild, by Yandong Ji, Gabriel B. Margolis, and Pulkit Agrawal from MIT, will be presented at ICRA 2023 in London.

Agile object-catching from UZH

I would argue that one of the most impressive things that animals (humans included) can do is catch. And we do it effortlessly: you have to detect and track a small object flying at you, estimate its trajectory, and then actuate a bunch of different muscles to make sure your hand is in exactly the right place at the right time, usually with only a couple of seconds to make all of this happen. It's amazing that we're able to do it at all, so it's understandable that this confluence of tasks makes catching an especially thorny problem for robots.

The biggest problem for robots in a task like this is the relatively short amount of time they have to sense, think, and react. Conventional cameras make this problem worse, which is why the University of Zurich researchers are instead relying on event cameras. We've written about event cameras a bunch, but basically they're a kind of camera that detects only movement, and does so almost instantly. By drastically lowering perception latency relative to a traditional camera, the robot is able to detect, track, and estimate a catching location for a ball thrown from 4 meters away and traveling at up to 15 meters per second.
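
To make that concrete, here's a toy two-stage sketch of the kind of pipeline involved: estimate the ball's position from a short window of events, then use simple drag-free ballistics to predict where it will cross catch height. This is illustrative only; the event layout, function names, and parameter values are my assumptions, not the paper's method:

```python
import numpy as np

def track_ball(events, window_us=2000):
    """Toy event-camera tracker: the centroid of the most recent events
    approximates the moving ball, since a static background generates
    (almost) no events. `events` is an (N, 3) array of (x, y, t_us) rows.
    Real pipelines also filter noise and cluster."""
    latest = events[:, 2].max()
    recent = events[events[:, 2] > latest - window_us]
    return recent[:, :2].mean(axis=0)  # pixel-space ball estimate

def predict_landing(p0, v0, g=9.81, z_catch=0.5):
    """Where a ball at 3D position p0 (meters) with velocity v0 (m/s)
    crosses the catch height z_catch, ignoring air drag."""
    # Solve z(t) = p0[2] + v0[2]*t - 0.5*g*t^2 = z_catch for the later root.
    a, b, c = -0.5 * g, v0[2], p0[2] - z_catch
    t = (-b - np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p0[:2] + v0[:2] * t  # (x, y) where the robot should be
```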

The catching maneuver was trained in simulation and runs in real life on an ANYmal-C quadruped, which displays some impressive self-sacrificing behaviors like lunges. An overall success rate of 83 percent isn't bad at all, and the researchers point out that this is just a "first working demo" with plenty of room for optimization. The really important thing here is giving quadrupedal robots new capabilities by adding event cameras to a sensing arsenal that's been stuck in stereo-camera-and-lidar land for far too long. Especially considering the new dynamic skills that we've been seeing from quadrupeds recently, event cameras could unlock all kinds of new capabilities that depend on rapid perception of moving objects.

Event-based Agile Object Catching with a Quadrupedal Robot, by Benedek Forrai, Takahiro Miki, Daniel Gehrig, Marco Hutter, and Davide Scaramuzza from the University of Zurich, will be presented at ICRA 2023 in London.

CMU’s quadruped stays balanced

Balancing is a skill you'd think robots would excel at, because we can equip them with exquisitely sensitive hardware that tells them how they're moving with an astounding level of precision. But a robot knowing exactly how out of balance it is and a robot being able to get itself back into balance are two different things. A problem that many (if not most) legged robots have when it comes to balancing is limited ankle and foot actuation. Some humanoids have it, and you can see for yourself how important it is by taking off your shoes and standing on one foot: pay attention to the constant corrective motions coming from all of those teeny muscles in your ankle, foot, and toes. Even the most sophisticated humanoid robots don't have that level of control, and quadrupeds usually have only pointy feet to work with. That's why, when it comes to balancing, they need a little help.

Aww, just look at those adorable little steps! Unfortunately, the adorable little steps aren't doing the job of keeping the robot from tipping over. For that, you can thank the reaction wheels mounted on its back. You'll notice that the robot moves two legs at a time, meaning that only two legs are keeping it off the ground, and that's not enough legs on the ground for the robot to keep itself stable. The reaction wheels compensate by spinning up and down to exert torque on the body of the robot, independently of its legs. If this seems like cheating to you, well, you can just think of the reaction wheels as the equivalent of a tail, which many animals (and a few robots) use as a supplemental control system.
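
As a back-of-the-envelope illustration of the principle (not CMU's actual controller), a reaction wheel can be driven by something as simple as a PD loop on body tilt: accelerating the wheel one way pushes the body the other way. All names, gains, and inertia values below are made up:

```python
# Toy 1-DoF reaction-wheel balance loop (illustrative; hypothetical values).
KP, KD = 40.0, 4.0   # PD gains on body roll
I_WHEEL = 0.002      # wheel spin-axis inertia, kg*m^2

def wheel_motor_torque(roll, roll_rate):
    """Motor torque to command on the reaction wheel.

    The corrective torque we want on the body is a PD term on tilt; by
    Newton's third law, applying torque tau to the wheel puts -tau back
    on the body, so we command the opposite sign.
    """
    body_torque = -(KP * roll + KD * roll_rate)  # torque we want on the body
    return -body_torque                          # torque to apply to the wheel

def wheel_speed_step(omega, torque, dt=0.002):
    """Integrate wheel speed one control tick. Momentum saturation (the
    wheel maxing out) is the main practical limit of this approach,
    which is why the wheels have to spin both up and down."""
    return omega + (torque / I_WHEEL) * dt
```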

The researchers suggest that a smaller and lighter version of these reaction wheels could be usefully integrated into many legged robot designs, and would help all of them successfully cross balance beams. For the tiny minority of robots that don’t find themselves crossing balance beams full-time, reaction wheels would be an added source of stability, making robots better able to (among other things) withstand the obligatory shoves and kicks that every single quadruped robot in a robotics lab has to endure.

Enhanced Balance for Legged Robots Using Reaction Wheels, by Chi-Yen Lee, Shuo Yang, Benjamin Bokser, and Zachary Manchester from Carnegie Mellon University, will be presented at ICRA 2023 in London.