Play With a Swarm of Robots at NYC's Museum of Mathematics

Museumgoers can watch behaviors emerge underfoot

A group of ovoid robots underneath a transparent floor, illuminated by yellow, green, or red LEDs on their backs.
Three sub-swarms of robots—identified by color—search for humans to interact with in a new exhibit at MoMath.
Photo: Stephen Cass

Self-organizing robot swarms can be found in research laboratories around the world. Biologists like them because they can give insight into the group activities of animals, such as flocking, while roboticists like them because they open the door to accomplishing tasks without the need to program the exact individual behavior of dozens—or even hundreds or thousands—of robots.

Now anyone visiting New York City can interact with a robot swarm. MoMath, the National Museum of Mathematics (which bills itself as “the coolest thing that ever happened to math”), unveiled its new Robot Swarm exhibit yesterday morning. While getting journalists to attend an 8:30 a.m. briefing without the promise of a copious supply of free coffee (and maybe some of those mini Danish pastries) is a feat in itself, creating a multirobot exhibit that is appealing to visitors—and tough enough to withstand them walking all over it—is the real achievement.

The exhibit looks like a boxing ring (MoMath is located beside the site of the original Madison Square Garden, where many famous matches were held). The robots—gliding ovoids that old-school Doctor Who fans will find somewhat familiar—move about in an arena just underneath the transparent floor of the ring.  

Visitors can see the robot swarm moving beneath their feet. Recharging stations and spare robots hide under the floor to the right-hand side of the arena.

The robots can interact with humans by using infrared cameras that track the location of special shoulder patches, like this one worn by Glen Whitney, co-founder of MoMath.

Kiosks around the ring let watchers queue up one of five types of behavior for the robots to execute. These include “pursue”—where the robots try to minimize the distance between themselves and a target, while avoiding bumping into each other or any obstacles—and “robophobia”—where the robots try to maximize the distance between themselves and a host of targets, typically ending up arranged in a lattice pattern. The targets are museumgoers walking on the floor above the robots while wearing backpacks with an identifying shoulder patch. The robots often group themselves into sub-swarms, each responding to a different target and identified by colored LEDs on their backs.
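Behaviors like these are often implemented as simple steering rules: “pursue” heads toward the nearest target, while “robophobia” sums repulsive pushes away from every target, and both add a separation term so robots keep clear of one another. The sketch below is a minimal Python illustration of that idea—the function and parameter names are hypothetical, and the exhibit’s actual control code has not been published.

```python
import math

def steer(robot, targets, neighbors, mode, max_speed=1.0, sep_radius=0.5):
    """Toy steering rule. 'pursue' moves toward the nearest target;
    'robophobia' moves away from all targets. Both add an
    inverse-square separation push away from nearby robots.
    (Illustrative only; not the exhibit's published algorithm.)"""
    vx = vy = 0.0
    if mode == "pursue":
        tx, ty = min(targets, key=lambda t: math.dist(robot, t))
        dx, dy = tx - robot[0], ty - robot[1]
        d = math.hypot(dx, dy) or 1e-9
        vx, vy = dx / d, dy / d              # unit vector toward target
    elif mode == "robophobia":
        for tx, ty in targets:
            dx, dy = robot[0] - tx, robot[1] - ty
            d = math.hypot(dx, dy) or 1e-9
            vx += dx / d**2                  # inverse-square repulsion
            vy += dy / d**2
    for nx, ny in neighbors:                 # separation from other robots
        dx, dy = robot[0] - nx, robot[1] - ny
        d = math.hypot(dx, dy)
        if 0 < d < sep_radius:
            vx += dx / d**2
            vy += dy / d**2
    speed = math.hypot(vx, vy)
    if speed > max_speed:                    # clamp to the robot's top speed
        vx, vy = vx / speed * max_speed, vy / speed * max_speed
    return vx, vy
```

With many robots all running the robophobia rule, the mutual repulsion terms are what push the group toward the evenly spaced lattice the exhibit demonstrates.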

Chris Keitel, principal at Three Byte Intermedia, which built the exhibit for MoMath along with Knowledge Resources, walked me through how the system operates. Each robot has two wheels and only one sensor: a camera pointed at the floor beneath it. The floor is made up of tiles printed with a pattern of markers that look like lots of little bulls-eyes. A Xilinx FPGA-based processor analyzes the markers in the robot’s field of view. Depending on which markers it can see, the robot can work out both its position and orientation.

A downward-facing camera under each robot keeps track of circular position markers. The robots report their locations via radio to a central computer. Behaviors such as obstacle detection and determining the distance to nearby robots are performed by the central computer, which constantly updates a software model of the arena.

The circular markers cover the floor of the arena. They can be read like a bar code using a camera and a dedicated processor. By having a field of view wide enough to cover several markers at once, a robot can determine both its position and orientation.
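Seeing two known markers at once is enough to pin down both position and heading: the angle of the marker pair on the floor versus its angle in the image gives the rotation, and rotating one marker’s image offset back into floor coordinates gives the translation. Here is a minimal sketch of that geometry, assuming the image coordinates have already been scaled to floor units; `MARKER_MAP` and `estimate_pose` are hypothetical names, not the exhibit’s actual decoder.

```python
import math

# Hypothetical marker map: marker ID -> (x, y) floor position, in meters.
MARKER_MAP = {17: (1.0, 2.0), 42: (1.2, 2.0)}

def estimate_pose(detections):
    """Estimate robot (x, y, heading) from two floor markers seen by the
    downward camera. `detections` maps marker ID -> (u, v) position in
    the camera frame, already scaled to meters. (Illustrative sketch.)"""
    (id_a, (ua, va)), (id_b, (ub, vb)) = list(detections.items())[:2]
    ax, ay = MARKER_MAP[id_a]
    bx, by = MARKER_MAP[id_b]
    # Heading = angle of the pair on the floor minus its angle in the image.
    theta = math.atan2(by - ay, bx - ax) - math.atan2(vb - va, ub - ua)
    # Rotate marker A's camera-frame offset into floor coordinates,
    # then subtract it from A's known floor position.
    c, s = math.cos(theta), math.sin(theta)
    x = ax - (c * ua - s * va)
    y = ay - (s * ua + c * va)
    return x, y, theta
```

For example, a robot sitting directly over marker 17 with marker 42 dead ahead would recover position (1.0, 2.0) with zero heading.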

This data is radioed back to a central computer using a proprietary data protocol operating at 434 MHz (which helps reduce interference from mobile devices). The human targets are tracked using a set of infrared cameras; while only the shoulder patches are required for tracking, attaching them to a light backpack ensures they are always worn with a specific orientation, which means the tracking system can work out which way the human targets are facing.

The central computer maintains a model of the physical layout of the arena and notes the location of each robot and human. The computer also runs copies of the chosen behavioral program, one for each robot. The arena model informs each copy about the location of nearby obstacles, other robots, and human targets. Movement instructions based on the response of each program to this information are radioed back to their corresponding robots.
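The control loop described above can be sketched as one behavior object per robot, each fed the shared arena model every tick, with the resulting command radioed out. This is an illustrative Python skeleton under assumed names (`PursueBehavior`, `control_cycle`, the `world` layout), not the exhibit’s actual software.

```python
import math

class PursueBehavior:
    """One per-robot behavior copy run on the central computer
    (hypothetical name; each robot gets its own instance)."""
    def step(self, robot_pos, world):
        # Head toward the nearest human target at unit speed.
        tx, ty = min(world["targets"], key=lambda t: math.dist(robot_pos, t))
        d = math.dist(robot_pos, (tx, ty)) or 1e-9
        return ((tx - robot_pos[0]) / d, (ty - robot_pos[1]) / d)

def control_cycle(world, behaviors, send):
    """One tick of the central controller: give each behavior copy the
    shared arena model, then radio the command to its robot via send()."""
    for robot_id, pos in world["robots"].items():
        cmd = behaviors[robot_id].step(pos, world)
        send(robot_id, cmd)

# Example: two robots on either side of a single human target.
world = {"robots": {1: (0.0, 0.0), 2: (4.0, 0.0)}, "targets": [(2.0, 0.0)]}
behaviors = {1: PursueBehavior(), 2: PursueBehavior()}
sent = {}
control_cycle(world, behaviors, sent.__setitem__)
# Robot 1 is told to head in +x toward the target; robot 2 in -x.
```

Keeping the behavior copies on one machine means every copy reads the same, always-current arena model, which is what lets obstacle detection happen in software rather than on each robot.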

Running the robots’ control software on the central computer, rather than embedding it in each robot, means that separate hardware sensors for obstacle detection could be eliminated. The control software also manages the number of robots in the arena: if more than 25 are operating at the same time, it gets too crowded. And when each robot nears the end of its battery charge (up to 4 hours), it can be directed to plug into a charging “garage” hidden underneath the arena. While a robot is charging, one of a few spare robots lurking in the garage can be sent out in its place. Keitel says that they have a total of 100 robots on hand, to provide a buffer for longer-term maintenance issues.
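The fleet-rotation logic amounts to bookkeeping: recall robots whose batteries run low, and deploy a charged spare whenever one is available and the arena isn’t full. A minimal sketch of that idea, with hypothetical names and thresholds (the article gives only the 25-robot cap and the roughly 4-hour charge life):

```python
MAX_ACTIVE = 25      # more than this and the arena gets too crowded
LOW_BATTERY = 0.1    # assumed recall threshold (fraction of full charge)

def rotate_fleet(active, spares):
    """Recall low-battery robots to the charging garage and deploy
    charged spares in their place. `active` and `spares` map
    robot ID -> battery fraction (0.0 to 1.0). Illustrative sketch."""
    for rid in [r for r, b in active.items() if b <= LOW_BATTERY]:
        spares[rid] = active.pop(rid)        # send to the garage to charge
        charged = [r for r, b in spares.items() if b >= 0.95 and r != rid]
        if charged and len(active) < MAX_ACTIVE:
            sub = charged[0]
            active[sub] = spares.pop(sub)    # deploy a fresh spare
    return active, spares
```

With 100 robots on hand for at most 25 active slots, the garage can always hold enough charged spares to cover both recharging cycles and longer repairs.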

The fact that a software model of the arena exists also makes it possible to test out new behaviors without needing access to the exhibit itself. Glen Whitney, co-founder and co-executive director of MoMath, said they hope to make the model accessible online so that high school students could try out different swarm programs. The best new programs would be added to the repertoire of behaviors that can be chosen by museumgoers at the exhibit’s kiosks.

Robot Swarm opens to the public on 14 December.


How the U.S. Army Is Turning Robots Into Team Players

Engineers battle the limits of deep learning for battlefield bots

Robot with treads near a fallen branch

RoMan, the Army Research Laboratory's robotic manipulator, considers the best way to grasp and move a tree branch at the Adelphi Laboratory Center, in Maryland.

Evan Ackerman

“I should probably not be standing this close,” I think to myself, as the robot slowly approaches a large tree branch on the floor in front of me. It's not the size of the branch that makes me nervous—it's that the robot is operating autonomously, and that while I know what it's supposed to do, I'm not entirely sure what it will do. If everything works the way the roboticists at the U.S. Army Research Laboratory (ARL) in Adelphi, Md., expect, the robot will identify the branch, grasp it, and drag it out of the way. These folks know what they're doing, but I've spent enough time around robots that I take a small step backwards anyway.

This article is part of our special report on AI, “The Great AI Reckoning.”

The robot, named RoMan, for Robotic Manipulator, is about the size of a large lawn mower, with a tracked base that helps it handle most kinds of terrain. At the front, it has a squat torso equipped with cameras and depth sensors, as well as a pair of arms that were harvested from a prototype disaster-response robot originally developed at NASA's Jet Propulsion Laboratory for a DARPA robotics competition. RoMan's job today is roadway clearing, a multistep task that ARL wants the robot to complete as autonomously as possible. Instead of instructing the robot to grasp specific objects in specific ways and move them to specific places, the operators tell RoMan to "go clear a path." It's then up to the robot to make all the decisions necessary to achieve that objective.
