Play With a Swarm of Robots at NYC's Museum of Mathematics

Museumgoers can watch behaviors emerge underfoot


A group of ovoid robots underneath a transparent floor, illuminated by yellow, green, or red LEDs on their backs.
Three sub-swarms of robots—identified by color—search for humans to interact with in a new exhibit at MoMath.
Photo: Stephen Cass

Self-organizing robot swarms can be found in research laboratories around the world. Biologists like them because they can give insight into the group activities of animals, such as flocking, while roboticists like them because they open the door to accomplishing tasks without the need to program the exact individual behavior of dozens—or even hundreds or thousands—of robots.

Now anyone visiting New York City can interact with a robot swarm. MoMath, the National Museum of Mathematics (which bills itself as “the coolest thing that ever happened to math”), unveiled its new Robot Swarm exhibit yesterday morning. While getting journalists to attend an 8:30 a.m. briefing without the promise of a copious supply of free coffee (and maybe some of those mini Danish pastries) is a feat in itself, creating a multirobot exhibit that is appealing to visitors—and tough enough to withstand them walking all over it—is the real achievement.

The exhibit looks like a boxing ring (MoMath is located beside the site of the original Madison Square Garden, where many famous matches were held). The robots—gliding ovoids that old-school Doctor Who fans will find somewhat familiar—move about in an arena just underneath the transparent floor of the ring.  

Visitors can see the robot swarm moving beneath their feet. Recharging stations and spare robots hide under the floor to the right-hand side of the arena.

The robots can interact with humans by using infrared cameras that track the location of special shoulder patches, like this one worn by Glen Whitney, co-founder of MoMath.

Kiosks around the ring let watchers queue up one of five types of behavior for the robots to execute. These include “pursue,” in which the robots try to minimize the distance between themselves and a target while avoiding bumping into each other or any obstacles, and “robophobia,” in which the robots try to maximize the distance between themselves and a host of targets, ultimately arranging themselves into a lattice pattern. The targets are museumgoers walking on the floor above the robots while wearing backpacks with an identifying shoulder patch. The robots often group themselves into sub-swarms, each responding to a different target and identified by colored LEDs on their backs.
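To make those two behaviors concrete, here is a minimal sketch of the kind of steering rules being described. The exhibit’s actual code isn’t public, so the function names, gains, and distances below are purely illustrative.

```python
import numpy as np

# Illustrative sketch only; not the exhibit's real code.

def pursue(robot_pos, target_pos, neighbor_positions, min_gap=0.3, gain=1.0):
    """Steer toward a target while pushing away from nearby robots or obstacles."""
    attraction = gain * (target_pos - robot_pos)
    repulsion = np.zeros(2)
    for other in neighbor_positions:
        offset = robot_pos - other
        dist = np.linalg.norm(offset)
        if 0 < dist < min_gap:
            # Push away harder the closer the neighbor gets.
            repulsion += offset / dist * (min_gap - dist)
    return attraction + repulsion

def robophobia(robot_pos, target_positions, gain=1.0):
    """Flee from every tracked human target; the summed repulsion tends to
    spread the swarm out into a regular lattice."""
    velocity = np.zeros(2)
    for target in target_positions:
        offset = robot_pos - target
        dist = np.linalg.norm(offset)
        if dist > 0:
            velocity += gain * offset / dist**2  # distant targets push less
    return velocity
```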

Chris Keitel, principal at Three Byte Intermedia, which built the exhibit for MoMath along with Knowledge Resources, walked me through how the system operates. The robots have two wheels and only one sensor: a camera that points at the floor beneath them. The floor is made up of tiles printed with a pattern of markers that look like lots of little bull’s-eyes. A Xilinx FPGA-based processor analyzes the markers in the robot’s field of view. Depending on which markers it can see, the robot can work out both its position and orientation.

A downward-facing camera under each robot keeps track of circular position markers. The robots report their locations via radio to a central computer. Tasks such as obstacle detection and determining the distance to nearby robots are performed by the central computer, which constantly updates a software model of the arena.

The circular markers cover the floor of the arena. They can be read like a bar code using a camera and a dedicated processor. By having a field of view wide enough to cover several markers at once, a robot can determine both its position and orientation.
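As a rough illustration of how seeing several markers at once yields a full pose, here is a hedged sketch that assumes each decoded marker supplies a known world position and is also measured in the camera’s robot-centered frame. The real marker encoding and FPGA pipeline aren’t described in that level of detail, so the data layout below is an assumption.

```python
import math

# Hypothetical sketch: estimate a robot's pose from two decoded floor markers.
# Each marker dict has 'world' (x, y) decoded from its pattern and 'camera'
# (x, y) as measured by the downward-facing camera, in the same length units.

def estimate_pose(marker_a, marker_b):
    # Heading: angle of the marker pair in the world frame minus its angle
    # in the camera frame.
    wx = marker_b["world"][0] - marker_a["world"][0]
    wy = marker_b["world"][1] - marker_a["world"][1]
    cx = marker_b["camera"][0] - marker_a["camera"][0]
    cy = marker_b["camera"][1] - marker_a["camera"][1]
    heading = math.atan2(wy, wx) - math.atan2(cy, cx)

    # Position: rotate marker A's camera-frame offset into the world frame
    # and subtract it from A's known world position.
    cos_h, sin_h = math.cos(heading), math.sin(heading)
    ax, ay = marker_a["camera"]
    robot_x = marker_a["world"][0] - (ax * cos_h - ay * sin_h)
    robot_y = marker_a["world"][1] - (ax * sin_h + ay * cos_h)
    return robot_x, robot_y, heading
```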

This data is radioed back to a central computer using a proprietary data protocol operating at 434 MHz (which helps reduce interference from mobile devices). The human targets are tracked using a set of infrared cameras; while only the shoulder patches are required for tracking, attaching them to a light backpack ensures they are always worn with a specific orientation, which means the tracking system can work out which way the human targets are facing.
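A quick sketch of that orientation trick: if the patch sits at a fixed, known angle on the wearer’s shoulder, the tracked patch heading converts directly into a facing direction. The offset value and names below are assumptions for illustration, not details from the exhibit.

```python
import math

# Assumed mounting angle of the patch relative to the wearer's facing direction.
PATCH_TO_BODY_OFFSET = math.radians(90)

def facing_direction(patch_heading_rad):
    """Convert the IR-tracked patch heading into the wearer's facing angle."""
    return (patch_heading_rad + PATCH_TO_BODY_OFFSET) % (2 * math.pi)
```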

The central computer maintains a model of the physical layout of the arena and notes the location of each robot and human. The computer also runs copies of the chosen behavioral program, one for each robot. The arena model informs each copy about the location of nearby obstacles, other robots, and human targets. Movement instructions based on the response of each program to this information are radioed back to the corresponding robots.
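That description amounts to a centralized sense-model-act loop. The outline below is an assumption-laden sketch rather than the exhibit’s actual software: `arena_model`, `behaviors`, and `radio` are hypothetical objects standing in for the components the article describes.

```python
import time

NEIGHBOR_RADIUS = 0.5  # meters; assumed range passed to each behavior copy

def control_loop(arena_model, behaviors, radio, period=0.05):
    """arena_model tracks robot, human, and obstacle positions; behaviors maps
    robot_id -> a behavior instance with a step() method; radio sends and
    receives packets. All three are illustrative stand-ins."""
    while True:
        # 1. Update the arena model from incoming position reports.
        for report in radio.receive_all():
            arena_model.update_robot(report.robot_id, report.pose)

        # 2. Run each robot's copy of the chosen behavior on its local view.
        for robot_id, behavior in behaviors.items():
            nearby = arena_model.query_neighbors(robot_id, NEIGHBOR_RADIUS)
            command = behavior.step(
                pose=arena_model.robot_pose(robot_id),
                neighbors=nearby.robots,
                obstacles=nearby.obstacles,
                targets=arena_model.human_targets(),
            )
            # 3. Radio the resulting movement command back to that robot.
            radio.send(robot_id, command)

        time.sleep(period)
```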

Running the robots’ control software on the central computer, rather than embedding it in each robot, means that separate hardware sensors for obstacle detection could be eliminated. The control software also manages the number of robots in the arena: if more than 25 are operating at the same time, it gets too crowded. And when a robot nears the end of its battery charge (which lasts up to 4 hours), it can be directed to plug into a charging “garage” hidden underneath the arena. While a robot is charging, one of a few spare robots lurking in the garage can be sent out in its place. Keitel says they have a total of 100 robots on hand to provide a buffer for longer-term maintenance issues.
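One possible shape for that fleet-management rule, again as an illustrative sketch rather than the exhibit’s real code: the 25-robot cap and roughly 4-hour runtime come from the article, while the battery threshold, attributes, and method names are assumptions.

```python
MAX_ACTIVE = 25      # crowding limit mentioned for the arena
LOW_BATTERY = 0.15   # assumed threshold; the article gives ~4 h of runtime

def manage_fleet(active, garage):
    """active: robots on the floor; garage: charging and spare robots.
    Both hold objects with .battery (0-1) and .charged attributes and
    drive_to_garage()/drive_to_arena() methods -- illustrative names only."""
    for robot in list(active):
        if robot.battery < LOW_BATTERY:
            # Send the tired robot to a charging bay under the floor...
            robot.drive_to_garage()
            active.remove(robot)
            garage.append(robot)
            # ...and release a charged spare, if one is available and the
            # floor is not already at the crowding limit.
            spares = [r for r in garage if r.charged]
            if spares and len(active) < MAX_ACTIVE:
                spare = spares[0]
                garage.remove(spare)
                spare.drive_to_arena()
                active.append(spare)
```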

The existence of a software model of the arena also makes it possible to test out new behaviors without needing access to the exhibit itself. Glen Whitney, co-founder and co-executive director of MoMath, said they hope to make the model accessible online so that high school students can try out different swarm programs. The best new programs would be added to the repertoire of behaviors that can be chosen by museumgoers at the exhibit’s kiosks.

Robot Swarm opens to the public on 14 December.
