This Robot Wants to Beat You at Air Hockey

It adapts to your playing style and wants to get inside your head

When it comes to playing games against robots, the future doesn't look too bright for us humans. Machines will likely beat us, or are already beating us, at soccer, ping pong, chess, Go, baseball, basketball, rock-paper-scissors, iPhone games, and, of course, Jeopardy. Now add air hockey to the list.

Japanese researchers at Chiba University's Namiki Lab have developed an air-hockey robot that is skillful enough to compete against human players. It's not the first air-hockey robot developed, but the team led by Professor Akio Namiki has upped the ante: their robot changes its strategy based on its human opponent's playing style.

The system consists of an air-hockey table, a Barrett four-axis robotic arm, two high-speed cameras, and an external PC. It builds on the lab's work with high-speed tracking. Previously, the researchers (in collaboration with the University of Tokyo's Ishikawa Oku Lab) paired an ultrafast vision system with a dexterous robot hand to juggle balls and fold towels; here, the vision system tracks the puck and the opponent's paddle. The position data from the camera images is processed by the external PC, which determines the robot's next move. The robot tracks the game at an insanely fast 500 frames per second, which means that, from the robot's point of view, its human opponent is moving at a laughably slow pace. It's as if the robot were playing the game in Matrix-style bullet time.
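The lab hasn't published its tracking code, but as a rough sketch of what a 500-frame-per-second feed buys you, here is a minimal Python illustration (the function names, coordinates, and units are assumptions, not the lab's): at 2 milliseconds per frame, two consecutive puck centroids are enough to estimate the puck's velocity and extrapolate where it will be a few frames ahead.

```python
import numpy as np

FRAME_RATE_HZ = 500        # tracking rate reported in the article
DT = 1.0 / FRAME_RATE_HZ   # 2 ms between frames

def estimate_puck_state(prev_pos, curr_pos):
    """Estimate puck velocity by finite differences between two frames.

    prev_pos, curr_pos: (x, y) puck centroids in table coordinates (metres),
    as they might come out of a blob detector running on each camera frame.
    """
    prev = np.asarray(prev_pos, dtype=float)
    curr = np.asarray(curr_pos, dtype=float)
    velocity = (curr - prev) / DT
    return curr, velocity

def predict_position(pos, vel, horizon_s):
    """Linearly extrapolate the puck `horizon_s` seconds ahead,
    ignoring wall bounces and friction for simplicity."""
    return pos + vel * horizon_s

pos, vel = estimate_puck_state((0.400, 0.250), (0.404, 0.248))
print("velocity (m/s):", vel)                       # about (2.0, -1.0)
print("position in 50 ms:", predict_position(pos, vel, 0.05))
```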

To keep the game entertaining for human players, the researchers programmed the robot with a three-layer control system. The first layer handles basic motion control at the hardware level. The second layer decides the robot's short-term strategy, choosing whether to hit the puck, defend the goal, or stay still in order to counter the puck's incoming trajectory. The third layer determines the machine's long-term strategy, and this is where things get interesting.
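As an illustration only, not the paper's actual controller, the Python sketch below shows how the bottom two layers might fit together. The tactic names come from the description above; the threshold, coordinate convention, and everything else are assumptions.

```python
from enum import Enum, auto

class Tactic(Enum):          # short-term choices described in the article
    HIT = auto()
    DEFEND = auto()
    STAY = auto()

class Strategy(Enum):        # long-term stance chosen by the top layer
    OFFENSIVE = auto()
    DEFENSIVE = auto()

DEFENSE_ZONE_X = 0.5         # hypothetical boundary (m) of the robot's defensive zone

def choose_tactic(puck_pos, puck_vel, strategy):
    """Middle layer: pick a motion primitive for the current puck state.
    x is measured from the robot's end of the table (assumed convention)."""
    x, _ = puck_pos
    vx, _ = puck_vel
    if vx >= 0:                          # puck moving away from the robot
        return Tactic.STAY
    if strategy is Strategy.OFFENSIVE:
        return Tactic.HIT                # strike the puck back at the opponent
    # defensive stance: only intercept once the puck gets close to the goal
    return Tactic.DEFEND if x < DEFENSE_ZONE_X else Tactic.STAY

def execute(tactic, puck_pos):
    """Bottom-layer stand-in: the real system commands the four-axis arm;
    here we just report the chosen motion."""
    print(f"{tactic.name}: move paddle toward {puck_pos}")

# The top layer (see the MPH discussion below) supplies `strategy`.
execute(choose_tactic((0.6, 0.3), (-1.5, 0.2), Strategy.OFFENSIVE), (0.6, 0.3))
```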

Basically, the robot observes the speed and position of the player's paddle in relation to the puck. This data can be summarized in what is known as a Motion Pattern Histogram (MPH), which the robot uses to estimate whether its opponent is playing aggressively or defensively. Over the course of a game, the robot builds these MPHs in real time and compares them with reference patterns to figure out what you're doing.
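The article doesn't give the exact histogram features or the matching metric, so the sketch below simply assumes an MPH built from paddle speed and paddle-to-puck distance, compared against stored reference histograms with an L1 distance; the bin ranges and toy data are invented for illustration.

```python
import numpy as np

SPEED_RANGE = (0.0, 3.0)   # paddle speed in m/s (assumed bounds)
DIST_RANGE = (0.0, 1.0)    # paddle-to-puck distance in m (assumed bounds)

def motion_pattern_histogram(paddle_speeds, paddle_puck_dists, bins=8):
    """Build a normalized 2-D histogram of paddle speed vs. distance to the
    puck over a window of frames (assumed features, not the paper's)."""
    hist, _, _ = np.histogram2d(paddle_speeds, paddle_puck_dists,
                                bins=bins, range=[SPEED_RANGE, DIST_RANGE])
    total = hist.sum()
    return hist / total if total > 0 else hist

def classify_style(observed, ref_offensive, ref_defensive):
    """Label the opponent by whichever reference MPH the observed one is
    closer to, using L1 distance as a stand-in for the paper's metric."""
    d_off = np.abs(observed - ref_offensive).sum()
    d_def = np.abs(observed - ref_defensive).sum()
    return "offensive" if d_off < d_def else "defensive"

rng = np.random.default_rng(0)
# Toy data: offensive play = fast paddle near the puck; defensive = slow and far.
ref_off = motion_pattern_histogram(rng.uniform(1.5, 3.0, 500), rng.uniform(0.0, 0.4, 500))
ref_def = motion_pattern_histogram(rng.uniform(0.0, 0.8, 500), rng.uniform(0.5, 1.0, 500))
observed = motion_pattern_histogram(rng.uniform(1.2, 2.8, 200), rng.uniform(0.1, 0.5, 200))
print(classify_style(observed, ref_off, ref_def))   # -> "offensive"
```

Fixing the bin ranges keeps histograms built from different windows comparable, which is what lets the observed pattern be matched against the stored references.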

If the robot can't adapt to its opponent's style, the game can get boring. Say you're playing offensively and the robot defensively; the game quickly becomes repetitive: you attack, the robot defends, you attack, the robot defends, and so forth. Conversely, if you play defensively and the robot offensively, the same problem arises. "To avoid this, the robot should be offensive when the opponent is offensive and should be defensive when the opponent is defensive," the roboticists write in a research paper.
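In code, that mirroring rule reduces to a one-line mapping from the detected style to the robot's long-term stance (a sketch, reusing the string labels from the previous snippet):

```python
def robot_strategy(opponent_style: str) -> str:
    """Mirror the detected style, per the rule quoted above:
    offensive against offensive, defensive against defensive."""
    return "offensive" if opponent_style == "offensive" else "defensive"
```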

So in a sense, by detecting and matching a given playing style, the robot isn't just physically playing the game against you: it's adding a psychological component to the match. A series of experiments showed that the robot successfully detected players' behaviors and forced them to change their strategies. Players reported that this made the game more exciting, even though they were facing a robot that was likely going to defeat them.

The Chiba researchers—Professor Namiki, Sakyo Matsushita, Takahiro Ozeki, and Kenzo Nonami—presented their paper, "Hierarchical Processing Architecture for an Air-Hockey Robot System," at the IEEE International Conference on Robotics and Automation (ICRA) last month.

[ Namiki Laboratory ]
