As much trouble as humanoid robots are to build and control, we keep on trying to make it work because it's easiest to operate in a human environment if you can do the same things that a human can. There are some good arguments for why it makes a lot more sense to modify our environments to better suit robots, but the fact is, if you can pull it off, humanoid is still the best way to go.

Even for flying airplanes.

If this sounds crazy to you, it sounded crazy to us too, until we saw it basically working at an IROS presentation.

The little robot in the picture above is a PIBOT, a small, very low-cost humanoid (actually a Bioloid Premium from Robotis). It's been slightly modified so that it can work the controls of a scaled-down, simulated aircraft cockpit. PIBOT is able to identify and use all of the buttons and switches and other controls that you'd find in the cockpit of a normal light aircraft designed for humans:

Most of the inputs come from the simulator itself (roll, pitch, yaw, airspeed, GPS location), although the robot does use vision for some things, like identifying the runway using edge detection. And this is all it takes, according to the researchers, who state that "PIBOT can satisfy the various requirements specified in the flying handbook by the Federal Aviation Administration."
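Identifying a runway by edge detection comes down to finding the sharp brightness transitions at the runway's borders. Here's a minimal, NumPy-only sketch of that core idea; the actual system presumably runs a full edge detector on camera frames, so the synthetic image, the threshold, and the function below are all illustrative assumptions, not PIBOT's vision code.

```python
import numpy as np

def detect_edges(img, threshold=0.3):
    """Flag strong horizontal intensity gradients, column by column.

    A bright runway against darker surroundings produces sharp gradient
    peaks at its left and right borders. This uses a simple central
    difference; a real detector (e.g. Canny) adds smoothing, non-maximum
    suppression, and hysteresis on top of the same principle.
    """
    grad = np.zeros_like(img, dtype=float)
    grad[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0  # central difference
    return np.abs(grad) > threshold

# Synthetic top-down view: dark ground (0.1) with a bright runway strip (0.9)
frame = np.full((60, 80), 0.1)
frame[:, 30:50] = 0.9  # runway occupies columns 30..49

edges = detect_edges(frame)
edge_cols = sorted(set(np.where(edges.any(axis=0))[0]))
print(edge_cols)  # columns flanking the runway's two borders
```

From the detected border columns, a system could fit lines to estimate the runway's position and orientation relative to the aircraft.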

You can see PIBOT rocking a simulation in the video below, and for you pilot-types, appended is a comprehensive description of what the robot is doing. Remember, this is all autonomous.

The airplane starts out parked on a runway. The robot prepares for flight by 1) pulling the throttle to its zero point, 2) turning on the battery, 3) the altimeter, 4) the avionics, and 5) the fuel pump, and 6) starting the engine, pressing the switches on the panel as it goes. PIBOT then grabs the two control sticks for flight control and releases the brakes. Once the airplane's heading aligns with the runway to within 5 degrees and its speed exceeds taxiing speed, the second sequence begins and PIBOT increases power. The airplane takes off at the proper speed, and PIBOT controls both pitch and speed so that the airplane's vertical velocity reaches the initial rate of climb. At a set distance from the departure point, PIBOT starts Sequence 3: the airplane turns back in the opposite direction while holding its speed and altitude at their reference values. In this sequence, PIBOT performs straight-and-level flight, turns, and climbs. To set up the landing, PIBOT turns the airplane while decreasing speed once it has established enough distance from the expected touchdown point; this is Sequence 4, the base leg. Final approach is Sequence 5: PIBOT aligns the aircraft with the runway and gradually pitches down at reduced speed. At around 20 feet above the ground, it flares and gently touches down.
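The five sequences described above read naturally as a state machine, where each phase hands off to the next once its trigger condition is met. The sketch below is a hypothetical reconstruction: the 5-degree alignment and roughly-20-foot flare thresholds come from the description, but the taxi speed, the distance values, and the function itself are illustrative assumptions, not PIBOT's actual control code.

```python
# Assumed threshold values; only the 5-degree alignment and 20-foot
# flare height come from the researchers' description.
TAXI_SPEED_KT = 15
TURN_BACK_NM = 2.0
BASE_LEG_NM = 3.0
FLARE_HEIGHT_FT = 20

def next_sequence(seq, heading_error_deg, speed_kt, altitude_ft,
                  dist_from_departure_nm, dist_to_landing_nm):
    """Return the next flight sequence if this one's trigger is met."""
    if seq == "preflight":
        # Sequence 2 begins when heading aligns with the runway to within
        # 5 degrees and speed exceeds taxiing speed
        if abs(heading_error_deg) < 5 and speed_kt > TAXI_SPEED_KT:
            return "takeoff"
    elif seq == "takeoff":
        # Sequence 3: at a set distance from departure, turn back
        if dist_from_departure_nm > TURN_BACK_NM:
            return "cruise"
    elif seq == "cruise":
        # Sequence 4 (base leg): far enough from the expected touchdown
        # point to turn and slow down
        if dist_to_landing_nm > BASE_LEG_NM:
            return "base_leg"
    elif seq == "base_leg":
        # Sequence 5 (final approach): aligned with the runway again
        if abs(heading_error_deg) < 5:
            return "final_approach"
    elif seq == "final_approach":
        # Flare at roughly 20 feet and touch down
        if altitude_ft < FLARE_HEIGHT_FT:
            return "landed"
    return seq
```

Within each sequence, the robot would still run continuous control loops (pitch, speed, heading); this sketch only captures the phase transitions.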

Your question right now is probably the same as ours was: "When are you going to get it out of the simulator and flying a real plane?" That work will be presented at a forthcoming conference, but they're doing it already, and you can see a little teaser in the picture at the top of this article: the MacBook on the right is playing a video showing a little humanoid robot at the controls of a small-scale model biplane, flying it fully autonomously with its grippers on the controls.

The robot wasn't doing the best job of keeping the model plane stable, but being a robot, it doesn't get airsick and puke all over the instrument panel like I would. It can do takeoffs, follow waypoints, maneuver, and make a final approach to landing, although at this point, it still needs some human help for the final touchdown. By the time the researchers publish, however, the 'bot may have nailed that too: there are still some perception challenges to solve, but they're getting very, very close.

A Robot-Machine Interface for Full-functionality Automation using a Humanoid, by Heejin Jeong, David Hyunchul Shim and Sungwook Cho from KAIST in South Korea, was presented yesterday at IROS 2014 in Chicago.


How the U.S. Army Is Turning Robots Into Team Players

Engineers battle the limits of deep learning for battlefield bots


RoMan, the Army Research Laboratory's robotic manipulator, considers the best way to grasp and move a tree branch at the Adelphi Laboratory Center, in Maryland.

Evan Ackerman

“I should probably not be standing this close," I think to myself, as the robot slowly approaches a large tree branch on the floor in front of me. It's not the size of the branch that makes me nervous—it's that the robot is operating autonomously, and that while I know what it's supposed to do, I'm not entirely sure what it will do. If everything works the way the roboticists at the U.S. Army Research Laboratory (ARL) in Adelphi, Md., expect, the robot will identify the branch, grasp it, and drag it out of the way. These folks know what they're doing, but I've spent enough time around robots that I take a small step backwards anyway.

This article is part of our special report on AI, “The Great AI Reckoning.”

The robot, named RoMan, for Robotic Manipulator, is about the size of a large lawn mower, with a tracked base that helps it handle most kinds of terrain. At the front, it has a squat torso equipped with cameras and depth sensors, as well as a pair of arms that were harvested from a prototype disaster-response robot originally developed at NASA's Jet Propulsion Laboratory for a DARPA robotics competition. RoMan's job today is roadway clearing, a multistep task that ARL wants the robot to complete as autonomously as possible. Instead of instructing the robot to grasp specific objects in specific ways and move them to specific places, the operators tell RoMan to "go clear a path." It's then up to the robot to make all the decisions necessary to achieve that objective.
