Marathon runners require long hours of training, plenty of water, and an iron will. In the world's first bipedal robot marathon, the key ingredients seemed to be line-tracking algorithms, batteries, and lots of compressed air coolant.
The 42.195-kilometer race (the length of a real marathon) took place in Osaka, and a little humanoid robot called Robovie-PC emerged as the champion. It crossed the finish line on Saturday after a grueling 54 hours, 57 minutes, and 50.26 seconds -- more than two days of non-stop running on the track. Just 1.73 seconds later, another contestant, Robovie-PC Lite, completed the race. The similar names are no coincidence: both robots were entries from Vstone, the robotics company that organized the event with the city of Osaka.
It was an exciting ending. Watch:
What makes a winning robot? Team Vstone used line-tracking to navigate the track, taking advantage of the rule allowing autonomous navigation. The other four teams, including two student teams from Osaka Institute of Technology, patiently controlled their bots using game controller-like remotes. A dedicated human presence was also necessary to support the "runners": When batteries ran low, teams rushed in to swap them for fresh ones. Periodically, teams also needed to spray overheating motors with cans of cool, compressed air. Falling, however, was not a problem -- all robots had to be designed with an automatic "getting up" feature.
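Vstone hasn't published its navigation code, so the following is only a generic sketch of how line tracking typically works: a downward-facing array of reflectance sensors estimates the line's offset from the robot's centerline, and a proportional controller steers to re-center it. The function name, sensor layout, and gain are all illustrative.

```python
def steering_command(sensor_readings, k_p=0.8):
    """Return a steering correction from an array of reflectance readings.

    sensor_readings: one float per sensor, higher = darker (on the line).
    The weighted average of the sensor positions estimates the line's
    offset from the robot's centerline; we steer to drive it to zero.
    """
    total = sum(sensor_readings)
    if total == 0:
        return 0.0  # line lost; a real robot would start a search pattern
    n = len(sensor_readings)
    positions = [i - (n - 1) / 2 for i in range(n)]  # e.g. -2..2 for 5 sensors
    offset = sum(p * r for p, r in zip(positions, sensor_readings)) / total
    return -k_p * offset  # negative output = steer toward the right, by convention

# Line slightly to the robot's right -> negative (rightward) correction
correction = steering_command([0.0, 0.1, 0.5, 0.9, 0.2])
```

A real racer would layer speed control and line-loss recovery on top of this, but the proportional core is what keeps a robot on a painted track for 42 kilometers.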
At an average speed of 0.7 km/h, the robots were about as exciting as watching a tortoise cross the Sahara. Still, endurance races like this highlight the requirements for long-running, autonomous robots: machines without a dedicated pit crew need autonomous navigation, automatic recharging, and low-maintenance actuators. The bipedal aspect matters, too; stairs and raised sidewalks are constant reminders that our world is designed for two-legged humans.
The Japanese government is aiming for robots to take care of their nation's aging population. But if we want robots to take care of us, instead of the other way around, we'll first need to see a robot marathon where no human intervention is required. So what we saw here were the first steps -- literally.
Image: Vstone Co.
Angelica Lim is a graduate student at the Okuno and Ogata Speech Media Processing Group at Kyoto University, Japan.
Need to destroy something? Get an F16. No, not that F16. The F16 demolition robot from Stanley Hydraulic Tools. Unveiled this month, this electrically driven hydraulic monster comes with five different attachments: shear, breaker, grapple, drop hammer, and our favorite, a concrete-cracking claw. Sure, it's more of a remote-controlled shrunken excavator than a robot. But who cares? It can tear down walls and cut steel like butter. Can we bring this guy to RoboGames?
GPS is generally the standard to which all other localization technologies are compared, and in most outdoor environments, it's hard to beat for accuracy, precision, and reliability. It's funny, then, that the times you need GPS the most (in places like downtown New York or San Francisco) usually end up being the times that GPS utterly fails due to tall buildings blocking out the sky.
Robots have this same problem, so researchers have been trying to find other ways that a robot can localize itself where GPS is intermittent. A common strategy is to use wheel odometers or inertial measurement units to "guess" where the robot has gotten to since its last external position fix, but those errors accumulate, so landmarks are still needed to correct the estimate. So let's see, what are some things that are easy to find in urban environments and don't go anywhere? You probably weren't thinking "manhole covers," but yeah, it's manhole covers.
Manhole covers are a promising choice because they have well-defined locations, and because they're fairly large and made of metal, they're easy for a robot to spot even if it's dark or rainy or snowy or whatever. With sensitive enough sensors, a robot could detect all the dents and wear that make each cover unique and fix its position to within inches. It sounds great, and it works in tests when paired with a database of pre-scanned manhole covers. The catch is that manhole covers tend to sit in the middle of the street, which suggests that a system like this would be best for autonomous cars, as opposed to other robots that might not do as well running into traffic to determine their position.
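To make the idea concrete, here's a hypothetical sketch -- not the researchers' system; the database, coordinates, and blending weight are all invented -- of how spotting a surveyed manhole cover could correct accumulated dead-reckoning drift:

```python
# Surveyed (x, y) positions of known manhole covers, in meters (made up)
COVER_DB = {
    "cover_17": (120.0, 45.0),
    "cover_18": (160.0, 45.0),
}

def dead_reckon(pose, dx, dy):
    """Integrate odometry; small errors accumulate with every step."""
    x, y = pose
    return (x + dx, y + dy)

def correct_with_landmark(pose, cover_id, weight=0.9):
    """Blend the drifting estimate toward the cover's surveyed position.

    A weight near 1.0 trusts the landmark heavily, reflecting the
    inch-level fixes described above.
    """
    lx, ly = COVER_DB[cover_id]
    x, y = pose
    return (x + weight * (lx - x), y + weight * (ly - y))

pose = (0.0, 0.0)
for step in [(40.2, 15.1), (40.5, 14.7), (40.1, 15.4)]:  # noisy odometry
    pose = dead_reckon(pose, *step)
pose = correct_with_landmark(pose, "cover_17")  # a known cover is spotted
```

A production system would fuse the landmark into a proper filter (Kalman or particle), but the principle is the same: the cover's known location pulls the estimate back toward ground truth.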
And, yes, that's a picture of a manhole robot. That shoots manholes. Thank you, Mighty Morphin Power Rangers.
PR2 robot with Automaton t-shirt. Robots love Automaton. Automaton loves robots.
Good news, everyone! We're thrilled to report that Automaton is a finalist in this year's National Magazine Awards for Digital Media, aka the Digital Ellies. The Ellies are the magazine industry's answer to the Oscars -- but with more scruffy people with unstylish hair.
Automaton is in good company, with Salon, Sports Illustrated, Sunset, and Tablet as the other finalists in the blogging category. This means that robotics as a subject is competing with politics, sports, and food, which is what the other blogs cover. Thankfully, SI doesn't have a blog on swimsuit models!
Last year, our Robots for Real show was a finalist in the podcast category, so this marks the second year in a row that IEEE Spectrum has been recognized with a Digital Ellie nomination. The 2011 winners will be announced at the Digital Ellies ceremony in New York City on March 16. Fingers, human and robotic, crossed!
I just want to say thanks to you, our readers, and to all roboticists and their robots for keeping us inspired about the possibilities of science and engineering.
Need someone to zip up your dress? In 1964, the Hughes Mobot was there to help.
Hughes Aircraft's Mobot, aka Mobot the Magnificent Monster (seriously), was originally designed for the Atomic Energy Division of the Phillips Petroleum Company in the late 1950s and early 1960s as a remote manipulator.
A 150-meter [500-foot] cable led back to a control console, where a human operator could safely direct remote cleanup operations of radioactive material and other nasty stuff. Mobot had two manipulator arms along with two cameras, which are the things that look vaguely like water-cooled machine guns but aren't. Sad.
Apparently, this degree of usefulness wasn't good enough for Life magazine, which decided that Mobot (and its delicate touch) would be better off helping women put on makeup and get dressed. Or is it undressed? Feel free to use your imagination on that one.
Tomorrow is a huge day for robotkind. If all goes as planned, at 4:50 p.m. EST, the space shuttle Discovery will blast off from Cape Canaveral, Florida, carrying aboard a crew of astronauts and also NASA's Robonaut 2, which will become the first humanoid robot in space.
The shuttle's destination is the International Space Station (ISS), where Robonaut 2 will become a permanent resident and work alongside humans as a robotic helper. Astronauts will mount the robot on a fixed pedestal inside one of the ISS labs and use it to perform tasks like flipping switches and holding tools.
So no, Robonaut won't be fixing meals for the human crew. The main goal is to find out how manipulation robots behave in space -- and also give crew members a second pair of hands. NASA hopes the experience will allow it to upgrade the robot in the future, so it would be able to support astronauts in more complex tasks, including repairs and scientific missions outside the ISS.
The robot can perform tasks autonomously or under remote control, or a mix of both, Nic Radford, the Robonaut deputy project manager, told us. Astronauts on the station will operate the robot using a laptop, he said, though it can also be "joysticked" and directly controlled from Earth, with a few seconds of delay.
Sending Robonaut to space is a great feat for NASA, but it raises the question: Is this another step in using robots to replace humans in space exploration? In my opinion, using teleoperated and semi-autonomous robots makes a lot of sense. Robotic explorers have already demonstrated that unmanned missions offer formidable rewards, with immensely smaller costs and risks than manned ones. Of course, NASA enjoys cheering for its robots, but it's quick to point out that robots are not a replacement for humans in space, but rather "companions that can carry out key supporting roles."
That might be the case for now, as robots still can't match human manipulation and other capabilities. But robots are catching up fast. One of Robonaut 2's key features is its dexterous, humanlike arms and hands. Each arm is about 80 cm [31 in] long and can hold 9 kg [20 lb] in Earth's gravity. Each hand has 12 degrees of freedom: 4 DOFs in the thumb, 3 DOFs in each of the index and middle fingers, and 1 DOF in each of the remaining fingers. The fingers are articulated and driven by tendons, just like human hands, and Robonaut is able to use the same tools that human astronauts use.
At the IEEE Humanoids conference last December, I spoke with GM researcher Muhammad E. Abdallah, who explained how Robonaut's hands work:
Robonaut's hands work a bit differently from other humanlike robot hands. Existing tendon-driven robotic fingers typically control their joints using tension controllers on each tendon; in other words, desired joint torques are translated into desired tendon tensions. The problem with this approach is the coupling between tendon and joint displacements, which causes disturbances in the movement of the fingers. NASA and GM engineers solved the problem by implementing a joint-based torque control method that decouples the tendon effects and is faster and more reliable than traditional methods.
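As a rough illustration of the joint-based view -- this is a toy example of my own, not the NASA/GM controller -- consider a single joint driven by an antagonistic pair of tendons. The controller commands a joint torque and then derives tendon tensions that realize it, while holding both tensions above a floor so neither tendon ever goes slack. The moment arms and tension floor are made-up numbers.

```python
R_FLEX = R_EXT = 0.01  # tendon moment arms in meters (illustrative values)
F_MIN = 5.0            # minimum tension in newtons, to keep tendons taut

def tensions_for_torque(tau):
    """Joint-based control: from a desired joint torque tau, derive the
    two tendon tensions satisfying R_FLEX*f_flex - R_EXT*f_ext == tau,
    with both tensions held at or above F_MIN."""
    f_flex = tau / (2 * R_FLEX)
    f_ext = -tau / (2 * R_EXT)
    # Shifting both tensions equally adds internal force but zero net torque
    shift = max(F_MIN - f_flex, F_MIN - f_ext, 0.0)
    return f_flex + shift, f_ext + shift

def torque_from_tensions(f_flex, f_ext):
    """The physical joint torque produced by the antagonistic pair."""
    return R_FLEX * f_flex - R_EXT * f_ext

f_flex, f_ext = tensions_for_torque(0.2)  # request 0.2 N·m of flexion
```

The real hand generalizes this to many coupled joints with a full tendon-routing matrix, but the key move is the same: command torques in joint space and let the controller handle the tendon bookkeeping.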
The ability to control torque is important for Robonaut, and other humanoid robots, for that matter, because its hands will interact with unexpected objects or items slightly out of position. Industrial robots, by contrast, interact with known objects in well-defined spaces. Robonaut's hands mimic human hands in their ability to adapt to variation -- a capability that NASA demonstrated by having different people shake hands with the robot.
But the robot is more than just arms and hands, of course. Robonaut 2 weighs in at 150 kg [330 lb], and if you're wondering, it has no legs -- it will remain stationary inside the ISS, although NASA researchers have been experimenting with robotic legs and wheels. Built primarily of aluminum with some steel parts, it carries over 350 sensors and has a total of 42 degrees of freedom.
Behind its helmet visor are four visible light cameras: two provide stereo vision for the robot and remote operators, and two work as auxiliary cameras. A fifth, infrared camera is housed in the mouth area for depth perception. Because the head is full of cameras, the robot's computer system -- 38 PowerPC processors -- is housed inside the torso. Or as NASA puts it, Robonaut 2 "thinks with its stomach -- literally." See this cool infographic that SPACE.com prepared:
In a second phase of the Robonaut project, at a date still to be decided, NASA will make the unit mobile using a leg-type system, giving it the ability to move around inside the ISS. The third phase will feature a robot that performs missions outside the space station. Robonaut is also part of Project M, which aims to put a humanoid robot on the moon in 1,000 days -- beating Japan's proposed goal of 2015.
For now, all eyes will be locked on the space shuttle at Cape Canaveral. It's been a long wait for this launch. And once Robonaut arrives at the ISS, it might take several months until astronauts unpack it and bring it to life. Still, I find the idea of a robot in space -- a staple of science fiction -- truly exciting. What do you think? Is this the beginning of a new era in robotic space exploration?
PS: Watch the "movie trailer" NASA prepared about the "new recruit."
Images: NASA; videos: IEEE Spectrum and NASA; infographic: SPACE.com
Bio-inspired robots are an awesome idea, since they take features that evolution has been refining for however many bajillions of years and put them into practice, giving robots new capabilities. Northwestern University has created a robot called GhostBot modeled on the black ghost knifefish, which uses a single horizontal fin to propel itself forward, backward, and even straight up:
Pretty cool, right? Here's how it works:
Observations revealed that the fish uses a single traveling wave along the fin during horizontal motion (forward or backward, depending on the direction of the wave), but while moving vertically it uses two waves. One of these moves from head to tail, and the other moves from tail to head. The two waves collide and stop at the center of the fin.
The team then created a computer simulation that showed that when these “inward counterpropagating waves” are generated by the fin, horizontal thrust is canceled and the fluid motion generated by the two waves is funneled into a downward jet from the center of the fin, pushing the body up. The flow structure looks like a mushroom cloud with an inverted jet.
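The cancellation has a simple textbook analogue (the real fin's two waves occupy opposite halves and meet at the center, so this is an idealization, not the team's fluid simulation): two counterpropagating sinusoids sum to a standing wave, sin(kx - wt) + sin(kx + wt) = 2 sin(kx) cos(wt), which has no net horizontal travel. A few lines of code confirm the identity numerically:

```python
import math

K, W = 2.0, 3.0  # arbitrary wavenumber and angular frequency

def two_waves(x, t):
    """Head-to-tail wave plus tail-to-head wave along the fin."""
    return math.sin(K * x - W * t) + math.sin(K * x + W * t)

def standing_wave(x, t):
    """Their sum: a standing wave with no net horizontal propagation."""
    return 2 * math.sin(K * x) * math.cos(W * t)

# The two expressions agree everywhere, so the combined pattern doesn't
# travel horizontally; the momentum the fin imparts to the fluid has to
# go somewhere, and in the fish's case it exits as a downward jet.
samples = [(x / 10, t / 10) for x in range(10) for t in range(10)]
max_gap = max(abs(two_waves(x, t) - standing_wave(x, t)) for x, t in samples)
```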
To get a sense of the potential of this kind of mobility system, check out a video of the actual fish:
That's a pretty impressive adaptation, if you ask me, and effectively puts conventional thrusters to shame.
"Stochastic" is another way of saying random, and stochastic robots are robots that harness the powers of randomness to construct themselves. It's a fairly simple idea that can result in fairly complex objects: you've got some number of different modules, which can come together to form a robot. Instead of putting the modules together and building the robot directly, you instead just toss all of the modules and shake it really really hard. As the modules randomly run into each other, each is programed to latch on if it happens to bump into a module that it's supposed to be next to in the final design. And if you do this for long enough, eventually you'll end up with a fully assembled robot. Or that's the basic idea, anyway.
The following video demonstrates an interesting application of this concept. Along with lots of assembling modules come a few disassembling modules, whose job is to break up the assembled robots. This creates a system that's sort of a robotic chemical reaction, and by adjusting how long the disassembling bots take to recharge themselves, the overall number of functional robots can be controlled:
One application for these types of robots might be in the medical field, where building a robot inside someone's body could prove to be much more effective than building one outside. All you have to do is inject a bunch of little modules into the bloodstream, they'd randomly whirl about and run into each other and grab on where appropriate, and in a little bit you'd have your robot. You could even program the modules not to assemble themselves until they reached a certain place in the body, and while such precision might take a while (or a whole bunch of injections), the potential is there for extremely precise treatments and repairs.
Next time you need heart surgery, this little snakebot is going to make himself right at home deep inside your chest via a small hole in your solar plexus. It's CardioARM, and don't panic, he's here to help. Developed by CMU's Howie Choset, CardioARM has 102 joints (plus a camera for a head) and can be directed to slither around your vital organs with the utmost precision, making it unnecessary to 'crack open your chest,' which is apparently what they normally do when your ticker needs an overhaul.
Last February, CardioARM was successfully tested on a human for the first time, performing a diagnostic heart mapping procedure, which sounds like it was probably a pile o' fun for everyone involved. Dr. Choset has bigger plans for his snakebots, though:
"He hopes to test the device in other surgeries, such as ablation—which involves burning away a small amount of heart muscle to correct an abnormal beat."
Burning? Burning, you say? What, with lasers? We're giving these flesh-burrowing robot snakes lasers now? What else?!
“We’re hoping to use a remote-controlled robot to go through small caves in Egypt,” [Choset] says, “and find remains of ancient Egyptian tombs.”
Snakebots. Lasers. Ancient Egyptian tombs. Wow, I smell a blockbuster...
"After testing iPhone, iPad, and an eye-tracking device as possible user interfaces to maneuver our research car, named 'MadeInGermany,' we now also use brain power," the researchers behind the project announced. "The 'BrainDriver' application is of course a demonstration and not roadworthy yet, but in the long run human-machine interfaces like this could have huge potential in combination with autonomous driving."
To record brain activity, the researchers use a "neuroheadset" -- an electroencephalography, or EEG, sensor made by San Francisco-based company Emotiv, which designed it for gaming. After a few rounds of "mental training," the driver learns to move virtual objects just by thinking. Each action corresponds to a different pattern of brain activity, and the BrainDriver software associates the patterns with specific commands -- turn left, turn right, accelerate, etc. The researchers then feed these commands to the drive-by-wire system of the vehicle, a modified Volkswagen Passat Variant 3c. Now the driver's thoughts can control the engine, brakes, and steering.
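The BrainDriver code and Emotiv's internals aren't public here, so the following is purely an illustrative sketch of the pattern-to-command idea: classify an EEG feature vector by its nearest trained pattern, then translate the command into actuator set-points. All names, feature vectors, and numbers are made up.

```python
# Hypothetical trained centroids: one EEG feature pattern per command
TRAINED_PATTERNS = {
    "turn_left":  [0.9, 0.1, 0.2],
    "turn_right": [0.1, 0.9, 0.2],
    "accelerate": [0.2, 0.2, 0.9],
    "brake":      [0.1, 0.1, 0.1],
}

def classify(features):
    """Return the command whose trained pattern is nearest (squared Euclidean)."""
    def dist_to(command):
        center = TRAINED_PATTERNS[command]
        return sum((f - c) ** 2 for f, c in zip(features, center))
    return min(TRAINED_PATTERNS, key=dist_to)

def to_drive_by_wire(command):
    """Translate a classified command into (steering, throttle) set-points."""
    steering = {"turn_left": -0.5, "turn_right": 0.5}.get(command, 0.0)
    throttle = {"accelerate": 0.3, "brake": -0.8}.get(command, 0.0)
    return steering, throttle

command = classify([0.85, 0.15, 0.25])  # a reading resembling "turn_left"
setpoints = to_drive_by_wire(command)
```

Real EEG classification uses richer features and classifiers than a nearest-centroid match, but the pipeline shape -- train per-command patterns, classify, map to drive-by-wire commands -- is the part the article describes.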
To road test their brain-controlled car, the Germans headed out to the former airport in Berlin Tempelhof. The video below shows a driver thought-controlling the car, Yoda-style. "Don't try this at home," the narration says, only half-jokingly.
The researchers caution that the BrainDriver application is still a demonstration and is not ready for the road. But they say that future human-machine interfaces like this have huge potential to improve driving, especially in combination with autonomous vehicles. As an example, they mention an autonomous cab ride, where the passenger could decide, only by thinking, which route to take when more than one possibility exist.
This type of non-invasive brain interface could also allow disabled and paralyzed people to gain more mobility in the future, similarly to what is already happening in applications such as robotic exoskeletons and advanced prosthetics.