On Monday, U.S. President Barack Obama opened the first-ever White House Science Fair with the following remarks:
“One of the great joys of being President is getting to meet young people like all of you -- and some of the folks in the other room who I just had a chance to see some of their exhibits and the work that they were doing. It’s inspiring -- and I never miss a chance to see cool robots when I get a chance.”
Wow, me neither! Also introduced at the event was a new DARPA initiative to give resources to students to help them build those aforementioned robots. BTW, I’m still waiting to hear back on that anti-robot takeover czar position…
We knew that Innvo Labs was working on some upgrades to Pleo, but all has now been revealed over at Bob The Pleo forums, where Innvo Labs CEO Derek Dotson discussed the new “Pleo Reborn.” If you’re a fan of Pleo, you pretty much have to read the entire interview, but I’ve condensed most of the new features if you’d rather just skim:
-Pleos are now male or female (blue or pink), and will react to each other accordingly: females make gentle noises at each other, males step back and shout at each other, and a male and female will make noises and lean against each other
-Pleo skin coloring will be randomized slightly, so that two Pleos produced at the same time will look distinctive. There will also be 10 different eye colors.
-Pleo skin durability improved, should now last 5x longer
-New lithium polymer battery more than doubles battery life to 120–150 minutes; LED battery indicator added underneath Pleo
-Pleo will have a 'seed' personality from the factory. Some will learn faster than others, and some will tend to be happier (or more mopey).
-Pleo now knows what time it is and will alter its behaviors accordingly; for example, it will want to be fed in the afternoon and act sleepy in the evening
-Pleo can 'smell' RFID tags
-Pleo now has voice recognition, and you can name it, and it will respond to that name… As long as it’s you saying it, not anyone else
-By combining RFID tags that instruct Pleo to perform specific behaviors with voice recognition, it’s now possible to train Pleo to respond to different commands, such as “bow” or “come to me”
-Many more touch sensors have been added, along with corresponding behaviors. For example, if you pet Pleo’s side, it will lean into you.
-Pleo now has a G sensor that lets it detect acceleration and impacts as well as touch
-Pleo’s nose cam now allows for target tracking
-Motor speed and response have been improved, especially in the tail, head, and neck
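The RFID-plus-voice training mentioned in the list above suggests a simple lookup layer: an RFID tag teaches Pleo a behavior, and voice recognition later triggers it. Here's a minimal sketch of that general idea in Python — all class names, tag IDs, and behavior names are hypothetical, since Innvo Labs hasn't published how this actually works:

```python
# Hypothetical sketch of RFID-taught voice commands.
# Not Innvo Labs' actual code; all names are invented for illustration.

class CommandTrainer:
    def __init__(self):
        self.tag_to_behavior = {}   # RFID tag ID -> behavior name
        self.word_to_behavior = {}  # spoken word -> behavior name

    def teach_tag(self, tag_id, behavior):
        """An RFID 'training' tag associates a tag ID with a behavior."""
        self.tag_to_behavior[tag_id] = behavior

    def associate_word(self, word, tag_id):
        """While the tag is sensed, a spoken word is bound to its behavior."""
        self.word_to_behavior[word] = self.tag_to_behavior[tag_id]

    def on_speech(self, word):
        """Later, hearing the word alone triggers the learned behavior."""
        return self.word_to_behavior.get(word, "idle")

trainer = CommandTrainer()
trainer.teach_tag(0xA1, "bow")          # show Pleo the "bow" tag
trainer.associate_word("bow", 0xA1)     # say "bow" while the tag is present
print(trainer.on_speech("bow"))         # -> bow
```

Unrecognized words fall back to an "idle" behavior here, which is a design choice of the sketch, not a documented Pleo feature.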
More awesome stuff, plus video, after the jump.
There are a few more things that I thought were so interesting that I had to quote part of the interview:
Pleo needs to be fed now. If you don’t kind of take care of your Pleo, eventually it will get sick. It will cough, get lethargic, and all that. It needs its food to keep it healthy. And if it falls, here’s a big thing, Pleo actually feels pain now. So, if you abuse your Pleo or drop it, remember he knows if he’s been dropped or violated, he feels pain. It takes him a while to recover. He’ll limp. If you touch the area, it’ll be sore and he’ll cry. There are medicines that come with Pleo. You give him these medicines and it helps with the healing process. So, love and affection, a little healing foods, will get Pleo back up. In the labs, we’re striving to make this a pet. Along with a pet, there are certain obligations. So, if you’ve got children who want a pet, this is a good training device. If you can keep Pleo healthy, you can more likely keep a real pet healthy.
Now, here’s one of the quirkiest things we’ve done. Pleo can sense temperature. This is kind of a cool feature. So, let’s say you’re traveling, and you’ve got Pleo in the car and it drops below a certain temperature, and I’m not gonna give the temperature now because it’s adjustable, Pleo starts to shiver and shake like he’s cold. And if you leave him too cold too long, he’ll catch a cold. He also senses heat. If it’s too hot for Pleo, he’ll start panting. Along with what you get right off the get-go is a little chunk of ice. You put it under his chin. If he’s hot, he’ll open his mouth. You put it in, he’ll chew on this ice. It cools him down. Like I said, in our strive to make Pleo a more realistic pet, he has to interact better with his environment.
As far as availability goes, it sounds like there’s a bit of a supply problem, but the first few units should be available to members of the PleoWorld email list later this month. Price? Well, the unit in the pics was purchased at a trade show for about $500, so that’s likely to be the ballpark. Innvo has promised a giant, Jurassic-y booth at CES next year, and we’ll be bringing that to you in early January.
Japanese robotics company HiBot has unveiled a nimble snake bot capable of moving inside air ducts and other narrow places where people can't, or don't want to, go.
The ACM-R4H robot, designed for remote inspection and surveillance in confined environments, uses small wheels to move but it can slither and undulate and even raise its head like a cobra.
The new robot, which is half a meter long and weighs in at 4.5 kilograms, carries a camera and LEDs on its head for image acquisition and can be fitted with other end-effectors such as mechanical grippers or thermo/infrared vision systems.
Despite its seemingly complex motion capabilities, "the control of the robot is quite simple and doesn't require too much training," says robotics engineer and HiBot cofounder Michele Guarnieri.
"All [degrees of freedom] can be easily controlled by a game-style joystick, including the motion of recovering from an upside-down position."
The company says applications include the inspection of ducts, pipes, and ceilings, as well as remote surveillance and security. Indeed, I bet the CIA and other spy agencies could find some uses for this bot!
Watch the ACM-R4H in action:
HiBot is a spin-off of Tokyo Tech's Hirose-Fukushima Lab, which has brought to life some of the world's most amazing mechanical snakes. The company is transforming some of the research creatures into commercial-grade systems.
The ACM-R4H is smaller than other HiBot snake models, so it can easily enter and zigzag through tight spaces. The head and tail segments can move up and down and the middle joint can turn left and right.
It can negotiate 90-degree corners inside an air duct, for instance, or move inside pipes less than 14 centimeters in diameter. It can also overcome obstacles in its path.
The current version relies on a tether connected to a control unit, which provides communication and power (the control box has a rechargeable battery that lasts for over 3 hours).
The user interface shows images from the camera and a set of data from the robot, including power consumption, temperature, and position of each joint. It also shows a 3D image of the robot's current position that the operator can use for assisting with navigation.
Another tool to help with controlling and planning missions for the robot is a 3D simulator, called V-REP, that HiBot offers with its robots or as a stand-alone program:
HiBot, which also develops power line inspection robots, says some customers using the robot -- and most won't disclose what they're using it for -- had no issues with the tether. "But we can change the robot architecture to have wireless communication," Guarnieri says.
And though the robot is resistant to water splashes, it can be made completely waterproof, he adds. You never know what people will use it for...
Below, some more snake bot videos, just because it's so cool to watch these lifelike machines. The first video shows the ACM-R3H, which is a long wheeled machine -- watch the entertaining demonstration on a Japanese TV show!
The other video shows the ACM-R5H, capable of slithering on the ground and also swimming. Yes, this snake bot swims just like the real thing.
In the year 3000, robots are an integral part of society. Futurama's anti-hero is a robot called Bender, whom Wikipedia describes as a "foul-mouthed, heavy-drinking, cigar-smoking, kleptomaniacal, misanthropic, egocentric, ill-tempered robot." Other robots include Donbot, a criminal robot heading the robot mafia, and Calculon, a hopelessly self-absorbed robot heading the robot supremacy society. There's even a "Robot Santa," who, due to a programming error, judges everyone to be naughty and goes on a yearly Christmas rampage across Futurama's universe.
Futurama is foremost a comedy show, and its flawed robots are foremost theatrical characters. But Cohen and colleagues are science buffs (Cohen himself is a Harvard and Berkeley graduate and even worked in the Harvard robotics lab for a while) and take joy and pride in providing the occasional "science relief" -- the "z-ray" on Bender's head shown in the picture to the left is one such example (more on that in our previous interview).
As becomes clear in his Robots Podcast interview, Cohen deeply cares about the way science and technology are portrayed in Futurama. It is a difficult balancing act, but an important one given the wild success of Futurama (now in its fifth season!) and the subtle but enormous influence of science fiction on robotics: I suspect sci-fi has had some influence on the career choice, goals and dreams of most roboticists I know, and it certainly does greatly affect public perception.
I, for one, love the influence and am a huge Futurama fan. Thanks for the interview, David X.!
Images: "Futurama" TM and (C) 2009 Twentieth Century Fox Film Corporation. All Rights Reserved.
Remember 5 years ago when a bunch of robotic cars managed to navigate through the desert all by themselves? And remember 3 years ago when a bunch of robotic cars managed to navigate through a (fake) urban area all by themselves? Well, today it’s the future, and autonomous robotic cars from Google have already logged 140,000 miles on busy, complex city streets and highways with only occasional human intervention, and 1,000 miles without any human control whatsoever.
I’m sure you remember Diego-San, whom we spotted in an issue of Kokoro News back in January. Reactions to these pictures were… Well, let’s just say, reactions were decidedly mixed. And by decidedly mixed, I mean predominantly negative. Diego-San’s creator, Dr. Javier Movellan, has been exploring possible alterations to Diego-San’s face, and has made this concept public:
As Dr. Movellan pointed out in one of his comments on our post, a lot of what’s relevant about designing the appearance of a humanoid robot is simply about trial and error:
“Everybody has strong opinions about why the current version generates such negative reactions: face too large, robot babies are freaky, skin texture is wrong, mixing mechanical body with biological face is scary, giganto-babies are scary … For just about every theory examples can be given that contradict the theories. The truth is nobody really knows. It is a trial and error process.”
With that in mind, Dr. Movellan is looking for some feedback (constructive feedback, please) on what you do and don’t like about this new concept for Diego-San’s face. Personally, I’d say it’s a good start, with the helmet, antenna and exposed electronics all reinforcing the fact that the robot isn’t intending to fool you into thinking it’s real. However, I’d be curious as to what the effect would be if more of the human features were removed. Like, what is strictly necessary for the robot to accomplish its research goals, which may not necessarily involve a substantial amount of expression recognition? Does Diego-San need ears, for example? A nose?
While one route might be to make it less human, the other route would be to make it much more cartoony. So basically, keep all the human features, just make it look intentionally fake… Again, the idea being that you’re reinforcing the fact that the robot isn’t trying to fool you into thinking it’s human.
Anyway, please let Dr. Movellan know what you think by posting a comment. For more background, read through some of the comments on our original post, and Plastic Pals has a very interesting interview with Dr. Movellan here.
The woman in this picture is Amanda Boxtel, who has had a T11/12 spinal injury for 18 years. She’s a paraplegic, but she’s now able to walk with the aid of eLEGS, a robotic exoskeleton system from Berkeley Bionics. You probably remember Berkeley Bionics from their cargo-carrying exoskeleton, HULC, which they’ve since licensed to Lockheed Martin for production for the military. eLEGS is largely based on HULC, except designed for (eventual) home use. The system is relatively light at 45 pounds, and you strap into it by yourself while sitting down. After only a few hours of practice, paraplegics are able to use eLEGS to stand up and walk:
eLEGS is very efficient, and allows for an entire day of walking without needing to be recharged. It’s also extremely quiet, which is very important for a device that is designed to allow you to move around and interact with people in public and social situations.
I made a point of asking how exactly the interface between the user and the system works, and was told that it was proprietary, “but nice try.” In general, however, it appears as though eLEGS senses arm movements through 'smart crutches' (it also looks like there's some kind of sensor attached to each upper arm), and as the user moves one crutch forward, eLEGS moves the opposite leg. However, to some extent eLEGS learns and adapts to each user, so there must be some other stuff going on under the hood.
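The observable behavior described above — advance one crutch, and the opposite leg swings — could be sketched as a tiny decision rule. To be clear, everything here is an assumption for illustration (including the threshold value and function names); Berkeley Bionics' actual controller is proprietary:

```python
# Illustrative-only gait logic: moving a crutch forward triggers a swing
# of the opposite leg. This is a guess at the observable behavior, not
# Berkeley Bionics' proprietary eLEGS controller.

SWING_THRESHOLD = 0.15  # meters of crutch advance (assumed value)

def step_decision(left_crutch_advance, right_crutch_advance):
    """Return which leg to swing based on crutch displacement (meters)."""
    if left_crutch_advance > SWING_THRESHOLD:
        return "swing_right_leg"   # left crutch forward -> right leg steps
    if right_crutch_advance > SWING_THRESHOLD:
        return "swing_left_leg"    # right crutch forward -> left leg steps
    return "stand"                 # no crutch moved far enough: hold posture

print(step_decision(0.20, 0.0))    # -> swing_right_leg
print(step_decision(0.0, 0.18))    # -> swing_left_leg
print(step_decision(0.05, 0.02))   # -> stand
```

The real system reportedly also adapts to each user over time, which a fixed threshold like this obviously doesn't capture.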
eLEGS will be available next July to a select group of rehab centers, but from the beginning, eLEGS was designed for people to take home and use by themselves. By 2013, eLEGS should be available for purchase for something in the low six figures, although the eventual target price is in the neighborhood of $50k, comparable to the price of a top-of-the-line wheelchair.
I especially liked what Amanda says at the end of the video:
“This is not a wave of the future. The eLEGS is right now. I don’t have to be hopeful… This is reality.”
Now other countries are trying to catch up. Below I describe four humanoids that may give the Asian humanoids a run for their money. Or as one editor here put it, these robots may kick your Asimo.
But first, a digression. Every time I encounter a roboticist building legged humanoids, I ask the same question: Why do we need legged humanoids? Wheels appear to be easier and cheaper to implement and provide great maneuverability -- so why legs?
The answer they give me is two-fold: First, they argue that robots with human-shaped bodies are more apt to navigate human environments. So if we want robots to operate in our homes and offices, where there are stairs, uneven surfaces, and shaggy rugs, we need legs. The second part of the answer is that by building walking humanoids we can better understand how humans walk, balance, and move our bodies to do things like pirouette on a toe or perform incredible kicks.
After hearing their answer, my next question to the humanoid builder is: And why is it so hard to create full-body walking humanoids? Researchers have been working on this for over three decades, and it seems we're still taking, well, baby steps. When can we expect a quantum leap in humanoid legged locomotion?
The answer is too complex -- and too interesting -- to summarize here; I will have to write another post on this topic. For now, let's just say that there is a preferred walking control scheme but some researchers are betting on competing approaches, and that although DC motors are the preferred actuators, some groups are seeking alternatives such as compact, powerful linear actuators.
Reem-B was designed to assist humans with everyday tasks, says Davide Faconti, founder of Pal Robotics. The 1.47-meter-high robot, unveiled two years ago, can walk at a relatively slow speed of 1.5 kilometers per hour, but thanks to powerful actuators in its legs and arms, Reem-B "is probably the strongest humanoid in the world," says Faconti, boasting that his robot can carry a 12-kilogram payload—say, a big watermelon. Try that, Asimo.
Photo: PAL Robotics
Watch Reem-B walking. The video is a bit old. I'd love to know if Pal has continued to improve the robot's mobility and see what it can do today.
Justin is by far one of the most impressive humanoids unveiled in recent years. Its lightweight, strangely shaped arms are amazingly dexterous, and the German researchers are consistently pushing the envelope in terms of hardware and software design. At every major robotics conference you can expect to see Justin showing off a new trick.
Photos: Institute of Robotics and Mechatronics/DLR
The thing is, Justin, at this point, is not actually a full-body humanoid. It's currently an upper body with head, torso, and two arms that can be mounted on a fixed base or a four-wheeled mobile platform [see photo above].
The legs use the same powerful yet lightweight motors employed in Justin's arms. The idea was to explore joint torque-based control concepts for biped balancing and walking, according to Christian Ott, the lead researcher working on the legs.
If Justin's lower body turns out to be as nimble as its upper body, this robot will be able to do things we have never seen a robot do before.
We wrote about CHARLI before. CHARLI is the first untethered, autonomous, full-size walking humanoid robot built in the United States, according to Virginia Tech roboticist Dennis Hong. Hong loves creating acronyms for his robots. CHARLI stands for Cognitive Humanoid Autonomous Robot with Learning Intelligence.
There are actually two CHARLI models. One, a smaller version called CHARLI-L, uses motors and a linkage system of pulleys and springs to generate movement. Hong and his team are now building a heavier version, CHARLI-H, to be equipped with custom-made linear actuators. See CHARLI-H's future leg in the photo at right.
Hong is secretive about these new actuators, saying only they will help mimic how human limbs move. (They rely on compliance, or "springiness," at the joints instead of stiff position control like most other humanoid robots use, Hong says.)
I look forward to seeing CHARLI-H play in the humanoid league at RoboCup! Will it kick like Roberto Carlos?
Watch CHARLI-L taking somewhat timid steps, but steps nonetheless!
Finally, we're including here the Iranian robot Surena 2, unveiled a few months ago, just because it was such a surprising development. After the first reports surfaced, some people were skeptical the robot was more than an Asimo-looking plastic shell. But finally, video proved the humanoid was indeed a humanoid.
The 1.45-meter-high robot was developed to help researchers explore aspects of bipedal locomotion, Tehran University professor Aghil Yousefi-Koma told IEEE Spectrum. His team is working on a feedback control system that yields a much more humanlike motion.
Surena might be a slow walker, but it has its tricks: It can bow, stand on one leg, and according to some news reports, dance. Dance-off, Asimo?
A strange creature, half robot, half rat, has been seen scuttling across a laboratory in Japan. It's RatCar, a rat-vehicle experiment that scientists hope could lead to improved mobility for people with disabilities.
"We wanted to develop a brain-machine interface system aiming for future wheelchairs that paralyzed patients can control only with thought," says Osamu Fukayama of the University of Tokyo's Medical Engineering and Life Science Laboratory. "RatCar is a simplified prototype to develop better electrodes, devices, and algorithms for those systems."
In the RatCar, tiny neural electrodes [the dark dots on the tip of the device shown on the photo, right] were implanted in the motor cortex of rat brains, and the animals were suspended under a lightweight, motorized "neuro-robotic platform" with wheels. The objective was to make the vehicle collaborate with the rats to achieve the locomotion they desire.
The rats were trained on the car by towing it around an enclosed area with the motors disengaged. A vision system positioned above tracked the rats by following colored markers on their backs and the vehicle. It fed the positions into a "locomotion estimation model" program that correlated the motion of the animals with readings from the electrodes.
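A "locomotion estimation model" of this general kind is often a linear decoder: during training, regress the tracked velocity onto the electrode firing rates, then use the fitted map to turn new neural activity into a motion command. The following is a minimal sketch of that textbook idea with synthetic data — it is not the authors' actual model, and the electrode count and dimensions are assumptions:

```python
import numpy as np

# Sketch of a linear neural decoder in the spirit of the "locomotion
# estimation model": fit a map from electrode firing rates to observed
# 2D velocity during towed training, then decode new activity.
# Synthetic data only; not the RatCar team's actual model or parameters.
rng = np.random.default_rng(0)

n_samples, n_electrodes = 200, 8
rates = rng.poisson(5.0, size=(n_samples, n_electrodes)).astype(float)
true_w = rng.normal(size=(n_electrodes, 2))   # ground-truth map to (vx, vy)
velocity = rates @ true_w + rng.normal(scale=0.1, size=(n_samples, 2))

# Training phase: least-squares fit of velocity ~ rates @ W
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# "Neuro-robotic mode": new firing rates are decoded into a velocity
# command that would drive the car's wheel motors.
new_rates = rng.poisson(5.0, size=(1, n_electrodes)).astype(float)
vx, vy = (new_rates @ W)[0]
```

In the real experiment the decoding was harder than this sketch suggests: as the article notes, few electrodes picked up usable activity, and the recorded neurons didn't necessarily correlate with the intended movement.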
Next the rats were suspended more tightly to the car so their limbs touched the floor only slightly. The researchers then switched the system into "neuro-robotic mode," with the neural signals used to help drive the car. Six out of eight rats used in the study adapted well to the car.
"The vehicle moved forward synchronously with a rat when it was placed inside," says Fukayama, but he adds that the degree to which the car was being controlled by the rat itself was unclear.
Since the rat would be forcibly moved along with the car, measuring its real intentions became a challenging problem. Another difficulty was that only a small percentage of the electrodes actually recorded neural activity, and the recorded neurons didn't necessarily correlate with target movements.
Fukayama and colleagues Takafumi Suzuki and Kunihiko Mabuchi plan to perform more experiments to address the uncertainties. They want to confirm that the rats can drive the car in different directions and also measure the force the rats exert when trying to move under the car; that way, they could track differences between the car's motion and the rats' apparent intentions. The less force, the better the neural link is working.