More news about Geminoid F, the ultrarealistic android unveiled early this year: the robot got a job.
Geminoid F is working as an actress, taking the stage in a play that opened yesterday in a Tokyo theater.
In the 20-minute play, titled "Sayonara" ("goodbye" in Japanese), the android shares the stage with another actress (of the human kind) named Bryerly Long. Long plays the role of a young woman who is suffering from a fatal illness and whose parents bring her an android to serve as a companion.
A human operator controls the robot from a soundproof chamber behind the stage. A microphone captures the operator's voice and cameras track head and face movements. When the operator speaks or moves, the android follows suit.
The robot is in a permanent sitting posture, so movements are limited to the head, torso, and arms. The performance is "a bit mechanical," as Reuters puts it, but that doesn't seem to be a problem: the android is playing the role of an android after all.
The "Android-Human Theater" project is a collaboration between roboticist Hiroshi Ishiguro, Geminoid F's creator, and Japanese director Oriza Hirata, who wrote and directs the play.
According to Ishiguro, the play explores the question, "What do life and death mean to humans and robots?," and it will "alter the audience's images of robots and humans, and present a compelling fusion of theater arts and science."
Kevin Warwick is most certainly the preeminent cyborg of our time. More than a decade ago he implanted an RFID chip in himself to control simple functions like turning on the lights, and it's been 8 years since he inserted a more elaborate, 100-electrode array into the nerves in his forearm that allowed him to manipulate a robotic arm on another continent. He's assisted students at the University of Reading, in England, who wished to implant magnets in the tips of their fingers and at least one who wished for an electrode in the tongue (with the help, Warwick says, of a Manchester tattoo artist who goes by the name "Dr. Evil").
More recently, he's been growing rat neurons on a 128-electrode array and using them to control a simple robot consisting of two wheels with a sonar sensor. The rudimentary little toy has no microprocessor of its own -- it depends entirely on a rat embryo's brain cells. The interesting question is just how big one of these neuron-electrode hybrid brains can grow, and those brain cell networks are now getting more complicated, and more legitimately mammalian, Warwick said this week in a keynote speech at the IEEE Biomedical Circuits and Systems conference. Warwick's twist predates the living rat-controlled robot we wrote about recently, and it just goes to show that weird cyborg animal projects have virtually unlimited potential.
To start off a rat brain robot, embryonic neurons are separated out and allowed to grow on an electrode array. Within minutes the neurons start to push out tentacles and link up to each other, becoming interconnected dendrites and axons. A dense mesh of about 100,000 neurons can grow within several days. After about a week, Warwick and his collaborators can start to pulse the electrodes under the neural mesh in search of a pathway -- that is, when neurons near an active electrode fire, another group of neurons on a different side of the array shows an inclination to fire as well.
Once they have a pathway -- the groups fire in tandem at least a third of the time -- the University of Reading researchers can use that connection to get the robot to roam around and learn to avoid crashing into walls. They connect the electrode array to the robot using Bluetooth. When the sonar senses it's nearing a wall, it stimulates the electrode at one end of the neural pathway, and at first the brain sends back a coherent response only every once in awhile. The robot interprets the response as an instruction to turn its wheels. With time and repetition, the neural pathways become stronger, and the robot runs into the walls less frequently. In effect, the robot works out for itself how to not bash into obstacles.
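The closed loop described above can be made concrete with a toy simulation (purely illustrative: the Reading setup uses live neurons, while this sketch stands in a crude probability model whose names and numbers are made up):

```python
import random

class NeuralPathway:
    """Toy stand-in for the cultured neuron pathway: the chance that
    stimulating one end of the array evokes firing at the other end
    grows with repeated use, a crude Hebbian-style strengthening."""
    def __init__(self, initial_response_rate=0.33, learning_rate=0.05):
        self.response_rate = initial_response_rate
        self.learning_rate = learning_rate

    def stimulate(self):
        fired = random.random() < self.response_rate
        if fired:
            # A successful transmission strengthens the pathway.
            self.response_rate = min(1.0, self.response_rate + self.learning_rate)
        return fired

def drive_step(sonar_distance_cm, pathway, threshold_cm=20.0):
    """One control cycle: if the sonar sees a nearby wall, stimulate the
    culture; a coherent response is interpreted as 'turn the wheels'."""
    if sonar_distance_cm < threshold_cm and pathway.stimulate():
        return "turn"
    return "forward"
```

Run repeatedly, the response rate (and so the wall avoidance) improves with use, mirroring how the real robot "works out for itself" how to avoid obstacles.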
To add complexity to the experiments, Warwick's lab is now collaborating with a Canadian group to culture neurons in three dimensions, meaning they are attempting to grow a network of 30 million neurons -- a big step towards the 100 billion found in a human brain. After that, the next step will be to bring in human neurons. "If we have 100 billion human neurons," Warwick says, "should we give it rights? Does it get to vote?" More to the point, he wonders: "Is it conscious?"
Japanese roboticists recently showed off a female android singing and dancing along with a troupe of human performers. Video of the entertaining and surprisingly realistic demonstration quickly went viral on the Net.
The secret behind the dance routine, Dr. Yokoi tells me, is not the hardware -- it's software.
The hardware, of course, plays a key role. The AIST humanoids group is one of the world’s top places for robot design. Their HRP-2 humanoids are widely used in research. And the group's latest humanoids, the HRP-4 and a female variant, the HRP-4C, which is the robot in the dance demo, are even more impressive.
But the biggest innovation is new software for programming the robot's movements. The software is similar to tools popularly used in CG character animation: you basically click on the legs, arms, head, or torso and drag them to the position you want. You create a sequence of key poses, and the software generates the trajectories and low-level control to make the robot move.
So by editing a relatively small number of key poses you can compose complex whole-body motion trajectories. See a screen shot of the software interface below, with a 6.7-second sequence that uses only eight key poses:
The software developed at AIST to create sequences of movements.
The software verifies that the robot can indeed perform the transitions from one pose to the next. If the angular velocity or range of one of the joints exceeds the maximum values, the software adjusts the pose, so that it's feasible to execute.
The software also monitors the robot’s stability. When it generates a trajectory between two key poses, it checks that the waist trajectory won't create instabilities and that foot trajectories will result in enough contact with the floor. If a pose is not safe, the software finds a similar pose that would keep the robot in balance.
After creating a sequence, you can preview the resulting motion in the 3D simulator—or, if you have an HRP-4C, you can upload the code to the robot and watch it dance.
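The key-pose idea can be sketched in a few lines (a simplified illustration, not the AIST software: the function names and the degrees-per-second limit are invented, and the real tool also handles stability and low-level control):

```python
def interpolate_pose(pose_a, pose_b, t):
    """Linear blend between two key poses (dicts of joint -> angle in
    degrees) at fraction t in [0, 1]; the software sweeps t to turn a
    handful of key poses into a continuous trajectory."""
    return {j: pose_a[j] + (pose_b[j] - pose_a[j]) * t for j in pose_a}

def clamp_transition(pose_a, pose_b, duration_s, max_vel_deg_s):
    """Mimic the feasibility check described above: if a transition
    would demand more than max_vel_deg_s at some joint, pull the target
    angle back so the transition stays executable."""
    feasible = dict(pose_b)
    for joint, target in pose_b.items():
        vel = abs(target - pose_a[joint]) / duration_s
        if vel > max_vel_deg_s:
            limit = max_vel_deg_s * duration_s
            direction = 1 if target >= pose_a[joint] else -1
            feasible[joint] = pose_a[joint] + direction * limit
    return feasible
```

This is why eight key poses suffice for a 6.7-second sequence: everything between poses is generated, and infeasible targets are adjusted rather than rejected.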
One of their goals in developing the software, Dr. Yokoi says, is simplifying the creation of robot motion routines, so that even non-roboticists can do it. "We want other people—like CG creators, choreographers, anyone—to be able to create robot motions," he adds.
Here’s my full interview with Dr. Yokoi, in which he describes how the new software works, what it took to create the dance routine, and why he thinks Apple's business models could help robotics.
Erico Guizzo: I watched the video of the HRP-4C dancing with the human dancers several times—it’s fascinating. How did you have the idea for this demonstration?
Kazuhito Yokoi: We wanted to prepare a demonstration for this year’s Digital Content Expo, in Tokyo, and one of our colleagues, Dr. [Masaru] Ishikawa from the University of Tokyo, suggested this kind of event. At last year’s Expo, we used the robot as an actress. We didn’t have the software to create complex motions, so we were limited to movements of the arms and face. It was a fun presentation. But this time we wanted to do something different, and one of the ideas we had was a dance performance. One of the key collaborators was SAM, who is a famous dancer and dance choreographer in Japan. He created our dance routine. The human dancers are members of his dance school.
EG: Did he choreograph the robot’s dance movements as well?
KY: We wanted to make the dance as realistic as possible. So we didn’t choreograph the robot first. Instead, SAM created a dance using one of his students. Then we used the software to “copy” the dance from the human to the robot.
HRP-4C performs with human dancers.
EG: How long did this process take?
KY: Programming the software is relatively fast. But because this was a complex performance, we did several rehearsals. After SAM created the dance and we transferred it to the robot, he watched the robot and wanted to make some adjustments to the choreography. We expected that would happen because, of course, there are differences between the abilities of a human and a humanoid. For example, the joint angle and speed have maximum values. So it’s difficult to copy the dance exactly, but we tried to copy as close as possible. Then we transferred SAM's changes to the robot and we did another rehearsal. And at some point we also brought in the human dancers. I think we spent about one month until we had the final performance.
EG: When you’re using the software, what if you program a movement that the robot can’t execute, either because of angle or speed limitations or because it would fall?
KY: What you give the software are key poses. If, for example, you have one pose and you create a new pose and making that transition would require a joint angular velocity higher than what the robot can perform, then the software would inform you about that, and you can adjust the pose, reducing the final angle of the joint. The software also automatically keeps track of stability. Of course, users should have some basic understanding of their robot, how it balances, but the software does the rest—it will alert the user if a pose is unstable and correct the pose.
EG: Does the software compute ZMP [Zero Moment Point] to detect poses that are unstable?
KY: Yes, we use the ZMP concept. Again, the user can freely design the key poses. If a pose is not stable, the software automatically detects that the pose is a problem and modifies it. So it’s doing that in real time, as you design your sequence of movements. And if you don’t like the “corrected” pose you can choose another pose and keep trying until you’re satisfied with the movements. And of course, you can try your whole choreography using the software, before you test it in the real robot!
The software automatically adds a key pose needed to maintain stability.
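For readers unfamiliar with ZMP, here is a minimal sketch of the stability check Yokoi describes, using the common cart-table approximation (an assumption on my part; AIST's actual formulation is surely more sophisticated):

```python
def zmp_x(x_com, z_com, x_acc, g=9.81):
    """Cart-table approximation of the Zero Moment Point along one
    axis: the ZMP shifts away from the center of mass when the CoM
    accelerates. Assumes constant CoM height (no vertical motion)."""
    return x_com - (z_com / g) * x_acc

def pose_is_stable(x_com, z_com, x_acc, support_min, support_max):
    """A pose or transition is accepted when the ZMP stays inside the
    support polygon (reduced here to a 1-D foot-contact interval)."""
    return support_min <= zmp_x(x_com, z_com, x_acc) <= support_max
```

When this check fails, software like AIST's substitutes a nearby pose whose ZMP falls back inside the support polygon.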
EG: Was the software designed specifically for HRP-4C?
KY: No. The software is robot independent. You just need the robot model. For example, we have the model for HRP-2, so we can create HRP-2 movements. We also have the model for HRP-4, and we recently created movements for this robot as well.
[See below a recent video of HRP4.]
EG: Speaking of HRP-4, are HRP-4 and HRP-4C the same robot with just different exteriors? And are they both made by Kawada Industries?
KY: They are not the same. HRP-4C has 8 actuators in its head and it can make facial expressions. HRP-4 has no such kind of actuators. HRP-4 is made by Kawada. HRP-4C is special. It’s a collaboration. At AIST we designed the robot, but we have no factory to make robot hardware, so we collaborated with [Japanese robotics firms] Kawada and Kokoro. Kawada makes the body and Kokoro the head. You may know the Geminoid created by Professor [Hiroshi] Ishiguro of Osaka University. He's made several androids. His androids are made by Kokoro. So we also asked them to develop our robot head for HRP-4C. They have very good know-how to make humanlike skin. That’s an important factor.
EG: Can you use the software to design other kinds of movements, such as tasks to help a person in a house?
KY: Yes. That’s our dream. We need more capabilities to do that, like recognizing a person and objects in the house, for example. That’s not part of this software. But this software lets you program any kind of movement. And we want more people to try to program the robot. Now only researchers can do that. But in our opinion that’s not good enough. We want other people—like CG creators, choreographers, anyone—to be able to create robot motions. And maybe that will lead to robotics applications not only in entertainment, but in industry and home applications too. Think about the iPhone. Many people want an iPhone because it has hundreds of nice software applications. Apple didn’t create all of those; they were developed by others, including some small developers, and they were able to have great success. So the iPhone is a platform—video game consoles and computers are similar in that sense—and we want to follow this business model.
EG: When will researchers and others be able to use the software?
KY: We just finished developing the software and we’ve not delivered it to anybody. We have not yet decided what kind of license we will adopt, but we have plans to make it available maybe by the end of next March.
EG: What about the HRP-4 and HRP-4C robots—who will be able to use them?
KY: If you buy one, you can use it. [Laughs.]
EG: So what is the goal of your group at AIST? Do you want to create humanoid robots to help other researchers who study robotics or do you want to develop robots that one day will actually be used in people’s homes and factories?
KY: Humanoid robots in homes and factories, as you mentioned, that’s our final goal. That’s our long, long final goal. But in the meantime, we think we can contribute to other application areas in humanoid robotics. One is hobby and toy humanoid robots—it's a big area. The second consists of research platforms, like HRP-2 or HRP-4, that people in academia can use to develop new software or theories on how to control robots and how to make them perform tasks naturally. The third area is entertainment. That’s why we created the dance performance. We have also shown the HRP-4C wearing a wedding dress at a fashion show, and used it as a master of ceremonies. But our final goal is not just entertainment. For example, this new software can create any kind of motion. Maybe we could use it to make the robot perform tasks to help elderly people, or to perform activities involving education or communication. There are many possibilities.
EG: AIST’s humanoids are among the most impressive. Where do you get inspiration for creating them? And do you always want to make them look more human or is it sometimes a good idea to make them look robotic?
KY: Good questions. I think it depends on the application. HRP-2, HRP-3, and HRP-4 look robotic. If a robot is just walking or doing some dangerous, dirty, or dull task, okay, it doesn’t need a human face. But if we want to bring our robots to the entertainment industry, for example, then a more humanlike appearance makes a difference and is maybe more attractive. That’s why we created a female humanoid. When we decided to bring our humanoids into the entertainment industry, we thought that a female type would be better.
EG: Going back to the beginning of our conversation, about the HRP-4C dance, a lot of people have seen the video—why do you think people are so fascinated with this demonstration?
KY: I don’t know. I guess this was a large trial in humanoid robotics. Dancing is something very human. You don't expect to see robots dancing like that with other dancers. Maybe people have seen smaller robots dancing, like Sony's QRIO or the Nao humanoid robot from Aldebaran. But for these small types of robot it’s difficult to collaborate or interact with humans. In our demonstration we wanted to show a realistic dance performance. And of course, we wanted it to be fun!
Kokoro Co., the Japanese firm that manufactures the android and sells it with the name Actroid F, recently demonstrated its newest capabilities. The android features facial movements even more realistic than before. It blinks and twitches and moves its head with remarkable realism. Watch:
Geminoid F was unveiled by Japanese roboticist Hiroshi Ishiguro early this year. The robot is a copy of a woman in her 20s with long dark hair.
When first demonstrated, the robot could laugh, smile, and exhibit other facial expressions. Now it's even more impressive in the way it naturally changes its facial expressions.
German researchers have developed a robotic camera that mimics the motion of real eyes and even moves at superhuman speeds.
The camera system can point in any direction and is also capable of imitating the fastest human eye movements, which can reach speeds of 500 degrees per second. But the system can also move faster than that, achieving more than 2500 degrees per second. It would make for very fast robot eyes.
In many experiments in psychology, human-computer interaction, and other fields, researchers want to monitor precisely what subjects are looking at. Gaze can reveal not only what people are focusing their attention on but it also provides clues about their state of mind and intentions.
Mobile systems to monitor gaze include eye-tracking software and head-mounted cameras. But they're not perfect; sometimes they just can't follow a person's fast eye movements, and sometimes they provide ambiguous gaze information.
The system, propped on a person's head, uses a custom-made eye-tracker to monitor the person's eye movements. It then precisely reproduces those movements using a superfast actuator-driven mechanism with yaw, pitch, and roll rotation, like a human eyeball. When the real eye moves, the robot eye follows suit.
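The replication loop might look roughly like this (a hypothetical sketch: the 2500 degrees/second cap comes from the speed quoted above, but the structure and names are my own, not the Munich team's code):

```python
def replicate_gaze(prev_angles, eye_angles, dt, max_speed=2500.0):
    """One update of the replica eye: move each axis (yaw, pitch,
    roll) toward the tracked eye angle, capped at the mechanism's
    rated speed in degrees per second."""
    out = {}
    for axis, target in eye_angles.items():
        delta = target - prev_angles[axis]
        max_step = max_speed * dt  # largest angle change this cycle
        if abs(delta) > max_step:
            delta = max_step if delta > 0 else -max_step
        out[axis] = prev_angles[axis] + delta
    return out
```

Because the mechanism's speed limit sits well above the fastest saccades, the cap almost never binds in practice and the replica tracks the eye essentially without lag.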
The engineers at the Institute of Applied Mechanics have been working on the camera orientation system over the past few years. Their previous designs had 2 degrees of freedom (DOF). Now researcher Thomas Villgrattner is presenting a system that improves on the earlier versions and features not 2 but 3 DOF.
He explains that existing camera-orientation systems with 3 DOF that are fast and lightweight rely on model aircraft servo actuators. The main drawback of such actuators is that they can introduce delays and require gear boxes.
So Villgrattner sought a different approach. Because this is a head-mounted device, it has to be lightweight and inconspicuous -- you don't want it rattling and shaking on the subject's scalp. Which actuators to use?
The solution consists of an elegant parallel system that uses ultrasonic piezo actuators. The piezos transmit their movement to a prismatic joint, which in turn drives small push rods attached to the camera frame. The rods have spherical joints on either end, and this kind of mechanism is known as a PSS (prismatic-spherical-spherical) chain.
It's a "quite nice mechanism," says Masaaki Kumagai, a mechanical engineering associate professor at Tohoku Gakuin University, in Miyagi, Japan, who was not involved in the project. "I can't believe they made such a high speed/acceleration mechanism using piezo actuators."
The advantage is that it can reach high speeds and accelerations with small actuators, which remain on a stationary base, so they don't add to the inertial mass of the moving parts. And the piezos also provide high forces at low speeds, so no gear box is needed.
The current prototype weighs in at just 100 grams. It was able to reproduce the fastest eye movements, known as saccades, and also perform movements much faster than what our eyes can do.
The system, Villgrattner tells me, was mainly designed for a "head-mounted gaze-driven camera system," but he adds that it could also be used "for remote eye trackers, for eye related 'Wizard of Oz' tests, and as artificial eyes for humanoid robots."
In particular, this last application -- eyes for humanoid robots -- appears quite promising, and the Munich team is already working on that. Current humanoid eyes are rather simple, typically just static cameras, and that's understandable given all the complexity in these machines. It would be cool to see robots with humanlike -- or super human -- gaze capabilities.
Below is a video of the camera-orientation system (the head-mount device is not shown). First, it moves the camera around each of the three axes individually (vertical, horizontal, and longitudinal) with an amplitude of about 30 degrees. Next, it moves simultaneously around all three axes with an amplitude of about 19 degrees. Then it performs fast movements around the vertical axis at 1000 degrees/second and also highly dynamic movements around all axes. Finally, the system reproduces natural human eye movements based on data from an eye-tracking system.
UPDATE: Corrected maximum speeds for saccades and added collaborators involved in the project.
As robot hands go, this is the weirdest I've ever seen. But you know what? It works really well.
Researchers at Cornell University, University of Chicago, and iRobot reported this week that they've developed a fingerless robotic gripper made from a rubber bag filled with coffee grounds.
The gripper can transition from a soft state, when it's easily deformable and can conform to the shape of various objects, to a rigid state, when it can firmly hold the objects.
The secret, the researchers report in a paper published in the Proceedings of the National Academy of Sciences (PNAS), is the "jamming" phase transition of granular materials -- in this case, coffee grounds.
The approach, they write, "opens up new possibilities for the design of simple, yet highly adaptive systems that excel at fast gripping of complex objects."
Researchers have used the "jamming" principle for robot locomotion before, but this appears to be the first application in manipulation.
So how does it work? When a granular material like sand or coffee grounds is loosely packed it can flow almost like a liquid. But when the particles in the material are packed tightly together, they "jam," or lock into one another.
One way of jamming them together is by applying a vacuum. You've seen it before: Vacuum-sealed coffee packages are hard bricks, but when opened, air rushes in and the packages become deformable.
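The grip cycle can be summarized as a tiny state machine (purely illustrative; the real gripper is pneumatic hardware, and the class and method names here are invented for the sketch):

```python
class JammingGripper:
    """Sketch of the coffee-grounds gripper: soft when vented (grounds
    flow and conform to the object), rigid when a vacuum jams the
    grains together."""
    def __init__(self):
        self.state = "soft"
        self.held = None
        self.conformed_to = None

    def press_onto(self, obj):
        # The bag must be vented (soft) to flow around the object.
        assert self.state == "soft", "must be vented to conform"
        self.conformed_to = obj

    def apply_vacuum(self):
        # Removing the air packs the grains: the jamming transition.
        self.state = "rigid"
        self.held = self.conformed_to

    def vent(self):
        # Letting air back in unjams the grains, releasing the object.
        self.state = "soft"
        released, self.held = self.held, None
        return released
```

The appeal of the design is exactly this simplicity: one pressure line toggles the whole bag between "conform" and "hold," with no per-finger planning.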
The gripper is a clever idea. Now, is it practical? How strong is it? And can this coffee-powered robot hand fetch you a cup of coffee?
Erico Guizzo: This is an unusual gripper approach and I was wondering how the idea came about.
Eric Brown: This grew out of a program to develop a new field of soft robotics that was sponsored by DARPA. It was based on the observation that humans and animals are mostly made of soft materials, but robots have usually been made of hard materials like metal, and maybe we could build robots with more of the functionality of humans if we switched to softer materials. My group at the University of Chicago had experience in the physics of soft and granular materials, and we were paired up with robot engineers at Cornell and iRobot. My group has been studying a transition between soft and hard states of granular materials, called "jamming", and together we decided that gripping was a good robotics application for this.
EG: You successfully tested the gripper with varied objects -- what's the one object you were most surprised it could hold? And is there anything that it failed to hold?
EB: I think I was most surprised that it was able to pick up a penny, because we had expected it would have a harder time forming around very flat objects. We were especially excited that it could pick up fragile objects like raw eggs and wine glasses, because these are traditionally challenging for robotic grippers. It doesn't do well with extremely soft objects like cotton balls.
EG: How strong is the gripper? That is, how much weight can it lift, and once it's holding something, how strong is the grasp?
EB: The heaviest objects we lifted with the hand-sized gripper were a pair of gallon jugs of water, weighing a total of about 15 pounds [6.8 kilograms]. The grip is easily strong enough to lift and hold onto anything that fits into your hand, and can hold on, for example, if something bumps into it. However, you could pull something out of the grip with your own hand. The technology is scalable, so we predict that a larger gripper, about 3 feet [0.9 meters] across, would be able to lift a car.
EG: One of the press materials states that the gripper could "outperform robotic fingers when confronted with unfamiliar objects or complex shapes" -- could you give a few actual examples?
EB: We never did direct comparison tests against other grippers. The argument behind this statement is that the gripper can pick up a variety of different objects, and quickly, because the granular material will conform to the shapes of the objects. Even very robust robotic hands with fingers need to spend extra time to examine the object and calculate optimal positioning for each finger, things that our gripper doesn't need to spend time doing.
EG: This mechanism of soft-to-rigid transformation -- was it a well-understood phenomenon? Did anyone try to exploit it before?
EB: People have understood that soft pads on fingers or pincers give a little more tolerance, so we can pick up things that have unusual shapes or are fragile, such as a wine glass, without breaking them. While people had proposed the idea of using granular materials in bags for these soft pads, no one even bothered to try it out because they didn't think it would be more advantageous than a piece of soft rubber. What they had not thought of was that the jamming transition could do the pinching as well, eliminating the need for individual fingers or pincers.
A decade ago the term Unmanned Aerial Vehicle (UAV) was synonymous with expensive equipment, complex aerodynamics, and cruise-missile-type control algorithms. But since then, a rapid decline in the price of IMUs, driven by the rise of mobile computing, has slashed equipment costs. Today, open-source software like the Arduino environment and open-source hardware like the ArduPilot allow you and me to build our own UAV in a weekend for less than 300 USD.
Much of this progress is due to what has become the largest amateur UAV community and one of the largest robotics communities: DIY Drones. Founded by Chris Anderson, whose day job is Editor-in-Chief of WIRED, the site now has more than 12,000 members and covers all aspects of UAVs.
In its latest episode, the Robots podcast interviews Chris Anderson about DIY Drones. Anderson explains how to go about building a cheap UAV, why autonomous stabilization and navigation has become easy, the technical and legal aspects of flying your autonomous plane around the neighborhood, and the risk of putting UAV technology into the wrong hands. Anderson also talks about his experience with producing open source hardware and the economic challenges - and benefits - this brings about. To conclude, he shares some anecdotes on some of the crazy projects run on his site.
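The "autonomous stabilization" Anderson says has become easy typically boils down to feedback loops like PID. Here is a generic textbook sketch (my own illustration, not ArduPilot's actual code, and the gains are arbitrary):

```python
class PID:
    """Textbook PID controller of the kind hobby autopilots use for
    attitude stabilization: output a correction proportional to the
    error, its accumulated history, and its rate of change."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Feed it the IMU's measured roll angle against a level setpoint every few milliseconds and the output becomes a servo correction; cheap IMUs are what made this loop affordable.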
On Monday, U.S. President Barack Obama opened the (first ever) White House Science Fair with the following:
“One of the great joys of being President is getting to meet young people like all of you -- and some of the folks in the other room who I just had a chance to see some of their exhibits and the work that they were doing. It’s inspiring -- and I never miss a chance to see cool robots when I get a chance.”
Wow, me neither! Also introduced at the event was a new DARPA initiative to give resources to students to help them build those aforementioned robots. BTW, I’m still waiting to hear back on that anti-robot takeover czar position…
We knew that Innvo Labs was working on some upgrades to Pleo, but all has now been revealed over at Bob The Pleo forums, where Innvo Lab CEO Derek Dotson discussed the new “Pleo Reborn.” If you’re a fan of Pleo, you pretty much have to read the entire interview, but I’ve condensed most of the new features if you’d rather just skim:
- Pleos are now male or female (blue or pink) and will react to each other accordingly: females make gentle noises at each other, males step back and shout at each other, and a male and a female will make noises and lean against each other.
- Pleo skin coloring will be randomized slightly, so that two Pleos produced at the same time will look distinctive. There will also be 10 different eye colors.
- Pleo skin durability is improved and should now last 5x longer.
- A new lithium polymer battery more than doubles runtime to 120-150 minutes, and an LED battery indicator has been added underneath Pleo.
- Pleo will have a "seed" personality from the factory. Some will learn faster than others, and some will tend to be happier (or more mopey).
- Pleo now knows what time it is and will alter its behaviors accordingly; for example, it will want to be fed in the afternoon and act sleepy in the evening.
- Pleo can "smell" RFID tags.
- Pleo now has voice recognition, and you can name it, and it will respond to that name... as long as it's you saying it, not anyone else.
- By combining RFID tags that instruct Pleo to perform specific behaviors with voice recognition, it's now possible to train Pleo to respond to different commands, such as "bow" or "come to me."
- Many more touch sensors have been added, along with corresponding behaviors. For example, if you pet Pleo's side, it will lean into you.
- Pleo now has a G sensor that lets it detect acceleration and impacts as well as touch.
- Pleo's nose cam now allows for target tracking.
- Motor speed and response have been improved, especially in the tail, head, and neck.
More awesome stuff, plus video, after the jump.
There are a few more things that I thought were so interesting that I had to quote part of the interview:
Pleo needs to be fed now. If you don’t kind of take care of your Pleo, eventually it will get sick. It will cough, get lethargic, and all that. It needs its food to keep it healthy. And if it falls, here’s a big thing, Pleo actually feels pain now. So, if you abuse your Pleo or drop it, remember he knows if he’s been dropped or violated, he feels pain. It takes him a while to recover. He’ll limp. If you touch the area, it’ll be sore and he’ll cry. There are medicines that come with Pleo. You give him these medicines and it helps with the healing process. So, love and affection, and a little healing food, will get Pleo back up. In the labs, we’re striving to make this a pet. Along with a pet, there are certain obligations. So, if you’ve got children who want a pet, this is a good training device. If you can keep Pleo healthy, you can more likely keep a real pet healthy.
Now, here’s one of the quirkiest things we’ve done. Pleo can sense temperature. This is kind of a cool feature. So, let’s say you’re traveling, and you’ve got Pleo in the car and it drops below a certain temperature, and I’m not gonna give the temperature now because it’s adjustable, Pleo starts to shiver and shake like he’s cold. And if you leave him too cold too long, he’ll catch a cold. He also senses heat. If it’s too hot for Pleo, he’ll start panting. Along with what you get right off the get-go is a little chunk of ice. You put it under his chin. If he’s hot, he’ll open his mouth. You put it in, he’ll chew on this ice. It cools him down. Like I said, in our effort to make Pleo a more realistic pet, he has to interact better with his environment.
As far as availability goes, it sounds like there’s a bit of a supply problem, but the first few units should be available to members of the PleoWorld email list later this month. Price? Well, the unit in the pics was purchased at a trade show for about $500, so that’s likely to be the ballpark. Innvo has promised a giant, Jurassic-y booth at CES next year, and we’ll be bringing that to you in early January.