Hisashi Ishihara, Yuichiro Yoshikawa, and Prof. Minoru Asada of Osaka University in Japan have developed a new child robot platform called Affetto. Affetto can make realistic facial expressions so that humans can interact with it in a more natural way.
Prof. Asada is the leader of the JST ERATO Asada Project and his team has been working on "cognitive developmental robotics," which aims to understand the development of human intelligence through the use of robots. (Learn more about the research that led to Affetto in this interview with Prof. Asada.)
Affetto is modeled after a one- to two-year-old child and will be used to study the early stages of human social development. There have been earlier attempts to study how interaction between child robots and people relates to social development, but those robots' lack of a realistic childlike appearance and facial expressions hindered human-robot interaction, with caregivers not attending to the robots in a natural way.
Here are some of the expressions that Affetto can make to share its emotions with the caregiver.
Norri Kageki is a journalist who writes about robots. She is originally from Tokyo and currently lives in the San Francisco Bay Area. She is the publisher of GetRobo and also writes for various publications in the U.S. and Japan.
Robotics is off to a good start this year. In January, there was CES, with lots of cool new robot products and demos, and we've also seen plenty of robot hacks using Microsoft's Kinect 3D sensor, which is creating quite a stir. But there was much more, of course, so it's time to review the most striking, stunning, and strange robot videos of January.
No. 10 This mind-bending action sequence from the Indian robot movie Enthiran is a must-watch. Insane, awesome, ridiculous? You be the judge.
Northrop Grumman’s sexily badass X-47B unmanned combat air system made its first flight ever on Friday, circling a desert runway a couple times all by itself before successfully not crashing. Northrop seemed pretty happy about the way things went:
“The flight provided test data to verify and validate system software for guidance and navigation, and the aerodynamic control of the tailless design. The X-47B aircraft will remain at Edwards AFB for flight envelope expansion before transitioning to Naval Air Station Patuxent River, Md. later this year. There, the system will undergo additional tests to validate its readiness to begin testing in the maritime and carrier environment.”
"Flight envelope expansion" means that they’re going to see how crazy the X-47B can get in the air. After that, they’re going to get it ready for its intended purpose, which is carrier operations. We know that drones are already pretty good at precision maneuvers, but I hear carrier landings are especially tricky. I’m optimistic (I always am about robots), but seeing this thing manage an autonomous carrier touchdown is going to go a long way towards convincing skeptics that drones really can function on a level similar to even the most skilled humans in many aspects of combat aircraft control.
As part of the European project RoboEarth, I am currently one of about 30 people working towards building an Internet for robots: a worldwide, open-source platform that allows any robot with a network connection to generate, share, and reuse data. The project is set up to deliver a proof of concept to show two things:
1. RoboEarth greatly speeds up robot learning and adaptation in complex tasks.
2. Robots using RoboEarth can execute tasks that were not explicitly planned for at design time.
The vision behind RoboEarth is much larger: Allow robots to encode, exchange, and reuse knowledge to help each other accomplish complex tasks. This goes beyond merely letting robots communicate via the Internet, outsource computation to the cloud, or publish linked data.
But before you yell "Skynet!," think again. While the closest analogues science fiction writers have imagined may well be the artificial intelligences of Terminator, the Space Odyssey series, or the Ender saga, I think those analogies are flawed. RoboEarth is about building a knowledge base, and while it may include intelligent web services or a robot app store, it will probably be about as self-aware as Wikipedia.
That said, my colleagues and I believe that if robots are to move out of the factories and work alongside humans, they will need to systematically share data and build on each other’s experience.
Imagine the following scenario: A service robot like the one in the hospital room [photo, top] is pre-programmed to serve a drink to a patient. A simple program might include: Locate the drink, navigate to its position, grasp it, pick it up, locate the patient in the bed, navigate to the patient, and finally hand over the drink.
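A pre-programmed sequence like this can be sketched as a short script. To be clear, this is my own illustrative sketch, not RoboEarth code: the `Robot` class and its methods are hypothetical stand-ins for whatever perception, navigation, and manipulation stack a real service robot would run.

```python
# Hypothetical sketch of the pre-programmed drink-serving task as a
# fixed sequence of steps. The Robot class stands in for a real
# perception/navigation/manipulation stack; nothing here is a RoboEarth API.

class Robot:
    def __init__(self):
        self.log = []  # record of executed steps, for inspection

    def locate(self, name):
        self.log.append(f"locate {name}")
        return name  # a real robot would return a pose, not a string

    def navigate_to(self, target):
        self.log.append(f"navigate to {target}")

    def grasp(self, obj):
        self.log.append(f"grasp {obj}")

    def hand_over(self, obj, person):
        self.log.append(f"hand {obj} to {person}")

def serve_drink(robot):
    drink = robot.locate("drink")       # find the drink
    robot.navigate_to(drink)            # drive to its position
    robot.grasp(drink)                  # grasp and pick it up
    patient = robot.locate("patient")   # find the patient in the bed
    robot.navigate_to(patient)          # drive to the patient
    robot.hand_over(drink, patient)     # finally hand over the drink

bot = Robot()
serve_drink(bot)
```

The point of the scenario is that a fixed sequence like this is brittle: any step can fail if the world differs from the program's assumptions, which is exactly the gap the shared world model is meant to fill.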
Now imagine that during task execution this robot monitors and logs its progress and continuously updates and extends its rudimentary, pre-programmed world model with additional information. It updates and adds the position of detected objects, it evaluates the correspondence of its map with its actual perception, and it logs successful and unsuccessful attempts during its task performance. If the robot is not able to fulfill a task, it asks a person for help and stores any newly learned knowledge. At the end of its task performance, the robot shares its acquired knowledge by uploading it to a Web-style database.
Some time later, the same task is to be performed by a second robot that has no prior knowledge on how to execute the task. This second robot queries the database for relevant information and downloads the knowledge previously collected by other robots. Although differences between the two robots (e.g., due to wear and tear or different robot hardware) and their environments (e.g., due to changed object locations or a different hospital room) mean that the downloaded information may not be sufficient to allow this robot to re-perform a previously successful task, this information can nevertheless provide a useful starting point.
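The query-then-reuse pattern in this scenario can be illustrated with a toy example. Here the shared "database" is just a Python dict and the keys and values are invented for illustration; a real RoboEarth backend is a Web-accessible store of maps, object models, and task descriptions.

```python
# Toy illustration of the RoboEarth query-then-reuse pattern.
# The shared database is a plain dict; keys and values are hypothetical.

shared_db = {}

def upload(key, knowledge):
    """First robot shares what it learned during task execution."""
    shared_db[key] = knowledge

def query(key):
    """Second robot asks for prior knowledge; None means start from scratch."""
    return shared_db.get(key)

# Robot A learns where the cup usually is and uploads that knowledge.
upload("cup_location", {"room": "ward 3", "surface": "bedside table"})

# Robot B, with no prior knowledge of the task, downloads the hint and
# treats it as a starting point for its search -- not as ground truth,
# since its hardware and environment may differ from Robot A's.
prior = query("cup_location")
if prior:
    search_order = [prior["surface"], "countertop", "floor"]
else:
    search_order = ["countertop", "floor"]
```

The key design point is the fallback: downloaded knowledge only reorders the second robot's search and execution strategy, so a stale or mismatched entry degrades performance gracefully instead of causing failure.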
Recognized objects, such as the bed, can now provide occupancy information even for areas not directly observed. Detailed object models (e.g., of a cup) can increase the speed and reliability of the robot's interactions. Task descriptions of previously successful actions (e.g., driving around the bed) can provide guidance on how the robot may be able to successfully perform its task.
This and other prior information (e.g., the previous location of the cup, the likely place to find the patient) can guide this second robot’s search and execution strategy. In addition, as the two robots continue to perform their tasks and pool their data, the quality of prior information will improve and begin to reveal underlying patterns and correlations about the robots and their environment.
As you can see in the video above, RoboEarth has a way to go. One year into the project, we can download task descriptions from RoboEarth and execute a simple task. We can also upload simple things, like an improved map of the environment. But for now we are far from using or creating the rich body of prior information described in the scenario above, or addressing potential safety or legal challenges.
I think that the availability of such prior information is a necessary condition for robots to operate in more complex, unstructured environments. The people working on RoboEarth -- me included -- believe that, ultimately, the nuanced and complicated nature of human spaces can't be summarized within a limited set of specifications. A World Wide Web for robots will allow them to achieve successful performance in increasingly complex tasks and environments.
This is maybe only peripherally (ha!) related to robotics, but it’s cool enough that I thought it was worth sharing… Besides, it’s Friday, and you deserve some nifty videos to watch. Anyway, we’ve posted before on all the cool things that roboticists have been able to do with Microsoft’s stupidly cheap and effective 3D camera system, and Willow Garage took some initiative and sponsored a contest to try to kick-start even more open source Kinect innovation.
First place (and $3k) went to Garratt Gallagher’s "Customizable Buttons." Using a piece of paper and a pen, you can just draw your own touch-sensitive controls:
Taking home no awards, but one of my personal favorite demos, was Kinemmings, a game of Lemmings played using your body and the Kinect sensor. Yes, it may not be advancing the field of robots or whatever, but it sure looks like fun:
Microsoft should absolutely pay those guys a bajillion dollars and hire them as game designers or something. Seriously, Kinect has way more potential than one company can possibly harness. And as for robots, great strides are obviously being made, and the future is (hopefully) limitless. If any of these projects are of use to you personally, remember that since they’re on ROS, you can just download them and put them to work yourself.
Inspection of high-voltage power lines is costly, difficult, and a dangerous job even for skilled workers. Which means it's the perfect job for a robot.
We first wrote about Expliner, an incredible inspection robot that balances on power lines like an acrobat, more than a year ago. Since then, HiBot, the Japanese company that developed Expliner, has gone on several inspection jobs, remotely operating the robot as it crawls on 500-kilovolt live lines.
The company is now gearing up to deliver the robot to customers, first in Japan, and later abroad as well.
Expliner is like a wheeled cable car that rolls along the upper pair of bundled cables. In addition to its manipulator arm, it carries laser sensors, to spot corrosion or scratches, and a high-definition camera, which records details of bolts and spacers far more effectively than even a human worker.
HiBot says that Expliner is a semi-autonomous robot.
"There is always a human in the control loop, but the basic repetitive tasks are automated," says Michele Guarnieri, a HiBot co-founder. "Tasks that require a high degree of precision, like maintaining balance or moving parts to a certain angle, are also automated."
He explains that the robot can inspect up to four cables simultaneously, and software automatically checks all recorded videos and alerts users about potential damage or problems on the lines.
HiBot has recently released a new video that shows off the robot's capabilities, including being able to go over cable suspension clamps through a series of acrobatic maneuvers using a dangling counterweight to shift the robot's center of gravity. Watch:
HiBot, which spun off from the laboratory of Tokyo Tech roboticist Shigeo Hirose (known for his incredible snakebots), has recently won an award for the Expliner robot from Japan's Ministry of Economy, Trade, and Industry.
And in case you're wondering: "Expliner doesn't fall," claims Guarnieri. "It's equipped with safety devices that prevent the robot from falling, even in case of strong winds."
Last year, we reported that British researchers are using a Charles Babbage robot head to develop emotional machines. We wondered whether the Charles head was a Hanson Robotics creation. We now have the answer.
"Yes, Charles is a Hanson Robotics creation," David Hanson, founder and CTO of the company, tells us.
Hanson says they built the robot more than a year ago and he was pleased to see that the Cambridge researchers have put it to work. "I think they’re up to some good stuff," he says.
Above is an image of Charles at the Hanson robot factory.
Hanson also updated us on his company's latest developments -- they've been busy working on some new robots and updating old ones. These creations are incredible, and I can't decide where I'd put them in the uncanny valley chart.
First, there's Zeno. No, not the little Zeno. This is a big Zeno, modeled after Zeno of Elea, the mathematical philosopher who, as Hanson puts it, "introduced riddles of recursion that vexed the Greeks so terribly, and inspired [Douglas] Hofstadter so much that he included Zeno as a character in 'Gödel, Escher, Bach.' "
Here's a video, and there's a photo of it below as well:
Hanson has been putting a lot of effort into software, and the latest version has "features enabling common sense reasoning and learning," he says. "This is a collaboration of numerous groups through the Apollo Mind Initiative"—a nonprofit he helped found—"dedicated to helping institutions collaborate on realizing greater-than-human genius in machine intelligence."
The company has also just rebuilt their famed Philip K. Dick robot. The upgrade, commissioned by a Dutch public TV station working on a documentary about the author, is "more expressive and intelligent," Hanson says:
And I know you're wondering: What about the little Zeno? Hanson pointed us to this video from last year, and shared this bit of news: He expects the robot to be ready for a release to researchers in 2011, and consumers in 2012.
The pi4 Workerbot is a new industrial robot capable of using its two arms to perform a variety of handling, assembly, and inspection tasks. It's designed to work alongside human workers -- and the robot's LCD face even displays a broad smile when things are running smoothly.
One of the innovative things about the robot is its control system. The Workerbot, which made its debut at the Automatica show last June, relies on a method known as impedance control, which allows the robot's arms to cooperate as they handle objects, keeping forces at desired levels and adjusting to disturbances -- a crucial capability when it comes to bimanual manipulations.
With its human-inspired size and looks [see images above], the Workerbot is a far cry from traditional factory bots, especially those used by the auto industry.
That's not to say that the automotive industry hasn't been good to robotics. Quite the opposite. Thanks to car manufacturers, industrial robots evolved into fast, reliable, powerful, and precise machines. But there's a flip side to the story.
Traditional industrial robots are rather complex to integrate into existing manufacturing processes; deploying them at a factory is an arduous, costly, and time-consuming task. The robots are also difficult to reprogram when changes become necessary, and they can't safely share spaces with human workers.
This barrier to entry has kept small and medium companies in industrialized countries "robot-less" -- at a time when robots, more than ever, could boost productivity and ameliorate labor shortages. To automate their production lines, which often include many different items manufactured in low volumes, these companies need robots that are inexpensive and intuitive, but still reliable and precise.
This is a promising, and potentially hugely lucrative, market that pi4_robotics and other companies -- including, it appears, Rodney Brooks' secretive start-up, Heartland Robotics -- want to explore.
The Workerbot's arms have seven degrees of freedom each (like human arms), with grippers equipped with force sensors that can adjust the pressure that they apply. The head has two inspection cameras on the sides, a 3-D camera on the forehead, and a display screen that provides feedback to operators (a smile means all is okay; a frown indicates that something is wrong, or that the robot could work faster). The Workerbot is not a mobile robot, though human workers can use its wheeled base to move it manually.
Watch the robot in action:
According to Fraunhofer engineer Dragoljub Surdilovic, their approach to compliant control is what makes the Workerbot different from similar two-armed bots, like the Motoman SDA10D and the DLR/KUKA Justin humanoid.
"We created a new dual-arm programming language and environment that incorporate impedance control and make it easier to plan, program, and realize bimanual contact tasks," Surdilovic says.
Most industrial robots don't use impedance control, but rather they implement position control. In this approach, the robot tries to make its arms follow as closely as possible a series of positions in space. If the arms go off their trajectory, the motors try to bring them back on track.
The problem is, if you have two robots, or one robot with two arms, that need to collaborate and they are position controlled, coordinating their movements can be difficult. Imagine that the two arms are manipulating the same object. If at any point one arm drifts off its trajectory and starts pushing to get back on track, it can force the other arm off its trajectory, which then starts exerting corrective forces of its own -- and the two controllers end up fighting each other through the object.
Using impedance control, bimanual manipulation becomes much easier. The way this scheme works is that the robot simulates a dynamic behavior for its arms that is different from the arms' intrinsic mechanical dynamics (which depend on their linkages, motors, and joints). The idea is to actuate the motors by simulating a mass-damper system. Imagine moving an object through a viscous liquid. The control system can adjust its parameters so you feel that you are moving a greater mass, for example, or add more damping so you don't overshoot when trying to bring the arm to a given position.
The upshot is that impedance control makes the arms capable of adjusting to errors and disturbances while at the same time keeping applied forces within desired limits.
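A one-dimensional sketch can make the idea concrete. This is my own minimal simulation of a virtual mass-spring-damper law, not the Workerbot's controller; the gains M, D, and K are illustrative values chosen only so the motion converges smoothly.

```python
# 1-D impedance control sketch: the arm is commanded as if it were a
# virtual mass M attached to the desired position by a spring K and a
# damper D. Contact forces then produce a bounded, compliant response
# instead of a hard position-control push.
# Gains are illustrative, not taken from the Workerbot.

M, D, K = 2.0, 8.0, 40.0   # virtual mass [kg], damping, stiffness
dt = 0.01                  # control time step [s]

def step(x, v, x_des, f_ext=0.0):
    """Advance the virtual dynamics by one time step.

    f_ext models an external contact force (e.g., a person or the
    second arm pushing on the shared object)."""
    a = (K * (x_des - x) - D * v + f_ext) / M
    v += a * dt            # semi-implicit Euler integration
    x += v * dt
    return x, v

# Move toward x_des = 0.5 m from rest: the virtual spring pulls the arm
# in while the damper prevents overshoot from growing unbounded, and the
# force applied at any instant is limited by K times the position error.
x, v = 0.0, 0.0
for _ in range(2000):       # simulate 20 seconds
    x, v = step(x, v, x_des=0.5)
```

Raising D makes the simulated motion feel more viscous (less overshoot); raising M makes the arm feel heavier, exactly the tuning knobs described above.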
This approach is also key to improving safety, because the robot won't push back if a person accidentally comes into contact with it. Indeed, the Workerbot meets the ISO 10218 standard for inherently safe industrial robot design. Another important benefit is that human operators can manually guide the robot arms to teach it an assortment of tasks, simplifying the programming process.
It will be interesting to compare the Workerbot to the Heartland Robotics system. Both companies seem to target assembly, handling, and inspection tasks. Whereas pi4_robotics claims that its bot will "help keep European production competitive," Heartland wants to "reinvigorate American manufacturing." The German firm plans to lease its robot for about 4,800 euros per month, and recent reports indicate that Heartland might sell its robot for US $5,000, although details are still murky.
One thing is certain: This is going to be an exciting chapter in robotics, and I'm looking forward to seeing how things will unfold -- and most important, whether these robots will help the many manufacturers that have long awaited them.
We’ve been waiting for this moment for literally three years now. Keepon, everyone’s favorite yellow squishbot and arguably the world’s best robot dancer, is going to be available.
BeatBots, maker of the Keepon Pro (the $30,000 research Keepon), has partnered with the UK’s Wow! Stuff (who also made this) to create ‘My Keepon,’ a toy version of the Keepon that we know and love. From the press release:
Wow!’s design experts and robotics engineers, based in their recently-opened Los Angeles office, worked closely with BeatBots to design a toy that captured the essence of the Keepon character while replicating the robot’s most engaging interactive traits. These features include reactivity to touch and an amazing ability to listen to music, detect the beat, and dance in perfect rhythm!
But most importantly, Wow! Stuff and BeatBots are working to ensure that the success of My Keepon will directly support the social welfare goals at the heart of the Keepon story. “A percentage of the profit from each My Keepon will go towards subsidizing and donating BeatBots’ research-grade robots to therapists and researchers,” said Taylor. “We are so proud to make Keepon available to a broader audience, and we will choose retail partners who also feel proud to sell him.”
Michalowski commented, “Our dream is to make Keepon Pro units widely available to researchers and practitioners. Our work with Keepon suggests that the character’s simplicity, combined with a caregiver’s ability to conduct mediated interactions through the robot, can facilitate social engagement in a novel and exciting way. We hope that the toy version of our robot can channel public excitement towards general autism awareness while supporting our distribution of tools and resources to people and organizations around the world working to understand and treat it.”
Sadly, these are all the details that I’ve got for you at the moment. I did talk with Dr. Michalowski a little bit about the toy, and I can say that they’re trying very hard to make sure that the core functionality (and look) that makes Keepon Pro so endearing will be there in the toy version. It’s easy to look at that $40 price point with concern, but I’m optimistic, and it’s also important to remember that commercializing My Keepon is going to help make Keepon Pro cheaper and more available to people who need it, so it’s good news for everyone.
My Keepon is still in the prototype stage, but we’ve been promised one of the first review units when they’re available, which means you’ll get the first look at them too. SWEET!
Silicon Valley start-up Anybots is announcing today that it has begun shipping its QB telepresence robot.
Customers who pre-ordered the US $15,000 QB robot will begin receiving it this week. Those who order today will receive their units in March, the company said.
Over the past year, several people and organizations have been beta testing the robot, providing feedback to the company. The beta testers include Carnegie Mellon University faculty and NASA executives.
Even I got to be a QB for a week last year -- and I have to say, showing up to work in a robot body is a pretty cool experience:
Now the initial beta-testing phase has ended, and after improving the QB design, Anybots is ready to ship the robot. And who's buying it? Alas, the company declined to name any customers who ordered this first batch of bots.
The QB model shipping now includes several new features. Users controlling the robot can now use high-definition zoom to get a closer look at people and objects. The robot is also capable of seamlessly switching from one Wi-Fi access point to another.
But the most important feature: Finally, QB is capable of two-way video streaming, with the face of the operator appearing on a small LCD display on the robot's head. This was a major limitation with the pre-production prototype I tested.
Needless to say, Anybots is pretty excited about the possibilities of robotic telepresence. Indeed, it's been a long journey for them, and I applaud their persistence in putting a sophisticated robot on the market.
"Everyone from a cookie manufacturer looking to manage remote factories to a CEO who simply can’t make it to every meeting in person--teleporting via an Anybot has already given these people a new perspective on work," Anybots founder and CEO Trevor Blackwell said in a statement.
"At first I thought the bot would pay for itself if it could just replace one international trip," said Phil Libin, founder and CEO of Evernote and one of the beta testers, "but now I realize that the real value is letting me preserve spontaneous interactions at the office even when I'm thousands of miles away."