Automaton

High-Speed Robot Hands Fold a Towel in 0.4 Second

Remember those crazy fast robotic hands that can dribble a ball in the blink of an eye? A research group from the University of Tokyo has been teaching them to fold towels (very small towels) at blistering speed, poking some fun at Berkeley's PR2 and its rather more, um, sedate pace.

What the researchers figured out is that if you move something deformable (like a piece of cloth) fast enough, it'll simply follow the motion path of whatever it's attached to, and you don't have to worry about niggling little annoyances like the effects of gravity. Using this method, it's possible to calculate the path that the cloth will take, enabling a robot to fold it super fast, as in the video above.
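To get a feel for the idea, here's a minimal sketch (with made-up dimensions, not the planner from the paper) of the kind of trajectory you could compute under that assumption: treat the cloth as rigid at high speed and sweep the gripped edge through a half-circle so it lands on top of the fixed edge.

    import math

    def plan_fold(cloth_length=0.1, duration=0.4, steps=20):
        """Sketch of a dynamic fold: assume that at high speed the cloth
        behaves rigidly and its free edge simply follows the gripper.
        The gripper sweeps a half-circle of radius cloth_length/2 so the
        gripped edge lands on top of the fixed edge."""
        radius = cloth_length / 2.0
        waypoints = []
        for i in range(steps + 1):
            t = i / steps                          # normalized time, 0..1
            angle = math.pi * t                    # 0 -> pi over the fold
            x = radius + radius * math.cos(angle)  # 2r -> 0 (toward fixed edge)
            z = radius * math.sin(angle)           # lift, then drop
            waypoints.append((t * duration, x, z))
        return waypoints

    for t, x, z in plan_fold():
        print(f"t={t:.2f}s  x={x*100:5.1f}cm  z={z*100:5.1f}cm")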

These high-speed hands were able to fold a cloth in half in an average of 0.4 second, with a success rate of about 80 percent, but the researchers hope to improve on that by adding a visual feedback system (similar to the one they use to scan a book just by flipping through its pages) that can tell the hands exactly when to close. Eventually, the hope is to teach the hands to fold a more versatile range of objects, along with crazier things like high-speed origami.

This research was presented by Yuji Yamakawa, Akio Namiki, and Masatoshi Ishikawa of the University of Tokyo and Chiba University, in their ICRA paper entitled "Motion Planning for Dynamic Folding of a Cloth with Two High-speed Robot Hands and Two High-speed Sliders."

[ Ishikawa Oku Lab ]


Treebot Learns to Autonomously Climb Trees

This is Treebot. As you might expect, Treebot was designed to do one thing: climb trees. It is by no means the first robot able to do this, but its arboreal predecessors (RiSE, Modsnake, and, accidentally, PackBot, to name a few) weren't autonomous and didn't have the skills necessary to negotiate the complex network of branches that you tend to find on trees worth climbing.

Treebot's design is unusual: it uses a set of flexible linear actuators connecting two gripping claws, which lets it move around like an inchworm. While the back gripper holds on, the front gripper releases and the body extends forward, allowing the robot to literally feel around for a good place to grip.

Keeping to the inchworm theme, the robot doesn't use much in the way of fancy sensors. Instead, it's all tactile. You can tell the robot which direction you'd like it to go and how far, and the robot will grope its way to its destination, adaptively navigating from trunk to branches.
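That grope-and-grip behavior boils down to a simple sense-and-hold loop. Here's an illustrative sketch of inchworm-style tactile climbing; the robot object and its methods (extend_body, feels_solid_grip, and so on) are hypothetical stand-ins, not Treebot's actual control code.

    def climb(robot, heading, distance):
        """Illustrative inchworm climbing loop: hold on with the rear
        gripper, extend and feel for a solid grip, then pull the rear up.
        'robot' and its methods are hypothetical stand-ins."""
        traveled = 0.0
        while traveled < distance:
            robot.rear_gripper.close()
            robot.front_gripper.open()
            # Extend roughly toward the commanded heading and probe by touch
            # until the front claw reports a surface it can hold.
            step = robot.extend_body(toward=heading)
            while not robot.front_gripper.feels_solid_grip():
                step = robot.adjust_and_probe(toward=heading)
            robot.front_gripper.close()
            # Now release the rear claw and contract to catch up.
            robot.rear_gripper.open()
            robot.contract_body()
            traveled += step
        return traveled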

At the moment, Treebot is more or less blind. That isn't necessarily a problem, but the robot could reach its destination much faster if it could tell which branches offer the most efficient route higher up, so the researchers are working on ways to help Treebot optimize its climbing path.

Treebot was designed by Tin Lun Lam and Yangsheng Xu from The Chinese University of Hong Kong, and their research was presented at ICRA last week in a paper entitled "Treebot: Autonomous Tree Climbing by Tactile Sensing."

How Robots Can Learn From Your Pathetic Failures

Robots that can learn from demonstrations are capable of watching a human do something, and then copying (or even improving on) the motions that the human makes in order to learn new tasks. This is fine if you're good at the task that you're trying to teach the robot, but if you're bad at it, you and your robot student are going to run into some problems.

Daniel H. Grollman and Aude Billard from the Learning Algorithms and Systems Laboratory at EPFL, in Lausanne, Switzerland, are working on ways for robots to learn from demonstrations, even if those demonstrations are failures. In the following video, a human shows a robot how to prop up a block and toss a ball into a basket without actually succeeding at either task:

The researchers developed learning algorithms that allow the robot to analyze your behavior and mathematically determine what parts of the task you're getting right (or you think you're getting right) and where you're screwing up, and eventually, it teaches itself to perform the task better than you. At the moment, the robot isn't using an adaptive learning approach; it's just trying different things until it accomplishes the objective. But part of the appeal of this system is that it uses failed human examples to help it know the extent of what it should try. I can almost hear a robotic voice saying, "Human, it's okay to fail."
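As a toy illustration of that last point (exploration bounded by failed examples), here's a sketch in which a robot samples new attempts only within the region a human's failed demonstrations already covered. This is not the authors' algorithm, just the general flavor; try_task and the parameter vectors are hypothetical stand-ins.

    import random

    def learn_from_failures(failed_demos, try_task, max_attempts=200):
        """Toy sketch: failed_demos is a list of parameter vectors a human
        tried (and failed with); try_task(params) -> True on success.
        New attempts are sampled inside a slightly inflated copy of the
        region the human explored, rather than searching blindly."""
        dims = len(failed_demos[0])
        lo = [min(d[i] for d in failed_demos) for i in range(dims)]
        hi = [max(d[i] for d in failed_demos) for i in range(dims)]
        margin = [0.1 * (h - l + 1e-9) for l, h in zip(lo, hi)]
        for _ in range(max_attempts):
            candidate = [random.uniform(l - m, h + m)
                         for l, h, m in zip(lo, hi, margin)]
            if try_task(candidate):
                return candidate          # found a successful variation
        return None                       # ran out of attempts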

Grollman and Billard describe their work in a paper, "Donut As I Do: Learning From Failed Demonstrations," presented last week at the IEEE International Conference on Robotics and Automation (ICRA), in Shanghai, and they were honored with the Best Cognitive Robotics Paper award. Congrats!

[ Post updated to correct for the fact that the robot can't yet infer what your overall goal is... But they're working on it! ]

Lingodroid Robots Invent Their Own Spoken Language


When robots talk to each other, they're not generally using language as we think of it, with words to communicate both concrete and abstract concepts. Now Australian researchers are teaching a pair of robots to communicate linguistically like humans by inventing new spoken words, a lexicon that can then be taught to other robots to generate an entirely new language.

Ruth Schulz and her colleagues at the University of Queensland and Queensland University of Technology call their robots the Lingodroids. The robots consist of a mobile platform equipped with a camera, laser range finder, and sonar for mapping and obstacle avoidance. The robots also carry a microphone and speakers for audible communication between them.

To understand the concept behind the project, consider a simplified case of how language might have developed. Let's say that all of a sudden you wake up somewhere with your memory completely wiped, not knowing English, Klingon, or any other language. And then you meet some other person who's in the exact same situation as you. What do you do?

What might very well end up happening is that you invent some random word to describe where you are right now, and then point at the ground and tell the word to the other person, establishing a connection between this new word and a place. And this is exactly what the Lingodroids do. If one of the robots finds itself in an unfamiliar area, it'll make up a word to describe it, choosing a random combination from a set of syllables. It then communicates that word to other robots that it meets, thereby defining the name of a place.
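A toy version of that naming behavior might look like the sketch below, with a fixed syllable set and a simple shared lexicon; the real Lingodroids, of course, do this on physical robots with onboard maps and audible speech, and every name and value here is illustrative.

    import random

    SYLLABLES = ["ku", "zo", "pi", "za", "re", "ja", "vu", "pe", "hi", "ya"]

    class Lingodroid:
        def __init__(self):
            self.lexicon = {}                 # word -> place (map coordinates)

        def name_place(self, place):
            """If the current place has no name yet, coin one from random
            syllables and remember it."""
            for word, known_place in self.lexicon.items():
                if known_place == place:
                    return word
            word = "".join(random.choice(SYLLABLES) for _ in range(2))
            self.lexicon[word] = place
            return word

        def hear(self, word, place):
            """Adopt another robot's word for the place where it was spoken."""
            self.lexicon.setdefault(word, place)

    # Two robots meet at grid cell (3, 7); one names it, the other learns it.
    a, b = Lingodroid(), Lingodroid()
    word = a.name_place((3, 7))
    b.hear(word, (3, 7))
    print(word, b.lexicon[word])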


From this fundamental base, the robots can play games with each other to reinforce the language. For example, one robot might tell the other robot “kuzo,” and then both robots will race to where they think “kuzo” is. When they meet at or close to the same place, that reinforces the connection between a word and a location. And from “kuzo,” one robot can ask the other about the place they just came from, resulting in words for more abstract concepts like direction and distance:

[Image: The words the robots agreed on for direction and distance concepts; for example, “vupe hiza” means a medium-long distance to the east.]

After playing several hundred games to develop their language, the robots agreed on directions within 10 degrees and distances within 0.375 meters. And using just their invented language, the robots created spatial maps (including areas that they were unable to explore) that agree remarkably well:

[Image: The robots' spatial maps]

In the future, researchers hope to enable the Lingodroids to "talk" about even more elaborate concepts, like descriptions of how to get to a place or the accessibility of places on the map. Ultimately, techniques like this may help robots to communicate with each other more effectively, and may even enable novel ways for robots to talk to humans.

Schulz and her colleagues -- Arren Glover, Michael J. Milford, Gordon Wyeth, and Janet Wiles -- describe their work in a paper, "Lingodroids: Studies in Spatial Cognition and Language," presented last week at the IEEE International Conference on Robotics and Automation (ICRA), in Shanghai.

READ ALSO:

Awesomely Bad Ideas: Teaching a Robot to Sword Fight
Fri, May 13, 2011

Blog Post: Georgia Tech has given a robot a sword and told it that humans are out to get it, all in the name of safety

Google Shows Us Why We All Need Robot Cars
Fri, March 04, 2011

Blog Post: Robot cars can drive like maniacs, as Google demonstrates, but it's all in the name of safety

PR2 Robot Can Scan And Bag Your Groceries
Wed, May 11, 2011

Blog Post: Stanford University has their PR2 picking up items in a checkout line, scanning them, and putting them in a bag for you

Little Rolling Robot Transforms Into Helicopter
Mon, May 16, 2011

Blog Post: How do you get a ground robot over an obstacle? Just turn it into a helicopter

Disney Working on Artistic Robot Swarms


I didn't know that Disney had a research arm, but they do, and the work that they're presenting at the IEEE International Conference on Robotics and Automation (ICRA) here in Shanghai might give a little peek into one of their future theme park attractions.

What Disney researchers, working with colleagues at ETH Zurich, want to do is develop algorithms that instruct swarms of robots on how to move into different patterns using smooth and visually appealing transitions. It's kind of like a marching band, except with lots of little robots that light up in pretty colors:

The algorithms weren't specifically designed to make the transitions look nice; instead, the researchers tested a variety of strategies and picked the prettiest one. Next, they're going to toss some obstacles into the mix and see how well the robots handle moving patterns, as opposed to static shapes.
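Conceptually, the problem has two halves: decide which robot goes to which spot in the target pattern, then move everyone there smoothly. Here's a deliberately simple sketch using greedy nearest-spot assignment and straight-line interpolation; the actual system is considerably more sophisticated about both, and all names and coordinates here are made up.

    import math

    def assign_goals(positions, goals):
        """Greedy nearest-goal assignment (illustrative only): each robot,
        in turn, takes the closest unclaimed spot in the target pattern."""
        remaining = list(goals)
        assignment = []
        for p in positions:
            best = min(remaining, key=lambda g: math.dist(p, g))
            remaining.remove(best)
            assignment.append(best)
        return assignment

    def interpolate(positions, goals, steps=50):
        """Straight-line motion toward the assigned goals, one frame per step."""
        frames = []
        for k in range(steps + 1):
            t = k / steps
            frames.append([(px + t * (gx - px), py + t * (gy - py))
                           for (px, py), (gx, gy) in zip(positions, goals)])
        return frames

    robots = [(0, 0), (1, 0), (2, 0)]
    pattern = [(0, 2), (1, 3), (2, 2)]           # a little "arch"
    frames = interpolate(robots, assign_goals(robots, pattern))
    print(frames[-1])                            # robots end up on the pattern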

This isn't the only interesting paper that Disney Research is presenting at ICRA. They're also working on developing a control system for a robot that can walk around on a ball:

[Image: Disney's ball-balancing robot]

As to whether or how any of this is going to make it into a Disney theme park near you, well, you'll just have to keep your fingers crossed and use your imagination.

The Disney and ETH researchers -- Javier Alonso-Mora, Andreas Breitenmoser, Martin Rufli, Roland Siegwart, and Paul Beardsley -- describe the work in a paper, "Multi-Robot System for Artistic Pattern Formation," presented yesterday at ICRA.

Little Rolling Robot Transforms Into Helicopter

Designing a robot that can traverse variable terrain usually involves a number of unsatisfactory compromises. You can go with a flying robot, which will almost never get stuck, but is of limited use in detailed sensing and can't operate for very long. Or, you can go with a ground robot, which is much more efficient, but also much more likely to run into an obstacle that it can't get around.

An ideal platform would spend most of its time on the ground but still be able to fly when it needs to. That's a tricky thing to pull off, since the design of something that drives is fundamentally different from the design of something that flies. Researchers from the Center for Distributed Robotics at the University of Minnesota have managed to create a single robot that does both effectively:

This is just the first (very fragile) prototype; the next version is much more robust and relies on a non-coaxial system for flight:

[Photos: The hybrid land/air robot]

As it turns out, it was actually more efficient to design the robot with two completely independent motor systems than to try to design a transmission that would allow the low-speed wheel motors to power the rotors or vice versa. And even then, it's still extremely complicated: the rotor-folding mechanism alone cost almost US $20,000 to create. With that in mind, future development of this platform will focus on making things simpler, while also teaching the robot to take advantage of its hybrid nature when it comes to autonomous path planning.
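That last point is essentially an energy trade-off: driving is cheap per meter and flying is expensive, so a planner should only hop when the ground detour gets long enough. Here's a toy comparison with invented per-meter energy costs, purely to illustrate the decision, not the Minnesota group's planner.

    def choose_mode(drive_detour_m, fly_over_m,
                    drive_cost_j_per_m=5.0, fly_cost_j_per_m=200.0):
        """Toy planner decision: drive the long way around an obstacle or
        fly straight over it? The per-meter energy costs are invented for
        illustration; driving is assumed far cheaper than flying."""
        drive_energy = drive_detour_m * drive_cost_j_per_m
        fly_energy = fly_over_m * fly_cost_j_per_m
        return ("drive", drive_energy) if drive_energy <= fly_energy else ("fly", fly_energy)

    print(choose_mode(drive_detour_m=100.0, fly_over_m=2.0))  # long detour: flying wins
    print(choose_mode(drive_detour_m=5.0, fly_over_m=2.0))    # short detour: driving wins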

The researchers -- Alex Kossett and robotics professor and IEEE Fellow Nikolaos Papanikolopoulos -- describe their work in a paper, "A Robust Miniature Robot Design for Land/Air Hybrid Locomotion," presented yesterday at the IEEE International Conference on Robotics and Automation (ICRA), in Shanghai.


Awesomely Bad Ideas: Teaching a Robot to Sword Fight

In a paper presented this week at the IEEE International Conference on Robotics and Automation (ICRA), in Shanghai, Georgia Tech researcher Tobias Kunz starts thusly: "In order to deploy safe and flexible robots for service and automation, robots must act safely in close contact with humans." Accompanying this innocuous first sentence is this picture:

[Image: Human-robot sword fight]

You're probably wondering, at this point, just what the heck a robot with a sword has to do with safety of all things. And why do people keep giving swords to robots anyway? There was Hubo II dancing with one last year, and just three days ago we saw two industrial manipulators dueling with lightsabers.

As it turns out, Kunz says that one good way to get a robot to be dynamically safe around humans is to just program it to think of humans as adversaries. Huh? You may still be wondering why giving a robot a sword and teaching it to think of humans as bad guys is somehow a good thing, but bear with me.

On a fundamental level, a lot of sword fighting is about predicting the intentions of a human opponent and then deciding how to respond. By teaching a robot to defensively (just defensively, mind you) block incoming sword attacks, the idea is to create a general model that robots can use to react quickly and safely to the unpredictable movements of nearby humans.
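Stripped down to a cartoon, the defensive problem looks like: watch the opponent's sword tip, extrapolate where it's going, and move to intercept. The sketch below illustrates that idea with a constant-velocity prediction and a few hypothetical guard poses; it is not the planning approach from the paper.

    def predict_tip(position, velocity, horizon=0.2):
        """Constant-velocity extrapolation of the attacker's sword tip."""
        return tuple(p + v * horizon for p, v in zip(position, velocity))

    def choose_block(predicted_tip, guard_poses):
        """Pick the guard pose whose blade position is closest to where the
        attack is headed. The guard poses are hypothetical named postures."""
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(guard_poses, key=lambda g: dist2(guard_poses[g], predicted_tip))

    guards = {"high": (0.0, 0.4, 1.6), "mid": (0.0, 0.5, 1.1), "low": (0.0, 0.5, 0.6)}
    tip_now, tip_vel = (0.8, 0.0, 1.5), (-2.0, 0.5, -1.0)
    print(choose_block(predict_tip(tip_now, tip_vel), guards))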

Plus, come on, it's just awesome. Here's a simulation of the work in progress:

So far, the sword fighting is only taking place in a computer, but as you can see from the pic, Georgia Tech does apparently have a real robot that's capable of wielding a real(ish) sword. Letting this thing loose against a pack of real-life ninjas is clearly the next logical step.

Kunz did the work with colleagues Peter Kingston, Mike Stilman, and Magnus Egerstedt, and their ICRA paper was titled, "Dynamic Chess: Strategic Planning for Robot Motion."


Aldebaran Robotics To Open Source Code of Nao Robot


Aldebaran Robotics has just announced that it's going to open the source code of its popular humanoid robot Nao.

The French firm has been developing Nao over the past five years, turning an initially obscure robot with a quirky name into a widely adopted research and education platform used to study human-robot communication, help treat hospitalized children, and play soccer.

It's not yet clear exactly which parts of Nao's software platform will become open source and which will remain proprietary. The company said in a release that it will "share a significant part of [Nao's] source code with the research and developer community by the end of 2011."

Last year, when I interviewed Bruno Maisonnier, Aldebaran's founder and CEO, he hinted that making Nao more open was a desired move for the company, aimed at both building a stronger developer community and improving the robot's overall capabilities. Here's what he says in the release:

“Building robotics applications is challenging: applications are built upon many state-of-the-art complex technologies like speech recognition, object recognition, grasping, navigation and mapping. Applications need to be secure and be able to run in a constrained environment with limited resources. With over five years of experience in developing embedded systems for robotics platforms, we want to share our middleware, our core communication library, our cross-platform build tools and other essential modules with researchers, developers and emerging projects in humanoid robotics."

Open source is becoming a powerful trend in robotics. A year ago, another French company, Gostai, maker of robotics software and the Jazz telepresence robot, also announced it was opening the code of Urbi, its flagship product. This week, RoboDynamics introduced a US $3,000 personal robot called Luna that will likely run on open source software. Also this week, Google unveiled its Android Open Accessory, a project combining its mobile operating system, Android, and the popular DIY open source microcontroller Arduino. And then, of course, there's Willow Garage, the Silicon Valley firm that is perhaps the strongest backer of open source robotics and whose Robot Operating System, or ROS, and other open source projects are becoming ever more popular.

Not everyone in robotics has embraced open source, though. At least one major figure has said he doesn't favor the trend. In an interview with Forbes, iRobot CEO Colin Angle called into question the benefits of open source for the industry:

Angle also questions the benefits of some open-source robotic initiatives. He notes that giving away technology can hurt any business. “Solving the hard problems of robotics and giving those solutions away is the worst thing someone can do for the robotics industry,” says Angle. “You have to understand the economic engine behind things.”

That's a provocative statement. I'm sure many people would question the claim that open source robotics is bad for consumers. But who knows, maybe there are others who would agree with him? What do you think?

Updated May 16, 2011: Fixed date when Urbi was released as open source.

Image: Aldebaran Robotics

Video: Reporter Meets, Hugs Creepy Telenoid Robot


Last year, Hiroshi Ishiguro, a roboticist at Osaka University who's built android copies of himself and other people, shocked the world with his strangest creation yet: a creepy robotic creature called Telenoid that looked like a supersized fetus. Ishiguro envisioned the Telenoid as a device a person would teleoperate to communicate with others.

The robot received a lot of attention, but there weren't any really good videos showing how the thing operated. What we needed was an intrepid reporter willing to do a, uh, hands-on test with the bot. Now IDG has done just that, bringing us great footage of the Telenoid talking and wriggling under the grasp of its somewhat creeped-out correspondent.

It's clear from the video that the Telenoid can move its head and change its facial expressions, although only slightly, but still more than I expected. So for the person holding it, it's more than just a fancy, sperm-shaped giant telephone. Whether you can feel the operator's "presence" through the robot, I don't know, but according to the reporter who tried it, you might even want to hug it.

Researchers Smash Super-Tough Robot Arm With Baseball Bat

[Photo: DLR Hand Arm System]

Why is this man about to smash his robot with a bat?

Don't worry. It's just another day at the German Aerospace Center (DLR), where researchers have built what's probably the most advanced -- and toughest -- robotic arm ever. And they aren't afraid of testing its limits.

It's called the DLR Hand Arm System [photo, below]. It has an anthropomorphic design and packs 52 motors, ultra-miniaturized control electronics, a supercapacitor-based power supply, and a web of synthetic tendons stronger than Kevlar. But what makes it stand out compared to conventional systems is its ability to withstand collisions, thanks to ingeniously designed joints and actuators that can absorb and dissipate energy, much like our own arms and hands do.

Which brings us to their latest test with the aforementioned wooden bat. The DLR researchers are known for their bold experiments. They've equipped a robot with a knife and tested a stabbing-avoidance algorithm on themselves. They've used massive industrial manipulators to mercilessly destroy crash test dummies. Last year, they demonstrated that their super-strong robot hand could endure a beating from a hammer. (Blogs promptly dubbed it the "Terminator hand.")

This time, they wanted to assess the robustness of the upper arm and shoulder. Would it still work after a violent collision? There was only one way to find out:

Did you notice how, after getting struck, the arm gently touches the yellow ball, exactly as it had done before? That's to show that its mechanisms and electronics are as good as new. If I could design a robot like this, I would be grinning just like the guy in the video.

The researchers, members of DLR's Institute of Robotics and Mechatronics, described the overall system in detail for the first time today at the IEEE International Conference on Robotics and Automation (ICRA), in Shanghai.

Markus Grebenstein, the Hand Arm System's lead designer, tells me that robustness is essential if we want to deploy service robots in the real world, where collisions are likely to happen. Even small shocks, he says, can damage conventional robots, which rely on motors coupled to joints in mechanically stiff configurations.

[Photo: DLR Hand Arm System]

This means that, in a collision, shock and vibration will spread through these rigid structures and, because they lack a mechanism to dissipate the energy, they will tear themselves apart. Even newer impedance control techniques, which let robots move compliantly when a person or object pushes on them, can't help in collision situations, because feedback controllers and motors aren't fast enough.

The DLR team noticed this limitation in its own robots. One of them, Justin, is able to catch a ball thrown at him -- and even two balls at once -- most of the time. But in some cases, when the robot computes an inaccurate trajectory for the ball, it stretches its fingers out completely, and the 80-gram ball, hitting the fingertips, can damage the hand. By comparison, a handball goalkeeper withstands the impact of a 480-gram ball moving at 120 kilometers per hour, or nearly five times as fast, and delivering 100 times as much energy.
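For the curious, those figures are easy to sanity-check with the kinetic energy formula. The thrown ball's speed isn't given, so the ~25 km/h value below is my own assumption; it lands in the same ballpark as the comparison above.

    def kinetic_energy_j(mass_kg, speed_kmh):
        v = speed_kmh / 3.6                    # convert km/h to m/s
        return 0.5 * mass_kg * v ** 2

    robot_catch = kinetic_energy_j(0.080, 25)      # ~1.9 J (assumed ~25 km/h throw)
    goalkeeper  = kinetic_energy_j(0.480, 120)     # ~267 J
    print(round(robot_catch, 1), round(goalkeeper), round(goalkeeper / robot_catch))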

[Diagram: Floating spring joint with variable stiffness actuation]

The solution, the DLR group figured, was to completely rethink the design of robotic joints and actuators. For the upper arm, the researchers designed intricate shock-absorbing structures that they call "floating spring joints." As the name implies, each joint consists of two cam disks coupled to a spring, which helps dissipate energy and damp vibrations. The mechanism [diagram, above] also differs from conventional robot joints in that it uses not one but two motors: a powerful motor effectively moves the joint, while a smaller one adjusts the position of the cams and the spring to vary the stiffness of the joint. A harmonic drive gear interfaces the two motor modules.
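In control terms, a variable-stiffness joint behaves roughly like a torsional spring whose rest position is set by the main motor and whose spring constant is set by the stiffness motor. Here's a deliberately idealized model of that behavior (not DLR's actual joint dynamics), just to show why a soft setting protects the mechanism while a stiff setting keeps positioning precise.

    def joint_torque(link_angle, motor_angle, stiffness, damping=0.05, link_velocity=0.0):
        """Idealized variable-stiffness joint: the elastic element pulls the
        link toward the motor-set position with an adjustable spring constant.
        Units and values are illustrative, not DLR's."""
        return -stiffness * (link_angle - motor_angle) - damping * link_velocity

    # Stiff setting: small deflection produces a strong restoring torque (precise positioning).
    print(joint_torque(link_angle=0.52, motor_angle=0.50, stiffness=300.0))
    # Soft setting: the same deflection produces far less torque, so an impact
    # deflects the joint instead of hammering the gearbox.
    print(joint_torque(link_angle=0.52, motor_angle=0.50, stiffness=20.0))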

[Diagram: Antagonistic variable stiffness actuation in the hand]

Similarly, the hand also relies on variable stiffness actuation. As we described before in detail, the hand has 38 tendons, each connected to an individual motor. Two motors drive each finger articulation [diagram, right] in an antagonistic arrangement (when the motors turn in one direction, the articulation bends one way; when the motors reverse direction, it bends the other way). Coupled to the tendons, which are made from a super strong synthetic fiber called Dyneema, is a spring-based nonlinear elastic mechanism. It allows the fingers to store and release energy -- again, just like our own hands do.
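The antagonistic arrangement gives each joint two knobs: bias one tendon against the other for net torque, or tension both together to stiffen the finger, much as our own muscle pairs do. A rough sketch, assuming a simple quadratic tendon spring (the actual elastic mechanism's characteristics are my assumption, not from the paper):

    def tendon_tension(stretch, k=2.0e6):
        """Nonlinear (quadratic) elastic tendon: tension grows with the
        square of stretch, so co-contraction raises effective stiffness."""
        return k * max(stretch, 0.0) ** 2

    def joint_output(flexor_stretch, extensor_stretch, pulley_radius=0.01):
        """Net joint torque from an antagonistic tendon pair (illustrative)."""
        net = tendon_tension(flexor_stretch) - tendon_tension(extensor_stretch)
        return net * pulley_radius

    print(joint_output(0.004, 0.002))   # biased pair: net flexion torque
    print(joint_output(0.004, 0.004))   # equal co-contraction: zero net torque, stiffer joint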

And why vary the stiffness of the fingers anyway? Because different situations demand different levels of stiffness. If you embed lots of springs and other shock-absorbing structures into your arm system, it will be able to withstand collisions, but when you try to pick up a grape from a table, the arm will vibrate and fail.

By varying its stiffness, the DLR system can deliver both robustness and accuracy. And it can perform tasks that other robots would have difficulty doing, such as using a hammer to drive nails into a wooden board. Watch:

The complete Hand Arm System is an engineering masterpiece. The shoulder has 3 degrees of freedom, and the elbow has 1 DoF. Each hand alone has 19 DoF, with two additional DoF on the wrist, which uses a "spherical antiparallelogram mechanism" powered by four motors, allowing for 30-degree side motion and 90-degree flexion/extension.

I mean, just look at this thing [photo, below]. It's crammed with motors, electronics, and cables, and yet it can endure vicious mechanical abuse without losing performance.

Now, it would be nice to see more dexterous manipulations. Can the DLR hand grasp objects of various sizes and shapes? Can you put two arms together and perform bimanual manipulations? Can a person wearing a sensor suit teleoperate the robot as the same DLR team has demonstrated with Justin?

I suspect the researchers will be working on some of these capabilities in the near future. One thing is certain, though: Expect more awesome videos from these guys. (UPDATE: As if to tease me, Grebenstein just sent me the photos below.)

[Photo: DLR Hand Arm System]

[Photo: DLR Hand Arm System holding a cup]

Updated May 13: Corrected explanation of Justin catching balls; added more photos.

Images and videos: DLR/IEEE Spectrum

