Automaton

No Couch Is Safe from the CLASH Cloth-Climbing Robot

UC Berkeley has a long history of developing innovative legged robots: There was ROACH, there was BOLT, there was DASH. DASH, a cockroach-inspired design, was a very simple, very fast hexapedal robot that could scuttle along the ground at 15 body lengths per second.

Now meet the latest addition to this family of robot bugs: CLASH, pictured above, is a vertically-enabled successor to DASH, and it's designed to zip up vertical or near-vertical cloth surfaces with the aid of tiny little spiny toes. It's sort of like what you'd get if you put DASH and SpinyBot together in a dark room along with a 3D printer and some Barry Manilow (or whatever it is robots are listening to these days).

For a vertical climbing robot, CLASH is surprisingly quick. It may actually be one of the quickest climbing robots in existence, able to move upwards at 24 centimeters per second, which is really quite a lot faster than it sounds:

Part of the reason that CLASH can scramble around so fast is that it's small and lightweight, with a simple but clever design. CLASH is 10 centimeters long and weighs only 15 grams. The back-and-forth climbing motion of its four legs (the back two are passive) is driven entirely by a single motor, which gives CLASH a brisk gait frequency of 34 strides per second.
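To put those numbers in perspective, here's some quick back-of-the-envelope arithmetic (plain Python, using only the figures quoted above):

```python
# Back-of-the-envelope check of CLASH's climbing numbers (figures from above).
body_length_cm = 10.0   # robot length
speed_cm_s = 24.0       # vertical climbing speed
stride_hz = 34.0        # gait frequency

print(speed_cm_s / body_length_cm)  # 2.4  -- body lengths climbed per second
print(speed_cm_s / stride_hz)       # ~0.71 -- centimeters gained per stride
```

In other words, CLASH climbs at about 2.4 body lengths per second, gaining roughly 7 millimeters with each of those 34 strides, which is why the motion looks more like a scurry than a series of lunges.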

The actual gripping and climbing technique is integrated into the beautiful series of linkages that connect CLASH's legs to its motor and to each other, making the mechanism completely passive all the way from initial grip to retraction. The battery and electronics are all onboard, and are located in the tail to help keep the robot balanced.

Next up is endowing CLASH with the ability to turn (which will likely involve adding a second actuator somewhere) and modifying the rear legs so that the robot can scamper along horizontal surfaces too. And while CLASH is currently restricted to climbing things like fabric and carpet that it can sink its claws into, other methods of passive adhesion (like some of that gecko tape) might give CLASH a little extra versatility.

"CLASH: Climbing Vertical Loose Cloth" was presented by P. Birkmeyer, A. G. Gillies, and R. S. Fearing from the University of California, Berkeley, at the IEEE International Conference on Intelligent Robots and Systems in San Francisco last week. Special thanks to Paul Birkmeyer for the CLASH videos, and for forgiving me for mistakenly suggesting that he was at Stanford, not Berkeley, which is just about the worst screw-up I could have possibly made.

[ UC Berkeley's Biomimetic Millisystems Lab ]

New Switchblade Robot Design is Leaner, More Agile

The first generation of UCSD's Switchblade robot used a battery pack on a big swingy arm-thing to alter its center of gravity enough to balance on its treads and climb stairs.

At the IEEE International Conference on Intelligent Robots and Systems last week, we spotted an updated version of Switchblade, which trades in the external movable mass for a slick compact case. Instead of compromising its balancing skills, this new design (and some extra brains) has made Switchblade more agile than ever: it can remain stable even when grad students push it with their sandal-clad feet:

This new form factor makes Switchblade a bit more appealing as a capable replacement for a variety of tactical robots (which shall remain nameless) that rely on infinitely less cool movable paddle tracks to get themselves over obstacles way less obstacle-y than what Switchblade is able to surmount.

Switchblade has been refined to reduce its cost and complexity, and according to its creator Nick Morozovsky, it's "well suited for a variety of socially-relevant applications, including reconnaissance, mine exploration, and search and rescue." So someone just needs to put it into action already, and give those fancy balancing tricks some practical applications.

[ UCSD Coordinated Robotics Lab ]

Monkeys Use Brain Interface to Move and Feel Virtual Objects

Image: A bidirectional brain-machine interface lets monkeys sense the texture of virtual objects.

Scientists have demonstrated that monkeys using a brain-machine interface can not only control a computer with their minds but also "feel" the texture of virtual objects in a computer.

This is the first-ever demonstration of bidirectional interaction between a primate brain and a virtual object.

In the experiment, described in a paper published today in the journal Nature, Duke University scientists equipped monkeys with brain implants that allowed the animals to control a virtual arm, shown on a computer screen, using only their thoughts. This part of the experiment was not a new result -- scientists, including the Duke team, have previously demonstrated that the brain can control advanced robotic devices and even learn to operate them effortlessly.

What's new is that, this time, the scientists are using the brain-machine interface not only to extract brain signals but also to send signals to the brain. The device is actually a brain-machine-brain interface. The monkeys were able to interpret the signals fed to their brains as a kind of artificial tactile sensation that allowed them to identify the "texture" of virtual objects.

"Someday in the near future, quadriplegic patients will take advantage of this technology not only to move their arms and hands and to walk again, but also to sense the texture of objects placed in their hands, or experience the nuances of the terrain on which they stroll with the help of a wearable robotic exoskeleton," study leader Miguel Nicolelis, a professor of neurobiology at Duke, in Durham, N.C., said in a statement.

Initially, the monkeys used their real hands to operate a controller and move their virtual limbs on the screen. During this part of the experiment, the researchers recorded brain signals to learn how to correlate the brain activity to the movement of the virtual arm [see illustration above]. Next, the researchers switched from hand control to brain control, using the brain signals to directly control the virtual arm; after a while, the animals stopped moving their limbs altogether, using only their brains to move the virtual hand on the screen.
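The decoding details aren't spelled out here, but the general recipe is the one described above: fit a mapping from recorded firing rates to arm movement while the monkey uses its hand, then reuse that mapping to drive the virtual arm from brain activity alone. Here's a minimal, purely illustrative sketch of that two-phase idea (synthetic data and a simple least-squares decoder -- not the Duke team's actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase 1 (hand control): record firing rates and the resulting arm velocities.
n_samples, n_neurons = 2000, 100
firing_rates = rng.poisson(5.0, size=(n_samples, n_neurons)).astype(float)
true_weights = rng.normal(size=(n_neurons, 2))  # hidden "tuning" of each neuron
hand_velocity = firing_rates @ true_weights + rng.normal(scale=5.0, size=(n_samples, 2))

# Fit a linear decoder from neural activity to 2D velocity by least squares.
decoder, *_ = np.linalg.lstsq(firing_rates, hand_velocity, rcond=None)

# Phase 2 (brain control): new neural activity alone now drives the virtual arm.
new_rates = rng.poisson(5.0, size=(1, n_neurons)).astype(float)
predicted_velocity = new_rates @ decoder
print(predicted_velocity)  # 2D velocity command for the virtual hand
```

Real decoders are considerably more sophisticated, but the structure the article describes -- train during hand control, then switch to brain control -- is the same.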

The monkeys used their virtual hand to explore three objects that appeared visually identical but had different "textures" -- each texture corresponding to a different electrical signal sent to the animals' brains. The researchers selected one of the objects as the "target," and whenever the monkeys located it, they received a sip of juice as a reward. After a small number of trials, the monkeys learned to quickly explore the virtual environment, feeling the textures of the objects to find the target.

Watch:

One of the monkeys got the task right more than 85 percent of the time; the other got it right about 60 percent of the time.

To allow the monkeys to control the virtual arm, the scientists implanted electrodes to record electrical activity of populations of 50 to 200 neurons in the motor cortex. At the same time, another set of electrodes provided continuous electrical feedback to thousands of neurons in the primary tactile cortex, allowing the monkeys to discriminate between objects based on their texture alone.

"It's almost like creating a new sensory channel through which the brain can resume processing information that cannot reach it anymore through the real body and peripheral nerves," Nicolelis said.

A major challenge was to "keep the sensory input and the motor output from interfering with each other, because the recording and stimulating electrodes were placed in connected brain tissue," according to a news report in Nature:

"The researchers solved the problem by alternating between a situation in which the brain-machine-brain interface was stimulating the brain and one in which motor cortex activity was recorded; half of every 100 milliseconds was devoted to each process."

The Duke researcher is leading an international consortium called the Walk Again Project, whose goal is to restore full body mobility to quadriplegic patients using brain-machine interfaces and robotic exoskeletons.

An avid fan of soccer, Nicolelis hopes to have a demonstration ready for 2014, with a quadriplegic child performing the kickoff for the FIFA World Cup in Brazil, his home country.

Images and video: Duke University

Japanese Researchers Developing Robotic Chameleon, Tongue First

Tomofumi Hatakeyama and Hiromi Mochiyama have not yet created a robotic chameleon like the (completely fake) one in the above pic, but they have started in on one of the most important parts: the tongue. Chameleons can shoot their tongues out to capture prey in just three one-hundredths of a second, and then reel their tongues back in and chow down. Seems like a handy thing for a robot to be able to do, right? Sure, why the heck not!

This system is deceptively simple, relying on an air cannon of sorts to fire a magnetic projectile attached to a thin elastic cord. Over 90 percent of the time, the robotic tongue can snap up magnets dropped 0.7 meter away, taking barely a tenth of a second to traverse the distance, and making the entire round trip in another tenth. It's wicked quick, and can nail almost exactly the same spot in mid-air every time.

Obviously, there are a few reasons why this particular version is probably not going to replace a real chameleon any time soon. For one, it only works on magnetic stuff, and in order to make the catch, it needs a break-beam sensor to tell it when to fire. But there are ways to get around both of these issues, like maybe some gecko-foot tape plus a laser sensor system or something. The researchers want this thing to ultimately shoot out to 10 meters (!), and they're planning to mount it on some kind of mobile robot platform that will scuttle around and catch cockroaches and other bugs. You know, humanely.

"Shooting Manipulation System with High Reaching Accuracy" was presented by Tomofumi Hatakeyama and Hiromi Mochiyama from the University of Tsukuba at the IEEE International Conference on Intelligent Robots and Systems in San Francisco last week.

Image: Robot chameleon rendering via Tommix on deviantART

PR2 Can Now Fetch You a Sandwich from Subway

Want a sandwich? Yeah, me too. PR2 has learned how to fetch stuff from the fridge, which is great and all, but thanks to a technique called semantic search, it can now bring you a sandwich when it's not even sure where the sandwich is.

"Semantic search" is simply the ability to make inferences about an object based on what is known about similar objects and the environment. It sounds complicated, but it's really just a computerized version of what we humans think of as "common sense." For example, if someone asks you to bring them a cup without telling you exactly where the cup is, you're probably clever enough to infer that cups can be found in drawers or cabinets or dishwashers, and that drawers and cabinets and dishwashers are all usually located in a kitchen, so you can go to the kitchen, poke around for a little bit, and find a cup. Semantic search allows robots to do the same sort of thing.

The advantage of this technique is that it gives a robot the ability to infer things it doesn't know from things it does know, and to reason its way through parts of the world it's less familiar with. Additionally, the robot can add to its knowledge base to quickly adapt to new places and people with weird habits. So like, if you're one of those people who stores peanut butter in the bathroom, the robot can start associating peanut butter with bathrooms.

Weirdo.
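To make the idea concrete, here's a toy sketch of that kind of lookup (hypothetical facts and relation names, not the knowledge base actually used in the paper):

```python
# Toy semantic-search sketch: hypothetical facts and relations, not the actual
# knowledge base from the paper. A query walks an "is-a" link up to a category,
# then a "found-in" link down to candidate locations worth searching.

is_a = {"sandwich": "food", "peanut butter": "food", "cup": "tableware"}
found_in = {"food": ["kitchen", "restaurant"], "tableware": ["kitchen", "dishwasher"]}
observed_in = {}  # object-specific facts the robot learns as it goes

def candidate_locations(obj):
    category = is_a.get(obj, obj)
    return observed_in.get(obj, []) + found_in.get(category, [])

print(candidate_locations("cup"))       # ['kitchen', 'dishwasher']
print(candidate_locations("sandwich"))  # ['kitchen', 'restaurant']

# Learning a person-specific habit on the fly:
observed_in["peanut butter"] = ["bathroom"]
print(candidate_locations("peanut butter"))  # ['bathroom', 'kitchen', 'restaurant']
```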

The following demo, from the University of Tokyo and Technische Universität München, puts semantic search to the test by tasking a PR2 with fetching a sandwich. The PR2 has no detailed information on sandwiches, but its database tells it that sandwiches are a type of food, and that food can be found in kitchens and restaurants, and from that, it figures out where to look:

PR2, which already knows how to get drinks and bake cookies, managed to go from a simple "bring me a sandwich" command to ordering a sandwich at a Subway in a separate building, all entirely autonomously. Yeah, it had a little trouble with the elevator buttons, but let's think about the big picture: using techniques like these, we're getting closer to being able to give robots vague and general commands and having them figure out what we really want and how to make it happen.

"Semantic Object Search in Large-scale Indoor Environments" by Manabu Saito, Haseru Chen, Kei Okada, Masayuki Inaba, Lars Kunze, and Michael Beetz from the University of Tokyo and Technische Universität München was presented last week at the IEEE International Conference on Intelligent Robots and Systems in San Francisco.

Crawling iCub Is the Robot Baby You Never Wanted

As far as robot babies go, iCub is not the weirdest of the bunch. The fact that it's also one of the most capable robot babies out there doesn't necessarily help its case, though, since watching it crawl around the floor is a tad unnatural, to say the least:

iCub, if you remember, is designed to emulate a three-and-a-half-year-old child, although personally I don't know any kids that young who I'd trust with a bow and arrow (or a lit torch). In addition to these potentially destructive hobbies, and crawling, iCub is intended to explore how human cognition develops, using facial expressions and adaptive learning techniques. Sometimes those facial expressions don't work out so well, though, especially when iCub is being calibrated:

Yeah, uh, I don't know exactly what button they pushed to get iCub to look like that, but I just wish I had one attached to me somewhere. Here's a photo of iCub enjoying San Francisco, and below is one more pic of the bot looking slightly more normal, from the expo floor at the IEEE International Conference on Intelligent Robots and Systems last week:

[ iCub ]

Asimo Can Copy Your Dance Moves

Asimo, the Honda humanoid, one of the world's most loved robots, was showing off its dance moves this week at the IEEE International Conference on Intelligent Robots and Systems in San Francisco.

The robot was here to demonstrate some new tricks it's been learning from scientists at the Honda Research Institute in Mountain View, Calif.

Victor Ng-Thow-Hing, Behzad Dariush, and colleagues work with Asimo, seeking to develop robotics technologies that can assist people, especially in terms of mobility.

In one demonstration, the scientists showed how Asimo can mimic a person's movements in real time. The researchers use Microsoft's Kinect 3D sensor to track selected points on a person's upper body, and their software uses an inverse kinematics approach to generate control commands that make Asimo move. The software prevents self-collisions and excessive joint motions that might damage the robot, and it's integrated with Asimo's whole-body controller to maintain balance. The researchers say that the ability to mimic a person in real time could find applications in robot programming and interactive teleoperation, among other things.
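For a rough sense of what that retargeting step looks like, here's a generic damped-least-squares inverse-kinematics update (a common textbook approach; the Jacobian and numbers below are hypothetical, and this is not Honda's actual controller):

```python
import numpy as np

def ik_step(jacobian, current_joints, target_delta, damping=0.1, max_step=0.05):
    """One damped-least-squares IK update: nudge the joints so the robot's hand
    follows the displacement of the tracked Kinect point, with each joint's step
    clamped so no single frame produces a dangerously large motion."""
    J = jacobian
    JJt = J @ J.T
    # Damping keeps the solution well behaved near singular arm configurations.
    dq = J.T @ np.linalg.solve(JJt + (damping ** 2) * np.eye(JJt.shape[0]), target_delta)
    dq = np.clip(dq, -max_step, max_step)  # limit per-frame joint motion
    return current_joints + dq

# Hypothetical example: a 3-DOF arm chasing a small end-effector displacement.
jacobian = np.array([[1.0, 0.5, 0.1],
                     [0.0, 1.0, 0.4],
                     [0.0, 0.0, 1.0]])
joints = np.zeros(3)
target_delta = np.array([0.02, -0.01, 0.00])  # from the tracked Kinect point
print(ik_step(jacobian, joints, target_delta))
```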

In another demo, the scientists showed how they're using gestures to improve Asimo's communication skills. They're developing a gesture-generating system that takes any input text and analyzes its grammatical structure, timing, and choice of word phrases to automatically generate movements for the robot. To make the behavior more realistic, the scientists used a vision system to capture humans performing various gestures, and then they incorporated these natural movements into their gesture-generating system.
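Here's a toy sketch of that text-to-gesture idea, with a hypothetical word-to-gesture table and timing scaled to phrase length (an illustration of the concept, not Honda's system):

```python
# Toy text-to-gesture planner: hypothetical mapping from trigger words to gesture
# primitives, with each gesture's timing scaled to the length of its phrase.

GESTURES = {"hello": "wave", "this": "point", "big": "wide_arms", "you": "open_palm"}
SECONDS_PER_WORD = 0.4  # assumed speech rate used to time the motion

def plan_gestures(sentence):
    plan, t = [], 0.0
    for phrase in sentence.lower().rstrip(".!?").split(","):
        words = phrase.split()
        duration = len(words) * SECONDS_PER_WORD
        gesture = next((GESTURES[w] for w in words if w in GESTURES), "beat")
        plan.append((round(t, 2), gesture, round(duration, 2)))
        t += duration
    return plan

print(plan_gestures("Hello everyone, this robot is really big!"))
# [(0.0, 'wave', 0.8), (0.8, 'point', 2.0)]
```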

Here's a video showing these two demos:

This was my first face-to-face encounter with Asimo, and upon close inspection I noticed something on Asimo's face that I didn't know was there. Take a look at the photo below. Can you see it?

Photo: Honda's Asimo humanoid robot, smiling.

Photos: Evan Ackerman; video: Erico Guizzo and Evan Ackerman

Boston Dynamics' AlphaDog Quadruped Robot Prototype on Video

Looks like one of those Boston Dynamics prototype videos that we were treated to on Tuesday here at the IEEE International Conference on Intelligent Robots and Systems has been approved for public release by DARPA. It shows Boston Dynamics' gigantic new quadruped, which is apparently not called "BullDog" as we were told a few days ago. Instead, the official name is now "AlphaDog," but it may as well be "HugeAndAwesomeDog." Seriously, check this beast out, and make sure to listen very, very closely:

Badass. Oh, and if you were listening, you may have noticed that AlphaDog does not sound like a swarm of killer zombie bees. Amazing!

A couple of notes on the video: those weights that AlphaDog is carrying in a few of the clips weigh a total of 400 pounds (180 kilograms), and the robot will be able to carry that load up to 20 miles (32 kilometers) over the course of 24 hours without having to refuel. At the end of the running demo (just after the 45-second mark), the robot collapses into the safety frame simply because it ran out of room, not because of any kind of mechanical problem. And notice how two people pushing as hard as they can don't faze AlphaDog in the least; in the event that it does tip over for some reason, it has no trouble self-righting, which is a useful new feature.

As cool as BigDog was (and is), its relatively limited payload, range, and awful noise kept it from being a realistically deployable system. AlphaDog, on the other hand, looks like it's getting very close to something that we could see out in the field, using GPS navigation and computer vision to follow soldiers while carrying their gear over any kind of terrain. Boston Dynamics' schedule has the first walk-out of AlphaDog taking place sometime in 2012, when DARPA and the U.S. Marines will begin to put the robot to the test for real.

[ Boston Dynamics LS3 AlphaDog ]

Tiny Robot Makes Big Jumps with Explosive Microrockets

We've seen all kinds of crazy jumping robots, from humanoids to grasshoppers to soft and flexible spheres. But when you start making small robots, like seriously small robots (on the millimeter scale), you have to find new ways to get them to jump, and the Army Research Laboratory has teamed up with the University of Maryland to develop a couple clever ideas.

Pictured above is a 4-millimeter-long robot, complete with a power source, an integrated control system, and light sensors. To move, it relies on the rapid conversion of stored chemical energy to gas in a chemical reaction, which is just a fancy way of saying either "rocket motor" or "controlled explosion." Underneath the robot is a small chip of nanoporous silicon that gets infused with a sodium perchlorate oxidizer, and when a current heats up the chip, it ignites, propelling the robot upward. Initial tests have yielded a jump height of about eight centimeters, which doesn't sound like much, but the robot is so small that it's still outjumping its own size by a factor of 20.

The other jumping bot that these researchers have come up with is a bit more traditional, using microfabricated elastomer springs to store up energy and release it all at once to make a jump. This approach may be a bit less violent than the rocket-powered bot's, but the spring robot depends on an external power source (a dude pushing the spring down with tweezers). With this human help (which will eventually be replaced by micromotors to wind the spring up), it can jump really, really high: about 80 times its own height. You can see both of these robots in action in the video below:

The next step for these robots is to tweak them to be able to jump more than once, and in the direction that you want them to go. Oh, and to figure out how to get them to land properly, and then do productive stuff once they return to Earth. For the chemical jumping robot, adding little nozzles to the chemical engine should solve the steering problem while also quadrupling its effective power by directing the thrust more efficiently. Stitching an array of about 100 of these engines together along the bottom of a microbot could allow for a whole series of jumps (and even jumps followed by mid-air rocket pulses to keep flying), ultimately resulting in a range of some 65 meters, which works out to be a staggering 16,000 times the length of the robot itself. Not bad at all.
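Those multiples check out against the robot's 4-millimeter body (quick arithmetic on the figures quoted above):

```python
# Sanity-checking the jump multiples quoted above against the 4 mm body length.
body_m = 0.004             # 4-millimeter robot

single_jump_m = 0.08       # ~8 cm from one chemical charge
print(single_jump_m / body_m)       # 20.0 -- body lengths per jump

projected_range_m = 65.0   # projected range with an array of ~100 engines
print(projected_range_m / body_m)   # 16,250 -- roughly the 16,000x quoted
```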

Ultimately, the idea is that these bots will be fast and cheap to manufacture, easy to deploy, and expendable enough that it'll be possible to use swarms of them for things like surveillance and monitoring and terrorizing your imagination.

"First Leaps Toward Jumping Microrobots" by Wayne A. Churaman, Aaron P. Gerratt, and Sarah Bergbreiter from the Army Research Laboratory and the University of Maryland Microrobotics Lab was presented this week at the IEEE International Conference on Intelligent Robots and Systems.

[ UMD Microrobotics ]
