
Monkeys Use Brain Interface to Move and Feel Virtual Objects

Illustration: A bidirectional brain-machine interface lets monkeys sense the texture of virtual objects.

Scientists have demonstrated that monkeys using a brain-machine interface can not only control a computer with their minds but also "feel" the texture of virtual objects in a computer.

This is the first-ever demonstration of bidirectional interaction between a primate brain and a virtual object.

In the experiment, described in a paper published today in the journal Nature, Duke University scientists equipped monkeys with brain implants that allowed the animals to control a virtual arm, shown on a computer screen, using only their thoughts. This part of the experiment was not a new result -- scientists, including the Duke team, have previously demonstrated that the brain can control advanced robotic devices and even learn to operate them effortlessly.

What's new is that, this time, the scientists used the brain-machine interface not only to extract brain signals but also to send signals back to the brain. The device is actually a brain-machine-brain interface. The monkeys were able to interpret the signals fed to their brains as a kind of artificial tactile sensation that allowed them to identify the "texture" of virtual objects.

"Someday in the near future, quadriplegic patients will take advantage of this technology not only to move their arms and hands and to walk again, but also to sense the texture of objects placed in their hands, or experience the nuances of the terrain on which they stroll with the help of a wearable robotic exoskeleton," study leader Miguel Nicolelis, a professor of neurobiology at Duke, in Durham, N.C., said in a statement.

Initially, the monkeys used their real hands to operate a controller and move their virtual limbs on the screen. During this part of the experiment, the researchers recorded brain signals to learn how to correlate brain activity with the movement of the virtual arm [see illustration above]. Next, the researchers switched from hand control to brain control, using the brain signals to directly control the virtual arm; after a while, the animals stopped moving their limbs altogether, using only their brains to move the virtual hand on the screen.
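
That hand-control phase is essentially a calibration step: the researchers fit a model mapping recorded firing rates to arm movement, and that model later drives the virtual arm from brain activity alone. Here's a minimal sketch of the idea using a simple linear (least-squares) decoder in Python -- the array sizes, time bins, and variable names are illustrative assumptions, not the Duke team's actual decoding algorithm.

```python
import numpy as np

# Illustrative sketch of a linear "population decoder": during hand control
# we observe firing rates and the arm's velocity, fit a linear map, and later
# use it to drive the virtual arm from brain activity alone.
# Shapes and numbers below are assumptions for clarity.

rng = np.random.default_rng(0)

n_samples = 5000        # 100-ms bins recorded during the hand-control phase
n_neurons = 100         # e.g., 50 to 200 motor cortex units, as in the study

# Recorded firing rates (spikes per bin) and the measured arm velocity (vx, vy)
rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
true_weights = rng.normal(size=(n_neurons, 2))
velocity = rates @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit the decoder: least-squares weights mapping firing rates -> velocity
X = np.hstack([rates, np.ones((n_samples, 1))])   # add a bias column
W, *_ = np.linalg.lstsq(X, velocity, rcond=None)

def decode_velocity(rate_vector):
    """Predict arm velocity for one new bin of firing rates."""
    return np.append(rate_vector, 1.0) @ W

# "Brain control" phase: the decoded velocity moves the virtual arm
new_rates = rng.poisson(lam=5.0, size=n_neurons).astype(float)
print("decoded velocity:", decode_velocity(new_rates))
```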

The monkeys used their virtual hand to explore three objects that appeared visually identical but had different "textures" -- each texture corresponding to different electrical signals sent to the brains of the animals. The researchers selected one of the objects as the "target," and whenever the monkeys were able to locate it they would receive a sip of juice as a reward. After a small number of trials, the monkeys learned to quickly explore the virtual environment, feeling the textures of the objects to find the target.

Watch:

One of the monkeys got the task right more than 85 percent of the time; the other monkey got it right about 60 percent of the time.

To allow the monkeys to control the virtual arm, the scientists implanted electrodes to record electrical activity of populations of 50 to 200 neurons in the motor cortex. At the same time, another set of electrodes provided continuous electrical feedback to thousands of neurons in the primary tactile cortex, allowing the monkeys to discriminate between objects based on their texture alone.

"It's almost like creating a new sensory channel through which the brain can resume processing information that cannot reach it anymore through the real body and peripheral nerves," Nicolelis said.

A major challenge was to "keep the sensory input and the motor output from interfering with each other, because the recording and stimulating electrodes were placed in connected brain tissue," according to a news report in Nature:

"The researchers solved the problem by alternating between a situation in which the brain-machine-brain interface was stimulating the brain and one in which motor cortex activity was recorded; half of every 100 milliseconds was devoted to each process."

The Duke researcher is leading an international consortium called the Walk Again Project, whose goal is to restore full body mobility to quadriplegic patients using brain-machine interfaces and robotic exoskeletons.

An avid fan of soccer, Nicolelis hopes to have a demonstration ready for 2014, with a quadriplegic child performing the kickoff for the FIFA World Cup in Brazil, his home country.

Images and video: Duke University

Japanese Researchers Developing Robotic Chameleon, Tongue First

Tomofumi Hatakeyama and Hiromi Mochiyama have not yet created a robotic chameleon like the (completely fake) one in the above pic, but they have started in on one of the most important parts: the tongue. Chameleons can shoot their tongues out to capture prey in just three one-hundredths of a second, and then reel their tongues back in and chow down. Seems like a handy thing for a robot to be able to do, right? Sure, why the heck not!

This system is deceptively simple, relying on an air cannon of sorts to fire a magnetic projectile attached to a thin elastic cord. Over 90 percent of the time, the robotic tongue can snap up magnets dropped 0.7 meter away, taking barely a tenth of a second to traverse the distance, and making the entire round trip in another tenth. It's wicked quick, and can nail almost exactly the same spot in mid-air every time.
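
Those figures imply some serious speed. A quick back-of-the-envelope estimate, assuming the quoted distance and times are rough averages, looks like this:

```python
# Rough numbers quoted for the robotic tongue (treated as averages here,
# not precise measurements from the paper): 0.7 m reach, ~0.1 s out, ~0.1 s back.
distance_m = 0.7
time_out_s = 0.1
time_back_s = 0.1

avg_speed_out = distance_m / time_out_s          # ~7 m/s on the way out
round_trip_s = time_out_s + time_back_s          # ~0.2 s total

print(f"average outbound speed: {avg_speed_out:.1f} m/s")
print(f"round trip time: {round_trip_s:.1f} s")
```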

Obviously, there are a few reasons why this particular version is probably not going to replace a real chameleon any time soon. For one, it only works on magnetic stuff, and in order to make the catch, it needs a break-beam sensor to tell it when to fire. But there are ways to get around both of these issues, like maybe some gecko-foot tape plus a laser sensor system or something. The researchers want this thing to ultimately shoot out to 10 meters (!), and they're planning to mount it on some kind of mobile robot platform that will scuttle around and catch cockroaches and other bugs. You know, humanely.

"Shooting Manipulation System with High Reaching Accuracy" was presented by Tomofumi Hatakeyama and Hiromi Mochiyama from the University of Tsukuba at the IEEE International Conference on Intelligent Robots and Systems in San Francisco last week.

Image: Robot chameleon rendering via Tommix on deviantART

PR2 Can Now Fetch You a Sandwich from Subway

Want a sandwich? Yeah, me too. PR2 has learned how to fetch stuff from the fridge, which is great and all, but thanks to a technique called semantic search, it can now bring you a sandwich when it's not even sure where the sandwich is.

"Semantic search" is simply the ability to make inferences about an object based on what is known about similar objects and the environment. It sounds complicated, but it's really just a computerized version of what we humans think of as "common sense." For example, if someone asks you to bring them a cup without telling you exactly where the cup is, you're probably clever enough to infer that cups can be found in drawers or cabinets or dishwashers, and that drawers and cabinets and dishwashers are all usually located in a kitchen, so you can go to the kitchen, poke around for a little bit, and find a cup. Semantic search allows robots to do the same sort of thing.

The advantage of this technique is that it gives a robot the ability to infer things it doesn't know from things it does know, and to reason about parts of the world it's less familiar with. Additionally, the robot can add to its knowledge base to quickly adapt to new places and people with weird habits. So like, if you're one of those people who stores peanut butter in the bathroom, the robot can start associating peanut butter with bathrooms.

Weirdo.
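
A toy version of that inference-and-learning chain is easy to write down. The sketch below walks a tiny hand-made knowledge base from an object up to the rooms and containers worth searching, and lets you add household-specific quirks -- all the facts and names here are invented for illustration, and are far simpler than the knowledge base running on the real PR2.

```python
# Minimal "semantic search" sketch: infer where to look for an object
# from general facts about categories and places, and add new facts as
# the robot learns a household's quirks. All facts here are invented.

IS_A = {
    "cup": "tableware",
    "sandwich": "food",
    "peanut_butter": "food",
}

LIKELY_CONTAINERS = {
    "tableware": ["cupboard", "drawer", "dishwasher"],
    "food": ["fridge", "counter", "restaurant"],
}

CONTAINER_LOCATIONS = {
    "cupboard": "kitchen",
    "drawer": "kitchen",
    "dishwasher": "kitchen",
    "fridge": "kitchen",
    "counter": "kitchen",
    "restaurant": "sandwich_shop",
}

def places_to_search(obj):
    """Return (room, container) pairs worth checking for `obj`."""
    category = IS_A.get(obj, obj)
    containers = LIKELY_CONTAINERS.get(obj, []) + LIKELY_CONTAINERS.get(category, [])
    return [(CONTAINER_LOCATIONS[c], c) for c in containers]

def learn_observation(obj, container, room):
    """Record a household-specific fact, e.g. peanut butter kept in the bathroom."""
    LIKELY_CONTAINERS.setdefault(obj, []).append(container)
    CONTAINER_LOCATIONS[container] = room

print(places_to_search("cup"))       # kitchen spots worth checking for a cup
print(places_to_search("sandwich"))  # includes the restaurant option

learn_observation("peanut_butter", "bathroom_cabinet", "bathroom")
print(places_to_search("peanut_butter"))
```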

The following demo, from the University of Tokyo and Technische Universität München, puts semantic search to the test by tasking a PR2 with fetching a sandwich. The PR2 has no detailed information on sandwiches, but its database tells it that sandwiches are a type of food, and that food can be found in kitchens and restaurants, and from that, it figures out where to look:

PR2, which already knows how to get drinks and bake cookies, managed to get from a simple "bring me a sandwich" command to going and ordering a sandwich at a Subway in a separate building, all entirely autonomously. Yeah, it had a little trouble with the elevator buttons, but let's think about the big picture: using techniques like these, we're getting closer to being able to give robots vague and general commands and have them figure out what we really want and how to make it happen.

"Semantic Object Search in Large-scale Indoor Environments" by Manabu Saito, Haseru Chen, Kei Okada, Masayuki Inaba, Lars Kunze, and Michael Beetz from the University of Tokyo and Technische Universität München was presented last week at the IEEE International Conference on Intelligent Robots and Systems in San Francisco.

Crawling iCub Is the Robot Baby You Never Wanted

As far as robot babies go, iCub is not the weirdest of the bunch. The fact that it's also one of the most capable robot babies out there doesn't necessarily help its case, though, since watching it crawl around the floor is a tad unnatural, to say the least:

iCub, if you remember, is designed to emulate a three-and-a-half-year-old child, although personally I don't know any kids that young who I'd trust with a bow and arrow (or a lit torch). In addition to these potentially destructive hobbies, and crawling, iCub is intended to explore how human cognition develops, using facial expressions and adaptive learning techniques. Sometimes those facial expressions don't work out so well, though, especially when iCub is being calibrated:

Yeah, uh, I don't know exactly what button they pushed to get iCub to look like that, but I just wish I had one attached to me somewhere. Here's a photo of iCub enjoying San Francisco, and below is one more pic of the bot looking slightly more normal, from the expo floor at the IEEE International Conference on Intelligent Robots and Systems last week:

[ iCub ]

Asimo Can Copy Your Dance Moves

Asimo, the Honda humanoid, one of the world's most loved robots, was showing off its dance moves this week at the IEEE International Conference on Intelligent Robots and Systems in San Francisco.

The robot was here to demonstrate some new tricks it's been learning from scientists at the Honda Research Institute in Mountain View, Calif.

Victor Ng-Thow-Hing, Behzad Dariush, and colleagues work with Asimo to develop robotics technologies that can assist people, especially in terms of mobility.

In one demonstration, the scientists showed how Asimo can mimic a person's movements in real time. The researchers use Microsoft's Kinect 3D sensor to track selected points on a person's upper body, and their software uses an inverse kinematics approach to generate control commands that make Asimo move. The software prevents self-collisions and excessive joint motions that might damage the robot, and it is integrated with Asimo's whole-body controller to maintain balance. The researchers say that the ability to mimic a person in real time could find applications in robot programming and interactive teleoperation, among other things.
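
To get a feel for what an inverse kinematics step with joint limits involves, here's a generic damped-least-squares update for a toy two-link planar arm chasing a tracked hand position. The link lengths, joint limits, and gains are assumptions for illustration; Honda's whole-body controller and collision avoidance are far more sophisticated.

```python
import numpy as np

# Generic damped-least-squares IK step for a toy 2-link planar arm.
# This only illustrates the kind of math behind "map tracked body points
# to joint commands"; it is not Honda's controller.

L1, L2 = 0.3, 0.25                     # link lengths (m), assumed
JOINT_LIMITS = np.radians([[-90, 90],  # shoulder limits (deg)
                           [0, 150]])  # elbow limits (deg)

def forward_kinematics(q):
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik_step(q, target, damping=0.05, gain=0.5):
    """One damped-least-squares update toward the tracked hand position."""
    error = target - forward_kinematics(q)
    J = jacobian(q)
    dq = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), gain * error)
    q_new = q + dq
    # Clamp to joint limits (stand-in for "avoid excessive joint motions")
    return np.clip(q_new, JOINT_LIMITS[:, 0], JOINT_LIMITS[:, 1])

q = np.radians([10.0, 30.0])           # current joint angles
target = np.array([0.35, 0.25])        # e.g., a Kinect-tracked hand position
for _ in range(50):
    q = ik_step(q, target)
print("final joint angles (deg):", np.degrees(q))
print("hand position:", forward_kinematics(q))
```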

In another demo, the scientists showed how they're using gestures to improve Asimo's communication skills. They're developing a gesture-generating system that takes any input text and analyzes its grammatical structure, timing, and choice of word phrases to automatically generate movements for the robot. To make the behavior more realistic, the scientists used a vision system to capture humans performing various gestures, and then they incorporated these natural movements into their gesture-generating system.
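
At a very high level, that pipeline -- scan the text, pick gesture templates, and schedule them against the speech timing -- can be sketched like this. The keyword rules, gesture names, and words-per-second timing below are invented placeholders, not Honda's system.

```python
# Very rough sketch of a text-to-gesture pipeline: scan the input text for
# phrase cues, assign a gesture template to each, and schedule it at the
# word's estimated speaking time. Rules and timings are invented.

SECONDS_PER_WORD = 0.4   # crude speech-timing assumption

GESTURE_RULES = [
    ("hello", "wave"),
    ("this", "point"),
    ("big", "wide_arms"),
    ("you", "open_palm_toward_listener"),
]

def generate_gestures(text):
    words = text.lower().replace(",", "").replace(".", "").split()
    schedule = []
    for i, word in enumerate(words):
        for cue, gesture in GESTURE_RULES:
            if word == cue:
                schedule.append((round(i * SECONDS_PER_WORD, 2), gesture, word))
    return schedule

for t, gesture, word in generate_gestures("Hello, this robot has a big job for you."):
    print(f"t={t:>4}s  {gesture:<28} (cue: '{word}')")
```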

Here's a video showing these two demos:

This was my first face-to-face encounter with Asimo, and upon close inspection I noticed something on Asimo's face that I didn't know was there. Take a look at the photo below. Can you see it?

Photo: Honda's Asimo humanoid robot, smiling.

Photos: Evan Ackerman; video: Erico Guizzo and Evan Ackerman

Boston Dynamics' AlphaDog Quadruped Robot Prototype on Video

Looks like one of those Boston Dynamics prototype videos that we were treated to on Tuesday here at the IEEE International Conference on Intelligent Robots and Systems has been approved for public release by DARPA. It shows Boston Dynamics' gigantic new quadruped, which is apparently not called "BullDog" as we were told a few days ago. Instead, the official name is now "AlphaDog," but it may as well be "HugeAndAwesomeDog." Seriously, check this beast out, and make sure to listen very, very closely:

Badass. Oh, and if you were listening, you may have noticed that AlphaDog does not sound like a swarm of killer zombie bees. Amazing!

A couple of notes on the video: those weights that AlphaDog is carrying in a few of the clips weigh a total of 400 pounds (180 kilograms), and the robot will be able to carry that load up to 20 miles (30 kilometers) over the course of 24 hours without having to refuel. At the end of the running demo (just after the 45-second mark), the robot collapses into the safety frame like that simply because it ran out of room, not because of any kind of mechanical problem. And notice how two people pushing as hard as they can don't faze AlphaDog in the least, and in the event that it does tip over for some reason, it has no trouble self-righting, which is a useful new feature.

As cool as BigDog was (and is), its relatively limited payload, range, and awful noise kept it from being a realistically deployable system. AlphaDog, on the other hand, looks like it's getting very close to something that we could see out in the field, using GPS navigation and computer vision to follow soldiers while carrying their gear over any kind of terrain. Boston Dynamics' schedule has the first walk-out of AlphaDog taking place sometime in 2012, when DARPA and the U.S. Marines will begin to put the robot to the test for real.

[ Boston Dynamics LS3 AlphaDog ]

Tiny Robot Makes Big Jumps with Explosive Microrockets

We've seen all kinds of crazy jumping robots, from humanoids to grasshoppers to soft and flexible spheres. But when you start making small robots, like seriously small robots (on the millimeter scale), you have to find new ways to get them to jump, and the Army Research Laboratory has teamed up with the University of Maryland to develop a couple clever ideas.

Pictured above is a four-millimeter-long robot, complete with a power source, an integrated control system, and light sensors. To move, it relies on the rapid conversion of stored chemical energy to gas in a chemical reaction, which is just a fancy way of saying either "rocket motor" or "controlled explosion." Underneath the robot is a small chip of nanoporous silicon that gets infused with a sodium perchlorate oxidizer, and when a current heats up the chip, it ignites, propelling the robot upward. Initial tests have yielded a jump height of about eight centimeters, which doesn't sound like much, but the robot is so small that it's still outjumping its own size by a factor of 20.

The other jumping bot that these researchers have come up with is a bit more traditional, using microfabricated elastomer springs to store up energy and release it all at once to make a jump. This method may be a bit less violent than the rocket-powered bot's, but the spring robot depends on an external power source (a dude pushing the spring down with tweezers). With this human help (which will eventually be replaced by micromotors that wind the spring up), it can jump really, really high, at about 80 times its own height. You can see both of these robots in action in the video below:

The next step for these robots is to tweak them to be able to jump more than once, and in the direction that you want them to go. Oh, and to figure out how to get them to land properly, and then do productive stuff once they return to Earth. For the chemical jumping robot, adding little nozzles to the chemical engine should solve the steering problem while also quadrupling its effective power by directing the thrust more efficiently. Stitching an array of about 100 of these engines together along the bottom of a microbot could allow for a whole series of jumps (and even jumps followed by mid-air rocket pulses to keep flying), ultimately resulting in a range of some 65 meters, which works out to be a staggering 16,000 times the length of the robot itself. Not bad at all.
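
The scale of those claims is easier to appreciate as a quick calculation using just the figures quoted above:

```python
# Sanity-checking the scale figures quoted above.
robot_length_m = 0.004        # the 4-millimeter rocket-jumping robot
chemical_jump_m = 0.08        # ~8 cm per jump
target_range_m = 65           # hoped-for range with an array of ~100 engines

print("jump height / body length:", chemical_jump_m / robot_length_m)   # ~20x
print("target range / body length:", target_range_m / robot_length_m)   # ~16,000x
```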

Ultimately, the idea is that these bots will be fast and cheap to manufacture, easy to deploy, and expendable enough that it'll be possible to use swarms of them for things like surveillance and monitoring and terrorizing your imagination.

"First Leaps Toward Jumping Microrobots" by Wayne A. Churaman, Aaron P. Gerratt, and Sarah Bergbreiter from the Army Research Laboratory and the University of Maryland Microrobotics Lab was presented this week at the IEEE International Conference on Intelligent Robots and Systems.

[ UMD Microrobotics ]

Boston Dynamics' Bigger BigDog Robot Is Alive

UPDATE 9/28 10:55 a.m.: Looks like the embargo on the videos was broken. At least one person has posted videos on YouTube. We're including these vids below.
UPDATE 9/28 12:26 p.m.: Videos were removed. Sorry, folks, we'll have to wait for the official vids.
UPDATE 9/30 4:05 a.m.: Video of Boston Dynamics' new, bigger quadruped, called AlphaDog, is here. Vid of Petman still not available.

Photo: Boston Dynamics' LS3 quadruped robot.

Boston Dynamics, the company that brought the world the beloved BigDog quadruped robot, is now showing off its newest beast.

Think BigDog on steroids. The new robot is stronger, more agile, and bigger than BigDog. The official name is LS3 (Legged Squad Support System), but it seems that the Boston Dynamics guys are calling it BullDog instead.

Marc Raibert, the flower-patterned-shirt-wearing founder and president of Boston Dynamics, discussed the LS3 project in a keynote talk today at the IEEE International Conference on Intelligent Robots and Systems.

Boston Dynamics, based in Waltham, Mass., has made significant progress in transforming the DARPA-funded LS3 robotic mule project into reality.

Photo: Boston Dynamics' LS3 quadruped robot prototype.

Image: CAD rendering of the LS3.

Like BigDog, the new robot is designed to assist soldiers in carrying heavy loads over rough terrain. But whereas the original BigDog could carry a payload of 340 pounds (about 150 kilograms) and had a range of 12 miles (20 kilometers), LS3 can carry 400 pounds (180 kilograms) and will have a range of 20 miles (about 30 kilometers).

It's also quieter, and the Boston Dynamics engineers are teaching it some new tricks: It will be able to jump over obstacles, right itself after a fall, and navigate with greater autonomy than its predecessor.

Raibert awed the audience with some amazing videos of the LS3 robot mule navigating rough terrain, trotting, and getting shoved (without losing its balance) not by one but two people at the same time! Alas, we can't show you the videos yet. Raibert told us that he's still getting permission from DARPA to make them public. So in a week or two we'll have them for you.

Raibert also talked about Boston Dynamics' humanoid project, called Petman. It's an adult-sized humanoid that the U.S. Army, which funds the project, will use to test chemical suits and other protective gear.

Photo: Boston Dynamics' Petman humanoid robot.

Petman is another amazing Boston Dynamics creation. Raibert again stunned the audience with some really impressive videos of the humanoid walking, kneeling, squatting, and even doing push-ups!

This is the first time I've seen a machine perform movements like that. They look remarkably human, yet there's something uncanny valley-esque about them. No wonder Petman creeps out even Raibert himself. And you guessed it: The videos are embargoed as well; we hope to have them here soon.

By the way, if you like robot dogs, Boston Dynamics is hiring. Check out all of the company's robotics projects in the slide below.

Image: Overview of Boston Dynamics' robotics projects.

Images: Boston Dynamics

Robot Birds and Octoroaches On The Loose at UC Berkeley

No matter how fancy and complicated we make robots, nature always has us beat. Is there anything more capable, more efficient, and more utterly indestructible than a cockroach? Of course not. Not yet, anyway. UC Berkeley's Biomimetic Millisystems Lab is trying to harness all the cleverness of birds and insects to create an entirely new generation of little robots with insect-like capabilities, and one of their most recent creations is called "Octoroach." OCTOROACH!

Octoroach has eight compliant legs and is small enough and light enough to rest comfortably on your palm. Batteries, sensors, and navigation are all completely integrated. Eventually, Octoroach and robots like it are destined for the military, to provide that last 100 meters of vital close-up surveillance. And if 100 meters ends up being too far, you can just drop off your robo-roaches using robo-birds like this one:

This is BOLT, which stands for "Bipedal Ornithopter for Locomotion Transitioning." It's got a pair of little legs under its wings, and it can skitter around on the ground and over obstacles, saving energy by flying only when it has to. Berkeley is also working on a second ornithopter called iBird, which is capable of flying towards a reflective target completely autonomously.

Check out all of these robots in action in the following demo, which was presented during a technical tour of UC Berkeley as part of this year's IEEE International Conference on Intelligent Robots and Systems:

[ UC Berkeley Biomimetic Millisystems Lab ]
