Automaton

Photo Gallery: Asimo, Kilobots, iCub, and More

We have a few more solid weeks' worth of IROS awesomeness to share with you, but since it's Friday and all, we thought it might be a nice time to put together a little gallery of some of the robots from the expo floor of the IEEE International Conference on Intelligent Robots and Systems, which took place last month in San Francisco.

Most of the bots you'll recognize easily, but keep an eye out for some (okay a LOT) of those little Kilobots, as well as a guest appearance by Disney's swarming display robots. Enjoy!


 


And just in case that wasn't enough for you, Willow Garage (one of the IROS sponsors) also put together this little video montage:

[ IROS 2011 ]

Robot Masters Jenga, Next the World

Confirming our suspicions that roboticists basically just sit around and invent ways to play games with robots all day, here's a video from Torsten Kröger (the same guy who took us through the JediBot demonstration at the IEEE International Conference on Intelligent Robots and Systems, go figure) detailing how he and a bunch of his friends built themselves a robot that plays Jenga back in 2005:

As with most, uh, "research" projects like this, there's supposedly some larger purpose to it. Something about the potential of multi-sensor integration in industrial manipulation. Or whatever. I don't buy it, of course, but we can certainly applaud the fact that the robot was able to make 29 moves in a row; at three blocks per layer, that works out to nearly ten solid layers added to the top of the tower without knocking it over. Time to preemptively surrender, folks. Here's one more vid of the robot making a move:

[ Jenga Robot ]

Video: Meka Robotics Talks Up its Anime-Style Expressive Head

Meka Robotics is based in San Francisco, which is lucky for us, since that made it pretty much impossible for them to not show up at the IEEE International Conference on Intelligent Robots and Systems. They're probably best known for their underactuated, compliant hand (and the arm that goes with it) and more recently for their humanoid head. The S2 head is notable because it manages to maintain a high degree of expressiveness (those eyes are amazing) while entirely avoiding the Uncanny Valley effect, thanks to its vaguely cartoonish look. We asked Meka's co-founder, Aaron Edsinger, to take us through it:

The particular robot in this video is called Dreamer, and it belongs to the Human Centered Robotics Lab at the University of Texas, Austin. Dreamer's head was a cooperative effort involving Meka and UT Austin professor Luis Sentis, who came up with the subtle and effective anime look. Part of what helps keep Dreamer's motions so compliant (and lifelike) is its software: called "Whole Body Control," it's a collaboration between UT Austin, Meka, Stanford, and Willow Garage.

Meka is also offering an entirely new system consisting of an arm, gripper, sensor head, and mobile base for $200,000. It's no coincidence that the one-armed PR2 SE costs the exact same amount; the NSF's National Robotics Initiative provides research grants including up to $200k for research platforms. Yep, the government is basically giving these things away for free; all you have to do is convince them that you deserve one, and then pick your flavor.

[ Meka Robotics ]

[ HCRL ]

Video: SQ1 Quadruped Robot from South Korea

This feisty little guy is a quadruped robot called SQ1. It's a project by South Korean company SimLab, whom we met at the IEEE International Conference on Intelligent Robots and Systems last month. Their RoboticsLab simulation software is being used to figure out how to get the quadruped to walk without actually, you know, having to risk a trial-and-error approach on a real robot. And it works! Or rather, it mostly works:

We don't know too much about it, but apparently, there's a much larger (think BigDog/AlphaDog-sized) quadruped in existence (sponsored by the South Korean government). This smaller robot is being used to test out different gaits that have proven themselves in simulation, before the full-sized (and more expensive) version tries not to fall over on its own.

[ RoboticsLab ]

How JediBot Got Its Sword Fighting Skills

JediBot, which we saw in action back in July, was a brilliant final project conceived by a group of students for an experimental robotics course at Stanford University. Kuka spotted the video on YouTube, and shortly thereafter, JediBot found itself with a new job as the main attraction at Kuka's booth on the expo floor of the IEEE International Conference on Intelligent Robots and Systems last month. We caught up with Stanford roboticist Torsten Kröger, who took us through the brains and programming behind JediBot's unquenchable thirst for the blood of Sith lords:

It's worth mentioning that due to a slight miscalibration, JediBot was not acting as aggressively as it could have been when we shot this demo. I took some whacks at it myself a little later on, and the robot was having a great time going for my throat every time I let my guard down. I have to say, it's really quite an experience to be on the other end of a robot with a sword doing its level best to separate your head from your body, but considering all the dull, dirty, and dangerous tasks that we tend to saddle robots with, can you really blame them for being overly enthusiastic when we ask them to take a few good-natured swings in our direction?

[ Stanford Robotics ]

No Couch Is Safe from the CLASH Cloth-Climbing Robot

UC Berkeley has a long history of developing innovative legged robots: There was ROACH, there was BOLT, there was DASH. DASH, a cockroach-inspired design, was a very simple, very fast hexapedal robot that could scuttle along the ground at 15 body lengths per second.

Now meet the latest addition to this family of robot bugs: CLASH, pictured above, is a vertically-enabled successor to DASH, and it's designed to zip up vertical or near-vertical cloth surfaces with the aid of tiny little spiny toes. It's sort of like what you'd get if you put DASH and SpinyBot together in a dark room along with a 3D printer and some Barry Manilow (or whatever it is robots are listening to these days).

For a vertical climbing robot, CLASH is surprisingly quick. It may actually be one of the quickest climbing robots in existence, able to move upwards at 24 centimeters per second, which is really quite a lot faster than it sounds:

Part of the reason that CLASH can scramble around so fast is that it's small and lightweight, with a simple but clever design. CLASH is 10 centimeters long and weighs only 15 grams. The back-and-forth climbing motion of four legs (the back two are passive) is driven entirely by a single motor that gives CLASH a gait frequency of a brisk 34 strides per second.
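If you want a feel for what those numbers mean, here's a quick back-of-the-envelope sketch using only the figures quoted above (nothing here comes from the paper itself):

```python
# Back-of-the-envelope figures for CLASH, using only the numbers quoted above.
body_length_cm = 10.0     # body length
mass_g = 15.0             # mass
climb_speed_cm_s = 24.0   # vertical climbing speed
stride_rate_hz = 34.0     # gait frequency

body_lengths_per_s = climb_speed_cm_s / body_length_cm   # ~2.4 body lengths per second, straight up
climb_per_stride_cm = climb_speed_cm_s / stride_rate_hz  # ~0.7 cm gained per stride

print(f"~{body_lengths_per_s:.1f} body lengths/s, ~{climb_per_stride_cm:.2f} cm per stride")
```

In other words, each of those 34 strides per second only buys the robot about 7 millimeters of altitude; it's the sheer cadence that makes the climbing look fast.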

The actual gripping and climbing technique is integrated into the beautiful series of linkages that connect CLASH's legs to its motor and to each other, making the mechanism completely passive all the way from initial grip to retraction. The battery and electronics are all onboard, and are located in the tail to help keep the robot balanced.

Next up is endowing CLASH with the ability to turn (which will likely involve adding a second actuator somewhere) and modifying the rear legs to let the robot scamper along horizontal surfaces too. And while CLASH is currently restricted to climbing things like fabric and carpet that it can sink its claws into, other methods of passive adhesion (like some of that gecko tape) might give CLASH a little extra versatility.

"CLASH: Climbing Vertical Loose Cloth" was presented by P. Birkmeyer, A. G. Gillies, and R. S. Fearing from the University of California, Berkeley, at the IEEE International Conference on Intelligent Robots and Systems in San Francisco last week. Special thanks to Paul Birkmeyer for the CLASH videos, and for forgiving me for mistakenly suggesting that he was at Stanford, not Berkeley, which is just about the worst screw-up I could have possibly made.

[ UC Berkeley's Biomimetic Millisystems Lab ]

New Switchblade Robot Design is Leaner, More Agile

The first generation of UCSD's Switchblade robot used a battery pack on a big swingy arm-thing to alter its center of gravity enough to balance on its treads and climb stairs.
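Balancing on treads like that is, at heart, an inverted-pendulum problem: keep the center of gravity over the tread contact patch, or fall over. Here's a purely illustrative sketch of that idea, a toy proportional-derivative loop of our own, not Morozovsky's actual controller, with all gains and function names made up:

```python
# Toy inverted-pendulum balance loop for a tracked robot. Illustrative only;
# the gains and interfaces are invented, not Switchblade's real control code.
KP = 40.0   # proportional gain on pitch angle (made up)
KD = 5.0    # derivative gain on pitch rate (made up)

def balance_command(pitch_rad, pitch_rate_rad_s):
    """Return a tread drive command that moves the base back under the center of gravity."""
    return KP * pitch_rad + KD * pitch_rate_rad_s

# A nudge tips the robot 0.1 rad forward: drive the treads forward to catch it.
cmd = balance_command(0.1, 0.0)
```

The first-generation robot effectively did the same trick in reverse, moving the mass (that battery arm) over the base rather than the base under the mass.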

At the IEEE International Conference on Intelligent Robots and Systems last week, we spotted an updated version of Switchblade, which trades in the external movable mass for a slick, compact case. Far from compromising its balancing skills, the new design (and some extra brains) has made Switchblade more agile than ever, able to remain stable even when grad students push it with their sandal-clad feet:

This new form factor makes Switchblade a bit more appealing as a capable replacement for a variety of tactical robots that shall remain nameless, but that rely on infinitely less cool movable paddle tracks to get themselves over obstacles way less obstacle-y than what Switchblade is able to surmount.

Switchblade has been refined to reduce its cost and complexity, and according to its creator Nick Morozovsky, it's "well suited for a variety of socially-relevant applications, including reconnaissance, mine exploration, and search and rescue." So someone just needs to put it into action already, and give those fancy balancing tricks some practical applications.

[ UCSD Coordinated Robotics Lab ]

Monkeys Use Brain Interface to Move and Feel Virtual Objects

Image: A bidirectional brain-machine interface allows monkeys to sense the texture of virtual objects.

Scientists have demonstrated that monkeys using a brain-machine interface can not only control a computer with their minds but also "feel" the texture of virtual objects in a computer.

This is the first-ever demonstration of bidirectional interaction between a primate brain and a virtual object.

In the experiment, described in a paper published today in the journal Nature, Duke University scientists equipped monkeys with brain implants that allowed the animals to control a virtual arm, shown on a computer screen, using only their thoughts. This part of the experiment was not a new result -- scientists, including the Duke team, have previously demonstrated that the brain can control advanced robotic devices and even learn to operate them effortlessly.

What's new is that, this time, the scientists are using the brain-machine interface not only to extract brain signals but also to send signals to the brain. The device is actually a brain-machine-brain interface. The monkeys were able to interpret the signals fed to their brains as a kind of artificial tactile sensation that allowed them to identify the "texture" of virtual objects.

"Someday in the near future, quadriplegic patients will take advantage of this technology not only to move their arms and hands and to walk again, but also to sense the texture of objects placed in their hands, or experience the nuances of the terrain on which they stroll with the help of a wearable robotic exoskeleton," study leader Miguel Nicolelis, a professor of neurobiology at Duke, in Durham, N.C., said in a statement.

Initially, the monkeys used their real hands to operate a controller and move their virtual limbs on the screen. During this part of the experiment, the researchers recorded brain signals to learn how to correlate the brain activity to the movement of the virtual arm [see illustration above]. Next, the researchers switched from hand control to brain control, using the brain signals to directly control the virtual arm; after a while, the animals stopped moving their limbs altogether, using only their brains to move the virtual hand on the screen.
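One common way to build that kind of mapping, and we'll hedge here because the Duke team's exact decoder may differ, is to fit a linear model from binned neural firing rates to the hand-controlled arm velocities, then reuse the fitted weights once the hand is taken out of the loop. A minimal sketch with synthetic data:

```python
import numpy as np

# Minimal linear-decoder sketch with synthetic data. This illustrates the
# general train-on-hand-control / reuse-for-brain-control idea, not the
# Duke team's actual algorithm.
rng = np.random.default_rng(0)
n_bins, n_neurons = 1000, 100

rates = rng.poisson(5.0, size=(n_bins, n_neurons)).astype(float)     # spike counts per time bin
true_w = rng.normal(size=(n_neurons, 2))                             # hidden tuning (for the toy data only)
velocity = rates @ true_w + rng.normal(scale=5.0, size=(n_bins, 2))  # recorded (vx, vy) during hand control

# "Hand control" phase: fit decoding weights by least squares.
w_hat, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# "Brain control" phase: neural activity alone drives the virtual arm.
new_rates = rng.poisson(5.0, size=(1, n_neurons)).astype(float)
vx, vy = (new_rates @ w_hat)[0]
print(f"decoded velocity: ({vx:.1f}, {vy:.1f})")
```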

The monkeys used their virtual hand to explore three objects that appeared visually identical but had different "textures" -- each texture corresponding to a different electrical signal sent to the brain of the animals. The researchers selected one of the objects as the "target," and whenever the monkeys were able to locate it, they would receive a sip of juice as a reward. After a small number of trials, the monkeys learned to quickly explore the virtual environment, feeling the textures of the objects to find the target.

Watch:

One of the monkeys in the study got the task right more than 85 percent of the time; another monkey got it right about 60 percent of the time.

To allow the monkeys to control the virtual arm, the scientists implanted electrodes to record electrical activity of populations of 50 to 200 neurons in the motor cortex. At the same time, another set of electrodes provided continuous electrical feedback to thousands of neurons in the primary tactile cortex, allowing the monkeys to discriminate between objects based on their texture alone.

"It's almost like creating a new sensory channel through which the brain can resume processing information that cannot reach it anymore through the real body and peripheral nerves," Nicolelis said.

A major challenge was to "keep the sensory input and the motor output from interfering with each other, because the recording and stimulating electrodes were placed in connected brain tissue," according to a news report in Nature:

"The researchers solved the problem by alternating between a situation in which the brain-machine-brain interface was stimulating the brain and one in which motor cortex activity was recorded; half of every 100 milliseconds was devoted to each process."

The Duke researcher is leading an international consortium called the Walk Again Project, whose goal is to restore full body mobility to quadriplegic patients using brain-machine interfaces and robotic exoskeletons.

An avid fan of soccer, Nicolelis hopes to have a demonstration ready for 2014, with a quadriplegic child performing the kickoff for the FIFA World Cup in Brazil, his home country.

Images and video: Duke University

Japanese Researchers Developing Robotic Chameleon, Tongue First

Tomofumi Hatakeyama and Hiromi Mochiyama have not yet created a robotic chameleon like the (completely fake) one in the above pic, but they have started in on one of the most important parts: the tongue. Chameleons can shoot their tongues out to capture prey in just three one-hundredths of a second, and then reel their tongues back in and chow down. Seems like a handy thing for a robot to be able to do, right? Sure, why the heck not!

This system is deceptively simple, relying on an air cannon of sorts to fire a magnetic projectile attached to a thin elastic cord. Over 90 percent of the time, the robotic tongue can snap up magnets dropped 0.7 meter away, taking barely a tenth of a second to traverse the distance, and making the entire round trip in another tenth. It's wicked quick, and can nail almost exactly the same spot in mid-air every time.
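A bit of quick arithmetic on those numbers (ours, not the researchers'):

```python
# Average speed of the robotic tongue, from the figures quoted above.
distance_m = 0.7      # distance to the falling magnet
outbound_s = 0.1      # roughly a tenth of a second to reach it
return_s = 0.1        # about another tenth to reel back in

outbound_speed = distance_m / outbound_s                     # ~7 m/s on the way out
round_trip_speed = 2 * distance_m / (outbound_s + return_s)  # ~7 m/s averaged over the round trip

print(f"~{outbound_speed:.0f} m/s out, ~{round_trip_speed:.0f} m/s averaged over the round trip")
```

That works out to roughly 25 kilometers per hour for a projectile on the end of an elastic cord.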

Obviously, there are a few reasons why this particular version is probably not going to replace a real chameleon any time soon. For one, it only works on magnetic stuff, and in order to make the catch, it needs a break-beam sensor to tell it when to fire. But there are ways to get around both of these issues, like maybe some gecko-foot tape plus a laser sensor system or something. The researchers want this thing to ultimately shoot out to 10 meters (!), and they're planning to mount it on some kind of mobile robot platform that will scuttle around and catch cockroaches and other bugs. You know, humanely.

"Shooting Manipulation System with High Reaching Accuracy" was presented by Tomofumi Hatakeyama and Hiromi Mochiyama from the University of Tsukuba at the IEEE International Conference on Intelligent Robots and Systems in San Francisco last week.

Image: Robot chameleon rendering via Tommix on deviantART

PR2 Can Now Fetch You a Sandwich from Subway

Want a sandwich? Yeah, me too. PR2 has learned how to fetch stuff from the fridge, which is great and all, but thanks to a technique called semantic search, it can now bring you a sandwich when it's not even sure where the sandwich is.

"Semantic search" is simply the ability to make inferences about an object based on what is known about similar objects and the environment. It sounds complicated, but it's really just a computerized version of what we humans think of as "common sense." For example, if someone asks you to bring them a cup without telling you exactly where the cup is, you're probably clever enough to infer that cups can be found in drawers or cabinets or dishwashers, and that drawers and cabinets and dishwashers are all usually located in a kitchen, so you can go to the kitchen, poke around for a little bit, and find a cup. Semantic search allows robots to do the same sort of thing.

The advantage of this technique is that it gives a robot the ability to infer things it doesn't know from things it does know, and to reason its way through parts of the world it's less familiar with. Additionally, the robot can add to its knowledge base to quickly adapt to new places and to people with weird habits. So like, if you're one of those people who stores peanut butter in the bathroom, the robot can start associating peanut butter with bathrooms.

Weirdo.
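To make the idea a little more concrete, here's a toy version of that kind of lookup. The facts and function names are our own invention for illustration, not the actual knowledge base used in the paper:

```python
# Toy semantic-search sketch: infer where to look for an object from simple
# "is-a" and "located-in" facts. Invented facts, purely illustrative.
IS_A = {
    "sandwich": "food",
    "cup": "tableware",
    "peanut butter": "food",
}
LOCATED_IN = {
    "food": ["kitchen", "restaurant"],
    "tableware": ["cupboard", "dishwasher", "kitchen"],
    "peanut butter": ["bathroom"],   # learned from that one weird user
}

def places_to_search(obj):
    """Candidate locations for obj: object-specific facts first, then its category's."""
    category = IS_A.get(obj)
    places = list(LOCATED_IN.get(obj, []))
    if category:
        places += LOCATED_IN.get(category, [])
    return places

print(places_to_search("sandwich"))       # ['kitchen', 'restaurant']
print(places_to_search("peanut butter"))  # ['bathroom', 'kitchen', 'restaurant']
```

A real system does this over a much richer ontology and couples it with perception and planning, but the inference pattern is the same.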

The following demo, from the University of Tokyo and Technische Universität München, puts semantic search to the test by tasking a PR2 with fetching a sandwich. The PR2 has no detailed information on sandwiches, but its database tells it that sandwiches are a type of food, and that food can be found in kitchens and restaurants, and from that, it figures out where to look:

PR2, which already knows how to get drinks and bake cookies, managed to get from a simple "bring me a sandwich" command to ordering a sandwich at a Subway in a separate building, entirely autonomously. Yeah, it had a little trouble with the elevator buttons, but let's think about the big picture: using techniques like these, we're getting closer to being able to give robots vague and general commands and have them figure out what we really want and how to make it happen.

"Semantic Object Search in Large-scale Indoor Environments" by Manabu Saito, Haseru Chen, Kei Okada, Masayuki Inaba, Lars Kunze, and Michael Beetz from the University of Tokyo and Technische Universität München was presented last week at the IEEE International Conference on Intelligent Robots and Systems in San Francisco.
