Automaton

PR2 Scoops Some Poop

This could be it, folks. The one killer application that the entire robotics world has been waiting for. It's bold, it's daring, it's potentially transformative, and you know you want it.

It's POOP.

Ben Cohen and his colleagues from the GRASP Lab at the University of Pennsylvania devoted literally an entire weekend to programming their PR2 robot, Graspy, to handle POOPs. POOPs (Potentially Offensive Objects for Pickup) are managed by the robot using a customized POOP SCOOP (Perception Of Offensive Products and Sensorized Control Of Object Pickup) routine. While POOP can be just about anything that you'd rather not have to pick up yourself, in this particular case, the POOP does happen to be poop, since arguably, poop is the worst kind of POOP.*

Oh yes, there absolutely is video:

While you can't hear it in the video, Graspy begins its task by declaring in a vaguely disappointed robotic monotone, "Time for me to scoop some poop." You get the sense that this $400,000 robot is asking itself whether this kind of work is really what it signed up for. Using its color camera, the robot first identifies poops based on their color, navigates to said poop, and then, using a special human tool, performs the scoop. Haptics are employed to confirm that each scoop is a success, and if it isn't, the robot gives it another try. Failure doesn't happen often, though: Graspy successfully scooped poop about 95 percent of the time over more than 100 trials, at a rate of over one poop per minute.
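
For the curious, here's roughly how that perceive-navigate-scoop-verify loop fits together, sketched as a bit of standalone Python. To be clear, this is not the actual POOP SCOOP ROS stack; the detection, navigation, and haptic-check functions below are hypothetical stubs that merely simulate outcomes so the control flow can run on its own.

```python
import random

# Hypothetical stand-ins for the PR2's perception and motion layers. The real
# POOP SCOOP stack runs on ROS; these stubs only simulate outcomes so the
# control loop below is runnable by itself.

def detect_poops_by_color(camera_frame):
    """Return a list of (x, y) poop locations segmented by color."""
    return [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(3)]

def navigate_to(location):
    print("navigating to", location)

def scoop_with_tool(location):
    print("scooping at", location)

def scoop_succeeded():
    """Haptic check: did the scoop actually pick something up?
    Graspy reportedly succeeds about 95 percent of the time."""
    return random.random() < 0.95

def scoop_all_poops(camera_frame, max_attempts=3):
    print("Time for me to scoop some poop.")
    for poop in detect_poops_by_color(camera_frame):
        navigate_to(poop)
        for _ in range(max_attempts):
            scoop_with_tool(poop)
            if scoop_succeeded():
                break  # haptics confirm the scoop; move on to the next poop
            # otherwise, give it another try

if __name__ == "__main__":
    scoop_all_poops(camera_frame=None)
```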

There's still some work to be done to get PR2 scooping poop like a pro (or an obedient human). For example, it's currently only able to handle high-fiber poop, although that may be solvable with a different tool. If you think you have a clever way of making PR2 a better poop scooper, you can download the POOP SCOOP ROS stack and contribute to the betterment of humanity through robotics at the link below.

"POOP SCOOP: Perception Of Offensive Products and Sensorized Control Of Object Pickup" was presented at the PR2 workshop at IROS 2011.

[ ROS Wiki ]

[ GRASP Lab ]

*My guess is that this is an IEEE Spectrum record for the most uses of the word "poop" in one single sentence.

Photo Gallery: Asimo, Kilobots, iCub, and More

We have a few more solid weeks' worth of IROS awesomeness to share with you, but since it's Friday and all, we thought it might be a nice time to put together a little gallery of some of the robots from the expo floor of the IEEE International Conference on Intelligent Robots and Systems, which took place last month in San Francisco.

Most of the bots you'll recognize easily, but keep an eye out for some (okay, a LOT) of those little Kilobots, as well as a guest appearance by Disney's swarming display robots. Enjoy!

And just in case that wasn't enough for you, Willow Garage (one of the IROS sponsors) also put together this little video montage:

[ IROS 2011 ]

Robot Masters Jenga, Next the World

Confirming our suspicions that roboticists basically just sit around and invent ways to play games with robots all day, here's a video from Torsten Kröger (the same guy who took us through the JediBot demonstration at the IEEE International Conference on Intelligent Robots and Systems, go figure) detailing how he and a bunch of his friends built themselves a Jenga-playing robot back in 2005:

As with most, uh, "research" projects like this, there's supposedly some larger purpose to it. Something about the potential of multi-sensor integration in industrial manipulation. Or whatever. I don't buy it, of course, but we can certainly applaud the fact that the robot was able to make 29 moves in a row; at three blocks per layer, that means it added nearly ten solid layers of blocks to the top of the tower without knocking it over. Time to preemptively surrender, folks. Here's one more vid of the robot making a move:

[ Jenga Robot ]

Video: Meka Robotics Talks Up its Anime-Style Expressive Head

Meka Robotics is based in San Francisco, which is lucky for us, since that made it pretty much impossible for them not to show up at the IEEE International Conference on Intelligent Robots and Systems. They're probably best known for their underactuated, compliant hand (and the arm that goes with it) and more recently for their humanoid head. The S2 head is notable because it manages to maintain a high degree of expressiveness (those eyes are amazing) while entirely avoiding the Uncanny Valley effect, thanks to its vaguely cartoonish look. We asked Meka's co-founder, Aaron Edsinger, to take us through it:

The particular robot in this video is called Dreamer, and it belongs to the Human Centered Robotics Lab at the University of Texas at Austin. Dreamer's head was a cooperative effort involving Meka and UT Austin professor Luis Sentis, who came up with the subtle and effective anime look. Part of what helps keep Dreamer's motions so compliant (and lifelike) is its software: called "Whole Body Control," it's a collaboration between UT Austin, Meka, Stanford, and Willow Garage.

Meka is also offering an entirely new system consisting of an arm, gripper, sensor head, and mobile base for $200,000. It's no coincidence that the one-armed PR2 SE costs exactly the same amount; the NSF's National Robotics Initiative provides research grants that include up to $200,000 for research platforms. Yep, the government is basically giving these things away for free; all you have to do is convince them that you deserve one, and then pick your flavor.

[ Meka Robotics ]

[ HCRL ]

Video: SQ1 Quadruped Robot from South Korea

This feisty little guy is a quadruped robot called SQ1. It's a project by South Korean company SimLab, which we met at the IEEE International Conference on Intelligent Robots and Systems last month. Their RoboticsLab simulation software is being used to figure out how to get the quadruped to walk without actually, you know, having to risk a trial-and-error approach on a real robot. And it works! Or rather, it mostly works:

We don't know too much about it, but apparently there's a much larger (think BigDog/AlphaDog-sized) quadruped in existence, sponsored by the South Korean government. This smaller robot is being used to test out different gaits that have proven themselves in simulation, before the full-sized (and more expensive) version has to try not to fall over on its own.

[ RoboticsLab ]

How JediBot Got Its Sword Fighting Skills

JediBot, which we saw in action back in July, was a brilliant final project conceived by a group of students for an experimental robotics course at Stanford University. Kuka spotted the video on YouTube, and shortly thereafter, JediBot found itself with a new job as the main attraction at Kuka's booth on the expo floor of the IEEE International Conference on Intelligent Robots and Systems last month. We caught up with Stanford roboticist Torsten Kröger, who took us through the programming behind JediBot's unquenchable thirst for the blood of Sith lords:

It's worth mentioning that due to a slight miscalibration, JediBot was not acting as aggressively as it could have been when we shot this demo. I took some whacks at it myself a little later on, and the robot was having a great time going for my throat every time I let my guard down. I have to say, it's really quite an experience to be on the other end of a robot with a sword doing its level best to separate your head from your body, but considering all the dull, dirty, and dangerous tasks that we tend to saddle robots with, can you really blame them for being overly enthusiastic when we ask them to take a few good-natured swings in our direction?

[ Stanford Robotics ]

No Couch Is Safe from the CLASH Cloth-Climbing Robot

UC Berkeley has a long history of developing innovative legged robots: There was ROACH, there was BOLT, there was DASH. DASH, a cockroach-inspired design, was a very simple, very fast hexapedal robot that could scuttle along the ground at 15 body lengths per second.

Now meet the latest addition to this family of robot bugs: CLASH, pictured above, is a vertically-enabled successor to DASH, and it's designed to zip up vertical or near-vertical cloth surfaces with the aid of tiny little spiny toes. It's sort of like what you'd get if you put DASH and SpinyBot together in a dark room along with a 3D printer and some Barry Manilow (or whatever it is robots are listening to these days).

For a vertical climbing robot, CLASH is surprisingly quick. It may actually be one of the quickest climbing robots in existence, able to move upwards at 24 centimeters per second (about 2.4 body lengths per second), which is really quite a lot faster than it sounds:

Part of the reason that CLASH can scramble around so fast is that it's small and lightweight, with a simple but clever design. CLASH is 10 centimeters long and weighs only 15 grams. The back-and-forth climbing motion of its four legs (the back two are passive) is driven entirely by a single motor that gives CLASH a gait frequency of a brisk 34 strides per second.

The actual gripping and climbing technique is integrated into the beautiful series of linkages that connect CLASH's legs to its motor and to each other, making the mechanism completely passive all the way from initial grip to retraction. The battery and electronics are all onboard, and are located in the tail to help keep the robot balanced.

Next up is endowing CLASH with the ability to turn (which will likely involve adding a second actuator somewhere) and modifying the rear legs so the robot can scamper along horizontal surfaces too. And while CLASH is currently restricted to climbing things like fabric and carpet that it can sink its claws into, other methods of passive adhesion (like some of that gecko tape) might give CLASH a little extra versatility.

"CLASH: Climbing Vertical Loose Cloth" was presented by P. Birkmeyer, A. G. Gillies, and R. S. Fearing from the University of California, Berkeley, at the IEEE International Conference on Intelligent Robots and Systems in San Francisco last week. Special thanks to Paul Birkmeyer for the CLASH videos, and for forgiving me for mistakenly suggesting that he was at Stanford, not Berkeley, which is just about the worst screw-up I could have possibly made.

[ UC Berkeley's Biomimetic Millisystems Lab ]

New Switchblade Robot Design is Leaner, More Agile

The first generation of UCSD's Switchblade robot used a battery pack on a big swingy arm-thing to alter its center of gravity enough to balance on its treads and climb stairs.

At the IEEE International Conference on Intelligent Robots and Systems last week, we spotted an updated version of Switchblade, which trades in the external movable mass for a slick, compact case. Far from compromising its balancing skills, this new design (and some extra brains) has made Switchblade more agile than ever, able to remain stable even when grad students push it with their sandal-clad feet:

This new form-factor makes Switchblade a bit more appealing as a capable replacement for a variety of tactical robots which shall remain nameless but rely on infinitely less cool movable paddle tracks to get themselves over obstacles way less obstacle-y than what Switchblade is able to surmount.

Switchblade has been refined to reduce its cost and complexity, and according to its creator Nick Morozovsky, it's "well suited for a variety of socially-relevant applications, including reconnaissance, mine exploration, and search and rescue." So someone just needs to put it into action already, and give those fancy balancing tricks some practical applications.

[ UCSD Coordinated Robotics Lab ]

Monkeys Use Brain Interface to Move and Feel Virtual Objects

Illustration: A bidirectional brain-machine interface allows monkeys to sense the texture of virtual objects.

Scientists have demonstrated that monkeys using a brain-machine interface can not only control a computer with their minds but also "feel" the texture of virtual objects.

This is the first-ever demonstration of bidirectional interaction between a primate brain and a virtual object.

In the experiment, described in a paper published today in the journal Nature, Duke University scientists equipped monkeys with brain implants that allowed the animals to control a virtual arm, shown on a computer screen, using only their thoughts. This part of the experiment was not a new result -- scientists, including the Duke team, have previously demonstrated that the brain can control advanced robotic devices and even learn to operate them effortlessly.

What's new is that, this time, the scientists are using the brain-machine interface not only to extract brain signals but also to send signals to the brain. The device is actually a brain-machine-brain interface. The monkeys were able to interpret the signals fed to their brains as a kind of artificial tactile sensation that allowed them to identify the "texture" of virtual objects.

"Someday in the near future, quadriplegic patients will take advantage of this technology not only to move their arms and hands and to walk again, but also to sense the texture of objects placed in their hands, or experience the nuances of the terrain on which they stroll with the help of a wearable robotic exoskeleton," study leader Miguel Nicolelis, a professor of neurobiology at Duke, in Durham, N.C., said in a statement.

Initially, the monkeys used their real hands to operate a controller and move their virtual limbs on the screen. During this part of the experiment, the researchers recorded brain signals to learn how to correlate the brain activity to the movement of the virtual arm [see illustration above]. Next, the researchers switched from hand control to brain control, using the brain signals to directly control the virtual arm; after a while, the animals stopped moving their limbs altogether, using only their brains to move the virtual hand on the screen.
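
The decoder at the heart of that calibration step can be thought of as a learned mapping from neural firing rates to arm motion. Here's a deliberately simple sketch of the calibrate-then-switch idea using ordinary least squares on simulated data; the Duke team's actual decoding algorithms are their own, and every name and number below is made up for illustration.

```python
import numpy as np

# Calibration (hand-control phase): collect neural firing rates X and the
# corresponding virtual-arm velocities Y, fit a linear map, then use that map
# alone once control is switched from the hand to the brain.
# Illustrative least-squares decoder only, not the Duke lab's algorithm.

rng = np.random.default_rng(0)

n_samples, n_neurons = 1000, 96           # made-up sizes for the sketch
true_W = rng.normal(size=(n_neurons, 2))  # hidden "true" tuning, simulation only

X = rng.poisson(5, size=(n_samples, n_neurons)).astype(float)  # firing rates
Y = X @ true_W + rng.normal(scale=5.0, size=(n_samples, 2))    # arm velocities

# Fit the decoder from the hand-control recordings.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Brain-control phase: new neural activity alone drives the virtual arm.
x_new = rng.poisson(5, size=(1, n_neurons)).astype(float)
predicted_velocity = x_new @ W_hat
print(predicted_velocity)
```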

The monkeys used their virtual hand to explore three objects that appear visually identical but have different "textures" -- each texture corresponding to different electrical signals sent to the animals' brains. The researchers selected one of the objects as the "target," and whenever the monkeys were able to locate it, they would receive a sip of juice as a reward. After a small number of trials, the monkeys learned to quickly explore the virtual environment, feeling the textures of the objects to find the target.
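
To make that protocol a little more concrete, here's a toy Python sketch of a single trial: each of the three visually identical objects maps to a distinct stimulation pattern, the monkey "feels" each one, and a correct pick earns the juice reward. Aside from the roughly 85 percent figure mentioned below, all of the names and numbers in the code are hypothetical illustration, not the Duke team's actual software.

```python
import random

# Toy sketch of one trial: three visually identical virtual objects, each tied
# to a distinct electrical "texture" pattern delivered to the tactile cortex,
# with a juice reward for settling on the designated target.

TEXTURE_PATTERNS = {"A": 0, "B": 200, "C": 400}  # object -> pulse rate (Hz), made up

def stimulate_tactile_cortex(pulse_rate_hz):
    """Stand-in for the stimulating electrodes."""
    pass

def monkey_picks(target):
    """Pretend the monkey identifies the target by feel roughly 85 percent of
    the time, in line with the better monkey's reported performance."""
    if random.random() < 0.85:
        return target
    return random.choice(list(TEXTURE_PATTERNS))

def run_trial(target="B"):
    # The virtual hand sweeps over each object; touching one triggers its texture.
    for pattern in TEXTURE_PATTERNS.values():
        stimulate_tactile_cortex(pattern)
    return monkey_picks(target) == target  # True means a sip of juice

if __name__ == "__main__":
    trials = 100
    wins = sum(run_trial() for _ in range(trials))
    print("success rate:", wins / trials)
```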

Watch:

One of the monkeys got the task right more than 85 percent of the time; another got it right about 60 percent of the time.

To allow the monkeys to control the virtual arm, the scientists implanted electrodes to record electrical activity of populations of 50 to 200 neurons in the motor cortex. At the same time, another set of electrodes provided continuous electrical feedback to thousands of neurons in the primary tactile cortex, allowing the monkeys to discriminate between objects based on their texture alone.

"It's almost like creating a new sensory channel through which the brain can resume processing information that cannot reach it anymore through the real body and peripheral nerves," Nicolelis said.

A major challenge was to "keep the sensory input and the motor output from interfering with each other, because the recording and stimulating electrodes were placed in connected brain tissue," according to a news report in Nature:

"The researchers solved the problem by alternating between a situation in which the brain-machine-brain interface was stimulating the brain and one in which motor cortex activity was recorded; half of every 100 milliseconds was devoted to each process."

The Duke researcher is leading an international consortium called the Walk Again Project, whose goal is to restore full body mobility to quadriplegic patients using brain-machine interfaces and robotic exoskeletons.

An avid fan of soccer, Nicolelis hopes to have a demonstration ready for 2014, with a quadriplegic child performing the kickoff for the FIFA World Cup in Brazil, his home country.

Images and video: Duke University
