The current generation of bicycle-riding robots (I'm talking about those crazy kids from Murata) is extremely complicated, relying on giant gyroscopes and thick wheels to stay upright even while stationary. This is certainly a neat trick, but it's not something that most humans can pull off. It's not a problem that robots are better at something than we are (by now, we're used to it), but there's something to be said for human emulation, too.
It turns out that getting a robot to ride a bicycle doesn't need to involve much more than a hobby-level humanoid with a relatively simple gyroscope that sends steering commands to keep things generally upright. This KHR-3HV bipedal robot (which can be yours for about $2,200) has a nifty custom bike that it got from who knows where, and it can zip around under remote control at up to 10 km/h, even making its own starts and stops:
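While you watch it go, it's worth noting that the balancing trick itself is simple enough to sketch in a few lines of code. To be clear, this is my own illustrative Python, not the robot's actual controller, and every gain and name in it is made up; only the principle is real, and it's the same one human riders use: steer into the fall.

```python
# A minimal sketch of the "steer into the fall" balance loop. The gains
# here are hypothetical; only the principle matches the actual robot.

KP = 4.0  # steering response to lean angle (made-up gain)
KD = 0.8  # damping on lean rate, so the bike doesn't over-correct

def steering_command(lean_angle: float, lean_rate: float) -> float:
    """Map the gyro's lean estimate (radians, rad/s) to a steering angle.
    Steering toward the lean moves the wheels' contact line back under
    the center of mass, which straightens the bike out."""
    return KP * lean_angle + KD * lean_rate

# Example: leaning 0.05 rad to the right and tipping over at 0.1 rad/s
print(steering_command(0.05, 0.1))  # -> 0.28 rad of steer to the right
```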
Robots have ears. They're called microphones, and you usually find them just inside some tiny little hole somewhere. But you have to figure that there are good reasons why animals with big ears exist: those ears confer an advantage. Namely, big ears let animals hear quieter sounds and localize those sounds more precisely.
This is the idea behind "active soft pinnae," which is fancy roboticist talk for "ears that wiggle." The robotic ear in the picture above is a reasonably faithful reproduction of a kitty ear, including a fake fur covering on the back and the ability to both rotate side to side and deform downwards. There's a microphone buried down inside the ear, of course, but the external structure is the important part.
So what good is it? I mean, you can ask your cat, but testing has shown that it's possible to pinpoint the direction (azimuth and elevation) to a sound with just two wigglable ears instead of a complex microphone array. Furthermore, the ears can localize sounds by moving independently of the robot's head or body, which is much more efficient than swiveling the whole robot around. And of course, ears like these are awfully cute, and with the addition of some touch sensors, you could give your robot the friendly scritching that it deserves.
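If you're wondering how a measly two microphones get you a direction at all, the textbook starting point is the interaural time difference: sound from off to one side reaches one ear slightly before the other. Here's that baseline idea in Python; this is generic acoustics, not code from the pinnae research, and the spacing and timing numbers are purely illustrative.

```python
import math

# Two fixed microphones a distance d apart: the arrival-time difference
# delta_t satisfies delta_t = d * sin(azimuth) / c, so the bearing is
# azimuth = asin(c * delta_t / d). Numbers below are illustrative.

SPEED_OF_SOUND = 343.0  # m/s at room temperature
EAR_SPACING = 0.12      # m; a plausible robot-head width (assumption)

def azimuth_from_itd(delta_t: float) -> float:
    """Bearing to a sound source (radians) from the time difference
    between the two ears (positive = right ear heard it first)."""
    x = SPEED_OF_SOUND * delta_t / EAR_SPACING
    return math.asin(max(-1.0, min(1.0, x)))  # clamp against noise

print(math.degrees(azimuth_from_itd(0.00017)))  # ~29 degrees off-center
```

A fixed pair like this is ambiguous front-to-back and says nothing about elevation; the whole point of the movable pinnae is that rotating and deforming each ear changes how it filters incoming sound, and comparing measurements across ear poses is what recovers elevation without a full array.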
Technically, what this robot uses is hot-melt adhesive, or HMA. This is the stuff that comes out of hot glue guns, and it goes from a solid to a sticky liquid when it's passed through a heating element. As it cools, it solidifies again. The robot uses this property to temporarily bond its limbs to a vertical surface one by one and hoist itself up, unsticking itself as it goes by re-heating the blobs of glue that it sets down:
By now, you've probably spotted several issues that this robot has to deal with. First, it's very, very slow, since it has to wait for the adhesive to cure every time it takes a step, a 90-second process. And second, it leaves a trail of sticky little glue spots on every surface that it climbs, making its usefulness questionable in many (if not most) environments.
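The gait itself, meanwhile, is dead simple. Here's my own Python caricature of one climbing step, with a completely made-up hardware interface; the only number taken from the description above is that 90-second cure time.

```python
import time

CURE_TIME_S = 90.0  # from the text: how long each glue blob takes to set

class Limb:
    """Hypothetical stand-in for one glue-footed limb."""
    def heat_glue(self):         print("melting foot pad")
    def stop_heating(self):      print("letting glue cool")
    def press_to_surface(self):  print("pressing blob against the wall")
    def pull_from_surface(self): print("peeling foot free")

def attach(limb: Limb):
    limb.heat_glue()         # melt the HMA on the foot pad
    limb.press_to_surface()  # squash the molten blob onto the surface
    limb.stop_heating()
    time.sleep(CURE_TIME_S)  # wait out the cure -- the slow part
    # the limb now bears load passively, with zero power draw

def detach(limb: Limb):
    limb.heat_glue()          # re-melt the old blob...
    limb.pull_from_surface()  # ...and peel free, leaving a glue spot behind

# One "step" is detach -> move -> attach, one limb at a time, so the
# other limbs stay bonded and the robot never lets go of the wall.
```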
So yes, a few things need to be addressed, but this technique has a bunch of upsides, too. The biggest one is that glue, being glue, sticks to just about anything. The surface doesn't have to be especially rough, especially smooth, or especially magnetic, which makes this approach more versatile than just about every other robot adhesion system I can think of off the top of my head.
Also, the hot melt adhesive can support a lot of weight, and it can do it completely passively: you don't need to expend energy once the adhesive sets to keep from falling. The bonding strength of the HMA in its solid state is such that a four square centimeter little patch can hold a staggering 60 kilograms, easily enough to hold this robot plus a fairly gigantic payload, most of which is likely going to have to consist of extra sticks of glue.
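That claim is easy to run the numbers on:

```python
# Back-of-the-envelope check on the bond strength figure quoted above
load_kg = 60.0  # claimed load on one patch
area_cm2 = 4.0  # patch size
print(load_kg / area_cm2)               # 15 kg per square centimeter
print(load_kg / area_cm2 * 9.81 / 100)  # ~1.47 MPa of adhesive strength
```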
Robots are quite good at doing very specific tasks. Arguably, doing very specific tasks is what robots are best at. When you put a robot into an unknown situation, however, odds are you're not going to have a design that's optimized for whatever that situation ends up being. This is where modular robots come in handy, since they can reconfigure themselves on the fly to adapt their hardware to different tasks, and the Modular Robotics Lab at the University of Pennsylvania has come up with a wild new way of dynamically constructing robots based on its CKBot modules: spray foam.
The process starts with a "foam synthesizer cart" that deploys several CKBot clusters, each consisting of a trio of jointed CKBot modules. The CKBot clusters can move around by themselves (sort of), and combined with some helpful nudging from the cart, they can be put into whatever position is necessary to form the joints of a robot. The overall structure of the robot is created with insulation foam that the cart sprays to connect the CKBot clusters in such a way as to create a quadruped robot, a snake robot, or whatever else you want. Watch:
Having a robot that shoots foam is good for lots more than building other robots; for example, Modlab has used it to pick up hazardous objects and to quickly deploy permanent doorstops. There's still some work to be done with foam control and autonomy, but Modlab is already thinking ahead. Way ahead:
"By carrying a selection of collapsible molds and a foam generator, a robot could form end effectors on a task-by-task basis -- for example, forming wheels for driving on land, impellers and oats for crossing water, and high aspect ratio wings for gliding across ravines. Molds could also be made of disposable material (e.g. paper) that forms part of the final structure. Even less carried overhead is possible by creating ad-hoc molds: making a groove in the ground or placing found objects next to each other."
With this kind of capability, you could (say) send a bunch of modules and foam to Mars, and then create whatever kind of robots you need once you get there. And with foam that dissolves or degrades, you could even recycle your old robots into new robots if the scope of the mission changes. Modular robots were a brilliant idea to begin with, but this foam stuff definitely has the potential to make them even more versatile.
Once a secret project, Google's autonomous vehicles are now out in the open, quite literally, with the company test-driving them on public roads and, on one occasion, even inviting people to ride inside one of the robot cars as it raced around a closed course.
This could be it, folks. The one killer application that the entire robotics world has been waiting for. It's bold, it's daring, it's potentially transformative, and you know you want it.
Ben Cohen and his colleagues from the GRASP Lab at the University of Pennsylvania devoted literally an entire weekend to programming their PR2 robot, Graspy, to handle POOPs. POOPs (Potentially Offensive Objects for Pickup) are managed by the robot using a customized POOP SCOOP (Perception Of Offensive Products and Sensorized Control Of Object Pickup) routine. While a POOP can be just about anything that you'd rather not have to pick up yourself, in this particular case, the POOP does happen to be poop, since arguably, poop is the worst kind of POOP.
Oh yes, there absolutely is video:
While you can't hear it in the video, Graspy begins its task by declaring in a vaguely disappointed robotic monotone, "Time for me to scoop some poop." You get the sense that this $400,000 robot is asking itself whether this kind of work is really what it signed up for. Using its color camera, the robot first identifies poops based on their color, navigates to said poop, and then, using a scooping tool designed for humans, it performs the scoop. Haptics are employed to ensure that each poop scoop is a success; if not, the robot will give it another try. Failure doesn't happen often, though: Graspy was able to successfully scoop poop about 95 percent of the time over more than 100 trials, at a rate of better than one poop per minute.
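If you're curious what the perception half of this looks like, here's a toy version of color-based blob detection using OpenCV. This is emphatically not the team's actual POOP SCOOP code (that's in the ROS stack linked below), and the "brown" thresholds are pure guesswork on my part.

```python
import cv2
import numpy as np

# Hypothetical HSV bounds for "poop brown" -- the real system's
# thresholds would be tuned to its own camera and lighting
BROWN_LO = np.array([5, 80, 40])
BROWN_HI = np.array([25, 255, 200])

def find_poops(bgr_image: np.ndarray, min_area: float = 200.0):
    """Return pixel centroids of brown-ish blobs in a camera frame."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, BROWN_LO, BROWN_HI)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:  # ignore tiny speckle
            m = cv2.moments(c)
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```

The rest of the pipeline is then just a loop: drive to the nearest centroid, scoop, and use the haptic check to decide whether that particular poop needs a second attempt.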
There's still some work to be done in order to get PR2 scooping poop like a pro (or an obedient human). For example, it's currently only able to handle high-fiber poop, although that may be solvable with a different tool. If you think you have a clever way of making PR2 a better poop scooper, you can download the POOP SCOOP ROS stack and contribute to the betterment of humanity through robotics at the link below.
"POOP SCOOP: Perception Of Offensive Products and Sensorized Control Of Object Pickup" was presented at the PR2 workshop at IROS 2011.
We have a few more solid weeks' worth of IROS awesomeness to share with you, but since it's Friday and all, we thought it might be a nice time to put together a little gallery of some of the robots from the expo floor of the IEEE International Conference on Intelligent Robots and Systems, which took place last month in San Francisco.
Most of the bots you'll recognize easily, but keep an eye out for some (okay, a LOT) of those little Kilobots, as well as a guest appearance by Disney's swarming display robots. Enjoy!
And just in case that wasn't enough for you, Willow Garage (one of the IROS sponsors) also put together this little video montage:
As with most, uh, "research" projects like this, there's supposedly some larger purpose to it. Something about the potential of multi-sensor integration in industrial manipulation. Or whatever. I don't buy it, of course, but we can certainly applaud the fact that the robot was able to make 29 moves in a row, which means that it added nearly ten solid layers of blocks to the top of the tower without knocking it over. Time to preemptively surrender, folks. Here's one more vid of the robot making a move:
Meka Robotics is based in San Francisco, which is lucky for us, since that made it pretty much impossible for them not to show up at the IEEE International Conference on Intelligent Robots and Systems. They're probably best known for their underactuated, compliant hand (and the arm that goes with it) and, more recently, for their humanoid head. The S2 head is notable because it manages to maintain a high degree of expressiveness (those eyes are amazing) while entirely avoiding the Uncanny Valley effect, thanks to its vaguely cartoonish look. We asked Meka's co-founder, Aaron Edsinger, to take us through it:
The particular robot in this video is called Dreamer, and it belongs to the Human Centered Robotics Lab at the University of Texas, Austin. Dreamer's head was a cooperative effort involving Meka and UT Austin professor Luis Sentis, who came up with the subtle and effective anime look. Part of what helps keep Dreamer's motions so compliant (and lifelike) is its software: called "Whole Body Control," it's a collaboration between UT Austin, Meka, Stanford, and Willow Garage.
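If you're wondering what "compliant" means in software terms, the general idea (though not necessarily how Whole Body Control implements it) is impedance control: instead of rigidly servoing every joint to a position, you command torques that act like a soft virtual spring and damper pulling the hand toward its target. Here's a bare-bones sketch, with entirely made-up stiffness and damping values:

```python
import numpy as np

# Hypothetical virtual spring-damper parameters (not Meka's)
K = np.diag([200.0, 200.0, 200.0])  # stiffness, N/m
D = np.diag([20.0, 20.0, 20.0])     # damping, N*s/m

def compliant_torques(jacobian, hand_pos, hand_vel, target_pos):
    """Joint torques for a spring-damper pull toward target_pos.

    jacobian: the 3xN manipulator Jacobian mapping joint velocities
    to hand velocities; torques = J^T * desired hand force.
    """
    force = K @ (target_pos - hand_pos) - D @ hand_vel  # desired hand force
    return jacobian.T @ force                           # map to joint torques
```

Because the virtual spring is soft, a person pushing on the arm just deflects it instead of fighting a stiff position servo, which is a big part of why Dreamer's motion looks (and feels) lifelike.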
Meka is also offering an entirely new system consisting of an arm, gripper, sensor head, and mobile base for $200,000. It's no coincidence that the one-armed PR2 SE costs exactly the same: the NSF's National Robotics Initiative provides research grants that include up to $200,000 for research platforms. Yep, the government is basically giving these things away for free; all you have to do is convince them that you deserve one, and then pick your flavor.
This feisty little guy is a quadruped robot called SQ1. It's a project by South Korean company SimLab, whom we met at the IEEE International Conference on Intelligent Robots and Systems last month. Their RoboticsLab simulation software is being used to figure out how to get the quadruped to walk without actually, you know, having to risk a trial-and-error approach on a real robot. And it works! Or rather, it mostly works:
We don't know too much about it, but apparently, there's a much larger (think BigDog/AlphaDog sized) quadruped in existence (sponsored by the South Korean government). This smaller robot is being used to test out different gaits that have proven themselves in simulation, before the full-sized (and more expensive) version tries not to fall over on its own.