Festo's SmartBird robotic seagull is barely four months old, but already it's flown (or we should probably assume, been flown) from Germany to Edinburgh for the 2011 TEDGlobal conference. Festo's Markus Fischer, the SmartBird project leader, presented a short talk about SmartBird, along with a couple live demonstrations of the robot, complete with a few friendly dive-bombings:
If you're looking for another TEDTalk to while away your Monday with, allow me to recommend Kevin Slavin's fascinating presentation on how algorithms are shaping our world, which uses some vacuuming robot pics that you might recognize to illustrate how abstract programming can have tangible effects on our daily lives.
Micro Air Vehicles (MAVs) are way, way more useful if they can hover. Hovering capability allows MAVs to operate indoors, and to make it happen, you have to rely on a platform like a helicopter (or a quadrotor) or something more exotic. This thing definitely falls into the "more exotic" category -- it's called a cyclogyro, or cyclocopter.
Fundamentally, a cyclocopter is similar to a helicopter in that it creates lift with rapidly moving airfoils. Unlike a helicopter, though, a cyclocopter's airfoils rotate around a horizontal axis, continually changing their pitch in order to generate thrust in a single direction:
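To make that pitch-changing idea concrete, here's a minimal sketch in Python of a sinusoidal pitch schedule, a common simplified model of cycloidal rotor control. The blade count, pitch amplitude, and phase here are purely illustrative assumptions, not the UMD team's actual parameters: each blade's pitch peaks at a chosen azimuth, and shifting that phase angle redirects where the net thrust points.

```python
import math

def blade_pitch(azimuth_rad, max_pitch_rad, phase_rad):
    # Simplified sinusoidal schedule: each blade's pitch oscillates
    # once per revolution, peaking at the azimuth set by phase_rad.
    # Changing phase_rad rotates the direction of the net thrust.
    return max_pitch_rad * math.sin(azimuth_rad - phase_rad)

# Four blades evenly spaced around the horizontal rotor axis
# (illustrative values only).
n_blades = 4
max_pitch = math.radians(25)  # assumed amplitude, not a real spec
phase = math.radians(90)      # thrust-direction control input

for i in range(n_blades):
    azimuth = 2 * math.pi * i / n_blades
    pitch = blade_pitch(azimuth, max_pitch, phase)
    print(f"blade {i}: azimuth {math.degrees(azimuth):5.1f} deg, "
          f"pitch {math.degrees(pitch):6.1f} deg")
```

The point of the sketch is just that thrust vectoring falls out of a single phase parameter, which hints at why cyclocopters are expected to be quick to maneuver.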
It's certainly not a simple system, which is why this idea (which has been around in the form of various prototypes for nearly a century) only recently got off the ground for its first untethered flight, thanks to a lot of hard work from Moble Benedict and his team at the University of Maryland. They've been developing a cycloidal rotor system made of carbon fiber and titanium that's so far been applied to both a quad cyclocopter and a twin cyclocopter, and they've successfully gotten the two-rotor version (with a supplemental tail rotor) into an untethered and more or less stable hover:
You're probably wondering what the advantages of such a complex system are, and luckily, there are a few. Primarily, it's suggested that a cyclocopter would be more efficient than a helicopter, able to generate more thrust for a given amount of power. It's also thought that cyclocopters will prove to be more maneuverable, since the thrust can be vectored very rapidly. On the downside, you've got the overall complexity of the system to deal with, and the weight of the rotors might cancel out any efficiency gains.
There are definitely a lot of questions about the feasibility of a design like this, but in order to figure it out, the best thing to do is just build them and see what happens, and from the sound of things, the UMD team is finally cashing in on about a century's worth of speculation.
Back in October of 2009 when iRobot first announced their new healthcare robotics unit, I posted my prediction about what kind of platform we might expect to see:
Here’s my guess: a small mobile platform that integrates the telepresence features of the ConnectR with some kind of simple artificial intelligence that could locate and recognize people, deliver reminders and information based on natural language voice queries, and summon help in an emergency. It would be connected to the internet and could integrate with, say, a doctor’s office or a pharmacy to provide prescription schedules and monitor drug interactions.
Turns out I wasn't far off with my telepresence concept. Just yesterday, iRobot announced a new partnership with InTouch Health (a remote presence telemedicine solution provider) to "explore potential opportunities for healthcare applications on iRobot platforms such as the iRobot Ava mobile robotics platform."
There aren't any details about what Ava will actually be doing (beyond nebulous statements like "we will revolutionize how people communicate and deliver information through remote presence"), but if anything, Ava is going to be capable of much more than I suggested in my original prediction, which was based on the relatively limited ConnectR platform. In retrospect, this is likely a big part of the reason why iRobot canceled ConnectR in the first place in favor of what would become Ava, but I digress.
It seems likely that Ava is going to start off in a hospital setting, cruising around and letting doctors interact with patients via telepresence. This isn't the first step towards robots replacing human doctors or anything, but if there's a specialist that you want to see who lives across the country, telepresence is far more effective than a phone call. As far as when we can expect Ava to start making house calls, well... Telepresence is one of the few robotics markets that consumers (or small businesses) could actually get involved in, so it's certainly possible that some of the telepresence technology embodied in Ava might eventually end up in some kind of Ava / ConnectR love-child. Now there's a mental picture for you.
This past weekend, a crowd of robot geeks, artists, and filmmakers converged on the futuristic-looking 3-Legged Dog studios in downtown Manhattan for the world's first Robot Film Festival. IEEE Spectrum photo editor Randi Silberman Klett and I were left dazzled by all the robots and people and the more than 50 short films screened, which amounted to an electrical assault on the audience's brain interfaces: the films made people laugh, cry, cringe, but above all think about what it means to build robots and share our world with them.
The organizer, roboticist Heather Knight of Marilyn Monrobot [photo, below], and her coproducers -- Magic Futurebox, Beatbots, and Science House -- did a fantastic job in putting together a robotics extravaganza that included not only screenings (Spike Jonze's "I'm Here" opened the festival) but also live performances, a make-your-own-robot-film workshop, and a robot-themed BBQ. The whole thing culminated in the Bostker Awards Ceremony, with robots and humans parading on the red carpet and 3D-printed bot statuettes awarded for categories like "Best Robot Actor" and "Most Uncanny."
The Digger D-3 is the most recent addition to my own personal list of robots not to stand in front of. It's a mine-clearing robot, and not the sort of mine-clearing robot that pokes around with a metal detector. Instead, it's the sort of mine-clearing robot that just sucks it up and tells the landmines to bring it.
At the front of the D-3 is a giant spinning metal pulverizer thing of death, which has tungsten hammers that beat down a quarter meter into the ground, turning everything they touch into mulch. This includes landmines, and although the mines do tend to blow up before getting shredded, the robot hardly seems to notice:
An operator commands this beast from a safe distance using a remote control unit. The hull of the robot is made up of hardened steel plates in a "V" shape to help limit any damage from antitank mines and unexploded shells up to 81 mm, and the D-3 has been able to successfully ingest mines containing as much as 8 kilograms of explosive, which is nothing to sneeze at. The only potentially vulnerable spots are the air intakes, which are themselves protected from flying shrapnel by special grates. At full throttle, the D-3 can reliably clear a comforting 100 percent of landmines from the ground at a rate of 1,000 square meters per hour [about 10,000 square feet per hour], while also divesting the land of any unwanted shrubbery and unlucky mole colonies.
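That quoted clearing rate makes for an easy back-of-the-envelope calculation. The sketch below uses the 1,000-square-meter-per-hour figure from above; the soccer-pitch-sized example area is just an illustration I've picked, not anything Digger quotes.

```python
# Back-of-the-envelope clearing time, based on the D-3's quoted
# rate of 1,000 square meters per hour.
CLEAR_RATE_M2_PER_H = 1000.0

def hours_to_clear(area_m2, rate_m2_per_h=CLEAR_RATE_M2_PER_H):
    """Hours needed to mulch an area at the given clearing rate."""
    return area_m2 / rate_m2_per_h

# Example: a soccer-pitch-sized field, roughly 105 m x 68 m.
field_m2 = 105 * 68
print(f"{hours_to_clear(field_m2):.1f} hours")  # ~7.1 hours
```

So a field that would take a manual demining team weeks to cover carefully gets pulverized in an afternoon.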
Despite all the protection, machines do break down on occasion, and Digger has taken the somewhat unusual step of making the robot as easy as possible for other people to repair. The guts of the robot are straightforward to access, the armor has been designed to be easy to weld, and Digger even provides plans so that if you have the means, you can build your own spare parts. The reason for doing this is that Digger wants the D-3 to be able to make a difference in far-flung communities crippled by the threat of landmines, and to do that, you need an extremely reliable robot.
The future for the D-3 likely lies in some form of limited autonomy, but don't worry: The people who actually end up using this thing don't like the idea of it being fully autonomous any more than you do. Expect it to eventually be able to obey pretty specific instructions like "go here," as opposed to commands like "hey, why don't you find a spot where you think there might be landmines, beat it into a pulp, and come back when you're done."
Thinking of robotics as a still-emerging field, as we do, we don't often stop to consider how even the relatively recent past has significant historical relevance. Fortunately, this is the job of the Smithsonian Institution, and they seem to be very proactive about it, having just acquired nine robots from Sandia National Labs for their permanent collection.
The robots in the above picture include MARV (Miniature Autonomous Robotic Vehicle), a design from 1996 that used mostly commercial parts and measured only about one inch square [about 6.5 square centimeters]. MARV was one of the first robots to really tackle miniaturization head on, and it inspired all kinds of tiny little descendants, including Sandia's own dime-sized tank.
Also heading to the Smithsonian are SIR, a robot that could navigate through a building autonomously in 1985 (on the left), Dixie, a reconnaissance robot from 1987 (at the back), and some of those crazy hopping robots.
It's fun to think about what robots that we have around us right now are likely to find a place in the Smithsonian's collection within a decade or two... After five seconds of thought (which means I'm missing all kinds of slightly less obvious but equally worthy choices), I'd have to put my money on a Roomba, PR2, Keepon, a Predator, and Wall-E. What do you think?
If there was one bad thing about those lightsaber-wielding robots from Yaskawa that we saw at ICRA, it was that you couldn't bust out your own lightsaber and jump in the middle of the fight. A paper also presented at ICRA showed us robots swinging swords in simulation against humans, but without much in the way of physical combat. Now a student project at Stanford has put these two brilliant ideas together and come up with "JediBot," a robot arm that will actually try to kill you with a foam sword:
"The robot applies quite a bit of force." Get it? Force? Yeah!
This project was part of Stanford's three-and-a-half-week "Experimental Robotics" course, which, from the sound of things, is basically just an excuse for students to mess around with robots to get them to do cool stuff. Also developed as part of the course were a robot that plays golf, several robots that draw, and a robot that can make hamburgers and then drown them in ketchup for you:
We love watching PR2 fold laundry, play pool, bake cookies, and bring us beer, but robots with the capability to do the same kinds of things that humans can do aren't around just to take over for us when we're feeling lazy. Robots also exist to do things that humans can't do, whether that's making fast and precise movements, defusing bombs, or lending a gripper to a person with a disability.
Henry Evans, the dude in the above video, has been a quadriplegic for the last ten years, having suffered a stroke when he was just 40 years old. He saw a PR2 on TV last year, and thought that a robot might be a handy thing to have around the house to help him live a bit more independently. Georgia Tech's Healthcare Robotics Lab and Willow Garage have been collaborating with Henry since then, and he's been able to use a PR2 to do things like shave himself and scratch itches when he has them, things for which Henry has been dependent on other people for the last decade.
Part of what makes the PR2 ideal for this sort of thing are its high-level autonomous capabilities. Using a head tracker, Henry can give the robot commands to navigate to specific locations or fetch objects, and the PR2's sensors and software handle the rest. Of course, it's not realistic to hope that every disabled person will be able to one day get a PR2 (each costs $400,000). What is realistic (I hope) is that what Willow Garage and Georgia Tech are learning here will help them to design better software and hardware for the next generation of home service and healthcare robots, which will be affordable so more people can have them.
This project is an important reminder that while most of us are hoping that robots will at some point step in and make our lives easier and more convenient, most of us actually don't really need robots. Some people do need them, though, and it's great to see companies and research groups with so much expertise in this area working to make robots available where they have the potential to do the most good.
We're always impressed by how much expressiveness and emotion can be squeezed out of even the simplest robot faces if they're cleverly done, and Emys (for "emotive head system"), a robot from the Wroclaw University of Technology in Poland, is a fantastic example. Just watch:
Yeah, I didn't entirely get all that either, but that "surprise" face is priceless. For a less, um, dramatic run-through of all of the expressions that Emys can make, there's another video here.
Emys is part of the LIREC Project, which is a European research project that's "exploring how to design digital and interactive companions who can develop and read emotions and act cross-platform." In short, they're trying to figure out how to make robots a little more fun to hang out with, by giving them some tools to tell how you're feeling, and giving you an expressive face (of sorts) to look at.
This disembodied head also comes with a fancy wheeled Segway-style body called FLASH, and there's even an arm. Just one arm, yeah, but that's enough to shake hands and give a thumbs-up, and who could want anything more than that?
Professor Hideyuki Sawada from Kagawa University in Japan was at Robotech 2011 showing off that incredibly bizarre robot mouth of his. It's based as closely as possible on a human mouth, complete with an air pump for lungs, eight fake vocal cords, a silicone tongue, and even a nasal resonance cavity that opens and closes. Like other robot mouths, it uses a microphone to listen to itself speak (or whatever you want to call it) and analyze what it hears to try to figure out how to be more understandable and less, you know, borderline nightmarish.
I know, there wasn't a demo in that vid. But I've got one right here for you, of this robot attempting to sing a Japanese children's song called "Kagome Kagome." You can hear what it's supposed to sound like over on Wikipedia before or after you listen to the robot have a go, but either way, you're not gonna recognize much. The action starts at about 30 seconds in:
Wonderful. Don't get me wrong, on principle this is some undeniably fascinating stuff. I have to wonder, though, whether the effort it would take to get this thing into a humanoid robot would really pay off relative to a voice synthesis system based on software and speakers. I guess there might be other advantages to a bionic mouth, but I'll leave the speculation up to you.