Disney Research seems to be bullish on this whole robotics thing, and that partnership with ETH Zurich that we heard about at ICRA looks to be developing nicely. Their latest creation is a robot that can climb up buildings and then leap off, deploying an absolutely adorable little parafoil to come to a safe landing. Watch the video, but be warned: the drama has been cranked up to intense.
The wall climbing talent comes courtesy of a vortex generator (a propeller of sorts), which creates a freakin' tornado inside a cylinder in the body of the robot. The interior of the vortex is a region of low pressure, which allows the bot to stick to vertical surfaces, and since the vortex itself forms an air barrier, there's no need for a seal. This means that the bot doesn't need a smooth surface to climb (although it probably helps).
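For a rough sense of how much grip a vortex like that can provide, the holding force is approximately the pressure deficit inside the vortex multiplied by the cylinder's cross-sectional area. Here's a back-of-envelope sketch; the pressure and diameter figures are illustrative assumptions, not Paraswift's actual specs:

```python
import math

def adhesion_force(pressure_deficit_pa: float, cylinder_diameter_m: float) -> float:
    """Approximate force (newtons) pinning the robot to the wall:
    pressure deficit times the cylinder's cross-sectional area."""
    area = math.pi * (cylinder_diameter_m / 2) ** 2
    return pressure_deficit_pa * area

# Hypothetical numbers: a 2 kPa deficit over a 20 cm diameter cylinder
force = adhesion_force(2000, 0.20)
print(f"{force:.1f} N")  # ≈ 62.8 N, roughly enough to hold a few kilograms
```

Even a modest pressure deficit adds up fast over a decent-sized opening, which is why this trick works without any kind of physical seal.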
The robot, called Paraswift, was originally intended to be used for entertainment. And you are entertained, are you not? The team from ETH Zurich, though, sees more possibilities for their bot, like using it to create vertical 3D models of tall buildings. They also want to automate the parachute deployment so that if the robot ever slips and falls, it'll land safely. Sounds like a good idea to me, although they'll have to work on the parafoil steering, lest Paraswift suffer the same fate as iRobot's poor little PackBot Griffin.
Yesterday we posted a video showing how robots can use machine learning to adapt to objects and situations that they've never seen before. The same research group at Cornell has also been working on clever ways to allow robots to efficiently interpret scenes and identify objects, which is one of those things that robots are going to have to be good at before they can really become helpful in our homes.
Humans have the ability to look at a scene and immediately pick out important elements while ignoring everything else, because we have brains that are awesome like that. Robots, in general, don't really work that way. They have to examine each and every pixel of what they're looking at and decide whether or not it's something they're interested in. So if you ask a robot to go find you (say) a computer keyboard, it's got to enter a room and methodically search every pixel-equivalent area until it finds what it's looking for. While the robot is likely to find its target in this manner, it'll take roughly forever to do so.
What the Cornell research group has been teaching robots to do is to be able to rapidly break down a scene into general categories, and then recognize how some categories are related to others. In a general sense, this is the same thing that we humans do. For example, if you enter a room and want to locate a computer keyboard, you first (subconsciously) identify the places that you don't need to pay attention to, like the walls and ceiling and probably the floor. Next, if you don't immediately see the keyboard, you might take a closer look at the places where keyboards like to hang out, namely in close proximity to a computer. You see a computer, you get closer, and hey, there's the keyboard. Nicely done.
Now, watch a robot do the exact same thing:
The key to getting this to work goes back to the machine learning and adaptability system that Cornell has been working on. First, they showed a robot a series of about 50 different scenes, with most objects in the scene labeled. The robot read the labels and then remembered characteristics of the associated objects (like color, texture, and what other objects are nearby), such that it was then able to use those characteristics to categorize entirely new objects in environments it had never experienced.
Armed with this knowledge and way of thinking (if you want to call it that), the robot was then able to enter a room it had never been in before in search of a keyboard, and "know" that since it spotted a monitor, and keyboards are often found near monitors, it should go check out the monitor. And of course, the keyboard is right there.
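The general idea of "keyboards hang out near monitors" can be captured with object co-occurrence statistics. Here's a toy sketch of that principle (my illustration, not Cornell's actual system): count which labeled objects appear together in training scenes, then use those counts to pick which already-spotted object is the best landmark for the target. All scene data here is made up:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(scenes):
    """Count how often each pair of objects appears in the same scene."""
    counts = Counter()
    for scene in scenes:
        for a, b in combinations(set(scene), 2):
            counts[frozenset((a, b))] += 1
    return counts

def best_landmark(target, visible, counts):
    """Among objects already spotted, pick the one most often seen with the target."""
    return max(visible, key=lambda obj: counts[frozenset((target, obj))], default=None)

# Hypothetical labeled training scenes:
scenes = [
    ["monitor", "keyboard", "mouse", "desk"],
    ["monitor", "keyboard", "mug"],
    ["sofa", "tv", "remote"],
    ["desk", "lamp", "papers"],
]
counts = cooccurrence(scenes)

# Robot enters a new room, sees a desk, a monitor, and a sofa, wants a keyboard:
print(best_landmark("keyboard", ["desk", "monitor", "sofa"], counts))  # monitor
```

Instead of sweeping the whole room, the robot heads for the monitor first, which is exactly the shortcut the video shows.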
This ability to follow chains of reasoning to identify, categorize, and locate objects is obviously still being developed, but again, giving robots the ability to understand context and use it to adapt to new things brings them that much closer to being the machines that take over all of our household chores and leave us with nothing better to do than lie around and play video games all day. Because that's what we all secretly want, right? ...Right?
You've heard it before: robots are bad at new things. They're bad at adapting to new situations, they're bad at recognizing new objects, and they're bad at coming up with their own ideas about how to carry out tasks in changing environments. One way to deal with this general ineptitude is to provide a robot with endlessly detailed instructions to minimize the amount of unfamiliar things it experiences. The world is an awfully big place, however, and if we want robots to be able to do robot stuff outside of tolerably controlled environments, such a brute force approach may not be the best way to go.
A much more elegant solution is to teach robots to think and adapt for themselves. Sounds easy, right? Right! I mean, wrong! It's not easy at all. But once you figure out how to do it, you can plop a robot down anywhere and ask it to do anything and it'll have a reasonable chance of figuring it out, or at the very least, be able to ask an intelligent question or two to get going.
At Cornell's Personal Robotics Laboratory, a research group is teaching a robot to generalize groups of objects, which is one of the most basic aspects of reliable adaptability. For example, instead of teaching a robot "this is a cup, and this is a slightly different cup" and so on, you can instead teach a robot to recognize features common to all cups, so that when it sees something cup-like, it can say to itself, "hey, that's a small container with a handle, I bet it's a cup!"
This same sort of learning method can also be applied to actions. By teaching a robot how to pick a few different types of cups, the robot can then generalize the lessons and apply them to completely new cups. And if you teach the robot to put a few dishes into a dish rack, it can then use what it knows about the objects and the rack to figure out how to put pretty much anything in there:
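The "small container with a handle, I bet it's a cup" reasoning boils down to scoring new objects against features the category shares. Here's a minimal sketch of that kind of feature-based generalization; the feature names, weights, and threshold are all illustrative assumptions, not the lab's actual model:

```python
# Hypothetical weighted features that cup-like objects tend to share
CUP_FEATURES = {"has_handle": 1.0, "is_concave": 2.0, "small": 1.0}

def cup_score(obj_features: set) -> float:
    """Sum the weights of the cup-like features this object has."""
    return sum(w for f, w in CUP_FEATURES.items() if f in obj_features)

def looks_like_cup(obj_features: set, threshold: float = 3.0) -> bool:
    return cup_score(obj_features) >= threshold

# A never-before-seen mug still passes, a plate does not:
print(looks_like_cup({"has_handle", "is_concave", "small"}))  # True
print(looks_like_cup({"flat", "round", "small"}))             # False
```

The payoff is that the robot never needs to have seen your particular mug; anything scoring high enough on shared features gets treated like a cup, grasps and all.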
If you're the type who's impressed by numbers, this robot was able to put unseen objects into the right spot in the dish rack in the right way 92% of the time, which is about 92% better than I do. Touché, dishwasher-loading robot, touché.
The U.S. Air Force has been looking for what they're calling a "Lethal Miniature Aerial Munition System" to be fielded with special ops units next year. If the name of the program doesn't explain it, the above pic should: they essentially want a mortar round with wings, a camera, and a little engine. In other words, a surveillance UAV that can suicidally attack targets on command.
There are several systems with this capability currently in the works, but the operational requirements and principles are all the same. LMAMS needs to weigh three kilos or less, including the vehicle and the launching system. It needs to be able to deploy and fire in under 30 seconds, reach an altitude of 100 meters, and acquire and track a human-sized target in a further 20 seconds. At that point, the drone can either dive at its target, landing within a one meter radius and exploding its small (but still quite lethal) warhead, or it can loiter for up to 30 minutes, sending back live video.
Now, this seems like a fairly dangerous little robot to have around, but before you get all worked up about killer robots and stuff, remember that these special ops units already have tools to deal with situations that the LMAMS is designed for: namely, blindly chucking dumb mortars and grenades at things, calling in air support, or putting themselves in harm's way to get a better view of their target. All the LMAMS does is reduce risk and collateral damage. Or at least, that's the idea, but whether it'll work in practice remains to be seen.
The UAV in the picture is Textron Defense Systems' T-RAM, which stands for Tactical Remote Aerial Munition. You can watch it in action in the video below, which features a soundtrack that's inappropriately Indiana Jonesish:
Cyborg insects have been flying under remote control for over two years now, but the strict weight limits imposed by the fact that you're trying to turn a bug into a functional UAV mean that their usefulness is still somewhat constrained. A rhinoceros beetle, for example, can manage to haul about 30% of its own weight as payload. This works out to be somewhere around 2.5 grams, which is not a whole heck of a lot, and if you're eating up a significant portion of that space with a battery, it doesn't leave much room for (say) a camera or missiles.
One option is a small nuclear battery, but a much more elegant solution (with less potential for creating a giant mutant cyborg insect of doom) is to simply harvest power directly from the insect itself. Researchers from the University of Michigan and Western Michigan University have developed a prototype insect energy harvester, pictured above, made of a piezoelectric material that converts wingbeats into electricity. By mounting one of these piezoelectric springs on each wing, simulations show that over 100 microwatts (μW) can be harvested, which is significantly more than the maximum of 80μW it takes to control the insect itself.
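Running the article's numbers shows why this matters: the harvested power more than covers the cost of steering the insect, leaving a small surplus for everything else. A quick arithmetic check, using only the figures quoted above:

```python
# Figures from the text
harvested_uw = 100   # simulated harvest from piezoelectric springs on both wings, μW
control_uw = 80      # maximum power needed to control the insect, μW

margin_uw = harvested_uw - control_uw
print(f"Surplus for payload electronics: {margin_uw} μW")  # 20 μW

# Payload budget for a rhinoceros beetle (also from the text)
payload_g = 2.5                  # ≈ 30% of body weight
payload_fraction = 0.30
body_weight_g = payload_g / payload_fraction
print(f"Implied beetle body weight: {body_weight_g:.1f} g")  # ≈ 8.3 g
```

Twenty microwatts isn't much, but every microwatt the harvester supplies is battery mass the beetle doesn't have to carry out of its 2.5-gram budget.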
While this level of power isn't going to be able to charge those miniaturized laser cannons that I'm reasonably sure DARPA is working on, it does significantly reduce the energy drain on any auxiliary power system that might have to be carried along anyway. And as with all electronics, efficiency will only go up as mass goes down, until ultimately power will only be limited by the lifespan of the insect and the amount of tasty fruit that you can get your bug to chow down on in the middle of a mission.
This very cool demo comes from a company called AnthroTronix. Their AcceleGlove uses accelerometers to detect hand movements, and it's precise enough to get a little robot to pick up a ball in its gripper:
AirBurr, or the original AirBurr at least, is from 2009. It's got a rotor like a helicopter plus control surfaces like an airplane, and since the whole thing is completely enclosed, it can bounce off walls and land and take off on its own, ideal for indoor flying:
Any doubts that AeroVironment's hummingbird MAV could fly outside of a staged demo were erased when it took off from a table at an AUVSI press conference:
Noisy little bugger, but very clever.
This TEDTalk from MIT's Skylar Tibbits shows several examples of how robots (and structures in general) can be programmed to be self-assembling and self-reconfiguring:
Want to know a surefire way of creating jobs and spurring innovation in the aerial robotics market? It's easy: let people fly UAVs. This isn't something that's easily done here in the U.S. thanks to the FAA being, oh, about a decade (or two) behind the times, but over in Wales, they've already gone and given over an entire airport plus 1300 square kilometers [500 square miles] of airspace to UAV testing.
Conveniently located right off the B4333 between Blaenannerch and Aberporth (and a short distance from Brynhoffnant, Llangranog, Gwbert, and Mwnt), West Wales Airport has just been officially designated as a UAV testing area by the United Kingdom's Civil Aviation Authority. This means that you can go out there and test your UAVs over a large area of unrestricted airspace, with civilian and military manned aircraft passing through from time to time that your robot should probably know how not to get run over by. Or vice versa, if you like to think big.
While the U.K. is taking a proactive approach to the whole UAV issue and encouraging small commercial companies to develop the technology, the FAA (which controls airspace here in the U.S.) is being a gigantic stick-in-the-mud by saying that they're, like, totally working on some rules or something, honest. Just be patient. For the next thirteen years. That's right, their target date for letting you fly an unmanned aircraft higher than 400 feet or out of visual range is apparently 2025. They hope.
I can understand why the FAA is cautious and wants to make sure that UAVs are operated reliably and safely in commercial airspace, but they can't just stick their heads in the sand for the next decade, or the rest of the world is simply going to keep making progress and small American companies who want to develop UAV technology are going to be forced to move to some unpronounceable town in Wales that's just to the northeast of Pantygrwndy.
For their part, the Welsh seem pretty excited about the prospect of UAVs being developed in their backyard: "we can do forestry, we can do whale-watching, we can do pipeline surveys or even peat bog monitoring," they say. Yes, that's right: Wales does apparently have whales. Fascinating.
Last time a circuit breaker went bad on the International Space Station, astronauts had to go outside and swap out the old breaker box for a new one. As much as you and I would love to be tasked with a spacewalk, it's a bit dangerous, and it takes up a lot of time that astronauts could better spend doing science and fooling around. Now that Dextre the space robot is operational, though, the humans get to sleep in while the robot does the housework.
Of course, space housework is a little bit different than terrestrial housework. Your house has circuit breakers too, and you may even need to replace them from time to time, but it's likely a bit less intensive than what has to happen on the ISS. Controlled from the ground, Dextre -- which according to one of its creators could "insert a DVD into a player" -- spent Sunday and Monday nights unbolting the bad breaker box and swapping it out for a new one on a nearby spare parts pallet. It was fast, easy, and there was no need for any of the human astronauts to even bother waking up.
So on the upside, having Dextre -- and other robots -- on the station to do important work is great. But the question is starting to be, is it worth it to have humans exploring space at all? We're very fragile, and keeping us alive is a complicated and expensive chore. There is definitely something to be said for having us go out and explore our solar system in person so that we can all feel as though our species is experiencing something new, but what if we could field five or ten times as many robotic exploration missions for the same amount of resources?
In any case, it's a little bit ironic that we've now got this big and capable and impervious space robot living outside the ISS, with the primary job of making sure that the puny little humans inside stay safe and sound.
The go-to way of delivering medical supplies to rural areas of developing nations is to not deliver them at all, and force sick people to hike miles through mountains and jungles to get the drugs they need. That, or some dude delivers them on a motorbike. And if the weather's been bad and the roads are washed out, well, good luck.
Solution? Do it all by air. The only way to do that efficiently (or at all) is to scale it way, way down from planes and helicopters to small UAVs. This is the concept behind Matternet, which seems to be both a technology and a company that wants to revolutionize the way medicine is delivered to the billion (with a "b") or so people who live completely cut off from road networks for at least some of the year. Matternet will be a network of autonomous quadrotor UAVs that use GPS and a beacon system to rapidly deliver small packages (containing drugs or medical testing supplies) to people who can't otherwise get them. Their first commercial platform (look for it in three to six months) will be able to fly 10 km while carrying a 2 kg load, and it should be durable enough to make thousands of trips in variable weather. All this for only a few hundred dollars a unit. If it works out, Matternet could mean a drastic quality of life improvement for a lot (a lot) of people.
Matternet will develop in three distinct phases: phase one involves using a single UAV for point-to-point cargo transport. For example, a clinic uses a UAV to deliver drugs to an otherwise inaccessible nearby village in 30 minutes or less (or they're free). Phase two will add remote, autonomous recharging stations to allow UAVs to juice up in between deliveries, enabling them to roam farther afield and make multiple deliveries without having to return to base. Connect the dots between base stations and you have a delivery network. In phase three, all of these discrete networks grow large enough that they overlap, and it becomes possible to use a continuous chain of autonomously cooperating UAVs to transport things across entire continents very quickly and for cheap. Eventually, the idea is that Matternet turns into a sort of Internet for stuff, where you can make a request and get a physical object delivered to you. Matternet. Get it?
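The phase-two "connect the dots" idea is really just graph reachability: treat each recharging station as a node, and a village is servable if you can reach it through hops that each fit within the UAV's range. Here's a toy sketch of that, using the 10 km range from above; station names and coordinates are entirely made up for illustration:

```python
from collections import deque

RANGE_KM = 10.0

# Hypothetical station coordinates on a flat km grid
stations = {
    "clinic": (0, 0),
    "village_a": (0, 8),    # 8 km from the clinic: one hop
    "village_b": (0, 17),   # too far direct, but 9 km from village_a
    "village_c": (30, 30),  # out of reach of everything
}

def dist(a, b):
    (x1, y1), (x2, y2) = stations[a], stations[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def reachable(start):
    """All stations reachable from start via hops of at most RANGE_KM (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        here = queue.popleft()
        for s in stations:
            if s not in seen and dist(here, s) <= RANGE_KM:
                seen.add(s)
                queue.append(s)
    return seen

print(sorted(reachable("clinic")))  # village_b is served only via a recharge at village_a
```

Each new recharging station can pull a whole cluster of previously unreachable destinations into the network, which is why the discrete networks in phase three eventually merge into something continent-spanning.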
The obvious question now is, why stop with essential goods like medicines? Forget about the U.S. Postal Service, UPS, FedEx, and all of those short-lived microdelivery services. Autonomous UAVs are faster, cheaper, more efficient, more environmentally friendly, easier to scale, and don't arrogantly double-park all over the place. They could be the urban delivery system of the future, at least until we all get flying cars, at which point all those little flying robots and their packages will likely end up splattered across our windshields. Yay progress!
A chatbot is a computer program that's intended to fool us into thinking that it's human. Historically, this has been a tricky thing to do, and for the last 20 years there's been a $100,000 prize and gold medal waiting for the first computer program that can carry on a conversation indistinguishably from a human. Arguably (very arguably), this could also be the first computer program to demonstrate an artificial intelligence.
Cornell's Creative Machines Lab decided to see what would happen if they put two chatbots face to virtual face and got them started talking to one another. Things didn't go quite as crazy as might have been expected, but a fair amount of pointless argument, passive aggression, and random hilarity did ensue:
The 2011 Loebner Prize Competition in Artificial Intelligence takes place on October 19th, and if any of the entrant programs manages to fool two or more judges comparing two or more humans into thinking that it's a human, the program will win $25,000 and a silver medal. The final $100,000 prize will go to a program that includes a completely convincing audiovisual component as well, and that too may be closer than you think.