Automaton

RoboBowl: Carnegie Mellon Makes Awesome Robot Ideas Real

Designing a robot is easy. Well, not really. But what's even harder is getting enough resources (i.e. cash) to get your design from, uh, design, to actual operating death machine, or whatever your own personal robotics bent is.

Carnegie Mellon is helping to nudge things along by hosting a venture competition called RoboBowl specifically for robotics. If you're well funded and have no idea what a venture competition is, then you're very, very lucky, but for the rest of us: venture competitions are a way of matching people who have great ideas with people who have the money to turn those ideas into commercial products. They're probably not looking for the next Roomba clone, but if you've got a concept that embodies the "next generation of robotics" (whatever that may be), RoboBowl can help get you the money you need to make it happen.

Here's how it works: for a modest $100 entry fee, teams can submit a business summary that will be reviewed by judges. The semi-finalists will then make online presentations, and the five best will win a cool $5,000 and a chance to present their ideas in person directly to a panel of judges and win an additional $20,000. Tasty.

RoboBowl is actually intended to be several different competitions, the first of which is focused on healthcare and quality of life robotics, including:

  • Surgical & Interventional Robotic Systems
  • Robotic Rehabilitation & Prosthetic Systems
  • Robotic Assistive & Wellness Systems (e.g. mobility, manipulation, social assistance aids)
  • Robotic Telemedicine Systems
  • Logistic & Operational Hospital Robotic Systems

The final round of judging takes place on October 13, and we're keeping our fingers crossed that it'll be webcast. Stay tuned.

That awesome robobowl in the pic, by the way, is from Etsy user Danmade.

[ RoboBowl ]

Thanks Aaron!

Practice Makes Perfect for Path-Following Quadrotors

It's starting to seem like programming a robot to do anything is old and busted, and the new hotness is to program a robot to learn instead. And it makes sense: why spend a bunch of time and effort programming a robot to solve a specific problem when (with perhaps a little more time and effort) you can create a generalist that can learn to do absolutely anything? 

Learning is essentially just the inherent ability to adapt to a new situation, and new situations crop up disturbingly frequently out there in what they call "real life." UAVs, for example, have to deal with annoyances like wind, which has a tendency to blow them off of whatever route they're supposed to be taking. While you could certainly program a UAV to follow a specific trajectory, and then program it to be able to account for wind of varying degrees of windishness, it's much easier just to program it to follow a trajectory adaptively, learning to deal with wind (or any other type of disturbance) as it goes. Those crazy quadrotors from the Flying Machine Arena at ETH Zurich demonstrate the concept:

Just like humans, these robots start off being fairly terrible at a given task. Also like humans, they get better quickly. But quite unlike humans, robots never make the same mistake twice, never get tired or bored, and can practice and practice until they perfectly master whatever task they've been assigned. Got a new variable to introduce? No problem! Just add in a few more practice sessions and the robot will figure it out.
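The trial-by-trial improvement described above is the basic idea behind iterative learning control: fly the trajectory, measure how far off you were, and fold a correction into the next run's commands. Here's a deliberately tiny sketch of that loop; the 1-D "vehicle" model, the constant-wind disturbance, and every number in it are invented for illustration, not taken from the ETH Zurich system.

```python
# Toy iterative learning control (ILC) sketch: refine a feedforward
# command over repeated practice runs. All values here are made up.
import numpy as np

T = 50                                        # timesteps per practice run
ref = np.sin(np.linspace(0, 2 * np.pi, T))    # desired trajectory
wind = 0.3                                    # unknown constant disturbance
u = np.zeros(T)                               # feedforward command, refined each trial
gain = 0.8                                    # learning gain (0 < gain <= 1 here)

errors = []
for trial in range(10):
    pos = u + wind                            # toy "dynamics": position = command + wind
    err = ref - pos                           # tracking error this run
    u = u + gain * err                        # ILC update: correct next run's command
    errors.append(float(np.max(np.abs(err))))
    print(f"trial {trial}: max error = {errors[-1]:.6f}")
```

Each pass shrinks the error by a constant factor, so after a handful of "practice sessions" the robot has effectively learned the wind into its feedforward command, without ever being told the wind was there.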

Eventually, the hope is that robots will be able to figure out new situations completely on their own, without even having to ask a human for help. And since networked robots can learn from the mistakes of other networked robots, all it takes is just a few adventurous non-souls to take the plunge on a given task, and robots everywhere can learn and benefit from whatever mayhem may or may not (but probably will) ensue.

[ ETH Zurich ]

Thanks Markus!

Disney Builds Wall-Climbing, Base-Jumping, Tornado-Powered Robot

Disney Research seems to be bullish on this whole robotics thing, and that partnership with ETH Zurich that we heard about at ICRA looks to be developing nicely. Their latest creation is a robot that can climb up buildings and then leap off, deploying an absolutely adorable little parafoil to come to a safe landing. Watch the video, but be warned: the drama has been cranked up to intense.


The wall climbing talent comes courtesy of a vortex generator (a propeller of sorts), which creates a freakin' tornado inside a cylinder in the body of the robot. The interior of the vortex is a region of low pressure, which allows the bot to stick to vertical surfaces, and since the vortex itself forms an air barrier, there's no need for a seal. This means that the bot doesn't need a smooth surface to climb (although it probably helps).

The robot, called Paraswift, was originally intended to be used for entertainment. And you are entertained, are you not? The team from ETH Zurich, though, sees more possibilities for their bot, like using it to create vertical 3D models of tall buildings. They also want to automate the parachute deployment so that if the robot ever slips and falls, it'll land safely. Sounds like a good idea to me, although they'll have to work on the parafoil steering, lest Paraswift suffer the same fate as iRobot's poor little PackBot Griffin.

[ Disney Research ] via [ TechCrunch ]

Clever Cornell Robot Finds Your Lost Keyboard

Yesterday we posted a video showing how robots can use machine learning to adapt to objects and situations that they've never seen before. The same research group at Cornell has also been working on clever ways to allow robots to efficiently interpret scenes and identify objects, which is one of those things that robots are going to have to be good at before they can really become helpful in our homes.

Humans have the ability to look at a scene and immediately pick out important elements while ignoring everything else, because we have brains that are awesome like that. Robots, in general, don't really work that way. They have to examine each and every pixel of what they're looking at and decide whether or not it's something they're interested in. So if you ask a robot to go find you (say) a computer keyboard, it's got to enter a room and methodically search every pixel-equivalent area until it finds what it's looking for. While the robot is likely to find its target in this manner, it'll take roughly forever to do so.

What the Cornell research group has been teaching robots to do is to be able to rapidly break down a scene into general categories, and then recognize how some categories are related to others. In a general sense, this is the same thing that we humans do. For example, if you enter a room and want to locate a computer keyboard, you first (subconsciously) identify the places that you don't need to pay attention to, like the walls and ceiling and probably the floor. Next, if you don't immediately see the keyboard, you might take a closer look at the places where keyboards like to hang out, namely in close proximity to a computer. You see a computer, you get closer, and hey, there's the keyboard. Nicely done.
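That "look where keyboards like to hang out" strategy can be sketched as a context-guided search: rank the regions of a scene by how strongly their labels co-occur with the target, and visit the promising ones first. The scene labels and co-occurrence scores below are invented examples to show the shape of the idea, not Cornell's actual model.

```python
# Hypothetical context-guided search: prioritize scene regions by how
# often the target object is found near them. Scores are invented.
co_occurrence = {
    "monitor": 0.9,   # keyboards are usually right next to monitors
    "desk": 0.6,
    "chair": 0.2,
    "wall": 0.01,     # almost never on walls or ceilings
    "ceiling": 0.01,
}

def search_order(regions, target="keyboard"):
    """Visit regions most likely to be near the target first."""
    return sorted(regions, key=lambda r: co_occurrence.get(r, 0.0), reverse=True)

scene = ["ceiling", "wall", "chair", "monitor", "desk"]
print(search_order(scene))  # monitor and desk first, wall and ceiling last
```

Instead of a pixel-by-pixel sweep of the whole room, the robot spends its time where the payoff is likely, which is exactly the shortcut humans take without thinking about it.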

Now, watch a robot do the exact same thing: 

The key to getting this to work goes back to the machine learning and adaptability system that Cornell has been working on. First, they showed a robot a series of about 50 different scenes, with most objects in the scene labeled. The robot read the labels and then remembered characteristics of the associated objects (like color, texture, and what other objects are nearby), such that it was then able to use those characteristics to categorize entirely new objects in environments it had never experienced.

Armed with this knowledge and way of thinking (if you want to call it that), the robot was then able to enter a room it had never been in before in search of a keyboard, and "know" that since it spotted a monitor, and keyboards are often found near monitors, it should go check out the monitor. And of course, the keyboard is right there.

This ability to follow chains of reasoning to identify, categorize, and locate objects is obviously still being developed, but again, giving robots the ability to understand context and use it to adapt to new things brings them that much closer to being the machines that take over all of our household chores and leave us with nothing better to do than lie around and play video games all day. Because that's what we all secretly want, right? ...Right?

[ Cornell ]

Robots Figuring Out How to Figure Things Out

You've heard it before: robots are bad at new things. They're bad at adapting to new situations, they're bad at recognizing new objects, and they're bad at coming up with their own ideas about how to carry out tasks in changing environments. One way to deal with this general ineptitude is to provide a robot with endlessly detailed instructions to minimize the amount of unfamiliar things it experiences. The world is an awfully big place, however, and if we want robots to be able to do robot stuff outside of tolerably controlled environments, such a brute force approach may not be the best way to go.

A much more elegant solution is to teach robots to think and adapt for themselves. Sounds easy, right? Right! I mean, wrong! It's not easy at all. But once you figure out how to do it, you can plop a robot down anywhere and ask it to do anything and it'll have a reasonable chance of figuring it out, or at the very least, be able to ask an intelligent question or two to get going.

At Cornell's Personal Robotics Laboratory, a research group is teaching a robot to generalize groups of objects, which is one of the most basic aspects of reliable adaptability. For example, instead of teaching a robot "this is a cup, and this is a slightly different cup" and so on, you can instead teach a robot to recognize features common to all cups, so that when it sees something cup-like, it can say to itself, "hey, that's a small container with a handle, I bet it's a cup!"
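That "small container with a handle, I bet it's a cup" reasoning amounts to classifying by shared features rather than matching against specific known objects. Here's a toy illustration of the difference; the particular features and the size threshold are invented for the example, not taken from the Cornell system.

```python
# Toy feature-based generalization: recognize "cup-ness" from shared
# features instead of memorizing individual cups. Features are invented.
def looks_like_a_cup(obj):
    """Classify by features common to all cups, not by exact match."""
    return (obj.get("is_container", False)
            and obj.get("has_handle", False)
            and obj.get("height_cm", 0) < 15)

mug = {"is_container": True, "has_handle": True, "height_cm": 10}
bucket = {"is_container": True, "has_handle": True, "height_cm": 30}
print(looks_like_a_cup(mug))     # a small container with a handle: cup
print(looks_like_a_cup(bucket))  # container with a handle, but too big
```

A robot that has only ever seen three specific cups fails on the fourth; a robot that has learned what cups have in common handles the fourth one for free.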

This same sort of learning method can also be applied to actions. By teaching a robot how to pick up a few different types of cups, the robot can then generalize the lessons and apply them to completely new cups. And if you teach the robot to put a few dishes into a dish rack, it can then use what it knows about the objects and the rack to figure out how to put pretty much anything in there:

If you're the type who's impressed by numbers, this robot was able to put unseen objects into the right spot in the dish rack in the right way 92% of the time, which is about 92% better than I do. Touché, dishwasher-loading robot, touché.

[ Cornell ] via [ KurzweilAI ]

Textron's T-Ram is the Suicidal Mini-UAV You've Always Wanted

The U.S. Air Force has been looking for what they're calling a "Lethal Miniature Aerial Munition System" to be fielded with special ops units next year. If the name of the program doesn't explain it, the above pic should: they essentially want a mortar round with wings, a camera, and a little engine. In other words, a surveillance UAV that can suicidally attack targets on command.

There are several systems with this capability currently in the works, but the operational requirements and principles are all the same. LMAMS needs to weigh three kilos or less, including the vehicle and the launching system. It needs to be able to deploy and fire in under 30 seconds, reach an altitude of 100 meters, and acquire and track a human-sized target in a further 20 seconds. At that point, the drone can either dive at its target, landing within a one meter radius and exploding its small (but still quite lethal) warhead, or it can loiter for up to 30 minutes, sending back live video.

Now, this seems like a fairly dangerous little robot to have around, but before you get all worked up about killer robots and stuff, remember that these special ops units already have tools to deal with the situations that the LMAMS is designed for: namely, blindly chucking dumb mortars and grenades at things, calling in air support, or putting themselves in harm's way to get a better view of their target. All the LMAMS does is reduce risk and collateral damage. Or at least, that's the idea, but whether it'll work in practice remains to be seen.

The UAV in the picture is Textron Defense Systems' T-RAM, which stands for Tactical Remote Aerial Munition. You can watch it in action in the video below, which features a soundtrack that's inappropriately Indiana Jonesish: 

[ Textron Systems ] via [ Aviation Week ]

Micro Energy Harvesters Will Make Cyborg Insects Unstoppable

Cyborg insects have been flying under remote control for over two years now, but the strict weight limits imposed by the fact that you're trying to turn a bug into a functional UAV mean that their usefulness is still somewhat constrained. A rhinoceros beetle, for example, can manage to haul about 30% of its own weight as payload. This works out to be somewhere around 2.5 grams, which is not a whole heck of a lot, and if you're eating up a significant portion of that capacity with a battery, it doesn't leave much room for (say) a camera or missiles.

One option is a small nuclear battery, but a much more elegant solution (with less potential for creating a giant mutant cyborg insect of doom) is to simply harvest power directly from the insect itself. Researchers from the University of Michigan and Western Michigan University have developed a prototype insect energy harvester, pictured above, made of a piezoelectric material that converts wingbeats into electricity. By mounting one of these piezoelectric springs on each wing, simulations show that over 100 microwatts (μW) can be harvested, which is significantly more than the maximum of 80μW it takes to control the insect itself.

While this level of power isn't going to be able to charge those miniaturized laser cannons that I'm reasonably sure DARPA is working on, it does significantly reduce the energy drain on any auxiliary power system that might have to be carried along anyway. And as with all electronics, efficiency will only go up as mass goes down, until ultimately power will only be limited by the lifespan of the insect and the amount of tasty fruit that you can get your bug to chow down on in the middle of a mission.

[ Paper (*.PDF) ] via [ NBF ]

Video Friday: PowerGloves, MAVs, and Self-Assembling Robots

This very cool demo comes from a company called AnthroTronix. Their AcceleGlove uses accelerometers to detect hand movements, and it's precise enough to get a little robot to pick up a ball in its gripper:

AirBurr, or the original AirBurr at least, is from 2009. It's got a rotor like a helicopter plus control surfaces like an airplane, and since the whole thing is completely enclosed, it can bounce off walls and land and take off on its own, ideal for indoor flying:

Any doubts that AeroVironment's hummingbird MAV could fly outside of a staged demo were erased when it took off from a table at an AUVSI press conference:

Noisy little bugger, but very clever.

This TEDTalk from MIT's Skylar Tibbits shows several examples of how robots (and structures in general) can be programmed to be self-assembling and self-reconfiguring:

We've seen a few things like this before, and those chains at the end of the talk remind me of Cornell's stochastic self-reconfiguring robots from a few years ago.

And finally, the week would just not be complete without a video about robots going nuts and killing all the humans they can get their steely claws on. Behold, Robotropolis!

Wales Gets Dedicated UAV Airport, Leaves U.S. in the Dust

Want to know a surefire way of creating jobs and spurring innovation in the aerial robotics market? It's easy: let people fly UAVs. This isn't something that's easily done here in the U.S. thanks to the FAA being, oh, about a decade (or two) behind the times, but over in Wales, they've already gone and given over an entire airport plus 1300 square kilometers [500 square miles] of airspace to UAV testing.

Conveniently located right off the B4333 between Blaenannerch and Aberporth (and a short distance from Brynhoffnant, Llangranog, Gwbert, and Mwnt), West Wales Airport has just been officially designated as a UAV testing area by the United Kingdom's Civilian Aviation Authority. This means that you can go out there and test your UAVs over a large area of unrestricted airspace, with civilian and military manned aircraft passing through from time to time, which your robot should probably know how to avoid getting run over by. Or vice versa, if you like to think big.

While the U.K. is taking a proactive approach to the whole UAV issue and encouraging small commercial companies to develop the technology, the FAA (which controls airspace here in the U.S.) is being a gigantic stick-in-the-mud by saying that they're, like, totally working on some rules or something, honest. Just be patient. For the next thirteen years. That's right, their target date for letting you fly an unmanned aircraft higher than 400 feet or out of visual range is apparently 2025. They hope.

I can understand why the FAA is cautious and wants to make sure that UAVs are operated reliably and safely in commercial airspace, but they can't just stick their heads in the sand for the next decade, or the rest of the world is simply going to keep making progress and small American companies who want to develop UAV technology are going to be forced to move to some unpronounceable town in Wales that's just to the northeast of Pantygrwndy.

For their part, the Welsh seem pretty excited about the prospect of UAVs being developed in their backyard: "we can do forestry, we can do whale-watching, we can do pipeline surveys or even peat bog monitoring," they say. Yes, that's right: Wales does apparently have whales. Fascinating.

Via [ Discovery News ]

Dextre Robot Repairs ISS While Astronauts Sleep

Last time a circuit breaker went bad on the International Space Station, astronauts had to go outside and swap out the old breaker box for a new one. As much as you and I would love to be tasked with a spacewalk, it's a bit dangerous, and it takes up a lot of time that astronauts could better spend doing science and fooling around. Now that Dextre the space robot is operational, though, the humans get to sleep in while the robot does the housework.

Of course, space housework is a little bit different than terrestrial housework. Your house has circuit breakers too, and you may even need to replace them from time to time, but it's likely a bit less intensive than what has to happen on the ISS. Controlled from the ground, Dextre -- which according to one of its creators could "insert a DVD into a player" -- spent Sunday and Monday nights unbolting the bad breaker box and swapping it out for a new one on a nearby spare parts pallet. It was fast, easy, and there was no need for any of the human astronauts to even bother waking up.

So on the upside, having Dextre -- and other robots -- on the station to do important work is great. But the question is starting to be, is it worth it to have humans exploring space at all? We're very fragile, and keeping us alive is a complicated and expensive chore. There is definitely something to be said for having us go out and explore our solar system in person so that we can all feel as though our species is experiencing something new, but what if we could field five or ten times as many robotic exploration missions for the same amount of resources?

In any case, it's a little bit ironic that we've now got this big and capable and impervious space robot living outside the ISS, with the primary job of making sure that the puny little humans inside stay safe and sound.

[ CSA ] via [ TFT ]



IEEE Spectrum's award-winning robotics blog, featuring news, articles, and videos on robots, humanoids, automation, artificial intelligence, and more.
Contact us:

Erico Guizzo
New York, N.Y.
Senior Writer
Evan Ackerman
Berkeley, Calif.
Jason Falconer
Angelica Lim
Tokyo, Japan
