It's been a while since we've gotten an update from Boston Dynamics about their BigDog quadruped. And this isn't really an update, I guess, as much as a video of BigDog's noble robotic lineage, with a whole bunch of, shall we say, "outtakes" thrown in for good measure:
We know that the University of Michigan's MABEL biped robot is fast and all, but it's also had some issues in the past with taking the occasional bad step with painful results. It now looks like MABEL has learned some fancy new footwork, with this demonstration of her ability to not completely faceplant when confronted with a surprise 20cm step:
And lastly, I hope you're not burned out on quadrotors yet, because this is pretty sweet. Daniel Mellinger, Alex Kushleyev, and Vijay Kumar at UPenn's GRASP Lab have taught a big quadrotor to act as a landing (and launching) platform for a little quadrotor. Oh, and there's a bunch of hula-hoop dodging with multiple quadrotors at the end, too:
Microsoft, to their credit, has done a good job of embracing Kinect as a game-changing robotics tool instead of just a... a... a video game controller, was it? Well, whatever it was originally designed as, it's all about cheap and effective robotic 3D vision now. Microsoft knows that Kinect is a big deal for robotics enthusiasts of all kinds, and they've just announced the availability of a new beta release of Microsoft Robotics Developer Studio that incorporates the full Kinect SDK that was released back in June. This includes skeleton tracking, speech, and the raw Kinect data stream for creating 3D maps of your house (or anything else).
Besides the full-fledged Kinect integration, the other big news about RDS 4 is that for the first time, Microsoft has their own hardware reference platform designed to make it fast and easy (sort of) for consumers to get straight to programming without having to actually build themselves a robot. Eddie, pictured above, features a round multi-level design that incorporates a Kinect sensor and an off-the-shelf laptop. ::cough:: ::cough again:: So yes, Eddie is clearly in the same class as both TurtleBot and Bilibot, which offer similar designs and capabilities and run ROS. We probably shouldn't create some kind of Mac vs. PC thing here, but strictly by the numbers, Eddie is a significantly more expensive proposition: $1,200 assembled, without a Kinect sensor or a laptop, while both TurtleBot and Bilibot cost about the same amount with a Kinect sensor and a netbook included.
Whether or not you decide to use Eddie and Microsoft RDS, it's always great to see companies like Microsoft embracing robotics and giving the developer community more access with free software releases and customized hardware platforms. As Microsoft puts it,
"This beta release is one of our early steps towards realizing our long term strategy of accelerating the consumer robotics industry. Our motivation in releasing these tools is to extend and democratize access to robotics development, bringing value to the space through ease-of-use, accessibility, and a robust existing developer community."
If you're interested in checking out the RDS 4 beta, you might also be interested in Microsoft's Robotics @ Home Contest, where you could win a free robot and possibly $10,000 for coming up with "a cool idea." Yep, that's it.
Robotics Developer Studio 4 Beta is available for download for free, and includes a simulation environment to get you started without needing to buy any hardware at all. When you're ready to take the plunge, Eddie is also available now, directly from Parallax.
Looking at the product designs from OLogic, a consumer robotics company launching at DemoFall 2011 earlier this week, the words to the classic Wizard of Oz song popped into my head: “If I only had a brain…” OLogic robots don’t have brains; instead, they rely on a user’s smartphone to provide the brainpower.
At Demo this week OLogic introduced the A.M.P. (Automated Music Personality), essentially, a boom box on wheels that will sell for $300 to $400. The roving music player doesn’t do a whole lot, but you can dance to it—or with it. The company also displayed Oddwerx, a $50 smartphone dock that makes your mobile phone really mobile—it can wander around your desk. Both devices seem destined for the gadget catalogs that live in airline seat pockets, but they are good reminders that we’re carrying an awful lot of computing power around in our pockets these days. Larson introduces his gizmos in the video above.
I know we've been posting a bunch about quadrotors recently, but it's hard not to when they keep doing cool new stuff. This demo comes from Pat Bouffard and Anil Aswani and shows (eventually) a quadrotor catching tossed ping-pong balls starting at about 1:40:
All that other malarkey at the beginning of the vid (you didn't skip over it, did you?) talks about the programming that goes into making sure that this quadrotor, with what I think we can all agree is a fairly small container, can reliably make catches. Essentially, the robot pays special attention to what's physically going on with itself, using experience to compensate for things like increased lift due to ground effect.
This technique is called LBMPC (that's Learning Based Model Predictive Control), and you can see it in action when the quadrotor needs to move sideways to catch the ball, as it figures in the fact that it's going to drift a little bit after it cancels out its lateral movement. Clever.
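For the curious, here's that idea in miniature. This is a toy 1-D sketch of "plan with a nominal model plus a learned disturbance estimate," not the Berkeley team's actual controller; all of the dynamics, gains, and numbers below are made up for illustration:

```python
import numpy as np

# Toy 1-D sketch of the LBMPC idea: plan with a nominal model plus a
# learned disturbance estimate, so the plan anticipates drift.
# Everything here (dynamics, gains, grid) is an illustrative guess,
# not taken from the Berkeley quadrotor work.

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])    # state: [position, velocity]
B = np.array([0.0, dt])
TRUE_WIND = 0.4                          # constant push, unknown to planner

def simulate(use_learning, steps=200, horizon=10):
    x = np.array([1.0, 0.0])             # start 1 m from the target
    d_hat = 0.0                          # learned disturbance estimate
    for _ in range(steps):
        # Crude MPC: try a grid of constant controls, roll the
        # (nominal + learned) model out, keep the cheapest plan.
        best_u, best_cost = 0.0, np.inf
        for u in np.linspace(-2.0, 2.0, 41):
            xp, cost = x.copy(), 0.0
            for _ in range(horizon):
                xp = A @ xp + B * u
                xp[1] += d_hat * dt       # planner's model of the drift
                cost += xp[0] ** 2 + 0.1 * u ** 2
            if cost < best_cost:
                best_u, best_cost = u, cost
        # Apply the control to the "real" system, which has wind.
        x_pred = A @ x + B * best_u       # nominal prediction
        x = x_pred.copy()
        x[1] += TRUE_WIND * dt            # the unmodeled disturbance
        if use_learning:                  # learn from the residual
            d_hat += 0.5 * ((x[1] - x_pred[1]) / dt - d_hat)
    return abs(x[0]), d_hat

err_plain, _ = simulate(use_learning=False)
err_learn, d_hat = simulate(use_learning=True)
```

Without the learning update, the planner keeps getting shoved off target by the wind it doesn't know about; with it, the disturbance estimate converges and the plan accounts for the drift up front, which is the same flavor of effect as the quadrotor pre-compensating for its sideways drift.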
So, if Berkeley's quadrotor teams up with this robot, this robot, maybe this robot, and of course these robots, you've got yourself a halfway decent chance at giving any Little League team a run for their juice boxes, and I for one would pay money to see it happen.
I'm not quite sure what to make of this, but that A.M.P. music robot from 2008 has just shown up at the DEMO Conference, being (re?) launched by a company called OLogic. A little background: the first A.M.P. was a collaboration between Sega Toys and Hasbro, and it was a music playing robot that you could plug your iPod into. It came with a remote control, and could dance around or even follow you thanks to a couple of obstacle detection sensors. It launched in August of 2008 in both Japan and the U.S. for about $500. It seems like it didn't really get anywhere, and in fact looks to have made even less of a commercial impact than the Sony Rolly.
This new version of A.M.P. looks to be essentially the same, possibly even identical, with the addition of Bluetooth music streaming plus a smartphone app that can be used to control the robot directly instead of the remote. It'll cost $300 to $400 and be available in 2012 or 2013.
I'll confess to not being entirely sure where OLogic is going with this, or why they think they'll have better luck than the original A.M.P. robot did back in 2008, especially without a more dramatic price reduction. For $300, you could buy a fairly decent stationary non-robotic stereo system that likely sounds a whole heck of a lot better than this robot does. So really, this is a toy for people who want a robot, not a music player. At the same time, it's a very 2008 robot, without much of the exciting new technology that's made comparably priced robots like the Parrot AR Drone such a success.
OLogic says that they're looking to "revolutionize consumer robotics with smartphones" and that they'll be coming out with a bunch more smartphone powered and integrated robots in the coming years. They've got significant (and growing) competition in that space, but I certainly hope that they can make it happen. The world always needs more robots.
This video is a good overview of some of the cooler stuff that they're working on at the DFKI Robotics Innovation Center. I'm not that great with German, but "DFKI" may somehow stand for the German Research Center for Artificial Intelligence. They've partnered up with the Robotics Group at the University of Bremen to help turn basic robotics research into robots with "real-world applications," which also apparently includes a lot of space-type stuff.
One of the robots in the above video caught my eye in particular: ARAMIES, a sort of robot space explorer dog thing that looks like it came straight out of that lousy Red Planet movie:
ARAMIES was a DFKI project that began in 2004. It was sponsored by DLR (the German Space Agency), and was specifically designed to climb up steep and uneven terrain, like lunar craters or Martian canyons, using a mean looking set of claws:
For all the additional complexity that comes with legs, the mobility they offer over rough terrain is hard to match with wheels or tracks, which is why robots like BigDog are so terrain-capable. While the ARAMIES project concluded in 2007, elements of the design and software live on in another crater-clambering robot called SpaceClimber, which is the spidery guy in the first video who looks cool enough to have earned itself an extra picture:
Designing a robot is easy. Well, not really. But what's even harder is getting enough resources (i.e. cash) to get your design from, uh, design, to actual operating death machine, or whatever your own personal robotics bent is.
Carnegie Mellon is helping to nudge things along by hosting a venture competition called RoboBowl specifically for robotics. If you're well funded and have no idea what a venture competition is, then you're very very lucky, but for the rest of us: venture competitions are a way of matching people who have great ideas with people who have the money to turn those ideas into commercial products. They're probably not looking for the next Roomba clone, but if you've got a concept that embodies the "next generation of robotics" (whatever that may be), RoboBowl can help get you the money you need to make it happen.
Here's how it works: for a modest $100 entry fee, teams can submit a business summary that will be reviewed by judges. The semi-finalists will then make online presentations, and the five best will win a cool $5,000 and a chance to present their ideas in person directly to a panel of judges and win an additional $20,000. Tasty.
RoboBowl is actually intended to be several different competitions, the first of which is focused on healthcare and quality of life robotics, including:
Surgical & Interventional Robotic Systems
Robotic Rehabilitation & Prosthetic Systems
Robotic Assistive & Wellness Systems (e.g. mobility, manipulation, social assistance aids)
Robotic Telemedicine Systems
Logistic & Operational Hospital Robotic Systems
The final round of judging takes place on October 13, and we're keeping our fingers crossed that it'll be webcast. Stay tuned.
That awesome robobowl in the pic, by the way, is from Etsy user Danmade.
It's starting to seem like programming a robot to do anything is old and busted, and the new hotness is to program a robot to learn instead. And it makes sense: why spend a bunch of time and effort programming a robot to solve a specific problem when (with perhaps a little more time and effort) you can create a generalist that can learn to do absolutely anything?
Learning is essentially just the inherent ability to adapt to a new situation, and new situations crop up disturbingly frequently out there in what they call "real life." UAVs, for example, have to deal with annoyances like wind, which has a tendency to blow them off of whatever route they're supposed to be taking. While you could certainly program a UAV to follow a specific trajectory, and then program it to be able to account for wind of varying degrees of windishness, it's much easier just to program it to follow a trajectory adaptively, learning to deal with wind (or any other type of disturbance) as it goes. Those crazy quadrotors from the Flying Machine Arena at ETH Zurich demonstrate the concept:
Just like humans, these robots start off being fairly terrible at a given task. Also like humans, they get better quickly. But quite unlike humans, robots never make the same mistake twice, never get tired or bored, and can practice and practice until they perfectly master whatever task they've been assigned. Got a new variable to introduce? No problem! Just add in a few more practice sessions and the robot will figure it out.
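The trial-and-repeat idea is simple enough to sketch in a few lines. In this toy (my own illustration, not the Flying Machine Arena code), a 1-D point mass flies the same maneuver over and over; after each trial it refines an estimate of a constant wind from the final-position error, then cancels that estimate with feedforward on the next attempt:

```python
import numpy as np

# Learn-by-repetition toy: the "robot" doesn't know the wind, but it
# gets to fly the same trajectory repeatedly. All numbers here are
# made up for illustration.

dt = 0.05
T = 100
t = np.arange(T) * dt
ref = np.sin(2 * np.pi * t / t[-1])              # desired position
u_nom = -(2 * np.pi / t[-1]) ** 2 * ref          # ideal accel, no wind

TRUE_WIND = 0.8                                  # unknown to the robot

def fly(w_hat):
    """One trial: feedforward cancels the current wind estimate."""
    pos, vel = 0.0, 2 * np.pi / t[-1]            # initial speed matches ref
    traj = np.zeros(T)
    for k in range(T):
        acc = u_nom[k] - w_hat + TRUE_WIND       # residual wind remains
        vel += acc * dt
        pos += vel * dt
        traj[k] = pos
    return traj

w_hat, errors = 0.0, []
for trial in range(8):
    e_final = ref[-1] - fly(w_hat)[-1]
    errors.append(abs(e_final))
    # A residual wind w adds roughly 0.5 * w * t_final^2 to the final
    # position, so invert that relation (with a cautious 0.7 gain).
    w_hat -= 0.7 * 2.0 * e_final / t[-1] ** 2

print([round(e, 3) for e in errors])
```

A real system learns a whole feedforward correction along the trajectory rather than one scalar, but the loop is the same: fly, measure the error, fold it back into the next attempt, repeat until the error is tiny.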
Eventually, the hope is that robots will be able to figure out new situations completely on their own, without even having to ask a human for help. And since networked robots can learn from the mistakes of other networked robots, all it takes is just a few adventurous non-souls to take the plunge on a given task, and robots everywhere can learn and benefit from whatever mayhem may or may not (but probably will) ensue.
Disney Research seems to be bullish on this whole robotics thing, and that partnership with ETH Zurich that we heard about at ICRA looks to be developing nicely. Their latest creation is a robot that can climb up buildings and then leap off, deploying an absolutely adorable little parafoil to come to a safe landing. Watch the video, but be warned: the drama has been cranked up to intense.
The wall climbing talent comes courtesy of a vortex generator (a propeller of sorts), which creates a freakin' tornado inside a cylinder in the body of the robot. The interior of the vortex is a region of low pressure, which allows the bot to stick to vertical surfaces, and since the vortex itself forms an air barrier, there's no need for a seal. This means that the bot doesn't need a smooth surface to climb (although it probably helps).
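To get a feel for the numbers, here's a back-of-the-envelope calculation of the pressure drop such a vortex chamber would need to hold the robot on a wall. The mass, friction coefficient, and chamber radius are all my guesses, not Paraswift's actual specs:

```python
import math

# Illustrative numbers only; none of these are Paraswift's real specs.
mass = 0.6          # kg, assumed robot mass
mu = 0.5            # assumed friction coefficient against the wall
radius = 0.08       # m, assumed vortex chamber radius
g = 9.81            # m/s^2

area = math.pi * radius ** 2
# Friction must support the weight: mu * N >= m * g, where the normal
# force N comes from the pressure difference acting on the chamber area.
normal_force = mass * g / mu
delta_p = normal_force / area
print(f"needed suction: {delta_p:.0f} Pa ({delta_p / 101325:.1%} of 1 atm)")
```

The point of the exercise: even for generous assumptions, the required pressure drop is well under one percent of atmospheric, which is why a leaky, seal-free vortex can do the job.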
The robot, called Paraswift, was originally intended to be used for entertainment. And you are entertained, are you not? The team from ETH Zurich, though, sees more possibilities for their bot, like using it to create vertical 3D models of tall buildings. They also want to automate the parachute deployment so that if the robot ever slips and falls, it'll land safely. Sounds like a good idea to me, although they'll have to work on the parafoil steering, lest Paraswift suffer the same fate as iRobot's poor little PackBot Griffin.
Yesterday we posted a video showing how robots can use machine learning to adapt to objects and situations that they've never seen before. The same research group at Cornell has also been working on clever ways to allow robots to efficiently interpret scenes and identify objects, which is one of those things that robots are going to have to be good at before they can really become helpful in our homes.
Humans have the ability to look at a scene and immediately pick out important elements while ignoring everything else, because we have brains that are awesome like that. Robots, in general, don't really work that way. They have to examine each and every pixel of what they're looking at and decide whether or not it's something they're interested in. So if you ask a robot to go find you (say) a computer keyboard, it's got to enter a room and methodically search every pixel-equivalent area until it finds what it's looking for. While the robot is likely to find its target in this manner, it'll take roughly forever to do so.
What the Cornell research group has been teaching robots to do is to be able to rapidly break down a scene into general categories, and then recognize how some categories are related to others. In a general sense, this is the same thing that we humans do. For example, if you enter a room and want to locate a computer keyboard, you first (subconsciously) identify the places that you don't need to pay attention to, like the walls and ceiling and probably the floor. Next, if you don't immediately see the keyboard, you might take a closer look at the places where keyboards like to hang out, namely in close proximity to a computer. You see a computer, you get closer, and hey, there's the keyboard. Nicely done.
Now, watch a robot do the exact same thing:
The key to getting this to work goes back to the machine learning and adaptability system that Cornell has been working on. First, they showed a robot a series of about 50 different scenes, with most objects in the scene labeled. The robot read the labels and then remembered characteristics of the associated objects (like color, texture, and what other objects are nearby), such that it was then able to use those characteristics to categorize entirely new objects in environments it had never experienced.
Armed with this knowledge and way of thinking (if you want to call it that), the robot was then able to enter a room it had never been in before in search of a keyboard, and "know" that since it spotted a monitor, and keyboards are often found near monitors, it should go check out the monitor. And of course, the keyboard is right there.
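That "check near the monitor first" behavior boils down to learned co-occurrence statistics. Here's a deliberately tiny sketch of the idea (mine, not Cornell's code): count which labels show up next to each other in labeled training scenes, then rank the regions of a new scene by how often their labels neighbored the target:

```python
from collections import Counter, defaultdict

# Toy training data: each scene is a list of (label, neighbor-label)
# pairs that were observed next to each other. Purely illustrative.
training_scenes = [
    [("monitor", "keyboard"), ("monitor", "mouse"), ("wall", "poster")],
    [("monitor", "keyboard"), ("table", "mug"), ("wall", "window")],
    [("table", "keyboard"), ("monitor", "mouse")],
]

# cooccur[a][b] = how often b was found next to a in training
cooccur = defaultdict(Counter)
for scene in training_scenes:
    for a, b in scene:
        cooccur[a][b] += 1
        cooccur[b][a] += 1

def search_order(visible_labels, target):
    """Rank already-recognized regions by how often the target
    object was seen near that kind of region in training."""
    return sorted(visible_labels,
                  key=lambda lbl: cooccur[lbl][target],
                  reverse=True)

print(search_order(["wall", "table", "monitor"], "keyboard"))
# With these toy counts, "monitor" comes first: keyboards were most
# often seen next to monitors in the training scenes.
```

The real system works on point-cloud segments with color and texture features rather than clean labels, but the payoff is the same: the robot heads for the most promising regions first instead of sweeping the whole room.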
This ability to follow chains of reasoning to identify, categorize, and locate objects is obviously still being developed, but again, giving robots the ability to understand context and use it to adapt to new things brings them that much closer to being the machines that take over all of our household chores and leave us with nothing better to do than lie around and play video games all day. Because that's what we all secretly want, right? ...Right?