Robot Learns to Clean Whiteboard, Schoolchildren Rejoice

This is HOAP-2, and it likes to clean. It doesn't really know how to clean, but that's okay, because it does know how to learn. A human can move HOAP-2's arms in different cleaning patterns, and the bot will remember and then be able to clean by itself later on. Take a look:

The cool thing here is, of course, that HOAP is learning to erase instead of being programmed to erase. Robot learning is the focus of tons of research today. Now, in the case of HOAP, some people would argue that this is a waste of time, because robots should be able to detect marks on a whiteboard and erase them autonomously. And that's true, but it's also not the point.
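
For the curious, here's a minimal Python sketch of the record-and-replay idea behind this kind of kinesthetic teaching. It is emphatically not HOAP-2's actual software (Kormushev's group uses far more sophisticated statistical encodings of the demonstrations); read_joint_angles and command_joint_angles are hypothetical stand-ins for a robot's API.

```python
import time

def record_demo(read_joint_angles, duration_s=10.0, rate_hz=50):
    # sample the arm's joint angles while a human physically guides it
    trajectory = []
    t_end = time.time() + duration_s
    while time.time() < t_end:
        trajectory.append(read_joint_angles())  # tuple of joint angles
        time.sleep(1.0 / rate_hz)
    return trajectory

def average_demos(demos):
    # naive point-wise average of several demonstrations; real systems
    # time-align the demos and fit a statistical model instead
    n = min(len(d) for d in demos)
    joints = len(demos[0][0])
    return [tuple(sum(d[i][j] for d in demos) / len(demos)
                  for j in range(joints))
            for i in range(n)]

def replay(trajectory, command_joint_angles, rate_hz=50):
    # play the learned cleaning motion back on the robot
    for q in trajectory:
        command_joint_angles(q)
        time.sleep(1.0 / rate_hz)
```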

If you're a teacher with a bunch of dirty whiteboards and no naughty kids and someone hands you a robot, you don't want to have to worry about whether your whiteboards are the right shade of white or the right size or whatever... And what if you have chalkboards instead? It really makes much more sense to have a robot be a generalist, and to be an effective generalist a robot has to be adaptable, something that (for now at least) robots are notoriously bad at. But robots are notoriously good at following instructions, so robots that can learn new tasks from humans on the fly have the potential to be much more effective, and much less frustrating for their users.

[ Petar Kormushev ]

Thanks Tipper!

Watson AI Fails to Destroy Humans in First Round of Jeopardy

UPDATE: See who prevailed -- man or machine? -- in the third and final round!

You can relax: so far, we humans aren't completely redundant, as IBM's Watson artificial intelligence system managed to not answer every single question instantly and correctly in the first round of a three-day Jeopardy exhibition match with past champions Ken Jennings and Brad Rutter.

After round one (which was as far as today's show got), Watson was tied with Brad for first with $5000, and Ken Jennings was in third with $2000. It's worth mentioning that Watson led for most of the round, with a few incorrect questions setting it back significantly towards the end. Watson definitely showed more 'common sense' than I was expecting, but it still seemed burdened with a fair amount of the 'so smart it's kinda dumb' that computers are known for. It definitely didn't get everything right, displaying a significant number of low-certainty answers (below its buzzing threshold) as well as buzzing in with several answers that seemed pretty far off. It also managed to give an incorrect answer to a question that Ken had incorrectly answered with the same answer moments before, which is a bug that I bet is getting resolved as we speak.
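
Out of curiosity, here's what that buzz logic might look like in miniature. This is a toy sketch, nothing like Watson's actual DeepQA pipeline, and decide_buzz is a made-up function; but it captures both the confidence threshold the broadcast visualized and the obvious fix for the repeated-wrong-answer bug.

```python
def decide_buzz(candidates, rejected, threshold=0.5):
    """candidates: list of (answer, confidence) pairs.
    rejected: answers an opponent already gave and was ruled wrong on."""
    viable = [(a, c) for a, c in candidates if a not in rejected]
    if not viable:
        return None                      # nothing left worth saying
    best, conf = max(viable, key=lambda ac: ac[1])
    return best if conf >= threshold else None  # stay silent below threshold

# Ken answers wrong; the machine should not repeat his answer:
print(decide_buzz([("the 1920s", 0.72), ("the 1910s", 0.31)],
                  rejected={"the 1920s"}))  # -> None (doesn't buzz in)
```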

While I was honestly hoping that Watson would do slightly better, I'm relieved that it at least ended up tied for first. Irrespective of expectations, I feel like Watson turned in an impressive performance, and I think that's one of the most important aspects of this exhibition... In my experience, the overall perception that the general public has of the current state of robotics and artificial intelligence is some mix of Roomba, the Terminator, and Commander Data, which is (to put it mildly) somewhat inaccurate and unrealistic. So, it's good to see a state-of-the-art AI system put on a credible public performance, complete with some fallibility to keep us feeling comfortably in control.

The next segment (featuring the second round plus Final Jeopardy) airs on ABC tonight, with the final match on Wednesday.

[ Watson ]

TED Roundup: Heather Knight and Cynthia Breazeal Talk Robot Comedy and Interactivity

There have been a couple of interesting TED Talks recently featuring robots (and their human researchers). The first talk is from Heather Knight of Marilyn Monrobot Labs, who programmed a Nao to not just tell jokes, but actually pay attention to whether the audience was laughing and then adapt its comedy routine on the fly:
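
For a rough idea of how a routine can adapt on the fly, here's a toy epsilon-greedy sketch, assuming a laugh meter that scores each joke from 0 to 1. It's an illustration of the concept, not Knight's actual system.

```python
import random

def next_joke(stats, epsilon=0.2):
    """stats maps each joke to its list of laugh scores so far."""
    jokes = list(stats)
    if random.random() < epsilon or not any(stats.values()):
        return random.choice(jokes)  # explore: try something new
    # exploit: lean on the joke with the best average laugh score
    return max(jokes, key=lambda j: sum(stats[j]) / max(len(stats[j]), 1))

stats = {"pun": [], "robot gag": [], "one-liner": []}
for _ in range(20):
    joke = next_joke(stats)
    laugh_score = random.random()    # stand-in for an audio laugh meter
    stats[joke].append(laugh_score)  # the routine adapts as scores come in
```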

The other talk is by Cynthia Breazeal, from MIT's Personal Robotics Group. She talks about her past research and where she sees the future of interactive robots. If for no other reason, it's worth watching for a priceless Cookie Monster moment:

It's interesting how Cynthia discusses her research history in terms of robots like Leo, who are autonomous, versus her vision for the future, in which she seems to focus on telepresence-type robots. I'm not sure what (if anything) to make of that, and while I tend to agree that, at least commercially, telepresence probably has a stronger immediate market than autonomy (especially emotional autonomy) at its current stage of development, I'd still love to see more of Leo.

It's also interesting just how much of a difference presence makes when it comes to humans interacting with technology, and how even a subtle anthropomorphic design can inspire emotional attachment. Autom, in particular, is a good example of how the way to get people to bond with robots is not to try to make them as humanoid as possible, but just to make them slightly familiar, and we humans can fill in all the blanks with no problems.

Kinect Hack Leads to Hands-Free Roomba

Getting a Roomba to obey gesture commands turns out to be pretty simple thanks to the magic of Kinect: the sensor is connected to a PC, which talks to the Roomba via a little Bluetooth dongle and sends it driving and steering commands based on the positions of your hands and hips.
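
If you want to try something similar, here's a rough Python sketch of the pipeline, assuming pyserial, a Bluetooth serial dongle on /dev/rfcomm0, and a hypothetical get_skeleton() call standing in for whatever Kinect skeleton driver you use. The Drive opcode and its encoding come from iRobot's published Open Interface spec; everything else is guesswork about this particular hack.

```python
import struct
import time
import serial

port = serial.Serial("/dev/rfcomm0", 57600)  # Bluetooth serial dongle
port.write(bytes([128, 131]))                # Open Interface: Start, Safe mode

def drive(velocity_mm_s, radius_mm):
    # Drive command: opcode 137, then signed 16-bit big-endian velocity
    # (-500..500 mm/s) and turn radius (-2000..2000 mm; 32767 = straight)
    port.write(struct.pack(">Bhh", 137, velocity_mm_s, radius_mm))

while True:
    skel = get_skeleton()                    # hypothetical Kinect driver call
    forward = skel["hip_center"].z - skel["right_hand"].z  # hand pushed out
    lateral = skel["right_hand"].x - skel["hip_center"].x
    radius = 32767 if abs(lateral) < 0.1 else int(-1000 * lateral)
    drive(int(400 * forward), radius)
    time.sleep(0.05)
```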

I'm well aware that this hack basically defeats the entire purpose of having a Roomba, without really giving you many of the benefits of an upright vacuum, but to let those facts bother you would go against the spirit of what this is: it's a hands-free Roomba, man! How cool is that?

[ ROS-Robot ] VIA [ ROS.org ]

Bilibot Is the Affordable ROS Platform You've Been Looking For

Building robots has never been cheap, which sucks, because all the time, effort, and expense that goes into constructing a hardware platform eats up resources you'd probably rather spend making that platform do something useful and cool. Part of the point of PR2 and ROS was to remove this hardware barrier and let people start focusing on software. Every once in a while, though, you run into someone who for some reason can't find $400,000 between their couch cushions for their own PR2. For these unfortunate souls, a new option may be the Bilibot.

The idea behind the Bilibot (the name is some variation on "cheap robot" in German) is to create a robotics platform that's cheap enough for just about anyone to be able to afford, yet capable enough for serious robotics researchers to be productive with. It consists of three primary components: an iRobot Create base to let it move around, a Kinect sensor to let it see where it's going, and a computer pre-configured with ROS. Part of the appeal of the platform is that it'll be capable of doing stuff right out of the box: there will be a single button you can push to get the robot to start following you around, for example.
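
I don't know what Bilibot's follow-me code will actually look like, but as a sketch of the idea, here's a bare-bones ROS node that steers a Create base toward the centroid of nearby Kinect depth points. The topic names are assumptions.

```python
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

def follow(cloud):
    # take every point within 2 m of the camera and steer toward the centroid
    pts = [p for p in pc2.read_points(cloud, field_names=("x", "y", "z"),
                                      skip_nans=True) if p[2] < 2.0]
    if not pts:
        return
    cx = sum(p[0] for p in pts) / len(pts)  # lateral offset (camera frame)
    cz = sum(p[2] for p in pts) / len(pts)  # distance straight ahead
    cmd = Twist()
    cmd.linear.x = 0.4 * (cz - 1.0)         # hold roughly a 1 m standoff
    cmd.angular.z = -1.0 * cx               # turn toward the person
    pub.publish(cmd)

rospy.init_node("follower")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
rospy.Subscriber("/camera/depth/points", PointCloud2, follow)
rospy.spin()
```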

The primary reason you might want a Bilibot is that the target price is a scant $650, which is basically just the cost of the parts plus the time it takes to put the thing together. And if you want to go even cheaper, you can build it yourself; the plans will be freely available.

Want in? They're currently finalizing the computer hardware, but you can sign up to be notified when pre-orders start for real at the website below.

[ Bilibot ] VIA [ Hizook ]

Modular Robotics' Cubelets Prototypes on Video

Modular Robotics' Cubelets are designed to be an absurdly simple way to build robots. You don't have to know how to program anything or even how to build anything; just snap a few specialized Cubelet blocks together and poof, you've got a robot. Want to build something different? Just use different blocks in different combinations; it's that easy:

One set of 20 Cubelets would cost you $300, if you could buy them, which you can't, because they're sold out. In that set you'd get:

Action Blocks: 2 Drive, 1 Rotate, 1 Speaker, 1 Flashlight, 1 Bar Graph
Sense Blocks: 1 Knob, 1 Brightness, 2 Distance, 1 Temperature
Think/Utility Blocks: 2 Inverse, 1 Minimum, 1 Maximum, 1 Battery, 2 Passive, 2 Blocker

Last time I posted about Cubelets, I posed a question that (as far as I could tell) nobody even tried to answer, so I'm just going to go ahead and pose it again: How many different permutations of robot can you make with one set of 20 Cubelets, keeping in mind the following:

-Each Cubelet has either 5 or 6 attachment points (depending on what it does)
-The same set of Cubelets functions differently when arranged differently
-Cubelet permutations must be able to exist in physical space (tricky!)

You may ignore the fact that using (say) two inverse blocks in a row is functionally identical to not using any inverse blocks, and assume that a Cubelet robot that has a different size or layout counts as a different robot. And while the definition of "robot" is, as always, a little bit iffy, suffice it to say that to count, a Cubelet robot has to be able to sense something or perform some action.
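To get a feel for how fast this blows up, here's a brute-force Python sketch that counts only the shapes: connected arrangements of n cubes, distinct up to translation (so-called fixed polycubes, which matches the rule that a different layout counts as a different robot). The full puzzle also has to weigh which of the 20 block types sits where, and respect the blocks with only five attachment points, so treat this as scaffolding, not an answer.

```python
# Count fixed polycubes: connected arrangements of n unit cubes,
# considered distinct up to translation only.
NEIGHBORS = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

def normalize(cells):
    # slide the shape so its minimum coordinates sit at the origin
    mx = min(x for x, y, z in cells)
    my = min(y for x, y, z in cells)
    mz = min(z for x, y, z in cells)
    return frozenset((x - mx, y - my, z - mz) for x, y, z in cells)

def polycubes(n):
    shapes = {normalize({(0, 0, 0)})}
    for _ in range(n - 1):
        grown = set()
        for shape in shapes:
            for x, y, z in shape:
                for dx, dy, dz in NEIGHBORS:
                    cell = (x + dx, y + dy, z + dz)
                    if cell not in shape:
                        grown.add(normalize(shape | {cell}))
        shapes = grown
    return shapes

for n in range(1, 6):
    print(n, len(polycubes(n)))  # 1, 3, 15, 86, 534... and it explodes
```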

If you can convince us that you have the right answer (post it in the comments section below), it's good for an Automaton t-shirt. Good luck!

[ Modular Robotics ]

New Pleo Robotic Dinosaur Much More Advanced Than Original

Innvo Labs was out in force at CES 2011, and I got some cute pics of their new Pleo Reborn plus 10 minutes with Innvo’s COO Derek Dotson, one of Pleo’s original daddies from back in the Ugobe days:

While it’s a bit disappointing that those adorable pink and blue Pleos won’t be available over here, and that the male and female behaviors that we heard about weren’t implemented, Pleo rb is still much more sophisticated than the original Pleo, especially in terms of software and interactivity.

I’m particularly looking forward to some of those future features that Derek alludes to in our interview, like the wireless connectivity and nose cam access. I mean, if both of those get hooked up, presto, you’ve got a remotely accessible surveillance dino. It’ll be a while yet, but just bombard Innvo with emails, ’cause they’re listening.

Looks like Pleo Reborn is backordered until about April, which is good news for Innvo and the commercial future of Pleo but bad news for you if you want one. They’re $470, and extra food and learning stone kits are $20 each.

[ Pleoworld ]

Natural Intelligence and Artificial Stupidity: Airport Security Needs Better Humans, Not Machines


Illustration: McKibillo

I'm quick to opt for automation where it increases productivity. However, choosing machinery over people to detect humans with foul intent at airports demonstrates a lack of understanding of just how keenly tuned the human brain is to detect subtle facial and behavioral cues.

“Okay,” you’re thinking, “Jeanne’s had a bad airport day.” And you'd be right!

You know the drill: Delays, lines, unfriendly agents, and, of course, the choice between a humiliating pat-down or a scanning machine we have to trust to be safe. I'd say this was definitely my second worst airport day ever. The first?

That would've been when a U.S. Transportation Security Administration agent scooped the pumpkin filling out of my daughter's leftover Thanksgiving pie. Perhaps you're wondering when Al Qaeda started watching Chef Paula Deen so they could hide explosives in homemade pumpkin pie and convince college girls to carry them through airport security?

But TSA employees are denied the right to deploy the most advanced natural intelligence and sensing systems in existence -- the ones inside their own cortices! -- in favor of the artificial stupidity of bureaucratic procedure. Pumpkin filling? Sorry, ma'am, that exceeds the 3-ounce limit for liquids and gels. It has to go.

When an organization like TSA has some US $8 billion to spend, is it better off hiring large numbers of poorly paid, unprotected staff to baby-sit radiation-scattering machines that share with the world details only your proctologist knew before? Or should it be investing in highly select, well-paid, and highly educated professionals using the sensing systems, evolved over millennia and trained over decades, to detect people with something to hide?

The advantage of a machine is that it cannot be accused of bias. But human bias can be counteracted by both training and quality control. Performance reviews can show how many subjects were unnecessarily delayed, with an analysis of characteristics highlighting any bias in who is being stopped unnecessarily. We in the artificial intelligence community can help you with that sort of analysis, TSA.
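
As a trivial example of that sort of analysis, here's a sketch (with made-up numbers) that checks whether two groups of travelers are being pulled aside at rates that differ by more than chance, using a chi-square test of independence.

```python
from scipy.stats import chi2_contingency

# rows: traveler group A, group B; columns: [stopped, not stopped]
# (hypothetical counts for illustration only)
table = [[120, 880],
         [260, 740]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.2g}")
if p < 0.01:
    print("Stop rates differ by more than chance alone would explain.")
```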

We in the AI community are also working assiduously to replicate the capabilities of human beings in machines, but when it comes to facial detection and behavior recognition, our algorithms barely match a child’s capabilities. Delaying people thousands of hours a day and risking radiation damage to them and TSA personnel harms our economy and reduces overall efficiency. Why not deploy natural intelligence instead of artificial stupidity?

Thank heavens we’re testing telepresence systems on our MT490 mobile robot. I’m sure I’m not the only person who’ll be opting to let my avatar drive to more meetings!

Jeanne Dietsch, an IEEE member, is CEO and co-founder of MobileRobots in Amherst, N.H., and vice president of emerging technologies at Adept Technology.

This Robotic Dragonfly Flew 40 Years Ago

This is a robotic dragonfly. If I told you that some company had just invented it and it was flying around today, you’d probably be impressed. Instead, I’m going to tell you that it was developed by the CIA and was flying in the 1970s. And not just flying like proof-of-concept-it-gets-off-the-ground flying, but reportedly, the flight tests were "impressive," whatever that means. It was powered by an ultraminiaturized gasoline engine (!) that would vent its exhaust backwards to increase the bot’s thrust, and the only reason they seem to have scrapped it was that its performance in a crosswind wasn’t that good:

In the 1970s the CIA had developed a miniature listening device that needed a delivery system, so the agency’s scientists looked at building a bumblebee to carry it. They found, however, that the bumblebee was erratic in flight, so the idea was scrapped. An amateur entomologist on the project then suggested a dragonfly, and a prototype was built that achieved the first flight of an insect-sized machine.

A laser beam steered the dragonfly and a watchmaker on the project crafted a miniature oscillating engine so the wings beat, and the fuel bladder carried liquid propellant.

Despite such ingenuity, the project team lost control over the dragonfly in even a gentle wind. “You watch them in nature, they’ll catch a breeze and ride with it. We, of course, needed it to fly to a target. So they were never deployed operationally, but this is a one-of-a-kind piece.”

In and of itself, this dragonfly is not particularly crazy. It’s also not particularly crazy that it was done 30 or 40 years ago, I guess. What IS crazy is when you start thinking about the state of technology 40 years ago versus the state of technology today, and what might be possible now (but currently top secret) if they had an operational insect robot way back then. It blows my mind.

The CIA also came up with a robot squid (its mission is STILL classified) and a robot research fish named Charlie. Pics and video of that, after the jump.

CIA’s Office of Advanced Technologies and Programs developed the Unmanned Underwater Vehicle (UUV) fish to study aquatic robot technology. Some of the specifications used to develop “Charlie” were: speed, endurance, maneuverability, depth control, navigational accuracy, autonomy, and communications status.

The UUV fish contains a pressure hull, ballast system, and communications system in the body and a propulsion system in the tail. It is controlled by a wireless line-of-sight radio handset.

Cute! And once again, seriously not bad for such a long time ago.

[ CIA Flickr ] VIA [ Danger Room ]

Telepresence Robot Fetches Scones, Justifies Pricetag

Wondering what a $15k telepresence robot can do for you? WONDER NO LONGER. With the help of a 4G wireless hotspot, this QB wandered out of the Anybots office into downtown Mountain View, Calif., looking for a snack. A mile later, it found a Red Rock Coffee and ordered a berry scone, tipped something like 125% (!), and then rolled out. Classy.

While it’s a little hard to tell from the vid, I’m assuming that Anybots sent a chaperone of some sort along to make sure that nobody just grabbed QB by the neck and made off with it. And if they didn’t, well, let me know next time you send a robot out for coffee, because I totally want one and I think grand theft robot is the only way it’s gonna happen.

[ Anybots ]
