Kinect Hack Leads to Hands-Free Roomba

Getting a Roomba to obey gesture commands turns out to be pretty simple thanks to the magic of Kinect: the sensor is connected to a PC, which talks to the Roomba via a little Bluetooth dongle and sends it driving and steering commands based on the positions of your hands and hips.
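The original post doesn't share any code, but the control loop is simple enough to sketch. Here's a rough Python version of the idea, assuming you already have skeleton joints coming from some Kinect driver (the get_hand_and_hip helper below is made up) and that the Bluetooth dongle shows up as a serial port speaking the Roomba's Open Interface; the port name and baud rate are guesses that depend on your hardware:

```python
import struct
import serial  # pyserial; the Bluetooth dongle appears as a serial device

# Hypothetical helper -- swap in whatever your Kinect skeleton tracker
# (OpenNI/NITE, libfreenect, etc.) actually provides.
def get_hand_and_hip():
    """Return (hand_x, hand_y, hip_x, hip_y) in meters, in the same frame."""
    raise NotImplementedError

def drive(port, velocity_mm_s, radius_mm):
    """Roomba Open Interface 'Drive' command (opcode 137): signed 16-bit velocity and radius."""
    velocity_mm_s = max(-500, min(500, int(velocity_mm_s)))
    port.write(struct.pack('>Bhh', 137, velocity_mm_s, int(radius_mm)))

# Port name and baud rate depend on your dongle and Roomba model.
port = serial.Serial('/dev/rfcomm0', 115200)
port.write(bytes([128, 131]))  # Start the OI, then switch to Safe mode

while True:
    hand_x, hand_y, hip_x, hip_y = get_hand_and_hip()
    # Raise your hand above your hip to go forward; speed scales with the height difference.
    speed = 600 * (hand_y - hip_y)
    # Move your hand left or right of your hip to arc that way; 32767 means "drive straight".
    offset = hand_x - hip_x
    radius = 32767 if abs(offset) < 0.05 else max(-2000, min(2000, int(-2000 * offset)))
    drive(port, speed, radius)
```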

I'm well aware that this hack basically defeats the entire purpose of having a Roomba, without really giving you many of the benefits of an upright vacuum, but to let those facts bother you would go against the spirit of what this is: it's a hands-free Roomba, man! How cool is that?

[ ROS-Robot ] VIA [ ROS.org ]

Bilibot Is the Affordable ROS Platform You've Been Looking For

Building robots has never been cheap, which sucks, because all the time, effort, and expense that goes into building a hardware platform eats up the time and energy you'd probably rather spend making that platform do something useful and cool. Part of the point of the PR2 and ROS was to remove this hardware barrier and let people start focusing on software. Every once in a while, though, you run into someone who for some reason can't find $400,000 between their couch cushions for a PR2 of their own. For these unfortunate souls, a new option may be the Bilibot.

The idea behind the Bilibot (the name is a play on "cheap robot" in German) is to create a robotics platform that's cheap enough for just about anyone to afford, yet capable enough for serious robotics researchers to be productive with. It consists of three primary components: an iRobot Create base to let it move around, a Kinect sensor to let it see where it's going, and a computer pre-configured with ROS. Part of the appeal of the platform is that it'll be capable of doing stuff right out of the box: there will be a single button you can push to get the robot to start following you around, for example.
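That kind of out-of-the-box behavior is exactly the sort of thing ROS makes easy to express. Just as an illustration (this isn't the Bilibot team's code), here's a bare-bones rospy node in the spirit of the standard follower demos: it looks for the nearest blob in the Kinect depth image and publishes velocity commands to steer toward it. The topic names, depth encoding, thresholds, and gains are all assumptions.

```python
#!/usr/bin/env python
# Minimal person-following sketch in the style of the ROS "follower" demos.
# Not the Bilibot's actual software; topics and gains are guesses.
import rospy
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist

bridge = CvBridge()
cmd_pub = None

def depth_callback(msg):
    # Assumes a float depth image in meters (typical openni output).
    depth = np.nan_to_num(np.asarray(bridge.imgmsg_to_cv2(msg), dtype=np.float32))
    mask = (depth > 0.5) & (depth < 2.0)   # only consider stuff 0.5-2.0 m away
    twist = Twist()
    if mask.sum() > 500:                   # enough pixels to call it a person
        ys, xs = np.nonzero(mask)
        err = (xs.mean() - depth.shape[1] / 2.0) / depth.shape[1]
        dist = depth[mask].mean()
        twist.linear.x = 0.5 * (dist - 1.0)   # hang back about a meter
        twist.angular.z = -1.5 * err          # turn toward the blob's centroid
    cmd_pub.publish(twist)

if __name__ == '__main__':
    rospy.init_node('bilibot_follower_sketch')
    cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('camera/depth/image', Image, depth_callback, queue_size=1)
    rospy.spin()
```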

The primary reason you might want a Bilibot is that the target price they're shooting for is a scant $650, which is basically just the cost of the parts plus the time it takes to put the thing together. And if you want to go even cheaper, you can build it yourself; the plans will be freely available.

Want in? They're currently finalizing the computer hardware, but you can sign up to be notified when pre-orders start for real at the website below.

[ Bilibot ] VIA [ Hizook ]

Modular Robotics' Cubelets Prototypes on Video

Modular Robotics' Cubelets are designed to be an absurdly simple way to build robots. You don't have to know how to program anything or even how to build anything; just snap a few specialized Cubelet blocks together and poof, you've got a robot. Want to build something different? Just use different blocks in different combinations; it's that easy:

One set of 20 Cubelets would cost you $300, if you could buy them, which you can't, because they're sold out. In that set you'd get:

Action Blocks: 2 Drive, 1 Rotate, 1 Speaker, 1 Flashlight, 1 Bar Graph
Sense Blocks: 1 Knob, 1 Brightness, 2 Distance, 1 Temperature
Think/Utility Blocks: 2 Inverse, 1 Minimum, 1 Maximum, 1 Battery, 2 Passive, 2 Blocker

Last time I posted about Cubelets, I posed a question that nobody even tried (as far as I could tell) to answer, so I'm just going to go ahead and pose it again: how many different permutations of robot can you make with one set of 20 Cubelets, keeping in mind the following:

-Each Cubelet has either 5 or 6 attachment points (depending on what it does)
-The same set of Cubelets functions differently when arranged differently
-Cubelet permutations must be able to exist in physical space (tricky!)

You may ignore the fact that using (say) two inverse blocks in a row is functionally identical to not using any inverse blocks, and assume that a Cubelet robot that has a different size or layout counts as a different robot. And while the definition of "robot" is, as always, a little bit iffy, suffice it to say that to count, a Cubelet robot has to be able to sense something or perform some action.

If you can convince us that you have the right answer (post it in the comments section below), it's good for an Automaton t-shirt. Good luck!
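If you just want a feel for how fast the search space blows up, here's a small Python sketch that counts only the distinct connected shapes (fixed polycubes) you can build from n cubes on a lattice: translations are treated as the same, rotations are not, and it completely ignores which Cubelet goes where, the 5- versus 6-face attachment limits, and functional equivalence. In other words, it's a warm-up for intuition, not the contest answer:

```python
# Counts "fixed" polycubes: connected arrangements of n unit cubes on a lattice,
# identifying translations but counting rotations as distinct. Ignores which
# Cubelet sits where and the 5- vs 6-face limits -- intuition only.

def neighbors(cell):
    x, y, z = cell
    return ((x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
            (x, y - 1, z), (x, y, z + 1), (x, y, z - 1))

def normalize(shape):
    # Translate so the bounding box starts at the origin.
    mx = min(x for x, _, _ in shape)
    my = min(y for _, y, _ in shape)
    mz = min(z for _, _, z in shape)
    return frozenset((x - mx, y - my, z - mz) for x, y, z in shape)

def fixed_polycubes(n):
    shapes = {frozenset([(0, 0, 0)])}
    for _ in range(n - 1):
        grown = set()
        for shape in shapes:
            for cell in shape:
                for nb in neighbors(cell):
                    if nb not in shape:
                        grown.add(normalize(shape | {nb}))
        shapes = grown
    return len(shapes)

for n in range(1, 7):
    print(n, fixed_polycubes(n))   # 1, 3, 15, 86, 534, 3481, ...
```

Multiply those shape counts by the number of ways to assign 20 specific blocks to positions and you start to see why nobody answered last time.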

[ Modular Robotics ]

New Pleo Robotic Dinosaur Much More Advanced Than Original

Innvo Labs was out in force at CES 2011, and I got some cute pics of their new Pleo Reborn plus 10 minutes with Innvo’s COO Derek Dotson, one of Pleo’s original daddies from back in the Ugobe days:

While it’s a bit disappointing that those adorable pink and blue Pleos won’t be available over here, and that the male and female behaviors that we heard about weren’t implemented, Pleo rb is still much more sophisticated than the original Pleo, especially in terms of software and interactivity.

I’m particularly looking forward to some of those future features that Derek alludes to in our interview, like the wireless connectivity and nose cam access. I mean, if both of those get hooked up, presto, you’ve got a remotely accessible surveillance dino. It’ll be a while yet, but just bombard Innvo with emails, ’cause they’re listening.

Looks like Pleo Reborn is backordered until about April, which is good news for Innvo and the commercial future of Pleo but bad news for you if you want one. They’re $470, and extra food and learning stone kits are $20 each.


[ Pleoworld ]

Natural Intelligence and Artificial Stupidity: Airport Security Needs Better Humans, Not Machines


Illustration: McKibillo

I'm quick to opt for automation where it increases productivity. However, choosing machinery over people to detect humans with foul intent at airports demonstrates a lack of understanding of just how keenly tuned the human brain is to detect subtle facial and behavioral cues.

“Okay,” you’re thinking, “Jeanne’s had a bad airport day.” And you'd be right!

You know the drill: Delays, lines, unfriendly agents, and, of course, the choice between a humiliating pat-down or a scanning machine we have to trust to be safe. I'd say this was definitely my second worst airport day ever. The first?

That would've been when a U.S. Transportation Security Administration agent scooped the pumpkin filling out of my daughter's leftover Thanksgiving pie. You might well wonder when Al Qaeda started watching Chef Paula Deen so they could hide explosives in homemade pumpkin pie and then convince college girls to carry them through airport security.

But TSA employees are denied the right to deploy the most advanced natural intelligence and sensing systems in existence -- the ones inside their own cortices! -- in favor of the artificial stupidity of bureaucratic procedure. Pumpkin filling? Sorry, ma'am, that exceeds the 3-ounce limit for liquids and gels. It has to go.

When an organization like TSA has some US $8 billion to spend, is it better off hiring large numbers of poorly paid, unprotected staff to baby-sit radiation-scattering machines that share with the world details only your proctologist knew before? Or should it be investing in highly select, well-paid, and highly educated professionals using the sensing systems, evolved over millennia and trained over decades, to detect people with something to hide?

The advantage of a machine is that it cannot be accused of bias. But bias can be counteracted by both training and quality control. Performance reviews can show how many subjects were unnecessarily delayed, and an analysis of their characteristics can highlight any bias in who is being stopped. We in the artificial intelligence community can help you with that sort of analysis, TSA.

We in the AI community are also working assiduously to replicate the capabilities of human beings in machines, but when it comes to facial detection and behavior recognition, our algorithms barely match a child’s capabilities. Delaying people thousands of hours a day and risking radiation damage to them and TSA personnel harms our economy and reduces overall efficiency. Why not deploy natural intelligence instead of artificial stupidity?

Thank heavens we’re testing telepresence systems on our MT490 mobile robot. I’m sure I’m not the only person who’ll be opting to let my avatar drive to more meetings!

Jeanne Dietsch, an IEEE member, is CEO and co-founder of MobileRobots in Amherst, N.H., and vice president of emerging technologies at Adept Technology.

This Robotic Dragonfly Flew 40 Years Ago

This is a robotic dragonfly. If I told you that some company had just invented it and it was flying around today, you’d probably be impressed. Instead, I’m going to tell you that it was developed by the CIA and was flying in the 1970s. And not just flying like proof-of-concept-it-gets-off-the-ground flying: reportedly, the flight tests were "impressive," whatever that means. It was powered by an ultraminiaturized gasoline engine (!) that would vent its exhaust backwards to increase the bot’s thrust, and the only reason they seem to have scrapped it was that its performance in a crosswind wasn’t that good:

In the 1970s the CIA had developed a miniature listening device that needed a delivery system, so the agency’s scientists looked at building a bumblebee to carry it. They found, however, that the bumblebee was erratic in flight, so the idea was scrapped. An amateur entomologist on the project then suggested a dragonfly and a prototype was built that became the first flight of an insect-sized machine.

A laser beam steered the dragonfly and a watchmaker on the project crafted a miniature oscillating engine so the wings beat, and the fuel bladder carried liquid propellant.

Despite such ingenuity, the project team lost control over the dragonfly in even a gentle wind. “You watch them in nature, they’ll catch a breeze and ride with it. We, of course, needed it to fly to a target. So they were never deployed operationally, but this is a one-of-a-kind piece.”

In and of itself, this dragonfly is not particularly crazy. It’s also not particularly crazy that it was done 30 or 40 years ago, I guess. What IS crazy is when you start thinking about the state of technology 40 years ago versus the state of technology today, and what might be possible now (but currently top secret) if they had an operational insect robot way back then. It blows my mind.

The CIA also came up with a robot squid (its mission is STILL classified) and a robot research fish named Charlie. Pics and video of that, after the jump.

CIA’s Office of Advanced Technologies and Programs developed the Unmanned Underwater Vehicle (UUV) fish to study aquatic robot technology. Some of the specifications used to develop “Charlie” were: speed, endurance, maneuverability, depth control, navigational accuracy, autonomy, and communications status.

The UUV fish contains a pressure hull, ballast system, and communications system in the body and a propulsion system in the tail. It is controlled by a wireless line-of-sight radio handset.

Cute! And once again, seriously not bad for such a long time ago.

[ CIA Flickr ] VIA [ Danger Room ]

Telepresence Robot Fetches Scones, Justifies Price Tag

Wondering what a $15k telepresence robot can do for you? WONDER NO LONGER. With the help of a 4G wireless hotspot, this QB wandered out of the Anybots office into downtown Mountain View, Calif., looking for a snack. A mile later, it found a Red Rock Coffee and ordered a berry scone, tipped something like 125% (!) and then rolled out. Classy.

While it’s a little hard to tell from the vid, I’m assuming that Anybots sent a chaperone of some sort along to make sure that nobody just grabbed QB by the neck and made off with it. And if they didn’t, well, let me know next time you send a robot out for coffee, because I totally want one and I think grand theft robot is the only way it’s gonna happen.

[ Anybots ]

Meet Affetto, a Child Robot With Realistic Facial Expressions

Hisashi Ishihara, Yuichiro Yoshikawa, and Prof. Minoru Asada of Osaka University in Japan have developed a new child robot platform called Affetto. Affetto can make realistic facial expressions so that humans can interact with it in a more natural way.

Watch:

Prof. Asada is the leader of the JST ERATO Asada Project and his team has been working on "cognitive developmental robotics," which aims to understand the development of human intelligence through the use of robots. (Learn more about the research that led to Affetto in this interview with Prof. Asada.)

Affetto is modeled after a one- to two-year-old child and will be used to study the early stages of human social development. There have been earlier attempts to study the interaction between child robots and people and how that relates to social development, but the lack of realistic child appearance and facial expressions has hindered human-robot interaction, with caregivers not attending to the robot in a natural way.

Here are some of the expressions that Affetto can make to share its emotions with the caregiver.

The researchers presented a paper describing the development of Affetto's head at the 28th Annual Conference of the Robotics Society of Japan last year.

The video and photo below reveal the mechatronics inside Affetto. It might be a good idea not to show this to caregivers before they meet the robot -- or ever.

This article appeared originally at GetRobo.

Norri Kageki is a journalist who writes about robots. She is originally from Tokyo and currently lives in the San Francisco Bay Area. She is the publisher of GetRobo and also writes for various publications in the U.S. and Japan.

Top 10 Robot Videos of the Month

Robotics is off to a good start this year. In January, there was CES, with lots of cool new robot products and demos, and we've also seen plenty of robot hacks using Microsoft's Kinect 3D sensor, which is creating quite a stir. But there was much more, of course, so it's time to review the most striking, stunning, and strange robot videos of January.

 

No. 10 This mind-bending action sequence from the Indian robot movie Enthiran is a must-watch. Insane, awesome, ridiculous? You be the judge.

 

No. 9 Students at the Franklin W. Olin College of Engineering near Boston, Mass., know how to build some cool stuff. Their latest creation is delicious. 

 

No. 8 DIY enthusiast Jose Julio's ArduSpider is an Arduino-based little critter capable of crawling, hopping, and scaring the bejesus out of the cat.

 

No. 7 Will the amazing, acrobatic quadrotors developed at University of Pennsylvania's GRASP Lab maybe build your next house?

 

No. 6 Watch IBM's HAL 9000 Watson, a Jeopardy-playing artificial intelligence system, destroying the human contestants in this practice match.

 

No. 5 Born at the University of Pennsylvania's Kod*lab, X-RHex is the latest member of the RHex family of robots. And like its siblings, it's one agile bot.

 

No. 4 Why use sensor suits or haptic devices when you have Microsoft's Kinect? Check out this body-motion-controlled humanoid from Japan.

 

No. 3 In what was my favorite CES demo, writer Evan Ackerman stepped into the Cyberdyne HAL robot suit -- and became Iron Man.

 

No. 2 Hit a robot with a hammer and it will likely shatter into pieces. Nein! This German super-tough robotic hand won't. Did anyone say Terminator?

 

No. 1 Drones shooting fireworks at hydrogen balloons. Robot armageddon? Nope, just some Swedish RC hackers having fun in the woods.

 

Did we forget any? Let us know.

X-47B Robot Fighter Jet Makes First Flight

Northrop Grumman’s sexily badass X-47B unmanned combat air system made its first flight ever on Friday, circling a desert runway a couple times all by itself before successfully not crashing. Northrop seemed pretty happy about the way things went:

“The flight provided test data to verify and validate system software for guidance and navigation, and the aerodynamic control of the tailless design. The X-47B aircraft will remain at Edwards AFB for flight envelope expansion before transitioning to Naval Air Station Patuxent River, Md. later this year. There, the system will undergo additional tests to validate its readiness to begin testing in the maritime and carrier environment.”

"Flight envelope expansion" means that they’re going to see how crazy the X-47B can get in the air. After that, they’re going to get it ready for its intended purpose, which is carrier operations. We know that drones are already pretty good at precision maneuvers, but I hear carrier landings are especially tricky. I’m optimistic (I always am about robots), but seeing this thing manage an autonomous carrier touchdown is going to go a long way towards convincing skeptics that drones really can function on a level similar to even the most skilled humans in many aspects of combat aircraft control.

[ Press Release ] VIA [ Danger Room ]
