Automaton

Adept Quattro Crushes Humans at iPhone Game

We know the Adept Quattro is fast and precise, but that doesn't minimize the craziness of videos like this one:

Yeah, I think we need a whole new category on the app store for games that humans are better at than robots. Like, "guess the emotion" or "reasons not to enslave humanity." Although, for the record, fast humans can finish this game in about 10 seconds, which is more than a little bit impressive on its own.

For more vids of robots going wild with speed and precision, check out this post (and this post).

[ Adept Quattro ] via [ @RobotDiva ]

Teaching Robots To Interact Better With Humans

The 6th annual ACM/IEEE Conference on Human-Robot Interaction just ended in Switzerland this week, and Georgia Tech is excited to share three of their presentations showcasing the latest research in how humans and robots relate to each other. Let's start from the top:

How Can Robots Get Our Attention?

Humans rely on lots of fairly abstract social conventions when we communicate, and most of them are things that we don't even think about, like gaze direction and body orientation. Georgia Tech is using its robot, Simon, not just to interact with humans in the same ways that humans interact with each other, but also to figure out how to tell when a human is directing one of these abstract social cues at the robot.

It's a tough thing, because natural interaction with other humans is deceptively subtle, meaning that Simon needs to be able to pick up on abstract cues in order to minimize that feeling of needing to talk to a robot like it's a robot, i.e. slowly and loudly and obviously. Gesture recognition is only the first part of this, and the researchers are hoping to eventually integrate lots of other perceptual cues and tools into the mix.
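To make that a bit more concrete, here's a toy sketch (our own back-of-the-envelope Python, not Georgia Tech's actual system) of how a robot might fuse a couple of those cues into an "is this person addressing me?" decision. All of the weights, angles, and thresholds are invented:

```python
from dataclasses import dataclass

@dataclass
class PersonCues:
    """Perceptual cues for one tracked person (hypothetical inputs from a perception stack)."""
    gaze_angle_deg: float   # angle between the person's gaze and the direction toward the robot
    torso_angle_deg: float  # angle between the person's torso facing and the robot
    is_waving: bool         # output of a separate gesture recognizer

def attention_score(cues: PersonCues) -> float:
    """Fuse the cues into a 0-to-1 score for 'this person is addressing the robot'."""
    gaze = max(0.0, 1.0 - cues.gaze_angle_deg / 45.0)    # falls off past ~45 degrees
    torso = max(0.0, 1.0 - cues.torso_angle_deg / 90.0)  # body orientation matters less than gaze
    gesture = 1.0 if cues.is_waving else 0.0
    return 0.5 * gaze + 0.2 * torso + 0.3 * gesture      # made-up weights

def is_addressing_robot(cues: PersonCues, threshold: float = 0.6) -> bool:
    return attention_score(cues) >= threshold

# Someone waving while looking more or less at the robot should register as addressing it
print(is_addressing_robot(PersonCues(gaze_angle_deg=10, torso_angle_deg=30, is_waving=True)))
```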

More info here.

How Do People Respond to Being Touched by a Robot?

This expands on previous Georgia Tech research that we've written about; the robot in the vid is Cody, our favorite sponge-bath robot. While personally, I take every opportunity to be touched by robots whenever and wherever they feel like, other people may not necessarily be so receptive. As robots spend more time in close proximity to humans helping out with tasks that involve touch, it's important that we don't start to get creeped out or scared.

Georgia Tech's research reveals that what humans perceive a robot's intent to be is important, which is a little weird considering that intent (or at least, perceived intent) is more of a human thing. Cody doesn't have intent, per se: it just has a task that it executes, although I suppose you could argue that fundamentally, that constitutes intent. In this case, when people thought that Cody was touching their forearm to clean it, they were more comfortable than when they thought that Cody was touching their forearm (in the exact same way, mind you) just to comfort them. Curiously, people also turn out to be slightly less comfortable when the robot specifically states its intent before performing any actions, which is the opposite of what I would have expected. Geez, humans are frustratingly complex.

I definitely appreciate where Georgia Tech is going with this research, and why it's so important. As professor Charlie Kemp puts it:

"Primarily people have been focused on how can we make the robot safe, how can we make it do its task effectively. But that’s not going to be enough if we actually want these robots out there helping people in the real world."

More info here.

Teaching Robots to Move Like Humans

This is all about making robots seem more natural and approachable, which is one of those things that might seem less important than it is, since by virtue of reading Automaton, you might be a lot more familiar (and comfortable) with robots than most people are. The angle Georgia Tech is taking here is to first try to figure out how to quantify what "human-like" means, in order to better determine which movements are more "human-like" and which are less so.

Making more human-like movements is important for a couple of reasons. First, it's easier to understand what a robot wants or is doing when it makes movements the way a human would. And second, one of the most identifiable things about robots is the fact that they're all robot-y: they tend to make precise and repetitive movements, which might be very efficient, but isn't very human. Humans are a little more random, and giving robots some of that randomness, researchers say, may help people "forget that this is a robot they’re interacting with."
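For a rough feel of what "quantifying human-likeness" could involve, here's a toy Python sketch of our own (not the researchers' actual metric): a crude jerk-based smoothness score, plus a function that sneaks a little variation into an otherwise perfectly repetitive trajectory. The numbers are purely illustrative:

```python
import numpy as np

def mean_squared_jerk(trajectory: np.ndarray, dt: float) -> float:
    """Crude smoothness metric: mean squared third derivative of position.
    Lower values mean smoother, arguably more human-like, motion."""
    jerk = np.diff(trajectory, n=3, axis=0) / dt**3
    return float(np.mean(jerk ** 2))

def humanize(trajectory: np.ndarray, noise_scale: float = 0.002) -> np.ndarray:
    """Add small, low-frequency wobble so repeated motions aren't robotically identical."""
    t = np.linspace(0.0, 1.0, len(trajectory))[:, None]
    freq = np.random.uniform(0.5, 2.0)       # slightly different on every repetition
    phase = np.random.uniform(0.0, 2 * np.pi)
    return trajectory + noise_scale * np.sin(2 * np.pi * freq * t + phase)

# A perfectly repetitive 1-D reach versus a slightly 'humanized' version of it
reach = np.linspace(0.0, 0.3, 200)[:, None]
print(mean_squared_jerk(reach, dt=0.01), mean_squared_jerk(humanize(reach), dt=0.01))
```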

More info here.

Thanks Travis!

Clear Your Schedule: National Robotics Week is Just One Month Away

Last year, we had a blast at the first annual National Robotics Week, where we got the world's first look at Stickybot III, got some tasty chocolate from Willow Garage, and tried to best an Adept Quattro robot at pick-and-place with a Wiimote (we failed).

Once again, National Robotics Week is much too badass to be constrained by one single week, which is why it's nine days long, running from the 9th to the 17th of April. Sponsors include heavyweights like iRobot, Adept, National Instruments, and Microsoft. As far as what you personally can get out of it, well, just check out this handy map for special events in your area.

Part of the point of National Robotics Week is to spread the word about how robotics is playing an increasingly important role in our lives, and how that makes robotics education even more important. If you're reading this blog, you're probably more familiar with the epic awesomeness of robots than most people you know, so don't just go to an event: take someone else who isn't familiar with robots along with you, and show them why robotics is the future.

[ National Robotics Week ]

Robot's Magic Wheels Transform Into Legs

Wheels are great for moving fast and efficiently, but bad for negotiating terrain. Legs are great for negotiating terrain, but not as good for moving fast and efficiently. To create a robot that can move fast when it needs to but can also adapt to get around complex surfaces, a group from National Taiwan University's Bio-Inspired Robotic Laboratory (BioRoLa) created Quattroped, a robot that can turn its wheels into legs: 

How awesome is that, right?! It would probably be most accurate to say that the bot's wheels transform into not legs but whegs, several varieties of which we've seen over the last couple years. Whegs function similarly to legs, except that they move in a circle instead of back and forth, making them more effective at clambering over obstacles. And as you can see in the video, the bot can even "walk" by moving alternate pairs of whegs.

Quattroped is equipped with GPS, a vision system, and a laser rangefinder, and the team is actively working to integrate more sensors to improve the perceptual capabilities of the robot. On the software side, it's running National Instruments' LabVIEW, and while a remote PC is involved for control and data logging, most of the processing is done on the robot itself.
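Just to illustrate the concept (the real robot runs LabVIEW, and this little Python sketch has nothing to do with BioRoLa's actual controller), here's roughly what a wheel-versus-wheg mode switcher could look like, with the terrain threshold and gait descriptions entirely made up:

```python
from enum import Enum

class Mode(Enum):
    WHEEL = "wheel"  # hubs closed into full circles for fast, efficient rolling
    WHEG = "wheg"    # hubs split into half-circle 'legs' for clambering over obstacles

class HybridLocomotion:
    """Toy mode selector for a wheel/wheg robot; the threshold is invented."""

    def __init__(self, roughness_threshold: float = 0.05):
        self.mode = Mode.WHEEL
        self.roughness_threshold = roughness_threshold  # e.g. meters of terrain height variation

    def update(self, terrain_roughness: float) -> Mode:
        """Pick a locomotion mode based on how rough the terrain ahead looks."""
        self.mode = Mode.WHEG if terrain_roughness > self.roughness_threshold else Mode.WHEEL
        return self.mode

    def gait(self, step: int) -> str:
        if self.mode == Mode.WHEEL:
            return "spin all four hubs continuously"
        # Alternate diagonal pairs, like the walking gait in the video
        pair = "front-left + rear-right" if step % 2 == 0 else "front-right + rear-left"
        return f"swing {pair} whegs forward"

robot = HybridLocomotion()
for step, roughness in enumerate([0.01, 0.02, 0.12, 0.15, 0.03]):
    print(step, robot.update(roughness).value, robot.gait(step))
```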

This is an amazingly adaptable platform, and besides the additional complexity in the wheel hubs and some minimal compromises on wheel strength, this type of thing seems like an obvious way to give mobile robots significant additional capabilities.

[ National Instruments ]

Navy Wants Robot Swarm That Can Autonomously Build Stuff, Apocalypse Unlikely

The US Navy is soliciting proposals for a program that's intended to develop a swarm of tiny robots capable of manufacturing complex objects, potentially including other robots. If you let your imagination go berserk, this may sound like a precursor to some sort of unstoppable robot uprising, but that's just fiction. And why would we waste our time talking about fictional robot uprisings or whatever when the real robots themselves are so much more interesting? Here's what the US Navy wants:

Develop a swarm of micro-robotic fabrication machines that will enable the manufacture of new materials and components. A micro-robot swarm should be able to perform material synthesis and component assembly, concurrently.  The micro-robots could be designed to perform basic operations such as pick and place, dispense liquids, print inks, remove material, join components, etc. Examples of complex material systems of potential interest include but are not limited to: multi-functional materials, programmable materials, metamorphic materials, extreme materials, heterogeneous materials, synthetic materials, etc.

Basically, it's one of those DARPA-esque "here's some crazy thing we want, now go make it happen" things. And it's actually crazier than it sounds, since "micro" is a bit misleading: what the Navy is really looking for are robots that are capable of manipulating "nano- and micron-scale building blocks." So these robots would be really, really small, and there'd need to be a whole heap of them cooperating and doing different jobs in the right places and in the right order. All right there, on your desk. You'd just dump out a bunch of these itty bitty robots, tell them you need a new cellphone or whatever, and they'd get busy and whip up a new one for you right there while you watch.
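Purely for fun, here's a toy Python sketch of what handing out the solicitation's fabrication primitives to a swarm might look like at the very highest level. The operations are the ones the Navy names; everything else (robot counts, the job list, the scheduling) is invented:

```python
from collections import deque

# Fabrication primitives named in the solicitation
PRIMITIVES = {"pick_and_place", "dispense_liquid", "print_ink", "remove_material", "join"}

def dispatch(jobs, num_robots):
    """Round-robin assignment of primitive operations to a swarm of identical micro-robots."""
    robots = deque(range(num_robots))
    schedule = []
    for op, location in jobs:
        if op not in PRIMITIVES:
            raise ValueError(f"unknown operation: {op}")
        schedule.append((robots[0], op, location))
        robots.rotate(-1)  # the next robot in line takes the next job
    return schedule

# Toy 'build a widget' job list: deposit some material, place a component, join them
jobs = [("dispense_liquid", (0, 0)), ("pick_and_place", (1, 0)), ("join", (1, 0))]
for robot_id, op, loc in dispatch(jobs, num_robots=3):
    print(f"robot {robot_id}: {op} at {loc}")
```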

Unsurprisingly, we've got a little ways to go before you'll be able to buy your own jar of magic robodust. The Navy solicitation is in three phases, with phase I being a proof of concept, and it's going to take some work to even get that far. But micro, nano, and swarm robots are all a reality already, so now that the government has decided to throw a bunch of money at the problem, it's just a matter of time before all the little pieces get put together and start working for us.

[ Navy Solicitation ] via [ Danger Room ]

Image of Alice swarmbots, from EPFL

TaxiBot Brings Autonomy to Aircraft Taxiing, Almost


This is TaxiBot. TaxiBot is big and strong and is capable of hauling the mighty Boeing 747 and the mightier Airbus A380 around airports, almost autonomously:

If you think about it, an airport is more or less the best possible place outside of a laboratory for an autonomous robotic vehicle to operate. It's tightly controlled, without random people wandering around all over the place or suicidal bicyclists. It's entirely flat. There are extremely well-defined areas in which vehicles can operate. Everything runs on a tight schedule (ideally). And as far as hauling airplanes around, there are huge freakin' yellow lines painted on the ground that a robot can follow anywhere it needs to go.

It's a little disappointing, then, that TaxiBot doesn't actually incorporate much in the way of autonomy. It's basically just a big remote control car that pilots can steer directly from the cockpit, and that's driven around by a human when it's not hauling aircraft. The point? The aircraft don't have to use their engines while taxiing, reducing wear and saving fuel. So that's good and all, I just kinda wish TaxiBot was, you know, a little less taxi and a little more bot. It's something they've got in the works, though: the company says that the control architecture of the vehicle is already in place to support autonomous tug operation, so that in the near future no tug driver would be needed for taxiing. Sweet, bring it on!
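And for the record, following those big yellow lines really is the easy part. Here's a toy proportional steering law of the sort you'd use to track a painted centerline; the gains and inputs are invented and have nothing to do with TaxiBot's actual control system:

```python
def steering_command(line_offset_m: float, heading_error_rad: float,
                     k_offset: float = 0.8, k_heading: float = 1.5) -> float:
    """Toy proportional steering law for tracking a painted taxiway centerline.

    line_offset_m: lateral distance from the line (e.g., from a downward-looking camera)
    heading_error_rad: angle between the vehicle's heading and the line's direction
    Returns a steering angle in radians; both gains are made up.
    """
    return -(k_offset * line_offset_m + k_heading * heading_error_rad)

# The tug is half a meter to the right of the line and angled slightly away from it
print(steering_command(line_offset_m=0.5, heading_error_rad=0.1))
```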

[ Ricardo TaxiBot ]

Nao Gets Clever New Self-Charger

This is just an engineering prototype, but Nao's new self-charging station looks pretty slick. The robot checks out special marks on the base of the charger to align what looks like a special backpack with a (magnetic?) charging plug, and once it's attached, an extendable cord lets you continue to use the robot while it charges. Or, Nao will just relax a bit until it's topped off. When charging is complete, Nao swipes its arm across its back to detach the plug, which retracts back into the charger:

So, that's neat. It's also a little bit convoluted, if you ask me, but what do you want, a charger Nao could just walk onto that would charge it through its feet or something? Hey, now there's an idea...
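If you're wondering what the docking dance boils down to, it's basically visual servoing on those base marks. Here's a toy sketch of one step of such a loop; the tolerances and commands are invented, and this is not Aldebaran's actual code:

```python
def docking_step(mark_offset_x_m: float, mark_distance_m: float,
                 align_tol: float = 0.01, dock_range: float = 0.05):
    """One step of a toy visual-servoing loop for lining up with the charger.

    mark_offset_x_m: lateral offset of the base marks as seen by the robot's camera
    mark_distance_m: estimated distance from the backpack plug to the charger
    Returns a (command, magnitude) tuple; all thresholds are made up.
    """
    if abs(mark_offset_x_m) > align_tol:
        return ("turn", -mark_offset_x_m)                     # rotate until the marks are centered
    if mark_distance_m > dock_range:
        return ("step_toward_charger", mark_distance_m - dock_range)
    return ("attach_plug", 0.0)

print(docking_step(mark_offset_x_m=0.04, mark_distance_m=0.30))
```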

No info on pricing or availability just yet, but we'll keep you updated.

[ Nao ] via [ Robots-Dreams ]

Top 10 Robotic Kinect Hacks

We love Microsoft's Kinect 3D sensor, and not just because you can play games with it. At a mere $150, it's a dirt-cheap way to bring depth sensing and 3D vision to robots, and while open-source USB drivers made it easy, a forthcoming Windows SDK from Microsoft promises to make it even easier.

Kinect, which is actually hardware made by an Israeli company called PrimeSense, works by projecting an infrared laser pattern onto nearby objects. A dedicated IR sensor picks up on the laser to determine distance for each pixel, and that information is then mapped onto an image from a standard RGB camera. What you end up with is an RGBD image, where each pixel has both a color and a distance, which you can then use to track body positions, gestures, and motion, or even to generate 3D maps. Needless to say, this is an awesome capability to incorporate into a robot, and the cheap price makes it accessible to a huge audience.
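If you want to play along at home, the open-source drivers mentioned above come with Python bindings. Here's a minimal sketch of grabbing one color frame and one depth frame, assuming the freenect module and NumPy are installed; the raw-to-meters conversion is a community-derived approximation, not an official formula:

```python
import numpy as np
import freenect  # Python bindings that ship with the open-source libfreenect driver

# Grab one RGB frame and one raw depth frame (both 640 x 480)
rgb, _ = freenect.sync_get_video()
raw_depth, _ = freenect.sync_get_depth()

# Convert the 11-bit raw depth values to approximate meters
# (a rough calibration from the OpenKinect community, good enough for hacking)
depth_m = 0.1236 * np.tan(raw_depth / 2842.5 + 1.1863)

# Now every pixel has both a color and a distance: a basic RGBD image
print("center pixel color:", rgb[240, 320], "range: %.2f m" % depth_m[240, 320])
```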

We've chosen our top 10 favorite examples of how Kinect can be used to make awesome robots. Check them out:

1. Kinect Quadrotor: Bolting a Kinect to the top of a quadrotor creates a robot that can autonomously navigate and avoid obstacles, creating a 3D map as it goes.

2. Hands-free Roomba: Why actually vacuum when you can just pretend to actually vacuum, and then use a Kinect plus a Roomba to do the vacuuming for you?

3. iRobot AVA: iRobot integrated two (two!) Kinect sensors into their AVA not-exactly-telepresence prototype: one to help the robot navigate and another one to detect motion and gestures.

4. Bilibot: The great thing about Kinect is that it can be used to give complex vision to cheap robots, and Bilibot is a DIY platform that gives you mobility, eyes, and a brain in a package that costs just $650.

5. Gesture Surgery: If you've got really, really steady hands, you can now use a Kinect that recognizes hand gestures to control a da Vinci robotic surgical system.

6. PR2 Teleoperation: Willow Garage's PR2 already has 3D depth cameras, so it's kinda funny to see it wearing a Kinect hat. Using ROS, a Kinect sensor can be used to control the robot's sophisticated arms directly.

7. Humanoid Teleoperation: Taylor Veltrop put together this sweet demo showing control over a NAO robot using Kinect and some Wii controllers. Then he gives the robot a banana, and a knife (!).

8. Car Navigation: Back when DARPA hosted their Grand Challenge for autonomous vehicles, robot cars required all kinds of crazy sensor systems to make it down a road. On a slightly smaller scale, all they need now is a single Kinect sensor.

9. Delta Robot: This Kinect-controlled delta robot doesn't seem to work all that well, which makes it pretty funny (and maybe a little scary) to watch.

10. 3D Object Scanning: Robots can use Kinect for mapping environments in 3D, but with enough coverage and precision, you can use them to whip up detailed 3D models of objects (and people) too.

Latest Geminoid Is Incredibly Realistic

Geminoid DK is the first Geminoid based on a non-Japanese person, and also the first bearded one.

Okay, I admit it... I found myself wondering whether this was in fact a real robot, or actually a person pretending to be a robot.

It's not a fake. This is the latest iteration of the Geminoid series of ultra-realistic androids, from Japanese firm Kokoro and Osaka University mad scientist roboticist Hiroshi Ishiguro. Specifically, this is Geminoid DK, which was constructed to look exactly like associate professor Henrik Scharfe of Aalborg University in Denmark.

UPDATE: Wow. We've just found a new video that is absolutely amazing:

When we contacted Prof. Scharfe inquiring about the android, he confirmed: "No, it is not a hoax," adding that he and colleagues in Denmark and Japan have been working on the project for about a year now. His Geminoid, which cost some US $200,000, was built by Kokoro in Tokyo and is now at Japan's Advanced Telecommunications Research Institute International (ATR) in Nara for setup and testing.

"In a couple of weeks I will go back to Japan to participate in the experiments," he says. "After that, the robot is shipped to Denmark to inhabit a newly designed lab."

Geminoid DK does look pretty much exactly like the original template:

The Geminoid is on the right. I think.

If you're wondering why on Earth someone would want an exact robotic double of themselves, besides being TOTALLY AND COMPLETELY AWESOME, the Geminoid is going to be used for studying human-robot interaction, in particular people's emotional responses when they face an android representing another person. Prof. Scharfe wants to find out if the robot can transmit a person's "presence" to a remote location and whether cultural differences in people's acceptance of robots make a difference.

These are some of the same questions that Hiroshi Ishiguro set out to explore when he created his robot clone, the Geminoid HI-1, and a copy of a twentysomething Japanese model, the Geminoid F [see photos, right].

For his part, Ishiguro, a professor at Osaka University and group leader at ATR, declined to give us more details about his involvement with the Geminoid DK project, saying only that he and Scharfe "are working together."

As with the other Geminoid robots, all of the movements and expressions of Geminoid DK are remote controlled by an operator with a computer, who uses a motion-capture system that tracks facial expressions and head movements. Turn your head and the Geminoid does the same; move your mouth and the android follows suit.
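To give a rough idea of what that kind of puppeteering involves (a toy sketch of our own, not Kokoro's or ATR's software), retargeting comes down to mapping the operator's tracked angles onto the android's actuators, clamped to whatever the mechanics allow. The limits and scaling below are invented:

```python
def retarget_head(operator_yaw_deg: float, operator_pitch_deg: float,
                  yaw_limit_deg: float = 40.0, pitch_limit_deg: float = 20.0):
    """Map motion-captured operator head angles onto normalized neck actuator commands.

    The android can't necessarily turn its head as far as a person can, so the
    operator's angles are clamped to (invented) mechanical limits and scaled to [-1, 1].
    """
    clamp = lambda value, limit: max(-limit, min(limit, value))
    yaw_cmd = clamp(operator_yaw_deg, yaw_limit_deg) / yaw_limit_deg
    pitch_cmd = clamp(operator_pitch_deg, pitch_limit_deg) / pitch_limit_deg
    return yaw_cmd, pitch_cmd

# The operator turns their head 25 degrees left and nods 5 degrees down
print(retarget_head(operator_yaw_deg=-25.0, operator_pitch_deg=-5.0))
```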

But it's not hard to imagine full autonomy in the not-too-distant future.

Incidentally, according to a note on his website, here's what Prof. Scharfe's wife thinks about his robotic double:

- She prefers body number 1

- She suggests that he should always send body number 2 to conferences and stuff

Prefers body number 1, eh? Does she know that body number 2 is upgradeable?

Here's another video and more (freaky) pics of Geminoid DK in the making to fuel your nightmares, enjoy:


Images and videos: Geminoid DK


Google Shows Us Why We All Need Robot Cars

We're pretty familiar with autonomous cars around here, and we've even been treated to a ride in one of Stanford's robots at their automotive innovation lab, which they launched in partnership with Volkswagen. You might also remember Shelley, their autonomous Audi TTS, which raced to the top of Pikes Peak last year. Volkswagen's thinking behind all of this high-performance autonomous car stuff is that at some point, they'll be able to program your car to be a far, far better driver than you could ever be, and it'll have the ability to pull some crazy maneuvers to save you from potential accidents.
