Automaton

Superfast Robotic Camera Mimics Human Eye

German researchers have developed a robotic camera that mimics the motion of real eyes and even moves at superhuman speeds.

The camera system can point in any direction and is also capable of imitating the fastest human eye movements, which can reach speeds of 500 degrees per second. But the system can also move faster than that, achieving more than 2500 degrees per second. It would make for very fast robot eyes.

Led by Professor Heinz Ulbrich at the Institute of Applied Mechanics at the Technische Universität München, a team of researchers has been working on superfast camera orientation systems that can reproduce the human gaze.

In many experiments in psychology, human-computer interaction, and other fields, researchers want to monitor precisely what subjects are looking at. Gaze can reveal not only what people are focusing their attention on but it also provides clues about their state of mind and intentions.

Mobile systems to monitor gaze include eye-tracking software and head-mounted cameras. But they're not perfect; sometimes they just can't follow a person's fast eye movements, and sometimes they provide ambiguous gaze information.

In collaboration with their project partners Dr. Erich Schneider and Professor Thomas Brand from the Chair for Clinical Neuroscience at Ludwig-Maximilians Universität München, the Munich team, which is supported in part by the CoTeSys Cluster, is developing a system to overcome those limitations.

The system, propped on a person's head, uses a custom-made eye-tracker to monitor the person's eye movements. It then precisely reproduces those movements using a superfast actuator-driven mechanism with yaw, pitch, and roll rotation, like a human eyeball. When the real eyes move, the robot eye follows suit.

The engineers at the Institute of Applied Mechanics have been working on the camera orientation system over the past few years. Their previous designs had 2 degrees of freedom (DOF). Now researcher Thomas Villgrattner is presenting a system that improves on the earlier versions and features not 2 but 3 DOF.

He explains that existing camera-orientation systems with 3 DOF that are fast and lightweight rely on model-aircraft servo actuators. The main drawback of such actuators is that they can introduce delays and require gear boxes.

So Villgrattner sought a different approach. Because this is a head-mounted device, it has to be lightweight and inconspicuous -- you don't want it rattling and shaking on the subject's scalp. Which actuators to use?

The solution is an elegant parallel mechanism that uses ultrasonic piezo actuators. The piezos transmit their movement to a prismatic joint, which in turn drives small push rods attached to the camera frame. The rods have spherical joints at either end, making the mechanism a PSS (prismatic-spherical-spherical) chain.
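To make the geometry concrete, here is a minimal inverse-kinematics sketch of one PSS chain in Python: given a desired camera orientation, it solves for the prismatic (slider) displacement that keeps the rigid push rod at its fixed length. All dimensions and anchor points below are invented for illustration; the real system's geometry comes from Villgrattner's optimization, not from this sketch.

```python
import numpy as np

def rot(yaw, pitch, roll):
    """Z-Y-X rotation matrix from yaw, pitch, roll (radians)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def slider_position(b, d, a, R, L):
    """Solve ||b + s*d - R @ a|| = L for the prismatic displacement s.

    b: base point of the slider axis, d: unit vector along the slider,
    a: rod attachment point on the camera platform (platform frame),
    R: desired platform orientation, L: fixed push-rod length.
    """
    p = R @ a                      # platform anchor in the base frame
    w = b - p
    # Quadratic in s: s^2 + 2*(d.w)*s + (w.w - L^2) = 0
    B = d @ w
    C = w @ w - L**2
    disc = B**2 - C
    if disc < 0:
        raise ValueError("pose unreachable with this rod length")
    return -B + np.sqrt(disc)      # pick the outward-facing solution
```

Each of the three chains would get its own call with its own base point and platform anchor; the piezo actuators then drive the sliders to the three computed displacements.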

It's a "quite nice mechanism," says Masaaki Kumagai, a mechanical engineering associate professor at Tohoku Gakuin University, in Miyagi, Japan, who was not involved in the project. "I can't believe they made such a high speed/acceleration mechanism using piezo actuators."

The advantage is that it can reach high speeds and accelerations with small actuators, which remain on a stationary base, so they don't add to the inertial mass of the moving parts. And the piezos also provide high forces at low speeds, so no gear box is needed.

Villgrattner describes the device's mechanical design and kinematics and dynamics analysis in a paper titled "Optimization and Dynamic Simulation of a Parallel Three Degree-of-Freedom Camera Orientation System," presented at last month's IEEE/RSJ International Conference on Intelligent Robots and Systems.

The current prototype weighs in at just 100 grams. It was able to reproduce the fastest eye movements, known as saccades, and also perform movements much faster than what our eyes can do.

The system, Villgrattner tells me, was mainly designed for a "head-mounted gaze-driven camera system," but he adds that it could also be used "for remote eye trackers, for eye related 'Wizard of Oz' tests, and as artificial eyes for humanoid robots."

In particular, this last application -- eyes for humanoid robots -- appears quite promising, and the Munich team is already working on that. Current humanoid eyes are rather simple, typically just static cameras, and that's understandable given all the complexity in these machines. It would be cool to see robots with humanlike -- or superhuman -- gaze capabilities.

Below is a video of the camera-orientation system (the head-mounted device is not shown). First, it moves the camera around each of the three axes individually (vertical, horizontal, and longitudinal) with an amplitude of about 30 degrees. Next it moves simultaneously around all three axes with an amplitude of about 19 degrees. Then it performs fast movements around the vertical axis at 1000 degrees per second, as well as highly dynamic movements around all axes. Finally, the system reproduces natural human eye movements based on data from an eye-tracking system.

UPDATE: Corrected maximum speeds for saccades and added collaborators involved in the project.

Robotic Gripper Made from Coffee-Filled Balloon Picks Up Anything

As robot hands go, this is the weirdest I've ever seen. But you know what? It works really well.

Researchers at Cornell University, University of Chicago, and iRobot reported this week that they've developed a fingerless robotic gripper made from a rubber bag filled with coffee grounds.

The gripper can transition from a soft state, when it's easily deformable and can conform to the shape of various objects, to a rigid state, when it can firmly hold the objects.

The secret, the researchers report in a paper published in the Proceedings of the National Academy of Sciences (PNAS), is the "jamming" phase transition of granular materials -- in this case, coffee grounds.

The approach, they write, "opens up new possibilities for the design of simple, yet highly adaptive systems that excel at fast gripping of complex objects."

Researchers have used the "jamming" principle for robot locomotion before, but this appears to be the first application in manipulation.

So how does it work? When a granular material like sand or coffee grounds is loosely packed it can flow almost like a liquid. But when the particles in the material are packed tightly together, they "jam," or lock into one another.

One way of jamming them together is by applying a vacuum. You've seen it before: Vacuum-sealed coffee packages are hard bricks, but when opened, air rushes in and the packages become deformable.
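The vacuum-packed-coffee analogy can be captured in a toy model: the grains jam once their packing fraction (grain volume divided by bag volume) exceeds the random-close-packing threshold of roughly 0.64. The numbers below are purely illustrative, not measurements from the paper.

```python
# Toy model of the jamming transition (illustrative numbers only):
# grains lock up when their packing fraction -- total grain volume
# divided by bag volume -- rises past random close packing (~0.64).

GRAIN_VOLUME = 0.60      # total volume of coffee grounds (arbitrary units)
JAMMING_FRACTION = 0.64  # approximate random-close-packing threshold

def is_jammed(bag_volume):
    """The bag is rigid once the grains are squeezed past the threshold."""
    return GRAIN_VOLUME / bag_volume >= JAMMING_FRACTION

# Loose bag: grains flow and the gripper conforms to the object.
print(is_jammed(1.00))   # False -> soft, deformable state
# A vacuum shrinks the bag by a few percent: the grains lock together.
print(is_jammed(0.92))   # True -> rigid, gripping state
```

The point the sketch makes is that only a small change in volume separates the liquid-like and solid-like states, which is why a modest vacuum pump is enough to switch the gripper between them.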

The gripper is a clever idea. Now, is it practical? How strong is it? And can this coffee-powered robot hand fetch you a cup of coffee?

To find out more, I spoke to Eric Brown, the lead author of the PNAS paper and a postdoc at the laboratory led by Prof. Heinrich M. Jaeger at the University of Chicago.

The other authors are Nicholas Rodenberg and Prof. Jaeger from the University of Chicago; John Amend and Hod Lipson from the Cornell Computational Synthesis Laboratory, in Ithaca, N.Y.; Annan Mozeika and Erik Steltz from iRobot, in Bedford, Mass.; and Mitchell R. Zakin from the U.S. Defense Advanced Research Projects Agency, in Arlington, Va.

Erico Guizzo: This is an unusual gripper approach and I was wondering how the idea came about.

Eric Brown: This grew out of a program to develop a new field of soft robotics that was sponsored by DARPA. It was based on the observation that humans and animals are mostly made of soft materials, but robots have usually been made of hard materials like metal, and maybe we could build robots with more of the functionality of humans if we switched to softer materials. My group at the University of Chicago had experience in the physics of soft and granular materials, and we were paired up with robot engineers at Cornell and iRobot. My group has been studying a transition between soft and hard states of granular materials, called "jamming", and together we decided that gripping was a good robotics application for this.

EG: You successfully tested the gripper with varied objects -- what's the one object you were most surprised it could hold? And is there anything that it failed to hold?

EB: I think I was most surprised that it was able to pick up a penny, because we had expected it would have a harder time forming around very flat objects. We were especially excited that it could pick up fragile objects like raw eggs and wine glasses, because these are traditionally challenging for robotic grippers. It doesn't do well with extremely soft objects like cotton balls.

EG: How strong is the gripper? That is, how much weight can it lift, and once it's holding something, how strong is the grasp?

EB: The heaviest objects we lifted with the hand-sized gripper were a pair of gallon jugs of water, weighing a total of about 15 pounds [6.8 kilograms]. The grip is easily strong enough to lift and hold onto anything that fits into your hand, and it can hold on if, for example, something bumps into it. However, you could pull something out of the grip with your own hand. The technology is scalable, so we predict that a larger gripper, about 3 feet [0.9 meters] across, would be able to lift a car.

EG: One of the press materials states that the gripper could "outperform robotic fingers when confronted with unfamiliar objects or complex shapes" -- could you give a few actual examples?

EB: We never did direct comparison tests against other grippers. The argument behind this statement is that the gripper can pick up a variety of different objects, and quickly, because the granular material will conform to the shapes of the objects. Even very robust robotic hands with fingers need to spend extra time to examine the object and calculate optimal positioning for each finger, things that our gripper doesn't need to spend time doing.

EG: This mechanism of soft-to-rigid transformation -- was it a well-understood phenomenon? Did anyone try to exploit it before?

EB: People have understood that soft pads on fingers or pincers give a little more tolerance so we can pick up things that have unusual shapes or are fragile, such as a wine glass, without breaking them. While people had proposed the idea of using granular materials in bags for these soft pads, no one had bothered to try it out because they didn't think it would be more advantageous than a piece of soft rubber. What they had not thought of was that the jamming transition could do the pinching as well, eliminating the need for individual fingers or pincers.

Video and more photos:

Photos: John Amend/Cornell University

Robots Podcast: How to Build Your Own UAV for 300 USD

A decade ago the term Unmanned Aerial Vehicle (UAV) was synonymous with expensive equipment, complex aerodynamics, and cruise-missile-type control algorithms. But since then, a rapid drop in the price of IMUs, driven by the rise of mobile computing, has slashed equipment costs. Today, open-source software like the Arduino environment and open-source hardware like the ArduPilot allow you and me to build our own UAV in a weekend for less than 300 USD.
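As a taste of what those cheap IMUs make possible, here's a minimal complementary filter -- the classic hobbyist technique for fusing a drifting-but-fast gyro with a noisy-but-absolute accelerometer into a stable attitude estimate. This is an illustrative sketch, not ArduPilot's actual code, and the sensor values are made up.

```python
import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Return an updated pitch estimate (radians).

    gyro_rate: pitch rate from the gyro (rad/s)
    accel_x, accel_z: accelerometer readings (gravity components)
    alpha: how much to trust the gyro vs. the accelerometer
    """
    gyro_pitch = pitch + gyro_rate * dt          # integrate the gyro
    accel_pitch = math.atan2(accel_x, accel_z)   # tilt angle from gravity
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# With the airframe held level, an initially wrong estimate decays
# toward zero pitch as the accelerometer slowly corrects the gyro term.
pitch = 0.3
for _ in range(200):
    pitch = complementary_filter(pitch, gyro_rate=0.0,
                                 accel_x=0.0, accel_z=1.0, dt=0.02)
print(round(pitch, 4))
```

A few lines like these, run at a few hundred hertz on an Arduino-class microcontroller, are the core of the "autonomous stabilization" that Anderson describes as having become easy.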

DIY Drones' Chris Anderson

Much of this progress is due to what has become the largest amateur UAV community and one of the largest robotics communities: DIY Drones. Founded by Chris Anderson, whose day job is Editor-in-Chief of WIRED, the site now has more than 12,000 members and covers all aspects of UAVs.

In its latest episode, the Robots podcast interviews Chris Anderson about DIY Drones. Anderson explains how to go about building a cheap UAV, why autonomous stabilization and navigation has become easy, the technical and legal aspects of flying your autonomous plane around the neighborhood, and the risk of putting UAV technology into the wrong hands. Anderson also talks about his experience with producing open-source hardware and the economic challenges -- and benefits -- this brings about. To conclude, he shares some anecdotes on some of the crazy projects run on his site.

For more information, head over to the Robots Podcast or DIY Drones, or tune in to the interview!

President Obama Likes Robots (Yay!)

barack obama loves robots

On Monday, U.S. President Barack Obama opened the (first ever) White House Science Fair with the following:

“One of the great joys of being President is getting to meet young people like all of you -- and some of the folks in the other room who I just had a chance to see some of their exhibits and the work that they were doing. It’s inspiring -- and I never miss a chance to see cool robots when I get a chance.”

Wow, me neither! Also introduced at the event was a new DARPA initiative to give resources to students to help them build those aforementioned robots. BTW, I’m still waiting to hear back on that anti-robot takeover czar position.

[ The White House Blog ] via [ Robot Living ]

Innvo Unveils New, Upgraded Pleo

We knew that Innvo Labs was working on some upgrades to Pleo, but all has now been revealed over at the Bob The Pleo forums, where Innvo Labs CEO Derek Dotson discussed the new “Pleo Reborn.” If you’re a fan of Pleo, you pretty much have to read the entire interview, but I’ve condensed most of the new features if you’d rather just skim:

- Pleos are now male or female (blue or pink), and will react to each other accordingly: females make gentle noises at each other, males step back and shout at each other, and a male and female will make noises and lean against each other

- Pleo skin coloring will be randomized slightly, so that two Pleos produced at the same time will look distinct. There will also be 10 different eye colors.

- Pleo skin durability has been improved and should now last 5x longer

- A new lithium polymer battery more than doubles battery life to 120-150 minutes, and an LED battery indicator has been added underneath Pleo

- Pleo will have a 'seed' personality from the factory. Some will learn faster than others, and some will tend to be happier (or more mopey).

- Pleo now knows what time it is and will alter its behaviors accordingly; for example, it will want to be fed in the afternoon and act sleepy in the evening

- Pleo can 'smell' RFID tags

- Pleo now has voice recognition, and you can name it, and it will respond to that name… as long as it’s you saying it, not anyone else

- By combining RFID tags that instruct Pleo to perform specific behaviors with voice recognition, it’s now possible to train Pleo to respond to different commands, such as “bow” or “come to me”

- Many more touch sensors have been added, along with corresponding behaviors. For example, if you pet Pleo’s side, it will lean into you.

- Pleo now has a G sensor that lets it detect acceleration and impacts as well as touch

- Pleo’s nose cam now allows for target tracking

- Motor speed and response have been improved, especially in the tail, head, and neck

More awesome stuff, plus video, after the jump.

There are a few more things that I thought were so interesting that I had to quote part of the interview:

Pleo needs to be fed now. If you don’t kind of take care of your Pleo, eventually it will get sick. It will cough, get lethargic, and all that. It needs its food to keep it healthy. And if it falls, here’s a big thing, Pleo actually feels pain now. So, if you abuse your Pleo or drop it, remember he knows if he’s been dropped or violated, he feels pain. It takes him a while to recover. He’ll limp. If you touch the area, it’ll be sore and he’ll cry. There are medicines that come with Pleo. You give him these medicines and it helps with the healing process. So, love and affection, a little healing food, will get Pleo back up. In the labs, we’re striving to make this a pet. Along with a pet, there are certain obligations. So, if you’ve got children who want a pet, this is a good training device. If you can keep Pleo healthy, you can more likely keep a real pet healthy.

Now, here’s one of the quirkiest things we’ve done. Pleo can sense temperature. This is kind of a cool feature. So, let’s say you’re traveling, and you’ve got Pleo in the car and it drops below a certain temperature, and I’m not gonna give the temperature now because it’s adjustable, Pleo starts to shiver and shake like he’s cold. And if you leave him too cold too long, he’ll catch a cold. He also senses heat. If it’s too hot for Pleo, he’ll start panting. Along with what you get right off the get-go is a little chunk of ice. You put it under his chin. If he’s hot, he’ll open his mouth. You put it in, he’ll chew on this ice. It cools him down. Like I said, in our strive to make Pleo a more realistic pet, he has to interact better with his environment.

One more video here.

As far as availability goes, it sounds like there’s a bit of a supply problem, but the first few units should be available to members of the PleoWorld email list later this month. Price? Well, the unit in the pics was purchased at a trade show for about $500, so that’s likely to be the ballpark. Innvo has promised a giant, Jurassic-y booth at CES next year, and we’ll be bringing that to you in early January.

[ PleoWorld ]
[ Bob The Pleo interview with Derek Dotson ]
VIA [ Engadget ]

Japanese Snake Robot Goes Where Humans Can't

Japanese robotics company HiBot has unveiled a nimble snake bot capable of moving inside air ducts and other narrow places where people can't, or don't want to, go.

The ACM-R4H robot, designed for remote inspection and surveillance in confined environments, uses small wheels to move but it can slither and undulate and even raise its head like a cobra.

The new robot, which is half a meter long and weighs in at 4.5 kilograms, carries a camera and LEDs on its head for image acquisition and can be fitted with other end-effectors such as mechanical grippers or thermo/infrared vision systems.

Despite its seemingly complex motion capabilities, "the control of the robot is quite simple and doesn't require too much training," says robotics engineer and HiBot cofounder Michele Guarnieri.

"All [degrees of freedom] can be easily controlled by a game-style joystick, including the motion of recovering from an upside-down position."

The company says applications include the inspection of ducts, pipes, and ceilings, as well as remote surveillance and security. Indeed, I bet the CIA and other spy agencies could find some uses for this bot!

Watch the ACM-R4H in action:

HiBot is a spin-off of Tokyo Tech's Hirose-Fukushima Lab, which has brought to life some of the world's most amazing mechanical snakes. The company is transforming some of the research creatures into commercial-grade systems.

The ACM-R4H is smaller than other HiBot snake models, so it can easily enter and zigzag through tight spaces. The head and tail segments can move up and down and the middle joint can turn left and right.

It can negotiate 90 degree corners inside an air duct, for instance, or move inside pipes less than 14 centimeters in diameter. It can also overcome obstacles on its path.

The current version relies on a tether connected to a control unit, which provides communication and power (the control box has a rechargeable battery that lasts for over 3 hours).

The user interface shows images from the camera and a set of data from the robot, including power consumption, temperature, and position of each joint. It also shows a 3D image of the robot's current position that the operator can use for assisting with navigation.

Another tool to help with controlling and planning missions for the robot is a 3D simulator, called V-REP, that HiBot offers with its robots or as a stand-alone program:

HiBot, which also develops power line inspection robots, says some customers using the robot -- and most won't disclose what they're using it for -- had no issues with the tether. "But we can change the robot architecture to have wireless communication," Guarnieri says.

And though the robot is resistant to water splashes, it can be made completely waterproof, he adds. You never know what people will use it for...

Below, some more snake bot videos, just because it's so cool to watch these lifelike machines. The first video shows the ACM-R3H, which is a long wheeled machine -- watch the entertaining demonstration on a Japanese TV show!

The other video shows the ACM-R5H, capable of slithering on the ground and also swimming. Yes, this snake bot swims just like the real thing.

The Robots Podcast Interviews Futurama's David X. Cohen

Futurama's David X. Cohen

Good news everyone! In its latest episode, the Robots Podcast interviews David X. Cohen, the head writer and executive producer of Futurama.

In the year 3000, robots are an integral part of society. Futurama's anti-hero is a robot called Bender, whom Wikipedia describes as a "foul-mouthed, heavy-drinking, cigar-smoking, kleptomaniacal, misanthropic, egocentric, ill-tempered robot." Other robots include Donbot, a criminal robot heading the robot mafia, and Calculon, a hopelessly self-absorbed robot heading the robot supremacy society. There's even a "Robot Santa," who, due to a programming error, judges everyone to be naughty and goes on a yearly Christmas rampage across Futurama's universe.

Futurama is foremost a comedy show, and its flawed robots are foremost theatrical characters. But Cohen and colleagues are science buffs (Cohen himself is a Harvard and Berkeley graduate and even worked at the Harvard robotics lab for a while) and take joy and pride in providing the occasional "science relief" -- the "z-ray" on Bender's head shown in the picture to the left is one such example (more on that in our previous interview).

As becomes clear in his Robots Podcast interview, Cohen deeply cares about the way science and technology are portrayed in Futurama. It is a difficult balancing act, but an important one given the wild success of Futurama (now in its fifth season!) and the subtle but enormous influence of science fiction on robotics: I suspect sci-fi has had some influence on the career choice, goals and dreams of most roboticists I know, and it certainly does greatly affect public perception.

I, for one, love the influence and am a huge Futurama fan. Thanks for the interview, David X.!

Images: "Futurama" TM and (C) 2009 Twentieth Century Fox Film Corporation. All Rights Reserved.

Google's Autonomous Car Takes To The Streets

google autonomous car

Remember 5 years ago when a bunch of robotic cars managed to navigate through the desert all by themselves? And remember 3 years ago when a bunch of robotic cars managed to navigate through a (fake) urban area all by themselves? Well, today it’s the future, and autonomous robotic cars from Google have already logged 140,000 miles on busy, complex city streets and highways with only occasional human intervention, and 1,000 miles without any human control whatsoever.


Humanoid Baby Diego-San Looking for Makeover Advice

I’m sure you remember Diego-San, whom we spotted in an issue of Kokoro News back in January. Reactions to these pictures were… Well, let’s just say, reactions were decidedly mixed. And by decidedly mixed, I mean predominantly negative. Diego-San’s creator, Dr. Javier Movellan, has been exploring possible alterations to Diego-San’s face, and has made this concept public:

As Dr. Movellan pointed out in one of his comments on our post, a lot of what’s relevant about designing the appearance of a humanoid robot is simply about trial and error:

“Everybody has strong opinions about why the current version generates such negative reactions: face too large, robot babies are freaky, skin texture is wrong, mixing mechanical body with biological face is scary, giganto-babies are scary … For just about every theory examples can be given that contradict the theories. The truth is nobody really knows. It is a trial and error process.”

With that in mind, Dr. Movellan is looking for some feedback (constructive feedback, please) on what you do and don’t like about this new concept for Diego-San’s face. Personally, I’d say it’s a good start, with the helmet, antenna and exposed electronics all reinforcing the fact that the robot isn’t intending to fool you into thinking it’s real. However, I’d be curious as to what the effect would be if more of the human features were removed. Like, what is strictly necessary for the robot to accomplish its research goals, which may not necessarily involve a substantial amount of expression recognition? Does Diego-San need ears, for example? A nose?

While one route might be to make it less human, the other route would be to make it much more cartoony. So basically, keep all the human features, just make it look intentionally fake… Again, the idea being that you’re reinforcing the fact that the robot isn’t trying to fool you into thinking it’s human.

Anyway, please let Dr. Movellan know what you think by posting a comment. For more background, read through some of the comments on our original post, and Plastic Pals has a very interesting interview with Dr. Movellan here.

[ UCSD Machine Perception Lab ] VIA [ Plastic Pals ]
[ Original Kokoro News Article (*.PDF) ]



IEEE Spectrum's award-winning robotics blog, featuring news, articles, and videos on robots, humanoids, automation, artificial intelligence, and more.