Automaton

Review: Evolution Robotics Mint Sweeper

Evolution Robotics’ Mint sweeper robot made its debut at CES 2010, where we got a demo of it exhibiting its cleaning behaviors on video. Mint offers flexibility by doing away with the vacuum entirely, and using either wet or dry cleaning pads, Swiffer style. It’s certainly simpler, but does it work as well as the competition? Our review, after the jump.

-Design

Mint is adorable. I like its clean lines and straightforwardly modern black and white color scheme. It’s compact (10 inches wide, 3 inches high), but at about 4 pounds I wouldn’t call it light. On top, it has three backlit operation buttons and three small indicator lights, and that’s the extent of the interface, although it also sings to you to let you know when it’s charging, finished, stuck, etc. The big black diamond is how Mint localizes itself on your floor; more on that later.

Mint’s ‘chin’ is a pressure-actuated bumper, similar to other robot vacuums. It also has a frontal proximity sensor with which it can detect impending obstacles to prevent itself from running headlong into walls and stuff.

Underneath, Mint has two drive wheels and two casters that also serve as edge sensors… Mint is heavily weighted towards its butt, so if the casters drive off a ledge, the robot won’t tip forward and has plenty of time to stop and back up.

The system for attaching Mint’s cleaning pads is ingenious. There’s a detachable panel that sticks to the bottom of Mint’s chin with strong magnets, and the cleaning pads wrap around that. To get the pads to stay on, you “zip” them into two sets of rubber teeth, and then snap the panel onto Mint and you’re good to go.

It’s very secure while also being very easy to change. The modularity of the system also means that you can stick just about anything on there, from Swiffer pads to the reusable cloths that come included with Mint to something of your own.

Mint doesn’t have a charging dock. To charge it, you use a wall adapter that plugs in underneath the robot. The manual suggests that you charge Mint while it’s standing on its butt, which makes it take up less space, but then you can’t easily grab it by the butt-mounted handle. It might have been better to put the charging port on the side of the robot instead, which would allow for a bit more flexibility, but I suppose that’s a pretty minor quibble.

-Cleaning Technique

First, let’s just get one fact out of the way: Mint does not clean carpets. The manual spells it out on page one… Mint is for indoor use on hard surface floors only, specifically wood, tile, vinyl, linoleum, and laminate. So, if you have lots of carpet, Mint may not be for you.

If you have carpet and hardwood or tile, Mint can deal with that. Mint is supposed to avoid getting on any area rugs you might have by registering their edges with its sensors, but if your rugs are too flat, Mint will get on them anyway. If this is a problem for you, the manual suggests that you place obstacles around your carpets to dissuade Mint from getting on them, but this isn’t necessarily a very practical thing to do for most people. In our testing, Mint avoided thick shag area rugs, but once (out of six trial runs) got onto a thinner rug that had rounded edges.

The way Mint approaches cleaning isn’t pseudo-random like the Roomba; it’s more like the Neato XV-11 in that it actively builds a map of each area it cleans. Mint doesn’t have any integrated localization sensors, but instead depends on an external reference point to tell it where it is in a room.

This is a NorthStar cube. It’s essentially a little infrared projector that shines some spots onto your ceiling. The big black diamond on top of Mint can spot those spots, and Mint uses their relative orientation to figure out where it is and which way it’s going. You don’t really have to pay much attention to where exactly you stick the cube, as long as it’s in the same room that you want Mint to be cleaning and it has a clear view of the ceiling. This location information, combined with Mint’s proximity and drop sensors, allows it to build a map of the entire area that it’s cleaning.
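Evolution Robotics hasn’t published the math behind NorthStar, but the principle is standard two-landmark localization: if you know where the two projected spots sit in the room and where they appear relative to the robot, you can solve directly for the robot’s position and heading. A minimal 2D sketch (the function and coordinates are mine, not Evolution’s):

```python
import math

def pose_from_two_landmarks(room_a, room_b, seen_a, seen_b):
    """Recover a robot pose (x, y, heading) from two fixed landmarks.

    room_a, room_b: landmark positions in the room frame.
    seen_a, seen_b: the same landmarks as measured in the robot's frame.
    """
    # Heading: angle between the landmark baseline in each frame.
    room_ang = math.atan2(room_b[1] - room_a[1], room_b[0] - room_a[0])
    seen_ang = math.atan2(seen_b[1] - seen_a[1], seen_b[0] - seen_a[0])
    heading = room_ang - seen_ang

    # Position: rotate a measured landmark into the room frame and subtract.
    c, s = math.cos(heading), math.sin(heading)
    x = room_a[0] - (c * seen_a[0] - s * seen_a[1])
    y = room_a[1] - (s * seen_a[0] + c * seen_a[1])
    return x, y, heading
```

In practice the spots are on the ceiling and the robot only sees their bearings, so the real system has an extra projection step, but the two-point idea is the same.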

NorthStar isn’t strictly necessary to get Mint to work, and it’s not a problem if Mint gets underneath something and can’t localize for a little while. If you don’t use NorthStar at all, Mint won’t be able to localize itself, and consequently will clean a smaller area and not do its edge cleaning behavior. This can actually be a good thing, though, for small areas like a bathroom. If you want Mint to clean more rooms, you can buy additional NorthStar cubes to help it navigate.

Mint uses two different cleaning techniques depending on whether you tell it to sweep or mop, based on what motions are most effective with what type of cleaning pad. The dry sweeping technique is a straight line approach, where Mint covers open areas in a back and forth pattern and then makes a complete edging circuit. For mopping, Mint also does straight lines, but while moving along each line it goes forward and back and to each side in a sort of “Y” motion. We’ve got some long exposure pictures that show this very well just below.
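The back-and-forth sweep is the classic “boustrophedon” coverage pattern used by mapping robots. Here’s a hedged sketch of how waypoints for an open rectangular area might be generated; the dimensions, pad width, and function name are illustrative, not Mint’s actual planner:

```python
def boustrophedon(width, height, pad):
    """Back-and-forth waypoints covering a width x height area
    with a cleaning pad of the given width (all units arbitrary)."""
    waypoints = []
    y, going_right = pad / 2, True
    while y < height:
        xs = (0, width) if going_right else (width, 0)
        waypoints += [(xs[0], y), (xs[1], y)]
        y += pad                    # step over by one pad width
        going_right = not going_right
    return waypoints
```

A real planner overlaps passes slightly and splices in the edging circuit afterward, which matches the coverage visible in the long exposure shots.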

-Cleaning Effectiveness

As discussed, Mint cleans using two different techniques depending on whether it’s mopping or sweeping. I took a long exposure image of each of these patterns, and the differences are pretty cool. First is sweeping; Mint finished the room in about 15 minutes:

You can easily see the linear coverage pattern of the open space, as well as the edge coverage behavior. Mint did miss a small area at the lower left which temporarily contained a cat. The cat did not suffer any permanent physical damage, but I can’t speak to potential emotional trauma.

Here’s the mopping pattern, which took Mint a little over 20 minutes to execute:

It’s essentially a similar coverage pattern, except with the addition of the back and forth scrubbing behavior and without doing the edging.

As you can see, Mint has no problem navigating around corners, getting under furniture, and avoiding (most) rugs. We have a tangle of cords under our entertainment center, and Mint didn’t get caught or disconnect anything after multiple runs.

I mentioned that Mint was cute, and its cleaning behavior is equally adorable. I’ve always liked watching cleaning robots at work, but Mint is my favorite, because you can see it thinking… Whenever Mint updates its internal map based on new data and decides where to go next, it pauses and its little indicator lights blink for a second. To me, this small touch lends it an incredible amount of personality.

While sweeping, Mint is excellent at picking up dust and small patches of dirt and hair, which actually stick to the pad. In open areas, it’s likely more effective than you wielding a Swiffer because of Mint’s consistent, overlapping coverage. Mint does have problems with larger objects and will just push them into corners and next to walls since they don’t get trapped on the pad. For example, we have some pet rats in our living room (in a cage, obviously), and they like to toss sunflower seed shells onto the floor. Mint doesn’t pick these up. It also has problems with big clumps of cat hair and dust bunnies, since it pushes them around as well. The dry pads also aren’t great at dealing with stickiness; they’re basically just for small dry things like dust and hair.

Overall the wet pads did a better job at cleaning since they (combined with the mopping behavior) were more effective at breaking up dirt and getting stuff to stick to them. Impressively, the pad that came with Mint stayed moist, even while cleaning most of our living room. The difference was easy to notice when walking around barefoot. Mint didn’t do quite as well on grouted tile, because it was hard for it to get down into the grout lines as effectively.

When Mint finishes cleaning, it sings at you, and then returns to where it started, which is both handy and adorable. The battery is supposed to last 3 hours dry sweeping and 2 hours wet mopping, and I never had any issues with Mint running out of power. The only thing you have to do after Mint finishes (besides plugging it in) is either tossing the pad in the trash if you used a disposable one, or tossing it in the laundry if you used a reusable one. Otherwise, there’s no additional maintenance… I didn’t experience any issues with hair tangled in the wheels or anything like that.

It is slightly irksome that there’s no easy way to confine Mint to a specific area. Placing the NorthStar cube does mean that Mint will tend not to stray too far from there, but if you don’t want it under your couch or something, you basically have to go old school and block it off by putting stuff on the floor.

Mint not having a vacuum means that it doesn’t sound like a vacuum. In fact, I would describe Mint as silent, for all practical purposes. There’s an almost imperceptible hum from the wheel motors when it moves, but otherwise, it’s just the noise of the pad moving over the ground. What this means is that you can run Mint while you’re home, like when you’re on the couch watching TV, and it won’t bother you in the least.

-Conclusion

Many things about Mint seem, at first blush, to be compromises in functionality when compared to other cleaning robots like the Roomba or the Neato XV-11… Mint can’t be scheduled. Mint doesn’t have a charging dock. Mint can’t clean carpet. However, Mint is designed to be simple and straightforward, and you probably won’t find yourself missing those features. Realistically, even though the Neato XV-11 and Roomba dock themselves and will run on their own when you’re not around, you still have to empty their dustbins, which means that you’re spending similar amounts of time on post-cleaning maintenance for all three robots.

Another thing to keep in mind is, as Nancy Smith from iRobot told us, cleaning robots (at this point) aren’t really able to duplicate the level of cleaning that a human can accomplish with an upright vacuum or a mop. Mint is not really designed to clean a very dirty floor, but it’ll keep an already clean floor clean. You can run it every day, or every other day, and it’ll help keep dirt, dust, and pet hair under control with only minimal effort from you.

Personally, I was more than satisfied with how well Mint cleaned, although I’m not sure why I wouldn’t just have it use its wet cloth and mopping technique every time, since it seemed to clean better and it left my floors all shiny. Also, the fact that Mint is silent is a major plus, because it means I can use it whenever I need to and it doesn’t prevent me from doing other important things, like napping.

So, the big question: how does Mint compare to iRobot’s Roomba and Neato’s XV-11? Well, firstly, it’s $250, which is cheaper than either the Roomba ($300 and up for a 500 series) or the XV-11 ($400). Secondly, it doesn’t do carpet, so if you want a robot that cleans carpet… Yeah, it’s not gonna be Mint. On hardwood, Mint does as well or better than the other robot vacuums at daily maintenance of dust and pet hair, but suffers at times from the lack of a vacuum for picking up larger pieces of debris. While sweeping, Mint is similar in speed to the XV-11, and significantly faster than the Roomba. Mint’s ability to use different types of cleaning techniques (wet and dry) is also a bonus, and its lack of noise is a major distinguishing feature.

I’d recommend Mint if you’re considering a robot vacuum and don’t have a lot of carpet to deal with. Mint is smart, it’s versatile, it’s quiet, and it’s (relatively) cheap. It does have some issues picking up larger clumps of dirt and hair, but as a simple maintenance robot, it’s very effective.

Now, I realize that I may not have explicitly answered all of your questions, but fear not, I’ll be on the phone with Evolution Robotics in the next couple days for a follow-up post in which they’ll personally be answering all of the questions that I haven’t been able to, as well as whatever new questions come up based on this post.

You can find out more about Mint at the Evolution Robotics website, and buy one from MintCleaner.com.

The r3 Rope Robot

I've recently had the chance to visit ETH Zurich's M3-Lab, which is part of the Sensory Motor Systems group headed by Robert Riener, and which contains one of the world's most advanced rope robots. The r3 uses ropes guided over deflection units and pulled by motorized winches to move end effectors to any point in space. A modular winch setup allows payloads of up to 100 kg and top speeds of up to 18 m/s. The necessary stiffness is provided by using Dyneema ropes with very little stretch (<1%) and a very stiff aluminum frame (see pictures below).
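The basic geometry behind this kind of cable robot is simple to state: in an idealized model (straight, inextensible ropes), each rope's commanded length is just the distance from its deflection point on the frame to the end effector. A toy sketch of that inverse kinematics, with made-up anchor coordinates:

```python
import math

def rope_lengths(anchors, effector):
    """Ideal rope lengths for a cable-driven robot: straight-line
    distance from each frame anchor (deflection pulley) to the
    end effector. Ignores rope sag, stretch, and pulley geometry."""
    return [math.dist(a, effector) for a in anchors]
```

The real r3 layers absolute position sensing and force control on top of this, but coordinated length changes like these are what move the end effector.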

In addition, the r3 can also function as a tendon-based haptic interface: Combining information from absolute position measurement units and high-resolution motor encoders allows the system to determine absolute winch positions at any given time. Force sensors on each rope and a sampling frequency of 4 kHz complete the setup, making the robot a very versatile platform.

The r3 is built around a large CAVE (Cave Automatic Virtual Environment), equipped with multiple large projection screens and an advanced 3D sound system to provide a realistic and immersive simulation of environments.

The Somnomat project shown in the video above uses the r3 system to conduct sleep research. Here is what Joachim von Zitzewitz, the lead researcher of the project, has to say on the Somnomat's motivation:

A child is rocked to sleep. Adults drowse in a rattling train. But why? Not much information about vestibular feedback on sleep is found in literature. Therefore, we want to investigate the underlying mechanisms by moving the test subject in different degrees of freedom while he/she is sleeping. Additionally, we focus on the influence of other modalities (acoustic, brightness of light) on sleep.

As in other sleep research projects, the additional equipment on the Somnomat monitors several physiological signals, such as heart rate, breathing frequency, and EEG. Future studies starting in 2011 will investigate correlations between these data and the vestibular stimulation -- a system identification of the test person, if you will.

Bad timing of my visit did not allow me to test-sleep the bed (yet!), because the robot is also concurrently used in two other projects that focus on highly different dynamics: The rowing simulator recreates the experience of rowing on a river, down to the dynamics of the oar entering the water and the crowds cheering from the shore. The athlete sits in a rowing scull mounted at the center of the CAVE, with the r3 robot's ropes connected to the outer ends of the oar. As you row, the robot displays oar forces in 3D in accordance with the user's movements.

A third project focuses on understanding motor learning of fast movements by using the robot to support a tennis-like 3D movement. It implements different assistive controllers, such as position control or path control (i.e., a controller restricting the user's movement to a given trajectory in space).

With a single rope axis generating peak forces of up to 900 N, guaranteeing the r3's safety has been a major focus, and a series of safety measures have been implemented (e.g., all drive trains are equipped with electro-mechanical brakes which block the winch in case of an emergency) and tested with dummies (see my favorite picture at the end of the post).

Thanks Joachim!

Autonomous Vehicle Driving from Italy to China


The Russian policeman waved at the orange van zigzagging across the empty plaza, ordering it to stop. The van didn’t, so the officer stepped closer to address the driver.

But there was no driver.

The van is an autonomous vehicle developed at the University of Parma’s Artificial Vision and Intelligent Systems Laboratory, known as VisLab. Crammed with computers, cameras, and sensors, the vehicle is capable of detecting cars, lanes, and obstacles -- and driving itself.

The VisLab researchers, after getting tired of testing their vehicles in laboratory conditions, decided to set out on a real-world test drive: a 13,000-kilometer, three-month intercontinental journey from Parma to Shanghai. The group is now about halfway through their trip, which started in July and will end in late October, at the 2010 World Expo in China. (See real time location and live video.)

The autonomous vehicle Grand Challenges organized by the Defense Advanced Research Projects Agency, or Darpa, in the United States, popularized the idea of cars that drive themselves. After participating twice in the Darpa competitions, VisLab was seeking new ways of testing their ideas.

“When you do things in the lab, it all really works. But when you go out in a real road, with real traffic, real weather, it’s another story,” says Alberto Broggi, VisLab's director and an engineering professor at Parma University.

That's how the idea for their Parma-Shanghai trip, which is partially funded by the European Research Council, came about. The goal is to test, and later perfect, their vision and navigation systems, which the researchers hope to one day deploy on commercial vehicles.

Unlike the Darpa vehicles, the VisLab van is not driving fully autonomously from start to finish. That approach would require at least a rough map of the complete route, Broggi says, adding that they lack maps for locations like Mongolia and Kazakhstan. So instead of programming the trajectory ahead of time, the Italians adopted a simpler, though still technically challenging, approach.

Two vans travel in line. The first uses maps and GPS to drive itself whenever possible, but a human driver is in control most of the time. The second van uses its cameras and navigation system to follow the first; it visually tracks the lead van, plans a trajectory in real time, and generates controls for steering and accelerating or braking. (If a car gets in between the two vans, the second van guides itself using GPS instructions it receives from the leader.) Watch:
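VisLab hasn’t published the follower van’s control law, but visual leader-following is commonly implemented with something like pure pursuit: steer toward a tracked point on the leader’s path at some lookahead distance. A simplified bicycle-model sketch (the wheelbase value and function are illustrative assumptions, not VisLab’s code):

```python
import math

def pursuit_steering(pose, leader_point, wheelbase=2.5):
    """Pure-pursuit steering angle toward the tracked leader position.

    pose: (x, y, heading) of the follower van, heading in radians.
    leader_point: (x, y) target on the leader's path (must not
    coincide with the follower's own position).
    wheelbase: follower wheelbase in meters (assumed value).
    """
    x, y, heading = pose
    dx, dy = leader_point[0] - x, leader_point[1] - y
    # Rotate the world-frame displacement into the follower's frame.
    lx = math.cos(-heading) * dx - math.sin(-heading) * dy
    ly = math.sin(-heading) * dx + math.cos(-heading) * dy
    ld2 = lx * lx + ly * ly          # squared lookahead distance
    curvature = 2.0 * ly / ld2       # pure-pursuit arc curvature
    return math.atan(wheelbase * curvature)
```

Run at each control cycle against the visually tracked leader position, this produces exactly the behavior described: the follower continuously re-plans an arc toward wherever the lead van is.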

And whereas the Darpa vehicles ran unmanned, the VisLab vans have people on board (including, on one occasion, hitchhikers). Inside, researchers monitor the systems and are ready to take control of the vehicles if necessary. The vans only drive with no one inside during demonstrations. That was the case when it was stopped by the police officer in Yekaterinburg, in central Russia.

Broggi insists they are not attempting to prove anything, just trying to "stress our systems." And he acknowledges that even the second van is not driving autonomously the entire time. There were situations when they had to take control because of maniac drivers or distracted pedestrians, or because they had to drive faster to compensate for delays. Still, he estimates that on average the van is in autonomous mode more than 90 percent of the time.

The compact vehicles carry a lot of equipment. Each has seven cameras, four laser scanners, a GPS unit, and an inertial sensor suite. Two cameras hanging above the windshield provide stereo vision, used for identifying lane markings and the terrain slope. Three synchronized cameras behind the windshield stitch their images into a 180-degree panoramic frontal view. The laser scanners -- three mono-beam and one four-plane laser beam -- detect obstacles, pedestrians, and other vehicles, as well as ditches and bumps on the road.

Each vehicle also carries three computers. One computer is responsible for processing images and data from the front of the vehicle. Another computer handles data for the sides. The third integrates all the data and plans a path, which in turn triggers low-level controls for steering, accelerating, and braking the vehicle. Solar panels on top of the vans power all the electronics.

A key piece of software is the one that processes the 180-degree frontal view. This component takes the large panoramic image and identifies the lead van, even when approaching a tight turn or steep hill. It can also detect road markings, pedestrians, and obstacles in general. And though it can recognize Italian traffic signs, it can’t do the same for Russian or Chinese signs.

The vans are manufactured by Piaggio, better known for its Vespa scooters. The vans are fully electric, and the researchers drive them in the morning, recharge in the afternoon, drive some more, and recharge again overnight. They use power outlets along the way or, when none is available, diesel generators.

A group of 20 staff researchers and students travel in a convoy that includes four vans (two pairs of leader-follower vehicles) and six support trucks, which provide a mechanic shop, storage, accommodation, and satellite communications. Every three weeks a few team members return to Italy to rest and others join the convoy.

The experiment will produce a mountain of data. Whenever the vehicles are running, the computers are recording data from the cameras, laser scanners, inertial sensors, GPS, vehicle actuators, batteries, and other systems. Every night the team replaces a set of hard drives with empty ones. They estimate they’ll end up with between 50 and 100 terabytes of data.
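Those numbers pencil out to a serious nightly haul. A quick back-of-the-envelope check using the article's figures, assuming the three-month trip amounts to roughly 90 days of driving:

```python
def daily_data_gb(total_tb, trip_days=90):
    """Average data logged per day in GB, given a trip total in TB
    (1 TB = 1000 GB; the ~90-day trip length is an assumption)."""
    return total_tb * 1000 / trip_days
```

At the high end of the estimate that works out to more than a terabyte a day, which squares with swapping out a full set of hard drives every night.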

The idea is that after the test is over, the researchers can use the data to study every instance when things didn’t work, such as when the vehicle failed to detect lanes or misidentified an obstacle. Moreover, they can design new vision and navigation systems and use the data set to test them, recreating their journey.

So the end of the trip is not the end of the work for the VisLab team. "It will be the beginning of many new projects," Broggi says. Indeed, they've already done one major upgrade during the trip. At the beginning, there were problems with the second van, which was failing to follow the lead van, and they had to tweak their software and re-upload it to the vehicles.

Even with all the planning, some problems are unpredictable. During a recent stretch of the trip, the convoy found itself in the middle of the notoriously bad traffic of Moscow. Because of the congestion, a two-lane road had three rows of cars. The van's system insisted on staying in its lane, so the researchers had to turn to manual driving.

“It was too dangerous to drive lawfully!” Broggi says with a laugh.

And if the technology is challenging, the bureaucracy can be even more overwhelming. VisLab, a top research group in autonomous vehicles and intelligent transportation systems, first proposed the project to the Italian government years ago; the researchers never heard back. The project moved forward when they teamed up with Overland, an expedition organization, which handles logistics, including obtaining permissions to enter Russia and China carrying high-tech cameras and satellite equipment.

In fact, to cross the Russian border the group was held for 22 hours by customs officers, who took a huge number of pictures of the vehicles and the equipment and demanded a pile of paperwork. The Russians also insisted on having access to every data packet sent over a satellite connection.

The VisLab group is currently crossing Siberia, facing rain, snow, and icy roads. Next they will encounter even tougher conditions, a stretch of mountainous driving in Kazakhstan, with treacherous hills and roads, which will last for several days.

As for the Russian police officer who stopped -- or tried to stop -- the driverless van, the researchers say they approached him and explained the situation. The officer wanted to see documents, though he didn’t ask for anyone's driver's license. In the end, he didn’t look amused, but he left -- without issuing a ticket.

More photos:

HRP-4 Hides It All Somewhere

Kawada Industries and the National Institute of Advanced Industrial Science and Technology (AIST) have just unveiled the latest edition of their family of humanoid robots, the HRP-4. HRP-4 is designed “in the image of a lean but well-muscled track-and-field athlete,” and it certainly is pretty damn lean…

Read More

Hoaloha Robotics: Tandy Trower's New Healthcare Robotics Company

Tandy Trower, who helped launch Microsoft Robotics Studio back in 2006, has started a brand spankin’ new robotics company called Hoaloha Robotics. The goal? Affordable ($5,000 to $10,000) socially assistive (i.e. elder care) robots in the next three to five years. Trower envisions a robot able to do all of the conventional remote monitoring and pill reminder stuff, but also able to assist with movement, object retrieval, and potentially provide some degree of intelligent social interaction.

Trower believes he can make an important contribution by developing a common interface and software that will make assistive robots easy to use and customize with applications, similar to the way Apple standardized the interface and application model for smartphones. “This is what primarily I believe is holding back most of the industry right now. It’s not that robots can’t be built, it’s that nobody has defined the software that’s going to turn robots into useful appliances,” he said.

Er, they haven’t? Hm.

“The components exist; it’s not difficult to build such a platform,” he said. “What people have lacked is the ability to envision what the right package should contain and, most important, what the applications and user interface should be.”

Now that’s something I wholeheartedly agree with. Or at least, I agree that the interface is going to be the tricky part. I’m not trying to minimize the amount of work that it’s going to take to get the hardware and programming up to snuff, but in order to be an effective assistive robot, the Hoaloha platform is going to have to be more independent than a Roomba or an XV-11, both of which are designed to be totally independent (more or less) and neither of which quite pulls it off. This, specifically, is what Hoaloha is going to be focusing on, partnering with other companies for hardware development. And when it comes to hardware components, they do exist, and they’re getting cheaper by leaps and bounds, making that three to five year timeframe (and the target price) potentially achievable.

Also, here’s the same obligatory quote we’ve been hearing for like the last decade:

Trower said the industry feels a lot like the early days of the PC, when there were Apple II and TRS-80 computers, but they weren’t yet doing a lot to enhance productivity or change people’s lives.

Dammit, I’m getting old over here… It feels like we’ve been stuck in the roboeighties forever.

[ Hoaloha Robotics ] VIA [ Seattle Times ] and [ Hizook ]

Thanks Dan!

Is Telepresence the Next Big Thing in Robotics?

Is telepresence the next big thing in robotics? Will telepresence robots revolutionize work, manufacturing, energy production, medicine, space exploration, and other facets of modern life? Or is it just all hype?

Below is a compilation of opinions from interviews I did and from other sources. Then tell us what you think in the comment section at the bottom of the page.

"Manual labor could easily be done without leaving your home ... One region of the world could export the specialized skills it has. Anywhere."
—Marvin Minsky, MIT professor, in his 1980 telepresence essay

"Telepresence is vastly easier to do than AI is, so remotely controlling a robot -- be it to visit a remote location or do surgery -- will mature much sooner than autonomous robots will."
—Rob Enderle, principal analyst, Enderle Group

"After 100 years of advances in communications, where we discovered how to transmit text, voice, images, why not try to transmit presence?"
—Trevor Blackwell, founder and CEO of Anybots

"It made me realize that the telepresence experience -- you actually can have these robotic avatars, then your consciousness is injected into the vehicle, into this other form of existence. It was really quite profound."
—James Cameron, movie director, about piloting a robotic submersible into the shipwreck of Titanic, in a TED talk in Long Beach, Calif., early this year

"Whatever hugs do for people, I'm quite sure telehugs won't do it."
—Hubert Dreyfus, philosopher at the University of California, Berkeley, in The Robot in the Garden: Telerobotics and Telepistemology in the Age of the Internet

"All of these [robotic telepresence] products are just begging me to kick them over."
—Lou Mazzucchelli, an expert in video teleconferencing, in this New York Times article

NASA Ready to Send Humanoid Robot to Space

Robonaut 2

In one giant leap for robotkind, NASA will send the world’s first humanoid robot to space later this year.

The humanoid, called Robonaut 2 or R2, is set to launch on space shuttle Discovery on 1 November 2010, and travel to the International Space Station, where it will become a permanent resident and work alongside humans as a robotic helper.

The Robonaut features dexterous arms and hands that can manipulate objects and tools just like humans do. Astronauts will mount the robot on a fixed pedestal inside one of the space station labs and use it to perform tasks like flipping switches, cleaning air filters, and holding tools.

The main goal is to find out how manipulation robots behave in space -- and also give crew members a second pair of hands. NASA hopes the experience will allow it to upgrade the robot in the future, so it would be able to support astronauts in more complex tasks, including repairs and scientific missions inside and outside the ISS.

"It’s the first time ever in the history of the planet that we’ve decided to launch a humanoid robot into space," says Nic Radford, the Robonaut deputy project manager. "It’s been an amazing experience."

The robot can perform tasks autonomously or under remote control, or a mix of both. Astronauts on the station will operate the robot using a laptop. The Robonaut can also be "joysticked" and directly controlled from Earth, though there's a delay of several seconds for commands to reach the space station.

Most of the time the robot will receive instructions designating a task and carry it through autonomously. But NASA has tested a sensor suit that a human operator can wear to transmit motions to the robot. Eventually the Robonaut could become a powerful telepresence system for space exploration.


And why a human-shaped robot? The advantage of a humanoid design, Radford says, is its ability to interact with the same exact technologies that the crew can.

“Space shuttles and stations were made with humans in mind,” he explains. “The technology we’ve invested in over the years requires five fingers and two arms to operate. So the humanoid system with fine points of dexterity is a logical design as opposed to redesigning the shuttle interface for a non-humanoid robot.”

The Robonaut, which looks a bit like Star Wars' Boba Fett, weighs about 150 kilograms [330 pounds]. Built primarily of aluminum with some steel parts, it carries over 350 sensors and has a total of 42 degrees of freedom.

Each arm is about 80 centimeters long and can hold 9 kg [20 lb] in Earth's gravity. Each hand has 12 degrees of freedom: 4 degrees of freedom in the thumb, 3 degrees of freedom each in the index and middle fingers, and 1 each in the other fingers.

Behind its helmet visor are four visible light cameras: two provide stereo vision for the robot and remote operators, and two work as auxiliary cameras. A fifth infrared camera is housed in the mouth area for depth perception.

Because the head is full of cameras, the robot's computer system -- 38 PowerPC processors -- is housed inside the torso. Or as NASA puts it, "R2 thinks with its stomach -- literally."

This version of the robot has no legs or wheels. It's 101 cm [3 feet, 4 inches] tall from waist to head. “From the waist up he looks quite like me,” Radford jokes. “Big biceps and very muscular.”

Working with GM over the past three years, NASA originally designed Robonaut 2 as a prototype to be used on Earth so engineers could understand what would be needed to eventually send a robot to space. But when mission managers saw the robot early this year, they were so impressed they decided to send it to the space station. The Robonaut team was given six months to get the robot ready.

Here's an overview of the project:

In a second phase of the Robonaut project, at a date yet to be determined, NASA will make the unit mobile using a leg-type system, giving it the ability to move around inside the ISS. The third phase will feature a robot that performs missions outside the space station.

The Robonaut is also part of Project M, an effort to put a humanoid robot on the moon in 1,000 days -- beating Japan's proposed goal of 2015. Even if the effort, which Radford describes as "a quick, hot burn project," is not successful, it will likely produce useful technologies.

The future of space exploration will likely include advanced telepresence robots with expert operators controlling them safely from Earth. But Radford sees the robots being used in cooperation with humans, not replacing them completely. "They are just another tool in a human explorer's toolbox," he says.

“This is the dawning of a different era,” he concludes. “Ubiquitous robots that are in and around us, doing everything. We represent the beginning of that.”

As for Robonaut, it's currently cocooned inside a foam-cushioned aluminum frame called the Structural Launch Enclosure to Effectively Protect Robonaut, or SLEEPR for short, where it awaits its rocket ride to space. Bon voyage, Robonaut!

Watch Robonaut getting ready for the big trip:

Images and videos: NASA

Robot Companions to Befriend Sick Kids at European Hospital

aliz-e project

If you've ever spent time with an interactive robot, you know it's a novel experience at first -- but the thrill fades over time. Each time the robot meets you, it runs through the same routine of canned responses, which gets old pretty quickly.

But what if robots were able to remember who you are and retain information associated with you? It might make them better companions, and a step up from passive toys.

The ALIZ-E project is a European Commission-funded venture aimed at producing robots that can forge "longer-term constructive bonds" with people in a hospital setting. Grouping academic and commercial partners, it kicked off six months ago and will run for four and a half years.

It's essentially an artificial intelligence effort. Aldebaran Robotics' Nao humanoids, used in RoboCup and research centers around the world, will be interacting with and helping kids at the Hospital San Raffaele in Milan. The children are about 8 years old, and have recently been diagnosed with diabetes.

The patients will be learning about diabetes and how to manage it during their hospitalization. The researchers, meanwhile, will see whether these robot caregivers can be as effective as pet therapy in hospitals, which can be expensive and pose hygiene issues. Other groups are exploring similar ideas using a robot seal called Paro.

"We hope the robots can make their stay at the hospital more pleasant and also take over some of the tasks of the medical staff, such as the educational aspects of the [diabetes] program," says ALIZ-E coordinator Tony Belpaeme of the University of Plymouth, in the U.K. "In every aspect of the project's development we are trying to push boundaries."

The researchers are programming 20 Nao robots to work one-on-one with children and caregivers, and are focusing on cognitive abilities including natural language processing, child speech recognition, facial expression recognition, and episodic and semantic memory. The children will be evaluated on their knowledge of diabetes from training with Nao versus a nurse. 

"If you use a chatbot, the chatbot really doesn’t understand what you are talking about," says Belpaeme. "Here, we want the robot, upon hearing a proper name, say Fabio, to recall interaction history, images, and memories in general pertaining to Fabio."

The Nao robots wouldn't be able to store much information, so the researchers will use a cloud-based computer system for the heavy data processing. The robots will rely on this cloud infrastructure to process speech and non-verbal inputs, store a distributed model of long-term memory, and use that information to produce a more lifelike interaction.
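The project's actual stack is described below, but the split sketched here -- a lightweight robot client that leans on a remote store for per-child memories -- can be illustrated in a few lines of Python. All class and method names are hypothetical, invented for this sketch; this is not the ALIZ-E API.

```python
# Hypothetical sketch of a robot offloading long-term memory to a
# remote store, as in the architecture described above. Every name
# here is illustrative, not the actual ALIZ-E software.

class CloudMemory:
    """Server-side episodic memory, keyed by the child's name."""
    def __init__(self):
        self._episodes = {}

    def record(self, child, event):
        self._episodes.setdefault(child, []).append(event)

    def recall(self, child):
        # The robot asks for everything it knows about this child.
        return self._episodes.get(child, [])


class NaoSession:
    """Robot-side logic: a thin client; heavy lifting stays remote."""
    def __init__(self, memory):
        self.memory = memory

    def greet(self, child):
        history = self.memory.recall(child)
        if history:
            return f"Hi {child}! Last time we {history[-1]}."
        self.memory.record(child, "met for the first time")
        return f"Nice to meet you, {child}!"


cloud = CloudMemory()
robot = NaoSession(cloud)
print(robot.greet("Fabio"))  # first meeting: nothing to recall yet
cloud.record("Fabio", "practiced carb counting")
print(robot.greet("Fabio"))  # now the greeting references the last episode
```

The point of the design is that the robot itself stores nothing between sessions; swap the in-process `CloudMemory` for a networked service and the robot-side code is unchanged.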

The computing architecture for this will be GostaiNet from partner firm Gostai, a French start-up known for its open-source Urbi robotics platform. Urbi has been used in everything from Aibo robot dogs to Segway RMP platforms to Lego Mindstorms NXT kits.

The main innovation of Urbi is its script language, says Gostai CEO Jean-Christophe Baillie. "This language makes it much easier to handle complex parallel tasks and also to handle and process events, which are two essential parts of any robotic program," he says. "Urbiscript is strongly coupled to C++ and other traditional languages, so it's easy to integrate it within a project."

Could more realistic human-robot interactions develop into relationships? Will children become attached to Nao? We'll see how the sick kids take to the robot's bedside manner. In the future, they might look back at Nao as the first of many robot doctors.

Boston Dynamics' Marc Raibert On The Future Of BigDog

Back in May (I think, although the video wasn’t posted until now), Marc Raibert, founder of Boston Dynamics, gave a talk at Stanford on the current progress and future plans for BigDog. It’s over an hour long, but (as you might expect) the juicy bits about future plans come toward the end. If you don’t have an hour or so, I’d recommend starting at about the 46:50 mark, where you get to see some video of a quieter BigDog with an electric motor, among other things. If you don’t have time for even that, here’s a summary of what I thought were the most interesting bits:

-Marc Raibert says he’s inspired by mountain goats, which is pretty daunting when you’re designing a quadrupedal robot.

-Robots vs. mules: mules are better, except: they can only carry about a third of their body weight, they don’t take direction well, and they’re not easy to warehouse.

-That video of BigDog slipping on ice and recovering? It wasn’t programmed specifically to deal with slippery surfaces, and the team didn’t even know it was icy out. They were just shooting some other test video when the robot happened to cross a patch of ice, and it recovered using its standard dynamic balance programming.

-BigDog is able to run (actually run, including a stride phase without any ground contact) at a little bit over 6 mph, although they’re still working on its balance while running.

-Boston Dynamics has two working BigDogs, both of which you can see in action at 30:40 (this is new video). Raibert wants to get 7 or 8 of them together to go dog sledding (!).

-BigDog can’t yet get up on its own, but they’re working on it… The next generation will have the hip (or shoulder) joints positioned outside of the body and higher up, with an increased range of motion that will allow the robot to get its legs under its body, which the current generation can’t do.

-Kinematically, the orientation of BigDog’s legs (knee front or knee back) just doesn’t matter. They’re able to take the legs off and swap them around.

-The noise BigDog makes is “much worse” in person. The videos “don’t do it justice.”

-Electric motor BigDog still sounds like bees (although they’ll be able to mute it completely), only runs for 10 minutes, and is slightly underpowered… They’re contemplating a “hybrid” version, where you can switch to silent operation for 10 minutes and then back to gas.

-BigDog can follow people autonomously using a scanning LIDAR system, engineers say it’s “really scary to have the robot following you going down hills” (ha!).

-There’s no redundancy in the walking system, “BigDog goes down when you shoot off a leg.”

-The biggest challenge so far has been making the system able to run in the heat (due to the engine).

There’s also a little bit of an update on PETMAN; unfortunately, the outtakes weren’t approved for webcast (neither, for that matter, were the BigDog outtakes. FROWNY FACE.). But you do get to see a CAD rendering of PETMAN:

Marc says PETMAN freaks him out a little bit because of the whole Uncanny Valley thing, but he’s trying to be mindful of that while designing it.

At the end, Marc Raibert even gives a shout-out to that brilliant BigDog parody video… He says that his new metric is how many views his BigDog YouTube videos (and their parodies) receive.

[ Boston Dynamics BigDog ]
[ Stanford @ YouTube ]

Cyborg Fly Pilots Robot Through Obstacle Course

Swiss researchers have used a fruit fly to steer a mobile robot through an obstacle course in the lab. They call it the Cyborg Fly.

Chauncey Graetzel and colleagues at ETH Zurich's Institute of Robotics and Intelligent Systems started by building a miniature IMAX movie theater for their fly. Inside, they glued the insect facing an LED screen that flashed different patterns. These patterns visually stimulated the fly to beat its left or right wing faster or slower, and a vision system translated the wing motion into commands to steer the robot in real time.

The fly, in other words, believed it was airborne when in reality it was fixed to a tether ("A" in the image below), watching LEDs blink ("B") while remote controlling a robot ("C") from a virtual-reality simulation arena ("D"). Is this The Matrix, or Avatar, for flies?

cyborg fly

Graetzel tells me the goal of the project was to study low-level flight control in insects, which could help design better, bio-inspired robots. "Our goal was not to replace human drivers with flies," he quips.

Watch:

The key component in their setup was a high-speed computer vision system that captured the beating of the fly's wings. It extracted parameters such as wing beat frequency, amplitude, position, and phase. This data, in turn, was used to drive the mobile robot. Closing the loop, the robot carried cameras and proximity sensors; an algorithm transformed this data stream into the light patterns displayed on the LED screen.
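The paper doesn't publish its control code, but the loop just described -- wing-beat measurements in, differential wheel speeds out -- can be sketched in Python. The function name, gains, and sign convention below are all assumptions made for illustration, not the researchers' actual controller.

```python
# Hypothetical sketch of the closed loop described above: the vision
# system measures each wing's beat amplitude, and their difference
# steers a differential-drive robot. Gains and names are illustrative.

BASE_SPEED = 0.2    # m/s, nominal forward speed of the robot
TURN_GAIN = 0.005   # m/s per degree of wing-beat asymmetry

def wheel_speeds(left_amp_deg, right_amp_deg):
    """Map left/right wing-beat amplitudes (degrees) to wheel speeds.

    A fly yaws away from its harder-beating wing, so a larger
    right-wing amplitude turns the robot left: the right wheel
    speeds up while the left wheel slows down.
    """
    asymmetry = right_amp_deg - left_amp_deg
    return (BASE_SPEED - TURN_GAIN * asymmetry,
            BASE_SPEED + TURN_GAIN * asymmetry)

# Symmetric beating drives straight; a stronger right wing turns left.
print(wheel_speeds(70, 70))
print(wheel_speeds(60, 70))
```

In the real system this mapping would run at the vision system's frame rate, with the robot's own camera and proximity data closing the loop back onto the LED display.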

In a paper in the July 2010 issue of IEEE Transactions on Automation Science and Engineering, they describe the vision system's latest version. It uses a camera that focuses on a small subset of pixels of interest (the part of the fly's wings responsible for most lift, for instance) and a predictive algorithm that constantly reevaluates and selects this subset. The researchers report that their system can sample the wings at 7 kilohertz -- several times as fast as other tracking techniques.
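The idea behind that predictive pixel selection can be sketched with a simple constant-velocity predictor -- read only a small window centered on where the wing edge is expected to appear, rather than the full frame. The names, the window size, and the prediction model here are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of predictive pixel-subset tracking, in the
# spirit of the system described above: instead of reading the whole
# frame, the camera samples a small window centered on a prediction
# of where the wing edge will be next.

ROI_HALF_WIDTH = 8  # pixels read on either side of the prediction

def predict_next(prev_pos, curr_pos):
    """Constant-velocity prediction of the next edge position."""
    return curr_pos + (curr_pos - prev_pos)

def next_roi(prev_pos, curr_pos):
    """Pixel window (lo, hi) to sample on the next frame."""
    center = predict_next(prev_pos, curr_pos)
    return (center - ROI_HALF_WIDTH, center + ROI_HALF_WIDTH)

# Edge seen at pixel 100, then 104: predict 108, so the next frame
# reads only the window around 108 instead of the full sensor row.
print(next_roi(100, 104))
```

Reading tens of pixels instead of a full frame is what makes kilohertz-rate sampling feasible on ordinary hardware; the trade-off is that a bad prediction loses the target, which is why the subset must be reevaluated every frame.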

"As autonomous robots get smaller, their size and speed approach that of the biological counterparts from which they are often inspired," they write in the paper, adding that their technique could "be relevant to the tracking of micro and nano robots, where high relative velocities make them hard to follow and where robust visual position feedback is crucial for sensing and control."

The main Cyborg Fly experiments took place about two years ago as part of a research effort led by professor Steven Fry at the Fly Group at ETH/University of Zurich. That work was a collaboration with ETH's Institute of Robotics and Intelligent Systems, directed by professor Bradley Nelson. Tufts University's Center for Engineering Education and Outreach, in Boston, directed by mechanical engineering professor Chris Rogers, was also involved.

The Cyborg Fly is not the only "flight simulator" for bugs, and other research groups have used insects to control robots. Still, the ETH project stands out because of its high-speed vision component. This system could be useful not only in biology research -- to study insect flight and track fast movements of appendages or the body -- but also in industrial applications, such as monitoring a production line or controlling fast manipulators.

Graetzel says they tested two different "movie theater" configurations. One used two parallel LED panels, with the fly in the middle. They later upgraded it to a cylindrical LED panel. They also used two types of robot. The first was an e-puck, a small wheeled robot designed for use in research projects. Later the researchers built a robot using Lego NXT.

The Cyborg Fly project was a finalist in the robotics category at this year's Graphical System Design Achievement Awards, an event organized by National Instruments, in Austin, Tex.

Graetzel has since received his PhD degree and moved on to other things -- that do not involve flies.

Another video:

Images and videos: ETH Zurich/Institute of Robotics and Intelligent Systems

