Automaton

Review: iRobot Roomba 560

The iRobot Roomba is not a new product. Since 2002, it has been (more or less) the only robotic vacuum available to consumers in the US. iRobot has been continually improving the Roomba, however, and the Roomba 560 is one of the latest and most sophisticated models. Now that there’s some new competition on the horizon, it’s a good time to take an updated look at the Roomba and what makes it a reliable and effective autonomous vacuum. We’ll have a review of the aforementioned competition (the Neato XV-11) up tomorrow so that you can compare the two, but for today, we have a review of the Roomba 560.

If you’re not familiar with the Roomba, here’s the deal: it’s a robotic vacuum cleaner that can clean your floors all by itself. All you have to do is tell it to start cleaning, and it’ll go clean, avoiding obstacles and getting around furniture and ultimately returning to its home base to recharge itself. There’s a lot more to it than that, of course… Lots more, after the jump.

The particular Roomba that we’re reviewing is a 560. The 560 is a fifth generation Roomba, which is a significant upgrade from the earlier 400 series. It’s generally about the same size and shape, with a 13″ diameter and a weight of about 8 pounds. It’s also quite stylish, with a slick black and silver color scheme that doesn’t make you want to hide it in a closet like a conventional vacuum. This is good, because having your Roomba in a closet pretty much defeats the entire purpose of a robotic vacuum, especially one with on-board scheduling like the 560 has. But I’m getting ahead of myself.

The Roomba is able to clean autonomously thanks to its suite of sensors. Proximity sensors on the front of the robot work with a physical bump sensor to help the robot avoid walls and maneuver around obstacles and furniture. Drop sensors underneath keep the robot from going over stairs or ledges. On top, the Roomba has an infrared sensor that allows it to find its dock and use virtual walls (more on those later).

All of these sensors provide a limited amount of information about the Roomba’s environment and path, but they don’t directly tell the Roomba where it is in a given room. Instead, the robot relies on an algorithm to tell it where to go next, and cleans in a variable pattern that ends up covering most areas of a room between three and five times. While this pattern looks totally random, it’s not, and the patterns are actually derived from MIT research on the efficient coverage behaviors of foraging insects.

To do the actual cleaning, the Roomba combines a vacuum system with two counter-rotating brushes. The brushes help pick up all the big things (with the bristle brush and rubber beater brush working together like a broom and dustpan), while the vacuum itself takes care of smaller particles and dust. All of this stuff ends up in a removable bin at the rear of the Roomba, which incorporates a replaceable air filter. The entire vacuum module (the green piece in the above picture) is flexible and can move up and down, which helps the Roomba to adapt to floors and different lengths of carpet. Since the Roomba is fetchingly round, it does have some issues getting into corners. On the right side near the front is a spinning brush that’s designed to mitigate that problem to some extent by sweeping dirt and stuff out of corners and back under the Roomba.

While the Roomba is entirely capable of cleaning by itself, it does take a little bit of work to “pre-clean” your floor for it to be most effective. The robot can’t lift things, of course, so if you have a bunch of stuff all over the floor, you’re not going to get the best cleaning. The Roomba will nudge things a bit, but it tends to get caught on stuff like loose clothing and may become stuck. If that happens, the robot will try to free itself, and if it can’t it’ll stop and beep at you to come free it. Not a big deal, but it does keep it from autonomously completing its cleaning, so it’s best to keep things tidy. Supposedly, the 500 series of Roombas are good at not getting themselves entangled in cords and rug fringes and other stringy things. This has not been my experience. I guess it’s partially my fault for having ten thousand power cords strewn all over the place, but the Roomba likes to grab them and then run away, unplugging things as it does. It also manages to (occasionally) rip out pieces of fringe from a rug. My guess is that the anti-tangle system works if the Roomba considers itself tangled, but it has a tendency to tug pretty hard before it reaches that point.

Overall, I’ve been very impressed with how well the Roomba cleans. I bought my parents a 530, which managed to fill its dirt bin in one run around our living room the day after the carpet had been professionally steam cleaned. The bristle brushes do a great job of picking up stuff like pet hair, and for beating dirt out of carpet. The Roomba is least effective in corners and around objects, where it can’t always get its brushes close enough to the edges of things. I ran the 560 every other day or so for about a week, and in each case, the floor (which is half carpet, half hardwood) was noticeably cleaner when it was finished, and the Roomba’s dirt bin was nearly full. It’s not a substitute for a conventional vacuum, not completely, but it does a pretty darn good job for day-to-day cleaning.

The 500 series Roombas also include a ‘Dirt Detect’ feature, which gives them the ability to sense where there’s a particularly dirty spot and then spend more time there (in a tight spiral). Incidentally, if there’s just one spot you want cleaned, you can set the Roomba down manually and have it ‘Spot Clean’ just that bit instead of the whole room.

To clean an average sized room takes the Roomba 560 about 45 minutes. This seems like an awfully long time, especially if you watch it at work, which you totally will, because it’s adorable. It can also be frustrating at times, since you start to wonder why it’s covered that particular spot 37 times but still hasn’t managed to catch the rogue dust bunny over by the couch. Remember, while the Roomba may look like it’s just bumbling around randomly, it’s actually following an algorithm designed to cover all areas of a room multiple times. If there are places you don’t want it to go, you can set up little round towers that project a ‘virtual wall’ of infrared light that the Roomba won’t cross. The 560 is able to clean up to four rooms before it needs to head back to its home base to recharge, which it does all by itself when it considers itself finished or when it’s low on battery power.

While the Roomba certainly cleans effectively, it tends to make quite a mess of itself while doing so, which calls for weekly maintenance (or more often, depending on how frequently you run it). I’m not talking about just emptying the dustbin… Dust and hair get trapped in and around the brushes, and even inside the brush bearings themselves, necessitating partial disassembly of the cleaning compartment. It’s very easy to do this, but it’s still a chore, and the compartment is often extraordinarily tangled and dirty, requiring patience (and scissors) to clean out.

Really, it’s surprising how well the Roomba is able to clean on its own. iRobot has gotten the design to the point where with a little bit of forethought, you can just leave the robot cleaning and come back a few hours later and it’ll be back on its base charging. The 560 gives you some additional options (if you trust it) to schedule cleaning for when you’re not around. This is all done on the robot itself using a few buttons and an LCD… You can set different cleaning times for each day, and the robot will wake itself up, clean your room(s), and then go back to its base. You still have to remember to empty the bin and clean it and stuff, but daily vacuuming doesn’t get much simpler than that.

Although we reviewed the Roomba 560, iRobot makes a variety of different models with slightly different capabilities (and different costs). The base 500 model is the 510, for $280, but you don’t want that one ’cause it doesn’t come with a charging dock. As you go up through the different (and increasingly expensive) models, you gain battery life, on-board scheduling, some accessories, and (eventually) the ability to use Lighthouses, which are special Virtual Wall units that help the Roomba navigate around multiple rooms. The 560 that we reviewed here costs $350 and can’t use Lighthouses, which is funny, since my other Roomba (a 535 model that cost $250 and appears to be discontinued) can. Anyway, when you’re looking at buying a Roomba, it’s important to put some thought into how you’re going to use it. You definitely want a 500 series with a self-charging dock, but as far as other features go, consider how many rooms you’d like it to clean, whether you’d like it to move from room to room on its own, and whether you’re going to start the Roomba cleaning yourself or you’d like it to start by itself (when you’re not home, for example). It’s important to remember, though, that the fundamental cleaning technology is basically the same. You can pay a bit more for some extra features, but the robot isn’t going to navigate any differently or pick up any extra dirt.

The 560 is a fifth generation Roomba. As such, it benefits from a half decade worth of improvements that iRobot has implemented based on customer feedback and testing. It’s a practical and polished robotic vacuum that works in your home and can actually make your life easier… Or at least, make your floors cleaner. You can pick one up at iRobot.com, but I might recommend that you buy it from a retail store like Best Buy so that you can try it out and take it back if it’s not for you. You won’t take it back, though… Once you let it run around your house a little bit, you’ll be sold. It’s awesome.

For more on how the Roomba works, check out our interview with Nancy Dussault Smith, Vice President of Marketing Communications at iRobot.

[ iRobot Roomba 560 ]

Kokoro's I-Fairy Robot Conducts Wedding in Japan


Photo: Mr. Moriyama/Node

The groom is a robotics researcher. The bride works at a robotics firm. Robots brought them together. So when it came time to plan their wedding, the choice only seemed natural: A robot would conduct the ceremony.

The wedding took place today in Tokyo, according to this AP report. The groom was Tomohiro Shibata, a professor of robotics at the Nara Institute of Science and Technology in central Japan; the bride was Satoko Inoue, who works at famed robotics firm Kokoro.

Leading the ceremony was a little humanoid robot called I-Fairy, with a high-pitched voice and flashing eyes. Kokoro, which unveiled the robot earlier this year, designed the I-Fairy as a robot receptionist and entertainer. It sells for 6.3 million yen (US $68,000).

The robot has a humanoid body in a sitting posture and, as the company puts it, its appearance was "based on the image of a lovely fairy." It can talk, gesture with its arms, and detect the presence of a person, according to this story in the Japanese blog Node.

Kokoro says this was the first time a robot celebrated a wedding.

At one point the robot told the groom: "Please lift the bride's veil."

Then the couple kissed.

Watch:

Thanks, Dr. Kumagai!

Read also:

Thomas and Janet: first kissing humanoid robots
Mon, August 24, 2009

Blog Post: Developed by the National Taiwan University of Science and Technology, the theatrical robots performed the first robot kiss during a performance of Phantom of the Opera.

Geminoid F: Hiroshi Ishiguro Unveils New Smiling Female Android
Sat, April 03, 2010

Blog Post: Geminoid F, a copy of a woman in her 20s with long dark hair, exhibits facial expressions more naturally than previous androids

Hiroshi Ishiguro: The Man Who Made a Copy of Himself
April 2010

Article: A Japanese roboticist is building androids to understand humans--starting with himself

Robots: 50 Years of Robotics (Part 2)

Two weeks ago the Robots podcast celebrated its 50th episode and has now released the second part of its comprehensive "50th Special", summarizing the most remarkable developments in robotics over the last 50 years and experts' predictions for the next half-century. In part two, Jean-Christophe Zufferey discusses flying robots, Dan Kara the robotics market, Kristinn Thórisson Artificial Intelligence, Andrea Thomaz robot interactions, Terry Fong space robotics and Richard Jones nano robots. Read more on the Robots website or directly tune in!

Amazing Robotic Sculpture Balances Itself on One Corner

Is it a sculpture? Is it a robot? The Balancing Cube is both.

The Balancing Cube is a robotic sculpture that can stand on any of its corners. Pendulum-like modules, located on the inner faces of the cube, constantly adjust their positions to shift the structure's center of gravity and keep it balanced. The cube remains stable even if you poke it. But not too hard!

Created by Raffaello D'Andrea, Sebastian Trimpe, and Matt Donovan at ETH Zurich, the contraption is half art and half technology. They got their inspiration from a Cirque du Soleil performance in which acrobats use their bodies to support each other and balance together in seemingly impossible positions.

D'Andrea -- whose robotic creations include a self-assembly chair, a people-seeking table, and an army of warehouse bots -- couldn't help but wonder: Could a machine perform a similar balancing act, with individual elements coordinating their movements to keep the system stable?

See the result in the video below. I love the part when Trimpe pushes the cube slightly and its balancing mechanisms respond, the motors screeching as if he were teasing a living creature.

So how does it work?

The Balancing Cube is an example of a distributed control platform. Each module [see illustration below] is a self-contained unit with a computer, battery, motor, and inertial sensors (a tri-axis accelerometer and tri-axis rate gyro). So instead of relying on a centralized controller, the modules share their inertial data through a bus network. Then each module combines its own data with the shared data to determine the orientation of the cube -- and command its motor accordingly.

In other words, each module makes its own computations and moves its own motor, but as a result the combined motion keeps the system stable -- just like the Cirque du Soleil acrobats.
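
That data-sharing scheme can be illustrated with a toy example. Everything here is an assumption for illustration -- the averaging fusion rule, the gain, and the sensor readings are made up and are not ETH's actual controller -- but it captures the key point: once each module folds the shared measurements into its own, all the modules act on the same estimate and their motions stay coordinated without a central controller.

```python
# Toy sketch of distributed balancing: each module fuses its own tilt
# reading with readings shared on a common bus, then runs a local
# proportional controller on the fused estimate.

def fuse(own, shared):
    """Average own noisy tilt measurement with everyone else's."""
    return (own + sum(shared)) / (1 + len(shared))

def module_command(own_tilt, shared_tilts, gain=2.0):
    est = fuse(own_tilt, shared_tilts)
    return -gain * est  # drive the pendulum to counteract the tilt

# Three modules with noisy readings of the same true tilt (~0.10 rad):
readings = [0.12, 0.09, 0.10]
commands = [module_command(r, readings[:i] + readings[i + 1:])
            for i, r in enumerate(readings)]
print(commands)  # all three modules converge on the same command
```

Note how the three commands come out identical even though the raw readings disagree; that agreement is what the shared bus buys you.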

D'Andrea and Trimpe discussed the cube's control scheme last week at the IEEE International Conference on Robotics and Automation, in Anchorage, Alaska.

Their control algorithm uses inertial data to estimate how the cube is oriented relative to gravity and how fast it is moving. But this estimate is independent of the rigid body dynamics of the cube; that is, the algorithm doesn't require a dynamic model of the cube, and the method works both in static conditions and when the structure is in motion.

The cube, made of aluminum, is 1.2 meters on a side and stands about 2 m tall. It may look like a star, but that's because its faces consist of X-shaped elements. Its cubic shape becomes apparent if you imagine lines connecting its corners [see image above]. (Think of Isamu Noguchi's Red Cube in New York.)

The goal of the project was more than just building a high-tech piece of art. The researchers wanted to investigate the advantages and limits of distributed control. In particular, they knew that the balancing mechanisms didn't need to share all their sensor data, but they wanted to find out which pieces they did need to share.

In terms of hardware, they focused on a modular design, trying to create a balancing system that would consist entirely of self-contained mechanisms. Indeed, you can use their mechanisms to balance not only a cube but also other shapes.

A final but critical design requirement: the hardware had to be robust enough to withstand repeated falls.

Images and video: Raffaello D'Andrea and Sebastian Trimpe/Institute for Dynamic Systems and Control - ETH Zurich

Do Robots Take People's Jobs?

Technology taking jobs is a notion that probably dates back to the invention of the wheel. After all, it took four bearers to carry the emperor and only one to pull a chariot!

The problem is that most people stop thinking after the first domino falls instead of following the chain of events further on. Let's continue the chain: Once the wheel is invented, more people can travel comfortably, goods can be carried farther, better roads are built and commerce thrives. A few bearers of the ruling class have to find new work, the remainder of the world benefits and thousands of jobs are created.

Let's fast-forward through history and take a look at the tractor. Now it happens that my grandfather bred workhorses. The family oral history has it that, upon the introduction of Henry Ford's tractor in the 1920s, the price of workhorses dropped 10 percent per week. My grandfather lost his farm, moved his family to Florida where my father at age 14 had the only job in this family of six, delivering newspapers. However, the advent of the tractor and modern farming techniques transformed the United States from a country where 40 percent of the population needed to farm to one in which 2 percent of the population could feed the other 98 percent. This freed a larger proportion of young adults to attend college and start the computer revolution that has created millions of jobs in the U.S. and worldwide.

Did people lose jobs to computers? Yes, a number of secretaries had to upgrade their skills, and executives who refused to learn to type had a tough time of it, just to cite two examples. But these jobs were replaced by tens of thousands of high-paying software engineering positions, plus computer installers, computer operators, data storage firms and more.

Simplistic thinking visualizes a fixed pool of jobs, with new technology taking some away. In reality, new technologies create new opportunities for many more people, especially our children. In the case of robots, the direct new jobs involve designing, building, programming, integrating, installing, servicing, maintaining, managing and refining the machines. Robots will enable humans to work in hostile environments where they could never work before: for instance, farming the ocean floor, mining super subterranean excavations, manufacturing in space and in Antarctica all become realistic endeavors. Building on nano and cosmic scales begins to become practicable. The limited imaginations that believe jobs will stay the same, except that robots will do them all, should take a look around them.

If it were true that technology makes people poorer, would we not find evidence of that all around us? Technology-poor countries would have full employment and technology-rich countries would have the lowest GDP per person. Instead, in technology-rich nations, so-called "poor" people often own cars and televisions, have a roof over their heads and food on their tables.

Of course, anyone can argue that material wealth does not make for spiritual wealth; that's a matter for philosophers to wrestle with. And certainly there is room for improving systems for helping those in transition between jobs. But finding evidence that technological advance decreases material wealth for the general population is very difficult. Technology raises the floor for all; it is the great uplifter.

Jeanne Dietsch is co-founder and CEO of MobileRobots, based in Amherst, N.H.

Autonomous Car Learns To Powerslide Into Parking Spot

The Stanford Racing Team may have placed second at the DARPA Urban Challenge back in 2007, but they haven't stopped pushing the limits of autonomous cars. At ICRA last week, Zico Kolter and his collaborators showed video of one of the most extreme stunts a robotic car has ever pulled off.

They taught the car to accelerate in reverse up to 25 miles per hour, then suddenly hit the brakes, turn the wheel, and start a 180-degree skid--ending up right in a desired parking spot. It's not just a cool stunt--this research should give autonomous cars greater flexibility to deal with unexpected situations.

 

Reliably executing such a stunt isn't easy. "Junior" (as the car is known) usually operates under closed-loop control, where real-time sensor data is used to continually adjust the controls. This works well for driving the car in a straight line, where the physics of the car's motion are pretty straightforward to model. Unfortunately, the dynamics model tends to break down as the car enters the complex sliding turn. In the first clip of the video below, you can see Junior miss the mark under this type of control.

As an alternate approach, the team "taught" Junior the stunt through a basic demonstration. The researchers found that even though the sliding is complex to model, it's a highly deterministic motion--by just blindly repeating the control inputs from the demonstration, the car usually ended up in the same place. But as you can see in the second attempt in the video, this open-loop control method also has a weakness: errors in the straight approach go uncorrected and cause big differences in Junior's final position.


So to get the best result, the team combined approaches: keeping the car under closed-loop control during the well-modeled approach section, and then letting it transition to open-loop control for the final slide.

Most impressively, the Stanford team allowed the car to determine for itself which approach was better and when to smoothly switch between the two. The result (the third attempt in the video) lands the car right on target. For testing purposes the team decided to use cones rather than actual cars. Just in case.
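
The handoff between the two control modes can be sketched in a few lines. The function names, the speed threshold, and the feedback gain below are all illustrative assumptions, not Stanford's actual code -- the point is simply the structure: feedback steering during the well-modeled approach, then a blind replay of the demonstrated inputs once the slide begins.

```python
# Hedged sketch of a closed-loop/open-loop handoff, loosely modeled on
# the approach described above. All names and numbers are made up.

def closed_loop_steer(heading_error, k=0.8):
    """Feedback steering for the straight reverse approach."""
    return -k * heading_error

def hybrid_controller(state, demo_inputs, slide_trigger_speed=11.0):
    """Use feedback while approaching; once the car is moving fast
    enough in reverse, replay the demonstrated open-loop inputs."""
    if abs(state["speed"]) < slide_trigger_speed:
        return [("steer", closed_loop_steer(state["heading_error"]))]
    return demo_inputs  # recorded (control, value) pairs from the demo

demo = [("brake", 1.0), ("steer", 0.9)]
print(hybrid_controller({"speed": -5.0, "heading_error": 0.05}, demo))
print(hybrid_controller({"speed": -12.0, "heading_error": 0.05}, demo))
```

The first call is still in the approach phase, so the controller computes a feedback steering correction; the second has crossed the trigger speed, so the recorded demonstration takes over.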

Simple Vibrating Bot Climbs Tubes With Ease

I have to admit that I'm a sucker for simple solutions to difficult problems. At ICRA this week, one of the cleverest new designs (and winner of the award for best video) was for a small tube climbing robot. The Biorobotics lab and Manipulation lab at Carnegie Mellon University have been working for several years on dynamic climbing bots that can climb between walls without any special attachment mechanism. But they wanted to come up with a smaller design that could make it up three-dimensional tubes.

 

The result is this little device. Its simple motor turns an unbalanced mass at a uniform velocity. As the mass swings around, it causes the robot to bounce back and forth between the tube walls. Two rubber o-rings let the researchers specify the exact contact points and increase friction with the walls.

This isn't the first tube-climbing, vibrating robot, but it has some distinct advantages. Earlier designs relied on fibers or bristles to create anisotropic friction with the walls, so that vibration caused motion in the direction of lowest friction. The problem with these designs comes when you need to remove the robot--now you're forced to work against the maximum friction.

What's most impressive about Carnegie Mellon's new bot is its speed, versatility, and payload capability. In the video, you can see that it travels up to 20 body-lengths per second and has a payload capacity of roughly 5x its weight. The robot can even climb different sized tubes, although at different rates.

The researchers say they weren't application driven, but it's not hard to imagine such a simple device coming in handy for navigating tubing quickly.

(Video courtesy of Amir Degani, Siyuan Feng, Howie Choset, and Matthew T. Mason)

Robots With Knives: A Study of Soft-Tissue Injury in Robotics

UPDATE: It turns out that the courageous individual in the video is Sami Haddadin, the study's lead author, who was clearly confident in the collision-detection system he devised. I incorporated additional details he gave me.

The idea of a robot in the kitchen cooking us meals sounds great. Just watch out when the automaton is handling the knives!

To find out what would happen if a robot holding a sharp tool accidentally struck a person, German researchers set out to perform a series of stabbing, puncturing, and cutting experiments.

They fitted an articulated robotic arm with various tools (scalpel, kitchen knife, scissors, steak knife, screwdriver) and programmed it to execute different striking maneuvers. They used a block of silicone, a pig's leg, and at one point a human volunteer's bare arm as the, uh, test surface.

The researchers -- Sami Haddadin, Alin Albu-Schaffer, and Gerd Hirzinger from the Institute of Robotics and Mechatronics, part of DLR, the German aerospace center, in Wessling, Germany -- presented their results today at the IEEE International Conference on Robotics and Automation, in Anchorage, Alaska.

The main goal of the study was to understand the biomechanics of soft-tissue injury caused by a knife-wielding robot. But the researchers also wanted to design and test a collision-detection system that could prevent or at least minimize injury. Apparently the system worked so well that in some cases the researchers were willing to try it on human subjects.

We applaud the guy [editor's note: see update above] at the end of the video who put his body on the line in the name of robotic science.

Warning: Some people may consider content graphic or upsetting.

The researchers acknowledge that there are huge reservations about equipping robots with sharp tools in human environments. It won't happen any time soon. (Sorry, you'll still have to chop that cucumber salad yourself). But they argue that only by getting more data can roboticists build safer robots.

The experiments involved the DLR Lightweight Robot III, or LWRIII, a 7 degrees-of-freedom robot manipulator with a 1.1 meter reach and moderately flexible joints. The robot, which weighs 14 kilograms, is designed for direct physical interaction and cooperation with humans.

The tools the researchers tested included [photo, right]: (1) scalpel; (2) kitchen knife; (3) scissors; (4) steak knife; (5) screwdriver.

The researchers performed two types of experiments: stabbing and cutting, testing the different tools striking at various speeds, with and without the collision-detection system active.

In most cases, the contact resulted in deep cuts and punctures, with potentially lethal consequences. But remarkably, the collision-detection system was able to reduce the depth of the cuts and in a few cases even prevent penetration altogether.

Although the robotic arm has a force-torque sensor on its wrist, this sensor is not used in the collision-detection system; it only serves as a measurement reference in the experiment. "The collision detection and reaction," Haddadin told me, "is based on a very good dynamics model of the robot and the fact that, unlike other robots, we have torque sensors and position sensors in every joint."

With the dynamics model (which includes rigid body dynamics, joint elasticity, and motor model) and the sensor measurements, the robot can detect a collision nearly instantaneously. (The control system relies on a "nonlinear disturbance observer.")

"This method does not require any additional external sensors and only relies on the internal capabilities of the robot," says Haddadin.
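
The core idea -- compare the torque the model predicts with the torque actually measured, and flag a collision when they diverge -- can be sketched for a single joint. This is a deliberate simplification: DLR's real system uses a nonlinear disturbance observer over a full flexible-joint model, and the toy inertia, damping, and threshold values below are invented for illustration.

```python
# Illustrative residual-based collision check for one joint.
# Model parameters and threshold are assumptions, not DLR's values.

def predicted_torque(q, qd, qdd, inertia=1.0, damping=0.1, gravity_term=0.0):
    """Torque the dynamics model expects for this motion (1-joint toy model)."""
    return inertia * qdd + damping * qd + gravity_term

def collision_detected(measured_torque, q, qd, qdd, threshold=0.5):
    """Flag a collision when measured torque deviates from the model."""
    residual = measured_torque - predicted_torque(q, qd, qdd)
    return abs(residual) > threshold

# Free motion: the measured joint torque matches the model, no alarm.
print(collision_detected(0.35, q=0.2, qd=0.5, qdd=0.3))
# Impact: the measured torque far exceeds the model's prediction.
print(collision_detected(2.0, q=0.2, qd=0.5, qdd=0.3))
```

Because the check needs only the joint sensors and the dynamics model, it runs at every control cycle without any external sensing, which is what lets the reaction be nearly instantaneous.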

This is the first study to investigate soft-tissue injuries caused by robots and sharp instruments. Previous studies by the same researchers, as well as other groups, have focused on blunt collisions involving non-sharp surfaces.

The video below shows impact experiments using crash-test dummies and large industrial robots. Ouch.

READ ALSO:

The Man Who Made a Copy of Himself
April 2010

Article: A Japanese roboticist is building androids to understand humans--starting with himself

Who's Afraid of the Uncanny Valley?
Fri, April 02, 2010

Blog Post: To design the androids of the future, we shouldn't fear exploring the depths of the uncanny valley

Swarm of Bacteria Builds Tiny Pyramid
Thu, March 25, 2010

Blog Post: Researchers are putting swarms of bacteria to work, using them to perform micro-manipulations, propel microrobots, and act as biosensors

Surgeons and Robots Scrub Up
Thu, December 17, 2009

Blog Post: At Johns Hopkins University, doctors and engineers collaborate to create the next generation of robots for the operating room

Georgia Tech Robot Masters the Art of Opening Doors and Drawers

To be useful in human environments, robots must be able to do things that people do on a daily basis -- things like opening doors, drawers, and cabinets. We perform those actions effortlessly, but getting a robot to do the same is another story. Now Georgia Tech researchers have come up with a promising approach.

Professor Charlie Kemp and Advait Jain at Georgia Tech's Healthcare Robotics Laboratory have programmed a robot to autonomously approach and open doors and drawers. It does that using omni-directional wheels and compliant arms, and the only information it needs is the location and orientation of the handles.

The researchers discussed their results yesterday at the IEEE International Conference on Robotics and Automation, in Anchorage, Alaska, where they presented a paper, "Pulling Open Doors and Drawers: Coordinating an Omni-Directional Base and a Compliant Arm with Equilibrium Point Control."

One of the neat things about their method is that the robot is not stationary while opening the door or drawer. "While pulling on the handle," they write in their paper, "the robot haptically infers the mechanism's kinematics in order to adapt the motion of its base and arm."

In other words, most researchers trying to make robots open doors, cabinets, and similar things rely on a simple approach: keep the robot's base in place and move its arms to perform the task. It's easier to do -- and in fact that's how most robot manipulation is done -- but it limits the kinds of tasks a robot could accomplish.

The Georgia Tech researchers allow their robot to move its omni-directional base while simultaneously pulling things open -- an approach they say improves the performance of the task.

There's no better way to understand it than seeing the robot in action:

So how did they do it?

First, a look at their robot. According to Travis Deyle, a researcher at the Healthcare Robotics Lab who first reported on the new robot and its capabilities at Hizook, the robot is called Cody [photo, right]. It consists of a Segway RMP 50 Omni base with Mecanum wheels, a vertical linear actuator to raise the robot's torso up to 1.2 meters above the ground, a laser range finder, and a pair of 7-DOF MEKA Robotics arms.

A Mac Mini running Linux performs all the computation for the sensing and high-level control. Another computer running a Linux-based real time system controls the MEKA arms. The researchers wrote all their software in Python and used open source packages like ROBOOP and ROS.

The robot uses a simple hook as its end effector, which the researchers built with a 3D printer and coated with rubber to increase friction. At the wrist, a 6-axis force sensor measures the forces on the hook, whose design was based on the way a person uses a finger to pull something open [photo below].

But the most innovative thing is the control method they implemented, which they call equilibrium point control, or EPC. Here's the gist. Rather than model the dynamics of the arm and the impedance at the end effector or use inverse dynamics, the researchers created a control system that relies on simulated visco-elastic springs at the robot's joints. The EPC system uses these virtual springs, whose stiffness can be adjusted, to determine how the joints should move to achieve a desired movement.
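
A stripped-down version of the virtual-spring idea looks like this. The stiffness values and joint angles below are invented for illustration (the real controller works over the arm's full kinematics and adjusts stiffness on the fly), but the core relationship -- torque proportional to the gap between the commanded equilibrium angle and the actual joint angle -- is the essence of EPC.

```python
# Minimal sketch of equilibrium point control for one arm.
# Gains and angles are assumptions, not the Georgia Tech values.

def epc_torques(q_actual, q_equilibrium, stiffness):
    """Torque from a simulated spring at each joint: the controller moves
    the commanded equilibrium angles, and the virtual springs pull the
    real joints toward them."""
    return [k * (qe - qa)
            for qa, qe, k in zip(q_actual, q_equilibrium, stiffness)]

q = [0.00, 0.50, 1.00]       # current joint angles (rad)
q_eq = [0.10, 0.45, 1.00]    # commanded equilibrium angles (rad)
k = [30.0, 20.0, 10.0]       # adjustable virtual-spring stiffnesses
print(epc_torques(q, q_eq, k))  # approximately [3.0, -1.0, 0.0]
```

Moving the equilibrium points, rather than commanding torques or positions directly, is what makes contact forgiving: if a door resists, the spring simply stretches instead of the arm fighting the mechanism.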

Kemp and Jain say that this approach, combined with the robot's low mechanical impedance (which reduces the forces resulting from contact and thus minimizes the risks of damage to the robot, objects, and people), proved "easy to work with, easy to implement, and surprisingly effective."

They tested their approach with 10 different doors and drawers, reporting that the robot succeeded in 37 out of 40 trials. What's more, the robot was able to open doors and drawers from initial positions that would make the task difficult for robots with fixed bases.
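The reported numbers work out to a success rate of a bit over 90 percent:

```python
# Success rate from the reported trials: 37 successes out of 40 attempts
# across 10 different doors and drawers.
successes = 37
total = 40

rate = successes / total
print(f"{rate:.1%}")  # → 92.5%
```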

They write: "We empirically demonstrate that the system is robust to common forms of task variation, including variation in the mechanism being operated (tested with 7 doors and 3 drawers), and variation in the pose of the robot's base with respect to the handle."

I think that's researchspeak for "It works!"

Images and video: Georgia Tech's Healthcare Robotics Lab

Micro Robots Compete in 2-Millimeter Dash


This is the microrobotics competition arena. Image: NIST

At the IEEE International Conference on Robotics and Automation, in Anchorage, Alaska, this week, the U.S. National Institute of Standards and Technology, the famed NIST, is holding a robotics competition for small robots -- very small robots.

In the Mobile Microrobotics Challenge, robots with dimensions measured in micrometers will square off in a series of challenges taking place at a, uh, microchip playing field [photo above]. First there's a race across a 2 millimeter distance, about the diameter of a pinhead. Then the microbots will compete in a microassembly challenge in which they'll have to insert tiny pegs into tiny holes. Finally, there's a freestyle competition in which each team chooses how to show off its small bot in a grand way.
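To put the 2 millimeter dash in perspective, here's a quick scale comparison using the robot sizes from the article (the 300 micrometer robot size and the 1.8 m human height are illustrative picks, not official competition figures):

```python
# Scale comparison for the 2 mm dash.
course_m = 2e-3     # 2 millimeter racetrack
robot_m = 300e-6    # a roughly 300 micrometer robot (illustrative)

body_lengths = course_m / robot_m
print(round(body_lengths, 1))  # ≈ 6.7 body lengths

# The same relative distance for a 1.8 m tall human:
human_equiv = body_lengths * 1.8
print(round(human_equiv, 1))   # ≈ 12.0 meters
```

In other words, the race is short even in relative terms; the hard part is steering something this small at all.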

Researchers will remotely operate the microrobots, viewed under a microscope, using magnetic fields or electrical signals transmitted across the playing field. The bots, made from materials like silicon, gold, aluminum, nickel, and chromium, are a few tens of micrometers to a few hundred micrometers across and weigh just a few nanograms.

As the organizers put it, "These events are designed to 'road test' agility, maneuverability, response to computer control and the ability to move objects—all skills that future industrial microbots will need for tasks such as microsurgery within the human body or the manufacture of tiny components for microscopic electronic devices."

Here's a schematic of the 2 millimeter racetrack:

And here are the contenders:

Mag-µBot (Magnetic-Micro-Robot)
Team Name: Magic & Voodoo
Organization: Carnegie Mellon University (Pittsburgh, Pennsylvania)
Robot Dimensions: Under 500 micrometers in all dimensions
Materials: Neodymium-Iron-Boron magnetic particles suspended in a polyurethane matrix material

MagPieR (Magnetic - Piezoelectric microRobot)
Team Name: CNRS Team (The French Team)
Organization: FEMTO-ST Institute; ISIR - Institut des Systemes Intelligents et de Robotique (France)
Robot Dimensions: Under 400 micrometers in all dimensions
Materials: The MagPieR microrobot is composed of two distinct layers, an upper layer of ferromagnetic metal (such as nickel) and a lower layer of piezoelectric material.

MagMite
Team Name: MagMite Team
Organization: ETH Zurich (Switzerland)
Robot Dimensions: Under 300 micrometers in all dimensions
Materials: The device consists of two nickel masses connected through a gold spring.

µMAB (Micro-scale Magnetostrictive Asymmetric thin-film Bimorph)
Team Name: Stevens Institute of Technology
Organization: Stevens Institute of Technology (Hoboken, New Jersey)
Robot Dimensions: Under 600 micrometers in all dimensions
Materials: Nickel, copper

LEEPS Microrobot
Team Name: University of Maryland
Organization: University of Maryland (College Park, Maryland)
Robot Dimensions: 500 micrometers in all dimensions
Materials: The device is a silicon cube with a layer of nitride on the top surface and platinum on the bottom

EMMA
(ElectroMagnetic Microrobotic Actuation)
Team Name: University of Waterloo Nanorobotics Group
Organization: University of Waterloo (Canada)
Robot Dimensions: 500 micrometers and under in all dimensions
Materials: Nickel, cobalt, manganese, phosphorus

USS TESLA
Team Name: U.S. Naval Academy Microrobotics Team
Organization: U.S. Naval Academy (Annapolis, Maryland)
Robot Dimensions: 300 micrometers in diameter
Materials: Nickel, gold, polysilicon, nitride

We'll keep you updated about the competition and winners, and try to get some video as well.

Images: NIST
