Automaton

Researchers Using Rat-Robot Hybrid to Design Better Brain Machine Interfaces


A strange creature, half robot, half rat, has been seen scuttling across a laboratory in Japan. It's RatCar, a rat-vehicle experiment that scientists hope could lead to improved mobility for people with disabilities.

Researchers at the University of Tokyo wanted to see whether rats could control a miniature vehicle through the brain signals that move their limbs. They recently presented their project at the IEEE Engineering in Medicine and Biology Society annual conference in Buenos Aires, Argentina.

"We wanted to develop a brain-machine interface system aiming for future wheelchairs that paralyzed patients can control only with thought," says Osamu Fukayama of the university's Medical Engineering and Life Science Laboratory. "RatCar is a simplified prototype to develop better electrodes, devices, and algorithms for those systems."

Unlike some brain-machine interface, or BMI, devices that rely on noninvasive EEG to detect neural activity, the RatCar works through direct contact with the brain. Other researchers have used this technique to get rats to control robotic grippers, and monkeys to control computer cursors and even advanced robot arms.

In the RatCar, tiny neural electrodes [the dark dots on the tip of the device shown in the photo, right] were implanted in the motor cortex of the rats' brains, and the animals were suspended under a lightweight, motorized "neuro-robotic platform" with wheels. The objective was to make the vehicle collaborate with the rats to achieve the locomotion they desire.

The rats were trained on the car by towing it around an enclosed area with the motors disengaged. A vision system positioned above tracked the rats by following colored markers on their backs and the vehicle. It fed the positions into a "locomotion estimation model" program that correlated the motion of the animals with readings from the electrodes.
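
The article doesn't detail the estimation model, but a common baseline for this kind of motor decoding is a linear map from binned firing rates to body velocity, fit by least squares. Here's a minimal sketch of that idea; the bin size, array shapes, and (fake) training data are illustrative assumptions, not the Tokyo group's actual code:

```python
import numpy as np

# Illustrative assumption: firing rates binned at 100 ms, one column
# per electrode that yielded usable spikes. Real training data would
# come from the towing sessions; random numbers stand in here.
rates = np.random.rand(600, 12)      # 60 s of bins x 12 units
velocity = np.random.rand(600, 2)    # (vx, vy) from the overhead tracker

# Fit a linear decoder W (with a bias column) by least squares,
# so that velocity ~= [rates, 1] @ W.
X = np.hstack([rates, np.ones((len(rates), 1))])
W, *_ = np.linalg.lstsq(X, velocity, rcond=None)

def decode(rate_bin):
    """Turn one new bin of firing rates into a velocity command
    for the vehicle's wheel motors."""
    return np.append(rate_bin, 1.0) @ W
```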

Next, the rats were harnessed to the car more snugly, so that their limbs touched the floor only slightly. The researchers then switched the system into "neuro-robotic mode," with the neural signals used to help drive the car. Six of the eight rats used in the study adapted well to the car.

"The vehicle moved forward synchronously with a rat when it was placed inside," says Fukayama, but he adds that the degree to which the car was being controlled by the rat itself was unclear.

Since the rat would be forcibly moved along with the car, measuring its real intentions became a challenging problem. Another difficulty was that only a small percentage of the electrodes actually recorded neural activity, and the recorded neurons didn't necessarily correlate with target movements. 

Fukayama and colleagues Takafumi Suzuki and Kunihiko Mabuchi plan to perform more experiments to address the uncertainties. They want to confirm that the rats can drive the car in different directions and also measure the force the rats exert when trying to move under the car. That way, they can track discrepancies between the car's motion and the rats' apparent intentions: the less force, the better the neural link is working.

Rats have helped bring about many medical breakthroughs, and we'll see whether they'll help make thought-controlled wheelchairs commonplace.

Images: University of Tokyo's Medical Engineering and Life Science Laboratory


Omniwheels Gaining Popularity in Robotics

A recent workshop that involved some rapid development using RoboCup robots, a demonstration of KUKA's omniMove platform, and my previous posts on KUKA's youBot and the ETH Zurich's Rezero ballbot have gotten me interested in omniwheels.

Omniwheels and their variations (also called omnidirectional wheels, Swedish wheels, Mecanum wheels, or Ilon wheels) are an ingenious invention that allows a platform to move in any direction while facing any other direction.

Historically, omniwheels date back to a 1919 patent by J. Grabowiecki, and they have since been built in many different designs. Since their introduction by the Cornell RoboCup team in 2000 (see Raffaello D'Andrea's research paper), they have been widely used in some RoboCup categories, where they allow the soccer robots to move along a straight line while simultaneously rotating, so that they arrive with the desired orientation.
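
The move-straight-while-rotating trick falls directly out of omniwheel kinematics: each wheel's speed is a linear function of the platform's desired translation and rotation, so all three can be commanded independently. Here's a rough sketch for a generic three-wheel platform with wheels spaced 120 degrees apart; the geometry and numbers are textbook assumptions, not any particular team's robot:

```python
import math

# Wheel mounting angles around the body (rad) and the distance from
# the platform center to each wheel (m); both are assumed values.
WHEEL_ANGLES = [math.radians(a) for a in (90, 210, 330)]
R = 0.1

def wheel_speeds(vx, vy, omega):
    """Map a desired body velocity (vx, vy) in m/s and rotation rate
    omega in rad/s to the surface speed of each omniwheel."""
    return [-math.sin(a) * vx + math.cos(a) * vy + R * omega
            for a in WHEEL_ANGLES]

# Translate along +x while spinning in place -- something a
# conventionally wheeled platform simply cannot do:
print(wheel_speeds(0.5, 0.0, 1.0))
```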

Apart from the RoboCup setting, omniwheels are also used in omnidirectional conveyor systems, for example for handling packages. As you can see from the top picture (showing the parts of the custom-made omniwheels for the Rezero project), the mechanical design of omniwheels is complex. This complexity, along with the inevitable vibrations and comparatively high wear and tear, has limited their use to a few niche applications.

One such niche is moving very heavy or large goods, such as large aircraft wings or the rotor blades of wind turbines. The picture above shows KUKA's heavy load carrier, the HLC 60000. Built in 2007, it is used for internal logistics with payloads of up to 60 tons. The biggest KUKA omniMove vehicle ever built is 32 meters long and 4 meters wide and has a payload of 100 tons.

Other niches include forklifts for aircraft carriers, made by the U.S. company Airtrax, and some versions of Segway's RMP platform.

Another interesting, but still more futuristic, application of omniwheels is to drive the spheres used in ballbots like Masaaki Kumagai's BallIP and the Rezero (or in cars in the movie "I, Robot," where they allow omnidirectional movement without reorientation of the drive unit).

Will omniwheels become the wheel of choice for robotics, or will they remain a niche?

More photos:

Ray Oung's Distributed Flight Array (DFA) wheel

Omniwheel on the Rezero robot at the ETH Zurich in Switzerland

Thanks Raff, Robert, Ray and Mike!

Long Exposure Pictures of Robots Cleaning

This is a set of all the long exposure pics I've taken while reviewing cleaning robots. I spent a likely excessive amount of time reviewing each robot in detail (and you can read those reviews at the links below), but really, the pictures explain the cleaning patterns better than I ever could. It's illuminating (so to speak) to see them all right up against one another, so here they are:

iRobot Roomba 560, 45 minutes

Neato Robotics XV-11, 12 minutes

Evolution Robotics Mint (Sweeping Mode), 15 minutes

Evolution Robotics Mint (Mopping Mode), 23 minutes

Three very different robots, three very different cleaning techniques. Check out the individual reviews for more details on each.

[ iRobot ]
[ Neato Robotics ]
[ Evolution Robotics ]

Interview: Ken Goldberg Discusses Telerobots, Androids, and Heidegger


Ken Goldberg explores the interactions between technology, art, and media. Photo: Martin Sundberg

This year saw the invasion of telepresence robots—electromechanical avatars that allow you to be there without actually being there. Today, half a dozen companies are selling, or will start selling, these robots. I’ve tested two of the robots myself, discussing at length their technical merits as well as their practical shortcomings, and even helped a colleague build his own robotic self.

Although the technology behind these robots is fascinating, I’m also interested in the historical and philosophical aspects of telepresence. Telepresence robots didn’t come out of nowhere; they stem from a convergence of different technologies, each with its own history. The advent of robotic telepresence also reflects a moment in time when many of us are becoming ever more connected and available.

 


So what made these robots possible now? What’s so appealing about roaming around as a machine in a remote place? And where is this technology taking us, literally and figuratively?

To explore these themes, I spoke with Ken Goldberg, a robotics professor at the University of California, Berkeley, and a member of IEEE Spectrum’s editorial advisory board. When he’s not building robot cameras to spot wild birds or computer-controlled flexible needles that steer through soft tissue, he’s delving into the interactions between technology, art, and media.

If anyone can make a connection between robots and Heidegger, it’s Ken.

Erico Guizzo: I recently asked Marvin Minsky what he thought of current telepresence robots. He complained that they don't have legs. And I've seen other people complaining that they don't have arms. What do you think of their design?

Ken Goldberg: The main issue is cost. When you add legs or arms, you need more actuators, more sensors, more computation, and it gets expensive very fast. But at the same time, robotic parts and technologies are getting better and cheaper. I think it was Brian Carlisle [former CEO of Adept Technology] who said that if you can sell a car, which is a lot of metal and a lot of engineering together, for under 10,000 dollars, we should be able to do the same for robots. We need volume to make them cheaper. So, of course, we want robots that clean up the house and hopefully change diapers. But we have to start somewhere, and the one thing that is easier to do is communication. Today's telepresence robots are communication tools, and they raise some interesting design questions. For instance, how tall should they be? Rovio [a home robot sold by WowWee] is small, so you probably can't have an eye-to-eye conversation, unless you want to talk to your kids. Or your cat. I think height, and eye-to-eye interaction, is important. Another thing I consider key is the ability to point at things. And that's easy: You need just a laser and a two-axis gimbal. So there are many design issues to consider if you want to make telepresence a powerful experience.

EG: And why do we want to physically extend ourselves to distant places anyway? Telephone and Skype aren’t enough?

KG: The idea of remote control, that you can click a button here and something happens over there, is a very powerful and satisfying experience. We love our TV and garage remotes. Robots have a very intertwined history with this idea of remote operation. It might go back to Tesla’s famous experiments with a radio-controlled boat, which he demonstrated in New York in 1898. After World War II, the first robots were master-slave telerobots used to handle radioactive substances. Today telerobots are used for exploration, in space and underwater, and more recently for bomb disposal. Now, telepresence is different because you’re not manipulating an object or performing a repair; you’re interacting with people. So there are humans on both ends. The goal is to give the remote operator a sense that he or she is closer to the people on the other end. And hopefully vice versa. Naturally, the telephone and things like Skype are more or less trying to do the same thing. But the key question is, What’s missing? How can you make the experience of “being there” more fulfilling? One of the benefits of the robots is that they enhance the sense of agency, of being an agent, in the remote environment. You are not just a passive conversationalist; you can actually move around and explore. And that really matters. The ability to control where you are is empowering and gives you a different set of possibilities. There’s more spontaneity and discovery.

EG: In 2001 you edited a collection of essays titled, “The Robot in the Garden: Telerobotics and Telepistemology in the Age of the Internet.” One of the articles, by John Canny and Eric Paulos, describes a telepresence robot [image, right] very similar to the commercial versions we’re seeing today. Why did it take nearly 10 years for these robots to become commercially viable?

KG: Back then when people like John Canny and Eric Paulos were developing various kinds of telerobots and camera systems, the Internet and wireless networks weren’t as fast and reliable as they are today. Now networks have more bandwidth, better quality of service, lots of error correction. That makes a huge difference. The other thing that has changed is that it’s less expensive to build a robot today, because the components you need are getting better and cheaper. So you’re seeing these companies coming out, like Vgo and Anybots, trying to commercialize these robots. The big question is, When can they get the price down to a point where it’s available to a large number of people? When that happens, things will get very interesting.

EG: Today we use cellphones, e-mail, instant messaging, Twitter, Facebook—and soon some of us may be using telepresence robots. We’re staying connected in more ways and for longer periods of time. Where is this going?

KG: I explore some issues related to that in a course I teach with Dreyfus [UC Berkeley philosopher Hubert Dreyfus] on the philosophy of technology. The goal of the course is to discuss what is the "essence" of technology. We want to give students, many of whom will be creators of technology, a broader historical and social perspective to understand technology. Our starting point is a 1954 essay by Heidegger, "The Question Concerning Technology." Let me say first that Heidegger is a problematic figure. He wasn't the nicest guy. But we can't dismiss everything that he wrote. He's considered by many to be a fundamental philosopher for the 21st century. So, in a nutshell, what Heidegger says in this essay is that technology is really a "mode of being," a sort of attitude or culture we are immersed in. It's not something we can consciously adopt. It's all around us; we're engulfed in it. He says we're in a technological mode of being that is all about making the world available. The key concept here is availability. For example, he mentions the Rhine River. Rather than approaching the river as primitives, who might ponder how the gods created the river, or artists and poets, who would focus on the beauty of the river, our approach is that the river is a resource to generate power. He argues that we approach the world around us, nature in particular, as something that we should use to make other things available. Where it gets really interesting is that the availability starts to take on a momentum of its own. So we don't necessarily want anything in itself; all we want is everything to be transformable into something that we'll need in the future. The key thing that Heidegger hints at and worries about is that this worldview, if it continues in the direction it's going, will overwhelm us, and then we'll do the same thing to ourselves: We'll see ourselves as resources.

EG: I guess Heidegger would have hated telepresence robots…

KG: Yeah, maybe. Telepresence is another form of availability, going beyond the forms we have. As technologists, we're excited about it, but from Heidegger's perspective, it's another step along that trajectory that he is worried about. He's not a doomsayer; he's just saying we should beware. At the end of the essay he comes to this point where he sees this supreme danger, when we'll be engulfed and overwhelmed and we'll want to make ourselves constantly available. And you see that today with Facebook, Twitter, smartphones, compared to 10 years ago. We've become much more instantly available to each other and to the world, and we are willing to put out all kinds of information that was previously private. And also we are, especially the younger generations, happy to be available. So we're transforming ourselves into resources. But Heidegger also saw a bright side, a way out of this situation. As we move closer to being consumed by this technological mode of being, the positive outcome is that we'll be jolted into realizing what we're doing and we'll have the capability of stepping out of it. In other words, he's saying we have to hit bottom before we stop the madness.

EG: Almost ten years ago, you participated in a telepresence research project called the Teleactor, using people as proxies for other people. Is the Teleactor a precursor to robotic telepresence?


Teleactor Annamarie Ho. Photo: Bart Nagel

KG: The idea there was to have a person—the teleactor—who would be your proxy and go to a place when you're unable to go yourself. The teleactor would wear a camera and microphone, transmit that data over the Net, and you'd see and hear through them. So you would finesse the robot part by having a human. The teleactor also had an earpiece and would get instructions from you about where to go and who to talk to. We did a lot of experiments, but the technology was not there yet. First we were using analog video and we were constantly getting interference. Then we switched to Wi-Fi, one of the earliest versions of Wi-Fi, and it had its own problems. The network connections were also slow, and we were constantly fighting network delays. And we were using Java applets that were slow and Web interfaces that barely had video capabilities. It was primitive. I wish we'd had 4G networks to do the audio and video. That would have made things much easier!

EG: So when the operator spoke, the teleactor repeated what was said?

KG: Not in a robotic way. You can imagine that if the teleactor is just literally talking and acting like a robot—that’s sort of creepy. [Laughs.] Our idea was more like an actor taking directions from the remote person. So the teleactor can also improvise a little. Imagine it’s a meeting or a party and I’m connected to a teleactor. The teleactor can see a group of people and go near them, or just wait for you to decide where to go. We wanted to investigate the interactions that would take place and see how they’d compare to normal situations. I think someone should repeat the experiment today. Imagine you hire out-of-work actors, who love to engage with strangers, and send them as teleactors to parties and events. People who can’t attend the party in person would go as teleactors.

EG: And in the future we can replace teleactors with androids! We’re already seeing some steps in that direction. What do you think of telepresence robots that look like people, like the androids Hiroshi Ishiguro is creating?

KG: When we think about robots they are sort of generic in shape and appearance. [Ishiguro] is not doing that at all. His robots are not only very human but also very specific to individual humans. And these are not just functional machines that’d do work or entertain; these things are actually out there to act as surrogates in a very real way. It’s an idea with connections to psychology, mythology, and science fiction. It goes back to Galatea, the Golem, and later Pinocchio and Frankenstein and Blade Runner, and all the attempts to create something that’s very lifelike. He’s really pushing the limits and asking some deep questions about ourselves. It’s a really Cartesian question: Are we automatons or not? If robots become indistinguishable from humans, how are we going to treat them? It also forces us to reflect about our own mortality. It evokes this memento mori—just remember we’re going to die. Life is short: appreciate the present. How will you be remembered, if at all? Can something of me live beyond me? With these robots, just like a portrait or statue or other replica, you’ll be able to have something that conceivably could live beyond your existence. These androids help us explore these questions. Maybe we’ll discover that there are things we want to avoid. But the only way we’re going to know is by experimenting.

Review: Evolution Robotics Mint Sweeper

Evolution Robotics’ Mint sweeper robot made its debut at CES 2010, where we got a demo of it exhibiting its cleaning behaviors on video. Mint offers flexibility by doing away with the vacuum entirely, and using either wet or dry cleaning pads, Swiffer style. It’s certainly simpler, but does it work as well as the competition? Our review, after the jump.

-Design

Mint is adorable. I like its clean lines and straightforwardly modern black and white color scheme. It's compact (10 inches wide, 3 inches high), but at about 4 pounds I wouldn't call it light. On top, it has three backlit operation buttons and three small indicator lights, and that's the extent of the interface, although it also sings to you to let you know when it's charging, finished, stuck, etc. The big black diamond is how Mint localizes itself on your floor; more on that later.

Mint’s ‘chin’ is a pressure-actuated bumper, similar to other robot vacuums. It also has a frontal proximity sensor with which it can detect impending obstacles to prevent itself from running headlong into walls and stuff.

Underneath, Mint has two drive wheels and two casters that also serve as edge sensors… Mint is heavily weighted towards its butt, so if the casters drive off a ledge, the robot won’t tip forward and has plenty of time to stop and back up.

The system for attaching Mint’s cleaning pads is ingenious. There’s a detachable panel that sticks to the bottom of Mint’s chin with strong magnets, and the cleaning pads wrap around that. To get the pads to stay on, you “zip” them into two sets of rubber teeth, and then snap the panel onto Mint and you’re good to go.

It’s very secure while also being very easy to change. The modularity of the system also means that you can stick just about anything on there, from Swiffer pads to the reusable cloths that come included with Mint to something of your own.

Mint doesn’t have a charging dock. To charge it, you use a wall adapter that plugs in underneath the robot. The manual suggests that you charge Mint while its standing on its butt, which makes it take up less space, but then you can’t easily grab it by the butt-mounted handle. It might have been better to put the charging port on the side of the robot instead, which would allow for a bit more flexibility, but I suppose that’s a pretty minor quibble.

-Cleaning Technique

First, let’s just get one fact out of the way: Mint does not clean carpets. The manual spells it out on page one… Mint is for indoor use on hard surface floors only, specifically wood, tile, vinyl, linoleum, and laminate. So, if you have lots of carpet, Mint may not be for you.

If you have carpet and hardwood or tile, Mint can deal with that. Mint is supposed to avoid getting on any area rugs you might have by registering their edges with its sensors, but if your rugs are too flat, Mint will get on them anyway. If this is a problem for you, the manual suggests that you place obstacles around carpet to dissuade Mint from getting on them, but this isn’t necessarily a very practical thing to do for most people. In our testing, Mint avoided thick shag area rugs, but once (out of six trial runs) got onto a thinner rug that had rounded edges.

The way Mint approaches cleaning isn’t pseudo-random like the Roomba; it’s more like the Neato XV-11 in that it actively builds a map of each area it cleans. Mint doesn’t have any integrated localization sensors, but instead depends on an external reference point to tell it where it is in a room.

This is a NorthStar cube. It’s essentially a little infrared projector that shines some spots onto your ceiling. The big black diamond on top of Mint can spot those spots, and Mint uses their relative orientation to figure out where it is and which way it’s going. You don’t really have to pay much attention to where exactly you stick the cube, as long as it’s in the same room that you want Mint to be cleaning and it has a clear view of the ceiling. This location information, combined with Mint’s proximity and drop sensors, allows it to build a map of the entire area that it’s cleaning.
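
Conceptually, the localization math is simple: the robot knows the two spots' fixed positions in the room, measures where they appear relative to itself, and recovers its position and heading from a two-point rigid alignment. Here's a 2D sketch of that geometry; the coordinates and sensor model are invented for illustration and have nothing to do with Evolution Robotics' actual NorthStar algorithm:

```python
import math

# Ceiling-spot positions projected onto the floor plane (m);
# these coordinates are invented for illustration.
SPOT_A = (0.0, 0.0)
SPOT_B = (1.2, 0.0)

def localize(obs_a, obs_b):
    """Estimate the robot's pose (x, y, heading) from where the two
    spots appear in its own frame: a two-point rigid alignment that
    finds the rotation first, then the translation."""
    world_dir = math.atan2(SPOT_B[1] - SPOT_A[1], SPOT_B[0] - SPOT_A[0])
    body_dir = math.atan2(obs_b[1] - obs_a[1], obs_b[0] - obs_a[0])
    heading = world_dir - body_dir
    c, s = math.cos(heading), math.sin(heading)
    # Robot position = spot A's world position minus the rotated
    # body-frame offset to spot A.
    x = SPOT_A[0] - (c * obs_a[0] - s * obs_a[1])
    y = SPOT_A[1] - (s * obs_a[0] + c * obs_a[1])
    return x, y, heading

# Spot A seen 1 m dead ahead, spot B ahead and 1.2 m to the right:
print(localize((1.0, 0.0), (1.0, -1.2)))  # -> (0.0, -1.0, pi/2)
```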

NorthStar isn’t strictly necessary to get Mint to work, and it’s not a problem if Mint gets underneath something and can’t localize for a little while. If you don’t use NorthStar at all, Mint won’t be able to localize itself, and consequently will clean a smaller area and not do its edge cleaning behavior. This can actually be a good thing, though, for small areas like a bathroom. If you want Mint to clean more rooms, you can buy additional NorthStar cubes to help it navigate.

Mint uses two different cleaning techniques depending on whether you tell it to sweep or mop, based on what motions are most effective with each type of cleaning pad. The dry sweeping technique is a straight line approach, where Mint covers open areas in a back and forth pattern and then makes a complete edging circuit. For mopping, Mint also does straight lines, but while moving along each line it goes forward and back and to each side in a sort of "Y" motion. We've got some long exposure pictures in the next section that show this very well.
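
The dry pattern, incidentally, is essentially the classic boustrophedon ("as the ox plows") coverage path from the robotics literature. A toy waypoint generator, with made-up room dimensions standing in for Mint's real planner, looks something like this:

```python
def boustrophedon(width, height, spacing):
    """Generate back-and-forth sweep waypoints covering a
    width x height rectangle, with sweep rows `spacing` apart."""
    points = []
    for i in range(int(height / spacing) + 1):
        y = i * spacing
        # Alternate left-to-right and right-to-left rows.
        xs = (0.0, width) if i % 2 == 0 else (width, 0.0)
        points += [(xs[0], y), (xs[1], y)]
    return points

# A 3 m x 2 m patch of floor swept in rows 20 cm apart:
for waypoint in boustrophedon(3.0, 2.0, 0.2):
    print(waypoint)
```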

-Cleaning Effectiveness

As discussed, Mint cleans using two different techniques depending on whether it’s mopping or sweeping. I took a long exposure image of each of these patterns, and the differences are pretty cool. First is sweeping; Mint finished the room in about 15 minutes:

You can easily see the linear coverage pattern of the open space, as well as the edge coverage behavior. Mint did miss a small area at the lower left which temporarily contained a cat. The cat did not suffer any permanent physical damage, but I can’t speak to potential emotional trauma.

Here’s the mopping pattern, which took Mint a little over 20 minutes to execute:

It’s essentially a similar coverage pattern, except with the addition of the back and forth scrubbing behavior and without doing the edging.

As you can see Mint has no problems navigating around corners, under furniture, and avoiding (most) rugs. We have a tangle of cords under our entertainment center, and Mint didn’t get caught or disconnect anything after multiple runs.

I mentioned that Mint was cute, and its cleaning behavior is equally adorable. I’ve always liked watching cleaning robots at work, but Mint is my favorite, because you can see it thinking… Whenever Mint updates its internal map based on new data and decides where to go next, it pauses and its little indicator lights blink for a second. To me, this small touch lends it an incredible amount of personality.

While sweeping, Mint is excellent at picking up dust and small patches of dirt and hair, which actually stick to the pad. In open areas, it’s likely more effective than you wielding a Swiffer because of Mint’s consistent, overlapping coverage. Mint does have problems with larger objects and will just push them into corners and next to walls since they don’t get trapped on the pad. For example, we have some pet rats in our living room (in a cage, obviously), and they like to toss sunflower seed shells onto the floor. Mint doesn’t pick these up. It also has problems with big clumps of cat hair and dust bunnies, since it pushes them around as well. The dry pads also aren’t great at dealing with stickiness; they’re basically just for small dry things like dust and hair.

Overall the wet pads did a better job at cleaning since they (combined with the mopping behavior) were more effective at breaking up dirt and getting stuff to stick to them. Impressively, the pad that came with Mint stayed moist, even while cleaning most of our living room. The difference was easy to notice when walking around barefoot. Mint didn’t do quite as well on grouted tile, because it was hard for it to get down into the grout lines as effectively.

When Mint finishes cleaning, it sings at you, and then returns to where it started, which is both handy and adorable. The battery is supposed to last 3 hours dry sweeping and 2 hours wet mopping, and I never had any issues with Mint running out of power. The only thing you have to do after Mint finishes (besides plugging it in) is either tossing the pad in the trash if you used a disposable one, or tossing it in the laundry if you used a reusable one. Otherwise, there’s no additional maintenance… I didn’t experience any issues with hair tangled in the wheels or anything like that.

It is slightly irksome that there's no easy way to confine Mint to a specific area. Placing the NorthStar cube does mean that Mint will tend not to stray too far from there, but if you don't want it under your couch or something, you basically have to go old school and block it off by putting stuff on the floor.

Mint not having a vacuum means that it doesn’t sound like a vacuum. In fact, I would describe Mint as silent, for all practical purposes. There’s an almost imperceptible hum from the wheel motors when it moves, but otherwise, it’s just the noise of the pad moving over the ground. What this means is that you can run Mint while you’re home, like when you’re on the couch watching TV, and it won’t bother you in the least.

-Conclusion

Many things about Mint seem, at first blush, to be compromises in functionality when compared to other cleaning robots like the Roomba or the Neato XV-11… Mint can’t be scheduled. Mint doesn’t have a charging dock. Mint can’t clean carpet. However, Mint is designed to be simple and straightforward, and you probably won’t find yourself missing those features. Realistically, even though the Neato XV-11 and Roomba dock themselves and will run on their own when you’re not around, you still have to empty their dustbins, which means that you’re spending similar amounts of time on post-cleaning maintenance for all three robots.

Another thing to keep in mind is, as Nancy Smith from iRobot told us, cleaning robots (at this point) aren’t really able to duplicate the level of cleaning that a human can accomplish with an upright vacuum or a mop. Mint is not really designed to clean a very dirty floor, but it’ll keep an already clean floor clean. You can run it every day, or every other day, and it’ll help keep dirt, dust, and pet hair under control with only minimal effort from you.

Personally, I was more than satisfied with how well Mint cleaned, although I’m not sure why I wouldn’t just have it use its wet cloth and mopping technique every time, since it seemed to clean better and it left my floors all shiny. Also, the fact that Mint is silent is a major plus, because it means I can use it whenever I need to and it doesn’t prevent me from doing other important things, like napping.

So, the big question: how does Mint compare to iRobot’s Roomba and Neato’s XV-11? Well, firstly, it’s $250, which is cheaper than either the Roomba ($300 and up for a 500 series) or the XV-11 ($400). Secondly, it doesn’t do carpet, so if you want a robot that cleans carpet… Yeah, it’s not gonna be Mint. On hardwood, Mint does as well or better than the other robot vacuums at daily maintenance of dust and pet hair, but suffers at times from the lack of a vacuum for picking up larger pieces of debris. While sweeping, Mint is similar in speed to the XV-11, and significantly faster than the Roomba. Mint’s ability to use different types of cleaning techniques (wet and dry) is also a bonus, and its lack of noise is a major distinguishing feature.

I’d recommend Mint if you’re considering a robot vacuum and don’t have a lot of carpet to deal with. Mint is smart, it’s versatile, it’s quiet, and it’s (relatively) cheap. It does have some issues picking up larger clumps of dirt and hair, but as a simple maintenance robot, it’s very effective.

Now, I realize that I may not have explicitly answered all of your questions, but fear not: I'll be on the phone with Evolution Robotics in the next couple of days for a follow-up post in which they'll personally be answering all of the questions that I haven't been able to, as well as whatever new questions come up based on this post.

You can find out more about Mint at the Evolution Robotics website, and buy one from MintCleaner.com.

The r3 Rope Robot

I recently had the chance to visit ETH Zurich's M3-Lab, part of the Sensory Motor Systems group headed by Robert Riener and home to one of the world's most advanced rope robots. The r3 uses ropes guided over deflection units and pulled by motorized winches to move end effectors to any point in space. A modular winch setup allows payloads of up to 100 kg and top speeds of up to 18 m/s. The necessary stiffness is provided by Dyneema ropes with very little stretch (<1%) and a very stiff aluminum frame (see pictures below).
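
The control principle behind any such cable-driven robot is pleasingly direct: the required length of each rope is just the straight-line distance from its deflection point to the end effector, so commanding a position means commanding winch lengths. A minimal inverse-kinematics sketch, with invented anchor coordinates rather than the r3's real geometry:

```python
import math

# Invented rope anchor (deflection-unit) positions on a notional
# 8 m cubic frame; the real r3 geometry differs.
ANCHORS = [(0, 0, 8), (8, 0, 8), (8, 8, 8), (0, 8, 8)]

def rope_lengths(p):
    """Inverse kinematics: the length each winch must pay out is the
    distance from its anchor to the end-effector position p."""
    return [math.dist(anchor, p) for anchor in ANCHORS]

# Winch set points to hold the end effector at the frame center:
print(rope_lengths((4.0, 4.0, 4.0)))
```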

In addition, the r3 can also function as a tendon-based haptic interface: combining information from absolute position measurement units with high-resolution motor encoders makes it possible to determine absolute winch positions at any given time. Force sensors on each rope and a sampling frequency of 4 kHz complete the setup, making the robot a very versatile platform.

The r3 is built around a large CAVE (Cave Automatic Virtual Environment), equipped with multiple large projection screens and an advanced 3D sound system to provide a realistic and immersive simulation of environments.

The Somnomat project shown in the video above uses the r3 system to conduct sleep research. Here is what Joachim von Zitzewitz, the lead researcher of the project, has to say on the Somnomat's motivation:
 

A child is rocked to sleep. Adults drowse in a rattling train. But why? Not much information about the effect of vestibular feedback on sleep is found in the literature. Therefore, we want to investigate the underlying mechanisms by moving the test subject in different degrees of freedom while he/she is sleeping. Additionally, we focus on the influence of other modalities (acoustics, brightness of light) on sleep.

As in other sleep research projects, the additional equipment on the Somnomat monitors physiological signals such as heart rate, breathing frequency, and EEG. Future studies starting in 2011 will investigate correlations between these data and the vestibular stimulation -- a system identification of the test person, if you will.

The timing of my visit didn't allow me to test-sleep the bed (yet!), because the robot is concurrently used in two other projects that focus on very different dynamics. The rowing simulator recreates the experience of rowing on a river, down to the dynamics of the oar entering the water and the crowds cheering from the shore. The athlete sits in a rowing scull mounted at the center of the CAVE, with the r3 robot's ropes connected to the outer ends of the oar. As you row, the robot displays oar forces in 3D in accordance with the user's movements.

A third project focuses on understanding motor learning of fast movements by using the robot to support a tennis-like 3D movement. It implements different assistive controllers, such as position control and path control (i.e., a controller restricting the user's movement to a given trajectory in space).

With a single rope axis generating peak forces of up to 900 N, guaranteeing the r3's safety has been a major focus. A series of safety measures have been implemented (e.g., all drive trains are equipped with electromechanical brakes that block the winch in case of an emergency) and tested with dummies (see my favorite picture at the end of the post).

 

 

Thanks Joachim!

Autonomous Vehicle Driving from Italy to China


The Russian policeman waved at the orange van zigzagging at the empty plaza, ordering it to stop. The van didn’t, so the officer stepped closer to address the driver.

But there was no driver.

The van is an autonomous vehicle developed at the University of Parma's Artificial Vision and Intelligent Systems Laboratory, known as VisLab. Crammed with computers, cameras, and sensors, the vehicle is capable of detecting cars, lanes, and obstacles -- and driving itself.

The VisLab researchers, after getting tired of testing their vehicles in laboratory conditions, decided to set out on a real-world test drive: a 13,000-kilometer, three-month intercontinental journey from Parma to Shanghai. The group is now about halfway through their trip, which started in July and will end in late October, at the 2010 World Expo in China. (See real time location and live video.)

The autonomous vehicle Grand Challenges organized by the Defense Advanced Research Projects Agency, or Darpa, in the United States, popularized the idea of cars that drive themselves. After participating twice in the Darpa competitions, VisLab was seeking new ways of testing their ideas.

“When you do things in the lab, it all really works. But when you go out on a real road, with real traffic, real weather, it’s another story,” says Alberto Broggi, VisLab's director and an engineering professor at Parma University.

That's how the idea for their Parma-Shanghai trip, which is partially funded by the European Research Council, came about. The goal is to test, and later perfect, their vision and navigation systems, which the researchers hope to one day deploy on commercial vehicles.

Unlike the Darpa vehicles, the VisLab van is not driving fully autonomously from start to finish. That approach would require at least a rough map of the complete route, Broggi says, adding that they lack maps for locations like Mongolia and Kazakhstan. So instead of programming the trajectory ahead of time, the Italians adopted a simpler, though still technically challenging, approach.

Two vans travel in line. The first uses maps and GPS to drive itself whenever possible, but a human driver is in control most of the time. The second van uses its cameras and navigation system to follow the first; it visually tracks the lead van, plans a trajectory in real time, and generates controls for steering and accelerating or braking. (If a car gets in between the two vans, the second van guides itself using GPS instructions it receives from the leader.) Watch:
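
Follow-the-leader behavior of this sort is commonly built on a pure-pursuit steering law: aim at a point on the leader's path a fixed lookahead distance away and steer along the circular arc that reaches it. The sketch below is that generic textbook controller with assumed parameters, not VisLab's actual software:

```python
import math

WHEELBASE = 2.5    # assumed follower wheelbase (m)

def pure_pursuit(target_x, target_y):
    """Steering angle to reach a target point given in the follower's
    own frame (x forward, y left), e.g. the tracked lead van."""
    # Curvature of the circular arc through the origin and the target:
    curvature = 2.0 * target_y / (target_x ** 2 + target_y ** 2)
    # Bicycle-model steering angle for that curvature.
    return math.atan(WHEELBASE * curvature)

# Lead van tracked 8 m ahead and 1 m to the left:
print(math.degrees(pure_pursuit(8.0, 1.0)))  # ~4.4 degrees
```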

And whereas the Darpa vehicles ran unmanned, the VisLab vans have people on board (including, on one occasion, hitchhikers). Inside, researchers monitor the systems and are ready to take control of the vehicles if necessary. The vans drive with no one inside only during demonstrations. That was the case when one was stopped by the police officer in Yekaterinburg, in central Russia.

Broggi insists they are not attempting to prove anything, just trying to "stress our systems." And he acknowledges that even the second van is not driving autonomously the entire time. There were situations when they had to take control because of maniac drivers or distracted pedestrians, or because they had to drive faster to compensate for delays. Still, he estimates that on average the van is in autonomous mode more than 90 percent of the time.

The compact vehicles carry a lot of equipment. Each has seven cameras, four laser scanners, a GPS unit, and an inertial sensor suite. Two cameras hanging above the windshield provide stereo vision, used for identifying lane markings and the terrain slope. Three synchronized cameras behind the windshield stitch their images into a 180-degree panoramic frontal view. The laser scanners -- three mono-beam units and one four-plane unit -- detect obstacles, pedestrians, and other vehicles, as well as ditches and bumps on the road.

Each vehicle also carries three computers. One computer is responsible for processing images and data from the front of the vehicle. Another computer handles data for the sides. The third integrates all the data and plans a path, which in turn triggers low-level controls for steering, accelerating, and braking the vehicle. Solar panels on top of the vans power all the electronics.

A key piece of software is the one that processes the 180-degree frontal view. This component takes the large panoramic image and identifies the lead van, even when approaching a tight turn or a steep hill. It can also detect road markings, pedestrians, and obstacles in general. And though it can recognize Italian traffic signs, it can't do the same for Russian or Chinese ones.

The vans are manufactured by Piaggio, better known for its Vespa scooters. The vans are fully electric, and the researchers drive them in the morning, recharge in the afternoon, drive some more, and recharge again overnight. They use power outlets along the way or, when none is available, diesel generators.

A group of 20 staff researchers and students travel in a convoy that includes four vans (two pairs of leader-follower vehicles) and six support trucks, which provide a mechanic shop, storage, accommodation, and satellite communications. Every three weeks a few team members return to Italy to rest and others join the convoy.

The experiment will produce a mountain of data. Whenever the vehicles are running, the computers are recording data from the cameras, laser scanners, inertial sensors, GPS, vehicle actuators, batteries, and other systems. Every night the team replaces a set of hard drives with empty ones. They estimate they’ll end up with between 50 and 100 terabytes of data.
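
A quick back-of-the-envelope check makes that figure plausible. Assuming roughly 80 recording days and six hours of driving per day (my numbers, not VisLab's), the upper estimate of 100 terabytes works out to a sustained sensor stream of nearly 60 megabytes per second, a reasonable ballpark for seven cameras plus four laser scanners:

```python
# Back-of-the-envelope data-rate check; the recording schedule
# is assumed, not something VisLab has stated.
total_bytes = 100e12            # upper estimate: 100 TB
days, hours_per_day = 80, 6     # assumed schedule
seconds = days * hours_per_day * 3600

print(f"{total_bytes / seconds / 1e6:.0f} MB/s sustained")  # ~58 MB/s
```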

The idea is that after the test is over, the researchers can use the data to study every instance when things didn’t work, such as when the vehicle failed to detect lanes or misidentified an obstacle. Moreover, they can design new vision and navigation systems and use the data set to test them, recreating their journey.

So the end of the trip is not the end of the work for the VisLab team. "It will be the beginning of many new projects," Broggi says. Indeed, they've already done one major upgrade during the trip. At the beginning, there were problems with the second van, which was failing to follow the lead van, and they had to tweak their software and re-upload it to the vehicles.

Even with all the planning, some problems are unpredictable. During a recent stretch of the trip, the convoy found itself in the middle of Moscow's notoriously bad traffic. Because of the congestion, a two-lane road had three rows of cars. The van's system insisted on staying in its lane, so the researchers had to turn to manual driving.

“It was too dangerous to drive lawfully!” Broggi says with a laugh.

And if the technology is challenging, the bureaucracy can be even more overwhelming. VisLab, a top research group in autonomous vehicles and intelligent transportation systems, first proposed the project to the Italian government years ago; the researchers never heard back. The project moved forward when they teamed up with Overland, an expedition organization, which handles logistics, including obtaining permissions to enter Russia and China carrying high-tech cameras and satellite equipment.

In fact, to cross the Russian border the group was held for 22 hours by customs officers, who took a huge number of pictures of the vehicles and the equipment and demanded a pile of paperwork. The Russians also insisted on having access to every data packet sent over a satellite connection.

The VisLab group is currently crossing Siberia, facing rain, snow, and icy roads. Next they will encounter even tougher conditions, a stretch of mountainous driving in Kazakhstan, with treacherous hills and roads, which will last for several days.

As for the Russian police officer who stopped -- or tried to stop -- the driverless van, the researchers say they approached him and explained the situation. The officer wanted to see documents, though he didn't ask for anyone's driver's license. In the end, he didn't look amused, but he left -- without issuing a ticket.


HRP-4 Hides It All Somewhere

Kawada Industries and the National Institute of Advanced Industrial Science and Technology (AIST) have just unveiled the latest addition to their family of humanoid robots, the HRP-4. HRP-4 is designed “in the image of a lean but well-muscled track-and-field athlete,” and it certainly is pretty damn lean…


Hoaloha Robotics: Tandy Trower's New Healthcare Robotics Company

Tandy Trower, who helped launch Microsoft Robotics Studio back in 2006, has started a brand spankin’ new robotics company called Hoaloha Robotics. The goal? Affordable ($5000 – $10000) socially assistive (i.e. elder care) robots in the next three to five years. Trower envisions a robot able to do all of the conventional remote monitoring and pill reminder stuff, but also able to assist with movement, object retrieval, and potentially provide some degree of intelligent social interaction.

Trower believes he can make an important contribution by developing a common interface and software that will make assistive robots easy to use and customize with applications, similar to the way Apple standardized the interface and application model for smartphones. “This is what primarily I believe is holding back most of the industry right now. It’s not that robots can’t be built, it’s that nobody has defined the software that’s going to turn robots into useful appliances,” he said.

Er, they haven’t? Hm.

“The components exist; it’s not difficult to build such a platform,” he said. “What people have lacked is the ability to envision what the right package should contain and, most important, what the applications and user interface should be.”

Now that’s something I wholeheartedly agree with. Or at least, I agree that the interface is going to be the tricky part. I’m not trying to minimize the amount of work that it’s going to take to get the hardware and programming up to snuff, but in order to be an effective assistive robot, the Hoaloha platform is going to have to be more independent than a Roomba or an XV-11, both of which are designed to be totally independent (more or less) and neither of which quite pulls it off. This, specifically, is what Hoaloha is going to be focusing on, partnering with other companies for hardware development. And when it comes to hardware components, they do exist, and they’re getting cheaper in leaps and bounds, making that three to five year timeframe (and the target price) potentially achievable.

Also, here’s the same obligatory quote we’ve been hearing for like the last decade:

Trower said the industry feels a lot like the early days of the PC, when there were Apple II and TRS-80 computers, but they weren’t yet doing a lot to enhance productivity or change people’s lives.

Dammit, I’m getting old over here… It feels like we’ve been stuck in the roboeighties forever.

[ Hoaloha Robotics ] VIA [ Seattle Times ] and [ Hizook ]

Thanks Dan!

Is Telepresence the Next Big Thing in Robotics?

Is telepresence the next big thing in robotics? Will telepresence robots revolutionize work, manufacturing, energy production, medicine, space exploration, and other facets of modern life? Or is it just all hype?

Below is a compilation of opinions from interviews I did and from other sources. Then tell us what you think in the comments section at the bottom of the page.

"Manual labor could easily be done without leaving your home ... One region of the world could export the specialized skills it has. Anywhere."
—Marvin Minsky, MIT professor, in his 1980 telepresence essay

"Telepresence is vastly easier to do than AI is, so remotely controlling a robot -- be it to visit a remote location or do surgery -- will mature much sooner than autonomous robots will."
—Rob Enderle, principal analyst, Enderle Group

"After 100 years of advances in communications, where we discovered how to transmit text, voice, images, why not try to transmit presence?"
—Trevor Blackwell, founder and CEO of Anybots

"It made me realize that the telepresence experience -- you actually can have these robotic avatars, then your consciousness is injected into the vehicle, into this other form of existence. It was really quite profound."
—James Cameron, movie director, about piloting a robotic submersible into the shipwreck of Titanic, in a TED talk in Long Beach, Calif., early this year

"Whatever hugs do for people, I'm quite sure telehugs won't do it."
—Hubert Dreyfus, philosopher at the University of California, Berkeley, in The Robot in the Garden: Telerobotics and Telepistemology in the Age of the Internet

"All of these [robotic telepresence] products are just begging me to kick them over."
—Lou Mazzucchelli, an expert in video teleconferencing, in this New York Times article
