Automaton

France Developing Advanced Humanoid Robot Romeo

Romeo, shown here in a computer-generated rendering, is a French humanoid robot designed to assist elderly and disabled people. Image: Aldebaran Robotics

France is set to join the select club of countries that have developed advanced adult-size humanoid robots.

Paris-based Aldebaran Robotics, famed for its small humanoid robot Nao, is working with major French research organizations to build a larger and more capable humanoid called Romeo, to be unveiled next March.

Designed to assist elderly and disabled individuals in their daily activities, the 1.4-meter-tall robot will be able to walk through a home, fetching food from the kitchen, taking out the garbage, and acting as a loyal companion that entertains its owners and keeps tabs on their health.

Expected to cost about 250,000 euros, Romeo will be available first to partners in the project and university researchers. An improved version will later be offered to hospitals and nursing homes, and after that to individuals as well.

Aldebaran claims that Romeo, weighing in at 40 kilograms, will introduce several innovations, including a four-vertebra backbone, articulated feet, a partially soft torso, a composite leg exoskeleton, and a new kind of actuator that lets the robot control its limbs in a safer way.

The company also says it's putting a lot of emphasis on the robot's communication capabilities. “We want people to interact with Romeo using just natural speech and gestures,” Bruno Maisonnier, founder and CEO of Aldebaran, told me at the IEEE Humanoids 2010 conference in Nashville, Tenn., last week.

The Romeo project, which started in early 2009, involves 13 partners and has a budget of 10 million euros, half of which comes from France's national and local governments.

The robot will have 37 degrees of freedom, including 2 DOF for each eye, 1 DOF for each foot, and 3 DOF for the backbone. An advanced type of actuator known as a backdrivable mechanism, which uses motors coupled to screws and cables rather than reduction gears, will power the joints. This mechanism can sense forces on the limbs and move them more safely.

A computer-generated rendering shows Romeo doing chores at a home. Image: Aldebaran Robotics

To learn more about the project, I spoke to Rodolphe Gelin, head of cooperative projects at Aldebaran and one of the engineers leading the development of Romeo.

Below is my full interview with Gelin, in which he describes the current status of the project, the applications Aldebaran envisions for the robot, and details about its technology.

Erico Guizzo: Colin Angle, one of the founders of iRobot, said in a recent interview that he doesn’t believe that people will have humanoid robots in their homes. I guess he likes his Roombas… Why build a human-shaped robot?

Rodolphe Gelin: My ideas about this are changing regularly. So I'll give you my opinion today. I think that a humanoid robot is very useful if you want to have a multipurpose robot. If you only want a vacuum cleaner, it's good enough to have a little cylinder robot. But if you want a robot that can open the refrigerator, grab a bottle of water, pour water into a cup, and take the cup to another room, being a little cylinder is not very helpful. If you want to do plenty of things, then you have to have plenty of robots, or one robot able to do lots of things. There are also human-robot interaction issues. I think that people will find a humanoid much easier to use and much more fun than a Roomba or some kind of box-on-wheels robot. Many people don't want just a slave robot; they want a machine that works like a companion, and so it's helpful if your robot has a certain height and a face that people can look at and talk to in a natural way. That's why a humanoid shape is interesting. Does it need legs? We've discussed that a lot. Legs and wheels both have advantages and disadvantages. We decided to use legs because we think it's more convenient and adaptable for most situations. And also because it's a big technological challenge. But that's my answer today. It may change tomorrow!

EG: What are some of the things you envision Romeo doing? Will it be able to do tasks that other humanoid robots can’t do today?

RG: We have some scenarios. For example, imagine that Mr. Smith lives alone and has a Romeo in his house. Because Mr. Smith is elderly, he might forget where things are, like his glasses or the remote control of the TV. But the robot can constantly keep track of these objects, using its vision, and it doesn’t forget where they are. So Romeo can assist Mr. Smith to find objects and even go get them for him. The robot is also monitoring Mr. Smith himself, checking that he’s feeling okay, and if not, calling family members or medical assistance. We’re also working on more physically challenging tasks. Imagine that Mr. Smith asks Romeo to take the garbage out and the robot has to grab a 5-kilogram bag and walk down the stairs with it. Or imagine that Mr. Smith needs help to stand up and walk. Above all, we’re putting a lot of emphasis on Romeo’s communication functions. We want a robot that is easy to use and that, unlike other robots, doesn't look dangerous.

EG: There are lots of humanoid robots out there, both small and large, and their technology has been advancing fast. How did Aldebaran go about designing Romeo? Is it essentially an enlarged version of Nao?

RG: No. The things we're learning with Nao are helping us with Romeo and vice versa, so there's a lot of synergy between the two projects. In fact, from the software point of view, most code will be compatible between the two, and both will use our Choregraphe software to create movements and behaviors. But in terms of hardware, they are very different robots. You can't simply transform a small robot into a bigger one because variables like power, mechanical strength, and other things don't scale in the same way. One thing that is very different compared to other humanoids is how Romeo controls its leg actuators. Japanese robots like the HRP use position control: You control the position and speed of each joint but you can't control the force or torque. Other robots [like Justin, developed at DLR in Germany] are capable of controlling force, but they rely on very fast and accurate—and expensive—force sensors in each joint. In our approach, based on technology developed at CEA [France's Alternative Energies and Atomic Energy Commission], we don't need a force sensor; we only have what is called a backdrivable mechanism. It means a mechanism that can convert electrical current into motion but also motion into current. To do that you need a mechanism with very low friction, so energy losses are very small. Then by reading the current in the actuator you're able to compute the torque applied by the robot on the environment. For Romeo we designed a system of screws and cables with very low friction and inertia [image, right].

EG: Why did you decide to use this approach and how unique is it? Are there other robots using this?

RG: Researchers we spoke to complained that if you can't control the force and torque in each joint, you don't truly control what you want the robot to do. That becomes critical when the robot is big, because of the damage it can cause. So from both a control and a safety point of view, controlling torque rather than just position is very important. The WAM arm from Barrett Technology uses a backdrivable mechanism. But most humanoid robots use motors and gears to power their joints, and these mechanisms are not backdrivable: because of energy losses in the gears, you can't relate the electrical current in the motors to force. That's why they need force sensors if they want to move in a compliant way; it's called active compliance. Romeo is different. It has no gears. We transmit the motion from the motors to the joints using screws and cables. This system is backdrivable. We can measure the torque applied by the arm of the robot just by reading the electrical currents in the motors. It's passively and intrinsically compliant. We think that this approach is better because it offers more control and is safer and cheaper.
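To make the current-to-torque idea concrete, here is a minimal sketch in Python. The torque constant, reduction ratio, and efficiency below are illustrative placeholders, not Romeo's actual parameters:

```python
# Torque sensing on a backdrivable joint: no force sensor, just the
# motor current. All constants are made-up placeholder values.

K_T = 0.060   # motor torque constant [N*m/A] (assumed)
N   = 50.0    # transmission reduction ratio (assumed)
ETA = 0.95    # efficiency; close to 1 because friction is very low (assumed)

def joint_torque_from_current(current_amps):
    """Estimate the torque the joint applies on the environment."""
    motor_torque = K_T * current_amps   # torque at the motor shaft
    return motor_torque * N * ETA       # torque reflected through the drive

def in_contact(current_amps, limit_nm=5.0):
    """Crude compliance check: back off if external torque is too high."""
    return abs(joint_torque_from_current(current_amps)) > limit_nm
```

The low-friction screw-and-cable drive is what makes the estimate usable: in a conventional geared joint, friction losses swamp the current-to-torque relationship, which is exactly the problem Gelin describes.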


Romeo's upper torso and head. Image: Aldebaran Robotics 

EG: In terms of walking strategies, are you using the traditional ZMP [zero moment point] approach that most other humanoid robots use?

RG: You have to take ZMP into account, or your robot will fall. But what we’re doing differently is that we want to have a top-down approach to the walking algorithm. In the classical approach, say you’re in the living room and you want to go to the kitchen; first you plan where you have to put your feet for each step going from the living room to the kitchen and then the robot will try to put its feet in the footprints it planned. And if the robot encounters an obstacle, it has to stop and compute new footprints to avoid the obstacle. In our approach—and this is something that other top labs are also pursuing—the robot walks by relying on a real-time pattern generator. Basically the robot uses its vision system to decide where it wants to go and then it just walks in that direction, adjusting its walking in real time. It’s a more reactive and realistic way of implementing walking. In fact, vision will be Romeo’s main sensing capability. It has two cameras to do simultaneous localization and mapping, or SLAM. Basically the robot can recognize where it is and navigate from room to room.
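For readers who want the contrast with footstep pre-planning spelled out, here is a minimal sketch of such a reactive walking loop in Python. Every robot-facing name in it (localize, pattern_generator.next_step, and so on) is a hypothetical stand-in, not Aldebaran's actual API:

```python
import math

def heading_to_goal(pose, goal_xy):
    """Bearing from the robot to the goal, relative to its current yaw."""
    dx = goal_xy[0] - pose.x
    dy = goal_xy[1] - pose.y
    return math.atan2(dy, dx) - pose.yaw

def reactive_walk(robot, goal_xy):
    """Walk toward a goal without pre-planning every footstep."""
    while not robot.at(goal_xy):
        pose = robot.localize()                 # e.g., from visual SLAM
        bearing = heading_to_goal(pose, goal_xy)
        # The pattern generator turns the desired direction into the next
        # footstep on the fly; an obstacle just changes the bearing, with
        # no need to stop and re-plan a whole footprint sequence.
        step = robot.pattern_generator.next_step(bearing)
        robot.execute(step)
```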

EG: Will the robot be capable of performing dexterous manipulation as well?

RG: Romeo's hand has four fingers and one degree of freedom. The hand can perform a basic grasping motion and grasp objects like a cup on a table, or place a dish inside the oven. Of course we'd like to have more dexterous hands, but that would have made the robot too expensive. So we're leaving manipulation for a future phase of the project, when we may implement things like tactile sensors and force control on the hands.

With three fingers and one thumb, Romeo's hand will be able to grasp objects like cups and plates. Image: Aldebaran Robotics

EG: Open source robotics is gaining momentum. Do you plan to release any part of Romeo’s software as open source?

RG: From a commercial point of view, we're exploring different business models, and at this point we can't say Romeo is open source. But as with Nao, we'll release an SDK [software development kit] that will provide libraries and functions to those interested in programming Romeo. So Romeo is an open platform in the sense that there are APIs to control the robot.

EG: When will Romeo be available and how much will it cost?

RG: We have completed the design of all the mechanics of the robot—the legs, arms, torso, neck, and head. The head is already built, and we're expecting the parts to assemble the legs and the arms. We should have a full-body prototype sometime early next year. The robot will cost about 250,000 euros. This is not a final price tag; it's just an order of magnitude. A lot of things can change. We'd really like to tell the world that if you want to buy a humanoid robot, you don't have to buy a Japanese one; there are other good choices available. We want to be one of those choices. It's a challenge, because Japan has very strong products. But we want to be considered a competitor in this area. Romeo will initially be a research tool for university labs and our partners in the project. With time, we plan to improve the robot and transform it into a more reliable and affordable product that we can offer to hospitals and nursing homes. After that we'd sell it to individuals for personal use. We hope that one day anyone can have a Romeo as a companion at home.

A prototype of Romeo's head. Image: Aldebaran Robotics

This interview has been edited and condensed.

Athlete Robot Learning to Run Like Human

Japanese researcher Ryuma Niiyama wants to build a biped robot that runs.

But not like Asimo, whose running gait is a bit, well, mechanical.

Niiyama wants a robot with the vigor and agility of a human sprinter.

To do that, he's building a legged bot that mimics our musculoskeletal system.

He calls his robot Athlete. Each leg has seven sets of artificial muscles. The sets, each with one to six pneumatic actuators, correspond to muscles in the human body -- gluteus maximus, adductor, hamstring, and so forth [see diagram below].

To simplify things a bit, the robot uses prosthetic blades, of the type that double amputees use to run.

And to add a human touch, Niiyama makes the robot wear a pair of black shorts.

Human runners with prosthetic feet, like South African Paralympic runner Oscar Pistorius, nicknamed the "Blade Runner," "give me great inspiration," Niiyama tells me.

The robot has touch sensors on each foot and an inertial measurement unit on the torso for detecting the body's orientation.

Niiyama developed the robot as a PhD candidate at the Department of Mechano-Informatics of the University of Tokyo with colleague Satoshi Nishikawa, under the supervision of their advisor, Professor Yasuo Kuniyoshi.

They presented their project at the IEEE Humanoids 2010 conference in Nashville, Tenn., last week.

The researchers are now teaching Athlete to run. They programmed the robot to activate its artificial muscles with the same timing and pattern as a person's muscles during running.
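As a rough illustration of what "same timing and pattern" means in practice, here is a sketch that maps gait phase to on/off valve commands for each muscle group. The onset/offset numbers are placeholders, not Niiyama's measured data:

```python
# Phase-based muscle activation, loosely in the spirit of human running:
# each muscle group is pressurized during a fraction of the gait cycle.
# The (onset, offset) fractions below are illustrative only.

ACTIVATION_PATTERN = {
    "gluteus_maximus": (0.00, 0.25),
    "adductor":        (0.10, 0.40),
    "hamstring":       (0.85, 1.00),
}

def valve_commands(phase):
    """Map gait phase in [0, 1) to open/closed commands for each valve."""
    phase %= 1.0
    return {
        muscle: onset <= phase < offset
        for muscle, (onset, offset) in ACTIVATION_PATTERN.items()
    }

# Example: at 15 percent of the gait cycle, glutes and adductors fire together.
print(valve_commands(0.15))
```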

Diagram of Athlete's artificial leg muscles. Image: Ryuma Niiyama

Niiyama, who has since become a post-doc at MIT's Robot Locomotion Group in Cambridge, Mass., says they're trying to better understand how we control our muscles during a challenging task like running.

Previously, he studied another complex motion, jumping, by developing a bipedal hopping robot called Mowgli.

Traditional humanoid robots like Asimo run by changing the angle of their joints. Their legs are rigid, powered by motors coupled to reduction gears. In other words, they run like robots.

People, as well as animals, don't keep track of the position of their joints -- we use our viscoelastic muscles and tendons to bounce against the ground, propelling our bodies forward while maintaining balance.

Athlete can take three, sometimes five steps, moving at about 1.2 meters per second. Then it falls. Watch:

It's a short dash, but the researchers are optimistic. They plan to fine-tune the artificial muscles and improve the feedback control system. And then, hopefully, move their tests to a real running track.

Images: Ryuma Niiyama

Robotic Drone Flies Itself by Looking at Landscape Below

Robotic aircraft have taken to the skies, finding increasing use in military applications, law enforcement, and environmental monitoring, and becoming popular among hobbyists who want to build their own drones.

These unmanned aerial vehicles, or UAVs, have varied degrees of autonomy, though typically they depend on GPS and also on supervision from a human operator, who can send commands to the aircraft and receive images from its on-board cameras.

Now researchers at McGill University's Mobile Robotics Lab, in Montreal, Canada, are making these smart aircraft a bit smarter. They've developed a UAV control system that uses aerial images to identify visual cues on the landscape and steer the aircraft autonomously.

Aerial vehicles guided by advanced vision capabilities could help track wildfires, oil spills, and even animal herds. The aircraft would carry out monitoring and mapping missions requiring no human supervision or GPS coordinates.

Anqi Xu, a PhD student, and his advisor, Professor Gregory Dudek, director of the Mobile Robotics Lab, say that their current system is capable of following a coastline or a road surrounded by forests.

The researchers presented their results at the IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems in October.

They used a fixed-wing UAV called the Unicorn from Procerus Technologies, which they can control via software. The aircraft carries a gimbal-mounted camera that streams video over a radio link. A Linux notebook computer analyzes the video feed and sends heading updates to the UAV in real time.

To track coastlines, their vision algorithm analyzes the color properties of the images to distinguish between water and land. To track a highway in a wooded region, it analyzes texture cues. Once the algorithm has identified the boundaries between different areas, it determines a heading to follow.
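The pipeline boils down to segment, find the boundary, steer. A minimal sketch of that idea using OpenCV might look like the following; the color thresholds and gain are assumptions for illustration, not the McGill team's actual values:

```python
import cv2
import numpy as np

def coastline_heading(frame_bgr):
    """Return a steering correction (radians) that keeps the water/land
    boundary centered in the camera image."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Classify water by hue (blue-ish); thresholds are illustrative.
    water = cv2.inRange(hsv, (90, 50, 50), (130, 255, 255))
    h, w = water.shape
    # For each image row, the first water pixel approximates the coastline.
    boundary = np.array([np.argmax(water[y] > 0) for y in range(h)])
    offset = (boundary.mean() - w / 2.0) / (w / 2.0)   # -1 .. 1
    return 0.3 * offset   # small proportional gain -> heading update
```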

To test their system, the researchers took their UAV to the beach. Watch:

The test area was a 1-kilometer-long, S-shaped stretch of tropical coastline. After the researchers manually aligned the UAV, their control system took over and successfully steered the aircraft along the shore. The UAV traveled at an altitude of 150 meters with an average ground speed of 13 meters per second, in a lateral wind of 7 meters per second.

How would that performance compare to a human operator piloting the UAV using the same visual information?

The researchers asked five volunteers to watch the recorded images and specify headings to keep the UAV following the coastline. Though there were discrepancies between the headings produced by the algorithm and those chosen by the volunteers, the researchers concluded that their system performs nearly as well as a human operator.

In the next phase of the project, the researchers plan to use their aerial tracker to transmit navigation data to another of their systems, an amphibious robot designed to study coral reefs.

Images and video: Mobile Robotics Lab/McGill University

DARwIn-OP Humanoid Robot Demo


Dennis Hong is a Virginia Tech roboticist who has been building some really cool robots. He's also a good salesman. Watch him showing off his "new baby," DARwIn-OP, at this week's IEEE Humanoids 2010 conference in Nashville, Tenn. Designed by Hong's RoMeLa team and collaborators at the University of Pennsylvania's GRASP Lab, Purdue University, and Korean company Robotis, DARwIn-OP is open source in both hardware and software. That means that, in principle, you can fabricate the parts, choose your own electronics and actuators, and build your own. Or maybe you'd prefer to buy one already assembled? Robotis is selling it for around U.S. $8,000. (Update: Robotis announced that it will be $12,000 MSRP, with a $9,600 educational discount price.)

Specs below from Robotis:

DARwIn-OP (Dynamic Anthropomorphic Robot with Intelligence-Open Platform)
* Height: 455 mm (17.9 inches)
* Weight: 2.8 kg (6.3 lbs)
* Head: USB camera (HD); status LEDs on eyes and forehead; USB mic; two microphones on sides of the head (optional)
* Torso: Speaker; 3-axis gyroscope and 3-axis accelerometer; Mini SD; WiFi; two cooling fans; two USB interfaces; HDMI; audio line-in; audio line-out; battery; external power input; power switch; Ethernet port; seven status LEDs; removable handle
* Feet: FSR X4 sensor (optional)
* Default walking speed: 24.0 cm/sec (9.5 in/sec); 0.25 sec/step (user-modifiable gait)
* Default standing up time from ground: 2.8 sec (from facing down) and 3.9 sec (from facing up)
* Built-in PC: 1.6 GHz Intel Atom Z530 with on-board 4 GB flash SSD
* Management controller (CM-730): ARM Cortex-M3 (STM32F103RE, 72 MHz)
* 20 actuator modules: Robotis Dynamixel RX-28M (6 DOF leg x2 + 3 DOF arm x2 + 2 DOF neck)
* 1 spare actuator (for maintenance and expansion)
* Self-maintenance kit
* Standby mode for low-power consumption
* 4.5 Mbps high-speed Dynamixel bus for joint control
* Battery (30 minutes of operation), charger, and external power adapter
* Mechanical and electronics information and source code: http://sourceforge.net/projects/darwinop
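Because both the hardware and software are open, driving the joints directly is fair game. As a hypothetical example of what that looks like with the Robotis Dynamixel SDK (the RX series speaks protocol 1.0, and goal position lives at address 30 in its control table), a single joint command might be:

```python
# Hypothetical sketch of commanding one DARwIn-OP Dynamixel joint over
# the shared bus. Port name, servo ID, and baud rate are assumptions.

from dynamixel_sdk import PortHandler, PacketHandler

ADDR_GOAL_POSITION = 30   # RX-series control table: goal position (2 bytes)
DXL_ID = 1                # joint servo ID (assumed)

port = PortHandler("/dev/ttyUSB0")
packet = PacketHandler(1.0)          # RX servos speak protocol 1.0

port.openPort()
port.setBaudRate(1000000)            # SDK helper; the bus itself is faster

# Move the joint to mid-range (0..1023 maps to the servo's travel).
packet.write2ByteTxRx(port, DXL_ID, ADDR_GOAL_POSITION, 512)
port.closePort()
```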

Robots To Stay Very Far Away From: Ham De-boning Arm

This video may be NSFW if you don’t like watching raw meat get sliced up by a robot.

The reason someone thought it was a good idea to give this robot arm a razor-sharp knife to stab meat with is that boning hams is a repetitive task, i.e., something that a robot would be great at. They're probably right, and it's an impressive technical achievement, because the robot has to be able to compensate for lots of variability in, uh, "meat form and bone size." Using these robots, it takes only 10 people, instead of 20, to bone 500 hams an hour.

On the other hand, I can't help but think two things. First of all, this is the sort of semi-skilled labor that until very recently was not at risk of automation, because of the knowledge and adaptability required. And second, we're giving robot arms knives now. PANIC!

I’m kidding, of course.

Via [ DigInfo ]

Quadrotor + Kinect = One Weird Looking Robot

Kinect's 3D sensor is so cheap and effective that it's getting bolted onto any robot that moves, and quadrotors are just the latest victims. UC Berkeley's quadrotor is using the Kinect for autonomous flight and dynamic obstacle avoidance, and as long as you don't come at it from behind, it works great. The nice thing about using the Kinect like this is that it doubles as a SLAM system: the robot can fly around and build a 3D map of a space using the same data it relies on to keep from crashing into stuff.
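The reason one sensor can serve both jobs is that a depth image back-projects directly into a 3D point cloud, which can feed an obstacle check and a SLAM map alike. A minimal sketch, using nominal Kinect intrinsics (assumed values):

```python
import numpy as np

FX = FY = 525.0          # focal length in pixels (nominal Kinect value)
CX, CY = 319.5, 239.5    # optical center for a 640x480 depth image

def depth_to_points(depth_m):
    """Back-project a (480, 640) metric depth image to an (N, 3) cloud."""
    v, u = np.indices(depth_m.shape)
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.dstack((x, y, z)).reshape(-1, 3)
    return pts[pts[:, 2] > 0]        # drop pixels with no depth return

def too_close(points, threshold_m=0.5):
    """Crude obstacle check: anything in front of the sensor within range?"""
    return bool((points[:, 2] < threshold_m).any())
```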

[ UC Berkeley Hybrid Systems Lab ] via [ Trossen ]

Greedy Robot Picks Only The Ripest Strawberries


Japan’s National Agriculture and Food Research Organization has developed this excessively complicated robot that’s able to visually recognize ripe strawberries and then delicately pluck them and drop them in a basket.

The robot operates at a speed of 9 seconds per strawberry (about 400 berries an hour), which is probably at least 9 times slower than an experienced human, so I'm really not sure how the designers figure that using robots would be 60% faster. The only way I can get that math to work is by using an impractical number of robots, and by impractical, I mean hugely expensive. Don't get me wrong, I think there's a future in agricultural robots like this… But they're going to have to find some way of overcoming cheap and efficient human labor first. That has already happened with lots of crops, but with some exceptions, fruit is significantly more difficult, because it has a ripeness factor and bruises easily.

The strawberry harvesting robot is currently being tested in the field, with a more practical production version due next year.

Via [ CrunchGear ]

Dreamer Sociable Robotic Head


UT Austin's HCR Lab just got this robot head, and its primary goal is to "elicit a sense of trust and sociability to an otherwise pure mechatronic device." This is a moderately refreshing (and, on the whole, quite advisable) approach to creating a robot… It's very easy to focus on functionality without worrying about whether or not people will actually want to interact with your robot. Obviously, a lot of thought was put into Dreamer, because it sits securely in that sweet spot of humanish without trying too hard.

One of the things that I think makes this robot appear so natural is the fact that it has fast eyes that lead its head around, just like an animal or human. There's only a minimal amount of that sluggish, mechanical servo response, and the video even mentions that the eyes are capable of moving even faster, up to "human speed." Plus, as we've mentioned before, having eyelids is a really big deal.

[ HCR Lab ] via [ Engadget ]

Ping Pong Robot Learns by Doing


Katharina Mülling (holding the emergency stop switch), Jan Peters, and Jens Kober monitor their ping pong robot practicing against a ball gun. All photos: Axel Griesch/MPG, München 

Despite all the recent advances in robotics, one fundamental task appears to remain as hard as ever: robot programming.

To be sure, robot programming in industrial settings has evolved significantly, from a series of mechanical switches to advanced programming languages and teach-pendant devices for trajectory planning. But getting robots to do their jobs still requires a great deal of human labor -- and human intelligence.

The situation is even worse when it comes to programming robots to do things in non-industrial environments. Homes, offices, and hospitals are unstructured spaces, where robots need to deal with more uncertainty and act more safely.

To overcome this programming bottleneck, engineers need to create robots that are more flexible and adaptable -- robots that, like humans, learn by doing.

That's what a team led by Dr. Jan Peters at the Robot Learning Lab, part of the Max Planck Institute for Biological Cybernetics, in Tübingen, Germany, is trying to do. Peters wants to transform robot programming into robot learning. In other words, he wants to design robots that can learn tasks effortlessly instead of requiring people to painstakingly determine their every move.

In the video below, you can see his students taking their robot "by the hand" to teach it motor skills needed for three tasks: paddle a ball on a string, play the ball-in-a-cup game, and hit a ping pong ball. 

Here's how Dr. Peters explained to Automaton his team's approach: "Take the example of a person learning tennis. The teacher takes the student by the hand and shows basic movements: This is a forehand, this is a backhand, this is a serve. Still, it will take hours and hours of training before the student even feels comfortable at performing these behaviors. Even more practice is needed for the student to be able to play an actual game with these elementary behaviors." But still, he adds, humans succeed at learning the task. Why can't robots do the same? "That's what we're trying to do: Make our robots mimic the way humans learn new behaviors."

In the first part of the video, graduate student Katharina Muelling shows the robot how to paddle a ball on a string by performing the action while holding the robot's "hand." The robot decomposes the movement into primitive motor behaviors -- a discrete motor primitive that modulates the rhythmic paddling with an increasing amplitude until it becomes a stable rhythmic behavior -- and quickly "learns" how to perform the task.
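"Motor primitive" here refers to a compact, parameterized movement representation; dynamic movement primitives (DMPs) are the textbook example in this line of research. The sketch below is a generic one-dimensional discrete DMP, not the lab's actual implementation:

```python
import numpy as np

def dmp_rollout(y0, goal, weights, dt=0.01, alpha=25.0, beta=6.25, ax=8.0):
    """Integrate a 1-D discrete DMP: a spring-damper pulled toward `goal`,
    shaped by a learned forcing term so it reproduces a demonstration."""
    n = len(weights)
    centers = np.exp(-ax * np.linspace(0, 1, n))   # basis centers in phase
    widths = n / centers**2                        # common width heuristic
    y, yd, x = float(y0), 0.0, 1.0                 # x: phase, decays 1 -> 0
    traj = []
    while x > 1e-3:
        psi = np.exp(-widths * (x - centers) ** 2)
        # Forcing term fades as the phase decays, guaranteeing convergence.
        f = x * (goal - y0) * psi.dot(weights) / (psi.sum() + 1e-10)
        ydd = alpha * (beta * (goal - y) - yd) + f
        yd += ydd * dt
        y += yd * dt
        x += -ax * x * dt
        traj.append(y)
    return np.array(traj)
```

Imitation learning amounts to fitting the weights so the rollout matches the demonstrated trajectory; the same weights then become the knobs that later self-improvement can adjust.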

For comparison purposes, the researchers tried to manually program the robot's motors to perform the same task. It took them three months and the result wasn't as good as the imitation learning experiment, which took less than an hour, Dr. Peters says. 

In the second part of the video, Muelling teaches the robot the ball-in-a-cup game. [See photo on the right; the robot has to swing the yellow ball, which is attached to a string, and make it land in the blue cup.] This skill is significantly more difficult than paddling the ball on a string, and the robot doesn't have enough data to simply imitate what the human did. In fact, when the robot attempts to reproduce the human action, it can't match the accelerations of the human hand, and the ball misses the cup by a large margin. Here, self-improvement becomes key, Dr. Peters says.

"For every new attempt, when the robot reduces the distance by which the ball misses the cup, the robot receives a 'reward,' " he says. "The robot subsequently self-improves on a trial-by-trial basis. It usually gets the ball in the cup for the first time after 40 to 45 trials and it succeeds all the time after about 90 to 95 trials."

How does the robot's learning ability compare to a human's? PhD student Jens Kober, who led this particular experiment, wanted to find out: He went home for a holiday last year and enjoyed the benefit of an extended, large family -- always good subjects for a scientific experiment. He showed his many cousins the ball-in-a-cup game and rewarded them with chocolate. It turned out that the younger ones (around 6 years old) would not learn the behavior at all, the ones in their early teens (10 to 12) would learn it within 30 to 35 trials, and the grownups would be much faster.

"His supervisor may be the only person in his lab who has not managed to learn this task," Dr. Peters quips.

In the last part of the video, the researchers tackle an even harder task: ping pong. Again, Muelling teaches the robot by holding its "hand," this time to hit a ping pong ball sent by a ball gun [photos above]. The challenge here is to use -- and modify -- previously learned basic motions and combine them with visual stimuli: The robot needs to keep track of the ball, which may come from different directions, and then execute the right set of motions.

Some of their work, part of the GeRT consortium, a program that aims at generalizing robot manipulation tasks, is still preliminary, Dr. Peters notes. But he's confident they can teach their robot to become a good ping pong player. How good? Maybe not as good as Forrest Gump, but good enough to beat everyone in the lab.

Samuel Bouchard is a co-founder of Robotiq in Quebec City.

WheeMe Massage Robot Roams Around Your Back


There are several therapeutic robots out there, but this one is a bit different. While robots like Paro the baby seal require you to stroke them, the DreamBots WheeMe caresses you.

According to the company, this massage robot uses "unique tilt sensor technology" to move slowly across a person's body "without falling off or losing its grip." As the bot roams around, its four sprocket-like rubber wheels press gently on the skin.

Founded by a bunch of Israeli electronics and defense engineers, DreamBots will show off the WheeMe at CES next January. There's no word on price yet. The company admits the robot can't give you a deep tissue massage, because it's very light (240 grams, or 8.5 ounces). But they claim the device can provide "a delightful sense of bodily pleasure." 

It's unclear how big the market is for a body-rolling robot. I guess we'll have to wait and see.

In the meantime, watch the WheeMe navigate:

Another video and more images:


