I have to admit that I'm a sucker for simple solutions to difficult problems. At ICRA this week, one of the cleverest new designs (and winner of the award for best video) was for a small tube climbing robot. The Biorobotics lab and Manipulation lab at Carnegie Mellon University have been working for several years on dynamic climbing bots that can climb between walls without any special attachment mechanism. But they wanted to come up with a smaller design that could make it up three-dimensional tubes.
The result is this little device. Its simple motor spins an unbalanced mass at a uniform velocity. As the mass swings around, it causes the robot to bounce back and forth between the tube walls. Two rubber o-rings let the researchers specify the exact contact points and increase friction with the walls.
This isn't the first tube-climbing, vibrating robot, but it has some distinct advantages. Earlier designs relied on fibers or bristles to create anisotropic friction with the walls, and vibration caused motion in the direction of lowest friction. The problem with these designs comes when you need to remove the robot -- now you're forced to work against the maximum friction.
What's most impressive about Carnegie Mellon's new bot is its speed, versatility, and payload capability. In the video, you can see that it travels up to 20 body-lengths per second and has a payload capacity of roughly 5x its weight. The robot can even climb different-sized tubes, although at different rates.
The researchers say they weren't application driven, but it's not hard to imagine such a simple device coming in handy for navigating tubing quickly.
(Video courtesy of Amir Degani, Siyuan Feng, Howie Choset, and Matthew T. Mason)
UPDATE: It turns out that the courageous individual in the video is Sami Haddadin, the study's lead author, who was clearly confident in the collision-detection system he devised. I incorporated additional details he gave me.
The idea of a robot in the kitchen cooking us meals sounds great. Just watch out when the automaton is handling the knives!
To find out what would happen if a robot holding a sharp tool accidentally struck a person, German researchers set out to perform a series of stabbing, puncturing, and cutting experiments.
They fitted an articulated robotic arm with various tools (scalpel, kitchen knife, scissors, steak knife, screwdriver) and programmed it to execute different striking maneuvers. They used a block of silicone, a pig's leg, and at one point a human volunteer's bare arm as the, uh, test surface.
The researchers -- Sami Haddadin, Alin Albu-Schaffer, and Gerd Hirzinger from the Institute of Robotics and Mechatronics, part of DLR, the German aerospace center, in Wessling, Germany -- presented their results today at the IEEE International Conference on Robotics and Automation, in Anchorage, Alaska.
The main goal of the study was to understand the biomechanics of soft-tissue injury caused by a knife-wielding robot. But the researchers also wanted to design and test a collision-detection system that could prevent or at least minimize injury. Apparently the system worked so well that in some cases the researchers were willing to try it on human subjects.
We applaud the guy [editor's note: see update above] at the end of the video who put his body on the line in the name of robotic science.
Warning: Some people may consider content graphic or upsetting.
The researchers acknowledge that there are huge reservations about equipping robots with sharp tools in human environments. It won't happen any time soon. (Sorry, you'll still have to chop that cucumber salad yourself). But they argue that only by getting more data can roboticists build safer robots.
The experiments involved the DLR Lightweight Robot III, or LWRIII, a 7-degree-of-freedom robot manipulator with a 1.1-meter reach and moderately flexible joints. The robot, which weighs 14 kilograms, is designed for direct physical interaction and cooperation with humans.
The tools the researchers tested included [photo, right]: (1) scalpel; (2) kitchen knife; (3) scissors; (4) steak knife; (5) screwdriver.
The researchers performed two types of experiments: stabbing and cutting, testing the different tools striking at various speeds, with and without the collision-detection system active.
In most cases, the contact resulted in deep cuts and punctures, with potentially lethal consequences. But remarkably, the collision-detection system was able to reduce the depth of the cuts and in a few cases even prevent penetration altogether.
Although the robotic arm has a force-torque sensor on its wrist, this sensor is not used in the collision-detection system; it only serves as a measurement reference in the experiment. "The collision detection and reaction," Haddadin told me, "is based on a very good dynamics model of the robot and the fact that, unlike other robots, we have torque sensors and position sensors in every joint."
With the dynamics model (which includes rigid body dynamics, joint elasticity, and motor model) and the sensor measurements, the robot can detect a collision nearly instantaneously. (The control system relies on a "nonlinear disturbance observer.")
"This method does not require any additional external sensors and only relies on the internal capabilities of the robot," says Haddadin.
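The scheme Haddadin describes -- comparing a dynamics model against joint torque and position measurements -- is often realized as a momentum-style disturbance observer. Here's a minimal single-joint sketch of that idea; the structure, gains, and threshold are illustrative assumptions, not DLR's actual implementation.

```python
# Hedged sketch of a momentum-style disturbance observer for collision
# detection on a single joint with known inertia. All names and values
# are illustrative.

def collision_detector(inertia, gain, threshold, dt):
    """Return a stateful step function that flags external (collision) torque."""
    state = {"p_hat": 0.0, "residual": 0.0}

    def step(velocity, motor_torque, model_torque):
        # Generalized momentum from the measured joint velocity.
        p = inertia * velocity
        # Predicted momentum rate: applied torque minus the model's torque,
        # plus the current residual (which converges to the external torque).
        p_dot_hat = motor_torque - model_torque + state["residual"]
        state["p_hat"] += p_dot_hat * dt
        # Residual grows whenever reality diverges from the model.
        state["residual"] = gain * (p - state["p_hat"])
        # A residual above threshold indicates an unmodeled external torque.
        return abs(state["residual"]) > threshold

    return step
```

In a real arm this runs per joint at the control rate, and the reaction strategy (stop, retract, go compliant) is triggered as soon as any residual crosses its threshold.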
This is the first study to investigate soft-tissue injuries caused by robots and sharp instruments. Previous studies by the same researchers, as well as other groups, have focused on blunt collisions involving non-sharp surfaces.
The video below shows impact experiments using crash-test dummies and large industrial robots. Ouch.
To be useful in human environments, robots must be able to do things that people do on a daily basis -- things like opening doors, drawers, and cabinets. We perform those actions effortlessly, but getting a robot to do the same is another story. Now Georgia Tech researchers have come up with a promising approach.
Professor Charlie Kemp and Advait Jain at Georgia Tech's Healthcare Robotics Laboratory have programmed a robot to autonomously approach and open doors and drawers. It does that using omni-directional wheels and compliant arms, and the only information it needs is the location and orientation of the handles.
The researchers discussed their results yesterday at the IEEE International Conference on Robotics and Automation, in Anchorage, Alaska, where they presented a paper, "Pulling Open Doors and Drawers: Coordinating an Omni-Directional Base and a Compliant Arm with Equilibrium Point Control."
One of the neat things about their method is that the robot is not stationary while opening the door or drawer. "While pulling on the handle," they write in their paper, "the robot haptically infers the mechanism's kinematics in order to adapt the motion of its base and arm."
In other words, most researchers trying to make robots open doors, cabinets, and similar things rely on a simple approach: keep the robot's base in place and move its arms to perform the task. It's easier to do -- and in fact that's how most robot manipulation is done -- but it limits the kinds of tasks a robot can accomplish.
The Georgia Tech researchers allow their robot to move its omni-directional base while simultaneously pulling things open -- an approach they say improves the performance of the task.
There's no better way to understand it than seeing the robot in action:
So how did they do it?
First, a look at their robot. According to Travis Deyle, a researcher at the Healthcare Robotics Lab who first reported on the new robot and its capabilities at Hizook, the robot is called Cody [photo, right]. It consists of a Segway RMP 50 Omni base with Mecanum wheels, a vertical linear actuator that raises the robot's torso up to 1.2 meters above the ground, a laser range finder, and a pair of 7-DOF MEKA Robotics arms.
A Mac Mini running Linux performs all the computation for the sensing and high-level control. Another computer running a Linux-based real-time system controls the MEKA arms. The researchers wrote all their software in Python and used open source packages like ROBOOP and ROS.
The robot uses a simple hook as its end effector, which the researchers built with a 3D printer and coated with rubber to increase friction. The design is based on the way a person uses a finger to pull something open [photo below]. A 6-axis force sensor at the wrist measures the forces on the hook.
But the most innovative thing is the control method they implemented, which they call equilibrium point control, or EPC. Here's the gist. Rather than model the dynamics of the arm and the impedance at the end effector or use inverse dynamics, the researchers created a control system that relies on simulated visco-elastic springs at the robot's joints. The EPC system uses these virtual springs, whose stiffness can be adjusted, to determine how the joints should move to achieve a desired movement.
Kemp and Jain say that this approach, combined with the robot's low mechanical impedance (which reduces the forces resulting from contact and thus minimizes the risks of damage to the robot, objects, and people), proved "easy to work with, easy to implement, and surprisingly effective."
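To make the virtual-spring idea concrete, here's a minimal single-joint sketch of equilibrium point control. The function names, stiffness values, and step sizes are illustrative assumptions, not the actual Georgia Tech code.

```python
# Hedged sketch of equilibrium point control (EPC) for one revolute joint.
# A simulated visco-elastic spring pulls the joint toward a commanded
# equilibrium angle; motion is produced by sliding that equilibrium point.

def epc_torque(q, q_eq, stiffness, damping, q_dot):
    """Torque from a virtual spring-damper pulling the joint angle q
    (radians) toward the commanded equilibrium angle q_eq."""
    return stiffness * (q_eq - q) - damping * q_dot

def move_equilibrium(q_eq, target, step_size):
    """EPC commands motion by stepping the equilibrium point toward the
    target; the virtual spring then drags the real joint along."""
    error = target - q_eq
    step = max(-step_size, min(step_size, error))
    return q_eq + step
```

Because the commanded torque is bounded by the spring stiffness times the position error, contact forces stay moderate even when the hook meets an unmodeled mechanism -- which is presumably why the authors found the approach forgiving to work with.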
They tested their approach with 10 different doors and drawers, reporting that the robot succeeded in 37 out of 40 trials. What's more, the robot was able to open doors and drawers from initial positions that a static robot would have difficulty handling.
They write: "We empirically demonstrate that the system is robust to common forms of task variation, including variation in the mechanism being operated (tested with 7 doors and 3 drawers), and variation in the pose of the robot's base with respect to the handle."
I think that's researchspeak for "It works!"
Images and video: Georgia Tech's Healthcare Robotics Lab
This is the microrobotics competition arena. Image: NIST
At the IEEE International Conference on Robotics and Automation, in Anchorage, Alaska, this week, the U.S. National Institute of Standards and Technology, the famed NIST, is holding a robotics competition for small robots -- very small robots.
In the Mobile Microrobotics Challenge, robots with dimensions measured in micrometers will square off in a series of challenges taking place on a, uh, microchip playing field [photo above]. First there's a race across a 2-millimeter distance, equivalent to the diameter of a pin head. Then the microbots will compete in a microassembly challenge in which they'll have to insert tiny pegs into tiny holes. Finally, there's a freestyle competition in which each team chooses how to show off its small bot in a grand way.
Researchers will remotely operate the microrobots, viewed under a microscope, using magnetic fields or electrical signals. The bots, made from materials like silicon, gold, aluminum, nickel, and chromium, are a few tens of micrometers to a few hundred micrometers across and weigh just a few nanograms.
As the organizers put it, "These events are designed to 'road test' agility, maneuverability, response to computer control and the ability to move objects—all skills that future industrial microbots will need for tasks such as microsurgery within the human body or the manufacture of tiny components for microscopic electronic devices."
Here's a schematic of the 2 millimeter racetrack:
And here are the contenders:
Team Name: Magic & Voodoo
Organization: Carnegie Mellon University (Pittsburgh, Pennsylvania)
Robot Dimensions: Under 500 micrometers in all dimensions
Materials: Neodymium-Iron-Boron magnetic particles suspended in a polyurethane matrix material
MagPieR (Magnetic - Piezoelectric microRobot)
Team Name: CNRS Team (The French Team)
Organization: FEMTO-ST Institute; ISIR - Institut des Systemes Intelligents et de Robotique (France)
Robot Dimensions: Under 400 micrometers in all dimensions
Materials: The MagPieR microrobot is composed of two distinct layers, an upper layer of ferromagnetic metal (such as nickel) and a lower layer of piezoelectric material.
Team Name: MagMite Team
Organization: ETH Zurich (Switzerland)
Robot Dimensions: Under 300 micrometers in all dimensions
Materials: The device consists of two nickel masses connected through a gold spring.
µMAB (Micro-scale Magnetostrictive Asymmetric thin-film Bimorph)
Team Name: Stevens Institute of Technology
Organization: Stevens Institute of Technology (Hoboken, New Jersey)
Robot Dimensions: Under 600 micrometers in all dimensions
Materials: Nickel, copper
Team Name: University of Maryland
Organization: University of Maryland (College Park, Maryland)
Robot Dimensions: 500 micrometers in all dimensions
Materials: The device is a silicon cube with a layer of nitride on the top surface and platinum on the bottom
EMMA (ElectroMagnetic Microrobotic Actuation)
Team Name: University of Waterloo Nanorobotics Group
Organization: University of Waterloo (Canada)
Robot Dimensions: 500 micrometers and under in all dimensions
Materials: Nickel, cobalt, manganese, phosphorus
Team Name: U.S. Naval Academy Microrobotics Team
Organization: U.S. Naval Academy (Annapolis, Maryland)
Robot Dimensions: 300 micrometers in diameter
Materials: Nickel, gold, polysilicon, nitride
We'll keep you updated about the competition and winners, and try to get some video as well.
UC Berkeley researchers demonstrated a PR2 robot that could fold towels. Image: UC Berkeley
Watch out, towels of the world, more PR2 robots are coming for you.
Willow Garage, the Silicon Valley company dedicated to advancing open robotics, is announcing this morning that it will award 11 PR2 robots to institutions and universities around the world as part of its efforts to speed up research and development in personal robotics.
The company, in Menlo Park, Calif., hopes that the 11 organizations [see list below] in the United States, Europe, and Japan that are receiving PR2 robots at no cost—a total worth over US $4 million—will use the robots to explore new applications and contribute back to the open-source robotics community.
An open robot platform designed and built by Willow, the Personal Robot 2, or PR2, has a mobile base, two arms, a variety of sensors, and 16 CPU cores for computation. But what makes the robot stand out is its software: the open-source Robot Operating System, or ROS, which offers full control of the PR2, including libraries for navigation, manipulation, and perception.
Yesterday I spoke with Eric Berger, Willow's co-director of the personal robotics platform program, who said they’re "really excited about the new applications that will come out of this."
As an example of the possibilities, he mentioned that earlier this year a group at UC Berkeley programmed a PR2 to fold towels. The video of the robot neatly folding a stack of towels went viral.
"People get very excited with the idea of robots doing something that's really useful in their homes," Berger says. "People have seen a lot of military robots, industrial robots, robot vacuum cleaners, but the idea of something like Rosie the Robot, I think it's very powerful."
With its PR2 Beta Program, Willow Garage hopes to foster scientific robotics research, promote the development of new tools to improve the PR2 and other robots, and also help researchers create practical demonstrations and applications of personal robotics.
For the researchers receiving a state-of-the-art personal robot platform worth several hundred thousand dollars, the possibility of working on real-world problems without having to waste time reinventing the robotic wheel, so to speak, is a big deal.
Even more significant, the researchers will be able to "share their software for use by other groups and build on top of each other's work," says Pieter Abbeel, the UC Berkeley professor who created the towel folding demo and is one of the PR2 recipients. "This will significantly boost the rate of progress in robotics, and personal robotics in particular."
"Just as the Mac and PC hardware inspired new applications for personal computers in the 1980s, the PR2 could be the key step in making personal robots a reality," says Ken Goldberg, an IEEE Fellow and UC Berkeley professor. "It's a very exciting step forward for robotics and we're very excited to participate."
The PR2 robot is an advanced mobile and manipulation platform. Image: Willow Garage
Eric Berger told me that one of the PR2 Beta Program's main goals -- and of Willow itself -- is improving the software side of robotics. Today a lot of research groups write their own code, wasting time creating tools that already exist. Willow aims to address that with its ROS, an open-source framework that robotic developers can use and share.
“I think there’s definitely hardware problems to solve, but a lot of the biggest problems that we’re facing right now have to do with software and applications," Berger says. "That's what we're trying to enable with this [program]."
Berger compares the evolution of robotics to that of computers. Robotics is going from something designed to solve a specific problem to something that is a general-purpose system, he says.
"Once robots have reached a certain level of capability, then it's a question of what do you make them do."
Willow says the ROS software is BSD-licensed, making it completely free for anyone to use and change, and free for companies to commercialize on. The company hopes advances in personal robotics could have an impact in a wide range of industries, including retail, health care, home care, automotive, and manufacturing.
Willow had announced the PR2 Beta Program early this year, inviting research groups to submit proposals showing how they'd use a PR2. Willow received 78 proposals from all over the world and selected 10—adding an 11th recipient at the last minute. The selected proposals include three in Europe and one in Japan.
In selecting the 11 PR2 recipients, Berger said they wanted diversity in terms of applications, but at the same time they focused on those that could make the best use of PR2's mobility and manipulation capabilities. The selected institutions will pursue their research and development goals and regularly meet to share their progress and explore new applications together.
Here's the list of lucky 11 PR2 recipients that Willow is releasing this morning:
* Albert-Ludwigs-Universität Freiburg with the proposal TidyUpRobot
The University of Freiburg's strength in mapping has led to multiple open-source libraries in wide use. Their group will program the PR2 to do tidy-up tasks like clearing a table, while working on difficult underlying capabilities, like understanding how drawers and refrigerators open, how to recognize different types of objects, and how to integrate this information with the robot's map. Their goal is to detect, grasp, and put away objects with very high reliability, and reproduce these results at other PR2 Beta Program sites.
* Bosch with the proposal Developing the Personal Robotics Market: Enabling New Applications Through Novel Sensors and Shared Autonomy Bosch will bring their expertise in manufacturing, sensing technologies and consumer products. Bosch will be making robotic sensors available to members of the PR2 Beta Program, including a limited number of "skins" that will give the PR2 the ability to feel its environment. Bosch will also make their PR2 remotely accessible and will expand on the libraries they've released for ROS.
* Georgia Institute of Technology with the proposal Assistive Mobile Manipulation for Older Adults at Home
The Healthcare Robotics Lab at Georgia Tech will be placing the PR2 in an "Aware Home" to study how robots can help with homecare and create assistive capabilities for older adults. Their research includes creating easier ways for adults to interact with robots, and enabling robots to interact with everyday objects like drawers, lamps, and light switches. Their human-robot interaction focus will help ensure that the software development is closely connected to real-world needs.
* Katholieke Universiteit Leuven with the proposal Unified Framework for Task Specification, Control and Coordination for Mobile Manipulation KU Leuven in Belgium is a key player in the open-source robotics community. As one of the founding institutions for the Orocos Project, they will be improving the tools and libraries used to program robots in ROS, by, for example, integrating ROS with Blender. They will also be working on getting the PR2 and people to perform tasks together, like carrying objects through a crowded environment.
* MIT CSAIL with the proposal Mobile Manipulation in Human-Centered Environments
The diverse MIT CSAIL group will use the PR2 to study the key capabilities needed by robots that operate in human-centered environments, such as safe navigation, interaction with humans via natural language, object recognition, and planning for complex goals. Their work will allow robots to build the maps they need in order to move around in buildings as large as MIT’s 11-story Stata Center. They will also program the PR2 to put away groceries and do simple cleaning tasks.
* Stanford University with the proposal STAIR on PR2 PR1 was developed in Kenneth Salisbury's lab at Stanford, and ROS was developed from the STAIR (Stanford AI Robot) Project. We're very excited that the PR2 will become the new platform for the STAIR Project's innovative research. Their team will work on several applications, which include taking inventory, retrieving items scattered about a building, and clearing a table after a meal.
* Technische Universität München with the proposal CRAM: Cognitive Robot Abstract Machine TUM will research giving the PR2 the artificial intelligence skills and 3D perception to reason about what it is doing while it performs various kitchen tasks. These combined improvements will help the PR2 perform more complicated tasks like setting a table, emptying a dishwasher, preparing meals, and other kitchen-related tasks.
* University of California, Berkeley with the proposal PR2 Beta Program: A Platform for Personal Robotics
The PR2 is now known as the "Towel-Folding Robot," thanks to the impressive efforts of Pieter Abbeel's lab at Berkeley. In two short months, they were able to get the PR2 to fold fifty towels in a row. Berkeley will tackle the much more difficult challenge of doing laundry, from dirty laundry piles to neatly folded clothes. In addition, their team is interested in hierarchical planning, object recognition, and assembly and manufacturing tasks (e.g. IKEA products) through learning by demonstration.
* University of Pennsylvania with the proposal PR2GRASP: From Perception and Reasoning to Grasping
The GRASP Lab proposal aims to tackle some of the challenges facing household robotics. These challenges include tracking people and planning for navigation in dynamic environments, and transferring handheld objects between robots and humans. Their contributions will include giving PR2 a tool belt to change its gripper on the fly, helping it track and navigate around people, and performing difficult two-arm tasks like opening spring-loaded doors.
* University of Southern California with the proposal Persistent and Persuasive Personal Robots (P^3R): Towards Networked, Mobile, Assistive Robotics USC has already demonstrated teaching the PR2 basic motor skills so that it can adapt to different situations and tasks, such as pouring a cup. They will continue to expand on this work in imitation learning and building and refining skill libraries, while also doing research in human-robot interaction and self-calibration for sensors.
* University of Tokyo, Jouhou System Kougaku (JSK) Laboratory with the proposal Autonomous Motion Planning for Daily Tasks in Human Environments using Collaborating Robots
The JSK Laboratory at the University of Tokyo is one of the top humanoid robotics labs in the world. Their goal is to see robots safely and autonomously perform daily, human-like tasks such as retrieving objects and cleaning up domestic environments. They'll also be working on getting the PR2 to work together with other robots, as well as integrating the ROS, EusLisp, and OpenRAVE frameworks.
And here's a video describing the PR2 Beta Program:
Underwater robots are one of my areas of interest and the last few months have been full of cool news in the UUV realm. From new research initiatives to innovative applications, there's some fun stuff going on. Briefly, some highlights:
In current events news, the BP oil spill in the Gulf of Mexico led BP to attempt using Oceaneering Remotely Operated Vehicles (ROVs) to shut off the pipeline (read more about that here). They were ultimately unsuccessful, and BP is collaborating with the US military to see if they have any underwater systems that may perform better, but this is one situation where robots are definitely doing important work in place of humans at depths and in environmental conditions that are extremely dangerous.
In mildly horrifying news, the South Korean government is spending $18M to develop underwater crawler robots to do geological and biological surveys. Unfortunately, they're six-legged, and I dislike creepy crawly things with more than four legs. They'll only be able to crawl at about 1 mph, but still -- creepy.
AUVs can have serious political implications -- in this case, they're surveying the continental shelf off the coast of Canada in the Arctic to determine exactly where the shelf ends. Surveying these areas may allow Canada to claim more of the ocean floor for mineral or oil deposits -- not to mention all the shipping lanes that are opening up as the Arctic ice is melting. (We've previously discussed some of the challenges of AUV operation under the Arctic ice.)
And finally, in some sadder news, a very famous AUV named ABE, developed by the Woods Hole Oceanographic Institution, was lost during a mission off the coast of Chile. ABE was technically in retirement and had been reassembled for one last mission when some catastrophic failure -- probably of one of the glass buoyancy spheres -- at 3,000 m depth caused what we in the biz refer to euphemistically as an "unintended depth excursion." ABE's loss hits the entire underwater robotics community -- it successfully completed many important research missions and pioneered a lot of deep-ocean robotic technology. Here's an interesting bit from one of the team's engineers on what may have caused the failure, along with a touching tribute from one of ABE's inventors (borrowed from Robert Louis Stevenson).
Dr. Masaaki Kumagai, director of the Robot Development Engineering Laboratory at Tohoku Gakuin University, in Tagajo City, Japan, has built wheeled robots, crawling robots, quadruped robots, biped robots, and biped robots on roller skates.
Then one day a student suggested they build a robot that would balance on a ball.
Dr. Kumagai thought it was a wonderful idea.
The robot they built rides on a rubber-coated bowling ball, which is driven by three omnidirectional wheels. The robot can not only stand still but also move in any direction and pivot around its vertical axis.
It can work as a mobile tray to transport objects like cocktails, and it can also serve as an omnidirectional supporting platform to help people carry heavy objects.
Such a ball-balancing design is like an inverted pendulum, and thus naturally unstable, but it offers advantages: it has a small footprint and can move in any direction without changing its orientation.
In other words, whereas a two-wheel self-balancing robot has to turn before it can drive in a different direction, a ball-riding robot can promptly drive in any direction. Try that, Segway!
Dr. Kumagai and student Takaya Ochiai built three robots and tested them with 10-kilogram bricks. They even made them work together to carry a large wooden frame.
The robot is about half a meter high and weighs 7.5 kilograms. The ball is a 3.6-kg bowling ball, 20 centimeters in diameter, coated with rubber spray.
Its ball driving mechanism uses three omnidirectional wheels developed at Japan's R&D institute RIKEN [see photo, right].
To power the wheels, they chose NIDEC motors and micro-step controllers to achieve a rate of 0.225 degree per step, which made the rotation of the wheels smooth.
The robot's control system runs on a 16-bit microcontroller, which receives data from two sets of Analog Devices gyroscopes and accelerometers.
It's interesting that they had to use both gyros and accelerometers. The gyros can detect fast movements (high-frequency components), but their drift makes them poorly suited for deriving the robot's absolute inclination. The accelerometers, on the other hand, can measure the inclination, but they're affected by the robot's own motion, so they can't be used alone either.
The control strategy is the same used for other inverted pendulum-type systems. The goal of the control system is to keep the inclination at zero degrees and keep the ball on the same spot. If you push the robot, it will try to balance itself and return to the original location.
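One common way to fuse the two sensors along the lines described above is a complementary filter: trust the gyro at high frequency and let the accelerometer correct the slow drift. This sketch is purely illustrative and is not Dr. Kumagai's actual implementation.

```python
# Hedged sketch of a complementary filter fusing a gyro and an
# accelerometer into a single tilt estimate (one axis).
import math

def complementary_filter(alpha):
    """alpha near 1 trusts the integrated gyro at high frequency;
    the accelerometer slowly corrects the low-frequency drift."""
    state = {"angle": 0.0}

    def update(gyro_rate, accel_x, accel_z, dt):
        # Integrate the gyro rate for fast, drift-prone tilt changes.
        gyro_angle = state["angle"] + gyro_rate * dt
        # The gravity vector seen by the accelerometer gives an absolute
        # but motion-corrupted tilt reference.
        accel_angle = math.atan2(accel_x, accel_z)
        # Blend the two: high-pass the gyro, low-pass the accelerometer.
        state["angle"] = alpha * gyro_angle + (1 - alpha) * accel_angle
        return state["angle"]

    return update
```

The fused angle then feeds the inverted-pendulum controller, which drives the omnidirectional wheels to hold the inclination at zero and keep the ball in place.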
The idea of ball-balancing robots and one-wheeled robots dates back to the 1970s. Today even hobbyists have shown off cool designs, and a few large-scale robots have been built in academia. Perhaps the most famous is Ballbot, developed by researchers at Carnegie Mellon. It's a dynamically stable mobile robot that is tall enough to interact with people. (Watch videos here.)
Dr. Kumagai's robot added some new tricks, including something other ball-bots cannot do: thanks to its innovative omnidirectional wheel driving system, it can rotate around its vertical axis.
The robot has two control modes. The first tries to keep the robot stable and on the same spot, as described above. The other is a passive mode, in which the robot remains stable but you can easily push it around, even using just a finger [photo, right].
What's next? Dr. Kumagai wants to make the robot more user-friendly for carrying things and he plans to combine several of them in cooperative behaviors.
Update from Dr. Kumagai: "A month ago, the robot was named BallIP, short for Ball Inverted Pendulum. In addition, Mr. Takaya Ochiai finished his master course at Tohoku Gakuin this March -- and he got a job!"
Images and video: Dr. Masaaki Kumagai/Tohoku Gakuin University
It's a problem that has long annoyed virtual reality researchers: VR systems can create a good experience when users are observing or manipulating the virtual world (think Michael Douglas in "Disclosure") but walking is another story. Take a stroll in a virtual space and you might end up with your face against a real-world wall.
The same problem is becoming apparent in teleoperated robots. Imagine you were teleoperating a humanoid robot by wearing a sensor suit that captures all your body movements. You want to make the robot walk across a room at the remote location -- but the room you're in is much smaller. Hmm.
Researchers have built a variety of contraptions to deal with the problem. Like a huge hamster ball for people, for example.
Or a giant treadmill. The CyberWalk platform is a large 2D omni-directional treadmill that allows unconstrained locomotion, adjusting its speed and direction to keep the user always close to the center. At 5 meters on a side, it's the largest VR platform in the world.
It consists of an array of synchronous linear belts. The array moves as a whole in one direction while each belt can also move in a perpendicular direction. Diagonal movement is possible by combining the two linear motions.
Built by a consortium of German, Italian, and Swiss labs, the machine currently resides at the Max Planck Institute for Biological Cybernetics, in Tubingen, Germany, where it's been in operation for over two years.
Last year at IROS, Alessandro De Luca and Raffaella Mattone from the Universita di Roma "La Sapienza," in Rome, Italy, and Paolo Robuffo Giordano and Heinrich H. Bulthoff from the Max Planck Institute for Biological Cybernetics presented details of the machine's control system.
According to the researchers, previous work on similar platforms paid little attention to control algorithms, relying on simple PID and heuristic controllers.
The Italian and German researchers came up with a kinematic model of the machine and from there devised a control strategy. The challenge is that the control system needs to adapt to changes in the user's direction and speed -- variables it can't measure directly and must therefore estimate.
By precisely monitoring the position of the user on the platform using a Vicon motion-capture system, the controller computes estimates for the two variables and tries to adjust the speeds of the linear belts to keep the user close to the center -- all without abrupt accelerations.
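A toy version of that estimate-and-center loop can be sketched as follows. This is an illustrative proportional controller with a finite-difference velocity estimate and a rate limit on the command, not the published CyberWalk controller; all names and gains are assumptions:

```python
import numpy as np

def platform_command(pos, prev_pos, prev_cmd, dt, kp=0.8, a_max=0.5):
    """One step of a toy centering controller (illustrative only):
    estimate the walker's drift from successive motion-capture
    positions, then command a platform velocity that cancels the
    drift and pulls the walker back toward the center, while a
    rate limit keeps accelerations gentle.

    pos, prev_pos : 2D user positions from motion capture (meters)
    prev_cmd      : previous platform velocity command (m/s)
    dt            : control period (seconds)
    """
    user_vel = (pos - prev_pos) / dt              # finite-difference estimate
    cmd = -(user_vel + kp * pos)                  # cancel drift, recenter
    # Limit the change in command so the surface never jerks the user.
    dv = np.clip(cmd - prev_cmd, -a_max * dt, a_max * dt)
    return prev_cmd + dv

# Walker drifting in +x at 1 m/s, 0.5 m off center, 100 Hz loop:
cmd = platform_command(np.array([0.5, 0.0]), np.array([0.49, 0.0]),
                       np.zeros(2), 0.01)
print(cmd)  # a small step toward a negative-x surface velocity
```

The rate limit is the part that matters for comfort: the controller may want a large corrective velocity, but it is only allowed to approach it gradually.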
The researchers also devised a way of using a frame of reference for the controller that varies with the user's direction. This method allowed the CyberWalk platform to provide a more natural walking experience, without making the user's legs cross when changing direction. The video above shows the results.
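A direction-dependent frame like the one described above amounts to rotating world-frame quantities into a frame aligned with the walker's heading. A minimal sketch of that rotation (an illustrative construction; the paper's formulation differs in detail):

```python
import math

def to_heading_frame(vx, vy, heading_rad):
    """Rotate a world-frame velocity into a frame aligned with the
    walker's heading, so control acts 'forward/backward' and
    'sideways' relative to the user rather than along fixed
    platform axes."""
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    forward = c * vx + s * vy
    sideways = -s * vx + c * vy
    return forward, sideways

# A walker heading along +y: a +y world velocity is pure "forward".
f, sw = to_heading_frame(0.0, 1.0, math.pi / 2)
print(round(f, 3), round(sw, 3))  # 1.0 0.0
```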
The CyberWalk platform is one of two locomotion devices developed as part of the European Union-funded Project CyberWalk. The other is a small-scale ball-array platform dubbed CyberCarpet.
The Technical University of Munich, another partner in the CyberWalk consortium, designed and built both platforms. ETH Zurich, also a partner, was responsible for the VR side -- creating a 3D model of ancient Pompeii and synchronizing the motion shown on the walker's head-mounted display.
You can read the researchers' paper, "Control Design and Experimental Evaluation of the 2D CyberWalk Platform," here.
Dennis Hong, a professor of mechanical engineering and director of Virginia Tech's Robotics & Mechanisms Laboratory, or RoMeLa, has created robots with the most unusual shapes and sizes -- from strange multi-legged robots to amoeba-like robots with no legs at all.
Now he's unveiling a new robot with a more conventional shape: a full-sized humanoid robot called CHARLI, or Cognitive Humanoid Autonomous Robot with Learning Intelligence.
The robot is 5 feet (1.52 meters) tall, untethered, and autonomous, capable of walking and gesturing.
But its biggest innovation is that it does not use rotary actuators at its joints.
Most humanoid robots -- Asimo, Hubo, Mahru -- use DC motors to rotate various joints (typically at the waist, hips, knees, and ankles). The approach makes sense and, in fact, today's humanoids can walk, run, and climb stairs. However, this approach doesn't correspond to how our own bodies work, with our muscles contracting and relaxing to rotate our various joints.
Dr. Hong and his students wanted a system based on human anatomy -- and one they could build quickly and on a small budget. So to generate movement, they engineered an ingenious linkage system of pulleys and springs. This actuation system is much lighter than those of other humanoids, and the team was able to design and build it in 1.5 years with only about US $20,000, plus donated software and hardware like LabVIEW and SingleBoard RIO.
Dr. Hong is already working on a new version of the robot, CHARLI-H, that will use linear actuators to power the new legs. You can see the actuator, the black cylinder, in the photo below:
Linear actuators have been tried in humanoids before but, as far as I know, without much success. So I look forward to seeing how the approach will work out in this case. Will CHARLI be able to walk more naturally than Asimo? Only time will tell.
Dr. Hong, for his part, remains confident he'll be able to improve the overall capabilities of humanoid robots, in particular bipedal locomotion.
Or as he put it as CHARLI took its first steps, "One small step for a robot, one giant leap for robotics."
Watch the robot (the current version is called CHARLI-L) in action:
Photos and video: RoMeLa and Virginia Tech
UPDATE: Corrected details about the use of linear actuators, which will be present in an upcoming version of the robot.
Mahru knows its way around a kitchen, popping a snack into the microwave and bringing it to you, as KIST researchers demonstrated when they unveiled the robot's latest version, Mahru Z, early this year.
Mahru can also dance and perform Taekwondo moves. (More on that later.)
Now how do the KIST researchers go about programming Mahru to do all that? I asked this question when I visited KIST a while ago.
Dr. Bum-Jae You, head of KIST's Cognitive Robotics Center, in Seoul, told me that they use two approaches. One involves filming a person with body markers using a traditional optical motion-capture system to track the body movements. The other, which they've been using more recently, relies on a wearable inertial motion-capture suit [photo above].
A person wears the suit while performing various tasks. The movements are recorded, and the robot is then programmed to reproduce the tasks while adapting to changes in the space, such as displaced objects.
But the cool thing is, the capture and reproduction of movements can also take place in real time. When I visited, Dr. You and his students demonstrated this capability using a Mahru III robot.
When the operator moves his arms, Mahru moves its arms, with virtually no delay. Walking does lag, though -- after the operator takes a few steps, it takes some time for the robot to follow suit. But Dr. You told me they're working to make that real time as well.
Mahru's arm movements under teleoperation are quite impressive -- fast and precise, and also safe, thanks to force-torque sensors and compliant control. Eventually, Dr. You says, a person will be able to teleoperate a robot to accomplish manipulation tasks -- and also walk over to people and shake their hands.
Note on the photo above the operator with the motion-capture suit (behind the robot) extending his right hand -- while the robot does the same.
Dr. You and his team also showed me Mahru's dancing capabilities. This demo involved an earlier version of the Mahru robot [below]. Really cool to see the "guts" of the machine -- and the sticker saying "Dancer."