Automaton

At Sea with Underwater Robots

Underwater robots are one of my areas of interest, and the last few months have been full of cool news in the UUV realm. From new research initiatives to innovative applications, there's some fun stuff going on. Briefly, some highlights:

  • In current events, the BP oil spill in the Gulf of Mexico led BP to attempt using Oceaneering Remotely Operated Vehicles (ROVs) to shut off the pipeline (read more about that here). They were ultimately unsuccessful, and BP is now collaborating with the US military to see if they have any underwater systems that may perform better. Still, this is one situation where robots are definitely doing important work in place of humans, at depths and in environmental conditions that are extremely dangerous.

  • In mildly horrifying news, the South Korean government is spending $18M to develop underwater crawler robots to do geological and biological surveys. Unfortunately, they're six-legged, and I dislike creepy crawly things with more than four legs. They'll only be able to crawl at about 1 mph, but still -- creepy.

  • AUVs can have serious political implications -- in this case, they're surveying the continental shelf off the coast of Canada in the Arctic to determine exactly where the shelf ends. Surveying these areas may allow Canada to claim more of the ocean floor for mineral or oil deposits -- not to mention all the shipping lanes that are opening up as the Arctic ice melts. (We've previously discussed some of the challenges of AUV operation under the Arctic ice.)

  • And finally, in some sadder news, a very famous AUV named ABE, developed by the Woods Hole Oceanographic Institution, was lost during a mission off the coast of Chile. ABE was technically in retirement and had been reassembled for one last mission when some catastrophic failure -- probably of one of its glass buoyancy spheres -- at 3,000 m depth caused what we in the biz euphemistically call an "unintended depth excursion." ABE's loss hits the entire underwater robotics community -- it successfully completed many important research missions and pioneered a lot of deep-ocean robotic technology. Here's an interesting bit from one of the team's engineers on what may have caused the failure, along with a touching tribute from one of ABE's inventors (borrowed from Robert Louis Stevenson).

Image from the Korea Times

A Robot That Balances on a Ball

Dr. Masaaki Kumagai, director of the Robot Development Engineering Laboratory at Tohoku Gakuin University, in Tagajo City, Japan, has built wheeled robots, crawling robots, quadruped robots, biped robots, and biped robots on roller skates.

Then one day a student suggested they build a robot that would balance on a ball.

Dr. Kumagai thought it was a wonderful idea.

The robot they built rides on a rubber-coated bowling ball, which is driven by three omnidirectional wheels. The robot can not only stand still but also move in any direction and pivot around its vertical axis.

It can work as a mobile tray to transport objects such as cocktails, and it can also serve as an omnidirectional supporting platform to help people carry heavy objects.

Such a ball-balancing design is like an inverted pendulum, and thus naturally unstable, but it offers advantages: it has a small footprint and can move in any direction without changing its orientation.

In other words, whereas a two-wheel self-balancing robot has to turn before it can drive in a different direction, a ball-riding robot can promptly drive in any direction. Try that, Segway!

Dr. Kumagai and student Takaya Ochiai built three robots and tested them with 10-kilogram bricks. They even made them work together to carry a large wooden frame.

Watch:

The robot is about half a meter tall and weighs 7.5 kg. The ball is a 3.6-kg bowling ball, 20 centimeters in diameter, coated with rubber spray.

Its ball driving mechanism uses three omnidirectional wheels developed at Japan's R&D institute RIKEN [see photo, right].

To power the wheels, they chose NIDEC motors with micro-step controllers that achieve a rate of 0.225 degree per step, making the wheels' rotation smooth.
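To see how three such wheels can produce arbitrary ball motion, here's a minimal planar sketch. The angles, radius, and function names are my own illustration, not BallIP's actual geometry, which also involves the wheels' inclined contact with the ball:

```python
import math

# Hypothetical planar model: map a desired ball surface velocity (vx, vy)
# and a spin rate wz about the vertical axis to the tangential speed each
# of three omni wheels, spaced 120 degrees apart, must produce.
WHEEL_ANGLES = [0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0]
R = 0.10  # assumed distance from spin axis to wheel contact, meters

def wheel_speeds(vx, vy, wz):
    """Return the tangential speed required of each omni wheel."""
    return [-math.sin(a) * vx + math.cos(a) * vy + R * wz
            for a in WHEEL_ANGLES]
```

A pure spin command drives all three wheels at the same speed, while a pure translation command sums to zero across the wheels -- which is what lets the robot translate and pivot independently.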

The robot's control system runs on a 16-bit microcontroller, which receives data from two sets of Analog Devices gyroscopes and accelerometers.

It's interesting that they had to use both gyros and accelerometers. The gyros detect fast movements -- the high-frequency components -- but their output drifts when integrated, so they're not suited to deriving the robot's inclination on their own. The accelerometers, on the other hand, can measure the inclination directly, but they're disturbed by the robot's own motion, so they can't be used alone either.
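A common way to combine the two sensors is a complementary filter: the gyro handles the fast motion while the accelerometer's gravity reading slowly corrects the drift. Here's a minimal sketch of that idea -- the blend constant and function names are illustrative assumptions, not values from the BallIP work:

```python
import math

ALPHA = 0.98  # assumed per-step trust in the integrated gyro

def update_tilt(tilt, gyro_rate, accel_x, accel_z, dt):
    """One filter step; returns the new tilt estimate in radians."""
    gyro_tilt = tilt + gyro_rate * dt          # fast, but drifts over time
    accel_tilt = math.atan2(accel_x, accel_z)  # drift-free, but noisy in motion
    return ALPHA * gyro_tilt + (1.0 - ALPHA) * accel_tilt
```

Run repeatedly, the estimate tracks quick gyro-sensed motion step to step while converging to the accelerometer's inclination over the long run.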

The control strategy is the same as that used for other inverted-pendulum systems. The goal of the control system is to keep the inclination at zero degrees and to keep the ball on the same spot. If you push the robot, it will try to balance itself and return to its original location.
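That control goal can be sketched as a simple state-feedback law: one PD term drives the inclination to zero, another pulls the ball back to its original spot. The gains and names below are illustrative placeholders, not BallIP's actual values:

```python
# Hypothetical gains for a one-axis inverted-pendulum balance law.
K_TILT, K_TILT_RATE = 50.0, 5.0   # inclination feedback (assumed)
K_POS, K_VEL = 2.0, 1.0           # ball-position feedback (assumed)

def balance_command(tilt, tilt_rate, pos, vel):
    """Ball acceleration command that rights the robot and recenters it."""
    return (K_TILT * tilt + K_TILT_RATE * tilt_rate
            + K_POS * pos + K_VEL * vel)
```

With everything at zero the command is zero; a push that tips the robot produces a command that accelerates the ball under the lean, which is exactly the "drive under the fall, then return home" behavior described above.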

The idea of ball-balancing robots and one-wheeled robots dates back to the 1970s. Today even hobbyists have shown off cool designs, and a few large-scale robots have been built in academia. Perhaps the most famous is Ballbot, developed by researchers at Carnegie Mellon. It's a dynamically stable mobile robot that is tall enough to interact with people. (Watch videos here.)

Dr. Kumagai's robot added some new tricks, including something other ball-bots cannot do: thanks to its innovative omnidirectional wheel driving system, it can rotate around its vertical axis.

The robot has two control modes. The first tries to keep the robot stable and on the same spot, as described above. The other is a passive mode, in which the robot remains stable but you can easily push it around, even using just a finger [photo, right].

What's next? Dr. Kumagai wants to make the robot more user-friendly for carrying things and he plans to combine several of them in cooperative behaviors.

Update from Dr. Kumagai: "A month ago, the robot was named BallIP, short for Ball Inverted Pendulum. In addition, Mr. Takaya Ochiai finished his master course at Tohoku Gakuin this March -- and he got a job!"

More photos:


Images and video: Dr. Masaaki Kumagai/Tohoku Gakuin University

READ ALSO:

The Man Who Made a Copy of Himself
April 2010

Article: A Japanese roboticist is building androids to understand humans--starting with himself

Geminoid F: Ultrarealistic Female Android
Tue, April 20, 2010

Blog Post: IEEE Spectrum obtained exclusive images and video of Hiroshi Ishiguro's new android

Honda's U3-X Unicycle of the Future
Mon, April 12, 2010

Blog Post: It only has one wheel, but Honda's futuristic personal mobility device is no pedal-pusher

Robot Operated by Wii Controller
Thu, April 22, 2010

Blog Post: Japanese researchers demonstrate a robotic wheelchair operated with Wii game controller

CyberWalk: Giant Omni-Directional Treadmill To Explore Virtual Worlds

It's a problem that has long annoyed virtual reality researchers: VR systems can create a good experience when users are observing or manipulating the virtual world (think Michael Douglas in "Disclosure") but walking is another story. Take a stroll in a virtual space and you might end up with your face against a real-world wall.

The same problem is becoming apparent in teleoperated robots. Imagine you were teleoperating a humanoid robot by wearing a sensor suit that captures all your body movements. You want to make the robot walk across a room at the remote location -- but the room you're in is much smaller. Hmm.

Researchers have built a variety of contraptions to deal with the problem -- a huge hamster ball for people, for example.

Or a giant treadmill. The CyberWalk platform is a large 2D omnidirectional platform that allows unconstrained locomotion, adjusting its speed and direction to keep the user always close to the center. Measuring 5 meters on each side, it's the largest VR platform in the world.

It consists of an array of synchronous linear belts. The array moves as a whole in one direction while each belt can also move in a perpendicular direction. Diagonal movement is possible by combining the two linear motions.
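That decomposition takes only a couple of lines: a desired compensation velocity, given as a magnitude and heading, splits directly into the array's x-axis speed and the belts' y-axis speed. The function and argument names here are mine, for illustration:

```python
import math

def platform_commands(speed, heading):
    """Split a desired planar velocity (magnitude in m/s, heading in
    radians) into the array's x speed and the belts' y speed."""
    array_speed = speed * math.cos(heading)  # array moving as a whole
    belt_speed = speed * math.sin(heading)   # belts moving perpendicular
    return array_speed, belt_speed
```

A 45-degree heading, for instance, commands equal array and belt speeds, producing the diagonal motion described above.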

Built by a consortium of German, Italian, and Swiss labs, the machine currently resides at the Max Planck Institute for Biological Cybernetics, in Tübingen, Germany, where it's been in operation for over two years.

Last year at IROS, Alessandro De Luca and Raffaella Mattone from the Università di Roma "La Sapienza," in Rome, Italy, and Paolo Robuffo Giordano and Heinrich H. Bülthoff from the Max Planck Institute for Biological Cybernetics presented details of the machine's control system.

According to the researchers, previous work on similar platforms paid little attention to control algorithms, relying on simple PID and heuristic controllers.

The Italian and German researchers came up with a kinematic model for the machine and from there they devised a control strategy. Basically the challenge is that the control system needs to adapt to changes in the user's direction and speed -- variables that it can't measure directly, so it needs to estimate them.

By precisely monitoring the position of the user on the platform using a Vicon motion-capture system, the controller computes estimates for the two variables and tries to adjust the speeds of the linear belts to keep the user close to the center -- all without abrupt accelerations.
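A hedged sketch of that center-keeping idea along one axis: the controller cancels the user's estimated velocity, adds a gentle pull toward the center, and rate-limits the change in its command to avoid abrupt accelerations. The gains and limits are illustrative assumptions, not values from the paper:

```python
K_CENTER = 0.5       # pull toward the center, 1/s (assumed)
MAX_DELTA_V = 0.05   # max speed change per control step, m/s (assumed)

def platform_velocity(prev_cmd, user_pos, user_vel_est):
    """New platform velocity command along one axis: cancel the user's
    estimated velocity, recenter, and limit the per-step change."""
    target = user_vel_est + K_CENTER * user_pos
    delta = max(-MAX_DELTA_V, min(MAX_DELTA_V, target - prev_cmd))
    return prev_cmd + delta
```

If the user suddenly starts walking, the platform ramps up toward the canceling speed over several control steps instead of jerking underfoot.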

The researchers also devised a way of using a frame of reference for the controller that varies with the user's direction. This method allowed the CyberWalk platform to provide a more natural walking experience, without making the user's legs cross when changing direction. The video above shows the results.

The CyberWalk platform is one of two locomotion devices developed as part of the European Union-funded Project CyberWalk. The other is a small-scale ball-array platform dubbed CyberCarpet.

The Technical University of Munich, another partner in the CyberWalk consortium, designed and built both platforms. And ETH Zurich, another partner, was responsible for the VR part -- creating a 3D VR model of ancient Pompeii and implementing the motion synchronization on the head-mounted display of the human walker.

You can read the researchers' paper, "Control Design and Experimental Evaluation of the 2D CyberWalk Platform," here.

Photo: CyberWalk Project

READ ALSO:

Riding Honda's U3-X Unicycle of the Future
Mon, April 12, 2010

Blog Post: It only has one wheel, but Honda's futuristic personal mobility device is no pedal-pusher

Personal Mobility Robot Operated by Wii Controller
Thu, April 22, 2010

Blog Post: Japanese researchers demonstrate a robotic wheelchair operated with Wii game controller

World Robot Population Reaches 8.6 Million
Wed, April 14, 2010

Blog Post: There are 8.6 million robots in the world -- or more than one automaton for every person in Austria

Robosoft Unveils Kompai Robot To Assist Elderly, Disabled
Tue, March 09, 2010

Blog Post: The French robotics company has introduced a robot designed to assist elderly and disabled people in their daily activities

Virginia Tech's Humanoid Robot CHARLI Walks Tall


Dennis Hong, a professor of mechanical engineering and director of Virginia Tech's Robotics & Mechanisms Laboratory, or RoMeLa, has created robots with the most unusual shapes and sizes -- from strange multi-legged robots to amoeba-like robots with no legs at all.

Now he's unveiling a new robot with a more conventional shape: a full-sized humanoid robot called CHARLI, or Cognitive Humanoid Autonomous Robot with Learning Intelligence.

The robot is 5 feet (1.52 meters) tall, untethered, and autonomous, capable of walking and gesturing.

But its biggest innovation is that it does not use rotational joints.

Most humanoid robots -- Asimo, Hubo, Mahru -- use DC motors to rotate various joints (typically at the waist, hips, knees, and ankles). The approach makes sense and, in fact, today's humanoids can walk, run, and climb stairs. However, this approach doesn't correspond to how our own bodies work, with our muscles contracting and relaxing to rotate our various joints.

Dr. Hong and his students wanted a system based on human anatomy -- one they could build in a short time and on a small budget. So to generate movement, they engineered an ingenious linkage system of pulleys and springs. This actuation system is much lighter than those of other humanoids, and the team was able to design and build it in 1.5 years with only about US $20,000 and donated software and hardware like LabVIEW and SingleBoard RIO.

Dr. Hong is already working on a new version of the robot, CHARLI-H, that will use linear actuators to power its new legs. You can see the actuator, the black cylinder, in the photo below:

Linear actuators have been tried in humanoids before but, as far as I know, without much success. So I look forward to seeing how the approach will work out in this case. Will CHARLI be able to walk more naturally than Asimo? Only time will tell.

Dr. Hong, for his part, remains confident he'll be able to improve the overall capabilities of humanoid robots, in particular bipedal locomotion.

Or as he put it as CHARLI took its first steps, "One small step for a robot, one giant leap for robotics."

Watch the robot (the current version is called CHARLI-L) in action:

More photos:

Photos and video: RoMeLa and Virginia Tech

UPDATE: Corrected details about the use of linear actuators, which will be present in an upcoming version of the robot.

Humanoid Robot Mahru Mimics a Person's Movements in Real Time


Mahru, an advanced biped humanoid developed by the Korea Institute of Science and Technology (KIST) and Samsung Electronics, has a bunch of skills.

Mahru knows its way around a kitchen, popping a snack into the microwave and bringing it to you, as KIST researchers demonstrated when they unveiled the robot's latest version, Mahru Z, early this year.

Mahru can also dance and perform Taekwondo moves. (More on that later.)

Now how do the KIST researchers go about programming Mahru to do all that? I asked this question when I visited KIST a while ago.

Dr. Bum-Jae You, head of KIST's Cognitive Robotics Center, in Seoul, told me that they use two approaches. One involves filming a person with body markers using a traditional optical motion-capture system to track the body movements. The other, which they've been using more recently, relies on a wearable inertial motion-capture suit [photo above].

A person wears the suit while performing various tasks. The movements are recorded, and the robot is then programmed to reproduce the tasks while adapting to changes in the space, such as displaced objects.

But the cool thing is, the capture and reproduction of movements can also take place in real time. When I visited, Dr. You and his students demonstrated this capability using a Mahru III robot.

Watch the demo:

Watch in HD

When the operator moves his arms, Mahru moves its arms. There's virtually no delay. There's a delay, though, in the walking part -- after the operator takes a few steps, it takes some time for the robot to follow suit. But Dr. You told me they're working to do that in real time as well.

There are several telepresence robots out there, and many more should be around soon. But typically an operator has control only over the robot's legs or wheels or head; very few allow for full-body remote operation.

Mahru's arm movements under teleoperation are quite impressive -- fast and precise, and also safe, thanks to force-torque sensors and compliant control. Eventually, Dr. You says, a person will be able to teleoperate a robot to accomplish manipulation tasks -- and also walk over to people and shake their hands.

Note on the photo above the operator with the motion-capture suit (behind the robot) extending his right hand -- while the robot does the same.

Dr. You and his team also showed me Mahru's dancing capabilities. This demo involved an earlier version of the Mahru robot [below]. Really cool to see the "guts" of the machine -- and the sticker saying "Dancer."

Watch the dance:

Watch in HD

I asked Dr. You if that was traditional Korean dance. Nope, he said, laughing. The choreography comes from the Wonder Girls.

READ ALSO:

Hubo II Humanoid Robot Is Lighter and Faster, Makes His Creator Proud
Tue, March 30, 2010

Blog Post: The creator of Albert Hubo is back with a new, better -- and less creepy -- humanoid robot

Kojiro Humanoid Robot Mimics Your Musculoskeletal System
Thu, March 04, 2010

Blog Post: University of Tokyo researchers are developing a humanoid that mimics the way our skeleton, muscles, and tendons work to generate motion

Geminoid F: More Video and Photos of the Female Android
Tue, April 20, 2010

Blog Post: IEEE Spectrum obtained exclusive images and video of Hiroshi Ishiguro's new android

Hiroshi Ishiguro: The Man Who Made a Copy of Himself
April 2010

Article: A Japanese roboticist is building androids to understand humans--starting with himself

Mars Escape Online Game Helps To Study Human-Robot Interaction

Researchers at the Personal Robots Group at MIT's Media Lab have developed a new online game to study human-robot interaction. Mars Escape forms teams of two human players, with one person taking on the role of an astronaut and the other controlling a robot on Mars. Human and robot must work together to complete various missions before oxygen runs out.

In the current first phase of the study, the researchers are investigating how human players work as a team. In a second phase later this year, they will use the data collected from human players to build a model of how humans interact for task assignment, action synchronization, and problem solving. The model will then be tested by creating an in-game AI.

The long-term goal of the project is to create robots that are able to assist and work with humans in a natural, predictable and robust way. To this end, the researchers will recreate the game environment in real life this summer at the Boston Museum of Science, and test their behavior model using their MDS robot Nexi.

New Boston Dynamics Videos Reveal Faster BigDog, PETMAN

Boston Dynamics has posted some updated videos of BigDog and PETMAN. As far as I can tell, there isn’t much new going on… BigDog still carries a bunch of stuff, climbs up muddy hills, doesn’t fall down on ice, looks like two guys running around under a tarp, and sounds like a swarm of killer bees. The one new sequence that I noticed shows BigDog running (the definition of running being an airborne gait phase) at 5 mph. At the end of the video, when the hydraulics are run externally and the engine is off, BigDog sounds a lot more reasonable. Unfortunately, it’s hard to beat the power density and instant rechargeability of petroleum-based fuels, so we might be stuck with the bees for a while longer.

PETMAN is moving a bit more briskly as well, reaching a walking speed of 4.4 mph. Although it's dynamically balancing itself, it still looks to me like it's perpetually on the verge of falling over, but arguably that's what dynamic balancing is all about. Remember that eventually PETMAN is supposed to be able to crawl, sweat, and do 'calisthenics' to test protective clothing. And when I say eventually, I mean by 2011, which seems a little optimistic at this point. Artificial fingers crossed!

[ Boston Dynamics ]

The Robots Podcast: 50 Years of Robotics (Part 1)

I'm not a fan of self-promotion, but I believe this may be of general interest: the Robots podcast (which I founded and which is now run by my colleague Sabine Hauert) is celebrating its 50th episode today. For the occasion, Robots has interviewed 12 experts from a variety of robotics backgrounds on the topic of "The Past and the Next 50 Years of Robotics." Here is the lineup of interviewees for the first part of the two-part series:

Part 2 of this series will air in 2 weeks and give a snapshot view of the past and next 50 years in Nano Robotics, Artificial Intelligence, Flying Robots, Human-Robot Interaction, Robot Business, and Space Robots. Tune in!

PS: Coinciding with its 50th episode, the Robots podcast has also just launched its new website and forum.

Personal Mobility Robot Operated by Wii Controller

The Personal Mobility Robot, or PMR, is a nimble robotic wheelchair that self-balances on two wheels like a Segway. The machine, based on a platform developed by Toyota, has a manual controller that the rider uses to change speed and direction [see photo, inset].

Now two University of Tokyo researchers have decided to upgrade the machine, making it controllable by a Wii remote controller. Why drive the thing with a Wii-mote? Well, why not?

Last year, when I visited the JSK Robotics Laboratory, part of the university's department of mechano-informatics and directed by Professor Masayuki Inaba, researchers Naotaka Hatao and Ryo Hanai showed me the PMR under Wii control. They didn't let me ride it while they were piloting the machine, but it was fun to watch.

Watch:

Watch in HD here.

The PMR project is part of the Information and Robot Technology (IRT) research initiative at the University of Tokyo. Researchers developed the machine to help elderly people and people with disabilities remain independent and mobile. The machine is designed to be reliable and easy to operate, capable of negotiating indoor and outdoor environments -- even slopes and uneven surfaces. It weighs 150 kilograms and can move at up to 6 kilometers per hour.

The PMR is a type of robot known as a "two-wheeled inverted pendulum mobile robot" -- like the Segway and many others. The advantage of a self-balancing two-wheeled machine is its smaller footprint (compared to, say, a four-wheeled one) and its ability to turn around its axis, which is convenient in tight spaces.

The machine assumes a lower configuration to allow a rider to climb on the seat. Then it raises itself, allowing the two wheels to dynamically balance the vehicle. (The two little wheels you see at the front and at the back are for safety, in case the machine tips over.)

In addition to Wii-mote controllability, the JSK researchers have been working on an advanced navigation system that is able to localize itself and plan trajectories, with the rider using a computer screen to tell the robot where to go [photo, right].

The navigation system runs in real time on two laptop computers, one for localization and the other for trajectory planning. Using laser range sensors and SLAM algorithms, it detects people and objects nearby and can distinguish between static and moving obstacles. It does that by successively scanning its surroundings and comparing the scans, which allows it to detect elements that are moving as well as occluded areas that only become "visible" as the robot moves.

The system can rapidly detect pedestrians who suddenly start to move as well as people appearing from blind spots. In those cases, the robot can do two things: recompute the trajectory to avoid a collision [image below, left, from a paper they presented at last year's IEEE RO-MAN conference] or stop for a few seconds, wait for the pedestrian, and then start moving again [below, right].

The navigation system uses a deterministic approach to plan the trajectory. Basically, it assigns circles to all vertices of static objects and then tries to draw a continuous line tangent to the circles, going from origin to destination. Of course, there might be many possible routes, so the system uses an A* algorithm to determine the path to be taken. You can see a visual representation of this approach below:
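The route-selection step is ordinary A* search over the candidate tangent segments. Here's a minimal sketch of that search -- the graph format and names are my own, not the lab's actual data structures:

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """A* over a weighted graph given as {node: [(neighbor, cost), ...]}.
    Returns the cheapest path from start to goal, or None if unreachable."""
    # Frontier entries: (estimated total cost, cost so far, node, path)
    frontier = [(heuristic(start), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, step in graph[node]:
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(
                    frontier,
                    (new_cost + heuristic(nxt), new_cost, nxt, path + [nxt]))
    return None
```

In the PMR's case the nodes would be the tangent points on the obstacle circles, the edge costs the segment lengths, and the heuristic the straight-line distance to the destination.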

And although I didn't get to see it, researchers told me they're also developing a PMR model specific for indoor use. It's lighter (45 kg) and more compact and the rider can control it by shifting his or her body, just like the Segway.

That means, the researchers say, that you can ride it hands-free: Just tell it where to go and enjoy the ride while sipping a drink or reading a book.

Photos: Information and Robot Technology/University of Tokyo, JSK Robotics Lab


Automaton

IEEE Spectrum's award-winning robotics blog, featuring news, articles, and videos on robots, humanoids, automation, artificial intelligence, and more.
Contact us:  e.guizzo@ieee.org

Editor
Erico Guizzo
New York, N.Y.
Senior Writer
Evan Ackerman
Berkeley, Calif.
 
Contributor
Jason Falconer
Canada
Contributor
Angelica Lim
Tokyo, Japan
 

