Automaton

New Roomba 700 Series from iRobot

Along with the new Scooba 230, iRobot has today unveiled a redesigned version of the Roomba, the 700 series. There are three models: the 760, 770, and 780, and as with previous Roomba series, they mostly seem to differ from each other in frills. Here are the core upgrades from the 500 series:

-New design is smaller and sleeker.

-Battery life is 50% longer than previous generations (though it’s not clear whether that’s relative to the ‘premium’ Roombas that already had extended battery life).

-I’ll quote this from the PR: “Persistent Pass Cleaning Pattern – when Roomba senses excessive dirt and debris, it uses a brush-like, back and forth motion to focus its cleaning effort in the dirty area it has detected.” Interesting; we’ll have to see it in operation.

The 770 and 780 include a few extras not present in the 760:

-Also quoted from the PR: “Debris Detector uses an optical sensor to detect larger, soft particles on the floor like popcorn, lint or paper chads, so Roomba can respond by focusing its cleaning pattern to ensure deeper, concentrated cleaning in that area.” The 760 doesn’t do this, so we’ll have to find out how exactly this differs from the regular ‘dirt detect’ feature that the 500 series Roombas have, and whether that feature is present in the 760.

-Both light up an indicator when their dust bins are full.

-The 780 has a fancy capacitive touch sensor interface. No more buttons!

The Roomba 760 starts at $449; the 770 and 780 will certainly be more expensive, possibly in $50 increments, but we’ll find out shortly… We’ll be getting our first look and hands-on at CES starting Tuesday, and we’ve just scheduled a personal demo and interview on Friday, so stay tuned.

Holiday Season Robot Videos

My colleagues over at the Robots Podcast (full disclosure: I'm part of the team) have started a collection of Christmas Robots videos on their YouTube channel for the upcoming festive season:

To get into the holiday mood, what better than watching some crazy robot videos! Make a holiday video featuring any robot, real or not, and put it on YouTube. Send us a link and we'll feature it on our dedicated playlist and on our website!

For now two videos are up, but keep your eyes open: I know of at least a couple of other submissions in preparation!

Update 1: Two more Holiday Season robot videos have been added to the Robots Podcast Holiday Season YouTube channel:

Update 2: Three more videos featuring NAO, a robot band and the HRP-2 have been added - view them here!

UK Researchers Using Charles Babbage Robot Head to Develop Emotional Machines


"Charles, I think this is the beginning of a beautiful friendship," Peter Robinson says to the passenger sitting in the car next to him.

The passenger is a robot head that Robinson, a professor of computer technology at Cambridge University in England, is using to explore the role of emotions in human-machine interaction.

Can computers understand emotions? Can computers express emotions? Can they feel emotions?

These are the questions that Robinson and his team at Cambridge's Computer Laboratory want to answer.

When people talk to each other, they express their feelings through facial expressions, tone of voice, and body postures. The interesting thing is that humans do the same even when they are interacting with machines.

So could we build better computers, robots, and other machines if they could understand and respond to these hidden signals?

Robinson's team believes the answer is yes. They are developing systems to analyze faces, gestures, and speech and infer emotions. They hope these systems could improve human-machine interactions in real situations.

Charles is a robotic head modeled on Charles Babbage. (Am I the only one who didn't notice the similarity? And is Charles a Hanson Robotics creation?) It's one of the research tools Robinson uses in his experiments, which include driving a car simulator with the robot as a GPS assistant.

"The way that Charles and I can communicate," Robinson says in a short movie called "The Emotional Computer" [watch below], "shows us the future of how people will interact with machines."

Do you agree? Would you replace your car GPS with Charles the robot head?

Image and video: "The Emotional Computer"/Cambridge University

Heartland Robotics Developing $5k 'PC of Robots'?

Heartland Robotics is a company headed by legendary iRobot co-founder Rodney Brooks. It’s been in ’stealth mode’ since its founding in 2008, meaning that they’re working on something cool enough to have raised some $32 million in funding, but they’re not ready to tell the world about it yet.

With the latest round of funding (which involves a bunch of investors being shown around and told what the plan is), some new information has leaked out from Heartland, and it’s tantalizing:

Visitors to Heartland describe a robot that looks like a human from the waist up, with a torso; either one or two arms with grippers; and a camera where you might expect the head to be. The robot is on a rolling base rather than legs; it can be moved around but doesn’t move autonomously. The arm and gripper can be quickly trained to do a repetitive task just by moving them, no software code required.

It’s possible that this robot is based in part on MIT’s Obrero platform, pictured above. There’s more:

“Brooks apparently likens Heartland’s robot, which is intended to perform assembly and packaging tasks that low-wage factory workers do today, to Apple’s iPhone. He’s interested in encouraging a community of software developers to create applications that would teach the robot to do tasks such as using its camera to recognize a defective widget and pulling it off the conveyor belt.”

Thinking about robots as hardware that runs apps is not unique to Heartland, but the deciding factor could be the target price point: a shockingly low $5000. At that level, it’s easy for businesses to justify purchasing a robot just to try it out, since the risk is so small. And if they can set the robot up on an assembly line (which seems to be its general target market), it could very rapidly start making things more efficient for even small businesses, especially if the robot is as easy to program as they’re trying to make it.

Even if it takes three of these robots to do the job of one human, you’re still talking about a very positive investment. Heck, even if you needed ten of them, at $5k a pop they’d probably pay for themselves in less than a year when you consider the overhead that humans require, and they’d work 24/7 to boot.
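As a sanity check on that payback claim, here's a back-of-the-envelope calculation. Every figure below is an illustrative assumption, not a number from Heartland:

```python
# Back-of-the-envelope payback estimate for a fleet of $5,000 robots.
# Every figure below is an illustrative assumption, not Heartland data.

ROBOT_COST = 5_000            # USD, the rumored price point
ROBOTS_PER_WORKER = 10        # pessimistic: ten robots to match one human
LABOR_COST_PER_HOUR = 12      # assumed fully loaded hourly cost of a worker
WORK_HOURS_PER_YEAR = 2_000   # one full-time shift

fleet_cost = ROBOT_COST * ROBOTS_PER_WORKER
annual_labor_cost = LABOR_COST_PER_HOUR * WORK_HOURS_PER_YEAR

# Robots can run 24/7, so the fleet covers roughly 8760/2000 = 4.4
# shifts' worth of working hours per year.
shifts_covered = (24 * 365) / WORK_HOURS_PER_YEAR

payback_years = fleet_cost / (annual_labor_cost * shifts_covered)
print(f"fleet: ${fleet_cost:,}, payback: {payback_years:.2f} years")
```

Even under these deliberately pessimistic assumptions, the fleet pays for itself in well under a year, which is what makes the rumored price point so disruptive.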

For a long (long long LONG) time, the robotics industry has been looking for its PC, the one killer platform or application that has the potential to make robots simple, cheap, reliable, and useful. It hasn’t turned out to be vacuums, but it might just be a cheap robot worker from Heartland.

Stay tuned.

[ Heartland Robotics ] via [ ] and [ NBF ]

Aldebaran's New Nao Robot Demo


At the IEEE Humanoids conference early this month, I met the new Nao.

The latest version of the popular humanoid robot, created by Paris-based Aldebaran Robotics, has a more robust body, longer arms, a more advanced motion engine, and a new head with improved temperature control, Wi-Fi communications, and audio input and output.

In terms of software, Aldebaran improved whole body motion controls, voice recognition, sound localization, and face and image recognition (Nao can learn to identify objects like photographs and book covers). And the robot now has a "fall manager," which detects when a fall is about to happen and positions the arms and legs in a more protective posture.

To see a demo, I met with Aldebaran founder and CEO Bruno Maisonnier, who it turns out is a big geek. He told me that he was a computer enthusiast in the 1980s and ended up working in IT. But as a science fiction fan, he also loved robots and dreamed that one day they'd become part of everyone's lives -- just as computers did. He founded Aldebaran to help transform that dream into reality.

Nao was the first robot the company created. It has rapidly gained popularity as a reliable and flexible robotics research platform. It's used at universities and companies in 30 countries and also in the RoboCup competition. Aldebaran has sold 1,065 Nao units. Each costs approximately 12,000 euros.

Next year, Aldebaran plans to unveil Romeo, an adult-size humanoid designed to help elderly and disabled people with everyday tasks.

Maisonnier loves to show off the Nao. Everywhere he goes he lugs a suitcase with a Nao nestled in a custom foam insert. As you can tell from the video, the relationship between creator and creature is sometimes contentious. But in the end love prevails. Watch:

Gostai Jazz Telepresence Robot Unveiled


French robotics company Gostai is unveiling today a mobile robot called Jazz designed for "telepresence and telesurveillance."

The waist-high robot, which a user can remote control using a web-based interface, rolls on two wheels and has a head that can move in any direction, with a camera stuck on its forehead. The price starts at 7900 euros.

This is the first time that the Paris-based company, known for its robotics software, has ventured into hardware.

Jean-Christophe Baillie, founder and CEO of Gostai, tells me that they built the robot "very quickly," relying on the experience they gained by interacting with robot manufacturers that use their software.

"This is a little revolution for Gostai," he says, "and we are very excited about the potential of this little guy!"

I've tested a couple of telepresence robots this year [see our special report on robotic telepresence] and look forward to driving the Jazz as well. But just by looking at the specs and video I've noticed several interesting things.

First, the robot can not only use Wi-Fi -- as other telepresence robots do -- but it can also connect to a 3G cellphone network. From what I know, this is the first telepresence robot with this capability. [Update: Despite what its website says, Gostai hasn't implemented 3G yet. The company says it will be available soon.]

Another interesting thing is that its head can turn in any direction. This is very helpful when you want to see the floor, or when you want to look, say, left or right without actually moving the robot.

Also interesting, the Jazz robot runs on Gostai's Urbi open-source robot operating system, the company's flagship product, and also uses the GostaiNet cloud computing infrastructure. Baillie says some features, like video recording and voice synthesis (if you're using the Jazz at marketing events, you can prepare a text document and the robot will speak it aloud), already rely on the cloud and they plan to add more cloud-based capabilities soon.

But the most innovative thing in my opinion is the web-based remote control interface. You drive the robot just by clicking with the mouse on the video feed. Say you're driving the robot and the video is showing a long corridor -- you just click at the end of the corridor and the robot will go there. "It's a bit like the 3D cursor of Google Street View," is how Baillie puts it.
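Gostai hasn't published how the click-to-drive mapping works, but the basic geometry is straightforward. Here's a hypothetical sketch that converts a click on the video frame into a heading offset, assuming a pinhole-camera model and a made-up field of view (`click_to_heading` and `hfov_deg` are my names, not Gostai's):

```python
import math

def click_to_heading(click_x, frame_width, hfov_deg=60.0):
    """Convert a click's horizontal pixel position into a heading
    offset in degrees, relative to the robot's current direction.

    hfov_deg is the camera's horizontal field of view -- an assumed
    value; the Jazz camera's real FOV isn't published. Uses a
    pinhole-camera model: offset = atan(x_norm * tan(hfov / 2)).
    """
    x_norm = (2.0 * click_x / frame_width) - 1.0  # [-1, 1], 0 = center
    half_fov = math.radians(hfov_deg / 2.0)
    return math.degrees(math.atan(x_norm * math.tan(half_fov)))

# Clicking the center of a 640-pixel-wide frame means "go straight":
print(click_to_heading(320, 640))  # 0.0
# Clicking the right edge turns by half the field of view (~30 deg):
print(click_to_heading(640, 640))
```

A real implementation would also use the click's vertical position plus the camera height to estimate how far down the corridor to drive, but the horizontal mapping above is the core of the point-and-go interaction.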

(For comparison, the Anybots QB, which you control using the keyboard's arrow keys, was very easy to drive in my tests; the Willow Garage Texai has a web-based control pad that you have to click with the mouse, and I found this approach not as easy to use.)


In terms of limitations, the robot is short -- only 1 meter tall -- so people talking to it will be looking down, unless they're sitting (and even then, they still have to stare down a bit).

And then there's the lack of an LCD screen that would let people interacting with the robot see the face of the remote operator -- a capability that some argue is essential for a true telepresence experience.

Robotic telepresence expert Sanford Dickert, who helped develop the Texai robot at Willow Garage, writes at his Pilot Presence blog:

Interestingly enough, the team at Gostai have eschewed the concept of two way visual presence which I use as a hallmark for a true RPS [remote presence system], and play in the range of the WowWee Rovio or the iRobot ConnectR.

But that might change. Baillie, the Gostai CEO, tells me that they're working on a version of Jazz with a screen that "should be available soon." They built the current model to have a less expensive offering that they believe "will find its niche." 

The robot, which can run for five hours and docks automatically at a recharging station, comes in three versions. Jazz Connect, for offices, costs 7900 euros. Jazz Icon, for marketing events, comes equipped with a tray for carrying objects and costs 7900 euros, or can be rented for 1800 euros. Jazz Security costs 8400 euros and offers longer battery life and a camera that can see in the dark.

Watch the Gostai Jazz robot in action:

Images and video: Gostai

Willow Garage Sells First PR2 Robots


Silicon Valley robotics company Willow Garage is announcing today that it has sold PR2 robots to four institutions in Korea, France, and the United States.

These are the first PR2 units sold by the company, which hopes to make the robot a common platform that users in academia and industry can tinker with and improve, sharing their results and accelerating the development of applications.

Willow Garage, based in Menlo Park, Calif., has become the leading proponent of open source robotics. It has developed an open source software platform called ROS, or Robot Operating System, that is becoming widely adopted around the world, running not only on the PR2 but also on a variety of robots.

The PR2 purchase means that the ROS community is growing stronger.

The company developed the PR2 because it believed robotics researchers were burdened by the cost and time it took to develop their own robots and software. The company wants to eliminate this barrier, allowing roboticists to focus on applications for personal robotics use in homes and offices.

Each PR2, short for Personal Robot 2, costs U.S. $400,000.

The buyers announced today are: Samsung Electronics in Suwon, Korea; Laboratory of Analysis and Architecture of Systems, part of France's National Center for Scientific Research, in Toulouse; University of Washington, in Seattle; and George Washington University, in Washington, D.C.

The PR2, based on a robot built at Stanford University, is a powerful and flexible platform. It consists of an omnidirectional base equipped with two arms and grippers. It has two onboard computer servers, a power system, and a sensor suite with cameras, LIDAR, an inertial measurement unit, and other devices.

Early this year, Willow Garage gave away free PR2 robots to 11 institutions, as part of its beta program [see robots prior to delivery below, with the production team]. In September, the company announced that it was making the PR2 commercially available. The company says that there are now 16 research labs worldwide using the PR2.


The purchase announcement is also good news from a business point of view, of course. Several people I have talked to have wondered whether Willow Garage has a viable business model. The company was founded -- and reportedly it's still financially supported -- by Scott Hassan, a wealthy former Google architect who became a robotics enthusiast. But Willow Garage's President and CEO Steve Cousins said early this year that he expects the company to make money.

Willow Garage said in a statement that one PR2 has already arrived at Samsung Electronics, which will use it to enhance its existing robotics R&D program.

Another PR2 was delivered to Joshua R. Smith, a professor of computer science and engineering at the University of Washington in Seattle.

The PR2 heading to France will reside at LAAS-CNRS's Robotics and Artificial Intelligence Group, headed by Rachid Alami, who plans to use the robot to develop systems for human assistance, such as housekeeping for seniors.

And at George Washington University, Evan Drumwright, a professor of computer science, is the new owner of a PR2. His focus is on discovering ways to get robots to autonomously perform occupational tasks.

Here's a video showing robots using ROS:

Images and video: Willow Garage

France Developing Advanced Humanoid Robot Romeo

Romeo, shown here in a computer-generated rendering, is a French humanoid robot designed to assist elderly and disabled people. Image: Aldebaran Robotics

France is set to join the select club of countries that have developed advanced adult-size humanoid robots.

Paris-based Aldebaran Robotics, famed for its small humanoid robot Nao, is working with major French research organizations to build a larger and more capable humanoid called Romeo, to be unveiled next March.

Designed to assist elderly and disabled individuals in their daily activities, the 1.4-meter-tall robot will be able to walk through a home, fetching food from the kitchen, taking out the garbage, and acting as a loyal companion who helps entertain its owners and keep tabs on their health.

Expected to cost about 250,000 euros, Romeo will be available first to partners in the project and university researchers. An improved version will later be offered to hospitals and nursing homes, and after that to individuals as well.

Aldebaran claims that Romeo, weighing in at 40 kilograms, will introduce several innovations, including a four-vertebra backbone, articulated feet, a partially soft torso, a composite leg exoskeleton, and a new kind of actuator that lets the robot control its limbs in a safer way.

The company also says it's putting a lot of emphasis on the robot's communication capabilities. “We want people to interact with Romeo using just natural speech and gestures,” Bruno Maisonnier, founder and CEO of Aldebaran, told me at the IEEE Humanoids 2010 conference in Nashville, Tenn., last week.

The Romeo project, which started in early 2009, involves 13 partners and has a budget of 10 million euros, half of which comes from the French national and local governments.

The robot will have 37 degrees of freedom, including 2 DOF for each eye, 1 DOF for each foot, and 3 DOF for the backbone. An advanced type of actuator known as a backdrivable mechanism, which uses motors coupled to screws and cables rather than reduction gears, will power the joints. This mechanism can sense forces on the limbs and move them more safely.

A computer-generated rendering shows Romeo doing chores at a home. Image: Aldebaran Robotics

To learn more about the project, I spoke to Rodolphe Gelin, head of cooperative projects at Aldebaran and one of the engineers leading the development of Romeo.

Below is my full interview with Gelin, in which he describes the current status of the project, the applications Aldebaran envisions for the robot, and details about its technology.

Erico Guizzo: Colin Angle, one of the founders of iRobot, said in a recent interview that he doesn’t believe that people will have humanoid robots in their homes. I guess he likes his Roombas… Why build a human-shaped robot?

Rodolphe Gelin: My ideas about this are changing regularly. So I’ll give you my opinion today. I think that a humanoid robot is very useful if you want to have a multipurpose robot. If you only want to have a vacuum cleaner, it’s good enough to have a little cylinder robot. But if you want a robot that can open the refrigerator, grab a bottle of water, pour water into a cup, and take the cup to another room, being a little cylinder is not very helpful. If you want to do plenty of things, then you have to have plenty of robots, or one robot able to do lots of things. There are also human-robot interaction issues. I think that people will find a humanoid much easier to use and much more fun than a Roomba or a kind of box on wheels robot. Many people don’t want just a slave robot; they want a machine that works like a companion, and so it's helpful if your robot has a certain height and a face that people can look at and talk to in a natural way. That’s why a humanoid shape is interesting. Does it need legs? We’ve discussed that a lot. Legs and wheels both have advantages and disadvantages. We decided to use legs because we think it’s more convenient and adaptable for most situations. And also because it's a big technological challenge. But that's my answer today. It may change tomorrow!

EG: What are some of the things you envision Romeo doing? Will it be able to do tasks that other humanoid robots can’t do today?

RG: We have some scenarios. For example, imagine that Mr. Smith lives alone and has a Romeo in his house. Because Mr. Smith is elderly, he might forget where things are, like his glasses or the remote control of the TV. But the robot can constantly keep track of these objects, using its vision, and it doesn’t forget where they are. So Romeo can assist Mr. Smith to find objects and even go get them for him. The robot is also monitoring Mr. Smith himself, checking that he’s feeling okay, and if not, calling family members or medical assistance. We’re also working on more physically challenging tasks. Imagine that Mr. Smith asks Romeo to take the garbage out and the robot has to grab a 5-kilogram bag and walk down the stairs with it. Or imagine that Mr. Smith needs help to stand up and walk. Above all, we’re putting a lot of emphasis on Romeo’s communication functions. We want a robot that is easy to use and that, unlike other robots, doesn't look dangerous.

EG: There are lots of humanoid robots out there, both small and large, and their technology has been advancing fast. How did Aldebaran go about designing Romeo? Is it essentially an enlarged version of Nao?

RG: No. The things we’re learning with Nao are helping us with Romeo and vice versa, so there’s a lot of synergy between the two projects. In fact, from the software point of view, most code will be compatible between the two and both will use our Choregraphe program to create movements and behaviors. But in terms of hardware, they are very different robots. You can’t simply transform a small robot into a bigger one because variables like power, mechanical strength, and other things don’t scale in the same way. One thing that is very different compared to other humanoids is how Romeo controls its leg actuators. Japanese robots like the HRP use position control: You control the position and speed of each joint but you can’t control the force or torque. Other robots [like Justin, developed at DLR in Germany] are capable of controlling force, but they rely on very fast and accurate—and expensive—force sensors in each joint. In our approach, based on technology developed at CEA [France’s Alternative Energies and Atomic Energy Commission], we don’t need a force sensor; we only have what is called a backdrivable mechanism. It means a mechanism that can convert electrical current into motion but also motion into current. To do that you need a mechanism with very low friction, so energy losses are very small. Then by reading the current in the actuator you’re able to compute the torque applied by the robot on the environment. For Romeo we designed a system of screws and cables with very low friction and inertia [image, right].

EG: Why did you decide to use this approach and how unique is it? Are there other robots using this?

RG: Researchers we spoke to complained that if you can’t control the force and torque in each joint, you don’t truly control what you want to do with the robot. That becomes critical when the robot is big, because of the damage it can cause. So from both a control and safety point of view, controlling torque rather than just position is very important. The WAM arm from Barrett Technology uses a backdrivable mechanism. But most humanoid robots use motors and gears to power their joints, and these mechanisms are not backdrivable: due to energy losses in the gears, you can’t relate the electrical current in the motors to force. That’s why they need force sensors if you want to move in a compliant way; it’s called active compliance. Romeo is different. It has no gears. We transmit the motion from the motors to the joints using screws and cables. This system is backdrivable. We can measure the torque applied by the arm of the robot only by reading the electrical currents in the motors. It’s passively and intrinsically compliant. We think that this approach is better because it offers more control and is safer and cheaper.
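The current-to-torque estimate Gelin describes can be sketched in a few lines. The constants below (motor torque constant, transmission ratio, efficiency) are invented illustrative values, not Romeo's actual parameters:

```python
def joint_torque_from_current(current_a, k_t=0.05, ratio=50.0,
                              efficiency=0.95):
    """Estimate joint torque (N*m) from motor current (A) in a
    backdrivable screw-and-cable transmission.

    k_t        -- motor torque constant, N*m per amp (assumed value)
    ratio      -- transmission reduction ratio (assumed value)
    efficiency -- close to 1.0 because friction is very low; it is
                  this low loss that makes the current reading a
                  usable torque estimate without a force sensor
    """
    return current_a * k_t * ratio * efficiency

# In a conventional high-friction geartrain, efficiency is lower and
# load-dependent, so the same current reading would badly misestimate
# the joint torque -- hence the need for explicit force sensors there.
print(joint_torque_from_current(2.0))  # ~4.75 N*m
```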

Romeo's upper torso and head. Image: Aldebaran Robotics 

EG: In terms of walking strategies, are you using the traditional ZMP [zero moment point] approach that most other humanoid robots use?

RG: You have to take ZMP into account, or your robot will fall. But what we’re doing differently is that we want to have a top-down approach to the walking algorithm. In the classical approach, say you’re in the living room and you want to go to the kitchen; first you plan where you have to put your feet for each step going from the living room to the kitchen and then the robot will try to put its feet in the footprints it planned. And if the robot encounters an obstacle, it has to stop and compute new footprints to avoid the obstacle. In our approach—and this is something that other top labs are also pursuing—the robot walks by relying on a real-time pattern generator. Basically the robot uses its vision system to decide where it wants to go and then it just walks in that direction, adjusting its walking in real time. It’s a more reactive and realistic way of implementing walking. In fact, vision will be Romeo’s main sensing capability. It has two cameras to do simultaneous localization and mapping, or SLAM. Basically the robot can recognize where it is and navigate from room to room.
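To make the contrast with footstep planning concrete, here's a toy reactive walker in the spirit Gelin describes: at every step it re-aims at the target (with a limited turn rate) and steps forward, so a moved target or a newly seen obstacle never requires replanning a whole footstep sequence. All names and numbers are illustrative, not Romeo's controller:

```python
import math

def wrap(angle):
    """Wrap an angle to [-pi, pi]."""
    return (angle + math.pi) % (2 * math.pi) - math.pi

def reactive_step(pos, heading, target, step_len=0.2, max_turn=0.3):
    """One control tick of a reactive walker: turn toward the target
    (rate-limited to max_turn radians per step), then step forward.
    The heading is re-evaluated at every step, unlike the classical
    approach of planning a fixed footprint sequence in advance."""
    desired = math.atan2(target[1] - pos[1], target[0] - pos[0])
    heading += max(-max_turn, min(max_turn, wrap(desired - heading)))
    return (pos[0] + step_len * math.cos(heading),
            pos[1] + step_len * math.sin(heading)), heading

# Walk from the origin toward a goal 2 m ahead and 1 m to the side.
pos, heading, goal = (0.0, 0.0), 0.0, (2.0, 1.0)
for _ in range(11):
    pos, heading = reactive_step(pos, heading, goal)
print(f"after 11 steps: x={pos[0]:.2f}, y={pos[1]:.2f}")
```

In a real biped the "step forward" line would be replaced by the pattern generator producing an actual gait cycle, with vision (the SLAM system Gelin mentions) supplying the target.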

EG: Will the robot be capable of performing dexterous manipulation as well?

RG: Romeo’s hand has four fingers and one degree of freedom. The hand can perform a basic grasping motion and grasp objects like a cup on a table or place a dish inside the oven. Of course we’d like to have more dexterous hands. But that would have made the budget too expensive. So we’re leaving manipulation for a future phase of the project, when we may implement things like tactile sensors and force control on the hands.

With three fingers and one thumb, Romeo's hand will be able to grasp objects like cups and plates. Image: Aldebaran Robotics

EG: Open source robotics is gaining momentum. Do you plan to release any part of Romeo’s software as open source?

RG: From a commercial point of view, we’re exploring different business models and at this point we can’t say Romeo is open source. But like with the Nao, we’ll release a SDK [software development kit] that will provide libraries and functions to those interested in programming Romeo. So Romeo is an open platform in the sense that there are APIs to control the robot.

EG: When will Romeo be available and how much will it cost?

RG: We have completed the design of all the mechanics of the robot—the legs, arms, torso, neck, and head. The head is already built and we’re expecting the parts to assemble the legs and the arms. We should have a full body prototype sometime early next year. The robot will cost about 250,000 euros. This is not a final price tag; it’s just an order of magnitude. A lot of things can change. We’d really like to tell the world that if you want to buy a humanoid robot, you don’t have to buy a Japanese one; there are other good choices available. We want to be one of these choices. It’s a challenge, because Japan has very strong products. But we want to be considered a competitor in this area. Romeo will initially be a research tool for university labs and our partners in the project. With time, we plan to improve the robot and transform it into a more reliable and affordable product that we can offer to hospitals and nursing homes. After that we’d sell it to individuals for personal use. We hope that one day anyone can have a Romeo as a companion at home.

A prototype of Romeo's head. Image: Aldebaran Robotics

This interview has been edited and condensed.

Athlete Robot Learning to Run Like Human

Japanese researcher Ryuma Niiyama wants to build a biped robot that runs.

But not like Asimo, whose running gait is a bit, well, mechanical.

Niiyama wants a robot with the vigor and agility of a human sprinter.

To do that, he's building a legged bot that mimics our musculoskeletal system.

He calls his robot Athlete. Each leg has seven sets of artificial muscles. The sets, each with one to six pneumatic actuators, correspond to muscles in the human body -- gluteus maximus, adductor, hamstring, and so forth [see diagram below].

To simplify things a bit, the robot uses prosthetic blades, of the type that double amputees use to run.

And to add a human touch, Niiyama makes the robot wear a pair of black shorts.

Human runners with prosthetic feet, like South African paralympic runner Oscar Pistorius, nicknamed the "Blade Runner," "give me great inspiration," Niiyama tells me.

The robot has touch sensors on each foot and an inertial measurement unit on the torso for detecting the body's orientation.

Niiyama developed the robot as a PhD candidate at the Department of Mechano-Informatics of the University of Tokyo with colleague Satoshi Nishikawa, under the supervision of their advisor, Professor Yasuo Kuniyoshi.

They presented their project at the IEEE Humanoids 2010 conference in Nashville, Tenn., last week.

The researchers are now teaching Athlete to run. They programmed the robot to activate its artificial muscles with the same timing and pattern as a person's muscles during running.
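A controller that replays a recorded activation pattern might look like the sketch below. The muscle timings and the interface are hypothetical illustrations; the actual Athlete controller hasn't been published in this form:

```python
# A minimal sketch of replaying a human-derived muscle activation
# pattern on pneumatic actuators. Muscle names, timings, and the
# control interface are hypothetical illustrations.

# Activation pattern for one stride: (muscle, start_s, stop_s),
# loosely mimicking the phasing of human running muscles.
STRIDE_PATTERN = [
    ("gluteus_maximus", 0.00, 0.10),
    ("hamstring",       0.05, 0.20),
    ("quadriceps",      0.15, 0.30),
    ("gastrocnemius",   0.25, 0.35),
]

def active_muscles(t, stride_period=0.40):
    """Return the set of muscle valves that should be open at time t.

    The pattern repeats every stride_period seconds, so this
    controller is purely feedforward -- timed activation first,
    with feedback control layered on top later.
    """
    phase = t % stride_period
    return {m for m, start, stop in STRIDE_PATTERN if start <= phase < stop}

# Early in the stride, hip extension overlaps with the hamstring:
print(sorted(active_muscles(0.07)))  # ['gluteus_maximus', 'hamstring']
```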


Niiyama, who has since become a post-doc at MIT's Robot Locomotion Group in Cambridge, Mass., says they're trying to better understand how we control our muscles during a challenging task like running.

Previously, he studied another complex motion, jumping, by developing a bipedal hopping robot called Mowgli.

Traditional humanoid robots like Asimo run by changing the angle of their joints. Their legs are rigid, powered by motors coupled to reduction gears. In other words, they run like robots.

People, as well as animals, don't keep track of the position of their joints -- we use our viscoelastic muscles and tendons to bounce against the ground, propelling our bodies forward while maintaining balance.

Athlete can take three, sometimes five steps, moving at about 1.2 meters per second. Then it falls. Watch:

It's a short dash, but the researchers are optimistic. They plan to fine tune the artificial muscles and improve the feedback control system. And then hopefully move their tests to a real running track.

Images: Ryuma Niiyama

Robotic Drone Flies Itself by Looking at Landscape Below

Robotic aircraft have taken to the skies, finding increasing use in military applications, law enforcement, environmental monitoring, and also becoming popular among hobbyists who want to build their own drones.

These unmanned aerial vehicles, or UAVs, have varied degrees of autonomy, though typically they depend on GPS and also on supervision from a human operator, who can send commands to the aircraft and receive images from its on-board cameras.

Now researchers at McGill University's Mobile Robotics Lab, in Montreal, Canada, are making these smart aircraft a bit smarter. They've developed a UAV control system that uses aerial images to identify visual cues on the landscape and steer the aircraft autonomously.

Aerial vehicles guided by advanced vision capabilities could help track wildfires, oil spills, and even animal herds. The aircraft would carry out monitoring and mapping missions requiring no human supervision or GPS coordinates.

Anqi Xu, a PhD student, and his advisor, Professor Gregory Dudek, director of the Mobile Robotics Lab, say that their current system is capable of following a coastline or a road surrounded by forests.

The researchers presented their results at the IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems in October.

They used a fixed-wing UAV called the Unicorn from Procerus Technologies, which they can control via software. The aircraft carries a gimbal-mounted camera that streams video over a radio link. A Linux notebook computer analyzes the video feed and sends heading updates to the UAV in real time.

To track coastlines, their vision algorithm analyzes the color properties in the images to distinguish between water and land. To track a highway in a wooded region, it analyzes texture cues. Once the algorithm has identified the boundaries between different areas, it then determines a heading to follow.
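As a toy illustration of the boundary-following idea (not the McGill team's actual algorithm, which also handles texture and is far more robust), here's a sketch that classifies pixel columns as water or land by color, finds the boundary, and emits a proportional steering correction:

```python
# A toy version of the boundary-following idea: classify each pixel
# column of a frame as water or land by color, locate the boundary,
# and steer to keep it at a fixed fraction of the image width.
# Thresholds and gain are invented for illustration.

def water_score(column_pixels):
    """Fraction of pixels in a column that look like water (blue-dominant)."""
    blue = sum(1 for (r, g, b) in column_pixels if b > r and b > g)
    return blue / len(column_pixels)

def boundary_column(frame):
    """frame: list of pixel columns, water assumed on the left.
    Returns the index of the first land-dominant column."""
    for i, col in enumerate(frame):
        if water_score(col) < 0.5:
            return i
    return len(frame)

def heading_correction(frame, target_frac=0.5, gain=0.1):
    """Proportional steering command: positive turns toward land,
    negative toward water, zero keeps the current heading."""
    error = boundary_column(frame) / len(frame) - target_frac
    return gain * error

# Synthetic 8-column frame: 4 columns of water, then 4 of land.
water_col = [(10, 50, 200)] * 5   # blue-ish pixels
land_col = [(120, 180, 60)] * 5   # green-ish pixels
frame = [water_col] * 4 + [land_col] * 4
print(heading_correction(frame))  # boundary sits at the target -> 0.0
```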

To test their system, the researchers took their UAV to the beach. Watch:

The test area consisted of a 1-kilometer long "S" shaped tropical coastline. After manually aligning their UAV, their control system took over and successfully steered the aircraft along the stretch of the shore. The UAV traveled at an altitude of 150 meters with an average ground speed of 13 meters per second with lateral wind speed of 7 meters per second.

How would that performance compare to a human operator piloting the UAV using the same visual information?

The researchers asked five volunteers to watch the recorded images and specify headings to keep the UAV following the coastline. Though there were discrepancies between the headings produced by the algorithm and by the volunteers, the researchers concluded that their system can perform nearly as well as a human operator.

In the next phase of the project, the researchers plan to use their aerial tracker to transmit navigation data to another of their systems, an amphibious robot designed to study coral reefs.

Images and video: Mobile Robotics Lab/McGill University



IEEE Spectrum's award-winning robotics blog, featuring news, articles, and videos on robots, humanoids, automation, artificial intelligence, and more.
Contact us:

Erico Guizzo
New York, N.Y.
Senior Writer
Evan Ackerman
Berkeley, Calif.
Jason Falconer
Angelica Lim
Tokyo, Japan
