In a recent video, Willow Garage researcher Eitan Marder-Eppstein describes the open-source navigation stack they've released as version 1.0. The code, available at http://ros.org/wiki/navigation, was designed to be flexible and cross-platform, he says, and could be used in anything from a small iRobot Create-based bot to a large multi-sensor robot like Willow's own PR2 (which Spectrum has covered in detail here and here).
The stack lets users configure different sensors, change the footprint of the robot, integrate SLAM systems, and use a 2D or 3D view of the world. Says Marder-Eppstein:
"In particular the three-dimensional view of the world enables the robot to avoid obstacles like tables, chairs, and people's feet."
And a guy trying to hit it with a two-by-four.
"This is a significant improvement over navigation stacks that view the world as purely planar," he says.
I like Willow because their work is practical and promising. And because they have a sense of humor. They really put their bodies on the line.
Kojiro is an advanced musculoskeletal humanoid robot under development at the University of Tokyo's JSK Robotics Laboratory. Kojiro's creators designed its body to mimic the way our skeleton, muscles, and tendons work together to generate motion. The goal is to build robots that are light and agile, capable of moving around and interacting with the physical world the same way our flesh-and-blood bodies do.
I met Kojiro during a visit to the JSK lab late last year. Masayuki Inaba, a professor at the University of Tokyo, and Yuto Nakanishi, a researcher and one of Kojiro's main developers, showed me their latest trick: using a PS2 controller to make Kojiro move. In particular, they wanted to demo the robot's spine motion.
Other research groups are also exploring the idea of anthropomimetic humanoids. But I don't think many of them have a flexible spine, which is one of Kojiro's main innovations. Like the human spine, Kojiro’s can bend in different directions to let the robot arch and twist its torso. It can't quite dance the Macarena yet, but it shows some promising hip moves.
Nakanishi explained to me that most humanoid robots have articulated limbs and torsos powered by DC motors at the joints. Although these robots have a good range of motion, they're typically hard and heavy, making collisions with humans and objects a big problem.
Kojiro does use DC motors, but the motors pull cables attached to specific locations on the body, simulating how our muscles and tendons contract and relax. These tendon-muscle structures -- Kojiro has about 100 of them -- work together to give the robot some 60 degrees of freedom, far more than could be achieved with conventional motorized rotary joints.
And instead of big, bulky DC motors, Kojiro uses lightweight, high-performance ones. Its brushless motors are quite small (16 millimeters [0.6 inch] in diameter and 66.4 mm [2.6 inches] in length) but can deliver a substantial 40 watts of output power.
Each motor unit has a rotary encoder, a tension sensor, and current- and temperature-sensing circuitry. A driver circuit board automatically adjusts the current fed to the motors based on temperature measurements. The results are transmitted to a computer and displayed on a control screen developed by Nakanishi.
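The article doesn't say what control law the driver board uses; a minimal sketch of temperature-based current derating might look like this (the function name and the temperature thresholds are hypothetical, not Kojiro's actual values):

```python
def derate_current(requested_ma, winding_temp_c,
                   soft_limit_c=70.0, hard_limit_c=100.0):
    """Scale a commanded motor current down as the winding heats up.

    Below soft_limit_c the request passes through unchanged; between
    the two thresholds it is scaled down linearly; at hard_limit_c and
    above the motor is cut off entirely. Thresholds are illustrative.
    """
    if winding_temp_c <= soft_limit_c:
        return requested_ma
    if winding_temp_c >= hard_limit_c:
        return 0.0
    scale = (hard_limit_c - winding_temp_c) / (hard_limit_c - soft_limit_c)
    return requested_ma * scale
```

A linear ramp like this is a common, conservative choice because it avoids the abrupt torque drop a single on/off cutoff would produce.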
To make the robot safer, the researchers built its body using mostly light and flexible materials. To keep track of its posture and limb positions, they embedded joint angle sensors on spherical joints and six-axis force sensors on the ankles. For balance, the robot uses three gyros and a three-axis accelerometer on its head.
The main drawback of using a musculoskeletal system is that controlling the robot's body is difficult. This kind of system has lots of nonlinearities and is hard to model precisely. To develop control algorithms for Kojiro, the JSK team is using an iterative learning process. They first attempt small moves and little by little tweak the control parameters until the robot can handle more complex movements.
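The JSK team's actual learning method isn't detailed here, but the small-steps-then-tweak loop they describe can be sketched as simple stochastic hill climbing over controller gains (everything below is an illustrative stand-in; `evaluate` represents a trial movement on the robot that returns a tracking-error cost):

```python
import random

def tune_gains(evaluate, gains, iterations=200, step=0.05, seed=0):
    """Crude iterative tuning: perturb the gains slightly and keep the
    change only if the evaluated cost improves."""
    rng = random.Random(seed)
    best = list(gains)
    best_cost = evaluate(best)
    for _ in range(iterations):
        trial = [g + rng.uniform(-step, step) for g in best]
        cost = evaluate(trial)
        if cost < best_cost:
            best, best_cost = trial, cost
    return best, best_cost
```

The appeal of an approach like this for a hard-to-model musculoskeletal system is that it needs no analytical model at all, only the ability to run a trial and score it.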
Eventually they hope to integrate control for the head, spine, arms, and legs. Then Kojiro might do the Macarena.
Most telepresence robots (with a few exceptions) aren’t especially presence-y, in that you can see people, and people can see you, but you’re pretty much just a head on a screen on a robotic stick with wheels. MeBot, a project from the Personal Robotics Group at MIT, adds a little bit of personality to telepresence by providing ways for users to communicate non-verbally, through things like head movement, arm movement, and posture:
The clever bit is that you, as the user, don’t need to tell the robot to do any of the expressive stuff that it does with its screen. It watches what you’re doing with your head, and duplicates those socially expressive movements with the robot. Is it effective? You bet:
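MIT hasn't published MeBot's control code here, but the head-mirroring idea can be sketched as a smoothed, range-limited mapping from tracked head angles to robot servo targets (the class name, joint limits, and smoothing factor are all assumptions, not MeBot's real parameters):

```python
class HeadMirror:
    """Maps tracked head angles to robot servo targets: low-pass
    filtered to suppress tracking jitter, then clamped to the robot's
    mechanical range. All parameters are illustrative."""

    def __init__(self, alpha=0.3, yaw_limit=60.0, pitch_limit=30.0):
        self.alpha = alpha              # smoothing factor, 0..1
        self.yaw_limit = yaw_limit      # degrees
        self.pitch_limit = pitch_limit  # degrees
        self.yaw = 0.0
        self.pitch = 0.0

    @staticmethod
    def _clamp(value, limit):
        return max(-limit, min(limit, value))

    def update(self, yaw_deg, pitch_deg):
        # Exponential smoothing avoids jerky servo motion from a
        # noisy head tracker while preserving expressive movement.
        self.yaw += self.alpha * (yaw_deg - self.yaw)
        self.pitch += self.alpha * (pitch_deg - self.pitch)
        return (self._clamp(self.yaw, self.yaw_limit),
                self._clamp(self.pitch, self.pitch_limit))
```

The point of the design is exactly what the post praises: the user supplies no explicit commands, just natural head motion, and the mapping does the rest.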
We conducted an experiment that evaluated how people perceived a robot-mediated operator differently when they used a static telerobot versus a physically embodied and expressive telerobot. Results showed that people felt more psychologically involved and more engaged in the interaction with their remote partners when they were embodied in a socially expressive way. People also reported much higher levels of cooperation, both on their own part and their partners', as well as a higher score for enjoyment in the interaction.
Even though it has those little 3 DoF arms, MeBot isn’t designed to do anything in particular with its additional axes of motion. You currently control them sympathetically using a second set of arms, the positions and movements of which are duplicated by the arms on the robot. Conceivably, you could add some grippers to the robot and a more comprehensive control system on the other end, but that would defeat a large part of the purpose (and the beauty) of MeBot: it’s designed to be purely expressive, implying a natural simplicity that requires no extra effort or skill. It just does its thing while you do yours, which is how all the best systems (hardware and software alike) tend to function.
Another vid with a few more details, after the jump.
UC Santa Cruz emeritus professor David Cope has for 20 years been working on software, called Emily Howell, that generates original, modern music. Using algorithms that mathematically mix, recombine, and alter musical patterns, the software can often convincingly mimic the styles of great classical composers such as Mozart and Bach.
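Cope's software is far more sophisticated than anything shown here, but the core idea of recombining existing material into new music can be illustrated with a toy first-order Markov chain over note names (all function names and data below are hypothetical):

```python
import random
from collections import defaultdict

def build_chain(pieces):
    """Collect, for each note, the notes that follow it anywhere
    in the corpus of input pieces."""
    chain = defaultdict(list)
    for piece in pieces:
        for a, b in zip(piece, piece[1:]):
            chain[a].append(b)
    return chain

def recombine(chain, start, length, seed=0):
    """Walk the chain to produce a new sequence that locally
    resembles the corpus but globally is a novel recombination."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no note ever followed this one
        out.append(rng.choice(followers))
    return out
```

Every transition in the output occurred somewhere in the source material, yet the overall sequence is new, which is the kernel of the "is this creativity?" debate.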
That said, his work has generated hostility from those who believe creativity is something a machine could never have, arguing that only humans can compose music with 'liveliness' and 'soul'. What I find particularly interesting about the article Triumph of the Cyborg Composer is that it shows the strong prejudices we hold against anything that seems to belittle the meaning and spirituality of our lives. The world is flat, the earth is the center of the universe, and we all have souls that can't be bestowed onto robots.
What attracted me to this article wasn't the enjoyable music examples by Emily Howell that you can download, but this precursor to the modern-day Spanish Inquisition that we robot creators and AI researchers will perhaps one day face.
"We are so damned biased, even those of us who spend all our lives attempting not to be biased. Just the mere fact that when we like the taste of something, we tend to eat it more than we should. We have our physical body telling us things, and we can't intellectually govern it the way we'd like to," he says. In other words, humans are more robotic than machines. "The question," Cope says, "isn't whether computers have a soul, but whether humans have a soul."
iRobot has long dominated the market for home cleaning robots, but other competitors have begun to emerge. Evolution Robotics has developed a new robot called Mint which uses disposable cleaning cloths to clean hard floors. By outsourcing the cleaning technology, Evolution was able to focus on the navigation and usability of the robot. The Mint should be available later this year for less than US $250.
According to a recent Robots Podcast interview with Joshua Portlock, manager of the CyberQuad project at Australia's Cyber Technology, what happened is a classic case of an enabling technology being driven by the consumer market. Fast, precise, and affordable accelerometers are a key technology for quadcopters. Their development was initially driven by their use in automotive airbags, and now increasingly by their use in consumer devices such as mobile phones.
Accelerometers are key because, unlike standard helicopters, which rely on complex mechanics for stable flight, quadrotors use fast onboard motor control to take care of stability. This mechanical simplicity is also their main attraction: quadrotors can navigate in three dimensions using only four moving parts. And the high reliability of brushless motors makes them a simpler, more dependable alternative to many traditional flying platforms.
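To make the "four moving parts" point concrete: at the bottom of a quadrotor's control stack sits a mixer that turns desired collective thrust and roll/pitch/yaw corrections into four individual motor commands. A sketch for a plus-configuration craft (sign conventions vary between airframes and are assumed here):

```python
def mix_quadrotor(thrust, roll, pitch, yaw):
    """Map thrust plus roll/pitch/yaw corrections to four motor
    commands on a plus-configuration quadrotor. Front/rear spin one
    way, left/right the other, so speed differences within a pair
    produce pitch or roll torque, and speed differences between the
    pairs produce yaw torque. Units and signs are illustrative."""
    front = thrust + pitch - yaw
    rear  = thrust - pitch - yaw
    left  = thrust + roll + yaw
    right = thrust - roll + yaw
    return front, rear, left, right
```

In a real vehicle, the roll/pitch/yaw inputs would come from a fast feedback loop reading the gyros and accelerometers, which is why cheap inertial sensors were the enabling technology.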
Hexacopters such as the one featured in the video above pack more rotors into a given footprint, providing more power. Other designs, including octocopters (such as the one in the picture below) and ducted-fan or counter-rotating versions, offer the additional versatility needed for specific applications such as indoor flight.
For now, most applications -- such as inspection of power lines, oil rigs, or wind turbines, law-enforcement surveillance, or military reconnaissance -- are flown by an operator in real time and do not require much autonomy beyond simple GPS waypoint navigation. However, that may soon change.
Future application scenarios include robotic security guards that can rapidly react to a triggered alarm by autonomously providing surveillance of a specific site or area. Other tasks center on autonomous border patrol and perimeter search. And the military is considering sending groups of quadrotor UAVs that can perch on power lines, rocks, and rooftop edges ahead of convoys for advance surveillance, which may also allow automatic pinpointing of sniper locations using sound triangulation.
Space robotics may appear to be a purely scientific endeavor -- brave little rovers exploring planets in search of life -- but it turns out there's a multi-million dollar market in space just waiting for the right kind of robot. This market is satellite servicing.
Geostationary communications satellites fire small thrusters to stay in orbit. When they run out of propellant (typically hydrazine, pressurized with helium), or when a battery or gyroscope fails, these expensive satellites often have to be abandoned, becoming just another piece of space junk, even though their mechanical systems and electronics work fine.
Space agencies and companies around the world are developing robotic servicing systems (the United States demonstrated one such system in its Orbital Express mission), but putting these systems in space, docking them to satellites, and performing repairs remains a big challenge.
To address the problem, DLR (Germany's NASA equivalent) launched the European Proximity Operations Simulator, or EPOS, initiative. EPOS is a robotic facility designed to simulate on-orbit rendezvous, docking, and repair maneuvers. EPOS allows engineers to run computer simulations of a docking system with hardware in the loop.
For this project, DLR partnered with Robo-Technology, a small industrial robotics integrator that designed, built, and programmed the EPOS hardware. Its main components are two robots and a 25-meter linear track, which defines the working range. Both robots offer 6 degrees of freedom, so the two satellites can be positioned not only relative to each other but also relative to instrumentation in the lab.
The simulation is as realistic as possible: there is a sun simulator, the satellites "believe" they are in zero gravity, and the high control bandwidth of 250 Hz enables 1:1 simulation of the contact dynamics during capture and docking. Even the capture of a satellite with non-zero rotation can be simulated with EPOS, using actuators that are remote-controlled as if a radio transmission from ground to space were taking place.
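That simulated radio link can be pictured as a fixed-latency buffer sitting inside the 250 Hz control loop: a command issued "from the ground" only reaches the actuators some number of ticks later. A minimal sketch (the class name and delay values are illustrative, not DLR's implementation):

```python
from collections import deque

class DelayedLink:
    """Fixed-latency command pipe for a hardware-in-the-loop run.
    A command sent now is delivered delay_steps control ticks later,
    mimicking a ground-to-space radio round trip. At a 250 Hz loop
    rate each tick is 4 ms, so e.g. 125 steps would model a 0.5 s
    one-way delay (values illustrative)."""

    def __init__(self, delay_steps, idle=0.0):
        # Pre-fill with an idle command so early reads are defined.
        self.buffer = deque([idle] * delay_steps)

    def send(self, command):
        """Queue a new command; return the one arriving this tick."""
        self.buffer.append(command)
        return self.buffer.popleft()
```

Running the docking controller through a pipe like this is what lets the lab test whether a remote operator (or autonomous system) can still capture a tumbling satellite despite the latency.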
DLR is now using the system to evaluate approach scenarios and test docking cameras using 1:1 satellite mock-ups. And just as on Earth with more common objects, things would be much simpler for the roboticist if the satellite's design had taken the robot's limitations into account. But then, where would the fun be?
The video below explains the EPOS initiative (even if you don't understand German, it's fun to watch the robots in action).
Photo and video: DLR
Samuel Bouchard is a co-founder of Robotiq in Quebec City.
Ever since Cicero’s De Natura Deorum ii.34., humans have been intrigued by the origin and mechanisms underlying complexity in nature. Darwin suggested that adaptation and complexity could evolve by natural selection acting successively on numerous small, heritable modifications. But is this enough? Here, we describe selected studies of experimental evolution with robots to illustrate how the process of natural selection can lead to the evolution of complex traits such as adaptive behaviours. Just a few hundred generations of selection are sufficient to allow robots to evolve collision-free movement, homing, sophisticated predator versus prey strategies, coadaptation of brains and bodies, cooperation, and even altruism. In all cases this occurred via selection in robots controlled by a simple neural network, which mutated randomly. ...
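The papers' actual experiments ran on physical and simulated robots, but the basic recipe the abstract describes -- random mutation of simple neural-network weights plus selection of the fittest controllers -- can be sketched in a few lines (population size, mutation rate, and selection scheme below are illustrative choices, not those of the cited studies):

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=100,
           sigma=0.1, seed=0):
    """Toy evolutionary loop: genomes are flat lists of controller
    weights; each generation keeps the fittest quarter (truncation
    selection) and refills the population with Gaussian-mutated
    copies of those parents."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 4]  # truncation selection
        pop = [[w + rng.gauss(0, sigma) for w in rng.choice(parents)]
               for _ in range(pop_size)]
    return max(pop, key=fitness)
```

In the robot experiments, `fitness` would be a real (or physically simulated) trial: how far the robot drove without collisions, how often it reached home, and so on.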
Once one looks beyond the blogosphere claptrap caused by any mention of words like "evolution" or "predator" in the context of robotics, there are some interesting insights in this body of work. Foremost, the experiments outlined in the essay offer an intriguing real-world illustration of evolution at work. Another is that they offer a powerful way to study biological phenomena, such as the evolution of group behaviors like communication or altruism, in a highly controlled, real-world system. Yet another is that embodiment matters: not only can the use of robots result in stronger testing of hypotheses and higher predictive power than purely computational models; in some cases it is the best way to gain insight into the complex behavior that results from a robot's (or animal's) reaction to the environmental changes caused by its own actions.
However, for all the work's merits, I think that its limitations may be even more revealing. In spite of the apparent behavioral complexity of the robots, all behaviors were achieved with extremely simple brains consisting of only a handful of neurons. This is surprising, given that even the brains of animals as simple as the nematode worm C. elegans exceed this by an order of magnitude, and animals like the fruit fly Drosophila melanogaster by a factor of 10,000. Clearly, we are far from evolving intelligent robots. Furthermore, the performance of the robots invariably stagnated after a few hundred generations of evolution across all experiments. For now it is unclear whether this is a direct consequence of using extremely simple brains, or is tied to deeper causes such as an inadequate genetic encoding. Either way, we are far from (re-?)creating open-ended evolution in the lab.
If you find this interesting, I suggest you have a look at the essay and accompanying videos.
NASA and General Motors have unveiled a humanoid robot called Robonaut2, or R2, that they say will be able to "assist astronauts during hazardous space missions and help GM build safer cars and plants."
The robot was designed with dexterous hands capable of using the same tools as humans do. NASA and GM boast that the robot "surpasses previous dexterous humanoid robots in strength," being able to lift a 9-kilogram weight (20 pounds) with its arms extended, but details are sketchy.
The R2 is based on the original Robonaut created by NASA and DARPA a decade ago [see photo, right]. It was a fairly advanced android for its time, but it never traveled to space.
The biggest upgrades from the original Robonaut are R2's thumbs, which now have four degrees of freedom (as opposed to three), and its overall speed, which has improved by a factor of four. One result of all of this engineering is the kind of breakthrough only a roboticist would swoon over: R2 can use both hands to work with a piece of flexible material. If that sounds simple, consider the amount of sensory data, cognitive processing, and physical dexterity needed to manipulate something that flows and bends in your fingers. In the series of baby steps that comprises robotics, R2 is leaping.
Still, the two existing R2 prototypes are essentially legless: GM has no need for a bipedal robot awkwardly swaying through its plants, and NASA plans to fit the robot with at least as many mobility platforms as its predecessor had. R2's lower half is intended to be modular, and so is its redesigned head, which could fit a variety of sensor suites depending on the mission or environment. Of course, until the agency's budget is sorted out, [Robonaut2 project manager Myron Diftler] can't confirm what those missions will be or when the robot could be deployed. Which means the robot, or some version of it, is more likely to show up in a GM plant before leaving the planet.
See the new Robonaut2 in action in the video below: