It's not yet clear exactly which parts of Nao's software platform will become open source and which will remain proprietary. The company said in a release that it will "share a significant part of [Nao's] source code with the research and developer community by the end of 2011."
Last year, when I interviewed Bruno Maisonnier, Aldebaran's founder and CEO, he hinted that making Nao more open was a desired move for the company, aimed at both building a stronger developer community and improving the robot's overall capabilities. Here's what he says in the release:
"Building robotics applications is challenging: applications are built upon many state-of-the-art complex technologies like speech recognition, object recognition, grasping, navigation and mapping. Applications need to be secure and be able to run in a constrained environment with limited resources. With over five years of experience in developing embedded systems for robotics platforms, we want to share our middleware, our core communication library, our cross-platform build tools and other essential modules with researchers, developers and emerging projects in humanoid robotics."
Open source is becoming a powerful trend in robotics. A year ago, another French company, Gostai, maker of robotics software and the Jazz telepresence robot, also announced it was opening the code of Urbi, its flagship product. This week, RoboDynamics introduced a US $3,000 personal robot called Luna that will likely run on open source software. Also this week, Google unveiled its Android Open Accessory, a project combining its mobile operating system, Android, and the popular DIY open source microcontroller Arduino. And then, of course, there's Willow Garage, the Silicon Valley firm that is perhaps the strongest backer of open source robotics and whose Robot Operating System, or ROS, and other open source projects are becoming ever more popular.
Not everyone in robotics has embraced open source, though. At least one major figure has said he doesn't favor the trend. In an interview with Forbes, iRobot CEO Colin Angle called into question the benefits of open source for the industry:
Angle also questions the benefits of some open-source robotic initiatives. He notes that giving away technology can hurt any business. “Solving the hard problems of robotics and giving those solutions away is the worst thing someone can do for the robotics industry,” says Angle. “You have to understand the economic engine behind things.”
That's a provocative statement. I'm sure many people would question the claim that open source robotics is bad for consumers. But who knows, maybe others would agree with him? What do you think?
Updated May 16, 2011: Fixed date when Urbi was released as open source.
The robot received a lot of attention, but there weren't really good videos showing how the thing operated. What we needed was an intrepid reporter willing to do a, uh, hands-on test with the bot. Now IDG has done just that and brings great footage of the Telenoid talking and wriggling under the grasp of their somewhat creeped-out correspondent.
It's clear from the video that the Telenoid can move its head and change its facial expressions, although only slightly, but still more than I expected. So for the person holding it, it's more than just a fancy, sperm-shaped giant telephone; whether you can feel the operator's "presence" via the robot I don't know, but according to the reporter who tried it, you might even want to hug it.
Why is this man about to smash his robot with a bat?
Don't worry. It's just another day at the German Aerospace Center (DLR), where researchers have built what's probably the most advanced -- and tough -- robotic arm ever. And they aren't afraid of testing its limits.
It's called the DLR Hand Arm System [photo, below]. It has an anthropomorphic design and packs 52 motors, ultra-miniaturized control electronics, a supercapacitor-based power supply, and a web of synthetic tendons stronger than Kevlar. But what makes it stand out compared to conventional systems is its ability to withstand collisions, thanks to ingeniously designed joints and actuators that can absorb and dissipate energy, much like our own arms and hands do.
This time, they wanted to assess the robustness of the upper arm and shoulder. Would it still work after a violent collision? There was only one way to find out:
Did you notice how, after getting struck, the arm gently touches the yellow ball, exactly as it had done before? That's to show that its mechanisms and electronics are as good as new. If I could design a robot like this, I would be grinning just like the guy in the video.
The researchers, members of DLR's Institute of Robotics and Mechatronics, described the overall system in detail for the first time today at the IEEE International Conference on Robotics and Automation (ICRA), in Shanghai.
Markus Grebenstein, the Hand Arm System's lead designer, tells me that robustness is essential if we want to deploy service robots in the real world, where collisions are likely to happen. Even small shocks, he says, can damage conventional robots, which rely on motors coupled to joints in mechanically stiff configurations.
This means that, in a collision, shock and vibration spread through these rigid structures and, because they lack a mechanism to dissipate the energy, they can tear themselves apart. Even newer impedance control techniques, which let robots move compliantly when a person or object pushes on them, can't help in collision situations, because feedback controllers and motors aren't fast enough.
The DLR team noticed this limitation in its own robots. One of them, Justin, is able to catch a ball thrown at him -- and even two balls at once -- most of the time. But in some cases, when the robot computed an inaccurate trajectory for the ball, it might stretch its fingers completely and the 80-gram ball, upon hitting the fingertips, would damage the hand. By comparison, a handball goalkeeper withstands the impact of a 480-gram ball moving at 120 kilometers per hour, or nearly five times as fast, and delivering 100 times as much energy.
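The energy comparison is easy to sanity-check with back-of-the-envelope kinetic energy figures. (The ~25 km/h speed for the ball thrown at Justin is my assumption, inferred from the "nearly five times as fast" figure; the handball numbers are as stated above.)

```python
# Back-of-the-envelope check of the ball-impact comparison.
# Assumed: the ball thrown at Justin moves at ~25 km/h (inferred from
# "nearly five times as fast"); the handball at 120 km/h as stated.

def kinetic_energy(mass_kg, speed_kmh):
    """Kinetic energy in joules: E = 1/2 * m * v^2."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return 0.5 * mass_kg * v ** 2

justin_ball = kinetic_energy(0.080, 25)   # ~1.9 J
handball    = kinetic_energy(0.480, 120)  # ~267 J

# The handball delivers on the order of 100 times as much energy.
print(round(justin_ball, 1), round(handball), round(handball / justin_ball))
```

The exact ratio depends on the assumed throw speed, but it comes out in the neighborhood of 100x, consistent with the figure quoted above.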
The solution, the DLR group figured, was to completely rethink the design of robotic joints and actuators. For the upper arm, the researchers designed intricate shock-absorbing structures that they call "floating spring joints." As the name implies, each joint consists of two cam disks coupled to a spring, which helps dissipate energy and damp vibrations. The mechanism [diagram, above] also differs from conventional robot joints in that it uses not one but two motors: a powerful motor effectively moves the joint, while a smaller one adjusts the position of the cams and the spring to vary the joint's stiffness. A harmonic drive gear interfaces the two motor modules.
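DLR's cam mechanism is far more sophisticated than this, but the core benefit of putting elasticity between motor and link is easy to illustrate: a series spring limits the peak torque a collision can transmit back into the gearbox. A toy model, with all numbers invented for illustration:

```python
# Toy model of a series-elastic joint: the motor drives the link through
# a spring, so an impact deflects the spring instead of slamming the gear.
# All parameters are invented for illustration.

def peak_transmitted_torque(k_spring, impact_velocity, inertia=0.01):
    """Peak spring torque after an impulsive hit on the link.

    With the motor held still, the link (inertia J) oscillates on the
    spring; energy conservation (1/2*J*v^2 = 1/2*k*theta^2) gives peak
    deflection theta = v * sqrt(J / k), so peak torque = k * theta
    = v * sqrt(J * k).
    """
    return impact_velocity * (inertia * k_spring) ** 0.5

stiff = peak_transmitted_torque(k_spring=2000.0, impact_velocity=5.0)
soft  = peak_transmitted_torque(k_spring=200.0,  impact_velocity=5.0)
print(stiff, soft)  # the softer spring transmits far less peak torque
```

The same impact hits a softer joint with a fraction of the peak torque -- the energy goes into spring deflection instead of the gear teeth.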
Similarly, the hand also relies on variable stiffness actuation. As we described before in detail, the hand has 38 tendons, each connected to an individual motor. Two motors drive each finger articulation [diagram, right], in an antagonistic arrangement (when the motors turn in one direction, the articulation bends one way; when the motors reverse direction, the articulation bends the other way). Coupled to the tendons, which are made from a super strong synthetic fiber called Dyneema, is a spring-based nonlinear elastic mechanism. It allows the fingers to store and release energy -- again, just like our own hands do.
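A standard way to capture this behavior on paper is an antagonistic pair with nonlinear springs: if tendon tension grows as the square of stretch, co-contracting both motors stiffens the joint without moving it. A minimal sketch (the spring coefficient and pulley radius are invented, not DLR's values):

```python
# Antagonistic tendon pair with quadratic springs (tension = A * stretch^2).
# Joint torque is the difference of the two tendon tensions times the
# pulley radius; joint stiffness is its derivative w.r.t. joint angle.
# Parameters are invented for illustration.

A = 1000.0  # spring nonlinearity coefficient
R = 0.01    # pulley radius in meters

def torque_and_stiffness(pretension_stretch, joint_angle):
    """Torque and stiffness for a given co-contraction level."""
    # Rotating the joint stretches one tendon and slackens the other.
    x1 = pretension_stretch + R * joint_angle
    x2 = pretension_stretch - R * joint_angle
    torque = R * (A * x2 ** 2 - A * x1 ** 2)  # net torque, pulls back to 0
    # |d(torque)/d(angle)| works out to 4 * A * R^2 * pretension:
    stiffness = 4 * A * R ** 2 * pretension_stretch
    return torque, stiffness

_, k_relaxed = torque_and_stiffness(pretension_stretch=0.005, joint_angle=0.0)
_, k_tense   = torque_and_stiffness(pretension_stretch=0.020, joint_angle=0.0)
print(k_relaxed, k_tense)  # more co-contraction -> stiffer joint
```

Note that stiffness depends only on the co-contraction, not the joint angle, so position and stiffness can be commanded independently -- exactly what a variable stiffness hand needs.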
And why vary the stiffness of the fingers anyway? Because different situations demand different levels of stiffness. If you embed lots of springs and other shock-absorbing structures into your arm system, it will be able to withstand collisions, but when you try to pick up a grape from a table, it will vibrate and fail.
By varying its stiffness, the DLR system can perform well in terms of both robustness and accuracy. And it can perform tasks that other robots would have difficulty doing, such as using a hammer to drive nails into a wooden board. Watch:
The complete Hand Arm System is an engineering masterpiece. The shoulder has 3 degrees of freedom, and the elbow has 1 DoF. Each hand alone has 19 DoF, with two additional DoF on the wrist, which uses a "spherical antiparallelogram mechanism" powered by four motors, allowing for 30-degree side motion and 90-degree flexion/extension.
I mean, just look at this thing [photo, below]. It's crammed with motors, electronics, and cables, and yet it can endure vicious mechanical abuse without losing performance.
I suspect the researchers will be working on some of these capabilities in the near future. One thing is certain, though: Expect more awesome videos from these guys. (UPDATE: As if to tease me, Grebenstein just sent me the photos below.)
Updated May 13: Corrected explanation of Justin catching balls; added more photos.
Bipedal robots, whether they're human-sized or not, are generally heavy and unstable and (with few exceptions) don't lend themselves to dynamic motions like running and jumping. Researchers from Ohio State University and the University of Notre Dame have developed an experimental biped called KURMET that's specifically designed for controllable, repetitive jumping*:
That big arm thing isn't being used to aid in the jumping at all; it's just there to simplify the system a little bit. Theoretically, it would be possible to do all of this research on an untethered, fully three-dimensional robot, but for the purposes of figuring out how to make a robot hop in a stable manner, you only really need to focus on whether it's tipping forward or backward as it jumps. The “fuzzy” term that you see in the video refers to how KURMET is controlled: the controller's rules operate on degrees of membership rather than crisp thresholds, and its behavior is tuned through a training process, so there isn't always a single, precisely pre-defined action it's required to take for a given input -- which is why it's called a fuzzy control system.
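For readers unfamiliar with the term, the flavor of fuzzy control is easy to show: overlapping membership functions let neighboring rules blend smoothly instead of switching abruptly. A minimal sketch (the input, output, and rules here are invented for illustration -- this is not KURMET's actual rule base):

```python
# Minimal fuzzy controller sketch: blend rules by degree of membership
# instead of picking one with a hard threshold. The input (body pitch)
# and output (corrective torque) are invented for illustration.

def tri(x, left, peak, right):
    """Triangular membership function: 0 outside [left, right], 1 at peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fuzzy_torque(pitch_deg):
    """Weighted average of rule outputs (centroid-style defuzzification)."""
    rules = [
        (tri(pitch_deg, -30, -15, 0),  +5.0),  # "leaning back -> push forward"
        (tri(pitch_deg, -15,   0, 15),  0.0),  # "upright -> do nothing"
        (tri(pitch_deg,   0,  15, 30), -5.0),  # "leaning forward -> push back"
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

print(fuzzy_torque(0.0))   # upright: output is 0.0
print(fuzzy_torque(-7.5))  # between two rules: a smooth blend, 2.5
```

Because adjacent membership functions overlap, the output varies continuously with pitch rather than jumping between discrete actions.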
In the future, the researchers hope to apply evolutionary learning strategies to push KURMET's performance boundaries, which may or may not include doing flips and playing hopscotch.
The researchers -- Yiping Liu, Patrick Wensing, David Orin, and James Schmiedeler -- describe their work in a paper, "Fuzzy Controlled Hopping in a Biped Robot," presented yesterday at the IEEE International Conference on Robotics and Automation (ICRA), in Shanghai.
* Among the most incredible hopping machines ever created are the robots built by Marc Raibert and his team back when he was an MIT professor and directed the MIT Leg Lab. Raibert went on to co-found Boston Dynamics. Some of his robots are now on display at the MIT Museum.
Normally, when a robot wants to pick up something it's never seen before, it either has to download a 3D model of the object, build and analyze its own 3D model, or be trained by a human on the right way to grip. Unfortunately, none of these things is really practical in the fast-paced world of grocery checkout lines.
Researchers at Stanford University have figured out that in order to pick something up, all you really need to know is whether a piece of it has the same basic shape as your gripper. If it does, you can most likely grip it tolerably well -- experimentally, the success rate is better than 90 percent. Best of all, you can extract this shape information from one simple (and quick) 3D scan, even if you've got a big, cluttered pile of stuff. Once the robot has picked up an object, it holds it up to its cameras to scan for the barcode, adds it to your tab, and bags it for you. Watch a demo of their method implemented on a PR2:
Don't let the fact that this video is sped up anywhere from 5x to 25x worry you; this is just research code. There's a lot of optimization that could be done to increase the speed by “several orders of magnitude,” according to the researchers. And while you probably aren't going to see PR2s down at your local Trader Joe's, the code being developed here could conceivably find its way into some kind of grocery robot in the future, or even into a robot that picks up and puts away stuff around your house.
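Underneath, the test is simple: does some region of the scanned object present a span narrow enough to fit between the gripper's jaws? A toy 2D version of that search (the point data, window size, and gripper width are all invented; the actual method works on 3D range data):

```python
# Toy version of gripper-shape matching: slide a window over a profile
# of an object and look for a region narrow enough to fit the gripper.
# Data and gripper width are invented; the real system uses 3D scans.

GRIPPER_OPENING = 0.06  # meters the jaws can span (invented)

def find_graspable_spans(profile, window=3):
    """Return (start_index, width) for windows that fit in the gripper.

    `profile` is a list of object widths (m) sampled along its length.
    """
    hits = []
    for i in range(len(profile) - window + 1):
        width = max(profile[i:i + window])  # widest point under the jaws
        if width <= GRIPPER_OPENING:
            hits.append((i, width))
    return hits

# A bottle-like profile: wide body, narrow neck.
bottle = [0.09, 0.09, 0.08, 0.05, 0.03, 0.03, 0.04]
print(find_graspable_spans(bottle))  # only the neck region qualifies
```

The appeal of this kind of test is that it needs no object model or training -- just one scan and a search, which is what makes it fast enough for a checkout line.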
The Stanford team -- Ellen Klingbeil, Deepak Rao, Blake Carpenter, Varun Ganapathi, Andrew Y. Ng, Oussama Khatib -- describe the research in a paper, "Grasping with Application to an Autonomous Checkout Robot," presented today at the IEEE International Conference on Robotics and Automation (ICRA), in Shanghai.
That mystery robot that we've been teased about for months now, originally rumored to be something developed by either Apple or Google, is in fact a project by a company called RoboDynamics. It's called Luna, it's a personal robot designed for people to use at home, it's fully programmable, and will start shipping later this year.
As of right now, the embargo has been lifted and we're allowed to tell you more about Luna and how RoboDynamics, in Santa Monica, Calif., hopes that it'll revolutionize robotics in the same way that the PC revolutionized computing and the iPhone and Android are revolutionizing mobile electronics.
Before we get to the overall concept, here's a rundown of Luna's hardware and software specs, which RoboDynamics says are subject to change:
Computer
Processor: Dual-core Atom, 2 GHz
Graphics: Nvidia 9400M
Storage: 8 GB flash, expandable to 32 GB
Wireless: Wi-Fi (802.11g); optional Bluetooth via Luna Expansion Port (LXP)
Cellular: Optional 3G or 4G via Luna Expansion Port (LXP)
Operating system: LunaOS (includes Poky Linux, ROS, and other packages)

I/O
Display: 8-inch capacitive touchscreen LCD
Camera: 8-megapixel primary camera with digital zoom
Microphone: 3-microphone array with DSP front end and sound localization
Speakers: Yes (no specs available yet)
Sensors: 10-bit wheel encoders, PrimeSense 3D sensor
Expansion ports: 7 Luna Expansion Ports (LXP), each with a standard USB Type A female connector, regulated 12 V and 5 V power, and mounting holes

Power
Battery: 12 V, 26 Ah sealed lead-acid (SLA)
Battery life: 4 to 8 hours
Charge time: 4 to 8 hours for a full charge

Dimensions
Height: 5'2" (157 cm); base: 22 in (56 cm)
Weight: 65 lb (30 kg)
Clearly, this is not some kind of fancy, futuristic new platform. It's got a pretty good computer in it, with a pretty good graphics card. It's got some pretty good sensors, pretty good mobility, and pretty good design. All very pretty good. So why get excited?
Because, at least in principle, Luna could do something that no other robot has been able to accomplish: bring a programmable, general-purpose robot to a vast number of home users and establish an ecosystem for developers to create and sell software that gives the robot more capabilities.
Let's use the computer as an analogy. Starting with the Apple II (or thereabouts, our memory only goes back so far), it was possible to buy a computer system that would come out of the box offering immediate usefulness without requiring specialized technical knowledge. And that's what made everybody want a computer: it would immediately make your life better, and the ability to teach it new things would make it increasingly useful.
To take the analogy further, and to get closer to the idea behind Luna, think about the iPhone. You buy it because it makes phone calls and you can get the Internet on it, but that's just the beginning. What makes the iPhone (and Android platforms) stand out from other phones is the fact that you can make it increasingly useful, thanks to the app store. And not just that, but making the iPhone useful by writing apps has become lucrative, which makes the iPhone itself more lucrative, and so on.
RoboDynamics CEO Fred Nikgohar [the guy in the suit, right] wants Luna to do for robotics what smartphones did for mobile computing. He argues that the robotics industry has failed to make home robots (beyond toys, kits, and vacuum cleaners) available to consumers, and that even open-source software platforms like Willow Garage's ROS are still too hard for people without a PhD in robotics. He hopes that "a well-designed, open, and affordable personal robot will kickstart a rush of innovation."
We applaud the idea, but we see some hurdles along the way. RoboDynamics had previously mentioned a price tag of around US $1,000, which would make Luna a very competitive offering. To put that in perspective, remember that a TurtleBot or a Bilibot will set you back $1,200. And they're not five feet tall with touchscreens. But now RoboDynamics is saying that $1,000 is a target price and that the initial model, to ship later this year, will sell for $3,000.
That's still reasonable for the hardware you're getting, but well above the psychologically appealing $1,000 price point, which would certainly entice a lot more people. Whether RoboDynamics will be able to bring the cost down remains to be seen.
Another issue is software. We haven't had a chance to check out the robot's Linux-based operating system, called LunaOS, and we haven't seen Luna's interface system, the SDK, and the Luna App Store that RoboDynamics says will be available. Software, perhaps even more than hardware, will be key to Luna's success. If the robot ships with good apps, and more apps start to show up on the store, Luna's appeal increases dramatically. But so far this is all a big question mark.
In the next few weeks, RoboDynamics plans to release more information about Luna's first edition, as well as future models, prices, and availability (if you're interested, go to their website and fill out the form). We'll report back as soon as we have a chance to meet the robot in person and check out its full capabilities.
We've seen all kinds of robots that are able to make their way up walls, but few if any of them have been what you'd call dynamic. That is, those robots clamp themselves to something, move, clamp again, and then move again. A dynamic robot is more like a gymnast, relying on motion and inertia to actively propel itself upward, using walls and other surfaces to its advantage.
ParkourBot, designed by researchers at Carnegie Mellon and Northwestern University, takes all the lessons they've learned from dynamic walking robots and brings them to the vertical dimension. Well, near vertical. At this point, the robot is being tested on an angled air table to simplify the system, and they're also cheating a little bit (their words!) by relying on a gyroscope to keep the robot from spinning around like a pinwheel.
So okay, it may not exactly be climbing buildings, but that's definitely the goal. The next step is to add variations and gaps in the walls to teach ParkourBot to adapt on the fly, and once it gets that figured out, removing the gyro will open up some exciting possibilities for actual jumping and leaping and climbing. ParkourBot, watch and learn.
The researchers -- Amir Degani, Siyuan Feng, H. Benjamin Brown, Kevin M. Lynch, Howie Choset and Matthew T. Mason -- describe the project in a paper, "The ParkourBot: A Dynamic Bow Leg Climbing Robot," presented today at the IEEE International Conference on Robotics and Automation (ICRA), in Shanghai.
The quadrotor is not doing everything by itself. It's getting help from the environment, an enclosed space called the Flying Machine Arena, which is equipped with multiple motion capture cameras. The researchers devised algorithms to transform the vision data from the cameras into control commands for the quadrotor. The machine can hover in place or it can follow pre-programmed trajectories. Manual control is also possible using a "set point tracking" device.
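The controllers described in the paper are considerably more involved, but the basic mocap-to-command loop can be sketched as set-point tracking: compare the measured position with the desired one and command accelerations proportional to the error and to how fast the vehicle is moving. The gains and command format below are my invention, purely for illustration:

```python
# Sketch of set-point tracking from motion-capture data: a PD loop per
# axis. Gains and command format are invented; the Flying Machine
# Arena's actual controllers are more sophisticated.

KP, KD = 4.0, 3.0  # proportional and derivative gains (invented)

def track_setpoint(position, velocity, setpoint):
    """Commanded acceleration per axis from mocap position/velocity."""
    return [KP * (sp - p) - KD * v
            for p, v, sp in zip(position, velocity, setpoint)]

# Hovering 1 m away from the target along x, currently motionless:
cmd = track_setpoint(position=[0.0, 0.0, 1.0],
                     velocity=[0.0, 0.0, 0.0],
                     setpoint=[1.0, 0.0, 1.0])
print(cmd)  # accelerates along x only
```

The derivative term is what keeps the vehicle from overshooting the set point: as it picks up speed toward the target, the commanded acceleration backs off.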
Hehn and D'Andrea, an IEEE Fellow and co-founder of Kiva Systems, which develops warehouse automation robots (disclosure: he's also a member of IEEE Spectrum's editorial advisory board), describe the project in a paper, "A Flying Inverted Pendulum," presented today at the IEEE International Conference on Robotics and Automation (ICRA), in Shanghai.
Japanese prototype of a train that levitates on cushions of air.
High-speed trains are huge in Asia, but barring a catastrophe, most of them are designed to stay firmly on the ground, running on rails. There are plenty of good reasons not to run on rails, though, one of which is that you can go much faster without all that friction. This is the idea behind maglev trains, but a lot of wind drag still builds up between the bottom of a maglev train and its track, making such trains less efficient (and, combined with other problems, very costly).
A ground-effect vehicle takes advantage of this fast-moving air, using some stubby little wings to fly just above the ground, like a maglev without the mag. This is a tricky thing to do, since you have to control the vehicle more like an airplane than a train, dealing with pitch, roll, and yaw and not just the throttle. A Japanese research group led by Yusuke Sugahara at Tohoku University has built a robotic prototype of a free-flying ground-effect vehicle [photo above] that they're using to test an autonomous three-axis stabilization system:
The researchers are looking to use this robot to generate a dynamic model of how vehicles like these operate, which they hope to apply to a manned experimental prototype train [first photo at the top] that can travel at 200 kilometers per hour in a U-shaped concrete channel that keeps it from careening out of control.
Later, the plan is to scale the same technology up to power a large commuter rail system called the Aero Train [concept below]. If this is the future of commuting, we'll literally be flying to work some day.
Sugahara and his colleagues describe the project in a paper, "Levitation Control of Experimental Wing-in-Ground Effect Vehicle along Z Axis and about Roll and Pitch Axes," presented today at the IEEE International Conference on Robotics and Automation (ICRA), in Shanghai.
In 2009, Yaskawa equipped three of its Motoman industrial robots with lightsabers and made them fight until there was only one bot standing -- er, perform a choreographed dance. Now it appears Yaskawa has realized that when you give lightsabers to robots, people expect to see a Jedi battle, not ballet moves. Check out the demo the company put together for this year's IEEE International Conference on Robotics and Automation (ICRA), in Shanghai.