KUKA Manipulator + Disney Thrill Ride = RoboCoaster
What happens when you put KUKA roboticists and Disney "imagineers" in the same room?
NASA and General Motors have unveiled a humanoid robot called Robonaut2, or R2, that they say will be able to "assist astronauts during hazardous space missions and help GM build safer cars and plants."
The robot was designed with dexterous hands capable of using the same tools as humans do. NASA and GM boast that the robot "surpasses previous dexterous humanoid robots in strength," being able to lift a 9-kilogram weight (20 pounds) with its arms extended, but details are sketchy.
The R2 is based on the original Robonaut created by NASA and DARPA a decade ago [see photo, right]. It was a fairly advanced android for its time, but it never traveled to space.
UPDATE: Popular Mechanics has more technical details and specs:
The biggest upgrades from the original Robonaut are R2's thumb, which now has four degrees of freedom (as opposed to three), and its overall speed, which has improved by a factor of four. One result of all of this engineering is the kind of breakthrough only a roboticist would swoon over: R2 can use both hands to work with a piece of flexible material. If that sounds simple, consider the amount of sensory data, cognitive processing and physical dexterity needed to manipulate something that flows and bends in your fingers. In the series of baby steps that comprises robotics, R2 is leaping.
Still, the two existing R2 prototypes are essentially legless—GM has no need for a bipedal robot awkwardly swaying through its plants, and NASA plans to fit the robot with at least as many mobility platforms as its predecessor. R2's lower half is intended to be modular, and so is its redesigned head, which could fit a variety of sensor suites, depending on the mission or environment. Of course, until the agency's budget is sorted out, [Robonaut2 project manager Myron Diftler] can't confirm what those missions will be, or when the robot could be deployed. Which means the robot, or some version of it, is more likely to show up in a GM plant before leaving the planet.
See the new Robonaut2 in action in the video below:
Photos and videos: NASA
Apparently Professor Dennis Hong at the Robotics & Mechanisms Laboratory (RoMeLa) at Virginia Tech is exploring robotic locomotion not only with strange multi-legged robots but also with robots with no legs at all.
When we wrote about iRobot's blob 'bot, I should have known that others were working on similar chemical actuation projects. It turns out Professor Hong and his team are developing an amoeba-inspired robot called ChIMERA (Chemically Induced Motion Everting Robotic Amoeba), which can slide using a technique known as whole-skin locomotion.
To see the "amoebot" in action, watch the video below. It's a talk Professor Hong gave at TEDxNASA -- ChIMERA stars at 07:27.
Great little movie by Honda. It's a short documentary film called "Living with Robots," which debuted at the 2010 Sundance Film Festival last month. My favorite part is the footage of Asimo during its early days -- compare that to today's model. Go Hirose-san!
Robots have revolutionized the factory. What about the field?
Over the past century, agriculture has seen an explosion in productivity, thanks to things like mechanization, synthetic fertilizers, selective breeding, and, of course, pesticides -- lots of them.
But it remains to be seen what role robots will play in working the fields. Automation was possible in factories because tasks were repetitive and the environment well-defined. A robot arm welding a car chassis does the exact same job over and over. When it comes to crops, though, everything changes: the environment is unstructured and tasks -- like picking a fruit -- have to be constantly readjusted.
It's a huge challenge, but some companies are up to the task. Take Vision Robotics, for example. It is using advanced vision and localization techniques to develop systems like its autonomous grape-vine pruner.
We've written about them before; now they've added the impressive (and bucolic) video above, which is a demonstration the company gave to the grape and wine industry. The company, based in San Diego, Calif., developed a vision system that uses stereoscopic cameras to create a virtual 3D image of the grape vines. Articulated cutting arms do the trimming at an exact angle and location.
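The stereoscopic approach boils down to triangulation: the same vine feature appears shifted between the two camera images, and the size of that shift (the disparity) determines depth. Here is a minimal sketch of that calculation in Python; the focal length, baseline, and disparity values are entirely hypothetical, since Vision Robotics hasn't published its camera parameters.

```python
# Depth from stereo disparity, the core of stereoscopic 3D vision:
#     Z = f * B / d
# where f is the focal length in pixels, B is the baseline (distance
# between the two cameras) in meters, and d is the disparity in pixels.
# All numeric values below are made up for illustration.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Return the depth in meters of a feature matched in both images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (feature must be matched)")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 0.12 m camera baseline, 42 px disparity
z = depth_from_disparity(700.0, 0.12, 42.0)
print(round(z, 2))  # 2.0 -- the feature is about 2 meters away
```

Repeating this for every matched feature yields the "virtual 3D image" of the vines that the cutting arms then work from.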
From what I understand, their goal is to have a tractor equipped with the articulated robotic arms. Mobility is a priority, and the machines must be able to access most areas of the vines being trimmed. The tractor might be driven by a person, but everything else would be controlled by an on-board computer.
Another promising application is fruit picking. Again, a robot would distinguish between fruit and leaves using vision. A camera mounted on the robotic arm detects colors and compares them to reference data in its memory. A match means the fruit gets picked.
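In its simplest form, that color comparison is just a distance check against a stored reference color. The sketch below shows the idea in Python; the reference RGB value and the match threshold are hypothetical, not figures from any actual fruit-picking system.

```python
# A minimal sketch of color-based fruit detection: compare a pixel's RGB
# value against a stored reference color and declare a match when the
# Euclidean distance falls below a threshold. The reference color and
# threshold here are assumptions for illustration only.

import math

RIPE_APPLE_RGB = (180, 30, 40)  # hypothetical reference color
THRESHOLD = 60.0                # hypothetical match tolerance

def is_fruit(pixel, reference=RIPE_APPLE_RGB, threshold=THRESHOLD):
    """Return True when the pixel is close enough to the reference color."""
    dist = math.sqrt(sum((p - r) ** 2 for p, r in zip(pixel, reference)))
    return dist < threshold

print(is_fruit((175, 40, 35)))  # a reddish pixel  -> True
print(is_fruit((40, 160, 50)))  # a leafy green pixel -> False
```

A real system would work in a lighting-robust color space and combine color with shape cues, but the match-against-reference logic is the same.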
In the next few decades, we can expect robots to work tirelessly in our fields. Just like they do in our factories.
A bigger BigDog is coming.
The robot, called Legged Squad Support System, or LS3, will be able to navigate rough terrain, carrying 180 kilograms (~400 pounds) of load and enough fuel for missions covering 32 kilometers (~20 miles) and lasting 24 hours.
Boston Dynamics says LS3 won't need a driver, because it will automatically follow a human leader using computer vision or travel to designated locations using sensors and GPS.
Breeding, er, building the robot will take 30 months and cost US $32 million. The first LS3 prototype is expected to debut in 2012.
"If LS3 can offload 50 lbs [23 kg] from the back of each soldier in a squad, it will reduce warfighter injuries and fatigue and increase the combat effectiveness of our troops," Marc Raibert, president of Boston Dynamics and principal investigator for the program, said in a statement.
The company, based in Waltham, Mass., is teaming up with Bell Helicopter, Carnegie Mellon, and NASA's Jet Propulsion Laboratory, among others, to develop LS3.
The LS3 follows in the footsteps of BigDog, and Raibert expects the new robot to make "a major leap forward." We can't wait for the videos.
Illustration: Boston Dynamics
I'm calling for case studies, stories, anecdotes of the interaction between intelligent robots and people in public spaces and working places for a feature page in next quarter's IEEE Robotics and Automation magazine.
Here's why: How many people do you know who treat their PCs like a pet, or fear their laptop will attack them in the night? Now give that laptop its own set of wheels, set a doll on top, and suddenly the story changes: The perceptive area of our brains flashes neon: "Human!"
When computer users encounter a problem with their system, they blame the software provider or the malevolent hacker who sent them a virus. They attribute any intent to the far side of the keyboard, not inside the box.
However, the fact that a laptop can be used to actuate motors and drive around a building on its own may change the perception from a machine controlled by human beings to a machine that is a being itself.
Will that perception evolve over time now that we have commercial robot operating systems like Motivity and hobby systems like SPARK and Mindstorms that let children and computer-literate adults program interactive and intelligently navigating robot applications? Or will motion and a face continue to cause people to treat robots like human beings?
I'm looking for stories about people interacting with real robots, in the workplace, in public or in the classroom, that show how neophytes feel when they first meet robots in the course of their normal daily activities, and, if possible, how those perceptions or interactions change over time.
Please send your contributions to jdietsch [at] mobilerobots.com
Jeanne Dietsch is co-founder and CEO of MobileRobots, based in Amherst, N.H.
The above video demonstrates the workings of a professional 3D printer. Think of all the millions of things you could do with such a wonderful device! But starting at $20k, and usually $30k after adding the required 'extras', you'll quickly forget about purchasing your own altogether.
You may also have heard of 'the 3D fab that fabricates itself.' But when you realize the amount of technical know-how required for it to 'fabricate itself,' and the lack of precision the machine offers, once again the dream dies.
So why are professional 3D printers so expensive? Anyone who has been seriously in the market for one can tell you it's the market strategy of the big players. First, they don't actually publish prices. They don't. You have to contact them by phone, interview in person with a salesman, and just before you sign the contract they mention an additional ~$10k of equipment you need that doesn't come standard. It's like a car salesman who, at the end of reaching an agreement, asks, "Would you like wheels for your car?"
For example, there is the $1k heater to melt away the support wax -- which is actually not much different from a simple $30 toaster oven. Or the $1k trolley with wheels that you could otherwise hand-make in 20 minutes for $20. And I kid you not, the salesman even said to me, "It's a special price just for you."
Their strategy is to see how much you are willing to pay for it.
Perhaps they understand the market better than I do, but if they mass-manufactured the printers, dramatically dropping costs, and sold the units at a price the masses could afford, they could make significantly higher profits on significantly higher sales.
Fortunately for us DIY dreamers, in a major turn of marketing strategy, it appears Stratasys has recently reached the same conclusion.
Stratasys, a leading 3D printer manufacturer, has signed a definitive agreement with HP for Stratasys to manufacture an HP-branded 3D printer. With the reduced costs associated with mass manufacturing, and a potentially large new user base, both companies are set to take 3D printing mainstream.
So how much will their new 3D printer set you back? What precision can we expect? How much will a refill cartridge cost? Well, nothing concrete has been announced yet.
Stratasys' most affordable unit starts at $15k, so given my own experience in mass manufacturing of electronics, I'd say we are looking at around $5k per unit with 'acceptable' precision. Units could even be sold below cost, perhaps for as little as $3k, with the companies calculating that they can make their main profit from 'specially patented' refill cartridges.
Perhaps still too expensive for a personal at-home printer, but definitely affordable for small and midsized companies. And it's only a matter of time until more affordable printers from jealous competitors join in.
I can’t fault people for writing articles that make use of the term “killer robots.” It’s sexy, and it attracts attention. I mean, I kinda just did it myself, didn’t I? An article by Johann Hari for the opinions section of The Independent takes this several steps too far, however, by making false assertions about the motives and capabilities of unmanned combat robots:
Every time you hear about a “drone attack” against Afghanistan or Pakistan, that’s an unmanned robot dropping bombs on human beings. Push a button and it flies away, kills, and comes home. Its robot-cousin on the battlefields below is called SWORDS: a human-sized robot that can see 360 degrees around it and fire its machine-guns at any target it “chooses”.
Why is “chooses” in quotes? It’s in quotes because that’s not the way it works, the author knows that’s not the way it works, and he’s covering his ass. Here’s the next paragraph:
At the moment, most are controlled by a soldier – often 7,500 miles away – with a control panel. But insurgents are always inventing new ways to block the signal from the control centre, which causes the robot to shut down and “die”. So the military is building “autonomy” into the robots: if they lose contact, they start to make their own decisions, in line with a pre-determined code.
See those quotes again? If you’ve been reading this blog long enough, you should be able to figure out why they’re there. Obviously, the robots don’t “die.” And “autonomy” is in quotes because the previous paragraph talked about firing a machine gun at autonomously chosen targets, which is not at all the way it works. In fact, the way it works is the exact opposite of what the author is insinuating with his quotation marks: when a combat robot loses its signal, the only active action it will take is to try to reacquire the signal, or (in some cases) try to get home, even if that’s impossible. It won’t just start shooting at people.
This, really, is what bothers me most about these articles: They’re basically full of lies of a sort, designed to scare people who don’t know the facts. No, the author isn’t actually publishing false statements (I guess), but that stuff in quotes isn’t exactly true, and it’s only in there so that people who don’t take the time to find out what is true (most people) will use it to jump to the obvious, and wrong, and inevitably terrifying conclusion.
More, including a pretty funny video of a robot totally NOT killing the Japanese prime minister, after the jump.
The point behind combat robots is, of course, that it’s better to have a robot in combat than a human, because if something goes wrong, it’s better to have a destroyed robot than a dead person. So that’s good, right?
But the evidence punctures this techno-optimism. We know the programming of robots will regularly go wrong – because all technological programming regularly goes wrong. Look at the place where robots are used most frequently today: factories. Some 4 per cent of US factories have “major robotics accidents” every year – a man having molten aluminum poured over him, or a woman picked up and placed on a conveyor belt to be smashed into the shape of a car. The former Japanese Prime Minister Junichiro Koizumi was nearly killed a few years ago after a robot attacked him on a tour of a factory. And remember: these are robots that aren’t designed to kill.
Think about how maddening it is to deal with a robot on the telephone when you want to pay your phone bill. Now imagine that robot had a machine-gun pointed at your chest.
Robots find it almost impossible to distinguish an apple from a tomato: how will they distinguish a combatant from a civilian?
Don’t mind me while I pound my head against my keyboard… Let’s start with the easy one, the thing about Japanese Prime Minister Junichiro Koizumi being “nearly killed” a few years ago by an industrial robot that “attacked” him. Here’s a video of what I’m pretty sure is the attack:
I guess maybe he has asthma or something, and that’s hysterical laughter because everybody is in shock from their brush with death.
As for the man who got aluminum poured over him, or the woman who got smashed into the shape of a car… I can’t find any mention of these events, or events similar to these, and I have to believe that such things would have made the news, since “killer robots” are, after all, so sexy. I don’t want to say that the author made these incidents up to shock people, but if anyone can find references to anything like this, I’d be much obliged.
It’s certainly true that programming is fallible, and hardware is fallible, no matter what a robot is designed to do. I won’t belabor the fact that humans are fallible too (and less easy to troubleshoot and reprogram) since I’ve done it before (a few times). But you can’t compare combat robots to automated telephone systems. That’s just stupid, and the only reason to tell someone to imagine that is to scare them. You might as well compare apples to tomatoes. Once again, I won’t belabor the fact that robots can be programmed with the same type of combat rules that humans follow, and if you see one red fruit that’s shooting at you and one red fruit that’s not, it isn’t too difficult to tell which one’s the tomato (because tomatoes are always the bad guys).
Johann Hari does raise a couple of relevant points toward the end of the article… It is important to consider whether it becomes easier to participate in an armed conflict when robots are put at risk instead of humans, and what reaction the use of robots engenders in others. I still maintain that robots can be used responsibly, and that getting humans out of combat is a good thing. But either way, these are human issues, not robot issues. Robots are what we make them, and what we make of them, nothing more, and nothing less. Even autonomous robots are simply carrying out a series of commands programmed into them by a human; they’re not (in the strictest sense) making decisions on their own.
I’ve gone on long enough, so let me just say this: the ethics of using armed robots in combat is a huge issue that’s rapidly becoming more relevant, and it’s important to have an intelligent and well-informed debate on the subject.
This article, and articles like it, do not provide an intelligent and well-informed perspective. This article is designed to scare people who are unfamiliar with robotics. I mean, it’s not even opinion… Opinion takes facts and gives a perspective, but you have to start with facts, not hyperbole.
I probably shouldn’t waste my time and energy getting so upset at crap like this, but the fact is, a lot of people read this kind of thing, and it gives a horribly negative impression of robotics in general, not just military robotics. It sets back the industry, it sets back the hobby, and it makes it harder for things like household and medical robots to get accepted into daily life.
[ The Independent ]
There are plenty of flying robots out there (up there?) -- DIY drones, quadrotor squads, autonomous UAVs, and little robo-insectoids -- but Parrot's AR Drone is perhaps the first to combine robotics with the emerging, and heavily hyped, field of augmented reality, or AR. The quadricopter vehicle carries two cameras and can stream video to an iPhone via Wi-Fi. On the phone, the live video is overlaid with flight controls and additional graphics for a variety of games. Watch the video above and then tell us: Would you buy one?
IEEE Spectrum's award-winning robotics blog, featuring news, articles, and videos on robots, humanoids, automation, artificial intelligence, and more.