Robots have revolutionized the factory. What about the field?
Over the past century, agriculture has seen an explosion in productivity, thanks to things like mechanization, synthetic fertilizers, selective breeding, and, of course, pesticides -- lots of them.
But it remains to be seen what role robots will play in working the fields. Automation was possible in factories because tasks were repetitive and the environment well-defined. A robot arm welding a car chassis does the exact same job over and over. When it comes to crops, though, everything changes: the environment is unstructured and tasks -- like picking a fruit -- have to be constantly readjusted.
We've written about these pruning robots before; now the company has added the impressive (and bucolic) video above, a demonstration it gave to the grape and wine industry. The company, based in San Diego, Calif., developed a vision system that uses stereoscopic cameras to create a virtual 3D image of the grape vines. Articulated cutting arms then do the trimming at an exact angle and location.
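The principle behind those stereoscopic cameras is triangulation: a point on the vine appears at slightly different positions in the two camera images, and that offset (the disparity) determines how far away it is. Here's a toy sketch of that calculation -- my own illustration of the general idea, not the company's actual algorithm; the function name and numbers are made up for the example:

```python
# Depth from stereo disparity: a toy illustration of the principle behind
# stereoscopic vision systems (not the vendor's actual algorithm).
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulate depth Z = f * B / d for a rectified stereo camera pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# A feature on the vine seen 40 px apart by two cameras mounted 10 cm
# apart, with an 800 px focal length, sits about 2 m away.
z = depth_from_disparity(focal_px=800.0, baseline_m=0.10, disparity_px=40.0)
print(f"{z:.2f} m")  # → 2.00 m
```

Repeating this for many matched points yields the virtual 3D model of the vine that the cutting arms work from.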
From what I understand, their goal is a tractor equipped with the articulated robotic arms. Mobility is a priority, and the machines must be able to reach most areas of the vine being pruned. The tractor might be driven by a person, but everything else would be controlled by an on-board computer.
Another promising application is fruit picking. Again, a robot would distinguish between fruit and leaves using vision: a camera mounted on the robotic arm detects colors and compares them to reference data in its memory. A match means the fruit gets picked.
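The color-matching step described above can be sketched in a few lines. This is a minimal illustration of the idea under my own assumptions -- the reference colors, threshold, and function names are invented for the example, not taken from any actual picking system:

```python
import math

# A minimal sketch of color-based fruit detection: compare a camera pixel
# against a stored reference color and "pick" when the match is close enough.
RIPE_APPLE_RGB = (180, 30, 40)   # assumed reference color held in memory
LEAF_RGB = (40, 120, 30)         # assumed typical leaf color

def color_distance(a, b):
    """Euclidean distance between two RGB triples."""
    return math.dist(a, b)

def is_fruit(pixel, reference=RIPE_APPLE_RGB, threshold=60.0):
    """True when the pixel is within `threshold` of the reference color."""
    return color_distance(pixel, reference) < threshold

print(is_fruit((175, 35, 45)))  # reddish pixel, close to the reference → True
print(is_fruit(LEAF_RGB))       # green leaf, far from the reference → False
```

A real system would of course work on whole image regions, handle varying lighting, and probably use a color space less lighting-sensitive than RGB, but the match-against-reference logic is the same.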
Over the next few decades, we can expect robots to work tirelessly in our fields -- just as they already do in our factories.
Boston Dynamics, developer of BigDog and PETMAN, announced today that it has won a Darpa contract to develop a new robot mule to help soldiers on foot carry gear in the field.
The robot, called Legged Squad Support System, or LS3, will be able to navigate rough terrain, carrying 180 kilograms (~400 pounds) of load and enough fuel for missions covering 32 kilometers (~20 miles) and lasting 24 hours.
Boston Dynamics says LS3 won't need a driver, because it will automatically follow a human leader using computer vision or travel to designated locations using sensors and GPS.
Breeding, er, building the robot will take 30 months and cost US $32 million. The first LS3 prototype is expected to debut in 2012.
"If LS3 can offload 50 lbs [23 kg] from the back of each soldier in a squad, it will reduce warfighter injuries and fatigue and increase the combat effectiveness of our troops," Marc Raibert, president of Boston Dynamics and principal investigator for the program, said in a statement.
The company, based in Waltham, Mass., is teaming up with Bell Helicopter, Carnegie Mellon, and NASA's Jet Propulsion Laboratory, among others, to develop LS3.
The LS3 follows in the footsteps of BigDog, and Raibert expects the new robot to make "a major leap forward." We can't wait for the videos.
I'm calling for case studies, stories, and anecdotes of the interaction between intelligent robots and people in public spaces and workplaces, for a feature page in next quarter's IEEE Robotics and Automation Magazine.
Here's why: How many people do you know who treat their PCs like a pet, or fear their laptop will attack them in the night? Now give that laptop its own set of wheels, set a doll on top, and suddenly the story changes: The perceptive area of our brains flashes neon: "Human!"
When computer users encounter a problem with their system, they blame the software provider or the malevolent hacker who sent them a virus. They attribute any intent to the far side of the keyboard, not inside the box.
However, the fact that a laptop can be used to actuate motors and drive around a building on its own may change the perception from a machine controlled by human beings to a machine that is a being itself.
Will that perception evolve over time now that we have commercial robot operating systems like Motivity and hobby systems like SPARK and Mindstorms that let children and computer-literate adults program interactive and intelligently navigating robot applications? Or will motion and a face continue to cause people to treat robots like human beings?
I'm looking for stories about people interacting with real robots, in the workplace, in public or in the classroom, that show how neophytes feel when they first meet robots in the course of their normal daily activities, and, if possible, how those perceptions or interactions change over time.
Please send your contributions to jdietsch [at] mobilerobots.com
Jeanne Dietsch is co-founder and CEO of MobileRobots, based in Amherst, N.H.
The above video demonstrates the workings of a professional 3D printer. Think of all the millions of things you could do with such a wonderful device! But with prices starting at $20k -- usually $30k after adding the required 'extras' -- you'll quickly forget about purchasing your own altogether.
You may also have heard of 'the 3D fab that fabricates itself.' But when you realize the amount of technical know-how required for it to 'fabricate itself,' and the lack of precision the machine offers, once again the dream dies.
So why are professional 3D printers so expensive? Anyone who has been seriously in the market to purchase one can tell you it's the market strategy of the big players. First, they don't actually publish prices. They don't. You have to contact them by phone, interview in person with a salesman, and just before you sign the contract they mention an additional ~$10k of equipment you need that doesn't come with it. It's like a car salesman who, at the end of reaching an agreement, asks, "Would you like wheels for your car?"
For example, there is the $1k heater that melts the support wax away -- which is actually not much different from a simple $30 toaster oven. Or the $1k trolley with wheels that you could otherwise hand-make in 20 minutes for $20. And I kid you not, the salesman even said to me, "it's a special price just for you."
Their strategy is to see how much you are willing to pay for it.
Perhaps they understand the market better than I do, but if they mass-manufactured the printers, dramatically dropping costs, and sold the units at a price the masses could afford, they could potentially make significantly higher profits on significantly higher sales.
Fortunately for us DIY dreamers, in a major turn of marketing strategy, it appears Stratasys has recently come to the same conclusion.
So how much will their new 3D printer set you back? What precision can we expect? How much will a refill cartridge cost? Well, nothing concrete has been announced yet.
Stratasys' most affordable unit begins at $15k, so given my own experience in mass manufacturing of electronics, I'd say we are looking at around $5k per unit with 'acceptable' precision. Units could potentially be sold below cost, perhaps as little as $3k, with the companies calculating they can make their main profit from 'specially patented' refill cartridges.
Perhaps still too expensive for a personal at-home printer, but definitely affordable for small and midsized companies. And it's only a matter of time until more affordable printers from jealous competitors join in.
I can’t fault people for writing articles that make use of the term “killer robots.” It’s sexy, and it attracts attention. I mean, I kinda just did it myself, didn’t I? An article by Johann Hari for the opinions section of The Independent takes this several steps too far, however, by making false assertions about the motives and capabilities of unmanned combat robots:
Every time you hear about a “drone attack” against Afghanistan or Pakistan, that’s an unmanned robot dropping bombs on human beings. Push a button and it flies away, kills, and comes home. Its robot-cousin on the battlefields below is called SWORDS: a human-sized robot that can see 360 degrees around it and fire its machine-guns at any target it “chooses”.
Why is “chooses” in quotes? It’s in quotes because that’s not the way it works, the author knows that’s not the way it works, and he’s covering his ass. Here’s the next paragraph:
At the moment, most are controlled by a soldier – often 7,500 miles away – with a control panel. But insurgents are always inventing new ways to block the signal from the control centre, which causes the robot to shut down and “die”. So the military is building “autonomy” into the robots: if they lose contact, they start to make their own decisions, in line with a pre-determined code.
See those quotes again? If you’ve been reading this blog long enough, you should be able to figure out why they’re there. Obviously, the robots don’t “die.” And “autonomy” is in quotes because the previous paragraph talked about firing a machine gun at autonomously chosen targets, which is not at all the way it works. In fact, the way it works is the exact opposite of what the author is insinuating with his quotation marks: when a combat robot loses its signal, the only active actions it will take are trying to reacquire the signal or (in some cases) trying to get home, even if that’s impossible. It won’t just start shooting at people.
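That lost-link behavior can be pictured as a simple failsafe state machine. The sketch below is entirely my own illustration of the general pattern -- not any real UAV's firmware, and all names and thresholds are invented -- but it shows the point: on signal loss, the robot's options narrow to reacquiring the link or heading home, and nothing else:

```python
from enum import Enum, auto

# A sketch (my own, not real flight software) of a lost-link failsafe:
# on signal loss, the robot only tries to reacquire its link or head home.
class Mode(Enum):
    OPERATING = auto()
    REACQUIRING = auto()
    RETURN_HOME = auto()

def lost_link_step(link_ok: bool, reacquire_attempts: int, max_attempts: int = 3):
    """Return (next_mode, attempts) for one control tick."""
    if link_ok:
        return Mode.OPERATING, 0          # link restored: resume normal operation
    if reacquire_attempts < max_attempts:
        return Mode.REACQUIRING, reacquire_attempts + 1
    return Mode.RETURN_HOME, reacquire_attempts  # give up and fly home

mode, attempts = Mode.OPERATING, 0
for tick_link_ok in [True, False, False, False, False]:
    mode, attempts = lost_link_step(tick_link_ok, attempts)
print(mode.name)  # after repeated reacquire failures → RETURN_HOME
```

Note that no state in this machine involves engaging anything; that is the whole design of a failsafe.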
This, really, is what bothers me most about these articles: They’re basically full of lies of a sort, designed to scare people who don’t know the facts. No, the author isn’t actually publishing false statements (I guess), but that stuff in quotes isn’t exactly true, and it’s only in there so that people who don’t take the time to find out what is true (most people) will use it to jump to the obvious, and wrong, and inevitably terrifying conclusion.
More, including a pretty funny video of a robot totally NOT killing the Japanese prime minister, after the jump.
The point behind combat robots is, of course, that it’s better to have a robot in combat than a human, because if something goes wrong, it’s better to have a destroyed robot than a dead person. So that’s good, right?
But the evidence punctures this techno-optimism. We know the programming of robots will regularly go wrong – because all technological programming regularly goes wrong. Look at the place where robots are used most frequently today: factories. Some 4 per cent of US factories have “major robotics accidents” every year – a man having molten aluminum poured over him, or a woman picked up and placed on a conveyor belt to be smashed into the shape of a car. The former Japanese Prime Minister Junichiro Koizumi was nearly killed a few years ago after a robot attacked him on a tour of a factory. And remember: these are robots that aren’t designed to kill.
Think about how maddening it is to deal with a robot on the telephone when you want to pay your phone bill. Now imagine that robot had a machine-gun pointed at your chest.
Robots find it almost impossible to distinguish an apple from a tomato: how will they distinguish a combatant from a civilian?
Don’t mind me while I pound my head against my keyboard… Let’s start with the easy one, the thing about Japanese Prime Minister Junichiro Koizumi being “nearly killed” a few years ago by an industrial robot that “attacked” him. Here’s a video of what I’m pretty sure is the attack:
I guess maybe he has asthma or something, and that’s hysterical laughter because everybody is in shock from their brush with death.
As for the man who got aluminum poured over him, or the woman who got smashed into the shape of a car… I can’t find any mention of these events, or events similar to these, and I have to believe that such things would have made the news, since “killer robots” are, after all, so sexy. I don’t want to say that the author made these incidents up to shock people, but if anyone can find references to anything like this, I’d be much obliged.
It’s certainly true that programming is fallible, and hardware is fallible, no matter what a robot is designed to do. I won’t belabor the fact that humans are fallible too (and less easy to troubleshoot and reprogram), since I’ve done it before (a few times). But you can’t compare combat robots to automated telephone systems. That’s just stupid, and the only reason to tell someone to imagine that is to scare them. You might as well compare apples to tomatoes. Once again, I won’t belabor the fact that robots can be programmed with the same type of combat rules that humans follow, and if you see one red fruit that’s shooting at you and one red fruit that’s not, it isn’t too difficult to tell which one’s the tomato (because tomatoes are always the bad guys).
Johann Hari does raise a couple relevant points toward the end of the article… It is important to consider whether it becomes easier to participate in an armed conflict when robots are put at risk instead of humans, and what reaction the use of robots engenders in others. I still maintain that robots can be used responsibly, and that getting humans out of combat is a good thing. But either way, these are human issues, not robot issues. Robots are what we make them, and what we make of them, nothing more, and nothing less. Even autonomous robots are simply carrying out a series of commands programmed into them by a human; they’re not (in the strictest sense) making decisions on their own.
I’ve gone on long enough, so let me just say this: the ethics of using armed robots in combat is a huge issue that’s rapidly becoming more relevant, and it’s important to have intelligent and well-informed debate on the subject.
This article, and articles like it, do not provide an intelligent and well-informed perspective. This article is designed to scare people who are unfamiliar with robotics. I mean, it’s not even opinion… Opinion takes facts and gives a perspective, but you have to start with facts, not hyperbole.
I probably shouldn’t waste my time and energy getting so upset at crap like this, but the fact is, a lot of people read this kind of thing, and it gives a horribly negative impression of robotics in general, not just military robotics. It sets back the industry, it sets back the hobby, and it makes it harder for things like household and medical robots to get accepted into daily life.
There are plenty of flying robots out there (up there?) -- DIY drones, quadrotor squads, autonomous UAVs, and little robo-insectoids -- but Parrot's AR Drone is perhaps the first to combine robotics with the emerging, and heavily hyped, field of augmented reality, or AR. The quadricopter carries two cameras and can stream video to an iPhone via Wi-Fi. On the phone, the live video is overlaid with flight controls and additional graphics for a variety of games. Watch the video above and then tell us: Would you buy one?
Though the past year has not been great for venture funding and many companies -- including robotics companies -- have been faced with serious financial challenges, the robotics startup community seems to be a bright spot, with many millions of dollars being poured into early-stage companies. These are just the ones I know of:
Heartland Robotics - Heartland, one of the recent iRobot spinoffs that's focusing on collaborative industrial robots, had an early Series A round in 2008 and a supplementary $7M round in the fall of 2009. They're backed by Jeff Bezos's personal investment fund as well as Charles River Ventures.
Harvest Automation - Harvest, known for a long time as "Q Robotics," also includes some iRobot alumni. As you may recall, they're building small robots that help move potted plants around massive commercial nurseries. A few weeks ago they announced a successful $4M Series A round with funding both from Life Sciences Partners and MidPoint Food & Ag fund LP, which, in stark contrast to historical robotics investors, focuses primarily on agricultural companies. It's interesting to see venture funding for robotics start to move to these industry-specific investors.
iWalk -- We discussed iWalk, Hugh Herr's MIT Media Lab spinoff that's designing robotic prosthetic devices, in September when they received their $21M series B funding from General Catalyst Partners.
CyPhy Works -- the iRobot spinoff (previously The Droid Works) has continued in the proud tradition of most Massachusetts robotics companies, relying on government SBIRs to build up the company. CyPhy Works got a piece of last year's economic stimulus package for some navigation system work and in December won a $2.4M grant to develop small tethered UAVs for bridge and building inspection.
CasePick Systems -- CasePick is an interesting one. Located just outside of Boston, they've been pretty quiet and stealthy, but through rumor mills and recruiters I've managed to put together some pieces. They're doing warehouse automation, in the same realm as their neighbor to the north, Kiva Systems, but rather than delivering racks, they're moving massive cases of product around distribution centers. Instead of venture funding, they have a single investor -- who I don't know -- who has put a massive amount of money toward CasePick so that they can solve their warehouse logistics problems.
Cardiorobotics -- because the only thing scarier than open heart surgery is snakes, the Pittsburgh-based company has decided to combine the two, developing a snake-like robot called the CardioARM for minimally invasive heart surgery. This earned them an $11.6M Series A round, announced yesterday, from Eagle Ventures, The Pittsburgh Life Sciences Greenhouse, and the Slater Technology Fund.
I've heard from founders of the first generation of robotics companies that finding venture funding in the 90s for technology like this was next to impossible. It was too risky; no one had proven that robots, outside of industrial manufacturing environments, were viable technology. One founder even recalled being turned down by a VC firm's intern -- they weren't even deemed important enough to meet with an actual partner. To see so much activity among robotics companies in what was otherwise a down year is pretty exciting.
By now, over 1800 high school teams are already deep into planning—and probably already building—for their six-week robot design challenge. As I wrote in IEEE Spectrum’s online commentary about the kickoff event, it was exciting to be there after having participated in FIRST myself for three years of high school.
One thing was obvious: the 18-year-old FIRST (For Inspiration and Recognition of Science and Technology) robotics competition is living up to its name: inspiration.
Woodie Flowers, who is the FIRST national advisor and pretty much a FIRST mascot as well, proudly reported during the kickoff that 10 percent of this year’s freshman class at MIT is made up of FIRST alums. Four FIRST alums who spoke at the event included a web developer, a NASA flight controller, a math and science teacher, and a representative from Engineers Without Borders, which works to bring clean water and other sustainable engineering projects to developing countries. Most commented that they wouldn’t have thought about careers in engineering if not for joining their high school FIRST teams.
FIRST has in fact ballooned from hosting a single competition for high school kids to sponsoring projects for younger students as well, in the form of Lego Leagues (ages 9-14) and junior Lego Leagues (ages 6-9). These start kids out working with programmable Lego kits. In high school they can move on to the robotics competition (“the varsity sport for the mind,” as it’s advertised on the FIRST website) or a similar brain-stretching Tech Challenge, which is completed with smaller teams.
“I’ve talked about nerd pride, and nerd nouveau,” Flowers told thousands of students worldwide, via satellite feed, at the kickoff. Now, he said, it’s “supernerds.” In other words, “people who know a lot about a lot, who think hard and creatively, who love continuing to learn. I want them in the hospital… designing legislation, serving society,” he said. Yes, he wants them everywhere.
I, for one, was inspired. And judging by the cheers from students, parents, teachers, and mentors, I wasn't the only one.
From our coverage of the kickoff:
[FIRST founder Dean] Kamen told students, "Ten years from now, you won't remember which robot won which event." But, he added, "one of those students will have done something that created a solution to a global problem, maybe because they were inspired by [FIRST]. That, the world will never forget."
I can attest to that, having graduated just over ten years ago. I don’t remember what our robot looked like, or even what it was supposed to do. I do remember my teammates and my mentors, the long nights at our sponsor company, Lennox, using power tools and designing electronics, even our team song (Tubthumping from Chumbawamba). My teammates are now engineers, doctors, and government employees, to name just a few. All are gracious professionals. Woodie Flowers would be proud.
Image courtesy of FIRST. Foreground: Woodie Flowers; background: Dean Kamen.
Last November the International Robot Exhibition (IREx) took place here in Tokyo, with more than 100,000 visitors coming to see the latest robotic creations by universities, research institutes, start-ups, and the large, world-renowned industrial robot makers. The industrial robotics area was very large, as usual, and apart from the choreography of massive assembly and welding robots, I wasn't expecting to see anything new. I was wrong! It turns out I was quite amazed by the superfast parallel-link manipulators presented there.
These manipulators, like the ABB IRB 360 shown in the video below, are able to move so fast and with such a degree of accuracy that it becomes difficult to follow them by eye.
The features that impressed me the most were the links made of carbon fiber, to reduce inertia and increase operational speed, and the link mechanisms installed to control the orientation of the end-effector.
In the video above we can see the ABB IRB 360 operating with a high-speed vision system, collecting parts and arranging them by color into predefined patterns, rotating each part so that they are all aligned.
The task of destroying the pattern and placing the parts randomly on the conveyor belt for the robot to assemble them, however, was still done by a human (not shown in the video). Some things are better left to humans...
UPDATE: The IRB 360 is also a skilled pancake handler!
Once hailed as the future of robotics, humanoid robots were conspicuously absent from the International Robot Exhibition (IREx) in Tokyo last November. Only one booth still presented them as "the future," but without any practical uses. Apparently, some roboticists regard applications as part of "future work."
Humanoid robots for drilling holes and holding cases of nuts and bolts
Another booth, in the university area, had a humanoid secretary to greet its visitors. I was really scared, and only managed to take its picture after assuring myself that it was not a moving corpse, but merely an unintelligent machine. With such a morbid "receptionist," I completely failed to pay attention to the other research on display at that booth!
Not your usual receptionist
One fresh look at humanoid robots was the cardboard robot, which does not intend to perform any task that is better done by humans. Instead, it is a cleverly-built structure made of layers of cardboard and servo motors, that functions as a mobile mannequin. Low cost is definitely one of its main features, and the fact that it is meant only to display clothes in shop windows allows it to be very lightweight.
Finally, the humanoid that left the strongest impression was perhaps one that was supposed to play table tennis. It was modeled after an athletic human body, with well-defined muscles and even sunglasses!
Table tennis humanoid robot: motionless
Alas, in the four days of the exhibition, I only saw the robot move in the videos that the company was showing -- in them, the robot moved quite violently to hit the ball with its paddle, which caused it to oscillate wildly, probably ruining any chance that image processing from its mounted cameras could track the next ball.
The fact that I was exhibiting another robot just across from this booth gave me a vantage point to keep an eye on this bot. But on the stage it remained motionless, indifferent to the efforts of its exhibitors, who tried to get it to work. It was only on the last day that the robot finally moved, although in an unexpected way: it fell onto the wooden stage with a loud thud, in front of many visitors, plunging like a big piece of ham.
During those four days, I could think of a few robotic mechanisms that might be able to play table tennis in a more reliable way, without the need to look like a human being. But maybe I'm just being too hard on these human-shaped bots, and some day one of them will prove me wrong ... in a table tennis match. What do you think?