You may have noticed that there was no Video Friday last week. This is because we flew out to California on Thursday (which is when we usually stay up all night putting videos together for you) to see what was up with Toyota. We figured that was kind of important, you know? But obviously we misjudged either the importance that some of you place on Video Friday, or just how horribly bored you get at the end of the week, because all we heard in California was “Hey, where’s Video Friday??”
The good news is that your wait is over, and we’re going to make it up to you with a ridiculously huge number of videos.
At a DARPA Robotics Challenge press conference earlier this year, Gill Pratt was asked about his post-DARPA plans. He politely declined to comment, saying he couldn’t discuss it at that point. There was speculation that Google, Apple, Uber, or some other tech giant interested in robotics would try to lure him away, and they probably did. The company that succeeded, though, comes as a bit of a surprise. Toyota, the world’s largest automaker, announced last week a big push into AI and robotics, and Pratt agreed to lead that effort.
“It’s going to be a big deal,” he told IEEE Spectrum about the Japanese firm’s plans. Pratt explained that a US $50 million R&D collaboration with MIT and Stanford is just the beginning of a large and ambitious program whose goal is developing intelligent vehicles that can make roads safer and robot helpers that can improve people’s lives at home.
In these further excerpts from an interview last week, Pratt gives more details about Toyota’s plans and what we have to look forward to over the next few years. What follows has been condensed and edited for clarity.
A few months ago, we got a chance to check out the latest prototype of this robot, and we’re excited to say that it’s made it all the way to a fully armed and operational prototype: Hedgehog, as it’s called, has its core mobility hardware fully integrated and has been undergoing microgravity testing on parabolic flights. We spoke with Rob Reid from JPL and Ben Hockman and Marco Pavone from Stanford about what they’ve been up to over the last year, and then we definitely didn’t sneak* the robot into Smithsonian’s National Air and Space Museum in Washington, D.C., for a little photoshoot.
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
There are signs all around us indicating that the field of robotics is going through a major transformation. Robots are getting significant coverage in the media. A number of big companies that had little to do with robotics are suddenly on a buying spree to acquire robot companies. Countries that were not on anyone’s radar screen just a few years ago are now emerging as major players in the robotics arena. Many design and operational constraints associated with robots are being obliterated by, among other things, the use of cloud computing and social media. Costs are falling rapidly, enabling new applications. Even the notion of what is considered a robot is changing fast. All these signs seem to point to robotics being on the verge of something big that could impact our lives in a positive way.
This post lists six main trends and discusses their implications.
At a press conference in Palo Alto, Calif., today, Toyota is announcing the first step of what is expected to be a major push into artificial intelligence and robotics, technologies that the company sees as critical for addressing current and future societal challenges. Toyota, the world’s largest automaker by sales, says it will establish two collaborative research centers at MIT and Stanford, with an investment of $50 million over the next five years. The initial focus will be on accelerating the development of AI with applications to smarter and safer vehicles, as well as robots that can make our lives better at home, especially as we age.
Toyota says an immediate goal is to figure out ways to save lives on the road. But the company is very clear that it’s not trying to develop a fully autonomous car in the same way that Google and many others are. Instead, it’s working on assistive autonomy: you’ll be driving most of the time (or at least in control of the vehicle), but the vehicle will be continuously sensing and interpreting the environment around you, ready to step in as soon as it detects a dangerous situation. Toyota believes this approach could make cars virtually crash-proof.
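Toyota hasn’t published the logic behind this guardian-style arbitration, but the basic idea can be sketched in a few lines. Everything here is illustrative, not Toyota’s implementation: the function name, the time-to-collision heuristic, and the 2-second threshold are all assumptions chosen to show how a driver’s input might be overridden only when a simple risk estimate crosses a limit.

```python
def guardian_brake(gap_m, closing_speed_mps, driver_brake, ttc_threshold_s=2.0):
    """Return a brake command in [0, 1]: pass the driver's input through
    normally, but command full braking when the estimated time to
    collision drops below the threshold.

    This is a hypothetical sketch; real systems fuse many sensors and
    use far richer risk models than a single time-to-collision check.
    """
    if closing_speed_mps <= 0:            # not closing on the obstacle
        return driver_brake
    ttc = gap_m / closing_speed_mps       # seconds until contact at current rate
    return 1.0 if ttc < ttc_threshold_s else driver_brake
```

The point of the sketch is the division of authority: the human drives, and the system only intervenes at the last moment, which is exactly the opposite of the full-autonomy approach Toyota says it is not pursuing.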
“Our long-term goal is to make a car that is never responsible for a crash,” says Dr. Gill Pratt, who was until just a few months ago the program manager at DARPA responsible for the DARPA Robotics Challenge (among other ambitious robotics programs) and will now direct this research at Toyota. He added that such intelligent cars will “allow older people to be able to drive, and help prevent the one and a half million deaths that occur as a result of cars every single year around the world.”
Dr. Pratt will be working with Professor Daniela Rus, head of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), as well as Professor Fei-Fei Li, director of the Stanford Artificial Intelligence Laboratory (SAIL).
Earlier this week, we spoke with Pratt, Rus, and Li to get all the details on what we have to look forward to over the next five years.
You might think that the biggest threat to the world’s coral reefs is humanity. And you’d be right, of course: climate change, pollution, overfishing, and scuba divers who have no idea where their fins are all contribute to coral reef destruction. There are other, more natural but no less worrisome causes as well, and one of those is the crown-of-thorns sea star. It’s big, it’s spiky, and it eats coral for breakfast, lunch, and dinner.
Population explosions of these sea stars can devastate entire reefs, and it’s not unheard of to see 100,000 crown-of-thorns sea stars per square kilometer. There isn’t a lot that we can do to combat these infestations, because the sea stars can regenerate from absurd amounts of physical damage (they have to be almost entirely dismembered or completely buried under rocks), so humans have to go up to each and every sea star and inject them with poison 10 times over (!) because once isn’t enough.
Bring on the autonomous stabby poison-injecting robot submarines, please.
At IROS 2012, Gill Pratt declared that grasping was solved, which was a bit of a surprise for all the people doing grasping research. Grasping, after all, is the easiest thing ever, as long as you know absolutely everything there is to know about the thing that you want to grasp. The tricky bit now is perception: recognizing what the object that you want to grasp is, where it is, and how it’s oriented. This is why robots are festooned with all sorts of sensing things, but if all you care about is manipulating an object that you’re familiar with already, dealing with vision is a lot of work.
Liatris is an open-source hardware and software project (led by roboticist Mark Silliman) that does away with vision completely. Instead, you can determine the identity and pose of slightly modified objects with just a touchscreen and an RFID reader. It’s simple, relatively inexpensive, and as long as you’re not trying to deal with anything new, it works impressively well.
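The Liatris project’s own code is the authoritative reference; purely to illustrate the underlying geometry, here is a sketch under an assumed setup (three registration points at known, asymmetric positions on the object’s base — the footprint layout and function names are hypothetical, not Liatris’s API): the RFID tag identifies the object, which lets you look up its footprint, and the touchscreen contact points then pin down its pose via a 2D rigid alignment.

```python
import math

def centroid(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def match_by_radius(local_pts, touch_pts):
    """Pair each touch point with a footprint point by its distance from
    the centroid. Works when those distances are all distinct, which is
    why the assumed footprint is asymmetric."""
    def ranked(pts):
        c = centroid(pts)
        return sorted(pts, key=lambda p: math.hypot(p[0] - c[0], p[1] - c[1]))
    return list(zip(ranked(local_pts), ranked(touch_pts)))

def estimate_pose(local_pts, touch_pts):
    """Return (theta, tx, ty) mapping footprint coords to screen coords,
    via the standard closed-form 2D rigid alignment."""
    pairs = match_by_radius(local_pts, touch_pts)
    lc = centroid([p for p, _ in pairs])
    tc = centroid([q for _, q in pairs])
    s = c = 0.0
    for (px, py), (qx, qy) in pairs:
        ax, ay = px - lc[0], py - lc[1]   # centered footprint point
        bx, by = qx - tc[0], qy - tc[1]   # centered touch point
        s += ax * by - ay * bx            # cross terms -> sin(theta)
        c += ax * bx + ay * by            # dot terms  -> cos(theta)
    theta = math.atan2(s, c)
    tx = tc[0] - (lc[0] * math.cos(theta) - lc[1] * math.sin(theta))
    ty = tc[1] - (lc[0] * math.sin(theta) + lc[1] * math.cos(theta))
    return theta, tx, ty
```

The appeal of the approach is visible even in this toy version: no cameras, no lighting sensitivity, just a lookup keyed by the RFID tag and a few contact points, which is why it works so well as long as nothing unfamiliar shows up.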
Anyway, Gremlins is also the name of a new DARPA program that’s seeking proposals to develop the technology to launch swarms of low-cost, reusable unmanned aerial vehicles (UAVs) over great distances and then retrieve them in mid-air.
About half a billion years ago, life on earth experienced a short period of very rapid diversification called the “Cambrian Explosion.” Many theories have been proposed for the cause of the Cambrian Explosion, with one of the most provocative being the evolution of vision, which allowed animals to dramatically increase their ability to hunt and find mates (for discussion, see Parker 2003). Today, technological developments on several fronts are fomenting a similar explosion in the diversification and applicability of robotics. Many of the base hardware technologies on which robots depend—particularly computing, data storage, and communications—have been improving at exponential growth rates. Two newly blossoming technologies—“Cloud Robotics” and “Deep Learning”—could leverage these base technologies in a virtuous cycle of explosive growth. In Cloud Robotics—a term coined by James Kuffner (2010)—every robot learns from the experiences of all robots, which leads to rapid growth of robot competence, particularly as the number of robots grows. Deep Learning algorithms are a method for robots to learn and generalize their associations based on very large (and often cloud-based) “training sets” that typically include millions of examples. Interestingly, Li (2014) noted that one of the robotic capabilities recently enabled by these combined technologies is vision—the same capability that may have played a leading role in the Cambrian Explosion.
When you have a brand new robot to show the world, it’s not always easy to come up with a demo that will attract attention, especially if your robot does stuff that’s (and forgive us for saying this) inherently kind of boring. Don’t get me wrong: robots that do boring things are very important, because otherwise humans would be doing those things instead.
PRENAV (which I’m going to call Prenav so that I don’t get a headache) is introducing an aerial robot that can inspect tall structures, and what’s impressive about it is that it can (with the assistance of another robot on the ground) localize itself with centimeter-level accuracy. To demonstrate how well this works, Prenav stuck some lights on its drone and photographed it while it flew around. The time-lapse footage is amazing.
Be amazed, and then watch some other videos, because it’s Video Friday.