As car companies large and small make steady but incremental progress towards the commercialization of autonomy in consumer vehicles, the big question is when we're going to finally see (and be able to benefit from) full, level 4 autonomy. The kind of autonomy where you don't have to pay attention at all, and your car simply takes you where you want to go. This is what's going to completely change transportation, turning time spent getting from where you are to where you want to be from a frustrating experience into a productive (or relaxing) one.
So far, we can buy cars that come equipped with autonomous braking, autonomous parking, and autonomous highway driving, but full urban autonomy has only been demonstrated by a few companies, and not in a form that's ready for consumers to take advantage of. An MIT spinout called nuTonomy (which closed a $3.6M seed funding round in January) is ready to change everything by deploying a fully autonomous urban taxi service in downtown Singapore. Using your phone, you'll call a self-driving car to you, tell it your destination, and then sit back and let the car drive you there. This would be a massive advance for both autonomous cars and urban mobility, and we talked with nuTonomy co-founder and CEO Karl Iagnemma about how they're going to make it happen.
nuTonomy was launched in 2013, but the company was based on robotics research at MIT that goes back almost a decade. Karl Iagnemma and Emilio Frazzoli, nuTonomy's CEO and CTO, both directed mobility-focused robotics labs at MIT, and most recently, Frazzoli was part of an MIT experiment in Singapore that set up autonomous golf carts to ferry tourists around a park for a week. Singapore and MIT have been collaborating on research projects like these since 2007, and nuTonomy is one of the results of this partnership: part of nuTonomy's 25-member core team comes directly from the group that developed those autonomous golf carts.
Although nuTonomy is developing automotive technology, it's essentially a software company. Software, while not always the most visible part of an autonomous driving system, is at this point arguably the most important (and most difficult). As companies like Google have demonstrated, we have the (very expensive) hardware that's necessary for autonomous urban vehicles, but the software that tells those vehicles what to do based on the data their sensors collect is still a work in progress. This is nuTonomy's secret sauce: They believe that their autonomous driving software is better than anyone else’s. So much better, in fact, that they're planning to launch a pilot of a fully autonomous taxi service in Singapore later this year.
When we say "fully autonomous," we're talking about Level 4 autonomy, or full self-driving automation. A Level 4 autonomous vehicle "is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip"; all you have to do is provide a destination and (possibly) open and shut the doors. Most cars with functional autonomy right now, in contrast, are at Level 3, with limited self-driving automation: the driver is expected to be available for occasional control. It's a big step from Level 3 to Level 4, but the benefits are enormous: in addition to leaving the driving completely to the car, it also means that the car is capable of driving itself with no human inside, which is what makes a robotic taxi service possible.
For nuTonomy, as with most autonomous car companies, the progress towards full autonomy is incremental. Part of nuTonomy's business involves providing autonomous features to automotive OEMs and tier 1 manufacturers. For Jaguar Land Rover, for example, nuTonomy is working on a variety of autonomous features that will end up in dealerships in the coming years. "There's a real opportunity for companies like ours to be providers of this technology," nuTonomy CEO Karl Iagnemma told IEEE Spectrum. "The reason for that is the technology in this area isn't primarily automotive technology—it's really being drawn from the robotics community, technology that's been developed in robotics research labs over the last 20 years. We come to this problem as natives."
The problem with incremental progression towards autonomy in personal vehicles, Iagnemma explains, is fundamentally one of cost: "you're not trying to sell a feature to a customer, who might only be willing to pay a couple thousand dollars, which really constrains your sensor and computer cost." Removing consumer ownership from the equation with a commercial vehicle, like a robotic taxi, completely changes things, however: "Now you're trading against the cost of a human driver, so you have a lot fewer constraints on your cost," Iagnemma says. "And it's very likely that the technology will reach the market earlier in the form of this autonomous mobility-on-demand system."
A mobility-on-demand system only really makes commercial sense in urban areas, and urban areas are the most challenging for autonomous vehicles because of the density and complexity of information that needs to be understood in order to make safe and productive decisions. "This is one of the core problems of autonomous vehicles," Iagnemma tells us, "and a problem that a lot of groups in our community are really struggling with."
“We saw an opportunity to build on a lot of the work that myself and Emilio [Frazzoli] were doing at MIT over the past 15 years, and apply it to this problem. The result is that we feel that we have an approach to the planning and decision making problem that is state of the art and robust. It's not hand-engineered if-then statements in code, it's a rigorous algorithmic process that's translating specifications on how the car should behave into verifiable software. And that's something that's really been lacking in the industry.”
IEEE Spectrum: How is nuTonomy's approach to planning and decision making for autonomous vehicles unique?
nuTonomy CEO Karl Iagnemma
Karl Iagnemma: What nuTonomy is focusing on as a company is this decision making problem: how will cars be smart enough to navigate in urban environments? And it's not sufficient to just be safe: being safe is the necessary condition. But for people who want to use the technology, you not only have to be safe, but you have to drive in some sense the way a human drives.
Sometimes, for example, human drivers actually break the rules of the road. They do it in a principled and safe way, but it's something you do almost every time you get behind the wheel of a car. So one of the really unique and differentiating things that we're doing is building into our decision-making engine the ability for cars to actually violate the rules of the road when it's necessary to do so, in a safe and reliable manner.
How do you teach your software to make decisions like these?
We use a fundamentally new approach to the problem called formal logic. Formal logic is a set of tools that can be used in applications where you have safety-critical semi-autonomous or automated systems that must have verifiable software and respond to very complex scenarios.
Basically, we provide the car with a list of rules, like the rules of the road, and then also a list of preferences, like instructions about how humans drive. We rank-order these rules and preferences: there are hard rules the car can never break, like never colliding with anything, and there are things that you'd ideally like it to do, if possible. And then we use algorithmic processes to translate these rules into logical structures that are verifiable, meaning that we're sure that the structures that come out of these rules exactly represent and adhere to the rules that we define.
This verifiability is a huge benefit, because when you take an alternative approach, which is to just manually hand-engineer a ruleset, it's very difficult to convince yourself that that ruleset exactly represents the rules you'd ideally like the car to follow, especially when the ruleset is large and the situations are complex.
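The rank-ordering Iagnemma describes can be illustrated with a toy sketch (our illustration, not nuTonomy's actual software; every rule name and maneuver field here is hypothetical): each candidate maneuver is scored with a "violation vector," one entry per rule in priority order, and the planner picks the maneuver whose vector is lexicographically smallest. That way, a lower-priority preference is sacrificed only when a higher-priority rule demands it.

```python
# Hypothetical sketch of rank-ordered rule evaluation (NOT nuTonomy's
# code). Each rule returns a violation cost >= 0; rules are listed in
# strictly decreasing priority: safety first, legality second,
# comfort/preferences last.
RULES = [
    ("no_collision", lambda m: 1.0 if m["hits_obstacle"] else 0.0),
    ("stay_in_lane", lambda m: m["meters_out_of_lane"]),
    ("smooth_ride",  lambda m: abs(m["lateral_accel"])),
]

def violation_vector(maneuver):
    # One cost per rule, in priority order.
    return tuple(rule(maneuver) for _, rule in RULES)

def choose_maneuver(candidates):
    # Python compares tuples lexicographically, which implements the
    # rank-ordering: rule i only matters when rules 0..i-1 are tied.
    return min(candidates, key=violation_vector)

# Example: briefly leaving the lane beats hitting a parked car,
# because "no_collision" outranks "stay_in_lane".
stay   = {"hits_obstacle": True,  "meters_out_of_lane": 0.0, "lateral_accel": 0.0}
swerve = {"hits_obstacle": False, "meters_out_of_lane": 0.8, "lateral_accel": 2.5}
best = choose_maneuver([stay, swerve])  # -> swerve
```

In a real system the candidates would be trajectories from a motion planner and the rules would be formally specified and verified, but the lexicographic comparison captures the essential idea: the car may "break" a lower-ranked rule of the road, yet only ever in service of a higher-ranked one.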
All humans drive differently, and some humans are comfortable with decisions that would make other drivers uncomfortable. How will you handle this variability in your software?
In my opinion, there's something of a fallacy right now in the community where we say, "the car should drive like a human." What we gloss over is that humans tend to drive in different ways, but in essence, what we mean is that an autonomous car should drive like an average, reasonable, confident, safe driver.
When these cars actually get deployed, I think there's going to be some segment of the population that's just not going to be comfortable with autonomous vehicles. They're not going to like the way the car drives because it's going to drive differently than how they would drive, and that may create some anxiety or mistrust of the automation. I think what we're going to evolve to, as a community, is the ability to customize the performance of the car. The car will remain safe, of course, it'll just drive more how you might personally drive. But we're not there yet.
At some point, autonomous vehicles will have to make what are commonly called "ethical" decisions in the interest of safety. How will your cars be programmed to do this?
As of today, we don't have any procedure for what we would commonly think of as ethical decision making. I'm not aware of any other group that does either. I think the topic is a really important one. It's a question that's very important to pose, but it's going to take a while for us to converge to a technical solution for it. We'd love to be able to address that question today, but we just don't have the technology.
The other part of it, not that this is a bad thing, is that we're putting more of a burden on the autonomous car than we do on the human driver. Human drivers, when faced with emergency situations where they might have to make a difficult ethical decision, aren't always able to make a reasonable ethical decision in that short amount of time. What level of performance are we going to hold autonomous cars to? The answer is, quite probably, a higher level of performance than we would hold a human driver to, or most people won't accept the technology. That may be unfair, but it doesn't necessarily mean that it's wrong.
Even with its unique and sophisticated software, it's somewhat surprising that a company as young (and small) as nuTonomy could very well be the first company in the world to deploy a true Level 4 autonomous vehicle in commercial operation in an urban area. A substantial part of what makes this possible is the location: Singapore. Beyond MIT's existing academic partnership in Singapore, the government is very proactive about adopting autonomous vehicle technology, Iagnemma explains: "We see Singapore as one of the best markets in the world for this technology. They have a progressive approach towards appropriate legislation around autonomous vehicles, and towards working with technology providers, car manufacturers, and other groups to ensure that they'll be able to operate in a reasonable way."
The environment in the United States is very different, Iagnemma says. It's obviously a huge market for any vehicle manufacturer, but there's no consistent regulatory framework, and government agencies are frustrating to work with. Singapore, on the other hand, is small, nimble, and actively interested—providing both political and financial support. "Singapore is completely aligned behind this technology,” he says. “They want it to happen, and they're going to make sure it does."
This year, nuTonomy plans to launch a small-scale pilot study of a fully autonomous on-demand mobility system in One North, a business park in Singapore near the national university. The pilot will (nuTonomy hopes) prove both the technology and the business case for a robotic taxi service in an area where it will have both practical relevance and commercial viability. nuTonomy has had test and development cars on the road in Singapore for several months now, and by the end of this year, one could take you for a ride.