Cars That Think

Australian tinkerer turns his Ford Focus into a self-driving car with $700 in add-ons.

Aussie Uses Arduino Chips to Roboticize a Ford Focus

Update, October 20, 2016: Several readers and other news outlets say there is less truth to this guy's claims than you'd find on a chocolate frog card; Spectrum is looking into it.

The hard part of getting cars to drive themselves is the last niggling 1 percent of reliability. That’s the bit that has so far stymied even the tech giants and auto titans who are developing self-driving cars.

But roboticizing the first 90-plus percent of driving is easy-peasy, as Keran McKenzie, of Melbourne, Australia, shows in his hack of a Ford Focus. He began by asking himself why the car's "home" button merely plots a route back home. Why not have it actually drive the car home as well?

So he took out his car’s recessed ultrasound sensors (good for short-range work, like parking) and replaced them with five little cameras backed by the hacker’s friend, Arduino processors. Next, he hooked these up to a master processor situated at the back of the engine compartment. Then he piped some of the output into a display in the cabin, so he’d know how the system was working. Total cost, McKenzie says, was less than a grand, presumably in Australian dollars—the equivalent of about US $770.

That’s not half bad for an IT guy whose main job is automating workplace systems. It’s not even 10 percent bad. But it wasn’t quite good enough for use on a public road, as he learned about 10 seconds into his drive (at 3:00 in the following clip).

Okay, so maybe McKenzie’s surprise is feigned, as if he meant to say, “Don’t try this at home, kids!” 

Lady opens a door for a delivery man with a package

New Taxi Hitchhiker: The Package

A new research project aims to sneak a hitchhiker along on your taxi ride: the package.

In 2013, consumers received deliveries for over one billion products they ordered online, according to a report by Barclays Bank in the United Kingdom. In the June 2016 business quarter, a major online retailer in China delivered, on average, about 42 million packages every day.

With so many packages on the move, retailers have been searching for ways to increase efficiency, particularly at the so-called last mile. Several startups run systems in which crowdsourced travelers deliver packages. Meanwhile, Amazon has tried experiments including drone delivery and paying taxis to carry packages.

Taxis might be a solution for some cities in China, according to Chen Chao, a computer scientist at Chongqing University. “Taxis are really close to our daily life,” he says. He’s hoping to lower delivery costs, traffic congestion, and air pollution by taking advantage of taxi prevalence in big cities.

Recently in IEEE Transactions on Intelligent Transportation Systems, Chen and his collaborators unveiled a software system that makes taxi drivers in China the middleman for packages. Collaborators included Zhang Daqing, a computer scientist leading a team at Peking University in Beijing, and other researchers in Hong Kong, China, and France. The researchers’ objective was to minimize the amount of time it takes to receive a package.

The system relies on a relay chain of participating taxis, convenience stores that temporarily hold packages, and one month of trajectory data from 7,614 taxis in Hangzhou, the capital of Zhejiang Province in eastern China.

In computer simulations, packages get passed from taxi to taxi—with temporary holdings at convenience stores to await pickup if necessary.

In the proposed system, when a user requests a delivery, the app analyzes the trajectory data to find the chain of taxi hand-offs that would deliver the package fastest, and it revises the route as new requests come in. Taxis collect or drop off a package only when no passengers are in the vehicle, so as not to interfere with the passenger experience. A taxi will also stray only a short distance, up to about 1.2 kilometers, from its normal route.
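To make the relay idea concrete, here is a minimal sketch in Python of how a single hand-off decision might look. The route points, distance measure, and 1.2-kilometer detour cap are illustrative assumptions; the researchers' actual system optimizes over the full trajectory dataset and handles store hand-offs and re-planning.

```python
# Hypothetical sketch of one relay hop in a taxi-based package delivery system.
# Node coordinates and helper names are illustrative, not the paper's algorithm.
import math

MAX_DETOUR_KM = 1.2  # taxis stray at most ~1.2 km from their normal routes

def dist(a, b):
    """Straight-line distance between two (x, y) points, in km."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def best_next_taxi(package_pos, destination, taxi_routes):
    """Pick the taxi whose planned route can collect the package within the
    detour limit and drop it off closest to the final destination."""
    best = None
    best_drop_dist = float("inf")
    for taxi_id, route in taxi_routes.items():
        pickup_detour = min(dist(p, package_pos) for p in route)
        if pickup_detour > MAX_DETOUR_KM:
            continue  # this taxi never comes close enough to collect the package
        drop_point = min(route, key=lambda p: dist(p, destination))
        if dist(drop_point, destination) < best_drop_dist:
            best_drop_dist = dist(drop_point, destination)
            best = (taxi_id, drop_point)
    return best  # None means the package waits at a convenience store

# Example: two candidate taxis, package headed to (10, 10)
routes = {
    "taxi_A": [(0, 0), (2, 1), (5, 5)],
    "taxi_B": [(1, 0), (4, 4), (9, 9)],
}
print(best_next_taxi((0.5, 0.5), (10, 10), routes))  # -> ('taxi_B', (9, 9))
```

A full planner would repeat this choice hop by hop, handing the package between taxis or parking it at a store until a better-matched taxi comes along.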

Simulations reveal that on average, the system is able to deliver 985 packages every hour, and 85 percent of all packages arrive at their destination within 8 hours of a request. “The performance is quite good,” Chen says. 

“In theory, I would like to think something like this could work,” says Nick Jennings, a computer scientist at Imperial College London in the United Kingdom. He was not involved in the research but has previously studied crowdsourcing package delivery in rural areas. “I just sort of worry about the pragmatics of it all,” he says.

The main hitches are whether there is enough demand, the right incentives, and the right infrastructure. Jennings says the package-interchange infrastructure might not scale: it isn't safe to leave packages in an unguarded location, and the businesses hosting interchange points have no clear incentive to participate. And if taxi drivers aren't rewarded for the extra effort of delivery, "they're unlikely to be quite so obliging," he says.

Chen imagines compensating taxi drivers much as Uber drivers are compensated, through a pricing model. "The taxi drivers only need very little effort," he says, "and if they are well paid they are willing to accept this service."

According to Jean-François Rougès, a logistics and transportation management researcher at Laval University in Quebec, Canada, “the concept of taxi operated crowd-delivery is a relevant solution.” But “such a solution may not be relevant for small cities or widespread cities with extended suburbs.”

According to Rougès, to be efficient and profitable, the system needs a critical mass of participating taxis and packages in order to create a routine. The density of taxis can vary greatly even between world capitals: Paris, for example, has one-fourth the taxi density of London.

Chen hopes to eventually incorporate forms of transportation such as buses, metros, or bicycles into the system to increase its reach. If there’s no form of public or private transportation that travels to your neighborhood, then you might have to walk to a transportation-covered area to receive a package.

“We have to think new ideas, new solutions to tackle these challenges,” he says.

A future step, he says, is to change the optimization conditions. Right now it’s meant to get a package delivered as quickly as possible. But people might instead prefer having their package arrive before a certain time. 

Traffic signals at an intersection in Pittsburgh, Penn.

Pittsburgh's AI Traffic Signals Will Make Driving Less Boring

Idling in rush-hour traffic can be mind-numbing. It also carries other costs. Traffic congestion costs the U.S. economy $121 billion a year, mostly due to lost productivity, and produces about 25 billion kilograms of carbon dioxide emissions, Carnegie Mellon University professor of robotics Stephen Smith told the audience at a White House Frontiers Conference last week. In urban areas, drivers spend 40 percent of their time idling in traffic, he added.

The big reason is that today’s traffic signals are dumb. Smith is developing smart artificial-intelligence-fueled traffic signals that adapt to changing traffic conditions on the fly. His startup Surtrac is commercializing the technology.

In pilot tests in Pittsburgh, the smart traffic-management system has gotten impressive results. It reduced travel time by 25 percent and idling time by over 40 percent. That means less time spent staring out the windshield and more time working, being with your family, or doing anything else. I’m a Pittsburgh resident who has witnessed the city’s rapidly evolving urban landscape. And I can attest to the mostly frustration-free driving that has resulted from this system despite the city’s growing population. 

The researchers also estimate that the system cuts emissions by 21 percent. And by boosting traffic throughput, it could spare cities the expense of widening roads or eliminating street parking.

Conventional traffic lights have preprogrammed timing that’s updated every few years. But as traffic patterns evolve, the systems can fall out of date much more quickly than that.

The Surtrac system instead relies on computerized traffic lights coordinating closely with each other. Radar sensors and cameras at each light detect traffic. Sophisticated AI algorithms use that data to build a timing plan “that moves all the vehicles it knows about through the intersection in the most efficient way possible,” Smith says. The computer also sends the data to traffic intersections downstream so they can plan ahead.
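The details of the planner are proprietary, but a toy sketch conveys the basic idea of adapting signal timing to sensed demand. The phase names, cycle length, and proportional allocation below are assumptions for illustration only; Surtrac's actual system builds per-vehicle schedules and coordinates with neighboring intersections.

```python
# Illustrative sketch only: give each signal phase green time in proportion to
# the vehicle queue its sensors report, within a fixed cycle. Not Surtrac's
# real scheduling algorithm.
def plan_green_times(queues, cycle_s=90, min_green_s=10):
    """queues: mapping of phase name -> detected vehicle count."""
    total = sum(queues.values())
    spare = cycle_s - min_green_s * len(queues)
    plan = {}
    for phase, count in queues.items():
        share = (count / total) if total else 1 / len(queues)
        plan[phase] = round(min_green_s + spare * share, 1)
    return plan

# Example: heavy north-south traffic detected by radar and cameras
print(plan_green_times({"north_south": 24, "east_west": 6}))
# -> {'north_south': 66.0, 'east_west': 24.0}
```

In a networked version, each intersection would also pass its outgoing traffic estimates to the next signal downstream so it can plan ahead, as Smith describes.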

Interior view of the dashboard on a self-driving car driving on a road; the driver's hands are off the wheel

Why AI Makes It Hard to Prove That Self-Driving Cars Are Safe

Car manufacturers will have difficulty demonstrating just how safe self-driving vehicles are because of what’s at the core of their smarts: machine learning. 

“You can’t just assume this stuff is going to work,” says Phillip Koopman, a computer scientist at Carnegie Mellon University who works in the automotive industry.

In 2014, a market research firm projected that the self-driving car market would be worth $87 billion by 2030. Several companies, including Google, Tesla, and Uber, are experimenting with computer-assisted or fully autonomous driving projects, with varying success, given the myriad technical obstacles that must be overcome.

Koopman is one of several researchers who believe that the nature of machine learning makes verifying that these autonomous vehicles will operate safely very challenging.

Traditionally, he says, engineers write computer code to meet requirements and then perform tests to check that it meets them.

But with machine learning, which lets a computer grasp complexity—for example, processing images taken at different hours of the day, yet still identifying important objects in a scene like crosswalks and stop signs—the process is not so straightforward. According to Koopman, “The [difficult thing about] machine learning is that you don’t know how to write the requirements.”

Years ago, engineers realized that analyzing images from cameras is a problem that can’t be solved by traditional software. They turned to machine learning algorithms, which process examples to create mathematical models for solving specific tasks.

Engineers provide many human-annotated examples—say, what a stop sign is, and what isn’t a stop sign. An algorithm strips down the images, picking unique features and building a model. When a computer is subsequently presented with new images, it can run them through the trained model and get its predictions regarding which images contain a stop sign and which ones don’t.
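A minimal sketch of that train-then-predict workflow might look like the following. It assumes scikit-learn and uses synthetic stand-in "images" (random pixel arrays) rather than real annotated photos; the feature choice and model are illustrative, not what any automaker actually ships.

```python
# Sketch of supervised training and prediction, with synthetic data standing in
# for human-annotated photos of stop signs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# 200 fake 8x8 grayscale images, flattened; label 1 = "stop sign", 0 = "not"
X = rng.random((200, 64))
y = (X[:, :8].mean(axis=1) > 0.5).astype(int)  # stand-in for human annotation

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# New images run through the trained model to get predictions
print("predicted labels:", model.predict(X_test[:5]))
print("test accuracy:   ", model.score(X_test, y_test))
```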

“This is an inherent risk and failure mode of inductive learning,” Koopman says. If you look inside the model to see what it does, all you get are statistical numbers. It’s a black box. You don’t know exactly what it’s learning, he says.

To make things more concrete, imagine you're teaching your self-driving car to avoid pedestrians. You have people in orange safety shirts stand around, and you let the car loose. The system might be learning to recognize hands, arms, and legs, or it might just be learning to recognize orange shirts.

Or, more subtly, imagine that you’ve conducted the training during the summer, and nobody wore a hat. And the first hat the self-driving car sees on the streets freaks it out.

“There’s an infinite number of things” that the algorithm might be training on, he says.

Google researchers once tried identifying dumbbells with an artificial neural network, a common machine learning model that mimics the neurons in the brain and their connections. Surprisingly, the trained model could identify dumbbells in images only when an arm was attached.

Other problems with safety verification, Koopman says, include training and testing the algorithm too much on similar data; it’s like memorizing flash cards and regurgitating the information on an exam.

If Uber dropped its self-driving cars in a random city, he says, where it hasn’t exhaustively honed computer maps, then maybe they wouldn’t work as well as expected. There’s an easy fix: If you only train and only operate in downtown Pittsburgh (which Uber has mapped), then that could be okay, but it’s a limitation to be aware of.

There’s also the challenge of ensuring that small changes in what the system perceives—perhaps because of fog, dust, or mist—don’t affect what the algorithms identify. Research published in 2013 found that changes to individual pixels in an image, invisible to the unaided eye, can trick a machine learning algorithm into thinking a school bus is not a school bus.
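The idea behind such attacks can be sketched in a few lines: nudge each pixel slightly in the direction that most shifts the model's score. The toy linear classifier and synthetic data below are assumptions for illustration; the 2013 study itself targeted deep neural networks.

```python
# Sketch of a pixel-level adversarial perturbation against a linear classifier.
# Synthetic data and model; often enough to flip the prediction of a borderline
# image while each pixel changes by an imperceptibly small amount.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.random((300, 64))               # fake flattened 8x8 images
y = (X.mean(axis=1) > 0.5).astype(int)  # fake labels
model = LogisticRegression(max_iter=1000).fit(X, y)

scores = np.abs(model.decision_function(X))
i = int(scores.argmin())                # the image the model is least sure about
x = X[i:i + 1]

w = model.coef_[0]
eps = 0.02                              # per-pixel change far too small to notice
direction = -np.sign(w) if model.predict(x)[0] == 1 else np.sign(w)
x_adv = np.clip(x + eps * direction, 0, 1)

print("original prediction: ", model.predict(x)[0])
print("perturbed prediction:", model.predict(x_adv)[0])
print("max pixel change:    ", float(np.abs(x_adv - x).max()))
```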

“You would never put such [a machine learning] algorithm into a plane because then you cannot prove the system is correct,” says Matthieu Roy, a software dependability engineer at the National Center for Scientific Research in Toulouse, France, who has worked in both the automotive and avionics industries. If an airplane does not meet independent safety tests, it cannot take off or land, he says. 

Roy says it would be too difficult to test autonomous cars for all the scenarios they could experience (think of an explosion or a plane crashing right in front). “But you have to cope with all the risks that may arrive,” he says.

Alessia Knauss, a software engineering postdoc at the Chalmers University of Technology in Göteborg, Sweden, is working on a study to determine the best tests for autonomous vehicles. “It’s all so costly,” she says.

She’s currently interviewing auto companies to get their perspectives. She says that even if there were multiple sensors—such as in Google’s cars—that act as backups, each component has to be tested based on what it does, and so do all of the systems that make use of it.

“We’ll see how much we can contribute,” Knauss says.

Koopman wants automakers to demonstrate to an independent agency why they believe their systems are safe. “I’m not so keen to take their word for it,” he says.

In particular, he wants car companies to explain the features of their algorithms, how representative the training and testing data are of different scenarios, and, ultimately, why their simulations show the vehicle is safe for the environments it is supposed to work in. If an engineering team simulated 10 billion miles of driving without a hiccup, the car still wouldn't have seen everything, but the company could argue that the remaining scenarios would rarely occur.

“Every other industry that does mission critical software has independent checks and balances,” he says.

Last month, the U.S. National Highway Traffic Safety Administration unveiled guidelines for autonomous cars, but they make independent safety testing optional.

Koopman says that with company deadlines and cost targets, safety corners can sometimes be cut, as happened with NASA's Challenger in 1986, when ignored risks led to the space shuttle breaking apart 73 seconds after liftoff, killing its seven crew members.

It’s possible to have independent safety checks without publicly disclosing how the algorithms work, he says. The aviation industry has engineering representatives who work inside aviation companies; it’s standard practice to have them sign nondisclosure agreements.

“I’m not telling them how to do it, but there should be some transparency,” says Koopman.

In-cabin camera monitors driver in self-driving car

Autonomous Vehicles Need In-Cabin Cameras to Monitor Drivers

When Elon Musk unveiled his “Master Plan” for Tesla on the company’s blog, he argued for the electric car’s controversial Autopilot mode in stark ethical terms. It would be “morally reprehensible,” he said, to scale back or disable Tesla’s partially autonomous driving feature because, on balance, Autopilot still saves lives.

There is no doubt that Autopilot and other similar driver-assistance technologies improve safety. But my experience as CEO of EDGE3 Technologies, a vision company developing driver-monitoring systems for vehicles, and as a former professor and head of the Machine Vision Lab at Embry-Riddle Aeronautical University, suggests something else, too: In the rush to achieve fully autonomous driving, we may be sidestepping the proper technology-development path and overlooking essential technologies needed to help us get there.

A Google self driving SUV on the streets of Mountain View, California

No Test Driver? No Problem: California May Make Testing Self-Driving Cars Easier

Self-driving cars without steering wheels or pedals might have gotten a little bit closer to reality late last week. On Friday, the California Department of Motor Vehicles released a revised draft of regulations that could give autonomous car manufacturers more flexibility than they have today. The proposed regulations would allow the testing of driverless cars that pass a federal safety inspection, even with no driver in the car.

At the moment, states with self-driving car regulation normally require the presence of drivers. In California, 15 companies have permits to test vehicles as long as there is a licensed driver along for the ride, according to Reuters. On Thursday, Gov. Jerry Brown signed into law a bill allowing the testing of vehicles without steering wheels or pedals, but the pilot program applies only to vehicles traveling below 56 kph (35 mph) in a privately owned business park.

Car driving down a road with a heads-up display of vehicle information

HERE Mapping Service to Automate Finding a Parking Spot

A new mapping service could one day automatically help drivers find parking spots.

At the Paris Motor Show on 29 and 30 September, the mapping company HERE will show off several technologies, such as road-sign detection and traffic monitoring, aimed at improving driver safety and decision making.

One of these services, according to a press release issued today, could show a driver the best places to park. A 2006 UCLA study of downtown traffic in major cities found that cruising for spots accounts for up to 30 percent of traffic flow. By combining third-party data with cars' onboard sensors, HERE's service would automatically pinpoint available spots and how much each of them would cost.

The novelty here is that, at first, the data would come from forward-facing cameras and other sensors on Audi, BMW, and Mercedes-Benz cars. (HERE, formerly Nokia's mapping business, is backed by Audi, BMW, and Mercedes-Benz.)

The platform should be available by the first half of 2017 to customers both inside and outside the automotive industry, according to HERE. The company plans to license the technology to automakers, municipalities, road authorities, smartphone makers, and app developers.

“What we are seeing today is the technology and automotive industries coming together to create services that will elevate the driving experience for billions,” Edzard Overbeek, CEO of HERE, said in the press release.

The interior of a Tesla car, with a large display screen on the dashboard

Tesla's Massive New Autopilot Update Is Released, Promising Safer Driving

A long-heralded update to Tesla Motors' Autopilot has just been made available for download. First reports suggest that it's as big a change in the semiautonomous driving system as Tesla CEO Elon Musk had promised.

One key element of the upgrade is making more use of the car’s existing radar capabilities, both to perceive the road in real time and to map it so that subsequent Tesla cars can distinguish earlier fixed features from new, perhaps threatening ones. Another key element is saving drivers from over-dependence on the software.

Either of those points might have saved the Tesla owner who died last May when his Autopilot, apparently unsupervised by the driver, drove into the side of a tractor trailer. That is the first fatality known to have been caused by a modern robotic driving system.

“We believe it would have seen a large metal object across the road,” Musk said in a conference call earlier this month, referring to the trailer. “Knowing that there is no overhead road sign there, it would have braked.”

(Another Tesla driver died in China in January, in a case now under litigation there, but it isn’t clear whether the Autopilot was operating at the time of the crash.)

Tesla’s preference for radar over lidar, the laser-ranging equivalent, makes the company a little unusual in autoland. Lidar has far better resolution—unlike radar it can see road markings and make out the shapes of signs and other things even at a distance.

Radar, however, is cheaper, more compact, and far better at seeing through rain and snow. And Tesla needs this immediate practicality because it’s incrementally raising the capability of its cars’ “advanced driver assistance systems,” or ADAS, to a fully self-driving level. By contrast, Google, Ford and Uber are aiming to produce a fully robotic car in one fell swoop. They now festoon their experimental cars with lidar in the expectation that it will become cheaper, smaller and more capable by the time that car is ready, five years (at least) from now.

Tesla’s Autopilot 8.0 goes further than ever to keep the driver’s eyes on the road. For instance, it will sound an alarm if your hands are off the wheel, repeating the warning with increasing insistence; after the third time, Autopilot will disengage for the remainder of the trip.
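A rough sketch of that escalation logic is below. The warning count, messages, and reset behavior are assumptions for illustration, not Tesla's actual implementation.

```python
# Hypothetical sketch of a hands-off-wheel warning escalation: warn, escalate,
# and disable the assist after the third warning for the rest of the trip.
class HandsOffMonitor:
    MAX_WARNINGS = 3

    def __init__(self):
        self.warnings = 0
        self.autopilot_locked_out = False

    def update(self, hands_on_wheel: bool) -> str:
        if self.autopilot_locked_out:
            return "autopilot disabled until the car is parked"
        if hands_on_wheel:
            return "ok"
        self.warnings += 1
        if self.warnings >= self.MAX_WARNINGS:
            self.autopilot_locked_out = True
            return "disengaging autopilot for the remainder of the trip"
        return f"warning {self.warnings}: hold the steering wheel"

monitor = HandsOffMonitor()
for hands_on in [False, True, False, False]:
    print(monitor.update(hands_on))
```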

How far these changes will go toward preventing accidents, small and large, remains to be seen. For now, though, the select reviewers who have been beta-testing the car say that it certainly drives in a less machine-like way.

“It’s only human to want to give the truck a little more space and hug the outer edge of the lane,” writes Tom Randall in Bloomberg News. “With the upgrade, the car is beginning to act a little more human, adjusting its position in the lane to account for perceived threats from the sides.” 

One ho-hum aspect of today’s upgrade would once have been the most striking thing of all: it’s all done through an over-the-air download. Tesla pioneered this trick, and now other automakers are following suit. Here Tesla has a built-in advantage over other car makers: it sells cars direct to the public, so upgrades can go straight to the customer without alienating a dealer network.

U.S. DoT Secretary Foxx on the day he announces the imminent publication of federal robocar guidelines

Federal Regulators Open the On-Ramp for Self-Driving Cars

One of the great questions hanging over self-driving cars is the attitude that government regulators will take toward them.

As it had hinted it would do, the U.S. Department of Transportation has chosen to allow the adoption of robocars to proceed as quickly as possible (but not more so, to borrow a phrase from Einstein).  

In a statement last night the DOT summarized the policy, which it has just released in full today. It’s a system of guidelines rather than hard-and-fast rules—enough to enable engineers to plan their products and companies to refine their business models.

“This is a change of culture for us,” Transportation Secretary Anthony Foxx said yesterday. “Typically we would say a car must meet standard ‘A’ in a certain way. Under this approach, it isn’t prescriptive that there have to be specific proof points to be met before a technology comes to market.”

The guidelines cover when a car can drive itself; when it must hand control back to the driver; how it might stop or leave the road when such a handover’s not possible; and how it must handle ethical challenges, such as whether to veer to avoid one accident even if that risks causing another one. Perhaps most important, the framework will have national standing.

Vox reports that a Transportation Department official said last night in a telephone interview that the federal rules will cover robotic systems, while those of states and municipalities will apply only to the human drivers. In other words, if I drive badly, my state will punish me; if my car drives itself badly, the feds will intervene, presumably by going after the car’s maker.

Here is how the full Department of Transportation (DOT) report puts it: “DOT strongly encourages States to allow DOT alone to regulate the performance of [self-driving] technology and vehicles. If a State does pursue [self-driving] performance-related regulations, that State should consult with NHTSA and base its efforts on the Vehicle Performance Guidance provided in this Policy.”

The U.S. government has long shown its desire to encourage self-driving technology, both in what it has said and in what it has not said. At a conference back in July, Mark R. Rosekind, head of the National Highway Traffic Safety Administration, refused to mention by name the first fatality caused by a robocar—a Tesla Model S that drove itself into a truck two months before. Instead he referred to it indirectly as “the elephant in the room,” and went on to stress that no single failure would “derail” the government’s efforts to speed the adoption of self-driving cars.

“We should be desperate for new tools that will help us save lives,” Rosekind said.

ford robocar with four lidar towers

A Ride In Ford's Self-Driving Car

The only sign of fallibility I saw yesterday in Ford’s experimental self-driving car came halfway through a drive near the company’s headquarters in Dearborn, Mich., when the robocar briefly braked for no clear reason, then apparently thought better of it.

A tiny irregularity, and grist for the engineers’ mill along with other little lapses, logged at this media event. One reporter said his car had been a bit “spooked” by a hedge. But at least in my drive the car did everything from start to finish, setting out with a programmed destination but deciding on each turn, lane-change, stop, and start.

Here, in this protected realm among Ford employees, the self-driving car will first see use in a ride-hailing service in 2018. By then some of the sensors will have improved. For instance, the four $8,000 lidar sets on the roof, which reach only 80 meters, will soon be replaced by just two sets that can see about twice as far. And by 2021, when Ford plans to roll out a commercial robotaxi service, the lidar should be better still, smaller, and cheaper.

“We design modularly, so that we don’t depend on the availability of new hardware,” Randal Visintainer, the director of Ford's autonomous vehicle program, told IEEE Spectrum. A lot of suppliers have been talking about lidar-on-a-chip for ridiculously cheap prices, he noted, “but I haven’t seen one yet.”

The interesting thing about the lidar arrangement is that the two outermost sets revolve obliquely, so as to get a view of the space immediately adjacent to the side of the car. If you rely on just one roof-mounted set, as Google’s car does, the car casts a shadow, creating a blind zone. The other two sets on Ford’s vehicle are vertically oriented so that their fields overlap in front and in the back, providing extra detail.

In this first unveiling to journalists, Ford’s little fleet of robocars, all based on the Ford Fusion hybrid, stuck to streets mapped to within two centimeters, a bit less than an inch. The car compared that map against real-time data collected from the lidar, the color camera behind the windshield, other cameras pointing to either side, and several radar sets—short range and long—stashed beneath the plastic skin. There are even ultrasound sensors, to help in parking and other up-close work.

Here’s how the map looks to the car’s self-driving system: the darker colors represent stored mapping data, and the brighter colors represent real-time data from the car’s own sensors.

One sensor the car did not use was real-time GPS, which Ford deems too unreliable in built-up areas, where satellite signals can reflect along a number of different pathways.

“If they give us GPS relay stations, we’ll use them,” Visintainer said. “If there were smart intersections, we’d use them too—it’d make our lives a lot easier.”

The only staged event in our ride came when a Ford employee acted out the part of a pedestrian: He hit the “walk” button at a crosswalk and then crossed the street while demonstratively fiddling with a cellphone. The car stopped appropriately. Later, though, real pedestrians crossed, and the car again did as it should, if anything using an excess of caution. One time it stopped and wouldn’t budge until a pedestrian had not only crossed the street but taken another dozen steps up a sloping path.

The critical point here is that Ford is designing a car that will do it all, all at once, a kind of technological Great Leap Forward from today’s cars, with their advanced driver assistance systems (ADAS). That doesn’t mean the company isn’t working on those stopgap measures as well.

“We developed a philosophy of designing from the bottom up and from the top down,” Visintainer said. Improving the self-driving power is the top-down approach; getting the driver-assistance systems to work is the bottom-up approach. “The question is, how far down can we take that [first approach], and when do the two approaches meet?”

Google made the top-down approach famous, arguing that anything short of full autonomy would lull drivers into a false sense of security. And that’s what many in the business say caused the one fatal robocar accident back last May, when a Tesla, unsupervised by its driver, drove itself into the side of a truck.

Taking the human being out of the loop—the jargon for turning a driver into a passive passenger—means taking away the safety net that today’s most advanced cars all require. “And you need extra redundancy if there’s no human serving as backup,” Visintainer adds.

He said putting the two strategies together, and putting systems together so they can be manufactured efficiently and last long, is Ford’s core competency—the thing it can do better than non-carmakers like Google, Apple, and Uber. 

It was a note sounded yesterday in a talk by Bill Ford, the executive chairman of Ford Motor Co., who noted he’d gotten to know Silicon Valley during the years he spent as a member of the board of eBay. “We know how to integrate all that technology into a vehicle that they [the non-car makers] might not have,” Ford said. “People were saying we’ll be low-margin assemblers of other people’s technology. That’s shifted. We bring a lot of technology ourselves, then we integrate it into the vehicle, and then we build it.”

The car and non-car companies are competing, not just to lead in technology, but also to be perceived as leading. It was no accident that on Tuesday, while Ford was giving a gaggle of journalists in Dearborn their first ride in a robotic version of the Ford Fusion, Uber was giving another bunch of scribes in Pittsburgh their first ride in Uber’s own robotic version of the very same car.

Uber is about to use them in a pilot commercial robotaxi service, but that doesn’t mean it’s leapfrogged Ford, let alone Google. The Uber cars will remain firmly under the supervision of professional drivers.  

