Cars That Think

Elon Musk waving good-bye

Tesla Divorces Mobileye, or Vice Versa

Mobileye, the Israeli maker of autonomous-driving technology, will not develop new technologies for Tesla Motors—a development that seems related to the recent death of a driver whose Tesla Model S steered itself into the side of a truck.

It isn’t clear which company initiated the split-up or even whether the crash was the main reason for it. The two companies have lately differed on autonomous driving. Tesla seems comfortable with letting its customers beta-test its newest systems; Mobileye wants to label its current product as a mere “driver assistance” feature.

Amnon Shashua, the chief technical officer of Mobileye, only alluded to the crash this morning during a second-quarter conference call with industry analysts. But he recently questioned Tesla’s approach.

“It’s very important given this accident…that companies would be very transparent about the limitations of autonomous driving systems,” Shashua said earlier this month, according to a report today in the Wall Street Journal. “It’s not enough to tell the driver to be alert but to tell the driver why.”

The Tesla driver who died had apparently not been paying attention at the time of the crash. Some reports implied that he may even have been watching a movie.

Mobileye’s current system relies heavily on a single camera, but its next-gen system will fuse data from other sensors, notably lidar, the laser-imaging system that Google cars use and that most automakers are experimenting with now. Tesla, however, has remained a rare holdout.

Mobileye achieved its first big market success by supplying the chips and software for Tesla’s Autopilot system. But the company says revenues from Tesla constitute only about 1 percent of the total. Most of its business is with other carmakers, notably General Motors, Ford, Nissan, BMW, Volkswagen, and Hyundai.

“Tesla manufactures annually just 50,000 to 60,000 vehicles, whereas Mobileye should supply around 13 million chips this year (each car has only one chip),” Paul Kratz, an analyst at Berenberg, told the Financial Times.

Many suppliers must certainly yearn for Tesla’s business.

“Tesla will be transitioning to internally developed software for the camera portion of Autopilot,” Tesla said in a terse email to Investor’s Business Daily. No mention was made of what hardware it will use. One possible supplier is Nvidia, which is promoting its Drive PX chipset for self-driving cars; another is NXP, which recently launched its rival BlueBox product.

Tweet by Elon Musk

Tesla's Autopilot May Have Saved A Life

The driver of a Tesla reports that the Autopilot feature in his car stopped him from hitting a pedestrian last Saturday night, in Washington, D.C.

The report was included in a message—with the driver’s identifying details blacked out—posted on Twitter by Tesla CEO Elon Musk. Musk noted that the story was confirmed by data logs.

“It was night time, there was a lot of glare from the headlights of oncoming cars, and there was a siren in the distance,” the driver recounts. “We were having trouble figuring out if the siren was coming from behind us or from one of the side roads when a pedestrian stepped out in front of our Model S in the dark with dark clothes and in the middle of the road (not near an intersection).

“Before I could step on the brakes the car beeped [a warning] and the picture of a red car came up on my dash. The car slammed on the brakes before I could, and we stopped just inches from hitting the pedestrian. I guess that the car thought the pedestrian was another car in front of us? I am not sure if I would have been able to stop before hitting him, but I am so glad the car did.” 
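The sequence the driver describes—a warning beep followed by automatic braking—matches how a basic forward-collision system behaves: estimate time-to-collision from range and closing speed, warn first, then brake. Here is a minimal sketch of that decision logic; the thresholds are invented for illustration and are not Tesla's actual parameters:

```python
def aeb_decision(range_m, closing_speed_mps, warn_ttc=2.0, brake_ttc=1.2):
    """Return 'none', 'warn', or 'brake' from a simple time-to-collision check.

    Thresholds are illustrative; a production system fuses multiple
    sensors and uses far more elaborate logic than this.
    """
    if closing_speed_mps <= 0:          # object is not getting closer
        return "none"
    ttc = range_m / closing_speed_mps   # seconds until impact at current speed
    if ttc < brake_ttc:
        return "brake"
    if ttc < warn_ttc:
        return "warn"
    return "none"

# Obstacle 15 m ahead, closing at 10 m/s -> TTC = 1.5 s -> warn first
print(aeb_decision(15, 10))   # warn
# Obstacle 10 m ahead at 10 m/s -> TTC = 1.0 s -> brake
print(aeb_decision(10, 10))   # brake
```

The anecdote also hints at why this works at all: the system reportedly classified the pedestrian as a car, but a time-to-collision check doesn't care what the obstacle is, only how fast it is closing.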

If Autopilot did in fact save a life, it offsets the death in May of a Tesla driver who apparently was looking away from the road when Autopilot led his car into the side of a truck. That death—the only one in a modern robocar—has become “the elephant in the room” for road-safety experts, in the words of Mark R. Rosekind, head of the National Highway Traffic Safety Administration.

Rosekind, speaking Wednesday at the Automated Vehicles Symposium, in San Francisco, wouldn’t comment directly on that crash or even mention Tesla by name. But he did stress that no one incident would “derail” the agency’s efforts to integrate self-driving cars into the regulatory system.

“We should be desperate for new tools that will help us save lives,” he said.

Nobody would accept a robocar technology that merely saved as many lives as it cost. But it’s still unclear how favorable that ratio must become to satisfy the critics of this or any other newfangled technology.

“Two times safer? Five times? Ten times?” asked Rosekind. He provided no answer.

Drivers need not apply

Self-Driving Tech On Parade in San Francisco

You can learn a lot about the progress of robocars just by eating lunch at the Automated Vehicles Symposium, being held here in San Francisco—especially if you’re standing next to a self-driving skeptic like Lee Gomes. He’s a frequent contributor to IEEE Spectrum, and he recently broke the story that Google was no longer predicting fully autonomous cars within five years. The company is now talking five to 30 years, depending on driving conditions.

You’re a little pessimistic, I opine. “I’ve covered AI for a long time,” he explains.

Cars are getting smarter all the time, I maintain. Then Nidhi Kalra, a Rand Corporation analyst, brings her plate of food to our little standup table. She’s no pessimist, but she does have questions. 

“What will Google do when its self-driving car finds itself in a situation it can’t handle?” she asks. “If there’s no steering wheel in the car, do the passengers just sit there?”

One particularly audacious iteration of the Google Car concept does indeed leave out the steering wheel. And the accelerator, and the brakes.

Tom Taylor of Wheego Electric Cars leans over a plate of three-bean salad and says Google has done a great job in motivating the notoriously torpid auto industry. “They have no interest in selling cars, they just wanted to push the industry forward, and it worked,” he says.

Wheego is a small example of that phenomenon. A pure-EV outfit that plans to sell cars to China, it’s attending this conference in part to demonstrate its self-driving chops.

“No, I think Google’s founders do believe in the technology,” retorts Gomes. “They’re computer scientists, and they think computers can do anything.”

One thing’s evident from the attendance list here: Google has got all the major car companies to follow its lead. Nissan and Ford had speakers in this morning’s plenary session. The big auto suppliers are doubling down on robocar tech as well.

And governments are trying to get out in front of the juggernaut by regulating safety while they can still make a difference. In the morning session, Joan Walker, a professor of civil engineering at the University of California, Berkeley, cited the challenge of levying a toll for cars that aren’t carrying even a single person. “If you worry about all the single-occupancy vehicles on the roads, well, now we’ll be getting zero-occupancy trips,” she said. “Once things are free it’s hard to charge [for them]; maybe we can start policy to charge now while we still can.”

And another thing Google helped to start: Everyone is trying to cherry-pick talent from everyone else.

Kalra got her Ph.D., in robotics, from Carnegie Mellon University. Uber then swooped in and pillaged her old department.

Even some of the plenary speakers have been part of the industry-wide churn.

Faraday Future, the fabulously well-funded, Chinese-backed EV company, was represented here today by Jan Becker, who said the company would begin to sell its first products “in the next few years.” The last time he presented at this symposium, Becker represented Bosch.

Swiss E-Buses Charge As Fast As Lightning

People’s eyes glaze over after about one minute spent charging a battery, and that’s bad news for the guys who sell electric vehicles.

So various schemes have been devised to shorten the wait. You can swap in fresh batteries, but that hasn’t taken off. You can add gasoline engines optimized for “range extension,” as it’s called, but that adds moving parts. You can charge as you go, via inductive pads buried in the pavement, but that makes for an expensive roadway.

Now ABB, the Swiss power and automation giant, has another suggestion. In a pilot project in Geneva, it has provided electric buses with quick-charging opportunities. In the 15 seconds that people spend getting on and off the bus the system transfers 2.5 kilowatt-hours—enough to keep the bus on its appointed rounds for another three or four stops. Then it’s time for another quick fix. Only when it reaches the end of the circuit does the bus take the four or five minutes it needs to top off its lithium-ion batteries, which can hold 73 kilowatt-hours.
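Those figures imply a very high instantaneous power level—which is exactly why, as Favre explains below, the station buffers energy in its own batteries rather than drawing it straight from the grid. A quick back-of-the-envelope check:

```python
energy_kwh = 2.5      # energy transferred during one flash charge
dwell_s = 15          # passenger boarding time at the stop
pack_kwh = 73         # capacity of the bus's battery pack

# Convert kWh to kilojoules (x3600), divide by the dwell time in seconds.
power_kw = energy_kwh * 3600 / dwell_s
# Charge rate relative to pack size, in the usual "C" notation.
c_rate = power_kw / pack_kwh

print(f"{power_kw:.0f} kW")   # 600 kW -- far more than a typical grid feed
print(f"{c_rate:.1f}C")       # roughly 8C, the regime where lithium-titanate helps
```

The 600-kilowatt burst is transient, so the flash station can refill its own buffer at a modest trickle between buses.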

The demonstration phase has ended, and ABB says it now has a commercial contract: A complete line with a full fleet of buses will operate on one of Geneva’s main routes. The company hopes to sell the technology to clients around the world.

“Obtaining adequate power for this purpose from the grid can be a challenge,” according to Jean-Luc Favre, who heads ABB’s global rail business. “We address this at the flash stations by trickle charging [the station’s] batteries, which are an integral part of the station. Then, when the bus arrives, power is drawn from the flash batteries to the bus batteries, avoiding peaks on the grid.”

The batteries, too, are optimized for the purpose. Their anodes are composed not of carbon but of nanocrystals made of a compound of lithium and titanium. The much greater surface area offered by the nanocrystals facilitates fast charging; the tradeoff for this is a lower voltage per cell.

“To ease integration into a city [the flash charger] could be removed from the bus stop and placed around the corner or underground, if required,” Favre adds. 

A system that works for buses should also work for delivery trucks, taxis, and any other vehicles that make periodic stops. It’s not necessary that fast charging add huge amounts of effective capacity—it would be fine if all it did was ease the range anxiety that haunts many would-be buyers of EVs.

A tweet from Elon Musk about Tesla radar technology

Tesla Again Spurns Lidar, Betting Instead On Radar

Elon Musk says today’s Tesla cars can improve their self-driving ability with their existing radar systems by generating the kind of point cloud you’d normally expect only from lidar. (Lidar’s the laser sensing system used in the trademark roof-mounted beacons on Google cars.)

In messages on his Twitter feed, Musk explained that radar could create in effect a stereoscopic, 3D map by making repeated measurements as the car moves, each time getting a slightly different point of view. Lidar achieves this effect by scanning the laser beacon.
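The idea Musk sketches—fusing successive radar snapshots taken from slightly different positions into one map—amounts to transforming each scan from the car's local frame into a common world frame using the car's pose at that moment, then accumulating the points. A toy 2-D version (a real system would work in 3-D and must contend with radar's coarse angular resolution):

```python
import numpy as np

def accumulate_point_cloud(scans):
    """Fuse radar returns taken from different vehicle poses into one map.

    scans: list of (pose, points) pairs, where pose = (x, y, heading_rad)
    of the car in a world frame and points is an (N, 2) array of
    detections in the car's local frame.
    """
    world_points = []
    for (x, y, th), pts in scans:
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])          # rotate local -> world
        world_points.append(pts @ R.T + [x, y])  # then translate by the pose
    return np.vstack(world_points)

# The same obstacle, seen from two poses, should land at one world point.
seen_from_origin = np.array([[10.0, 0.0]])   # 10 m dead ahead
seen_after_moving = np.array([[5.0, 0.0]])   # 5 m ahead after advancing 5 m
cloud = accumulate_point_cloud([
    ((0.0, 0.0, 0.0), seen_from_origin),
    ((5.0, 0.0, 0.0), seen_after_moving),
])
print(cloud)   # both rows land at [10., 0.]
```

The hard part in practice is knowing the pose precisely enough—odometry drift smears the accumulated cloud—which is one reason lidar's single-scan precision is prized.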

Many major car companies have started working with lidar, in part because the technology is getting less expensive. But although Tesla was recently reported to be experimenting with lidar, the company remains a notable holdout.

Today’s Tesla cars could realize “certainly moderate and maybe big advances w no incremental hardware,” Musk said in another Tweet.

Lidar offers greater precision than radar, but it can’t see nearly as well through rain, snow, and fog. It’s also very conspicuous, though future setups will likely eschew the roof tower for less obtrusive packages that can be hidden in the grille and salted around the corners of the car.

Most big car makers believe that for true self-driving capability many different and complementary sensors will be needed. Each one would serve as backup, covering the blind spots of all the others.

Of course, the most capable backup is the human being behind the wheel—provided that the human rivets his attention to the road. But, as the recent fatal accident of a Tesla Model S has demonstrated, a human freed from moment-to-moment driving chores may be not much backup at all.


Nissan Offers Self-Driving Feature Like Tesla's Autopilot

Nissan is about to offer a semiautonomous feature that’s quite like Tesla’s Autopilot. Not only does it sound like it—“ProPilot”—it also depends on a mono-camera sensor and on Mobileye processing.

Nissan is working very hard to emphasize the “semi” part of the word “semiautonomous,” billing ProPilot as an improved form of cruise control—not as a robotic chauffeur that’s “almost twice as good as a person.” That’s what Tesla honcho Elon Musk said of Autopilot on 25 April, 12 days before a Tesla Model S under Autopilot control crashed into a truck, killing the car’s driver. It was the first fatality attributed to a modern, self-driving car.

Even Tesla has required drivers to hit the turn signal to trigger Autopilot’s automatic lane-changing feature. But Musk, at once a futurist and a salesman, has always had trouble restraining his own tendency to hype things.

With the Nissan, push a button and ProPilot maintains a fixed distance to the car in front of you, keeping within the lane and braking when necessary. Take your hands off the steering wheel, and it will nag you to put them back; ignore the nagging, and the system will cut off. 
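The hands-off escalation described above—assist, then nag, then cut off—is essentially a small state machine. A sketch of that logic, with timing thresholds invented for illustration (Nissan has not published the actual values):

```python
def propilot_state(hands_on, seconds_hands_off, nag_after=5.0, cutoff_after=15.0):
    """Toy escalation logic for a hands-on-wheel monitor.

    The nag_after and cutoff_after timings are hypothetical, chosen
    only to illustrate the assist -> warn -> disengage progression.
    """
    if hands_on:
        return "assisting"                 # normal operation
    if seconds_hands_off >= cutoff_after:
        return "disengaged"                # driver ignored the nagging
    if seconds_hands_off >= nag_after:
        return "warning"                   # hands off too long: nag
    return "assisting"                     # brief hands-off grace period

print(propilot_state(True, 0))       # assisting
print(propilot_state(False, 8.0))    # warning
print(propilot_state(False, 20.0))   # disengaged
```

The design point worth noting is the hard cutoff: unlike a system that merely warns, ProPilot refuses to keep driving for an inattentive human.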

“Naturally, there are limitations to the system, and our job is to communicate what those limitations are,” said Nissan General Manager Tetsuya Iijima, Reuters reports.

And Nissan customers are about as different from Tesla’s as you can get. ProPilot will initially be offered only in Japan, on the Nissan Serena, a staid and practical minivan. Tesla’s cars are feline, earth-clawing performance cars, and their mostly American drivers are, as you’d expect, a rather adventurous breed, to judge by the YouTube videos they post.

Tesla Model S Autopilot

Tesla Autopilot Crash: Why We Should Worry About a Single Death

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Only recently, Tesla Motors revealed that one of its self-driving cars, operating in Autopilot mode, had crashed in May and killed its driver. How much responsibility Tesla has for the death is still under debate, but many experts are already reminding us of the huge number of lives that could be saved by autonomous cars.

Does that mean we shouldn’t worry much about the single death—that we should look away for the sake of the greater good? Is it unethical to focus on negative things that could slow down autonomous-driving technology, which could mean letting thousands of people die in traffic accidents?

A woman in a Tesla car on Autopilot without her hands on the wheel.

Tesla Autopilot Crash Exposes Industry Divide

The first death of a driver in a Tesla Model S with its Autopilot system engaged has exposed a fault line running through the self-driving car industry. In one camp, Tesla and many other carmakers believe the best route to a truly driverless car is a step-by-step approach where the vehicle gradually extends control over more functions and in more settings. Tesla’s limited Autopilot system is currently in what it calls “a public beta phase,” with new features arriving in over-the-air software updates.

Google and most self-driving car startups take an opposite view, aiming to deliver vehicles that are fully autonomous from the start, requiring passengers to do little more than tap in their destinations and relax.

The U.S. National Highway Traffic Safety Administration (NHTSA) classifies automation systems from Level 1, sporting basic lane-keeping or anti-lock brakes, through to Level 4, where humans need never touch the wheel (if there is one).

A Level 2 system like Tesla’s Autopilot can take over in certain circumstances, such as highways, but requires human oversight to cope with situations that the car cannot handle—such as detecting pedestrians, cyclists, or, tragically, a white tractor-trailer crossing its path in bright sunlight.
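The NHTSA scale described above can be captured as a simple lookup; the level descriptions below are paraphrased from NHTSA's 2013 policy statement, which defined levels 0 through 4:

```python
# NHTSA's automation scale (2013 policy), paraphrased.
NHTSA_LEVELS = {
    0: "No automation: the driver does everything",
    1: "Function-specific: e.g. lane-keeping or anti-lock brakes",
    2: "Combined functions: system drives in limited settings, human supervises",
    3: "Limited self-driving: human is the fallback but need not monitor constantly",
    4: "Full self-driving: humans need never touch the wheel (if there is one)",
}

def requires_constant_oversight(level):
    """Below Level 3 the human must supervise continuously.

    Tesla's Autopilot sits at Level 2, which is the crux of the
    industry divide the article describes.
    """
    return level < 3

print(requires_constant_oversight(2))   # True
print(requires_constant_oversight(4))   # False
```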

Proponents of Level 4 technologies say that such an incremental approach to automation can, counter-intuitively, be more difficult than leap-frogging straight to a driverless vehicle. “From a software perspective, Level 2 technology may be simpler to develop than Level 4 technology,” says Karl Iagnemma, CEO of autonomous vehicle startup nuTonomy. “But when you include the driver, understanding, modeling and predicting behavior of that entire system is in fact pretty hard.”

Tesla Motors CEO Elon Musk

The Tesla Autopilot Crash Could Have Been Much Worse

Five days after the world learned that the driver of a Tesla on Autopilot had died in a crash, there are still a lot of unanswered questions. 

The first unanswered question—if we go in a strictly chronological order—has to do with the timing. The crash happened on 7 May, but we learned of it 56 days later—on the Thursday evening that inaugurated the long holiday weekend in the United States ending on 4 July.

It’s an old trick, one that General Motors pulled back in 1987. I was then a mere cub reporter who’d left work a little early for the long Thanksgiving weekend. Minutes later, at the very close of business hours, GM reported a recall of its Fiero two-seater, whose engine had shown an unfortunate tendency to burst into flames.

Fool me once, GM, shame on you. Fool me twice, shame on me.

A second unanswered question is whether the Tesla’s driver—Joshua Brown, 40, a former Navy SEAL—had been paying proper attention. One report said that a “Harry Potter” movie was heard playing on a DVD in the wrecked car. Another report denied that any such thing happened.

If Brown had in fact been paying attention but simply lacked time to intervene, then that’s really bad news for Tesla and for its policy of using its customers to beta-test technology. Google has repeatedly insisted it would release no self-driving tech until professional drivers have proved it to be safe. Volvo is carrying system redundancy to great lengths. Tesla is alone in playing things fast and loose.

A third unanswered question is what Tesla intends to do to prevent more such accidents from happening. The company is due to release version 8.0 of Autopilot, and though it will have many improvements it seems that none of them would have averted the 7 May crash.

But the fourth question is probably what troubles Elon Musk’s sleep the most, and it’s purely hypothetical. What if that Tesla had crashed not into a huge truck but into something smaller than itself?


Fatal Tesla Self-Driving Car Crash Reminds Us That Robots Aren't Perfect

On 7 May, a Tesla Model S was involved in a fatal accident in Florida. At the time of the accident, the vehicle was driving itself, using its Autopilot system. The system didn’t stop for a tractor-trailer attempting to turn across a divided highway, and the Tesla collided with the trailer. In a statement, Tesla Motors said this is the “first known fatality in just over 130 million miles [210 million km] where Autopilot was activated” and suggested that this ratio makes the Autopilot safer than an average vehicle. Early this year, Tesla CEO Elon Musk told reporters that the Autopilot system in the Model S was “probably better than a person right now.”
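Tesla's implied comparison can be made explicit. The commonly cited U.S. average around 2014–2015 was roughly 1.08 road deaths per 100 million vehicle-miles traveled; that figure is an assumption on my part, not from Tesla's statement. The arithmetic:

```python
autopilot_fatalities = 1
autopilot_miles = 130e6      # Tesla's figure, quoted above

# U.S. average fatality rate, commonly cited for 2014 (assumed here,
# not taken from Tesla's statement): ~1.08 deaths per 100M vehicle-miles.
us_rate_per_100m = 1.08

autopilot_rate_per_100m = autopilot_fatalities / (autopilot_miles / 100e6)
print(f"Autopilot: {autopilot_rate_per_100m:.2f} deaths per 100M miles")  # 0.77
print(f"US average: {us_rate_per_100m:.2f} deaths per 100M miles")

# Caveat: a single event makes the Autopilot estimate statistically
# fragile -- the true rate could plausibly be several times higher or lower.
```

That caveat matters: with only one fatality in the sample, the comparison Tesla draws is suggestive at best, which is part of why NHTSA is investigating rather than taking the ratio at face value.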

The U.S. National Highway Transportation Safety Administration (NHTSA) has opened a preliminary evaluation into the performance of Autopilot, to determine whether the system worked as it was expected to. For now, we’ll take a closer look at what happened in Florida, how the accident could have been prevented, and what this could mean for self-driving cars.


Cars That Think

IEEE Spectrum’s blog about the sensors, software, and systems that are making cars smarter, more entertaining, and ultimately, autonomous.