Cars That Think


Feds Call Tesla's AutoPilot Safe

Not only does the U.S. road safety regulator clear Tesla’s AutoPilot of any responsibility for causing a fatal crash in May 2016, the agency even credits the system with reducing crashes by 40 percent overall.

In the report, released yesterday, the National Highway Traffic Safety Administration said that the crash—a side collision in an intersection between a Tesla Model S and a truck—was due to driver distraction that lasted for at least 7 seconds. It concluded that the automatic emergency braking (AEB) system was not to blame because it hadn’t been designed for such a scenario.

“AEB systems used in the automotive industry through Model Year 2016 are rear-end collision avoidance technologies that are not designed to reliably perform in all crash modes, including crossing path collisions,” the report said.

More important, NHTSA said that Autosteer, another element in Tesla’s driver-assistance package, had reduced crashes by nearly 40 percent. Before installation of Autosteer, there were 1.3 airbag-deployment crashes for every million miles driven; that figure fell to 0.8 after installation.
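Back of the envelope, the two reported rates bear out the "nearly 40 percent" figure (a quick check using the rates NHTSA cited):

```python
# NHTSA's reported crash rates for Tesla vehicles, per million miles driven
rate_before = 1.3  # airbag-deployment crashes before Autosteer
rate_after = 0.8   # after Autosteer installation

reduction = (rate_before - rate_after) / rate_before
print(f"Crash rate reduction: {reduction:.1%}")  # → 38.5%, i.e. "nearly 40 percent"
```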

Many have criticized AutoPilot not so much for its technical prowess as for its ambitious-sounding name, which might be taken as an encouragement to rely on the system to drive the car without human supervision. NHTSA points out that Tesla has addressed the problem by giving drivers visual cues to test their attentiveness: miss one cue too many and you’ll "strike out," turning off Autopilot until you stop the car.


Velodyne Plans a Lidar Megafactory

After years of dominating the market for automotive lidar thanks to the simple fact of being in it first, Velodyne is now shaking off new contenders by expanding a megafactory in San Jose, Calif.

This investment—enabled in part by cash infusions from Ford and Baidu—should allow Velodyne to ramp up to 1 million units annually by 2018, the company says. That means that its rooftop tower, the Cadillac of lidars, may soon sell at Chevrolet prices. The company is also expanding its R&D effort by building a new facility in Alameda, Calif.

The factory is already in operation on a comparatively small scale, churning out the HDL-64E, a descendant of the model made famous as the roof-mounted revolving scanner on Google’s self-driving cars. Google originally had to pay US $73,000 for each unit, a rotating array of 64 laser beams. But in 2015, Chris Urmson, then head of Google’s car project—recently spun off as Waymo—said his outfit was unhappy with the product and was designing alternatives to it in house.

Earlier this month, at CES 2017, Waymo CEO John Krafcik showed off a new self-driving car packing in-house lidars of three kinds: midrange, to replace Velodyne’s unit; short distance, for the area right next to the car, which had previously sat in the shadow of that unit; and long distance, for seeing beyond the range of Velodyne’s original model.

Velodyne’s expansion suggests that the company thinks it can outdo not only Google but also the startups that in recent years have been talking up plans for much less costly solid-state lidars, which they claim will equal or even outperform the Velodyne design. Then, last month, Velodyne claimed a “breakthrough” in solid-state design that, with mass production, should drop the unit price below $50.

That’s like marking down a Caddy to Matchbox car prices.

Such cheap-as-dirt lidar would find its way into just about every vehicle, bicycles included. Even Tesla chief Elon Musk, famously dismissive of the sensor, might finally cave in and allow lidar a place at the table.


Waymo Shows Off Its Home-Grown Robocar Hardware

Waymo, the robocar spinoff of Alphabet, has finally taken the wraps off a self-driving system loaded to the gills with sensors designed in house. And that’s the key point, said Waymo CEO John Krafcik yesterday at the Detroit Auto Show.

“It was the eminently quotable computer scientist Alan Kay who said, ‘People who are really serious about software should make their own hardware,’ ” Krafcik said. Beside him on the stage stood a self-driving hybrid minivan from Fiat Chrysler equipped with Waymo’s technology. Krafcik said the minivans would hit public roads in California and Arizona later this month.

The Kay quote was revealing: Kay pioneered graphical interfaces at Xerox PARC’s fabled idea factory in the 1970s, and much of that work was later embodied in the Apple Macintosh. As Krafcik noted, the Apple design philosophy has famously emphasized maintaining control over hardware as well as software. There’s no better way to defend an invention’s status as a premium product, commanding correspondingly fat profit margins.

The Waymo sensors are divided among several categories:

Three lidar sets cover short, medium and long distances, using lasers to estimate distances and put detected objects into detailed, three-dimensional maps. The particular advantage of lidar is in its ability to judge distances.

The existing rooftop tower contains the medium distance sensor that scans a full 360 degrees around the car; years ago, when Waymo was known as the Google Car project, this sensor was supplied by Velodyne for some US $75,000 apiece. Krafcik said Waymo’s own lidars all cost just 10 percent as much. (Velodyne has also lowered its prices in the meantime.)

A short-range lidar covers areas beside the minivan that would otherwise have been in the shadow of the rooftop sensor. And a long-range lidar, which Krafcik said was the first of its kind, works with the rooftop device to improve resolution.

“Imagine being able to see a football helmet, but two full football fields away,” he said.
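Lidar’s distance-judging ability comes from timing laser pulses. A simplified time-of-flight sketch (the principle only; Waymo’s actual signal processing is not public) shows that Krafcik’s two-football-field range corresponds to a round trip of just over a microsecond:

```python
# Back-of-the-envelope time-of-flight: a lidar ranges a target by timing a
# laser pulse's round trip. Principle only, not Waymo's real processing.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(seconds: float) -> float:
    """Distance to target: the pulse travels out and back, so halve the path."""
    return C * seconds / 2.0

# Two football fields (~200 m) means a round trip of roughly 1.33 microseconds
print(f"{range_from_round_trip(1.334e-6):.1f} m")  # → 200.0 m
```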

Eight vision modules—cameras, essentially—exploit this technology’s ability to recognize shapes; they cover the entire area around the car, along with a ninth module that faces forward and provides extra resolution.

“We can detect small objects, like construction cones, far away, even when we’re cruising down a road at high speed,” Krafcik said. “And with a wide dynamic range we can see in a dark parking lot, or out in the blazing sun, or any condition in between.”

Radars are also salted around the car to give a similar theater-in-the-round effect. Radar is the one sensor that sees easily through fog, rain and snow.

“Conventional radars have also been designed to focus solely on vehicle movements. However, our fully self-driving cars need to safely navigate around all road users, from cars to pedestrians to cyclists,” he said. “So, we’ve configured our radars so that they’re much more sensitive to these slower moving objects.”

Exactly how much of Waymo’s self-driving prowess comes from such hardware—rather than improved software and road mapping—isn’t clear. Krafcik did say that there’s been a significant decline in the need to “disengage,” the word Waymo uses to describe what the human minder does when he takes back the wheel from a car that’s gotten into difficulties.

“In 2016, even as we drove hundreds of thousands more miles, our rate of disengages per thousand fell, from 0.8 in 2015 to 0.2 today. That’s a four-time improvement in the performance of our self-driving systems in 12 months,” he said.
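A quick check of Krafcik’s arithmetic, using the disengagement rates he cited:

```python
# Waymo's reported disengagement rates, per thousand miles driven
rate_2015 = 0.8
rate_2016 = 0.2

improvement = rate_2015 / rate_2016
print(f"{improvement:.0f}x improvement")  # → 4x, matching Krafcik's claim
```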

What role Fiat Chrysler is playing in the creation and testing of the vehicle is unclear. What is clear is that Waymo wants to supply the entire auto industry with packages that can be fitted to just about any vehicle. Honda has recently shown interest in just that possibility.

Other carmakers, however, have taken different routes: partnering with a tech supplier, as Audi has done with Nvidia; buying tech firms outright, as General Motors did last year with its $1 billion acquisition of Cruise Automation; or developing the main parts of a robocar themselves, as Ford and Daimler are doing.

Waymo, in imitating Apple’s policy of seamlessly integrating hardware and software, may thus hope to reap some of Apple’s margins. Steve Jobs famously retained tight control in order to prevent his company’s products from degenerating into mere commodities. And commodification is the main threat that roboticization holds for the auto industry right now.


CES 2017: Nvidia and Audi Say They'll Field a Level 4 Autonomous Car in Three Years

Jen-Hsun Huang, the CEO of Nvidia, said last night in Las Vegas that his company and Audi are developing a self-driving car that will finally be worthy of the name. That autonomous vehicle, he said, will be on the roads by 2020.

Huang made his remarks in a keynote address at CES. Then he was joined by Scott Keogh, the head of Audi of America, who emphasized that the car really would drive itself. “We’re talking highly automated cars, operating in numerous conditions, in 2020,” Keogh said. A prototype based on Audi’s Q7 was, as he spoke, driving itself around the lot beside the convention center, he added.

This implies the Audi-Nvidia car will have “Level 4” capability, needing no human being to supervise it or take the wheel on short notice, at least not under “numerous” road conditions. So, maybe it won’t do cross-country moose chases in snowy climes.

These claims are pretty much in line with what other companies, notably Tesla, have been saying lately. The difference is in the timing: Nvidia and Audi have drawn a hard deadline for three years from now.

In a statement, Audi said that it would introduce what it called the world's first Level 3 car this year; it will be based on Nvidia computing hardware and software. Level 3 cars can do all the driving most of the time but require that a human be ready to take over.

At the heart of Nvidia’s strategy is the computational muscle of its graphics processing chips, or GPUs, which the company has honed over decades of work in the gaming industry. Some 18 months ago, it released its first automotive package, called Drive PX, and today it announced the successor to it, called Xavier. (That Audi in the parking lot uses the older, Drive PX version.)

“[Xavier] has eight high-end CPU cores, 512 of our next-gen GPUs,” Huang said. “It has the performance of a high-end PC shrunk onto a tiny chip, [with] teraflop operation, at just 30 watts.” By teraflop he meant 30 of them: 30 trillion operations per second, 15 times as much as the 2015 machine could handle.

That power is used in deep learning, the software technique that has transformed pattern recognition and other applications in the past three years. Deep learning uses a hierarchy of processing layers that make sense of a mass of data by organizing it into progressively more meaningful chunks.

For instance, it might begin in the lowest layer of processing by tracing a line of pixels to infer an edge. It might proceed to the next layer up by combining edges to construct features, like a nose or an eyebrow. In the next higher layer it might notice a face, and in a still higher one, it might compare that face to a database of faces to identify a person. Presto, and you have facial recognition, a longstanding bugbear of AI.
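That layered pipeline can be caricatured in a few lines. The functions below are stand-ins for learned network layers, purely to illustrate how each stage consumes the previous stage’s output; this is not a working recognizer:

```python
# Conceptual sketch of a deep-learning hierarchy: each layer turns the
# previous layer's output into a more abstract representation.

def detect_edges(pixels):
    return f"edges({pixels})"

def build_features(edges):   # combines edges into noses, eyebrows, ...
    return f"features({edges})"

def find_face(features):     # combines features into a candidate face
    return f"face({features})"

def identify(face):          # matches the face against a database
    return f"identity({face})"

layers = [detect_edges, build_features, find_face, identify]

representation = "raw_pixels"
for layer in layers:         # data flows up through the hierarchy
    representation = layer(representation)

print(representation)
# → identity(face(features(edges(raw_pixels))))
```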

And, if you can recognize faces, why not do the same for cars, sign posts, roadsides and pedestrians? Google’s DeepMind, a pioneer in deep learning, did it for the infamously difficult Asian game of Go last year, when its AlphaGo program beat one of the best Go players in the world.

In Nvidia’s experimental self-driving car, dozens of cameras, microphones, speakers, and other sensors are strewn around the outside and also the inside. Reason: Until full autonomy is achieved, the person behind the wheel will still have to stay focused on the road, and the car will see to it that he is.

“The car itself will be an AI for driving, but it will also be an AI for codriving—the AI copilot,” Huang said. “We believe the AI is either driving you or looking out for you. When it is not driving you it is still completely engaged.”

In a video clip, the car warns the driver with a natural-language alert: “Careful, there is a motorcycle approaching the center lane,” it intones. And when the driver—an Nvidia employee named Janine—asks the car to take her home, it obeys her even when street noise interferes. That’s because it actually reads her lips, too (at least for a list of common phrases and sentences).

Huang cited work at Oxford and at Google’s DeepMind outfit showing that deep learning can read lips with 95 percent accuracy, which is much better than most human lip-readers. In November, Nvidia announced that it was working on a similar system.

It would seem that the Nvidia test car is the first machine to emulate the ploy portrayed in 2001: A Space Odyssey, in which the HAL 9000 AI read the lips of astronauts plotting to shut the machine down.

These efforts to supervise the driver so the driver can better supervise the car are directed at Level 3’s main problem: driver complacency. Many experts believe that this is what occurred with the driver of the Tesla Model S that crashed into a truck. Some reports say he failed to override the vehicle’s decision making because he was watching a video.

Last night, Huang also announced deals with other auto industry players. Nvidia is partnering with Japan’s Zenrin mapping company, as it has done with Europe’s TomTom and China’s Baidu. Its robocar computer will be manufactured by ZF, an auto supplier in Europe; commercial samples are already available. And it is also partnering with Bosch, the world’s largest auto supplier.

Besides these automotive initiatives, Nvidia also announced new directions in gaming and consumer electronics. In March, it will release a cloud-based version of its GeForce gaming platform on Facebook that will provide a for-fee service through the cloud to any PC loaded with the right client software. This required that latency, the delay in response from the cloud, be reduced to manageable proportions. Nvidia also announced a voice-controlled television system based on Google’s Android system.

The common link among these businesses is Nvidia’s prowess in graphics processing, which provides the computational muscle needed for deep learning. In fact, you might say that deep learning—and robocars—came along at just the right time for the company: It had built up stupendous processing power in the isolated hothouse of gaming and needed a new outlet for it. Artificial intelligence is that outlet.


Imagine New York City With 3,000 Taxis Instead of 13,000

Large-capacity ride-sharing services could replace 98 percent of taxi service in Manhattan, researchers report this week in Proceedings of the National Academy of Sciences.

“We could drastically reduce the number of vehicles” with a “minor impact to users,” says Javier Alonso-Mora, a computer scientist at Delft University of Technology in the Netherlands, who worked on the project while at the Massachusetts Institute of Technology.


California's Fight with Uber's Self-Driving Car

Update, 22 December: Following threats of legal action by California regulators, Uber has suspended its self-driving car tests in San Francisco.

Is there a difference between a self-driving car and an autonomous one? That’s the question at the heart of an ongoing dispute between Uber and California regulators.

On Wednesday, Uber started offering free rides in what it calls “self-driving” cars: Volvo XC90 crossovers heavily modified with lidars, radars, cameras and onboard computers. The cars can respond when summoned by the Uber app, pick up passengers, then automatically navigate and drive to their respective destinations. If all goes well, the human safety drivers sitting in the front seat of these cars don’t have to touch the controls at all.

If that sounds similar to the 130 cars from 20 firms such as Tesla, Google and Ford that California’s Department of Motor Vehicles (DMV) has already granted autonomous vehicle test licenses, you’re not alone. The DMV’s Deputy Director and Chief Counsel Brian Soublet wrote to Uber, saying, “It is illegal for the company to operate its self-driving vehicles on public roads until it receives an autonomous vehicle testing permit.”

California insists on permits because they ensure trained personnel are in charge of the vehicles, require a $5 million bond in case things should go wrong, and include a system of reporting accidents and failures of the autonomous technology.

But Uber’s position is that while its cars might be able to drive themselves, they are far from autonomous. “The rules apply to cars that can drive without someone controlling or monitoring them,” wrote Anthony Levandowski, head of Uber’s Advanced Technology Group, in a blog post. “For us, it’s still early days and our cars are not yet ready to drive without a person monitoring them.”

This is an argument that Levandowski has used before. In January, Levandowski left Google (where he had built the company’s first self-driving Prius) to found Otto, a self-driving truck start-up. In May, Otto demonstrated a self-driving semi on I-80 in Nevada—without obtaining an autonomous vehicle testing permit. When confronted by an angry Nevada DMV official, the company claimed that the truck, which had no one in the front seats at all, was not autonomous because it needed to be monitored by an engineer further back in the cab.

“It’s a perfectly plausible reading [of the law] to say it’s an advanced driver assistance system and it is not intended to replace the driver,” says Bryant Walker Smith, an associate professor at the University of South Carolina. “But if a jurisdiction had a very skeptical view of the technology, they could certainly interpret [the law] in a contrary way. And if you’re a company that’s already irritating the state, you’re asking for a more restrictive interpretation.”

Uber definitely has a reputation for being an irritant in California, where it has argued with authorities about criminal background checks for drivers, and whether drivers should be treated as contractors ineligible for employee benefits.

Levandowski’s approach succeeded in Nevada. Although the Nevada DMV considered Otto’s driverless truck stunt illegal, it did not publicize or penalize the infraction. In fact, when Otto was subsequently acquired by Uber, Nevada rubberstamped an application from the companies to set up the state’s first facility to certify other autonomous vehicles. In his blog post on Wednesday, Levandowski said that Nevada has “made clear that [it is] pro technology.”

California could prove a tougher nut to crack. In his letter to Uber, Brian Soublet wrote: “If Uber does not confirm immediately that it will stop its launch and seek a testing permit, DMV will initiate legal action, including, but not limited to, seeking injunctive relief.”

As of Friday, the California DMV had not received a response from Uber. If the state proceeds with its threat, it could apply for a restraining order or injunction to prevent Uber from operating its cars within the state. Uber’s self-driving cars have already been spotted running red lights in the city. Should an injury or fatality occur, Uber’s refusal to obey the DMV might have serious consequences.

“If there were a crash, it would change very dramatically,” says Bryant Walker Smith. “At that point, non-compliance could be considered evidence of doing something unreasonable.”


Google's Self-Driving Car Graduates to Stand-Alone Business Status

When you finish school, move out of your parents’ home, and generally grow up, you tell yourself that now, at last, you’re your own boss. And then an important bit of reality hits: You really, really need a job.

That’s the meaning of yesterday’s announcement that Google’s self-driving car project will be spun off as the firm Waymo. The world’s fanciest R&D project on wheels has to scare up some revenue now that parent Alphabet has shoved it from its innovation lab X, formerly known as Google X. (Google does like rebranding, doesn’t it?)


Who’s Liable for George Hotz’s Self-Driving Software?

You could download Comma.ai’s new open-source Python code from GitHub, grab the necessary hardware, and follow the company’s instructions to add semi-autonomous capabilities to specific Acura and Honda models (with more vehicles to follow). Comma.ai’s CEO George Hotz told IEEE Spectrum last week that Comma.ai’s code has safety features, but what would happen if there’s a bug and your car crashes into a building, another car, or a pedestrian? Self-driving cars are notoriously difficult to test for safety.

Hotz writes in an email: “It's not my code, I did not release it”—Comma.ai Inc. “released and maintains it.” Most legal experts who spoke with IEEE Spectrum—and Hotz himself—believe that if you use the company’s code and something goes wrong, then it isn’t liable for damages. You are.

But Consumer Watchdog advocate John Simpson doesn’t believe this is fair. He says Hotz “was somewhat responsible” for any damage that could occur. Although responsibility gets “murkier” as more developers modify the code, he says Hotz made it public, and should therefore be held liable as well as the user.

The controversy exists in part because autonomous driving legislation is just starting to take shape around the world. Automakers such as Tesla and Volvo have publicly declared that they will take responsibility for faults in their self-driving software. Comma.ai Inc’s release, however, puts the liability risk in an even grayer area because the code is open source.

There have been only a handful of open-source lawsuits worldwide—over, for example, open-source code improperly ending up inside commercial products. The European Commission proposed legislation to apply consumer protection rules to software in 2009, but today there are no EU-wide rules for faulty digital content. And in the United States, a series of court cases in the 1990s established that source code is speech protected under the First Amendment of the U.S. Constitution.

As of Monday, Hotz stood protected by a standard MIT open source license, which disclaims warranties and shifts legal responsibility to the downloader. Comma.ai Inc. includes the following disclaimer in a README document:

“THIS IS ALPHA QUALITY SOFTWARE FOR RESEARCH PURPOSES ONLY. THIS IS NOT A PRODUCT. YOU ARE RESPONSIBLE FOR COMPLYING WITH LOCAL LAWS AND REGULATIONS. NO WARRANTY EXPRESSED OR IMPLIED.”

Heather Meeker, a partner at O’Melveny & Myers LLP who specializes in IT and software issues, writes in an email that “Open source is, in the sense of warranties, a ‘buyer beware’ model.”

If Hotz sold a product that included open source software, or made guarantees to car manufacturers that it worked properly, then Hotz could be held liable for damages. Or if a car manufacturer used Hotz’s software, then the manufacturer could potentially be liable if the cause of damage was the software, she says. But otherwise, risk would shift to whoever uses the code.

Unless, of course, a court ruled that the software really is a product.

Larry Coben, a lawyer at Anapol Weiss who specializes in product liability cases, says “I don’t know” whether the software counts as a product. He gave two examples. A supplier who provides an airbag for a car could be found liable if the module doesn’t perform to expectations, or a person selling an alarm system could be liable if he or she installed it carelessly.

Meeker says it’s not possible to disclaim or contract away product liability for personal injury. For example, you can’t sell defective lightbulbs that explode and sign a contract that protects you from liability.

It also might not be enough to say Hotz’s code is free. She points out that you don’t have to receive money to be viewed as a “seller.” For example, if you give away lightbulbs on the street and they explode, killing people, you could be held liable.

Kit Walsh, an attorney at the Electronic Frontier Foundation, writes in an email that “code is like a set of instructions, and publishing instructions generally cannot be punished consistent with the First Amendment—even if those instructions involve something dangerous like making a weapon or to try to eat a spoonful of cinnamon that will probably wind up in your lungs.”

Lawrence Rosen, a lawyer at Rosenlaw who specializes in open source software and open standards, agrees. The code is “not a product,” he says.

Walsh writes that “Prosecutors and plaintiffs often urge courts to disregard traditional First Amendment protections in the case of software because it has a ‘functional’ aspect when it can be executed, making it easier for the instructions it describes to be followed”—but that’s an argument she doesn’t believe is meaningful. Especially in this case, where extra steps are needed to combine the software with hardware and manual intervention.

However, Walsh notes that if you intend to cause harm with your speech, then you can be held liable for damages. Rosen says inducing others to use the code improperly or unsafely, or failing to properly warn individuals, could put Hotz at risk.

Walsh says that it might be unlawful to drive an unsafe car, “actively conspire” to circumvent vehicle safety legislation, or sell defective car modification hardware. Or in some cases, it might be lawful to modify a vehicle and drive it on private roads, but not public roads.

Michael Overly, a partner at Foley & Lardner LLP who works on technology-related agreements such as software licenses, sees another way Hotz could possibly be at risk. He says that while the MIT license and disclaimers protect developers from being held liable by whoever downloads the software, they do not protect developers from lawsuits by third parties—people not bound by the terms of the agreement. Overly says that although Hotz has opened up a whole new area and open sourcing the code is a great idea for testing purposes, a third party who suffers an injury or property damage caused by a software malfunction could still sue for negligence—from anywhere in the world.

Overly says that in order to get full protection, Hotz would need an additional license or contract that warrants the user will use the software for its intended purpose (research), will not harm individuals or property, and will pay any financial damages if the developer is sued (additional indemnity). Overly says these extra terms would work in conjunction with an existing open source license.

And there might be other lawsuits on the horizon. Rosen expects laws to change and software installed in any moving vehicle to require certification by a federal agency.

Meeker writes that this “could change the answer [of who’s liable, but] I would expect those regulations, like the general principles of products liability, to apply to actual products rather than software.”

The U.S. National Highway Traffic Safety Administration did not respond to a request for comment.

“If I were in [Hotz’s] shoes, I would not take [the situation] lightly,” Rosen says. He says it costs about $250 nowadays to file a lawsuit and much, much more than $250 to retain an attorney to defend against one.

Hotz says he has spoken with lawyers and heard from NGOs, think tanks, and freedom-fighting organizations. “We can easily defend against any frivolous lawsuits,” he says.

This post was corrected on 14 December to clarify that the software was released by Comma.ai not George Hotz himself.


Velodyne Says It's Got a "Breakthrough" in Solid State Lidar Design

Lidar is a fantastic sensor for autonomous cars. Most companies developing self-driving vehicles seem to agree that the massive amount of long range, high accuracy data that you get from lidar is necessary, especially for complicated and variable urban environments. Really, the only reason why there's been so much focus on getting autonomy to work with just cameras and radar is that cameras and radar are cheap, and lidar is ridiculously expensive.

The reason lidar is expensive is that it generally involves a bunch of high-quality optical components (lasers, sensors, that sort of thing) all hooked up together inside of a housing that spins. Over the last year or two, however, we've seen some prototypes of solid-state lidars—lidar sensors that take all of that bulky and costly hardware and squish it down onto one or two chips that can be mass-produced cheaply. It's a technical challenge, but it's also an inevitability for lidar in general, which is why it's exciting to hear that Velodyne is working on solid-state lidar as well.


Audi Cars Now Talk to Stoplights In Vegas

Audi’s cars are now telling drivers how many seconds remain until the traffic light turns green. It's the first commercial offering of vehicle-to-infrastructure communication in the United States, Audi says.

Of course, nobody would pay much extra for an electronic gadget that just lowered your stoplight waiting anxiety. But this feature is just testing the waters; bigger applications are in view.

The cars—recently manufactured Audi A4 and Q7 models signed up for Audi’s connect PRIME service—communicate with the Las Vegas traffic management system via 4G LTE, the standard that mobile phones use. The countdown appears on the dashboard or head-up display, then shuts off a few seconds before the light changes (presumably to keep drivers from getting mesmerized). Audi manages the transfer of data with the help of its partner, Traffic Technology Services (TTS), of Beaverton, Ore.

The plan is to eventually give drivers the information they need to make fairly ambitious predictions, like choosing the right speed to go sailing through several green lights in a row. Or the system might bypass the driver and go straight to the engine’s “start-stop” system, shutting it down for a long count, then starting it up again seconds before getting a green light. 
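The “right speed” for sailing through a green is simple kinematics once signal timings are known. A hypothetical sketch (the helper name and logic are illustrative assumptions, not Audi’s or TTS’s actual algorithm):

```python
# Hypothetical green-wave helper: pick a speed that reaches the next light
# just as it turns green, never exceeding the posted limit. Signal timing
# is assumed to come from the traffic-management feed.

def green_wave_speed_kmh(distance_m: float, seconds_to_green: float,
                         limit_kmh: float = 50.0) -> float:
    """Speed (km/h) to arrive at the light as it turns green, capped at the limit."""
    if seconds_to_green <= 0:
        return limit_kmh          # already green: just drive at the limit
    speed = (distance_m / seconds_to_green) * 3.6  # m/s -> km/h
    return min(speed, limit_kmh)

# 300 m from a light that turns green in 30 s: 10 m/s, or 36 km/h
print(f"{green_wave_speed_kmh(300, 30):.1f} km/h")  # → 36.0 km/h
```

A production system would also have to account for traffic ahead, minimum legal speeds, and several lights at once.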

Cities should be able to use the tool to fine-tune their traffic management, say, by metering access to a road to head off congestion before it can take shape. Nevada’s Regional Transportation Commission has declared an interest in such features, as have authorities in other cities. In a pilot test in Pittsburgh, the startup Surtrac has shown that an intelligent traffic management system can reduce travel time by 25 percent and idling time by over 40 percent.

But, like a mobile phone, a networked vehicle is eminently hackable, and when this communicative capability becomes common in cars, there will be more than enough incentive for the bad guys to prey on them.


Cars That Think

IEEE Spectrum’s blog about the sensors, software, and systems that are making cars smarter, more entertaining, and ultimately, autonomous.
Contact us:  p.ross@ieee.org

Senior Editor
Philip E. Ross
New York City
Assistant Editor
Willie D. Jones
New York City
 
Senior Writer
Evan Ackerman
Berkeley, Calif.
Contributor
Lucas Laursen
Madrid
 
