Cars That Think

Abstract illustration of a car breaking into bits

Toyota Joins Coalition to Bring Blockchain Networks to Smart Cars

Today, if your car wants to talk to another car, a service provider, or just an infotainment source, it needs to go through a third-party network. That can cost you money in two ways: you pay the third party, and you forgo the chance of being paid for the use of your data.

Toyota means to solve both problems using blockchain technology, the networking method behind Bitcoin and other cryptocurrencies. Blockchain allows connected computers to share data and even software, and to do so securely. This matters for all sorts of computer-assisted driving, above all for self-driving technology.

“Hundreds of billions of miles of human driving data may be needed to develop safe and reliable autonomous vehicles,” said Chris Ballinger, director of mobility services and chief financial officer at Toyota Research Institute, in a statement. “Blockchains and distributed ledgers may enable pooling data from vehicle owners, fleet managers, and manufacturers to shorten the time for reaching this goal, thereby bringing forward the safety, efficiency and convenience benefits of autonomous driving technology.”

Distributed ledgers let many independent computers keep track of shared data. That way, the record can’t be tampered with even if one machine is compromised. Until now the technology has mostly handled financial transactions, but any sort of exchange or chitchat can be handled the same way.
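The tamper evidence comes from chaining cryptographic hashes: each record commits to everything recorded before it. Here’s a minimal, single-machine Python sketch of the idea (the data fields are made up, and real systems such as BigchainDB add replication and consensus across many computers):

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: dict) -> None:
    """Append a record that commits to the entire history before it."""
    prev = chain[-1]["hash"] if chain else "genesis"
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain: list) -> bool:
    """Any edit to an earlier block breaks every hash after it."""
    prev = "genesis"
    for block in chain:
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["prev_hash"] != prev or block["hash"] != block_hash(body):
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, {"vehicle": "VIN123", "miles": 42.0, "shared_with": "insurer"})
append_block(chain, {"vehicle": "VIN123", "miles": 55.5, "shared_with": "fleet"})
assert verify(chain)
chain[0]["data"]["miles"] = 1.0   # tamper with an old record...
assert not verify(chain)          # ...and verification fails
```

In a real deployment, many machines hold copies of the chain and must agree on each new block, so no single compromised computer can rewrite history.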

Car makers might use blockchain to restrict the sharing of data—say, from a robocar test fleet—to their partners and to road-safety regulators. Car owners might use it to charge for the data their cars churn out, or to trade a measure of privacy for a service: they could give an insurer moment-by-moment driving data in return for a lower premium—so long as they drive safely.

All car makers and suppliers are struggling to handle the gusher of data provided and demanded by cars, and the burden will grow once semi-autonomous cars hit the road. IEEE Spectrum spoke to Michael Tzamaloukas, vice president for autonomous drive and advanced driver assistance systems at Harman, a leading auto supplier (including to Toyota, although Harman is not now using blockchain tech). He said Harman is already getting a data deluge from its cloud-based Map Live Layers, which combines data from a car’s sensors with other data shunted from other cars via the cloud.

“More and more content is being generated, particularly by the premium cars, and in three to five years, you’ll be getting much more than you do from a smartphone,” says Tzamaloukas. “It will greatly benefit a number of safety apps—kind of like an invisible airbag. It will make your ride more comfortable, more informed; you’ll be able to use data from other cars to see beyond the corner.”

Toyota and its main partner, the MIT Media Lab, are recruiting companies to develop applications based on the blockchain system. Toyota listed, in its statement, Berlin-based BigchainDB, which is building the data exchange; Oaken Innovations, based in Dallas and Toronto, which is developing a mobility token to pay tolls and other fees; Commuterz, in Israel, which is working on a carpooling application; Gem, in Los Angeles, which is working with Toyota Insurance Management Solutions; and Japan’s Aioi Nissay Dowa Insurance Services.

Toyota’s initiative forms part of a broader coalition of groups from completely different industries that are using blockchain tech to modernize data sharing. The group, called the Enterprise Ethereum Alliance, includes Merck, State Street Corp., JPMorgan Chase, BP, Microsoft, IBM and ING.

The right front corner of a yellow garbage truck. A metallic lidar cylinder sticks out at the corner. A man in neon yellow stands in the background.

Volvo Takes the Right First Step in Autonomous Garbage Collection

In September 2015, Volvo announced that it was developing a robot designed to pick up trash bins and take them to a garbage truck. We were a little skeptical of that particular approach to the problem, but it's not the only angle Volvo is taking on autonomous refuse handling. Volvo Trucks has been testing a self-driving garbage truck in Sweden, designed to help humans do this dirty job more safely and more efficiently. It's not as slick as a team of little mobile robots that can pick up bins all by themselves, but it's a much more practical near-term step toward solving the larger problem.

Austin Russell in a car

22-Year-Old Lidar Whiz Claims Breakthrough

Lidarland is buzzing with cheap, solid-state devices that are supposedly going to shoulder aside the buckets you see revolving atop today’s experimental driverless cars. Quanergy started this solid-state patter, a score of other startups continued it, and now Velodyne, the inventor of those rooftop towers, is talking the talk, too.

Not Luminar. This company, which emerged from stealth mode earlier this month, is fielding a 5-kilogram box with a window through which you can make out not microscopic MEMS mirrors, but two honking, macroscopic mirrors, each as big as an eye. Their movement—part of a secret-sauce optical arrangement—steers a pencil of laser light around a scene so that a single receiver can measure the distance to every detail.

“There’s nothing wrong with moving parts,” says Luminar founder and CEO Austin Russell. “There are a lot of moving parts in a car, and they last for 100,000 miles or more.”

A key difference between Luminar and all the others is its reliance on homemade stuff rather than industry-standard parts. Most important is its use of indium gallium arsenide for the photodetector. This compound semiconductor is harder to manufacture, and thus more expensive, than silicon, but it can receive at a wavelength of 1550 nanometers, deep in the infrared part of the spectrum. Light at 1550 nm is absorbed in the eye’s cornea and lens before it can be focused onto the retina, which makes the wavelength much safer for human eyes than today’s standard, 905 nm. Luminar can thus pump out a beam with 40 times the power of rival sensors, increasing its resolution, particularly at 200 meters and beyond. That’s how far cars will have to see at highway speeds if they want to give themselves more than half a second to react to events.
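Some quick arithmetic shows what those numbers buy (a back-of-the-envelope sketch; the speeds here are illustrative assumptions, not Luminar’s figures):

```python
# Why 200 m of range matters, in rough numbers.
C = 299_792_458.0                     # speed of light, m/s

round_trip_us = 2 * 200 / C * 1e6     # pulse out to 200 m and back
print(f"pulse round trip at 200 m: {round_trip_us:.2f} microseconds")

# Two cars approaching head-on at ~75 mph (33.5 m/s) each:
closing = 2 * 33.5
print(f"time to close 200 m at {closing:.0f} m/s: {200 / closing:.1f} s")
```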

Russell’s a little unusual for a techie. He stands a head taller than anyone at IEEE Spectrum’s offices and, at 22, he is a true wunderkind. He dropped out of Stanford five years ago to take one of Peter Thiel’s anti-scholarships for budding businessmen; since then he’s raised US $36 million and hired 160 people, some of them in Palo Alto, the rest in Orlando, Fla.

Like every lidar salesman, he comes equipped with a laptop showing videos taken by his system, contrasted with others from unnamed competitors. But this comparison’s different, he says, because it shows you exactly what the lidar sees, before anyone’s had the chance to process it into a pretty picture.

And, judging by the video he shows us, it is far more detailed than another scene, which he says comes from a Velodyne Puck, a hockey-puck-shaped lidar that sells for US $8,000. The Luminar system shows cars and pedestrians well in front of the lidar. The Puck vision—unimproved by advanced processing, that is—is much less detailed.

“No other company has released actual raw data from their sensor,” he says. “Frankly there are a lot of slideshows in this space, not actual hardware.”

We take the gizmo into our little photo lab here and our photo editor snaps a few, always taking care not to look too deeply into the window shielding the lidar’s mirrored eyes. Trade secrets, all very hush-hush. One thing’s for sure: The elaborate optics don’t look cheap, and Russell admits it isn’t, not yet.

These machines are part of a total production run of just 100 units, enough for samples to auto companies, four of which, he says, are working with it as of now. He won’t say who, but Bloomberg quotes other sources as saying that BMW and General Motors “have dropped by” the California office.

The cost per unit will fall as production volume rises, he says. But he isn’t talking about $100 lidar anytime soon.

“Cost is not the most important issue; performance is,” he contends. “Getting an autonomous vehicle to work 99 percent of the time is not necessarily a difficult task; it’s the last percent, with all the possible ‘edge cases’ a driver can be presented with—a kid running out into the road, a tire rolling in front of the car.”

Tall though he is, Russell is himself a prime example of a hard-to-see target because his jeans are dark and his shirt is black. “Current lidar systems can’t see a black tire, or a person like me wearing black—Velodyne’s [Puck] wouldn’t see a 5- to 10-percent reflective object [30 meters away]. It’s the dark car problem—no one else talks about it!”

And, because the laser’s putting out 40 times more power, a dark object at distance will be that much easier to detect.
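That “40 times” claim translates into range and sensitivity through the inverse-square falloff of returning light. A first-order sketch (ignoring aperture, atmospheric loss, and detector noise, which all matter in practice):

```python
import math

# Simple lidar link budget: received power scales roughly as
#   P_rx ∝ P_tx * reflectivity / R**2
# so extra transmit power buys either range or sensitivity to dark targets.

power_gain = 40.0
print(f"same target, same signal level: range grows ~{math.sqrt(power_gain):.1f}x")
print(f"same range, same signal level: detectable reflectivity drops {power_gain:.0f}x")
```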

What’s more, only one laser and one receiver are needed, not an array of 16 or even 64—the count for Velodyne’s top-of-the-line, $50,000-plus rooftop model. As a result, the system knows the source of each returning photon and can thus avoid interference from oncoming lidars and from the sun itself. But, because it hasn’t got the 360-degree coverage of a rooftop tower, you’d need four units, one for each corner of the car.

That, together with the need to make the laser, receiver, and image processor at its own fab, in Florida, means the price will be high for a while. Russell’s argument is simple: good things cost money.

“The vast majority of companies in this space are integrating off-the-shelf components,” he says. “The same lasers, same receivers, same processors—and that’s why there have been no advances in lidar performance in a decade. Every couple of years a company says, ‘we have a new lidar sensor, half the size, half the price, and oh, by the way, half the performance.’ The performance of the most expensive ones has stayed the same for practically a decade; all the newer ones are orders of magnitude worse.”

Diagram of an autonomous car

Three Studies Show How Robocars Will Make for Safer, More Efficient Roads

Nailing down the safety benefits of robocar technology is harder than it looks. Of course, any system that keeps you from driving into a brick wall must be good for your health, but the problem is that not all drivers handle such safety features properly.

Three recent studies all have to do with this critical interaction between technology and people. The first, published in the most recent issue of IET Intelligent Transport Systems, shows that if every car came with both an emergency braking system and a pedestrian and cyclist detection system, the total number of traffic deaths among these “vulnerable road users” would fall by 7.5 percent. That decrease, say the authors of the study (who hail from Finland, the Netherlands, Austria, and Britain), “comes down to an estimate of around 1,900 fatalities saved per year” in the European Union.

That safety system combination yielded the best result of the 10 driver-assistance technologies the authors considered. But even so, it falls a percentage point or two below what you’d get if pedestrians and cyclists were unaware that the cars had the two safety features. It’s that comforting knowledge that a car can make up for their carelessness that tempts pedestrians to cross in front of such a car when they shouldn’t.

Safety engineers call such trading of safety for convenience “risk compensation.” They’ve been struggling against it for decades.

I was in Detroit back in the 1980s when Mercedes-Benz came to demonstrate its new antilock braking system (ABS) by driving a car over a half-soaped stretch of asphalt, with the left-hand wheels on the soapy side and the right-hand wheels on the dry side. When the driver stamped on the brake, the car stopped easily, without the slightest swerve or skid.

“When this comes out on a car I can afford, I’m buying it with my own money,” my newspaper bureau chief told me. “I’m not waiting for the safety guys to make it mandatory.”

Customers did pay willingly for ABS, in part because they were encouraged by auto insurers that offered discounts. The insurance companies then sat back and waited for the accident rate to fall. But it didn’t. The ABS instead encouraged drivers to tailgate a little more and take curves a little faster.

Still, if you really load a car up with safety tech, you can swamp the human proclivity to act like an idiot. The car will save us from ourselves, despite our own best efforts to hinder it.

Take electronic stability control (ESC), the next-gen elaboration of ABS. As its inventor, Anton van Zanten, told IEEE Spectrum last year, “ABS works only during panic braking, not during coasting or partial braking or free rolling, and traction control usually works only if you have full acceleration. But ESC controls car motion at any time.” 

ESC is the subject of another paper that appears in the same issue of the journal. Unlike most such analyses, this study, from Finland, doesn’t model future scenarios in which all cars carry ESC. It just estimates how many lives the technology actually saved in 2014. The conclusion: Thirty-seven lives were saved, equal to 16 percent of all road traffic deaths that year; and 747 injuries were prevented, equal to 11 percent of traffic injuries.

The third study, from researchers at the University of Illinois at Urbana-Champaign, concerns not safety but the efficient flow of traffic. Here, too, the point was to confirm, with hard evidence, something experts had long expected: that a self-driving car can head off the formation of so-called phantom traffic jams.

A phantom jam arises when human drivers tap the brake to open up a little space between themselves and the car in front of them. Their own braking then provokes a similar, but slightly delayed, response in the driver just behind them, which has the same effect on the next driver down the line. The resulting “traffic wave” strengthens as it propagates backward until at last it produces a jam.

“Our experiments on a circular track with more than 20 vehicles show that traffic waves emerge consistently, and that they can be dampened by controlling the velocity of a single vehicle in the flow,” say the authors.  “These experimental findings suggest a paradigm shift in traffic management: Flow control will be possible via a few mobile actuators (less than 5 percent) long before a majority of vehicles have autonomous capabilities.”
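For readers who want to see the mechanism, here is a toy simulation in the spirit of that experiment: Bando’s optimal-velocity car-following model on a ring, with one car’s speed capped as a crude stand-in for the study’s actual controller. The parameters are illustrative assumptions, not the researchers’ values:

```python
import numpy as np

N, L, DT = 22, 44.0, 0.05      # 22 cars on a ring, dimensionless units
A = 1.0                        # driver sensitivity (low enough for waves)

def v_opt(gap):
    """Desired speed vs. headway -- the classic optimal-velocity choice."""
    return np.tanh(gap - 2.0) + np.tanh(2.0)

def step(pos, vel, pinned=None, v_pin=0.0):
    gap = (np.roll(pos, -1) - pos) % L           # distance to the car ahead
    vel = vel + A * (v_opt(gap) - vel) * DT      # relax toward desired speed
    vel = np.maximum(vel, 0.0)
    if pinned is not None:
        # The "robocar" coasts steadily: it follows traffic but never
        # accelerates above the ring's equilibrium speed into a wave.
        vel[pinned] = min(v_pin, vel[pinned])
    return (pos + vel * DT) % L, vel

rng = np.random.default_rng(0)
pos = np.linspace(0, L, N, endpoint=False) + rng.uniform(-0.5, 0.5, N)
vel = np.full(N, v_opt(L / N))

for _ in range(40_000):        # stop-and-go waves emerge from the noise
    pos, vel = step(pos, vel)
print("speed spread, all human drivers:", round(float(vel.std()), 3))

for _ in range(40_000):        # one steady car (<5% of the fleet) can damp them
    pos, vel = step(pos, vel, pinned=0, v_pin=v_opt(L / N))
print("speed spread, one steady car:  ", round(float(vel.std()), 3))
```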

It’s a kinder, gentler version of a trick that state troopers used in the 1970s and 1980s, when motorists routinely flouted the national speed limit of 55 miles per hour. Two patrol cars would drive side by side to create a “rolling roadblock.” Here, though, the robocar isn’t forcing anyone to do anything—it’s just proceeding at a steady pace, braking precisely when it must.

Illustration of white self-driving cars in a parking lot

RethinkX: Self-Driving Electric Cars Will Dominate Roads by 2030

Within 13 years, self-driving cars will dominate the roads, representing some 95 percent of all car miles driven, according to a new study released this week. And while 40 percent of the cars in 2030 will still be of the old, internal-combustion variety, they'll represent just 5 percent of the consumer miles driven.

The report, “Rethinking Transportation 2020-2030: The Disruption of Transportation and the Collapse of the Internal-Combustion Vehicle and Oil Industries,” released today by San Francisco think tank RethinkX, departs from a number of forecasts made in the past few years, including those of Moody’s and IHS Automotive, which instead expect the transition to self-driving cars to take multiple decades.

The rapid, Facebook-like or smartphone-like adoption curve, the report says, will be driven largely by market forces. Self-driving electric car share plans, in which consumers “subscribe” to a self-driving service much like they subscribe to a cellphone plan today, will be cheaper and more convenient for many people than owning a vehicle. And as a result, the authors say, incumbent industries like oil, cars, insurance, and transportation will face a consumer mass migration away from their old-model products and services if they don’t start preparing for the disruption now.

Three road scenes with a bright green highlight surrounding other cars, lane markings, and the edges of the road

Nvidia Opens Up The "Black Box" of Its Robocar's Deep Neural Network

A deep neural network’s ability to teach itself is a strength, because the machine gets better with experience, and a weakness, because it’s got no code that an engineer can tweak. It’s a black box.

That’s why the creators of Google DeepMind’s AlphaGo couldn’t explain how it played the game of Go. All they could do was watch their brainchild rise from beginner status to defeat one of the best players in the world.

Such opacity’s okay in a game-playing machine but not in a self-driving car. If a robocar makes a mistake, engineers must be able to look under the hood, find the flaw and fix it so that the car never makes the same mistake again. One way to do this is through simulations that first show the AI one feature and then show it another, thus discovering which things affect decision making. 

Nvidia, a supplier of an automotive AI chipset, now says it has found a simpler way of instilling transparency. “While the technology lets us build systems that learn to do things we can’t manually program, we can still explain how the systems make decisions,” wrote Danny Shapiro, Nvidia’s head of automotive, in a blog post.

And, because the work is done right inside the layers of processing arrays that make up a neural network, the results can be displayed in real time, as a “visualization mask” superimposed on the image coming straight from the car’s forward-looking camera. So far, the results cover how the machine turns the steering wheel to keep the car within its lane.

The method works by taking the analytical output from a high layer in the network—one that has already extracted important features from the image fed in by a camera. It superimposes that output onto the layer below, averages the result, then superimposes that on a still lower layer, and so on, until it reaches the original camera image.
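The procedure matches what Nvidia has published as “VisualBackProp.” Here is a rough PyTorch sketch of that averaging-and-upsampling loop; bilinear interpolation stands in for the paper’s deconvolution layers, and the tensor shapes and final normalization are my assumptions, not Nvidia’s production code:

```python
import torch
import torch.nn.functional as F

def visualization_mask(feature_maps):
    """Build a saliency mask from a conv net's intermediate activations.

    feature_maps: list of tensors shaped [B, C_i, H_i, W_i], ordered from
    the layer nearest the camera image to the deepest layer.
    """
    # Average each layer's activation maps across channels.
    averaged = [fm.mean(dim=1, keepdim=True) for fm in feature_maps]
    mask = averaged[-1]                          # start at the deepest layer
    for lower in reversed(averaged[:-1]):
        # Upsample the running mask to the lower layer's resolution
        # (the paper uses deconvolution; bilinear is a stand-in here)...
        mask = F.interpolate(mask, size=lower.shape[-2:],
                             mode="bilinear", align_corners=False)
        # ...and gate it with that layer's averaged activations.
        mask = mask * lower
    # Scale to [0, 1] so it can be overlaid on the camera image.
    mask = mask - mask.amin(dim=(-2, -1), keepdim=True)
    return mask / (mask.amax(dim=(-2, -1), keepdim=True) + 1e-8)

# Example with dummy activations from a three-layer conv net:
fmaps = [torch.rand(1, 24, 66, 200),
         torch.rand(1, 36, 31, 98),
         torch.rand(1, 48, 14, 47)]
print(visualization_mask(fmaps).shape)   # -> torch.Size([1, 1, 66, 200])
```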

The result is a camera image on which the AI’s opinion of what’s significant is highlighted. And, in fact, those parts turn out to be just what a human driver would consider significant—lane markings, road edges, parked vehicles, hedges alongside the route, and so forth. But, just to make sure that these features really were key to decision making, the researchers sorted all the pixels into two classes: Class 1 contains “salient” features that clearly have to do with driving decisions; Class 2 contains non-salient features, typically in the background. The researchers then shifted the two classes of pixels digitally and found that only the salient features mattered.

“Shifting the salient objects results in a linear change in steering angle that is nearly as large as that which occurs when we shift the entire image,” the researchers write in a white paper. “Shifting just the background pixels has a much smaller effect on the steering angle.”

True, engineers can’t reach into the system to fix a “bug,” because deep neural nets, lacking code, can’t properly be said to have a bug. What they have are features. And now we can visualize them, at least up to a point.

A dark-haired man in profile

Waymo vs. Uber: 8 Things I Learned From Anthony Levandowski Taking the Fifth

In February, Google’s self-driving car spin-out Waymo accused Anthony Levandowski of stealing 14,000 confidential files about the laser-ranging lidars developed while he was working there and taking them to Uber. On Friday, 14 April, the engineer sat down in the San Francisco office of Waymo’s lawyers to face six hours of hard questioning.

When asked what his current responsibilities were at Uber, Levandowski took the Fifth, citing his right under the U.S. Constitution’s Fifth Amendment not to answer questions that might incriminate him. He pleaded it again to questions about whether he stole the files, and again when asked if he subsequently used the files to build lidars for Uber. In fact, he took the Fifth over 400 times in the course of the day.

The transcript of the deposition, released on Friday, is predictably repetitious. Despite that, it is one of the most illuminating documents to emerge from the lawsuit so far, revealing Google’s early suspicions of Levandowski, details about key suppliers, previously secret code names, and technical details of the lidars in question. Here’s what I learned, and how:


1. Questions can be just as informative as answers

Although Levandowski’s answers were identical, I learned a lot from Waymo’s questions. It seems Waymo now thinks that Levandowski was deceiving Google almost from the moment it hired him to work on the Street View maps project back in 2007. Google first had concerns when it found out that Levandowski was working with his own startups, 510 Systems and Anthony’s Robots, to build a self-driving car, as first revealed in IEEE Spectrum.

“When Google discovered that you were involved in 510 Systems and Anthony’s Robots, it was concerned about potential conflicts,” said Waymo’s lawyer, David Perlson. “You used confidential information from Google to help develop technology at 510 Systems; correct?” He went on to accuse Levandowski of using Street View code to calibrate 510’s Velodyne lidar, and of using it in the startup’s self-driving car technology.


2. Levandowski names his lidars after mountains

Perlson said that the lidar Levandowski built at 510 Systems was called Little Bear, after a mountain in Colorado. “The Fuji system at Uber is named after Mount Fuji,” he went on. “And the reason that the Fuji system at Uber is named after Mount Fuji is that it is derived from Google technology that was also code-named with names of mountains.” Perlson revealed Google’s lidars are all named Grizzly Bear, probably after Grizzly Peak in Berkeley, where 510 Systems was based. The latest Waymo lidar is called Grizzly Bear 3 or GBR3.


3. The side hustles kept coming

Perlson accused Levandowski of controlling a company called Dogwood Leasing that hired ex-Google contractor and 510 Systems engineer Asheem Linaval to use Google’s secrets to develop self-driving car technology. Linaval was eventually hired by Levandowski’s autonomous truck startup, Otto, which Uber bought in 2016.

Waymo also accused Levandowski of founding yet another startup, Odin Wave, and feeding it confidential lidar technology. In 2013, one of Google’s suppliers called Google because it had received an order from Odin Wave that was similar to parts used by the technology giant. Perlson accused Levandowski of then moving Odin Wave and renaming the company Tyto, to hide his involvement.


4. Be careful what you file

The 184-page transcript of Levandowski’s deposition was heavily redacted by Waymo’s and Uber’s lawyers to obscure commercially sensitive information. But unique among the court’s filings, the transcript included an index that had been generated before the redactions were made. By cross-referencing the redactions with the index, I was able to uncover almost every word that the companies did not want made public.

For instance, I discovered that the company supplying both Google and Odin Wave with lidar components was OMW Corporation, a high-tech contract manufacturing business based in the Bay Area.


5. A shared supplier ecosystem

Perlson then asked Levandowski: “Did Otto select third-party vendors because of their knowledge of Google’s confidential technology?” Uncovering the redactions reveals that Perlson asked whether Otto had ever used three particular optical component companies while developing its laser technology. It seems likely that these are key suppliers for Waymo’s lidar.

The issue came up again in reference to Uber. In December 2015, another supplier, which redactions reveal to be Gorilla Circuits, accidentally sent an email intended for Uber engineers to a Waymo employee. The email included a circuit design for a lidar that Waymo believes replicates its secret technology.


6. An Owl and a Spider lidar

“You used confidential information from the 14,000 files you took from Google to develop lidar at Uber; correct?” Perlson asked Levandowski during the deposition. He went on to name three specific devices: the Fuji lidar that Uber is working on now; the Owl lidar built by Tyto; and, redactions reveal, a previously unknown lidar called Spider.

Spider is the pre-Fuji design that Waymo recently accused Uber of hiding, and whose existence first emerged from a filing Uber made with Nevada regulators in November. Waymo believes Spider to be essentially a copy of its Grizzly Bear 3 lidar.


7. Technical similarities

The redactions also show the specific technical details in Uber’s Fuji lidar design that Waymo thinks it can trace back to its own patents and trade secrets.

Waymo sees similarities in the spacing and placement of diodes on the Fuji’s printed circuit boards (PCBs), including having tiny diodes hanging on the edge of the PCB. It also suspects that a cylindrical lens assembly and guide holes for pins are derived from confidential information that Levandowski took from Google.

In addition, Perlson asked Levandowski about Uber’s progress on ion-doped laser fibers for lidars, which both companies are working on.


8. Levandowski’s early life

Finally, Perlson let Uber’s lawyers ask Levandowski a few questions that he was prepared to answer, mostly about his life and work outside Google and Uber. His responses show that Levandowski’s entrepreneurship started at an early age, selling candy to fellow students in high school, then building a website to navigate photos of the school buildings.

He made his first self-driving vehicle using Java and Lego Mindstorms at UC Berkeley, before building a (partly) self-driving motorcycle for DARPA’s Grand Challenge competition in 2004. He also claimed to have worked on Velodyne’s first commercial lidar. “After the [second Grand Challenge], I helped them get their unit from the early, early prototype that just outputted video data… into outputting Ethernet packets so that others could use it,” said Levandowski.


A judge ruled Tuesday that Levandowski’s Fifth Amendment rights did not extend to documents that Uber controls.

The next milestone in the epic lawsuit is a hearing on 3 May to decide Waymo’s request for a preliminary injunction to halt Uber’s self-driving car efforts. That could be all the more critical for Uber, given that Waymo just announced that it would expand its trial of autonomous minivans in Arizona to over 500 vehicles.

A Waymo robotaxi in Phoenix

Waymo Offers Robocar Rides to the Public

For a month now, Waymo has been offering rides in its robocars to select customers in Phoenix, Ariz. To help expand the ridership, the company is adding 500 vehicles to its fleet of self-driving Chrysler Pacifica minivans, a sixfold increase.

A customer hails a ride through a phone app; when the car arrives, a professional driver is behind the wheel, ready to take over should the software get flummoxed. Waymo’s record for such human interventions is far and away the best in the business, at roughly 1 instance for every 8,000 kilometers traveled.

You can apply for the free service if you live in certain parts of the Phoenix area and are at least 18 years old. The robotaxi will take you only on trips within the service area, which Waymo says covers twice the area of San Francisco.

Waymo chief executive John Krafcik said in a blog post that the ride-hailing experiment represents a shift in emphasis away from the purely technical side of the robocar problem. “Now, with this program, we’re turning our attention to the people who will benefit from this technology,” he wrote.

The move follows similar, but smaller, pilot programs begun by Uber, in Pittsburgh, and nuTonomy, in Singapore. Both of those companies plan to expand their services to other areas.

Offering the service for free certainly must improve customer satisfaction, as is evident in the smiles on the faces of the family depicted in Waymo’s video. It serves to remind us that all these pilot programs are meant not only to provide feedback but also to promote the companies’ brands. Take a look:

On a pink background, an illustration of a car atop the Apple logo, with semicircular waves propagating from both sides of the car.

What Apple’s Filings Say About Its Self-Driving Car Program

Apple’s self-driving car effort appears not to be quite as far down the road as we had thought, judging by documents released today by the California Department of Motor Vehicles (DMV).

The documents, obtained by IEEE Spectrum under a public records request, contain Apple’s full application for an autonomous vehicle testing permit that the DMV granted last week. This includes details of the three Lexus RX450h hybrid SUVs that Apple will be using for on-road tests, the company’s training plans, and the identities of some of the people working on the project.

Velodyne photo illustration

Velodyne Announces a Solid-State Lidar

Velodyne today announced a solid-state automotive lidar ranging system that the company will demonstrate in a few months, release in test kits later this year, and mass produce at its new megafactory in San Jose, Calif., in 2018. The estimated price per unit is in the hundreds of dollars.

The company hopes to nail down the dominance it has enjoyed ever since pioneering automotive lidar a dozen years ago. That’s when its revolving arrays of lasers and optical sensors first appeared in rooftop towers on experimental self-driving vehicles, notably the Google car. Such arrays give 360-degree coverage and can see in detail what radar can only make out dimly, but they aren’t easy to build and calibrate in small, partially handmade batches. This is why their price still rises into the tens of thousands of dollars.

A solid-state version could be turned out in quantity for much less money, and in a smaller and more rugged package. That’s important for use in a production car meant to operate for years on end.

The Velodyne package measures 125 millimeters by 50 mm by 55 mm (about 5 by 2 by 2 inches)—small enough to be embedded into the front, sides, and corners of vehicles. Such a setup can give theater-in-the-round coverage even though each device covers only 120 degrees horizontally. They also span 35 degrees vertically, which comes in handy when climbing hills.

“I don’t necessarily believe that [the solid-state lidar] will obviate or replace the 360-degree units—it will be a complement,” Marty Neese, chief operating officer of Velodyne, told IEEE Spectrum earlier this month. “There’s a lot of learning yet to go by carmakers to incorporate lidar in a thoughtful way.”

Velodyne hopes to win the new phase of this game by being first to market. “The first mover sets the standard,” Neese said. “Software is 60 percent of the effort, so if you show up [later] with a new piece of hardware, it may not fit in.”

In lidar land, as on the larger self-driving continent, ambitious timetables are the rule. Velodyne’s rivals have cited even faster schedules. For instance, the startup Innoviz, in Israel, has promised to market a US $100 solid-state lidar by 2018.

Quanergy, one of the first solid-state lidar startups, perhaps poses the most immediate challenge to Velodyne. Unlike Velodyne, Quanergy has a manufacturing partner—Sensata Technologies—with plenty of experience in the automotive sector.

“This year is when we will be providing samples to the broader market—an announcement will be coming soon,” said Greg Noelte, Sensata’s global director for business development, in an interview earlier this month. “We’re targeting 2020–2022 for high-volume production.”

Noelte said that the key to winning the game would be in getting large numbers of units that meet tough automotive specifications and that Sensata’s many decades of experience in manufacturing gave it an edge. “It’s one thing to produce a few prototypes, another to produce one that can last 100,000 to 150,000 miles reliably,” he said. “Our specialty is in making it work, in process development.”

But Neese says lidar isn’t easy, and that Velodyne’s experience with the persnickety technology is what really matters.

“Don’t think of one guy at a bench [building a lidar set], think of a supply chain,” he says. “Imagine circuit board assemblies coming from completely automated factories, and laser boards, top and bottom assembly boards, all of it made in very capable factories with state-of-the-art automation. Lasers are similarly done in a very high-tech lab; it’s not trivial to make lasers.”
