Audi’s cars are now telling drivers how many seconds remain until the traffic light turns green. It's the first commercial offering of vehicle-to-infrastructure communication in the United States, Audi says.
Of course, nobody would pay much extra for an electronic gadget that just lowered your stoplight waiting anxiety. But this feature is just testing the waters; bigger applications are in view.
The cars—recently manufactured Audi A4 and Q7 models signed onto Audi’s prime connection service—communicate with the Las Vegas traffic management system via 4G LTE, the standard mobile phones use. The countdown appears on the dashboard or heads-up display, then shuts off a few seconds before the light changes (presumably to keep drivers from getting mesmerized). Audi manages the transfer of data with the help of its partner, Traffic Technology Services (TTS), of Beaverton, Ore.
The plan is to eventually give drivers the information they need to make fairly ambitious predictions, like choosing the right speed to go sailing through several green lights in a row. Or the system might bypass the driver and go straight to the engine’s “start-stop” system, shutting it down for a long count, then starting it up again seconds before getting a green light.
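The arithmetic behind such a feature is simple enough to sketch. The function below is a hypothetical illustration of the speed-recommendation idea, not Audi's actual algorithm; the speed limits and thresholds are invented for the example.

```python
# Hypothetical "green wave" helper: pick a cruising speed that arrives at
# the next signal just as it turns green. Illustrative only -- the limits
# and behavior here are assumptions, not Audi's implementation.

def green_wave_speed(distance_m, seconds_to_green,
                     speed_limit_kmh=50.0, min_speed_kmh=20.0):
    """Return a target speed in km/h, or None if no comfortable speed works."""
    if seconds_to_green <= 0:
        return speed_limit_kmh  # light is already green; just cruise
    required_kmh = (distance_m / seconds_to_green) * 3.6
    if min_speed_kmh <= required_kmh <= speed_limit_kmh:
        return round(required_kmh, 1)
    return None  # would require speeding or crawling; leave it to the driver

print(green_wave_speed(250, 20))  # 250 m to a light green in 20 s -> 45.0
```

The same calculation, run against a row of upcoming signals, is what would let a car thread several greens in succession.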
Cities should be able to use the tool to fine-tune their traffic management, say by metering access to a road to head off congestion before it can take shape. Nevada’s Regional Transportation Commission has declared an interest in such features, as have other cities. In a pilot test in Pittsburgh, the Surtrac intelligent traffic-signal system has been shown to reduce travel time by 25 percent and idling time by over 40 percent.
But, like a mobile phone, a networked vehicle is eminently hackable, and when this communicative capability becomes common in cars, there will be more than enough incentive for the bad guys to prey on them.
George Hotz, at 27 a past master of hacking, has outdone himself by giving away software that he says turns a modestly smart car into one that can more or less drive itself. He contends that it outperforms any production car except a Tesla—at a cost, in parts, of just US $700.
Hotz originally planned to sell an aftermarket system for a bit more than that and make a profit. However, that strategy foundered when the National Highway Traffic Safety Administration started breathing down his neck. So he went open source, reasoning that NHTSA can recall only a product that’s sold, not one that’s given away. Anyone can download the code from his startup, comma.ai.
IEEE Spectrum talked to Hotz last week about the open-source product, called Comma Neo. (This interview has been edited and condensed for clarity.)
Lidar has been the best thing to happen to self-driving cars—and also the worst. Installing a bank of lasers on the roof means a car can capture millions of points of information every second, rapidly building up a 3D image of the world around it.
The problem is that until recently, lidar units were more expensive than the cars that carried them. High-performance units on Google’s early vehicles cost $70,000; devices with shorter range and a narrower field of view, from lidar pioneer Velodyne, can still cost thousands.
There is always the option of going without lidar altogether, as Tesla has done. Its cars rely on cheaper radar, video, and ultrasonic sensors. But the fatal crash of a Model S this summer while its Autopilot system was engaged shows that this solution is far from ideal.
What autonomous car makers really want is a dirt cheap and utterly reliable sensor that complements radar and video cameras. And Israeli start-up Oryx Vision thinks it might have just what they’re looking for.
Oryx’s technology, coherent optical radar, splits the difference between radar and lidar. Like a lidar, it uses a laser to illuminate the road ahead, but like a radar it treats the reflected signal as a wave rather than a particle.
The laser in question is a long-wave infrared laser, also called a terahertz laser because of the frequency at which it operates. Because human eyes cannot focus light at this frequency, Oryx can use higher power levels than can lidar designers. Long-wave infrared light is also poorly absorbed by water and represents only a tiny fraction of the solar radiation that reaches Earth. This means the system should not be blinded by fog or direct sunlight, as lidar systems and cameras can be.
One of the potential cost savings of Oryx’s technology comes from the fact that its laser does not need to be steered with mechanical mirrors or a phased array in order to capture a scene. Simple optics spread the laser beam to illuminate a wide swathe in front of the vehicle. (Oryx would not say what the system’s field of view will be, but if it is not 360 degrees like rooftop lidars, a car will need multiple Oryx units facing in different directions.)
The clever bit—and what has prevented anyone from building a terahertz lidar before now—is what happens when the light bounces back to the sensor. A second set of optics directs the incoming light onto a large number of microscopic rectifying nanoantennas. These are what Oryx’s co-founder, David Ben-Bassat, has spent the past six years developing.
Incoming light creates an AC response in the antenna that is rectified—in other words, converted into a DC signal. Rani Wellingstein, Oryx’s other founder, says the system has a million times the sensitivity of traditional lidar. Because the antennas treat incoming light as a wave, they can also detect Doppler shift—the change in frequency due to the relative motion of whatever it bounced off—and thus determine the velocities of other objects in or near the roadway.
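The Doppler arithmetic itself is a one-liner. The sketch below assumes a 10-micrometer wavelength, which is typical of long-wave infrared; Oryx has not published its exact operating wavelength.

```python
# Doppler relations for a coherent sensor. The wavelength is an assumed
# long-wave infrared value, not a figure disclosed by Oryx.

WAVELENGTH_M = 10e-6  # assumed: 10 micrometers

def doppler_shift(velocity_ms):
    """Frequency shift (Hz) from a target closing at velocity_ms (round trip)."""
    return 2 * velocity_ms / WAVELENGTH_M

def radial_velocity(shift_hz):
    """Inverse: relative speed (m/s) recovered from a measured shift."""
    return shift_hz * WAVELENGTH_M / 2

# A car closing at 30 m/s (108 km/h) shifts the return by about 6 MHz:
print(doppler_shift(30.0))
```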
Each nanoantenna is just five square micrometers; they will ultimately be fabricated directly onto integrated circuits using a thin-film chip manufacturing process. This will make it fairly simple for the signals to be fed into a machine learning system that can classify objects in the scene.
While Oryx has already built many millions of experimental nanoantennas, it has yet to fabricate them into an integrated circuit. Within a year, it intends to build a 300-pixel demonstration unit, then a 10,000-pixel chip containing a quarter of a million nanoantennas, and finally a 100,000-pixel, multi-million-nanoantenna device suitable for in-car use.
“Today, radars can see to 150 or 200 meters, but they don’t have enough resolution. Lidar provides great resolution but is limited in range to about 60 meters, and to as little as 30 meters in direct sunlight,” says Wellingstein. He expects Oryx’s coherent optical radar to accurately locate debris in the road at 60 meters, pedestrians at 100 meters, and motorcycles at 150 meters—significantly better than the performance of today’s sensor systems.
The Oryx setup could be cheap, too. If the company can get its (still unproven) fabrication process to scale, it thinks making millions of nanoantennas will be little harder than producing conventional semiconductor chips. The company recently announced a $17 million Series A funding round and has already been talking with autonomous vehicle companies, including NuTonomy, which recently launched a pilot self-driving taxi service in Singapore.
“The idea seems reasonable, but I'd be skeptical until they have a working prototype,” says Joe Funke, an engineer who has worked on autonomous systems at several vehicle start-ups, including Lucid Motors. “There will always be use cases for cheaper, high range sensors,” says Funke. “If nothing else, longer range enables higher operating speeds.”
But don’t write off lidar just yet. Following a $150 million investment by Ford and Chinese search giant Baidu this summer, Velodyne expects an “exponential increase in lidar sensor deployments.” The cost of the technology is still dropping, with Velodyne estimating that one of its newer small-form lidars will cost just $500 when made at scale.
Solid-state lidars are also on the horizon, with manufacturers including Quanergy, Innoluce, and another Israeli start-up, Innoviz, hinting at sub-$100 devices. Osram recently said that it could have a solid-state automotive lidar on the market as soon as 2018, with an eventual target price of under $50. Its credit card–size lidar may be able to detect pedestrians at distances up to 70 meters.
That’s not quite as good as Oryx promises, but is probably fine for all but the very fastest highway driving. The window for disrupting lidar’s grip on autonomous vehicles is closing fast.
This story was corrected to give proper details of Oryx nanoantenna integration plans.
For those of us more interested in the far future than the near future or present (both of which tend to be comparatively dull), the best part of the Los Angeles Auto Show is the annual design challenge. There, we see the creations of curiously matched teams of designers that have a year to develop a concept that addresses nebulous questions about cars and the future. If you've ever wondered what Crayola, Fandango, or Lego think the car of 2050 might look like, you’re about to find out.
A consortium of 15 Japanese automakers and manufacturers that make components and systems for cars—including Toyota, Honda and Nissan, as well as Mitsubishi Electric, map makers, and others—has come together to create detailed, high-definition 3D maps to help usher in safe autonomous driving. Japan’s government is backing the project as part of its effort to have driverless vehicles on the road in time for the Tokyo Olympics in 2020. The 2020 target date has helped focus the country’s robocar efforts, lest Japan fall behind similar programs underway in the United States and Europe.
Mitsubishi Electric is leading the project—dubbed Dynamic Map Planning—and is providing a new, compact version of its vehicle-mounted mobile mapping system. Mitsubishi began marketing a version of the system, the MMS-G220, overseas in October, and will introduce the commercial version domestically sometime in 2017. The tedious task of mapping Japan’s 30,000 kilometers of expressways—the plan’s first priority—is now underway.
The mobile mapping system (MMS) can be configured to take advantage of various combinations of lidar, cameras, and other sensors, along with a GPS antenna, depending on the application. The devices are assembled to form a single detachable unit designed for easy maintenance. The system, which can be mounted on even a compact car’s roof, draws power from the car’s cigarette lighter socket.
As the vehicle cruises at speeds of around 40 km an hour, the system uses a laser-scanning point cloud technique to gather 3D positioning data of roadside features such as traffic signals, road signage, and lane markings. It can capture objects up to 7 meters away with an absolute accuracy of 10 centimeters, according to Mitsubishi.
A point cloud is a collection of data points formed in space, the position of each point being identified by its X, Y, and Z coordinates. When light emitted by a laser scanner is reflected back from an object or surface, that information is recorded as a data point. Point cloud data alone would not be sufficient to identify objects clearly, so in post-processing, it is superimposed on synchronized camera images taken at the same time. This information-rich combination is then processed to create 3D maps. Color can also be added at this time.
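In code, each laser return becomes a point via a spherical-to-Cartesian conversion. The sketch below is a generic illustration of that step; the angle conventions are assumptions, not Mitsubishi's.

```python
# Generic range-plus-angles to XYZ conversion, the core of building a
# point cloud. The angle conventions here are illustrative assumptions.
import math

def to_point(range_m, azimuth_deg, elevation_deg):
    """One laser return (range and beam angles) -> an (x, y, z) data point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left
    z = range_m * math.sin(el)                 # up
    return (x, y, z)

print(to_point(5, 0, 0))  # a return 5 m dead ahead -> (5.0, 0.0, 0.0)
```

In post-processing, each such point would additionally be tagged with the color of the synchronized camera pixel it projects onto.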
With standard laser equipment, the Mitsubishi system collects 27,100 data points a second. With optional high-performance laser scanners, that number is raised to one million points a second. The mapping system can be equipped with long-range, high-density laser scanners that provide detailed images of cityscapes or roadside buildings.
To keep track of where these objects are in space, the system relies on GPS, an inertial measurement unit, and a wheel-mounted odometer to help calculate the position of the vehicle. For even greater accuracy, the mobile mapping system will also make use of the nascent Quasi-Zenith Satellite System, a Japan-centered commercial satellite system that aims to provide centimeter-scale positioning to augment the U.S.-operated GPS service. This is due to go into full operation in 2018.
Shun Kuriaki, manager of Mitsubishi Electric’s IT Solution Department, in its Electronic Systems Group, says that to improve the safety of autonomous driving, more detailed information than is currently supplied by car navigation systems is required. In bad weather, for instance, the effectiveness of various sensors needed to maintain control of the driving task can be diminished to the point where they’re inoperable.
“The MMS 3D maps will provide such additional information as noise barriers, lane divisions and their widths and surface conditions, as well as the location of traffic lights, road signs and other useful information to help improve the safety of autonomous driving,” says Kuriaki.
The system, which is gathering the myriad bits of information needed to subsequently allow vehicles to traverse Japan’s roads without human intervention, is designed to be operated by a person with a notebook PC in the passenger seat. According to Mitsubishi, no specialist knowledge is required to operate the system or to run the post-processing software after the data is collected.
Autonomous driving is just one of several applications for which Mitsubishi is seeking to use its MMS system. “Some special specification versions of our MMS have already been applied to inspection of tunnel linings and road surface conditions,” says Kuriaki. “And we are also studying how to apply the technology to other fields, such as inspection of railway tracks and underground areas.”
With Mitsubishi ready to export its road-scanning technology, it can expect to compete with Google in the United States, and with several companies in Europe.
Bostonians will get the chance to hail a self-driving cab by the end of this year, roboride startup NuTonomy announced today. It’s a continuation of a testing program the company began some months ago in Singapore.
Just one car, a modified Renault ZOE, will ply the paths of the Raymond L. Flynn Marine Park, a clearly delineated precinct, much like the high-tech section in Singapore the company’s existing robotaxis serve. In Singapore, the half-dozen or so cars—Renaults, with Volvos coming soon—started taking a select group of customers for rides in August, with one engineer sitting behind the wheel and another riding shotgun. NuTonomy plans to open the service to all customers next year and eventually to expand it to other parts of the city-state.
Boston is the logical next step for the company, which is a spinoff of research done at the Massachusetts Institute of Technology, in nearby Cambridge. Other cities are expected to follow next year as well.
In an interview back in April, NuTonomy’s CEO Karl Iagnemma told IEEE Spectrum that the company’s robodriving system relied on a system of logical constraints that embody the rules of the road and the ways people drive. The logical structure is “verifiable, meaning that we're sure that the structures that come out of these rules exactly represent and adhere to the rules that we define,” he said. That logical nature should also make it easier for engineers to tweak the system.
Such tweaking will come as the software masters local signage and road markings, the company says in its statement. The software will also try to figure out how pedestrians, cyclists and drivers behave differently from those in Singapore; such things can vary widely between cultures.
By 2019, electric vehicles must make noise to warn pedestrians of their approach, U.S. road safety regulators said this week. And the measure is grist for our mill here at “Cars That Think” because Tesla Motors appears to be developing a robotic solution to the problem.
The National Highway Traffic Safety Administration announced on Monday that noisemakers would be required in pure electric and hybrid vehicles operating at speeds under 30 kilometers per hour (19 mph). At higher speeds the vehicles evidently make enough noise on their own, thanks to resistance from wind and road.
The first clue to Tesla’s plans, Electrek reports, is the statement Tesla’s Elon Musk made at a press conference back in 2013. “I think the sensible and ideal thing long-term is to have proximity sensors that direct a pleasant sound in the direction of where somebody is walking,” Musk said. The second clue came earlier this year in the form of blueprints apparently leaked from Tesla. Clearly visible are structures labeled “pedestrian speakers.”
It’s unclear whether regulators will accept Musk’s proposal to beam noise with laser-like focus to spare the ears of the unthreatened. On the other hand, just broadcasting the noise takes away a key EV marketing advantage: silence. (Except for the motorcycle market. Some fans of Harley Davidson’s iconic bikes have disparaged that company’s planned electric version for its un-Harley-like purr.)
Many EV motorsports events already require the noisemakers to protect onlookers, photographers and pit crew. At this summer’s motor race at Pikes Peak, Colo., local stores sold modified car alarms to racing teams for just this purpose. The price: $8.
When cars can share data, maybe they’ll act in unison and drive themselves safely enough for us humans to sit back and daydream. But the car-to-car chat would have to occur at data transfer speeds a lot faster than those our pokey 4G cellular service can muster.
Today, the necessary 5G cellular technology was demonstrated for the first time at a BMW race track near Incheon, in South Korea. Two BMWs shared information with the human drivers; in a future, self-driving setup, such sharing of data might allow cars to coordinate actions almost instantaneously.
The purpose-built 5G network covered 240,000 square meters (59 acres, or about half the size of Vatican City), according to SK Telecom, the South Korean company that installed it along with Sweden’s Ericsson. The back-and-forth communication had less than a millisecond of latency, par for the course for a system with a peak transmission rate of 20 gigabits per second.
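What that millisecond buys is easiest to see as distance. The calculation below is back-of-the-envelope; the 50-millisecond figure for 4G round-trip latency is a commonly cited ballpark, not a measurement from this demonstration.

```python
# How far a car moves while a message is in flight. The 4G latency figure
# is an assumed ballpark, not something measured in the SK Telecom demo.

def distance_during_latency(speed_kmh, latency_s):
    """Meters traveled at speed_kmh while waiting latency_s seconds."""
    return (speed_kmh / 3.6) * latency_s

print(round(distance_during_latency(100, 0.001), 3))  # 5G, ~1 ms: 0.028 m
print(round(distance_during_latency(100, 0.050), 2))  # assumed 4G, ~50 ms: 1.39 m
```

At highway speed, the 5G car drifts about 3 centimeters before the message lands; over a typical 4G link it would cover more than a meter.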
Each car had a 5G station of its own, through which on-board cameras could upload ultrahigh-definition video for displaying to an audience. The cars were from the X5 and the 7 series (the latter the first production vehicle to park itself driverlessly, as IEEE Spectrum reported in April).
The coming of 5G is keeping the idea of cellular car-to-car connections alive. It may even end up driving a stake through the heart of alternative wireless schemes—notably dedicated short range communications, or DSRC, based on IEEE 802.11p. That’s the system upon which Europe’s Cooperative ITS Corridor, from Amsterdam to Vienna, is now being built.
A 5G connection will someday enable self-driving cars to brake in unison. If my car’s AI sees an obstacle as it rounds a bend, it could hit the brakes on both my car and your car, following just behind me. Of course, vehicle-to-vehicle talk won’t be enough: We’ll also need sensor-festooned cars capable of knitting together various kinds of data to make split-second decisions as good as a human driver’s.
That vision is still a ways off. No doubt we’ll first see 5G service in the hands of teenagers, who will use it to send and receive high-def video.
Those twirling banks of lasers you see atop experimental robocars cost plenty, wear fast, and suck power. The auto industry yearns to solve all those problems with a purely solid-state lidar set that designers can hide behind the grill of a car.
Their wish will come true next year, according to Osram Opto Semiconductors. The company says test samples will be available in 2017, and that commercial models could arrive in 2018. With mass production, the price should drop to around 40 Euros (US $43.50), says Sebastian Bauer, the product manager for Osram, in Regensburg, Germany.
By comparison, Velodyne’s rooftop lidar towers cost $70,000 and up, and that company’s new, hockey-puck-size model runs around $8000.
Osram’s contribution to the lidar system is a laser chip whose four diodes were all made in one piece, then separated. That means they’re perfectly aligned from the get-go, with no need for after-the-fact fiddling. Each channel fires in sequence, so the returning signal can be matched to its source, letting the system stitch each beam’s narrow angular slice into a composite that sweeps a large vertical swath.
“You need that for the forward-looking lidar,” Bauer says. “Think of a car travelling along a hilly road, where a single beam’s not enough—often, you’ll just see the sky.”
The other key part of the lidar is a tiny array of mirrors to steer the laser beam. That’s being provided by Osram’s partner, Innoluce, which Infineon Technologies acquired last month. The mirrors are part of a microelectromechanical system (MEMS), so they move on a tiny scale. The MEMS chip can operate at up to 2 kilohertz, scanning the environment for the data a car needs to perform 3D mapping.
Osram’s four lasers each pulse for 5 nanoseconds, just one-quarter as long as the one-channel laser the company now makes for emergency stopping systems and other functions. Because the laser quickly reaches peak power and then winks out, it can support a robust peak power of 85 watts while shining for only 0.01 percent of the time. That makes it safe for the eyes.
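The eye-safety claim follows from duty-cycle arithmetic, using only the figures quoted above:

```python
# Average optical power implied by the quoted figures.
peak_power_w = 85.0
duty_cycle = 0.0001          # shining 0.01 percent of the time
pulse_s = 5e-9               # each pulse lasts 5 nanoseconds

avg_power_w = round(peak_power_w * duty_cycle, 6)
pulse_rate_hz = duty_cycle / pulse_s  # pulses per second, per channel

print(avg_power_w)  # 0.0085 -> just 8.5 milliwatts on average
# pulse_rate_hz works out to about 20,000 pulses per second per channel
```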
The overall lidar system covers 120 degrees in the horizontal plane, with 0.1 degree of resolution, and 20 degrees in the vertical plane, with 0.5 degree of resolution. In the light of day, it should detect cars from at least 200 meters away, and pedestrians at 70 meters out.
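Those angular figures also pin down the size of each scan; a quick back-of-the-envelope check:

```python
# Samples per full sweep implied by the stated fields of view and resolutions.
h_samples = round(120 / 0.1)  # 120-degree horizontal field at 0.1-degree steps
v_samples = round(20 / 0.5)   # 20-degree vertical field at 0.5-degree steps

print(h_samples, v_samples, h_samples * v_samples)  # 1200 40 48000
```

So every full sweep yields on the order of 48,000 range measurements for the car's 3D picture of the road ahead.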
How did Osram generate such short, powerful pulses? “We integrated the driver chip from a partner and included a capacitor in the package, keeping the distance between the parts short, so there’s little inductance,” Bauer says.
It seems straightforward enough. Why, then, didn’t somebody do it earlier?
“Good question,” Bauer laughs. “It’s because the lidar market took off just a few years ago, and before that, it was all about these towers on top of the cars. And there were hard problems to solve—it’s not enough to work just in the lab; it has to pass stress tests. It has to be good enough for automotive use.”
Just like in its first season, Formula E’s third season started with a crash.
The bump took part of the nose off of Abt driver Lucas di Grassi’s car on the Hong Kong circuit. Di Grassi, last season’s championship runner-up, managed not only to stay in the race but to finish second.
The other top finishers were all familiar faces to series fans, too. Last year’s champion, Renault e.dams driver Sebastien Buemi, won the Hong Kong race, followed by di Grassi, Mahindra driver and Formula E veteran Nick Heidfeld, and last year’s third-place finisher, Renault e.dams driver Nicolas Prost. Prost was only 7.360 seconds behind Buemi, but the next cluster of drivers was more than 10 seconds behind Prost, in part due to when the drivers chose to switch cars and take their pit stops.