Cars That Think


Riding in a Robocar That Sees Around Corners

Communication is a sixth sense that lets us see through other people’s eyes and thus through walls, over the horizon, and around corners. “Look out for that lion,” someone said to our ancient ancestor, and that’s why we’re standing here today.

Birds do it, with their calls; bees do it, with their waggle dance. Now cars are doing it, with wireless systems.

I saw it at first hand yesterday, at MCity, a mock cityscape—complete with Hollywood-style street facades—that the University of Michigan, in Ann Arbor, has set up to test self-driving cars. The place is littered with antennas, power outlets, and cameras, including an eye in the sky—a quadcopter from a local aerial photography company.

I sit in a Kia Soul, along with Gabor Orosz, a professor of mechanical engineering, and a student, as we head into a bend shrouded in shrubbery. Suddenly we come upon a car parked right in the middle of the road, and Orosz hits the brakes with a screech. “If we’d had icy weather, there’s no way I could stop the car,” he says.

We do it again, but this time the car drives itself and has its communication system turned on. Well before we make the turn the car slows to a stop: It has received a signal from that car in the road. “Instead of getting into trouble, and asking the car to get us out of it, we prevent the trouble in the first place,” Orosz says.

Such communication—from car to car, and from a car to nearby traffic signals—is set to be mandated by U.S. safety regulators. If the details of the emerging system are approved, possibly this year, the rule could take effect by 2020. General Motors has already equipped its 2017 Cadillac CTS with the system.

The technology is perfect for robocars, but it can help even in today’s manually driven cars, as I’m shown in another test. This time we’re driving in the third of three cars when the first one brakes hard. The second one doesn’t, but that’s okay, because a buzzer goes off in our cabin, alerting our driver to brake. With this technology around, you can kiss multicar pileups goodbye.


The full system has yet to be rolled out, but wireless sharing of data is now going on in the streets of Ann Arbor, a largish college town. Some 1,500 people have signed up for the research project, enough to constitute 2 to 5 percent of the traffic during rush hour. Next year, the program expects to add another 1,000 cars to the fleet.

The data they gather is already helping the city’s traffic control system optimize stop lights; soon it should be possible to prioritize a given traffic light for a particular vehicle—say, a city bus that needs a few more seconds to make the light because it’s behind schedule. Another possible application is a curb-mounted station that broadcasts a signal that tells your car what the speed limit is and whether you’re exceeding it. One of them is already in operation in front of a Wendy’s restaurant that’s about 5 minutes down the road from the MCity test track.

Could the curbside station, or others like it, one day email a speeding ticket to your car? Or maybe it could just take the money out of your checking account and tell you about it afterwards.

It takes 20 to 30 minutes to fit a car with the necessary hardware: a GPS sensor and a wireless transceiver. Here in the MCity compound, at least, the GPS system uses a repeater to enhance its accuracy down to centimeter level—good enough to locate a car precisely and to allow other cars to figure out its trajectory and measure its speed. The wireless transceiver uses DSRC (dedicated short-range communications), a protocol that operates in the 5.9-gigahertz band.
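
To give a sense of what a receiving car does with those broadcasts, here is a minimal sketch—my own illustration, not MCity’s software—that estimates a neighboring car’s speed and heading from two timestamped position fixes. The coordinates and the 10-hertz broadcast rate are assumptions made for the example.

```python
# Illustrative sketch (not MCity's actual software): estimating a nearby
# car's speed and heading from two consecutive broadcast GPS fixes, the
# kind of calculation a receiving car could do with centimeter-level data.
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius

def speed_and_heading(lat1, lon1, t1, lat2, lon2, t2):
    """Return (speed m/s, heading degrees) between two timestamped fixes.

    Uses a flat-Earth (equirectangular) approximation, fine over the few
    meters a car travels between 10-Hz broadcasts.
    """
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dx = (lon2 - lon1) * math.cos((lat1 + lat2) / 2) * EARTH_RADIUS_M  # east
    dy = (lat2 - lat1) * EARTH_RADIUS_M                                # north
    dt = t2 - t1
    speed = math.hypot(dx, dy) / dt
    heading = (math.degrees(math.atan2(dx, dy)) + 360) % 360  # 0 = north
    return speed, heading

# Two fixes 0.1 s apart, about 1.4 m of northward travel (~50 km/h).
print(speed_and_heading(42.299800, -83.698500, 0.0,
                        42.299813, -83.698500, 0.1))
```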

To get data from actual self-driving vehicles, the University of Michigan will inaugurate a shuttle service from North Campus to the main campus, a 3-kilometer (2 mile) circuit. The shuttle, made by Navya, a French company, follows a set path at the pace of a trot, around 10 km/hour (6 mph).

I sit on a bench inside the sensor-festooned shuttle as it plies a curving gravel road, and I ask one of our minders whether the shuttle’s sensors had ever recorded anything strange. Indeed they had. A fawn once stepped in front of one, and the thing politely stopped for the baby deer. But nobody retrieved the video feed, so the YouTube viral moment was lost forever.

Fun fact: The Navya is mostly leased to organizations, like MCity, that are more partners than customers. But if you’d like to buy one outright, it’ll cost you well over US $200,000, and the maintenance will run you around $40,000 a year.

The point of all this is to generate critically useful experience. We’ll need a whole lot of it to cut the accident rate down to the parts-per-million level that safe driving requires. It’s no surprise that Waymo, the company that’s been testing robocars the longest, has the lowest rate of unplanned driver interventions. Yet even Waymo hasn’t covered the billions—yes, billions—of kilometers you need to traverse in order to encounter all the hard problems that are lurking out there. These are the edge cases, as engineers call them.

Edge cases tend to be rare, as indeed severe accidents are. In the United States, only about one person dies for every 100 million miles driven (the highest rate is South Carolina’s, which has 1.89 deaths per 100 million miles, according to the Insurance Institute for Highway Safety). A fleet of a few thousand self-driving cars won’t log that many miles anytime soon.
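
Some back-of-the-envelope arithmetic shows just how stark the mismatch is. The fleet size and annual mileage below are assumptions chosen for illustration, not figures from any particular program.

```python
# Back-of-the-envelope arithmetic (illustrative assumptions, not from the
# article): how long would a small robocar fleet need to drive before its
# fatality statistics mean anything against the U.S. baseline?
BASELINE = 1 / 100_000_000          # ~1 death per 100 million miles driven

fleet_size = 3_000                  # assumed fleet
miles_per_car_per_year = 15_000     # assumed annual mileage per car
fleet_miles_per_year = fleet_size * miles_per_car_per_year

# Expected fatal events per year at the human-driver baseline rate:
expected = fleet_miles_per_year * BASELINE
print(f"{fleet_miles_per_year:,} miles/year -> {expected:.2f} expected fatalities")

# Even to *expect* a single baseline-rate fatality takes ~100 million miles:
print(f"Years to log 100 million miles: {100_000_000 / fleet_miles_per_year:.1f}")
```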

That’s why engineers at the University of Michigan have concluded that in order to get enough experience, they must go beyond the real world to the virtual one. They are pouring data from their tests here into a system that constructs a huge number of variations, then beams that modeled world out to the car on the MCity track. That way, it can confront a lot of hair-raising problems without endangering anyone. Call it augmented reality for cars.
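
Conceptually, the trick is to merge virtual actors from a simulator into the same object list that the real car’s planner consumes. The sketch below is a hypothetical illustration of that idea—invented data structures, not the university’s actual interfaces.

```python
# Conceptual sketch of the "augmented reality for cars" idea (hypothetical
# data structures, not the University of Michigan's implementation): the
# test car's planner is fed a merged object list containing both real,
# sensor-detected objects and virtual ones streamed from a simulator.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    obj_id: str
    kind: str        # "car", "pedestrian", "train", ...
    x_m: float       # position in the car's local frame
    y_m: float
    speed_mps: float
    virtual: bool    # True if injected by the simulator

def merge_worlds(real_objects, virtual_objects):
    """The planner sees one world; only logging distinguishes real from virtual."""
    return real_objects + virtual_objects

real = [TrackedObject("r1", "car", 40.0, 3.5, 12.0, virtual=False)]
injected = [TrackedObject("v7", "train", 60.0, 0.0, 20.0, virtual=True)]

for obj in merge_worlds(real, injected):
    tag = "VIRTUAL" if obj.virtual else "real"
    print(f"{obj.kind:10s} at ({obj.x_m:.1f}, {obj.y_m:.1f}) m  [{tag}]")
```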

At one point during one of our drives, our car approached an intersection when another car zoomed through, running a red light. Of course, our driver knew it was going to happen—the lawyers would have insisted on it—and he braked in good time.

Later, in a small control room with four big screens, I see the event play out again—first as it actually happened, then as a virtual variation.

“It’s difficult for us to test in a dangerous environment, like running a red light,” says Henry Liu, a professor of civil and environmental engineering at the university. “Augmented reality addresses this barrier by creating a simulated world, running in parallel with the real MCity.”

He gestures toward one of the screens, and I see why our car had stopped in front of a mocked-up train crossing. A virtual train—the Hogwarts Express?—had been passing through, but only the car had been able to see it.

Robocar Movie From Pre-Talkie Days

This short silent movie, called “The Automatic Motorist,” imagines an old-timey car, running boards and all, being driven by a robot chauffeur. And the film came out in 1911, apparently providing the first sustained vision of our robocar future.

Of course, robocars are still in our future, 106 years later.  

The clinking, clanking humanoid is much like the Tin Man, but with a temper. And as in our own day, the first roadblock to its progress is the law, which here takes the form of a preening policeman. 

Cineastes had long known of this little bauble, but I must tip my hat to Atlas Obscura, which unearthed it just a few days ago.

As in countless sci-fi movies yet to come, the plot was framed around the available special effects, not the other way around. And like many films that were to follow, it is a nearly point-by-point remake of a movie made five years earlier by the same director, Walter R. Booth. The main innovation is the robotic chauffeur.

Some 13 years before this movie came out, Nikola Tesla patented a remote-control system for vehicles—drones of sea and land, as it were. He predicted they would make war so dreadful as to be unthinkable, thus paving the way to universal peace. Two years after the movie’s release, H. G. Wells wrote a rather more pessimistic novel about the coming of nuclear weapons and the collapse of civilization.

The car fulfills a clutch of motorhead fantasies beyond robotic drive. It serves as an airplane, rocket ship, boat, and submarine, going first to London, then the moon, and finally Saturn. At that far-flung stop, the car’s occupants—notably a newlywed couple—meet extraterrestrials who look like Munchkins, but with spears.

Hmmm. Could “The Wizard of Oz” (1939), with its Tin Man and Munchkins, have been cribbed from Mr. Booth? No, of course not—when moviemakers steal, they call it homage.

Two men stand in front of and beneath a pair of large white wheels trimmed with copper foil. Electronic equipment hangs from the bottom of the wheels and more equipment is on a table behind one wheel.

Wireless Power for Moving Electric Vehicles Closer to Reality

Wireless charging of moving electric vehicles is one step closer to hitting the road, Stanford University researchers say. Such technology could also help charge mobile devices, medical implants, and factory robots, the scientists add.

Read More

SureFly, a New Air Taxi That Runs on Electricity—and Gasoline

Range anxiety, the bugaboo of all-electric driving, is even more frightening for all-electric flying, where running out of power has worse consequences than having to pull over to the side of the road. 

A solution now comes from Workhorse, an Ohio-based firm. It has a passenger-carrying air taxi, called the SureFly, which combines the company’s expertise in partially automated operation, from its drone business, and in hybrid-electric propulsion, from its truck business. 

The craft’s eight counter-rotating motors each drive a carbon-fiber rotor, and the power comes from a generator cranked by an internal-combustion engine. You can fly 110 kilometers (70 miles) on a tank, then refill in minutes. There’s also a small lithium-ion battery, as a backup.

“That way, if the engine should fail, you have five minutes to get down,” Steve Burns, chief executive of Workhorse, tells IEEE Spectrum. “And we even have a ballistic parachute, fired upward, like an ejector seat, so you can be 100 feet up, and it’ll still work. In a normal helicopter the rotor would chop it up, but with eight blades, there’s nothing directly overhead.”

SureFly will be on display at the Paris Air Show later this month, but it won’t fly—it’ll just sit there, in all its octocopter glory. It’s supposed to have its maiden flight later this year, most likely at the company’s truck factory, in Indiana.

Of course, talkin’ about tomorrow is what air-taxi firms do. None of the dozen or so startups in the air-taxi game has yet flown anyone to work. Among them are Zee Aero, established by Google co-founder Larry Page, and Uber Elevate, brainchild of Uber honcho Travis Kalanick. Just last week Toyota said it was backing a startup in a similar project; in April, AeroMobil, a Slovak company, said it was already taking orders—for 2020 delivery.

Burns maintains that his company has the edge because it hasn’t let the best be the enemy of the good.

“Most of the other companies are going after the Nirvana solution: something that can lift off as helicopter, transition to a fixed-wing plane to fly, then retransition to helicopter to land,” he says. “But you know the Osprey [a military transport with tiltrotor technology]? A lot of people got killed, which is why we decided that Version One won’t change to a plane. So in flight it’s not the most efficient way—but this is for a short-hop application.”

The goal, Burns says, is to demonstrate to the Federal Aviation Administration (FAA) that the air taxi is statistically twice as safe as taking a car to the same destination. One way to make it happen is with SureFly’s computer flight system, similar to many of the electronic safety features in today’s more advanced cars. Such a system would assist the pilot but not replace him, so you’d still need a license to fly it.

“Since it’s classified as a light-sport craft,” Burns says, “it only takes 20 hours of [pilot] training. Helicopter training is 1500 hours.” 

The first SureFly pilots will probably have to stick to whatever air corridors get carved out by the FAA and other governmental authorities. One trick is to have the craft fly over railway tracks. Among the jurisdictions already drawing up regulations for such things are Dubai and the Dallas-Fort Worth area in Texas.

Additional flight-control methods can always be rolled out years from now should Jetsons-style air commuters ever number in the millions. But don’t hold your breath: even the sale of a million drones last Christmas hasn’t yet darkened the skies.

Early adopters will include farmers interested in precision agriculture; emergency responders, who want to get to the scene of an accident a few minutes faster than they could by ambulance; and the military. And maybe the odd centimillionaire.

In the beginning, at least, those will be the only guys who can afford craft like SureFly. “All we’re announcing now is that it’ll sell for under US $200,000, for the initial adopters,” Burns says. “I wanted to put it at the price of a Tesla, and with manufacturing at scale—well, we’ll see.”


How Much Can Autonomous Cars Learn from Virtual Worlds?

To be able to drive safely and reliably, autonomous cars need to have a comprehensive understanding of what’s going on around them. They need to recognize other cars, trucks, motorcycles, bikes, humans, traffic lights, street signs, and everything else that may end up on or near a road. They also have to do this in all kinds of weather and lighting conditions, which is why most (if not all) companies developing autonomous cars are spending a ludicrous (but necessary) amount of time and resources collecting data in an attempt to gain experience with every possible situation.

In most cases, this technique depends on humans annotating enormous sets of data in order to train machine-learning algorithms: hundreds or thousands of people looking at snapshots or videos taken by cars driving down streets, and drawing boxes around vehicles and road signs and labeling them, over and over. Researchers from the University of Michigan think there’s a better way: do the whole thing in simulation instead. They’ve shown that this approach can actually be more effective than using real data annotated by humans.
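
The appeal is easy to see: a simulator already knows exactly where everything is, so every rendered frame comes with pixel-perfect labels for free. The sketch below illustrates the idea with a hypothetical simulator interface; it is not the Michigan researchers’ pipeline.

```python
# Conceptual sketch (hypothetical simulator API, not the Michigan system):
# in a simulated world the ground truth is already known, so every rendered
# frame comes with exact labels that would otherwise require a human
# annotator drawing boxes by hand.
from dataclasses import dataclass

@dataclass
class LabeledFrame:
    image: bytes                 # rendered camera frame
    boxes: list                  # [(class_name, x_min, y_min, x_max, y_max), ...]

def render_labeled_frame(sim_state):
    """The simulator projects its own object list into the camera view, so
    the bounding boxes are exact by construction."""
    image = sim_state["renderer"]()                 # assumed render call
    boxes = [(obj["class"], *obj["bbox"]) for obj in sim_state["objects"]]
    return LabeledFrame(image=image, boxes=boxes)

fake_state = {
    "renderer": lambda: b"\x89PNG...",              # stand-in for a real frame
    "objects": [{"class": "car", "bbox": (120, 80, 310, 220)},
                {"class": "pedestrian", "bbox": (400, 60, 450, 200)}],
}
print(render_labeled_frame(fake_state).boxes)
```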

Read More

Robocars and Electricity—a Match Made in Heaven

A lot of talk about self-driving cars name-checks electric drive as well, which might lead you to put the pairing of these technologies down to fashion.

And indeed, in May, as the LexisNexis database shows, of 1,641 newspaper references to either “driverless” or “self-driving,” 185—that’s 11 percent—fell within 10 words of “electric.” A good part of the reason for this is carmakers’ strenuous efforts to associate themselves with futuristic ideas.

BMW has gone further than most by establishing its i-series. Just yesterday the company confirmed that the iNext, due out in 2021, would be both fully self-driving and fully electric. General Motors has pinned its driverless ambitions on the all-electric Chevrolet Bolt. Last week, Ford decided to install as its chief executive the guy who’d formerly headed research on both EVs and robocars.

But EVs and robotics really do mesh perfectly, both in the early development stage and later on, when the technologies are meant to take over the world.

Read More

Toyota Joins Coalition to Bring Blockchain Networks to Smart Cars

Today if your car wants to talk to another car, a service provider or just an infotainment source, it needs to go through a third-party network. That can cost you money in two ways: you pay the third party and you forgo the chance of being paid for the use of your data.

Toyota means to solve both problems using blockchain technology, the networking method behind Bitcoin and other cryptocurrencies. Blockchain allows connected computers to share data and even software, and to do so securely. This matters for all sorts of computer-assisted driving, above all for self-driving technology.

“Hundreds of billions of miles of human driving data may be needed to develop safe and reliable autonomous vehicles,” said Chris Ballinger, director of mobility services and chief financial officer at Toyota Research Institute, in a statement. “Blockchains and distributed ledgers may enable pooling data from vehicle owners, fleet managers, and manufacturers to shorten the time for reaching this goal, thereby bringing forward the safety, efficiency and convenience benefits of autonomous driving technology.”

Distributed ledgers allow a lot of independent computers to keep track of the sharing of data. That way the record can’t be tinkered with if one machine is compromised. Up until now the technology has mostly involved financial transactions, but any sort of exchange or chitchat can be handled in the same way.
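
Here is a minimal sketch of that tamper-evidence idea—a toy hash chain in a few lines of Python, not Toyota’s or BigchainDB’s actual stack—showing how altering one old record invalidates every record that follows it.

```python
# Minimal sketch of the tamper-evidence idea behind a distributed ledger
# (not Toyota's or BigchainDB's implementation): each record carries the
# hash of the previous one, so altering any entry breaks every hash that
# follows it on every honest machine.
import hashlib, json

def make_block(prev_hash, payload):
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return {"prev": prev_hash, "payload": payload,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        body = json.dumps({"prev": block["prev"], "payload": block["payload"]},
                          sort_keys=True)
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = [make_block("genesis", {"vin": "TEST123", "odometer_km": 10_500})]
ledger.append(make_block(ledger[-1]["hash"], {"vin": "TEST123", "odometer_km": 10_620}))

print(chain_is_valid(ledger))                  # True
ledger[0]["payload"]["odometer_km"] = 9_000    # tamper with history...
print(chain_is_valid(ledger))                  # False: the tampering is evident
```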

Car makers might use blockchain to restrict the sharing of data—say, from a robocar test fleet—to their partners and to road-safety regulators. Car owners might use it to charge for the data their cars churn out or to give up a measure of privacy in return for a service. Car owners could choose to give an insurer moment-by-moment driving data to get a lower premium—so long as they drive safely.

All car makers and suppliers are struggling to handle the gusher of data provided and demanded by cars, and the burden will grow once semi-autonomous cars hit the road. IEEE Spectrum spoke to Michael Tzamaloukas, vice president for autonomous drive and advanced driver assistance systems at Harman, a leading auto supplier (including to Toyota, although Harman is not now using blockchain tech). He said Harman is already getting a data deluge from its cloud-based Map Live Layers, which combines data from a car’s sensors with data shunted from other cars via the cloud.

“More and more content is being generated, particularly the premium cars, and in three to five years, you’ll be getting much more than you do from a smartphone,” says Tzamaloukas. “It will greatly benefit a number of safety apps—kind of like an invisible airbag. It will make your ride more comfortable, more informed; you’ll be able to use data from other cars to see beyond the corner.”

Toyota and its main partner, the MIT Media Lab, are recruiting companies to develop applications based on the blockchain system. It listed, in its statement, Berlin-based BigchainDB, which is building the data exchange; Oaken Innovations, based in Dallas and Toronto, which is developing a mobility token to pay tolls and other fees; Commuterz, in Israel, which is working on a carpooling application; Gem, in Los Angeles, which is working with Toyota Insurance Management Solutions; and Japan’s Aioi Nissay Dowa Insurance Services.

Toyota’s initiative forms part of a broader coalition of groups from completely different industries that are using blockchain tech to modernize data sharing. The group, called the Enterprise Ethereum Alliance, includes Merck, State Street Corp., JPMorgan Chase, BP, Microsoft, IBM, and ING.

The right front corner of a yellow garbage truck. A metallic lidar cylinder sticks out at the corner. A man in neon yellow stands in the background.

Volvo Takes the Right First Step in Autonomous Garbage Collection

In September of 2015, Volvo announced that it was developing a robot designed to pick up trash bins and take them to a garbage truck. We were a little bit skeptical of that particular approach to the problem, but it's not the only angle that Volvo is taking towards autonomous refuse handling. Volvo Trucks has been testing a self-driving garbage truck in Sweden, designed to help humans do this dirty job more safely and more efficiently. It's not as slick as a team of little mobile robots that can pick up bins all by themselves, but it's a much more practical near-term approach to solving the larger problem.

Read More

22-Year-Old Lidar Whiz Claims Breakthrough

Lidarland is buzzing with cheap, solid-state devices that are supposedly going to shoulder aside the buckets you see revolving atop today’s experimental driverless cars. Quanergy started this solid-state patter, a score of other startups continued it, and now Velodyne, the inventor of those rooftop towers, is talking the talk, too.

Not Luminar. This company, which emerged from stealth mode earlier this month, is fielding a 5-kilogram box with a window through which you can make out not microscopic MEMS mirrors, but two honking, macroscopic mirrors, each as big as an eye. Their movement—part of a secret-sauce optical arrangement—steers a pencil of laser light around a scene so that a single receiver can measure the distance to every detail.

“There’s nothing wrong with moving parts,” says Luminar founder and CEO Austin Russell. “There are a lot of moving parts in a car, and they last for 100,000 miles or more.”

A key difference between Luminar and all the others is its reliance on home-made stuff rather than industry-standard parts. Most important is its use of indium gallium arsenide for the photodetector. This compound semiconductor is harder to manufacture and thus more expensive than silicon, but it can receive at a wavelength of 1550 nanometers, deep in the infrared part of the spectrum. That wavelength is much safer for human eyes than today’s standard of 905 nm, so Luminar can pump out a beam with 40 times the power of rival sensors, increasing its resolution, particularly at 200 meters and beyond. That’s how far cars will have to see at highway speeds if they want to give themselves more than a few seconds to react to events.

Russell’s a little unusual for a techie. He stands a head taller than anyone at IEEE Spectrum’s offices and, at 22, he is a true wunderkind. He dropped out of Stanford five years ago to take one of Peter Thiel’s anti-scholarships for budding businessmen; since then he’s raised US $36 million and hired 160 people, some of them in Palo Alto, the rest in Orlando, Fla.

Like every lidar salesman, he comes equipped with a laptop showing videos taken by his system, contrasted with others from unnamed competitors. But this comparison’s different, he says, because it shows you exactly what the lidar sees, before anyone’s had the chance to process it into a pretty picture.

And, judging by the video he shows us, it is far more detailed than another scene, which he says comes from a Velodyne Puck, a hockey-puck-shaped lidar that sells for US $8,000. The Luminar system picks out cars and pedestrians well ahead of the sensor. The Puck’s view—unimproved by advanced processing, that is—is much less detailed.

“No other company has released actual raw data from their sensor,” he says. “Frankly there are a lot of slideshows in this space, not actual hardware.”

We take the gizmo into our little photo lab here and our photo editor snaps a few shots, always taking care not to look too deeply into the window shielding the lidar’s mirrored eyes. Trade secrets, all very hush-hush. One thing’s for sure: The elaborate optics don’t look cheap, and Russell admits they aren’t, not yet.

These machines are part of a total production run of just 100 units, enough for samples to auto companies, four of which, he says, are working with it as of now. He won’t say who, but Bloomberg quotes other sources as saying that BMW and General Motors “have dropped by” the California office.

The cost per unit will fall as production volume rises, he says. But he isn’t talking about $100 lidar anytime soon.

“Cost is not the most important issue; performance is,” he contends. “Getting an autonomous vehicle to work 99 percent of the time is not necessarily a difficult task; it’s the last percent, with all the possible ‘edge cases’ a driver can be presented with—a kid running out into the road, a tire rolling in front of the car.”

Tall though he is, Russell is himself a prime example of a hard-to-see target because his jeans are dark and his shirt is black. “Current lidar systems can’t see a black tire, or a person like me wearing black—Velodyne’s [Puck] wouldn’t see a 5- to 10-percent reflective object [30 meters away]. It’s the dark car problem—no one else talks about it!”

And, because the laser’s putting out 40 times more power, a dark object at distance will be that much easier to detect.
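
As a rough illustration of why the extra power matters, consider the usual radiometric scaling for a diffuse target: the echo falls off with the square of range and scales with the target’s reflectivity, so detection range grows roughly with the square root of transmit power. The baseline range and reflectivity values below are mine, chosen only to show the shape of the tradeoff—they are not Luminar’s or Velodyne’s specifications.

```python
# Illustrative radiometric scaling (a simplified model with my own numbers,
# not any vendor's spec sheet): for a diffuse target that fills the beam,
# the returned power scales roughly as  P_rx ~ P_tx * reflectivity / range**2,
# so detection range grows with the square root of transmit power.
import math

def max_range(p_tx_rel, reflectivity, baseline_range_m=100.0):
    """Range at which the echo equals the detection threshold, relative to a
    baseline sensor that just detects a 100%-reflective target at 100 m with
    unit transmit power. Purely illustrative."""
    return baseline_range_m * math.sqrt(p_tx_rel * reflectivity)

for power, label in [(1, "905-nm-class power"), (40, "40x power at 1550 nm")]:
    for rho in (0.8, 0.1):   # light clothing vs. a dark tire or black jacket
        print(f"{label:22s} reflectivity {rho:.0%}: "
              f"~{max_range(power, rho):5.0f} m")
```

On this simple model, 40 times the power buys a bit more than a sixfold increase in range against a target of a given darkness.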

What’s more, only one laser and one receiver are needed, not an array of 16 or even 64—the count for Velodyne’s top-of-the-line, $50,000-plus rooftop model. As a result, the system knows the source for each returning photon and can thus avoid interference problems with oncoming lidars and with the sun itself. But, because it hasn’t got the 360-degree coverage of a rooftop tower, you’d need four units, one for each corner of the car.

That, together with the need to make the laser, receiver, and image processor at its own fab, in Florida, means the price will be high for a while. Russell’s argument is simple: good things cost money.

“The vast majority of companies in this space are integrating off-the-shelf components,” he says. “The same lasers, same receivers, same processors—and that’s why there have been no advances in lidar performance in a decade. Every couple of years a company says, ‘we have a new lidar sensor, half the size, half the price, and oh, by the way, half the performance.’ The performance of the most expensive ones has stayed the same for practically a decade; all the newer ones are orders of magnitude worse.”


Three Studies Show How Robocars Will Make for Safer, More Efficient Roads

Nailing down the safety benefits of robocar technology is harder than it looks. Of course, any system that keeps you from driving into a brick wall must be good for your health, but the problem is that not all drivers handle such safety features properly.

Three recent studies all have to do with this critical interaction between technology and people. The first one, published in the most recent issue of IET Intelligent Transport Systems, shows that if every car came with both an emergency braking system and a pedestrian and cyclist detection system, it would cut by 7.5 percent the total number of traffic deaths among these “vulnerable road users.” That decrease, say the authors of the study (who hail from Finland, the Netherlands, Austria, and Britain), “comes down to an estimate of around 1,900 fatalities saved per year” in the European Union.

That safety system combination yielded the best result of the 10 driver-assistance technologies the authors considered. But even so, it falls a percentage point or two below what you’d get if pedestrians and cyclists were unaware that the cars had the two safety features. It’s that comforting knowledge that a car can make up for their carelessness that tempts pedestrians to cross in front of such a car when they shouldn’t.

Safety engineers call such trading of safety for convenience “risk compensation.” They’ve been struggling against it for decades.

I was in Detroit back in the 1980s when Mercedes-Benz came to demonstrate its new antilock braking system (ABS) by driving a car over a half-soaped stretch of asphalt, with the left-hand wheels on the soapy side and the right-hand wheels on the dry side. When the driver stamped on the brake, the car stopped easily, without the slightest swerve or skid.

“When this comes out on a car I can afford, I’m buying it with my own money,” my newspaper bureau chief told me. “I’m not waiting for the safety guys to make it mandatory.”

Customers did pay willingly for ABS, in part because they were encouraged by auto insurers that offered discounts. The insurance companies then sat back and waited for the accident rate to fall. But it didn’t. The ABS instead encouraged drivers to tailgate a little more and take curves a little faster.

Still, if you really load a car up with safety tech, you can swamp the human proclivity to act like an idiot. The car will save us from ourselves, despite our own best efforts to hinder it.

Take electronic stability control (ESC), the next-gen elaboration of ABS. As its inventor, Anton van Zanten, told IEEE Spectrum last year, “ABS works only during panic braking, not during coasting or partial braking or free rolling, and traction control usually works only if you have full acceleration. But ESC controls car motion at any time.” 

ESC is the subject of another paper that appears in the same issue of the journal. Unlike most such analyses, this study, from Finland, doesn’t model future scenarios in which all cars carry ESC. It just estimates how many lives the technology actually saved in 2014. The conclusion: Thirty-seven lives were saved, equal to 16 percent of all road traffic deaths that year, and 747 injuries were prevented, equal to 11 percent of traffic injuries.

The third study, from researchers at the University of Illinois at Urbana-Champaign, concerns not safety but the efficient flow of traffic. Here, too, the point was to confirm, with hard evidence, something experts had long expected: that a self-driving car can head off the formation of so-called phantom traffic jams.

A phantom jam forms when human drivers tap the brake to add a little space between themselves and the car in front of them. Their own braking then provokes a similar, but slightly delayed, response in the driver just behind them, which has the same effect on the next driver down the line. The resulting “traffic wave” strengthens as it propagates backward until at last it produces a jam.

“Our experiments on a circular track with more than 20 vehicles show that traffic waves emerge consistently, and that they can be dampened by controlling the velocity of a single vehicle in the flow,” say the authors.  “These experimental findings suggest a paradigm shift in traffic management: Flow control will be possible via a few mobile actuators (less than 5 percent) long before a majority of vehicles have autonomous capabilities.”
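
A toy version of that ring experiment is easy to sketch. The simulation below uses an optimal-velocity car-following model with parameters I chose for illustration—it is not the researchers’ code or controller. Twenty-two simulated cars circle a ring; optionally, one of them holds a steady pace just below the equilibrium speed, braking only when its gap closes.

```python
# Toy ring-road simulation in the spirit of the experiment described above
# (an optimal-velocity car-following model with illustrative parameters;
# NOT the Illinois team's code or controller). All 22 "human" cars react to
# the gap ahead; optionally one car instead holds a steady pace just below
# the equilibrium speed, braking only when its gap closes.
import math

N, L = 22, 220.0                 # cars on a ring, ring length in meters
A = 1.0                          # driver sensitivity, 1/s
DT, STEPS = 0.1, 6000            # 0.1-s time step, 10 simulated minutes

def desired_speed(gap_m):
    """Optimal-velocity function: comfortable speed for a given gap."""
    return 5.0 * (math.tanh((gap_m - 10.0) / 5.0) + math.tanh(2.0))

def simulate(with_robocar):
    x = [i * L / N for i in range(N)]
    v = [desired_speed(L / N)] * N
    v[N // 2] -= 2.0                          # small disturbance seeds a wave
    cruise = 0.9 * desired_speed(L / N)       # robocar's steady target speed
    lo, hi = float("inf"), float("-inf")
    for step in range(STEPS):
        acc = []
        for i in range(N):
            gap = (x[(i + 1) % N] - x[i]) % L
            target = desired_speed(gap)
            if with_robocar and i == 0:
                target = min(target, cruise)  # never chase, brake if needed
            acc.append(A * (target - v[i]))
        for i in range(N):
            v[i] = max(0.0, v[i] + acc[i] * DT)
            x[i] = (x[i] + v[i] * DT) % L
        if step >= STEPS - 600:               # sample the final minute
            lo, hi = min(lo, min(v)), max(hi, max(v))
    return hi - lo                            # speed spread: big = stop-and-go

print(f"speed spread, all human drivers : {simulate(False):.2f} m/s")
print(f"speed spread, one steady robocar: {simulate(True):.2f} m/s")
```

In this toy model, the all-human ring tends to lock into a persistent stop-and-go wave, while the ring with the single steady car tends to smooth back out—qualitatively echoing the paper’s point that one vehicle in 22 can be enough.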

It’s a kinder, gentler version of a trick that state troopers used in the 1970s and 1980s, when motorists routinely flouted the national speed limit of 55 miles per hour. Two patrol cars would drive side by side to create a “rolling roadblock.” Here, though, the robocar isn’t forcing anyone to do anything—it’s just proceeding at a steady pace, braking precisely when it must.


Cars That Think

IEEE Spectrum’s blog about the sensors, software, and systems that are making cars smarter, more entertaining, and ultimately, autonomous.
Contact us: p.ross@ieee.org

Editor: Philip E. Ross, New York City
Contributor: Willie D. Jones, New York City
Senior Writer: Evan Ackerman, Washington, D.C.
Contributor: Mark Harris, Seattle