Cars That Think

Car driving down a road with a heads-up display of vehicle information

HERE Mapping Service to Automate Finding a Parking Spot

A new mapping service could one day automatically help drivers find parking spots.

At the Paris Motor Show on 29 and 30 September, the mapping company HERE will show off several technologies, such as road-sign detection and traffic monitoring, aimed at improving driver safety and decision making.

One of these services, according to a press release today, could show a driver the best places to park. A 2006 UCLA study of downtown traffic in major cities found that cruising for spots accounts for up to 30 percent of traffic flow. By combining third-party data with cars’ onboard sensors, HERE’s service would automatically pinpoint available spots and how much each of them would cost.

The novelty here is that, at first, the data would come from forward-facing cameras and other sensors on Audi, BMW, and Mercedes-Benz cars. (HERE, formerly Nokia’s mapping business, is backed by Audi, BMW, and Mercedes-Benz.)

The platform should be available by the first half of 2017 to customers both inside and outside the automotive industry, according to HERE. The company plans to license the technology to automakers, municipalities, road authorities, smartphone makers, and app developers.

“What we are seeing today is the technology and automotive industries coming together to create services that will elevate the driving experience for billions,” Edzard Overbeek, CEO of HERE, said in the press release.

The interior of a Tesla car, with a large display screen on the dashboard

Tesla's Massive New Autopilot Update Is Released, Promising Safer Driving

A long-heralded update to Tesla Motors’ Autopilot has just been made available for download. First reports suggest that it’s as big a change in the semiautonomous driving system as Tesla CEO Elon Musk had promised.

One key element of the upgrade is making more use of the car’s existing radar capabilities, both to perceive the road in real time and to map it so that subsequent Tesla cars can distinguish earlier fixed features from new, perhaps threatening ones. Another key element is saving drivers from over-dependence on the software.
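
Tesla hasn’t published its algorithm, but the fleet-mapping idea can be sketched as a geocoded whitelist: cars log the positions of harmless stationary radar targets, such as overhead signs and bridges, and later cars brake on a strong stationary return only if nothing is whitelisted nearby. Everything below (coordinates, match radius, data structure) is an illustrative assumption, not Tesla’s implementation:

```python
import math

# Hypothetical geocoded whitelist of known, harmless stationary radar
# targets (overhead road signs, bridges), built up fleet-wide.
WHITELIST = [
    (37.4220, -122.0841),  # example: overhead sign logged by earlier cars
    (37.4310, -122.0900),  # example: bridge
]

MATCH_RADIUS_M = 25.0  # assumed tolerance for matching a return to the map

def _dist_m(lat1, lon1, lat2, lon2):
    """Rough flat-earth distance in meters; fine at these scales."""
    dlat = (lat2 - lat1) * 111_320.0
    dlon = (lon2 - lon1) * 111_320.0 * math.cos(math.radians(lat1))
    return math.hypot(dlat, dlon)

def should_brake(return_lat, return_lon):
    """Brake on a strong stationary radar return only if the fleet map
    has no known harmless object near that position."""
    for lat, lon in WHITELIST:
        if _dist_m(return_lat, return_lon, lat, lon) < MATCH_RADIUS_M:
            return False  # matches a mapped sign or bridge: ignore it
    return True  # unmapped large object ahead: treat as a real obstacle

print(should_brake(37.5000, -122.1000))  # True: nothing whitelisted there
```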

Either of those points might have saved the Tesla owner who died last May when Autopilot, apparently unsupervised, drove his car into the side of a tractor trailer. That is the first fatality known to have been caused by a modern robotic driving system.

“We believe it would have seen a large metal object across the road,” Musk said in a conference call earlier this month, referring to the trailer. “Knowing that there is no overhead road sign there, it would have braked.”

(Another Tesla driver died in China in January, in a case now under litigation there, but it isn’t clear whether the Autopilot was operating at the time of the crash.)

Tesla’s preference for radar over lidar, the laser-ranging equivalent, makes the company a little unusual in autoland. Lidar has far better resolution—unlike radar it can see road markings and make out the shapes of signs and other things even at a distance.

Radar, however, is cheaper, more compact, and far better at seeing through rain and snow. And Tesla needs this immediate practicality because it’s incrementally raising the capability of its cars’ “advanced driver assistance systems,” or ADAS, to a fully self-driving level. By contrast, Google, Ford and Uber are aiming to produce a fully robotic car in one fell swoop. They now festoon their experimental cars with lidar in the expectation that it will become cheaper, smaller and more capable by the time that car is ready, five years (at least) from now.

Tesla’s Autopilot 8.0 goes further than ever to keep the driver’s eyes on the road. For instance, it will sound an alarm if your hands are off the wheel, repeating with increasing insistence until, after the third warning, Autopilot disengages for the remainder of the trip.
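
That escalation logic amounts to a tiny state machine. Here is a minimal sketch of the behavior as described, three warnings and then a lockout until the trip ends; the class and method names are invented for illustration, not Tesla’s software:

```python
class AutopilotNag:
    """Sketch of the hands-off-wheel escalation described above:
    warn with rising insistence, and after the third warning keep
    Autopilot locked out for the rest of the trip."""

    MAX_WARNINGS = 3

    def __init__(self):
        self.warnings = 0
        self.locked_out = False

    def on_hands_off_timeout(self):
        if self.locked_out:
            return "autopilot unavailable"
        self.warnings += 1
        if self.warnings >= self.MAX_WARNINGS:
            self.locked_out = True   # disengaged until the trip ends
            return "autopilot disengaged for remainder of trip"
        return f"warning {self.warnings}: hold the steering wheel"

    def on_trip_end(self):
        self.warnings = 0            # a new trip resets the lockout
        self.locked_out = False

nag = AutopilotNag()
for _ in range(4):
    print(nag.on_hands_off_timeout())
```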

How far these changes will go to prevent accidents, small and large, remains to be seen. For now, though, the select reviewers who have been beta-testing the car say that it certainly drives in a less machine-like way.

“It’s only human to want to give the truck a little more space and hug the outer edge of the lane,” writes Tom Randall in Bloomberg News. “With the upgrade, the car is beginning to act a little more human, adjusting its position in the lane to account for perceived threats from the sides.” 

One ho-hum aspect of today’s upgrade would once have been the most striking thing of all: it’s all done through an over-the-air download. Tesla pioneered this trick, and now other automakers are following suit. Here Tesla has a built-in advantage over other car makers: it sells cars direct to the public, so upgrades can go straight to the customer without alienating a dealer network.

U.S. DoT Secretary Foxx on the day he announces the imminent publication of federal robocar guidelines

Federal Regulators Open the On-Ramp for Self-Driving Cars

One of the great questions hanging over self-driving cars is the attitude that government regulators will take toward them.

As it had hinted it would do, the U.S. Department of Transportation has chosen to allow the adoption of robocars to proceed as quickly as possible (but not more so, to borrow a phrase from Einstein).  

In a statement last night the DOT summarized the policy, which it has just released in full today. It’s a system of guidelines rather than hard-and-fast rules—enough to enable engineers to plan their products and companies to refine their business models.

“This is a change of culture for us,” Transportation Secretary Anthony Foxx said yesterday. “Typically we would say a car must meet standard ‘A’ in a certain way. Under this approach, it isn’t prescriptive that there have to be specific proof points to be met before a technology comes to market.”

The guidelines cover when a car can drive itself; when it must hand control back to the driver; how it might stop or leave the road when such a handover’s not possible; and how it must handle ethical challenges, such as whether to veer to avoid one accident even if that risks causing another one. Perhaps most important, the framework will have national standing.

Vox reports that a Transportation Department official said last night in a telephone interview that the federal rules will cover robotic systems, while those of states and municipalities will apply only to the human drivers. In other words, if I drive badly, my state will punish me; if my car drives itself badly, the feds will intervene, presumably by going after the car’s maker.

Here is how the full Department of Transportation (DOT) report puts it: “DOT strongly encourages States to allow DOT alone to regulate the performance of [self-driving] technology and vehicles. If a State does pursue [self-driving] performance-related regulations, that State should consult with NHTSA and base its efforts on the Vehicle Performance Guidance provided in this Policy.”

The U.S. government has long shown its desire to encourage self-driving technology, both in what it has said and in what it has not said. At a conference back in July, Mark R. Rosekind, head of the National Highway Traffic Safety Administration, refused to mention by name the first fatality caused by a robocar—a Tesla Model S that drove itself into a truck two months before. Instead he referred to it indirectly as “the elephant in the room,” and went on to stress that no single failure would “derail” the government’s efforts to speed the adoption of self-driving cars.

“We should be desperate for new tools that will help us save lives,” Rosekind said.

Ford robocar with four lidar towers

A Ride in Ford's Self-Driving Car

The only sign of fallibility I saw yesterday in Ford’s experimental self-driving car came halfway through a drive near the company’s headquarters in Dearborn, Mich., when the robocar briefly braked for no clear reason, then apparently thought better of it.

A tiny irregularity, and grist for the engineers’ mill, along with the other little lapses logged at this media event. One reporter said his car had been a bit “spooked” by a hedge. But at least in my drive the car did everything from start to finish, setting out with a programmed destination but deciding for itself on each turn, lane change, stop, and start.

Here, in this protected realm among Ford employees, the self-driving car will first see use in a ride-hailing service in 2018. By then some of the sensors will have improved. For instance, the four $8,000 lidar sets on the roof, which reach only 80 meters, will soon be replaced by just two sets that can see about twice as far. And by 2021, when Ford plans to roll out a commercial robotaxi service, the lidar should be still better, smaller, and cheaper.

“We design modularly, so that we don’t depend on the availability of new hardware,” Randal Visintainer, the director of Ford's autonomous vehicle program, told IEEE Spectrum. A lot of suppliers have been talking about lidar-on-a-chip for ridiculously cheap prices, he noted, “but I haven’t seen one yet.”

The interesting thing about the lidar arrangement is that the two outermost sets revolve obliquely, so as to get a view of the space immediately adjacent to the side of the car. If you rely on just one roof-mounted set, as Google’s car does, the car casts a shadow, creating a blind zone. The other two sets on Ford’s vehicle are vertically oriented so that their fields overlap in front and in the back, providing extra detail.

In this first unveiling to journalists, Ford’s little fleet of robocars, all based on the Ford Fusion hybrid, stuck to streets mapped to within two centimeters, a bit less than an inch. The car compared that map against real-time data collected from the lidar, the color camera behind the windshield, other cameras pointing to either side, and several radar sets—short range and long—stashed beneath the plastic skin. There are even ultrasound sensors, to help in parking and other up-close work.

Here’s how the map looks to the car’s self-driving system: the darker colors represent stored mapping data; the brighter colors represent real-time data from the car’s own sensors.

One sensor the car did not use was real-time GPS, which Ford deems too unreliable in built-up areas, where satellite signals can reflect along a number of different pathways.

“If they give us GPS relay stations, we’ll use them,” Visintainer said. “If there were smart intersections, we’d use them too—it’d make our lives a lot easier.”

The only staged event in our ride came when a Ford employee acted out the part of a pedestrian: He hit the “walk” button at a crosswalk and then crossed the street while demonstratively fiddling with a cellphone. The car stopped appropriately. Later, though, real pedestrians crossed, and the car again did as it should, if anything using an excess of caution. One time it stopped and wouldn’t budge until a pedestrian had not only crossed the street but taken another dozen steps up a sloping path.

The critical point here is that Ford is designing a car that will do it all, all at once—a kind of technological Great Leap Forward from today’s cars, with their ADAS, or advanced driver assistance systems. That doesn’t mean the company isn’t working on those stopgap measures as well.

“We developed a philosophy of designing from the bottom up and from the top down,” Visintainer said. Improving the self-driving power is the top-down approach; getting the driver-assistance systems to work is the bottom-up approach. “The question is, how far down can we take that [first approach], and when do the two approaches meet?”

Google made the top-down approach famous, arguing that anything short of full autonomy would lull drivers into a false sense of security. And that’s what many in the business say caused the one fatal robocar accident last May, when a Tesla, unsupervised by its driver, drove itself into the side of a truck.

Taking the human being out of the loop—the jargon for turning a driver into a passive passenger—means taking away the safety net that today’s most advanced cars all require. “And you need extra redundancy if there’s no human serving as backup,” Visintainer adds.

He said putting the two strategies together, and putting systems together so they can be manufactured efficiently and last long, is Ford’s core competency—the thing it can do better than non-carmakers like Google, Apple, and Uber. 

It was a note sounded yesterday in a talk by Bill Ford, the executive chairman of Ford Motor Co., who noted he’d gotten to know Silicon Valley during the years he spent as a member of the board of eBay. “We know how to integrate all that technology into a vehicle that they [the non-car makers] might not have,” Ford said. “People were saying we’ll be low-margin assemblers of other people’s technology. That’s shifted. We bring a lot of technology ourselves, then we integrate it into the vehicle, and then we build it.”

The car and non-car companies are competing, not just to lead in technology, but also to be perceived as leading. It was no accident that on Tuesday, while Ford was giving a gaggle of journalists in Dearborn their first ride in a robotic version of the Ford Fusion, Uber was giving another bunch of scribes in Pittsburgh their first ride in Uber’s own robotic version of the very same car.

Uber is about to use them in a pilot commercial robotaxi service, but that doesn’t mean it’s leapfrogged Ford, let alone Google. The Uber cars will remain firmly under the supervision of professional drivers.  

A self-driving Ford Fusion with lidar sensors on roof.

Ford: Robotaxis in 2021, Self-Driving Cars for Consumers in 2025

Mark Fields, the chief executive of Ford Motor Company, said his company would sell completely self-driving cars by about 2025, after first providing them via a ride-hailing service in 2021.

Such cars would have “no steering wheel, no brake pedal,” he said. “Essentially a driver is not going to be required.”

At first these robocars will cost more than conventional cars, he admitted, but the ride-hailing application will make up for that by saving the salary of a professional driver. Later, the rising scale of production will lower the sticker price enough to justify offering the robocars for sale. Ford can make money either way.

Mercedes and Starship Technologies collaborate on cargo delivery bots

Mercedes Tries to Conquer the Last Mile With Cute Delivery Drones and Bots

Mercedes-Benz is experimenting with small bots to carry cargo over that last, pesky mile to the customer, using a human-driven van to ferry them over the stretch before that.

The two kinds of bots, aerial and terrestrial, are being supplied by companies we’ve already written about. Matternet is providing its slick M2 quadcopter, several of which would perch on the van’s roof. Starship Technologies is providing its six-wheel robot, eight of which can fit inside a van.

The idea is that the driver of the Mercedes Sprinter van would load up the bots with their various payloads, take an optimized route from one customer to the next, and unleash the automatons. Then, after the bots have made their rounds, the van would pick them up again at some convenient point, carry them back to the warehouse, load them with more cargo and, perhaps, swap out the battery.
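
That “optimized route” step is a small vehicle-routing problem. A greedy nearest-neighbor pass, sketched below with made-up drop-off coordinates, gives the flavor of it, though a production dispatcher would use something more sophisticated:

```python
import math

# Hypothetical drop-off points (x, y in km) for one vanload of bots.
stops = [(2.0, 1.0), (0.5, 3.0), (4.0, 2.5), (1.5, 0.5)]
depot = (0.0, 0.0)

def nearest_neighbor_route(start, points):
    """Greedy tour: always drive to the closest remaining stop.
    Not optimal, but a reasonable first cut at an 'optimized route'."""
    route, here, remaining = [], start, list(points)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(here, p))
        remaining.remove(nxt)
        route.append(nxt)
        here = nxt
    return route

print(nearest_neighbor_route(depot, stops))
```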

Bosch WaterBoost

Squirts of Water Can Boost Engine Performance, Fuel Economy by 13%

According to Bosch, which knows a thing or two about cars, “even advanced gasoline engines waste roughly a fifth of their fuel.” Since everything I know about cars came from trying (and failing) to keep my ancient Swedish turbobrick alive, I’m just going to go ahead and take Bosch’s word for it. Bosch says that to keep an engine cool, especially when it’s working hard, a bunch of extra gas gets burned simply as a coolant rather than contributing to quickly getting you where you probably should have been 5 minutes ago.

If using gasoline as a coolant seems wasteful to you, that’s because it is, in fact, wasteful. Why not use something else instead? Maybe something that occasionally falls out of the sky for free? Bosch has been working on a water injection system for engines called WaterBoost that can reduce CO2 emissions by 4 percent, boost engine performance by 5 percent, and improve fuel economy by 13 percent.

Essentially, spraying water into the intake cools the entire system. Cooler temperatures during the combustion cycle allow you to pack more air into the combustion chamber, which increases performance. It also results in a more complete burn, reducing pollutants. The fuel efficiency increase comes from the fact that you can get more power out of the same amount of gas, meaning that the engine isn’t sucking down quite as much of it, especially at higher rpms. Consequently, you’ll see the biggest efficiency increase during acceleration and extended highway driving.
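
The “pack more air” claim follows from the ideal gas law: at constant pressure, the density of the intake charge scales inversely with its absolute temperature. A back-of-the-envelope check, with intake temperatures assumed purely for illustration (Bosch hasn’t published these figures):

```python
# Ideal gas law: at constant pressure, air density scales as 1/T (kelvin).
# Temperatures below are assumptions for illustration only.
T_hot = 60 + 273.15    # intake charge without water injection, in K
T_cool = 35 + 273.15   # after evaporative cooling from the water spray

density_gain = T_hot / T_cool - 1
print(f"{density_gain:.1%} more air per intake stroke")  # ~8.1% more air
```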

The first production car to use this technology was the absurdly expensive BMW M4 GTS, but Bosch is now targeting small, three- and four-cylinder engines with turbochargers. Lots of midsize cars use engines like these, so cumulative benefits should be huge. If you end up owning a car with WaterBoost, you’ll have to remember to fill up a tank with distilled water. However, this isn’t as big of a deal as it probably sounds, because the system uses only a few milliliters per kilometer, meaning that one full tank of water will last 3,000 km or so. And if you forget to fill the tank up for a while, the worst that can happen is that you lose the benefits that WaterBoost offers until you put some water into it again.
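
That 3,000 km figure is easy to sanity-check. Assuming a 5-liter tank and 1.5 milliliters per kilometer, both illustrative numbers consistent with “a few milliliters per kilometer,” the range between refills works out like so:

```python
# Illustrative numbers; Bosch quotes only "a few milliliters per kilometer."
tank_ml = 5_000              # assumed 5-liter distilled-water tank
consumption_ml_per_km = 1.5  # assumed average consumption

print(f"~{tank_ml / consumption_ml_per_km:,.0f} km per fill")  # ~3,333 km
```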

As cars ever so slowly transition away from gasoline engines, the fact is that the vast majority of people (which is a lot of people) are going to keep on using them for the foreseeable future. With that in mind, every little bit of improvement in efficiency and reduction in emissions that we can manage is important. And if Bosch can manage to turn something as simple and cheap and refreshing as water into a double-digit boost in fuel economy for the kinds of cars that people actually buy, then yeah, we’re sold.

Centimeter-Level GPS Positioning for Cars

Superaccurate GPS may soon solve three robocar bugbears—blurred lane markings, bad weather, and over-the-horizon blind spots. These are things lidar and radar often can’t see, see through, or see around. 

A group led by Todd Humphreys, an aerospace engineer at the University of Texas at Austin, has just tested a software-based system that can run on processors in today’s cars, using data from scattered ground stations, to locate a car to within 10 centimeters (4 inches). That’s good enough to keep you smack in the middle of your lane all the time, even in a blizzard.  

“When there’s a standard deviation of 10 cm, the probability of slipping into next lane is low enough, meaning 1 part in a million,” he said. Today’s unaided GPS offers meter-plus accuracy, which gives you maybe 1 part in 10, if that, he adds.

That’s not a great percentage, particularly if you’re driving a semi. Lane-keeping discipline is non-negotiable for a robocar.
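
The 1-in-a-million figure is Gaussian tail math: if the lateral position error is zero-mean with standard deviation sigma, the chance of straying past a lane margin d is erfc(d / (sigma * sqrt(2))). A quick check, assuming a half-meter margin (Humphreys didn’t specify one, and the exact result is sensitive to that choice):

```python
import math

def lane_departure_prob(sigma_m, margin_m=0.5):
    """Two-sided Gaussian tail: chance a zero-mean lateral error with
    standard deviation sigma_m strays past the lane margin.
    The 0.5 m margin is an assumed illustration."""
    return math.erfc(margin_m / (sigma_m * math.sqrt(2)))

print(f"sigma = 10 cm: {lane_departure_prob(0.10):.1e}")  # ~6e-7: order 1 in a million
print(f"sigma = 1 m:   {lane_departure_prob(1.00):.2f}")  # ~0.62: hopeless for lane-keeping
```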

The team, which was backed by Samsung, began with the idea of giving smartphones super-GPS positioning power. But though the idea worked, it was limited by inadequate antennas, which neither Samsung nor any other vendor is likely to improve unless some killer app comes along to justify the extra cost.

“We pivoted then, to cars,” Humphreys says.

Humphreys works on many aspects of GPS; just last month he wrote for IEEE Spectrum on how to protect the system from malicious attack. He continues to do basic research, but he also serves as the scientific advisor to Radiosense, a firm his students recently founded. Ford has contacted the firm, as has Amazon, which may be interested in using the positioning service in its planned fleet of cargo-carrying drones. Radiosense is already working with its own drones—“dinnerplate-size quadcopters,” Humphreys says.

Augmented GPS has been around since the 1980s, when it finally gave civilians the kind of accuracy that the military had jealously reserved to itself. Now the military uses it too, for instance to land drones on aircraft carriers. It works by using not just satellites’ data signals but also the carrier signals on which the data are encoded. And, to estimate distances to satellites without being misled by the multiple pathways a signal may take, these systems use a range of sightings, taken, say, while the satellite moves across the sky. They then use algorithms to locate the receiver on a map.
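
The reason carrier tracking buys so much accuracy is wavelength: the GPS L1 carrier repeats every 19 centimeters, versus roughly 293 meters for one chip of the civilian C/A ranging code, and a receiver can typically measure either to about 1 percent of its length. The rough numbers:

```python
C = 299_792_458.0          # speed of light, m/s

l1_hz = 1575.42e6          # GPS L1 carrier frequency
ca_chip_hz = 1.023e6       # C/A ranging-code chipping rate

carrier_wavelength = C / l1_hz     # ~0.19 m
code_chip_length = C / ca_chip_hz  # ~293 m

# Rule of thumb: a receiver resolves either to roughly 1% of its length.
print(f"code ranging precision:    ~{0.01 * code_chip_length:.1f} m")            # ~2.9 m
print(f"carrier ranging precision: ~{0.01 * carrier_wavelength * 1000:.1f} mm")  # ~1.9 mm
```

The catch, as the paragraph above notes, is working out the whole number of carrier cycles between satellite and receiver, which is what the multi-sighting algorithms are for.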

But until now it worked only if you had elaborate antennas, powerful processing, and quite a lot of time. It could take 1 to 5 minutes for the algorithm to “converge,” as the jargon has it, onto an estimate.

“That’s not good, I think,” Humphreys says. “My vision of the modern driver is one who’s impatient, who wants to snap into 10-cm-or-better accuracy and push the ‘autonomy’ button. Now that does require that the receiver be up and running, but once it’s on, when you exit a tunnel, boom, you’re back in.” And in your own lane.

Another drawback of existing systems is cost. “I spoke with Google—they gave me a ride in Mountain View, Calif., in November—and I asked them at what price point this would be worth it to them,” Humphreys says. “They originally had this Trimble thing, $60,000 a car, but they shed it, thinking that that was exorbitant. They want a $10,000 [total] sensor package.”

The Texas student team keeps the materials cost of the in-car receiver system to just US $35, running its software-defined system entirely on a $5 Raspberry Pi processor. Of course, the software could piggyback, almost unnoticed, on the powerful robocar processors that are coming down the pike from companies like Nvidia and NXP.

Just as important as the receivers is the ground network of base stations, which the Texas team has shown must be spaced within 20 kilometers (12 miles) of one another for full accuracy. And, because the students’ solar-powered, cellphone-network-connected base stations cost only about $1,000 to build, it wouldn’t be too hard to pepper an entire region with them.
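
At that spacing, the coverage arithmetic is encouraging: one station per 20-by-20 km grid cell means even a sprawling metro area needs only a few dozen of them. The metro area below is an assumed figure for illustration:

```python
spacing_km = 20.0
station_cost_usd = 1_000

# One station per grid cell of spacing x spacing.
area_per_station_km2 = spacing_km ** 2   # 400 km^2

metro_area_km2 = 10_000   # assumed: roughly a large U.S. metro region
stations = metro_area_km2 / area_per_station_km2
print(f"~{stations:.0f} stations, ~${stations * station_cost_usd:,.0f}")  # ~25 stations, ~$25,000
```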

You’d need more stations per unit of territory where satellite signals get bounced around or obscured, as they are in cities, particularly heavily built-up parts. It’s tough, Humphreys admits, in the canyons of Manhattan. Conveniently, though, it is in just such boxed-in places that the robocar’s cameras, radar, and lidar work the best, thanks to the many easily recognized buildings there that can serve as landmarks.

“Uber’s engineers hate bridges, because there are not a lot of visual features,” Humphreys notes. “They would do well to have 10-cm precise positioning; it can turn any roadway into a virtual railway.”

What’s next, after cars? Humphreys is still looking for the killer app to justify superaccurate GPS in handheld systems.

“We’re looking into outdoor virtual reality,” Humphreys says. “You could put on a visor and go on your favorite running trail, and it would represent it to you in a centimeter-accurate way, but artistically enhanced—maybe you’d always have a blue sky. You could craft the world to your own liking.” While staying on the path, of course.

Drive.ai

Drive.ai Solves Autonomous Cars' Communication Problem

Understandably, most people working on autonomous vehicles are very focused on things like getting the cars to avoid running into stuff. And in general, this is something that autonomous cars have gotten very good at—especially on highways and in other areas where they don't have to worry about unpredictable humans running around and making their thinking more complicated and difficult.

Drive.ai is one of a small handful of startups pushing for rapid commercialization of autonomous driving technology. It came out of stealth mode back in April, and IEEE Spectrum wrote about its top-to-bottom deep learning approach to the problem. Today, Drive.ai is “officially emerging from stealth” (whatever that means), and we've learned a bit more about what the company is working on. Drive.ai is touting a retrofit kit for business fleets that can imbue existing vehicles with full autonomy. But uniquely, it includes an HRI (human-robot interaction) component in the form of a big display that lets the car communicate directly with people. At first glance, something like this may seem like a novelty, but it's a feature that autonomous cars desperately need.
