Cars That Think

Are Self-Driving Cars Really the Solution to Drunk Driving?

If you’re like me, you probably assumed that once cars begin driving themselves, it will signal the end of the “fatal five” causes of vehicle fatalities: excessive speed, drunken and drug-impaired driving, failure to wear a safety belt, drowsy driving, and distracted driving. (The latter is ever more frequently caused by infotainment devices built right into the car.) But according to a traffic safety expert, we’d be wrong.

In a Conversation.com article, Ian J. Faulks, a researcher at the Center for Accident Research and Road Safety at Queensland University of Technology in Brisbane, writes that autonomous vehicles won’t eliminate drunken driving because, “Even if it is an autonomous vehicle, the alcohol-impaired person is still the driver.”  

That statement seems quite counterintuitive if you think of a self-driving vehicle as a machine designed to do the steering, accelerating, braking, and signaling for you. After all, how could you mess up something over which you’re not even in control? But Faulks, who is also a member of the International Council on Alcohol, Drugs, and Traffic Safety, makes a sobering point:

Actions need to be taken to start the vehicle, enter instructions regarding destination and route, and engage the self-driving function. These actions constitute driving, and if you’re drunk, that’s [drunk] driving. Moreover, there are serious issues concerning the possible situations where a driver in an autonomous vehicle needs to intervene due to an emergency or system malfunction. Any such intervention constitutes driving, and again, if you’re drunk, that’s [drunk] driving.

I, for one, still hold out hope that there will come a day when cars will reach the level of sophistication where a person can warble the equivalent of, “Home, James” and the car will deliver him or her from the ballpark, pub, or afterparty where there was a little bit too much celebration, right to his or her door. My faith is not unwarranted. Google researchers, who are thinking along the same lines, are said to be already testing self-driving cars without any human drivers or even steering wheels.

Still, Faulks thinks that even in a future where robocars exist, limiting the number of traffic fatalities resulting from driving under the influence of alcohol will require the same prescriptions in place today.

…the solutions will remain with interlock devices to deter an alcohol-impaired person from driving, traffic enforcement to catch the drunken driver, and encouragement for the erstwhile [drunk] driver to instead choose to become a passenger…in a cab, bus, or by traveling with a sober driver.

But because cops can’t be everywhere, and because, despite decades of awareness campaigns, friends do let friends drink and drive, I’m rooting for Google and the automakers to produce vehicles whose design takes into consideration the person who wants to be the life of the party and still make it home alive.

Audi Robotic Racecar Relies on GPS

This weekend Audi will show off its self-driving technology under what would seem to be the most challenging conditions imaginable: a race track.

Yet despite the high speeds involved, the feat is in some ways simpler than navigating city streets, where you have to recognize and avoid pedestrians and squirrels. A race car merely has to hold its position on the track, leaving it only to pass or dodge another car.

And because the track is a known quantity, the car can keep it all in its little electronic head and rely heavily on GPS—provided it’s corrected to an accuracy of just a few centimeters. Which, in this case, it will be.
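
For the curious, here’s a toy Python sketch of the differential-correction idea that gets GPS from meters down to centimeters: a base station at a precisely surveyed spot measures the error in its own fix and broadcasts it, and the car subtracts that error from its own reading. It’s only an illustration of the concept; real centimeter-grade systems such as RTK work on carrier-phase measurements, and nothing here reflects Audi’s actual setup.

```python
# Toy illustration of differential GPS correction. Coordinates and the
# helper names are made up; real RTK systems use carrier-phase data.

SURVEYED_BASE = (49.32770, 8.56570)   # base station's true, surveyed position

def correction(base_raw_fix):
    """Error the base station sees in its own GPS fix (degrees lat/lon)."""
    return (base_raw_fix[0] - SURVEYED_BASE[0],
            base_raw_fix[1] - SURVEYED_BASE[1])

def corrected_fix(car_raw_fix, base_raw_fix):
    """Subtract the base station's observed error from the car's raw fix."""
    d_lat, d_lon = correction(base_raw_fix)
    return (car_raw_fix[0] - d_lat, car_raw_fix[1] - d_lon)

print(corrected_fix(car_raw_fix=(49.32981, 8.56601),
                    base_raw_fix=(49.32772, 8.56567)))
```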

The public demonstration will take place on Sunday at the Hockenheim race track, in southwestern Germany. The car, an RS 7, will do a lap or two at race pace, around 250 kilometers per hour (about 155 miles per hour). It will duel with an identical but human-piloted car. My money’s on the robot.

“We’re going into the curves, the cornering, just like a professional race driver,” Peter Bergmiller, a technician for Audi, says in the company’s video promotion.  “So for example, we have lateral accelerations of more than 1 g.”

Watch the promo:

Watch Tesla's Robocar Achieve Warp Speed

Our tech elite grew up with Star Trek, which is why a lot of our tech looks like props from that old TV show: the flip phone is like Capt. Kirk’s communicator, the laser pointer is a phaser, the tricorder is a reality.

Next year, when Tesla Motors releases the dual-motor, four-wheel-drive, semiautonomous version of its Model S, drivers will boldly go where no (electric) car has gone before: into warp drive.

This video shows a test ride in which the zero-to-60 time comes to 3.6 seconds, 0.4 seconds longer than Tesla is promising for the car next year. Note how it’s framed by lighting reminiscent of the special effects that early Star Trek movies used to suggest faster-than-light travel:

Study Shows Voice-Controlled Devices Distract Drivers

Ever since Knight Rider introduced us to its star car, KITT, in 1982, a car we can talk to has been high on many drivers’ sci-fi wish lists. Some of today’s models do respond to spoken commands from drivers. And while talking to your car can be convenient, a series of tests conducted by University of Utah researchers found that talking to voice-activated devices like smartphones and dashboard computers can be a dangerous distraction for drivers.

 

Researchers looked at two different kinds of voice control in two different studies, which follow up on previous work that, like these studies, was sponsored by the AAA Foundation for Traffic Safety. In the first study, they observed how interaction with dashboard-based “infotainment” systems in cars from six auto manufacturers (Chevrolet, Chrysler, Ford, Hyundai, Mercedes, and Toyota) affected driver response times. They also examined how drivers reacted when listening to and responding to text messages and emails relayed to them by Apple’s voice-controlled digital assistant, Siri. You can read the full reports here.

A total of 162 drivers chatted with their smartphone servants and dashboard computers in a series of three experimental setups: a laboratory, a driving simulator, and a car driving around neighborhoods in Salt Lake City. The difficulty of various tasks, including listening to text messages read by a computer and navigating a menu using voice commands, was rated on a scale of one to five points. These numbers represent the mental workload the activity subjected drivers to, with a score of one comparable to driving with no distractions and a score of five comparable to solving complicated math problems while behind the wheel. Levels of driver distraction were determined by measuring factors like heart rate and response time.

For in-dash computer systems that let drivers control car functions with their voices, University of Utah researchers led by psychology professor David Strayer tested how much attention it took for drivers to accomplish common tasks like changing radio stations or dialing phone numbers. 

“The basic root question is can you make these techs so they’re not demanding and safe to be used,” Strayer told IEEE Spectrum. To learn that, the team had to figure out how safe they are right now, and what factors impact that.

Strayer and his colleagues found that the distraction associated with in-dash systems was closely tied to how well those systems understood spoken commands. Generally speaking, results showed that the better the voice recognition in a system, the less distracting it was to drivers. They also found that regardless of the system being used, listening to messages was less distracting for drivers than composing responses.

Among the in-dash computer systems tested, Toyota’s Entune distracted drivers the least, with a score of 1.7 points on the workload scale — about the equivalent of listening to an audiobook while driving. With a workload rating of 3.7 points, Chevrolet’s MyLink system, described by Strayer as “almost unusable,” had the poorest performance in the tests.

“We had people trying to change the radio station and changing the temperature instead,” Strayer says, noting that the system left some testers hurling four-letter words instead of requests following a string of misunderstood voice commands. 

The results didn’t come as a surprise to AAA officials who sponsored the study. “We already know that drivers can miss stop signs, pedestrians and other cars while using voice technologies because their minds are not fully focused on the road ahead,” said Bob Darbelnet, chief executive officer of AAA. “We now understand that current shortcomings in these products, intended as safety features, may unintentionally cause greater levels of cognitive distraction.”

Apple’s Siri, running on a modified version of iOS 7, scored the worst in testing, with a 4.1 point workload rating. That’s several steps up in mental workload from just talking on a cellphone, and just shy of the top of the scale, which Strayer described as “like trying to balance your checkbook” while driving. In a statement, Apple pointed out that drivers in the study were not taking advantage of available software to optimize Siri for use behind the wheel. 

The study also found that, contrary to previous studies, listening to messages read by natural human voices and synthetic, computer-generated ones occupied drivers to the same degree. Older studies had found synthetic voices to be more distracting to drivers, suggesting to researchers that improvements in computer-generated voice quality in recent years have helped to close that gap. 

“The technology on the text-to-speech part has improved to the point where it’s no longer part of the problem,” Strayer says. But, he adds, even as technology improves, “... there’s only so much room for improvement before we reach the limits of what the brain can do.”

 That doesn’t mean we should abandon voice command technology, which is obviously something consumers want, Strayer says, but it should make us think about how we put it to use. “It would be a mistake to throw the technology out. It clearly has promise,” he tells IEEE Spectrum. “But there are a lot of issues with the technology that need to be solved, too.”

Some of those issues, Strayer says, are technological hiccups that can be ironed out. But others are thornier psychological concerns over how much outside input and interaction the human brain can manage while driving. 

Strayer has a couple of suggestions for car manufacturers to make these systems safer and less distracting. Designers can cut the physical clutter to make sure people keep their eyes on the road, and place a premium on the user interface — menus should be easy to navigate and respond reliably to commands, to minimize user frustration.

In addition, he says, voice interactions with a car should be short and directly related to driving. Being able to change the temperature inside a car without reaching for a knob or button is probably a good feature, he says. But sending a tweet or posting to Facebook should probably wait until you’re no longer operating an automobile. 

Strayer also pointed out that in-car systems are not as easy to upgrade as mobile devices, often requiring a trip to the dealer if they can be upgraded at all. That means someone who buys a car with a poor voice control system could be stuck with it for the life of the vehicle, which tends to be much longer than the life of other devices like smartphones or personal computers. 

“This technology is something that is meant to make your car stand out,” says Strayer. “What I think the automobile industry is about to find out is that if it’s frustrating and you’re swearing at it, it might make your cars stand out for the wrong reasons.”

Tesla's Model S Will Offer 360-degree Sonar

Elon Musk's much-anticipated announcement last night turned out to be less about self-driving capability and more about good, old-fashioned oomph. The Tesla Model S will offer four-wheel drive with a second motor for the extra axle. Musk said that this gets the car from zero to 60 miles per hour (that is, to 97 kilometers per hour) in 3.2 seconds. That’s supercar territory.
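
For a rough sense of what those numbers mean, here’s my own back-of-the-envelope arithmetic (not Tesla’s): average acceleration is just 60 mph, converted to meters per second, divided by the quoted time. The 3.6-second run is from the test-ride video in the earlier post.

```python
# Average acceleration implied by a quoted 0-60 mph time, expressed in g.
MPH_TO_MS = 0.44704   # meters per second per mile per hour
G = 9.81              # standard gravity, m/s^2

def avg_accel_g(zero_to_sixty_s):
    return (60 * MPH_TO_MS) / zero_to_sixty_s / G

print(round(avg_accel_g(3.2), 2))   # promised dual-motor spec: ~0.85 g
print(round(avg_accel_g(3.6), 2))   # the filmed test ride: ~0.76 g
```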

It’s also Thinking Cars territory, because the all-electronic drive juggles torque between the front and back wheels from one millisecond to the next, improving both the car’s grip on the road and its energy efficiency. The updated model will go 443 km (275 miles) on a charge, up 3.7 percent from the standard model.
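
As a purely hypothetical sketch of what “juggling torque” could mean in practice, the snippet below shifts the requested torque away from whichever axle is slipping more. The function, numbers, and thresholds are invented for illustration; Tesla has not published its control logic.

```python
# Hypothetical front/rear torque apportioning based on measured wheel slip.
# Invented for illustration only; this is not Tesla's controller.

def split_torque(request_nm, front_slip, rear_slip, base_front_share=0.4):
    """Shift torque toward the axle with more grip (less slip)."""
    bias = 0.5 * (rear_slip - front_slip)      # rear slipping -> favor front
    front_share = min(max(base_front_share + bias, 0.0), 1.0)
    return request_nm * front_share, request_nm * (1.0 - front_share)

# Driver requests 600 N·m while the rear wheels start to spin:
front, rear = split_torque(600, front_slip=0.02, rear_slip=0.10)
print(round(front), round(rear))   # front share rises from 40% to 44%
```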

On the other hand, the car’s self-driving capability—the feature the auto press expected to be the main news—turns out to be just an echo of what Mercedes-Benz already offers in its S Class. Like the Mercedes flagship, the Tesla will have forward-looking radar, cameras that recognize stop signs, and systems that use sensor data to keep the car in its lane and to avoid headlong crashes.

After a prolonged tease from a showman like Musk, we expect more. Tesla has merely become the next in a line of automakers that say they, too, will eventually catch up with Mercedes. Volvo plans a similar degree of autonomy in two years, Nissan and Audi in roughly four.

But Musk did give us two interesting nuggets.

First, every Tesla car will come with self-driving equipment—something no other manufacturer does. To be fair, it’s fairly easy for a niche player like Tesla. The old General Motors may have had a car for “every purse and purpose,” but Tesla will settle for every prince and potentate. At least for now.

Second, the cars will pack sonar, a.k.a. ultrasound range finding. This one is unusual because Musk appears to be talking about something beyond the cheap, compact devices other automakers now salt around their cars. Those sensors mostly function as aids to close-in work, like self-parking, and as backups to the front- and rear-looking radars. The Mercedes S Class, for instance, has 12 of them, with ranges under 5 meters.
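
Ultrasonic range finding is conceptually simple: emit a ping, time the echo, and distance is half the round-trip time multiplied by the speed of sound. The snippet below just puts numbers on that relation; it’s my illustration, not any carmaker’s spec.

```python
# Echo ranging: distance = speed of sound x round-trip time / 2.
SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degrees C

def echo_distance_m(round_trip_s):
    return SPEED_OF_SOUND * round_trip_s / 2

print(round(echo_distance_m(0.029), 2))   # a ~29 ms echo puts an object ~5 m away
```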

But Musk says that his ultrasound system is “long-range” and offers “360 degree” coverage. He adds that it “establishes a protective cocoon around the car. It can see anything: a small child, a dog. And it can operate at any speed.”

Every self-driving car needs many senses. Musk ticks off four: radar, cameras, sonar and GPS. Eventually there will be a fifth: information channeled from other cars and from the road itself.

Volvo Tech Makes Trucks Smart Enough to Not Run You Over

As much as we’re looking forward to cars with autonomous features, adding autonomy to trucks is at least as valuable, if not more valuable, considering how much time trucks (and the humans in charge of them) spend on the roads.

Volvo, which has been experimenting with autonomous vehicles for quite a while, has a goal of making its trucks (and cars) accident free, and it’s working on a project that combines cameras, radars, and other sensors to create a predictive 360-degree view of things that Volvos should probably try to avoid running into.

 


Musk Promises 90% Autopilot for Teslas in 2015, Doesn't Say How

In September of 2013, Elon Musk predicted that Tesla automobiles would operate autonomously for “90 percent of miles driven within three years.” One year later, he’s revised his estimate a bit, now saying that “a Tesla car next year will probably be 90-percent capable of autopilot. Like, so 90 percent of your miles can be on auto. For sure highway travel.”

Although he didn’t go into any detail (besides some suggestion of an obligatory sensor fusion approach), Musk seems confident that this is something that Tesla will make happen, not just sometime soon, but actually next year. We’d love for this to be true, but Musk is going to have to find a way to solve some basic problems that not even Google has locked down.

 

It’s not clear what, exactly, Musk means by 90-percent capable. Does he mean that 90 percent of the time, on any road, the car can be autonomous? Or does he mean that on 100 percent of highways, and some lesser percentage of other roads, a Tesla will be able to drive itself? Let’s take a look at the highway example (since a highway is probably the most structured and controlled environment that you can expect to drive in short of a car wash) and think about what Tesla is going to have to figure out before any of us would probably be willing to let a car take over completely.

On well-marked, well-maintained highways during the day in good weather, autonomous driving is (relatively) simple. All you have to do is combine adaptive cruise control with lane departure detectors (both already available in consumer cars) to keep yourself safely in a lane while making sure you don’t run into the car in front of you if it slows down for some reason. Teslas recently acquired lane departure detection, which works thanks to a camera mounted at the top of the windshield that visually identifies lane markers and watches to see if they drift.
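
For a feel of what that camera is doing, here’s a bare-bones lane-marker detector in Python with OpenCV. It’s only a sketch of the general technique (edge detection plus a Hough transform over the lower half of the image); Tesla’s actual pipeline isn’t public, and the file name and thresholds below are made up.

```python
# Minimal, illustrative lane-marker detection: find strong edges in the
# lower half of a dashcam frame and fit line segments to them.
import cv2
import numpy as np

def detect_lane_segments(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    roi = gray[h // 2:, :]                      # ignore sky and horizon
    edges = cv2.Canny(roi, 50, 150)             # edge map of the lane paint
    # Probabilistic Hough transform returns candidate line segments.
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=40, minLineLength=40, maxLineGap=20)
    return [] if segments is None else segments.reshape(-1, 4)

# Example usage with a hypothetical saved frame:
# frame = cv2.imread("dashcam_frame.jpg")
# for x1, y1, x2, y2 in detect_lane_segments(frame):
#     offset = frame.shape[0] // 2             # segments are relative to the ROI
#     cv2.line(frame, (x1, y1 + offset), (x2, y2 + offset), (0, 255, 0), 2)
```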

However, at night, in bad weather, or if the lane markers are faded, the system won’t function nearly as well, if at all. Humans use other visual cues in these cases, and although computer vision can do this to some extent, there’s a limit to how adaptable you can expect a system like this to be. The same goes for situations that are a mix of other variables that the system doesn’t necessarily expect, like accidents or construction zones. At the moment, such variability is difficult for computers to deal with.

So, okay, if we accept that the aforementioned situations are the 10 percent that a car can’t deal with and for which a human has to be in control, the problem then becomes how you transition from vehicle autonomy to human control. Going the other way is easy: the car can ask you if you want it to take over driving, and if you agree, it takes over and you relax. You’ll probably be legally required to remain at the wheel paying attention on some level while it’s in control, but what if you fall asleep? Even if the car knows that you’ve fallen asleep, the fundamental problem is that the car can’t force you to instantly take control again regardless of the circumstances. It can try to wake you up, but if it can’t, then what? Does it pull over on the side of the highway? Does it just try to keep going and do its best? What’s the solution?
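
To make that open question concrete, here’s one hypothetical escalation policy written out as code: keep alerting the driver for a while, and if there’s still no response, fall back to stopping the car. Nothing here reflects any shipping system; it’s just the decision tree the paragraph above is asking about.

```python
# Hypothetical handover escalation logic, for illustration only.
from enum import Enum, auto

class Action(Enum):
    KEEP_DRIVING = auto()
    ALERT_DRIVER = auto()
    PULL_OVER = auto()

def handover_step(needs_human, driver_responsive, seconds_alerting):
    if not needs_human:
        return Action.KEEP_DRIVING
    if driver_responsive:
        return Action.ALERT_DRIVER   # hand over as soon as the driver confirms
    if seconds_alerting < 10:
        return Action.ALERT_DRIVER   # keep escalating chimes and vibration
    return Action.PULL_OVER          # last resort: stop safely out of traffic

print(handover_step(needs_human=True, driver_responsive=False, seconds_alerting=12))
```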

There are many other examples that could aptly illustrate the problems that must be ironed out before releasing a car with the capability for full autonomy into the consumer market. And those are just the technical challenges. Now add on the hurdles of dealing with the government (state departments of motor vehicles, the National Highway Traffic Safety Administration, etc.) and insurance companies, which will be another matter entirely (none of them are ready for consumer autonomy). Then there’s the issue of what will happen when, inevitably, the first self-driving car gets into the first autonomous-mode accident. The point we’re making here is that there’s a significant difference between “technically capable,” “realistically capable,” and “available to consumers.” And we’re not nearly as confident as Musk seems to be regarding where things will stand in 2015.

We don’t mean to sound pessimistic about all of this, because autonomous cars are an inevitability that we can’t wait to experience for ourselves. It’ll mean vastly safer and more efficient driving, hugely improved quality of life for people with lengthy commutes, and eventually (and best of all) no more lengthy waits at intersections.

Video: [ CNN Money ]

Mercedes-Benz to Test Its Robocar at Abandoned Naval Base

Mercedes-Benz already has approval from the government of California to test self-driving cars on public roads under certain conditions.

But the automaker has been looking for a place to test its robocars where it can get around those “certain conditions” and have its robocars encounter things that might occur during normal driving but are difficult to replicate given the strictures of the state authorization. Late last week, the company reported that it has found just such a place.

Mercedes-Benz Research & Development North America and the Contra Costa (California) Transportation Authority  announced plans to begin testing autonomous vehicles at the Concord Naval Weapons Station in the city of Concord later this month. The partners see the now-shuttered naval base as an ideal site for trying out an array of new transportation technology, including smart infrastructure such as traffic signals that talk with cars. 

 

The 850-hectare Concord Naval Weapons Station has 31.5 kilometers of paved, citylike roadways that are off limits to the public, making it the largest secure test bed site in the United States.

“We can use the test site in Concord, California, to run simulation tests with self-driving vehicles in a secure way, including specific hazardous situations,” said Axel Gern, head of autonomous driving at Mercedes R&D, in a press release. “Taken in conjunction with the results of our test drives on public roads, these tests will help us with the ongoing development of our autonomous cars.”

Mercedes-Benz has already wowed observers with the self-driving S-Class vehicle that drove without human intervention for the 100 kilometers between Mannheim and Pforzheim, Germany, as it replicated the first-ever documented joy ride taken by Bertha Benz. But now the company can try to see just how much danger a robocar can be placed in and still manage to reach its destination without a scratch. (This no doubt will include some of the stupid stunts that human “drivers” will dream up in order to see what feats a self-driving car is capable of.)

What’s in it for the local government? “The City of Concord is hopeful that use of the Concord Naval Weapons Station as a test bed hub in the region will attract high-tech jobs to our community and allow new technologies to be tested and researched,” said Tim Grayson, Mayor of the City of Concord, in a press release.

Cars That Talk Need Wireless That Works

At two recent auto events the hyped vision of communicative cars came up against a hard reality: the spottiness of wireless technology.

During the Intelligent Transport Systems World Congress, held at Detroit's Cobo Hall last month, the Wi-Fi system died just as I was trying to send my editor a description of a presentation on real-time, car-to-car communication. And when I tried to text him, I found that my carrier's coverage didn't reach inside the auditorium.  

Now comes a similar report from the Paris Auto Show by a car-covering colleague from Canada, the National Post's David Booth. The headline says it all: "If we can't figure out Wi-Fi in a stationary building, what does that say for autonomous car networks?"

"Every single automaker exhibiting at the Paris Expo is currently developing a self-driving automobile, most of which will require interconnectivity if robotic cars are to safely navigate our roads," Booth writes. "That said, the fact that manufacturers can’t always provide enough bandwidth for a few measly laptops in a very stationary building doesn’t exactly breed confidence in our autonomous future."

We've seen this before, nearly two decades ago, when Oracle and other networking companies said computers would soon shed all of their storage and much of their processing power. The resulting stripped-down "thin clients" would then tap the "fat pipes" of a superfast Internet, giving us all the power we needed at a piddling cost. Well, that vision is coming true to a degree. But we still can't rely on the cloud for quick answers to life-and-death questions.

Unreliable wireless communication may well be the greatest single technical problem holding back smart cars. Until we solve it, smart cars will have to treat such communication as just another sensor, and not a particularly reliable one at that.
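
In code, treating communication as just another sensor might look like the toy fusion step below: a message from another car is down-weighted by its age and ignored once it’s stale. The thresholds and function names are invented for illustration.

```python
# Treat a car-to-car message as an unreliable sensor: down-weight it by
# age and drop it entirely once it is stale. Values are illustrative.

STALE_AFTER_S = 0.5   # ignore messages older than half a second

def v2v_confidence(message_age_s):
    if message_age_s >= STALE_AFTER_S:
        return 0.0                       # treat as if never received
    return 1.0 - message_age_s / STALE_AFTER_S

def fuse_gap_estimate(radar_gap_m, v2v_gap_m, message_age_s):
    """Blend radar's gap estimate with one reported by another car."""
    w = v2v_confidence(message_age_s)
    return (1.0 - w) * radar_gap_m + w * v2v_gap_m

print(fuse_gap_estimate(radar_gap_m=32.0, v2v_gap_m=30.0, message_age_s=0.4))
```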

Argonne Labs Simulates a Diesel Engine

Late last year, the handlers of a newly established service at Argonne National Laboratory spent three weeks running programs on the fifth-fastest supercomputer in the world. All they got to show for it was half a second's simulated combustion in a single cylinder of a diesel engine. 

That was plenty.

"To the best of our knowledge, it was the largest diesel engine simulation ever done," says Sibendu Som, a mechanical engineer at Argonne. But, he adds, it was worth it because it nailed down the right degree of detail such simulations need.


