Hyundai Robocar Competition: KAIST Details Weather Problems, Philosophical Differences

The project advisor of KAIST's unmanned car research explains why autonomous cars have so much trouble with rain

Image: KAIST

Last week, we posted a pair of videos from Hyundai’s Future Automobile Technology Competition in South Korea. The videos showed Team KAIST’s autonomous car navigating the course in good weather, and then doing it again the next day, after some heavy rain. We speculated about some of the problems that the rain might have caused for the car, and how tricky it is for autonomous vehicles in general to deal with changing weather conditions.

David Hyunchul Shim, the project advisor for KAIST’s unmanned car research, wrote to us to provide more details about what was going on with their car on that rainy day, and how their philosophy about how autonomous cars should work made things much more difficult.

If you missed our post last week, it provided a direct comparison between KAIST’s autonomous car navigating a course in dry weather, and in wet weather. Here’s the video of the tricky wet weather run:

The competition circuit included missions like pedestrian detection, vehicle following, and obeying road signs. Vehicles that failed these missions were penalized two minutes per mission, and vehicles that required human intervention incurred a three-minute penalty per intervention. A human driver could complete the course in a bit over four minutes, while the winning team made it around in five minutes and thirty seconds. A flawless run by KAIST’s car took a bit over six minutes.
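
For a concrete sense of how those penalties stack up, here is a minimal sketch (the function name and the example run are our own illustration, not official scoring code):

```python
# Sketch of the scoring rules described above: base course time plus
# 2 minutes per failed mission and 3 minutes per human intervention.
def adjusted_time(base_minutes: float, failed_missions: int, interventions: int) -> float:
    """Return the penalty-adjusted course time in minutes."""
    return base_minutes + 2 * failed_missions + 3 * interventions

# Example: a 6-minute run with one failed mission and one intervention
# ends up scored as 6 + 2 + 3 = 11 minutes.
print(adjusted_time(6.0, failed_missions=1, interventions=1))  # 11.0
```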

The fundamental difference between KAIST’s approach and that of the winning team (Hanyang University) is a philosophical one. It’s possible to perform very accurate localization on a road if you have a pre-existing map of that road that your autonomous car can compare against what its sensors see. This is what Google does with its autonomous cars, and what the winner of the Hyundai competition did as well.
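
To make the map-based approach a bit more concrete, here is a minimal sketch of the general idea (the function names, the brute-force pose search, and the point-cloud representation are our own simplifications, not any team's actual implementation): the car compares what its sensors currently see against a prebuilt map and picks the pose that lines things up best.

```python
import numpy as np

def pose_score(map_points: np.ndarray, sensed_points: np.ndarray, pose) -> float:
    """Score a candidate (x, y, heading) pose by how closely the sensed points,
    transformed into map coordinates, land on known map features."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    rotation = np.array([[c, -s], [s, c]])
    in_map_frame = sensed_points @ rotation.T + np.array([x, y])
    # Mean nearest-neighbor distance from each sensed point to the map
    dists = np.linalg.norm(in_map_frame[:, None, :] - map_points[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def localize(map_points, sensed_points, candidate_poses):
    """Brute-force search: return the candidate pose that best explains the scan."""
    return min(candidate_poses, key=lambda p: pose_score(map_points, sensed_points, p))
```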

The KAIST team, however, feels that relying on prebuilt maps isn’t the best way to go for autonomous cars: “We believed this is not the right way to go if our system is to function even when it runs in an area where a map is not available,” says Shim. Instead, they’re trying to develop an autonomous car that can drive itself just like humans do, relying on sensors and a general knowledge of the environment to tackle any road, anytime, anywhere.

This difference in philosophy can be seen as far back as the DARPA Grand Challenges, where the winning team (Stanford) chose to let their robot Stanley autonomously decide its own speed over the course, whereas runner-up CMU hand-labeled thousands of GPS waypoints with speed limits for their robots (Highlander and Sandstorm) to follow. You can watch more about that here.

The other issue that KAIST had to deal with was the wet weather, or more specifically, the wet road surface and how the sensors on the car handled it, as Shim explains:

The real problem on the second day was due to the sudden rainfall right before our run. We used LIDAR and cameras together to detect the lanes, and it became extremely difficult due to the sporadic readings caused by the reflection of the water. A few days before the competition, we had an opportunity to find a sensor setting for wet roads (but not as wet as the competition day), so the car was able to run on the watery road much better than we thought. However, as the threshold for lane detection was set higher than the normal-day setting, the sensor readings decreased by half, and as you can see, the obstacle detection was reduced. This caused all our problems with lane and obstacle detection.

Basically, what KAIST had to do was tell their optical vision systems to ignore a lot of the data they were seeing, because so much of it was likely to be caused by increased reflections from the wet road. Raise that threshold even a tiny bit too far, though, and the vision system starts ignoring lanes and obstacles that really are there, which is what caused KAIST’s vehicle so much trouble.
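
Here is a minimal sketch of that tradeoff (the intensity distributions and threshold values are made-up numbers for illustration, not KAIST's actual settings): raising the threshold suppresses the spurious wet-road reflections, but it also throws away a chunk of the genuine lane returns.

```python
import numpy as np

def keep_returns(intensities: np.ndarray, threshold: float) -> np.ndarray:
    """Keep only the sensor returns whose intensity exceeds the threshold."""
    return intensities[intensities > threshold]

rng = np.random.default_rng(0)
lane_returns = rng.normal(loc=60, scale=10, size=500)     # genuine lane-marking returns
wet_reflections = rng.normal(loc=45, scale=15, size=500)  # spurious glare off the wet road

for name, threshold in [("dry-day setting", 30), ("wet-day setting", 55)]:
    lanes_kept = keep_returns(lane_returns, threshold).size
    noise_kept = keep_returns(wet_reflections, threshold).size
    print(f"{name}: keeps {lanes_kept}/500 lane returns and {noise_kept}/500 reflections")
```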

Furthermore, the partly cloudy day was not at all friendly to the cameras that were being used to detect road signs and pedestrians. Shim describes the failure at 02:50 in the video: “the pedestrian detection failed because the camera was set for ‘cloudy’ day. But, as the car passed there, the sky partly cleared and the camera faced the sun directly. We checked the image from the camera and it was almost all saturated to white.” This also caused problems on the final parking task.
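
Spotting this failure mode after the fact is simple enough; here is a minimal sketch (the function name and the cutoff values are our own illustrative assumptions) that flags a frame as washed out when most of its pixels are at or near pure white:

```python
import numpy as np

def frame_is_saturated(gray_image: np.ndarray, level: int = 250, fraction: float = 0.6) -> bool:
    """Flag an 8-bit grayscale frame as unusable if most pixels are near pure white."""
    return float(np.mean(gray_image >= level)) > fraction

# Example: a frame washed out by looking directly into the sun
washed_out = np.full((480, 640), 255, dtype=np.uint8)
print(frame_is_saturated(washed_out))  # True -> detections from this frame are unreliable
```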

As Shim points out, “it’s not so easy to come by videos showing autonomous driving on a rainy day,” and he’s absolutely right, because autonomous cars aren’t very good at it. Cameras and LIDAR systems simply don’t have the intelligence and versatility (yet) that human eyes and brains do. Before Google, Tesla, or anyone else makes a robotic vehicle that consumers can use, they’re going to have to reliably address these issues.

Once again, we’d like to thank KAIST (and particularly David Shim) for making these videos public, and sharing the situations in which they don’t get everything quite right, as well as the situations in which they do. We wish that all robotics companies would be this open about their development process, since it helps to give us a much better sense of what the reality of robotics is right now.

[ KAIST ]
