U.S. Military, Looking to Automate Post-Disaster Damage Recognition, Seeks a Winning Formula

The xView2 competition is looking to crown the computer vision algorithm that can rapidly analyze what floods, tsunamis, earthquakes, and hurricanes have done

Satellite images from Digital Globe/MAXAR's Open Data Program showing buildings in Santa Rosa, Calif., before the Tubbs Fire (left), and after (right).
Images: Digital Globe/MAXAR

It seems like natural disasters are happening more and more frequently these days. Worse still, we seem ill-prepared to deal with them. Even when a disaster is something we can see coming, like a hurricane, the path to recovery is often a confused mess as first responders scramble to figure out where to allocate resources. Remote sensing technology can help with this, but the current state of the art comes down to comparing aerial before-and-after images from disaster scenes by hand and trying to identify which locations were hit hardest.

To help with this problem, the Defense Innovation Unit (a sort of tech accelerator inside the Department of Defense) is sponsoring a challenge called xView2. Its goal: to develop a computer vision algorithm that can automate the process of detecting and labeling damage based on differences in before-and-after photos. And like all good challenges, there's a big pile of money at the end for whoever manages to do the best job of it.

The idea behind xView2 is relatively simple. As input, you’ve got two satellite images of the same area: one taken before a disaster, and one taken immediately after the earthquake, tsunami, flood, volcanic eruption, wildfire, tornado, or apocalyptic combination of any of the above. All the algorithm has to do is identify structures and then rate each structure on a four-point damage scale that ranges from spotless to obliterated.
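
Framed as code, the contract competitors have to fill looks something like the sketch below. Everything here is illustrative: the names, the scale labels, and the function signature are assumptions about how one might frame the task, not the challenge’s actual API.

    from dataclasses import dataclass
    from enum import IntEnum

    import numpy as np


    class DamageLevel(IntEnum):
        # Four-point Joint Damage Scale; these label names are paraphrases.
        NO_DAMAGE = 0
        MINOR_DAMAGE = 1
        MAJOR_DAMAGE = 2
        DESTROYED = 3


    @dataclass
    class BuildingAssessment:
        polygon: np.ndarray   # (N, 2) array of pixel coordinates outlining a building
        damage: DamageLevel   # predicted damage rating for that building


    def assess_damage(pre_image: np.ndarray,
                      post_image: np.ndarray) -> list[BuildingAssessment]:
        """Detect buildings in the pre-disaster image and rate each one by
        comparing it against the post-disaster image. Implementing this
        function well is, in essence, the whole competition."""
        raise NotImplementedError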

Fortunately, this kind of pattern recognition is something that computer vision algorithms tend to do very well. The key to their effectiveness is the training data they’re fed, and xView2 is providing a massive, hand-labeled dataset for competitors to use. Leveraging Digital Globe’s Open Data Program, xView2 has managed to amass 45,361 square kilometers of pre-disaster and post-disaster imagery at a resolution of 0.3 meters per pixel. The images feature 850,736 building outlines, each one drawn by a human and assigned a building damage assessment score, and even more data is in the process of being added.
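
To give a sense of scale: at 0.3 meters per pixel, each pixel covers 0.09 square meters, so 45,361 square kilometers works out to roughly half a trillion labeled pixels. A minimal sketch of walking an xView2-style dataset might look like the following; the directory layout, filename suffixes, and GeoJSON-like label format are assumptions for illustration, not the dataset’s documented spec.

    import json
    from pathlib import Path

    # Hypothetical layout: paired "_pre_disaster" / "_post_disaster" images
    # with one GeoJSON-style label file per post-disaster image.
    DATA_ROOT = Path("xview2/train")


    def iter_scenes(root: Path):
        """Yield (pre_image, post_image, labels) path triples."""
        for pre in sorted((root / "images").glob("*_pre_disaster.png")):
            post = pre.with_name(pre.name.replace("_pre_", "_post_"))
            label = root / "labels" / post.with_suffix(".json").name
            if post.exists() and label.exists():
                yield pre, post, label


    for pre, post, label in iter_scenes(DATA_ROOT):
        features = json.loads(label.read_text()).get("features", [])
        # Each feature is assumed to carry one building polygon plus its
        # human-assigned damage rating.
        print(f"{pre.name}: {len(features)} labeled buildings")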

In these satellite images, the red annotations indicate buildings that have been completely destroyed, and blue patches represent no damage. Gif: Digital Globe/MAXAR

The folks running xView2 have been very careful to make sure that this dataset is as accurate and as high quality as possible. Fifteen countries are represented, including exotic locations like Australia, Indonesia, and Bangladesh, along with U.S. disaster sites like Tuscaloosa, Ala. The standardized Joint Damage Scale for buildings (which weirdly did not exist before) was developed with input from FEMA, the U.S. Air Force, and local first responders. Those agencies also had an opportunity to check the labeling for accuracy before the dataset was finalized.

The winner of the xView2 challenge will be the algorithm that performs best on a previously unseen dataset, recognizing buildings and rating their smashed-up-ness on the Joint Damage Scale with the closest adherence to the ratings given by expert humans. The algorithm will have to be a generalist, able to recognize and score buildings after any of the six kinds of disasters, anywhere in the world. The hope is that the winning algorithm could be used to compare pre-disaster satellite images with post-disaster images taken from aircraft or drones, helping first responders move even more quickly and effectively. And even if the best algorithm isn’t perfect, that’s okay: a pretty good algorithm could still be very useful, especially when time is a factor.
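
Scoring an entry against those expert ratings comes down to standard classification metrics. The sketch below computes a per-class F1 over pixel masks of damage labels and combines the four classes with a harmonic mean, so a model can’t score well by ignoring rare classes; the challenge’s actual formula (for instance, how it weights building localization against damage classification) may differ, so treat this as illustrative.

    import numpy as np


    def f1(pred: np.ndarray, truth: np.ndarray, cls: int) -> float:
        """Per-class F1 over integer-labeled damage masks."""
        tp = np.sum((pred == cls) & (truth == cls))
        fp = np.sum((pred == cls) & (truth != cls))
        fn = np.sum((pred != cls) & (truth == cls))
        denom = 2 * tp + fp + fn
        return 2 * tp / denom if denom else 0.0


    def damage_f1(pred: np.ndarray, truth: np.ndarray, n_classes: int = 4) -> float:
        """Harmonic mean of per-class F1 across the four damage levels."""
        scores = [max(f1(pred, truth, c), 1e-6) for c in range(n_classes)]
        return n_classes / sum(1.0 / s for s in scores)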

Refreshingly, the Defense Innovation Unit seems to be mostly interested in encouraging people to participate and do well in the xView2 challenge, without getting hung up on owning the winning software or anything. You can compete in the Open Source track, where you can win $25,000 as long as you agree to release your code under a permissive license. If you’d rather keep your code private, but you’re okay with giving the government a non-exclusive license to use it, the Government Purpose track has a first prize of $38,000. Entries in the Open Source track are eligible to win in the Government Purpose track as well. The final Evaluation Only track is for teams who really don’t want to share anything; the government will check out your algorithm and tell you how you did, but that’s it. The top prize in that case is $3,000. Lastly, teams who do well will be considered for “follow-on prototyping,” meaning that there may be more opportunities later on.

The dataset for the xView2 challenge is available now, with submissions due on 22 November (update: now 31 December). Winners will be announced at the Humanitarian Assistance and Disaster Recovery (HADR) workshop at NeurIPS in December.
