

Predicting Hurricane Sandy

Last year’s hurricane models were quite accurate in predicting the track of Hurricane Irene, and algorithm changes earlier this year give forecasters even better tools for predicting Hurricane Sandy.


Photo: Jemal Countess/Redux

In 2009, the U.S. National Weather Service set ambitious goals: improve the accuracy of hurricane track and intensity forecasts by 20 percent by 2014 and by 50 percent by 2019. Last year, in modeling Hurricane Irene, the National Weather Service was able for the first time to incorporate data gathered by planes flying through the storm directly into simulations of the storm’s progress. That detailed data from inside the storm, along with other advances, made the 48-hour forecast of Hurricane Irene as accurate as a 24-hour forecast had been a decade earlier.

And the advance forecast of Hurricane Sandy, now heading for the east coast of the United States, should be even more accurate, thanks to a couple of modeling advances that earlier this year moved from the research laboratories into operational use as part of the U.S. Global Forecast System (GFS). (Other countries run their own models; Europe, for example, has the ECMWF Model, and the United Kingdom has the UKMET Office Model.)

The biggest change in the U.S. GFS came in May. That’s when new algorithms for data assimilation began running in the operational weather models. Data assimilation is the process by which all the temperature, humidity, wind, and other data collected by aircraft, weather balloons, satellites, and ground stations is translated into values at evenly spaced points on a grid of the world for the prediction models. One major change, explained Robert Gall, technical development director for the Hurricane Forecast Improvement Project at the National Oceanic and Atmospheric Administration, had to do with how the data is extrapolated from the real-world points at which it was collected to the points on the grid; statistics on the accuracy of previous runs of the model are fed in to adjust those extrapolation calculations. This change alone appears to be improving the accuracy of hurricane tracks from the weather prediction models by 20 percent, Gall said.
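To make the idea concrete, here is a minimal, hypothetical sketch of how scattered observations can nudge a gridded field: a flat background temperature field is adjusted toward a couple of made-up aircraft observations, with a scalar Kalman-style gain and a Gaussian distance weight standing in for the far more sophisticated error statistics an operational scheme uses. None of the numbers, grids, or function names come from the GFS code.

```python
import numpy as np

def assimilate(background, grid_x, grid_y, obs_points, obs_values,
               bg_error=1.0, obs_error=0.5, length_scale=200.0):
    """Nudge a gridded background field toward scattered observations.

    A scalar Kalman-style gain decides how much to trust each observation,
    and a Gaussian weight spreads its correction to nearby grid points.
    """
    xx, yy = np.meshgrid(grid_x, grid_y)               # grid coordinates
    analysis = background.astype(float).copy()
    gain = bg_error**2 / (bg_error**2 + obs_error**2)  # trust in the observation
    for (ox, oy), ov in zip(obs_points, obs_values):
        # First guess at the observation: the nearest grid point's value
        i = np.argmin(np.abs(grid_y - oy))
        j = np.argmin(np.abs(grid_x - ox))
        innovation = ov - background[i, j]             # observation minus guess
        # Spread the weighted correction to nearby grid points
        dist2 = (xx - ox) ** 2 + (yy - oy) ** 2
        weight = np.exp(-dist2 / (2 * length_scale**2))
        analysis += gain * weight * innovation
    return analysis

# Example: a uniform 10-degree background on a coarse 1000 km grid,
# nudged by two hypothetical aircraft observations.
grid_x = np.linspace(0, 1000, 11)
grid_y = np.linspace(0, 1000, 11)
background = np.full((11, 11), 10.0)
analysis = assimilate(background, grid_x, grid_y,
                      obs_points=[(300.0, 400.0), (700.0, 600.0)],
                      obs_values=[14.0, 8.0])
print(np.round(analysis, 1))
```

Feeding statistics from previous model runs back in, as Gall describes, amounts to tuning weights like the gain and length scale above based on how well past analyses verified.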

Another big change, implemented in June, affected the regional models, which zero in on more localized storm effects after the global models determine the general track. Gall explained that the resolution was sharpened: the spacing of the points on the map to which the weather data is attached shrank from 9 km to 3 km. Researchers had amassed evidence that such a change in resolution could improve forecasts of storm intensity, something that hadn’t improved in two decades. Gall credits the team at the National Centers for Environmental Prediction for pushing hard to make the regional model’s calculations more efficient, allowing the increase in resolution without significantly extending the computer time needed to run them. This change, Gall says, is yielding a 10 to 20 percent improvement in the accuracy of hurricane intensity forecasts. And experiments now under way to incorporate data gathered by flights into hurricanes should bring another 10 percent improvement in intensity forecasts, he says.
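A quick back-of-the-envelope calculation suggests why that efficiency work mattered: cutting the grid spacing by a factor of three multiplies the number of horizontal grid points by roughly nine, and if the time step shrinks in proportion to the spacing (a common stability constraint), the total work grows by roughly a factor of 27. The domain size below is made up for illustration and is not the operational model’s configuration.

```python
# Rough cost of sharpening a regional grid from 9 km to 3 km spacing.
# Assumes a square domain, a fixed number of vertical levels, and a time
# step that shrinks in proportion to the grid spacing -- illustrative
# assumptions only, not the real model's settings.
domain_km = 900                      # hypothetical regional domain edge length
old_dx_km, new_dx_km = 9, 3          # horizontal grid spacing before and after

old_points = (domain_km // old_dx_km) ** 2
new_points = (domain_km // new_dx_km) ** 2
point_ratio = new_points / old_points           # ~9x more horizontal grid points
step_ratio = old_dx_km / new_dx_km              # ~3x more time steps per forecast
print(f"grid points: {old_points} -> {new_points} ({point_ratio:.0f}x)")
print(f"rough compute cost per forecast: ~{point_ratio * step_ratio:.0f}x")
```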

Does all this new technology mean that predicting the track and intensity of Hurricane Sandy is turning out to be a lot easier than predicting last year’s Hurricane Irene?

Not entirely. Gall reported today that, unlike Irene, which was an easy storm to predict, Sandy initially had hurricane forecasters wondering. Forecasters don’t look at just one model of a hurricane’s future path; they look at a whole set of them that they call an ensemble. That’s because in such complex systems, small variations at one point can cause big changes down the line. Modelers can run the same software on different computers, for example, and differences in the order in which calculations are done can cause variations in the outcome. They can ask two different programmers to code the same model, and slight differences in coding style lead to different outcomes. Or they can change the temperature at every point on the grid by a tiny fraction of a degree at the model’s starting time and change the outcome dramatically. If they do all these things and end up with essentially the same prediction of a hurricane’s future, they are pretty sure that forecast is accurate. But sometimes the results of the different simulations are all over the place.
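A toy experiment illustrates why the spread of an ensemble is so informative. The Lorenz-63 system below is a classic stand-in for atmospheric chaos, not a hurricane model; 20 copies are started from initial conditions that differ by a tiny random nudge, and the spread among them grows from negligible to large, just as slightly perturbed forecasts can diverge to very different storm tracks.

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system one step with a simple Euler update."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 1.0])
# 20 ensemble members: the same starting point plus a tiny random perturbation
ensemble = [base + rng.normal(scale=1e-4, size=3) for _ in range(20)]

for step in range(1, 2001):          # integrate every member forward in time
    ensemble = [lorenz63_step(m) for m in ensemble]
    if step % 500 == 0:
        spread = np.std([m[0] for m in ensemble])
        print(f"step {step:4d}: ensemble spread in x = {spread:.4f}")
# Small spread: the members agree, so the forecast is trusted.
# Large spread: the outcome is still genuinely uncertain.
```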

That was the case, until Thursday, with Hurricane Sandy. The divergence in the models was caused by a pressure system over the Atlantic Ocean that weather scientists call an omega block: two areas of low pressure flanking an area of high pressure, which on a weather map looks like the Greek letter omega. The ensemble of models run at the National Weather Service early in this hurricane’s development split to either side of the omega block; some showed the hurricane tracking east, where it would veer harmlessly out to sea, while others showed it tracking west, threatening the east coast of the U.S. By last night, Gall reports, the consensus of the models was that the storm would head toward the U.S. mainland.

So far, Hurricane Sandy is following the predicted track laid out by the GFS. Forecasts made last Saturday and this past Monday, compared with the actual path of the storm, have turned out to be right on (figure, second from top: models compared with actual storm track). And the models from other countries are all in alignment with the current U.S. National Hurricane Center forecasts (image, top). The path is clear to see in the graphic; as for intensity, Gall says Sandy will barely be a Category 1 storm, the lowest in the intensity rankings, but will come with a big storm surge and huge amounts of rain.

 
