A Neural Network Takes Asia's Air Temperatures

Earth's surface temperatures don't reliably track those of the air just above them, but TaNet can overcome that hurdle



Ground-based weather stations can tell you the temperature of the air just above a given spot, but their coverage is often limited by the terrain. Satellites can measure the temperature of the surface, but it’s hard to use that information to infer the temperature of the air.

Deep learning may be able to make that inference, say researchers from the Chengdu University of Information Technology and the China Meteorological Administration. They’ve developed a transformer-based neural network that crunches infrared observations from a weather satellite and spits out near-real-time air-temperature estimates. During testing, the network, which the researchers call TaNet, outperformed another machine-learning-based approach as well as established models that don’t rely on machine learning.

TaNet’s creators published their work in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing on October 9, 2023.

TaNet’s polestar is FengYun-4A (FY-4A), a Chinese satellite that began sending weather data in May 2018. From geostationary orbit, FY-4A keeps ceaseless watch over a window of Earth’s surface between 50 degrees north and south and 40 and 140 degrees east, covering South and East Asia, the Indian Ocean, most of Australia, and slivers of Arabia and the Horn of Africa.

FY-4A observes surface temperatures in infrared, but TaNet’s creators wanted to use its observations to reconstruct air temperatures 2 meters above the ground. Such near-surface temperature does not necessarily correlate with surface temperature: Land, for example, cools and heats up more quickly than the air above it.

“Because the physical relationship between land surface temperature and near-surface Earth temperature is so complicated and non-linear, it’s actually pretty impossible to use a physical model. As far as I know, nobody has,” says Afshin Afshari, an urban physicist at the Fraunhofer Institute for Building Physics in Stuttgart, Germany, who was not involved with TaNet. “You have to use a black-box model.”

TaNet’s developers trained it to take in FY-4A’s infrared observations and output the corresponding near-surface temperature map from ERA5. Released by the European Centre for Medium-Range Weather Forecasts, ERA5 is the fruit of a process that climate scientists call reanalysis: combing over prior observations with computational models to paint a detailed picture of Earth’s weather at a given point in time.
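At heart, that training setup is a supervised image-to-image regression: infrared brightness temperatures in, a 2-meter air-temperature field out. Here is a minimal sketch of that idea in NumPy, with ordinary least squares standing in for TaNet’s transformer and entirely synthetic data standing in for FY-4A and ERA5 (the channel count, scene size, and weights are illustrative assumptions, not details from the paper):

```python
import numpy as np

# Synthetic stand-in data: 4 infrared channels over a 1,000-pixel scene.
rng = np.random.default_rng(0)
n_pixels, n_channels = 1000, 4
ir = 250.0 + 30.0 * rng.random((n_pixels, n_channels))  # brightness temps, in kelvins

# Pretend the "true" 2-meter air temperature is a noisy function of the channels;
# in the real system, the target comes from the ERA5 reanalysis instead.
true_weights = np.array([0.5, 0.2, 0.2, 0.1])
t2m = ir @ true_weights + rng.normal(0.0, 0.5, n_pixels)

# Fit the infrared -> air-temperature mapping. Least squares stands in for the
# transformer here; the real mapping is far more complex and nonlinear.
X = np.column_stack([ir, np.ones(n_pixels)])  # add a bias column
coeffs, *_ = np.linalg.lstsq(X, t2m, rcond=None)

predicted = X @ coeffs
rmse = np.sqrt(np.mean((predicted - t2m) ** 2))
print(f"RMSE on training scene: {rmse:.2f} K")
```

Once trained, the same mapping can be applied to each fresh satellite scene as it arrives, which is what makes the near-real-time output possible.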

ERA5 is gold-standard data, but because it must be enhanced by the power of hindsight, it is not live. To get ERA5 data at all for a particular timestamp can take days; to get quality-controlled, validated data can take months. TaNet can work from FY-4A updates that come every hour, if not faster. “This means you have access to ERA5-quality data, but in real time,” Afshari says.

The researchers tested TaNet’s inference against two other reanalysis-generated datasets (CRA, from China’s national weather service; and CFSv2, from the U.S. National Oceanic and Atmospheric Administration) and a model driven by a U-Net convolutional neural network. TaNet’s output correlated more strongly than that of all its rivals, both with data from ground-based stations and with ERA5 data that wasn’t in TaNet’s training set.
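Ranking models by how strongly their output correlates with ground truth boils down to computing a correlation coefficient per model and comparing. A toy illustration, with made-up station readings and two hypothetical model estimates (none of these numbers come from the study):

```python
import numpy as np

# Hypothetical station readings and two competing model estimates at the
# same locations (values in kelvins, invented for illustration).
stations = np.array([288.1, 290.4, 285.7, 293.2, 287.9, 291.0])
model_a = np.array([288.4, 290.1, 286.0, 292.8, 288.3, 290.7])  # tracks closely
model_b = np.array([289.5, 288.9, 287.2, 291.0, 289.9, 289.4])  # tracks loosely

def pearson_r(x, y):
    """Pearson correlation coefficient between two sets of readings."""
    return float(np.corrcoef(x, y)[0, 1])

r_a = pearson_r(stations, model_a)
r_b = pearson_r(stations, model_b)
print(f"model A: r = {r_a:.3f}; model B: r = {r_b:.3f}")
# The model whose r is closer to 1 agrees better with the stations.
```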

Such approaches also have the ability to link raw satellite data with more detailed models. “Deep learning has the unique capability that can help you learn that mapping,” says Gianluca Valentino, a computer engineer at the University of Malta, who was also not involved with TaNet. “It’s not something that classical techniques would really be able to give you.”

TaNet is a relatively niche case. A more common application of deep learning to satellite imagery is super-resolution: in this case, boosting an image’s resolution beyond the limits of the satellite’s hardware. Infrared sensors can’t reach the hyperfine resolutions of visible-light cameras; the sharpest spaceborne infrared imagery comes from Landsat, whose thermal instrument resolves features down to 100 meters.

In practice, forecasters trying to get near-real-time data must settle for even coarser resolution. “There’s a tradeoff between revisit time and resolution,” Valentino says. “To get a higher resolution, that means you need to revisit the same spot less frequently.” Deep learning techniques, often rooted in computer vision, can fill in the gap.

Afshari’s group combined both those approaches. They used land surface temperature observations from the European Space Agency’s MSG satellites to reconstruct, via a convolutional neural network, outputs from the WRF weather simulation software. Combined with land use data, their network could map near-surface temperatures across the urban expanse of Greater Berlin with four times the resolution of MSG unaided.
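To see what a fourfold resolution boost means in grid terms, consider the naive baseline a learned network has to beat: simply repeating each coarse cell. A small sketch with an invented temperature tile (the values and sizes are illustrative, not from Afshari’s study):

```python
import numpy as np

# A coarse 4x4 surface-temperature tile, in kelvins (values made up).
coarse = np.array([
    [288.0, 289.5, 291.0, 290.0],
    [287.5, 289.0, 292.0, 291.5],
    [286.0, 288.5, 290.5, 292.0],
    [285.5, 287.0, 289.0, 290.5],
])

# Naive 4x upsampling: repeat each cell into a 4x4 block (nearest neighbor).
# A trained super-resolution network instead fills in plausible fine-scale
# structure, often guided by auxiliary inputs such as land-use data.
fine = np.kron(coarse, np.ones((4, 4)))

print(coarse.shape, "->", fine.shape)
```

The naive result has four times the resolution on each axis but no new information; the point of the learned approach is to add detail the coarse sensor never captured.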

The applications are still fairly limited. What could really boost their takeup, Afshari says, would be the ability to make high-resolution weather predictions of the near future. That task, however, is much more complex.
