Tracking Weather’s Flight Path

Smoother, safer flying is in the forecast for all kinds of weather, thanks to aviation weather technologies now in development

Vacationers were not the only ones crowding U.S. aircraft and airports this summer. Prototypes of systems for helping pilots and air traffic controllers cope better with untoward weather were also jockeying for a place aboard critical test flights and in air traffic control and dispatch facilities on the ground.

The prototypes are the products of a burst of recent research in technologies for detecting turbulence in all its varieties, predicting the persistence of fog, and giving pilots a clear picture of the weather ahead. They, along with systems tested last winter for detecting and predicting aircraft icing, will begin impacting commercial flights soon, although few will come into wide use before mid-decade. But all will make flying safer—and a lot more comfortable.

Why so much activity?

The underlying reason, explained Paul Stough, a senior research engineer at NASA's Langley Research Center, in Hampton, Va., is that the aviation accident rate for commercial (as against general) aviation aircraft has remained flat—for the past five years, for example, at around 0.4 accidents per 100 000 departures, according to the National Transportation Safety Board. But, with more and more people flying, the numbers of flights have been climbing steadily and are projected to go on climbing. Unless something is done to decrease the accident rate, the raw number of disasters will soon exceed what the flying public will accept.

And that something, to a large extent, has to do with weather. Some 30 percent of commercial aircraft accidents have weather as a contributing factor, according to a NASA planning group.

"If we could provide reliable, more intuitive weather information in the cockpit, pilots could make better decisions and avoid getting into weather situations that cause accidents," said Ron Colantonio, a project manager at NASA's John H. Glenn Research Center, in Cleveland, Ohio.

"An accident typically has a chain of events leading up to it," Stough said, "and if any one of them can be prevented, the chain can be broken and the accident won't happen."

A deadline has been set. In 1997 U.S. President Bill Clinton established a national goal to reduce the fatal accident rate by 80 percent by 2007. So research is under way, mostly funded by the U.S. Federal Aviation Administration's (FAA's) Aviation Weather Research Program and NASA. This fiscal year the FAA alone spent US $19 million on research into different parts of the aviation weather picture.

Talking turbulence

In 1986 this writer was aboard a vacation charter to the Caribbean, watching a thunderstorm off in the distance, when the plane suddenly dropped—how far or for how long, I'm not sure. I just know it was long enough for me to grab onto the hands of the strangers who were my seatmates, for the nearest flight attendant to dive to the floor and clutch a seat leg, for every meal tray to plaster itself onto the ceiling, and for the woman across the aisle to scrawl her name in ink on her stomach. ("To identify the body," she later explained.)

The aircraft leveled off and no one was hurt, just slightly stunned and wet from spilled drinks. This was an encounter with severe turbulence, of which the aviation industry recognizes several varieties. It was not, however, reported as an accident because no one was injured and the aircraft was not structurally damaged.

"Turbulence has long been the silent problem for the aviation industry," said Larry Cornman, an atmospheric scientist at the National Center for Atmospheric Research, in Boulder, Colo. "Commercial airplanes that encounter turbulence don't go down, they don't fall apart in flight, and people usually don't get killed." Turbulence rarely makes the headlines, and often goes unrecorded.

But turbulence, even if moderate, can cause injury. In fact, it is the leading cause of nonfatal aviation accidents, Cornman indicated. While flying food trays are unlikely to cause a concussion, flying laptops could. If a plane lurches, passengers and flight attendants who are not seated and strapped in could be knocked off their feet.

Turbulence can also kill. In 1998, on a United Airlines flight from Japan to Hawaii, a run-in with turbulence killed one passenger and seriously injured others. That accident reinforced the general concern over the accident rate, bringing turbulence to the forefront of aviation research. The problem is of great interest to researchers because, though invisible to the eye, turbulence can be detected far enough ahead to give passengers a chance to strap in and stow heavy objects, and maybe even far enough ahead to avoid it altogether.

Confounding the problem are the different types of turbulence that must be dealt with. One type is convective turbulence, which is what I encountered on my trip to the Caribbean. Another is turbulence due to the terrain, which can persist up to surprisingly high altitudes, into the stratosphere and beyond. Still another is the turbulence due to the wake-vortex of another airplane. And there is also clear-air turbulence, a kind that is typically due to changes in the wind within and around the jet stream [Fig. 1]. Sometimes convective turbulence outside the storm cloud is called clear-air turbulence as well, as is terrain-induced turbulence, which looks to the pilot as if it is taking place in clear air. These different types of turbulence are detected by different types of sensors.

Figure 1. Clear-air turbulence, in which air currents move in strong breaking-wave patterns, shows up quite starkly in this image, observed with a downward-looking Doppler laser radar, or lidar, aboard the NASA ER-2 high-altitude research aircraft. Image: NASA Goddard Space Flight Center

Convective turbulence is generated by a nearby thunderstorm and can come as a surprise to a pilot who thinks he has successfully flown around the storm. Within the precipitation area of the storm, it may be detected by on-board weather radar or by Doppler wind shear radar, which many commercial aircraft carry for detecting low-altitude wind shear. (Wind shear, one form of which is called a microburst, is a weather condition, often caused by thunderstorms, in which a sudden, large increase in the headwind is quickly followed by as sudden and as large an increase in the tailwind. Until ground-based and airborne detection equipment was developed, wind shear was responsible for several fatal accidents.)

"The Doppler radars are not being used to detect wind shear above 2000 feet," Cornman told IEEE Spectrum. "They have a great deal more processing power than do airborne weather radar systems. So, if we can use their hardware to detect convective turbulence, that can give us a quick payoff."

As part of NASA's Aviation Safety Program, the National Center for Atmospheric Research has been developing algorithms to turn this airborne weather data into turbulence warnings. An important flight test employing the new algorithms was performed in Colorado last summer, and the data from it has been analyzed and used to refine the algorithms. This September a flight test of the prototype algorithms is scheduled, and commercial products are expected to be available in a little over a year.

But the drawback of airborne radar, which usually has a relatively large 3-cm wavelength, is that it cannot gather data without a good supply of fairly large water droplets in the atmosphere. The equipment is useless, therefore, in detecting clear-air turbulence, the form that presents one of the greatest research challenges.

High hopes for lidar

The technology that probably holds the most promise for the detection of turbulence in clear air is a Doppler laser radar, or lidar [Fig. 2]. With a near-infrared wavelength of about 2 µm, lidar has no need for raindrops to return its signals: it can bounce off dust and other aerosol particles as tiny as a micrometer in diameter and invisible to the eye. These particles exist at all altitudes of interest to civil aviation, but are less dense at cruise altitudes (35 000 feet, for example) than they are near the ground.

Figure 2. In an onboard turbulence-detection lidar system, pulses of eyesafe laser light are transmitted through a window. The light is scattered back by naturally occurring aerosols that move with the wind and turbulence. The movement of the particles causes a Doppler frequency shift in the backscattered light, which is detected by a sensor and processed to determine the distance to, and strength of, the turbulence ahead. Image: Coherent Technologies Inc.
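The physics the sensor exploits is the two-way Doppler relation: the frequency shift equals twice the line-of-sight wind speed divided by the wavelength. A rough back-of-the-envelope sketch, assuming the 2-µm wavelength described above (the wind speeds are illustrative, not from the article):

```python
# Round-trip Doppler shift for a coherent lidar: delta_f = 2 * v / wavelength.
# The 2-um wavelength is from the text; the wind speeds are illustrative.

WAVELENGTH_M = 2e-6  # near-infrared lidar wavelength, about 2 micrometers

def doppler_shift_hz(radial_wind_m_s: float) -> float:
    """Frequency shift of light backscattered from aerosols moving at the
    given radial (line-of-sight) speed; the factor of 2 is the round trip."""
    return 2.0 * radial_wind_m_s / WAVELENGTH_M

for v in (1.0, 10.0, 30.0):  # light gust, moderate turbulence, severe gust
    print(f"{v:5.1f} m/s -> {doppler_shift_hz(v) / 1e6:6.1f} MHz shift")
```

A shift of a few megahertz on a carrier of roughly 150 THz is minuscule, which is one reason coherent (heterodyne) detection of the sort Coherent Technologies specializes in is needed to make the measurement practical.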

Research on applying lidar to airborne turbulence detection at first focused on the High Speed Civil Transport program, an attempt of the early to mid-'90s to develop a next-generation supersonic transport to succeed the Concorde. Moderate or even light levels of turbulence outside a supersonic jet can cause the shock wave at the engine inlet to travel forward, and to do so with such force as to increase the inlet's drag dramatically, to the point of reducing the engine thrust or even stalling the engine. The event is called an engine inlet unstart, and passengers and crew experience it as an encounter with very severe turbulence. The problem can be avoided by designing the engine to operate with higher safety margins, but that approach reduces fuel efficiency.

The goal of the civil transport program was to use lidar to detect turbulence ahead and adjust the engine inlet so as to optimize fuel efficiency while minimizing the chances of an unstart. But, in the late '90s, the program was essentially suspended, and the focus of the lidar research moved toward airborne turbulence detection for other than supersonic passenger aircraft.

Of course, for a signal to be reflected back to lidar, dust particles must be present in the air in some numbers. But at high altitudes, they are sometimes in short supply. So a critical challenge in developing affordable, sensitive systems that do not harm the eye is to find the right wavelength, said Steve Hannon, principal scientist of Coherent Technologies Inc., in Lafayette, Colo., a company that is developing the technology.

"The smaller the wavelength we select, the greater the signal we see. But if we select too small of a wavelength, less than 1.4 microns, we are no longer eyesafe," because light at such wavelengths can penetrate to the retina, Hannon told Spectrum. "At this point we are testing at 2 microns, and developing systems at 1.5 microns. With the shorter wavelength, however, the optics must be better and the system tolerances must be tighter, which will drive up cost. We need to drive the system cost down so that the instrument is affordable to airlines."

While Coherent Technologies has been developing the core technologies for the past decade, focused product development work began only this year as part of a joint program with Honeywell Inc.'s Commercial Electronic Systems Division, Redmond, Wash., and United Airlines Inc., Elk Grove, Ill. The intention is to develop a device that can be integrated with Honeywell's on-board wind-shear weather radar. The company was to flight-test an advanced prototype this summer and expects to have systems on the market in late 2002.

The cockpit interface, which will provide turbulence information from lidar and on-board Doppler radar, is likely to include a graphical display of radar and/or lidar data and icons for areas of detected turbulence, along with an audible alert for hazardous conditions, Cornman said. This is likely to be integrated with existing or future on-board displays that are there for other purposes.

Where features of the terrain regularly roil the air, airborne turbulence systems may not be needed. Rather, the presence of low-altitude turbulence may be calculated from ground-based measurements. Such a system already exists in Hong Kong at the Chek Lap Kok International Airport. In 1994, lidars were used in research for two months to identify areas of turbulence and associate them with different wind patterns.

Today the Hong Kong airport uses anemometers to feed data on wind speed and direction to several Intel-based PCs running the Linux operating system. Employing algorithms from research data by the National Center for Atmospheric Research, the PCs identify areas of turbulence and transmit turbulence warnings to the workstations of air traffic controllers, who then radio the warnings to affected pilots.

A similar system is under development for Juneau, Alaska, where the terrain creates turbulence patterns that can be more complicated than those in Hong Kong. For the past four years, the U.S. National Center for Atmospheric Research has been taking measurements in Juneau with ground-based anemometers, radars, lidars, and wind profilers (vertically pointing Doppler radars), as well as with research aircraft. Algorithms are currently under development, again, on Linux-based PCs. Based on the results of a feasibility assessment, a real-time alert system may go into effect within two to three years. According to Cornman, Linux has been the operating system of choice because the programmers on the project prefer it and it runs on inexpensive PCs.

In the wake

In tackling the problem of turbulence generated behind an aircraft—the wake vortex—the concern is efficiency as well as passenger safety and comfort. Today aircraft taking off and landing stay far enough back from the plane they follow to avoid even worst-case wake persistence. This means maintaining unvarying separation standards. But if wake vortices could be predicted in advance and detected in real time, aircraft could be more closely spaced and airports could bring more aircraft in and out in a day.

One such automated system for predicting wake vortices is the aircraft vortex spacing system (Avoss), a NASA research project that was to be demonstrated at Dallas/Fort Worth airport in July. Avoss has a difficult job to do, because wake vortices vary with the size and weight of the aircraft, the speed at which it flies, ambient wind, and the relative weight of the following aircraft.

In the Avoss system, a number of sensors are installed in the terminal area to measure such parameters as wind direction and speed, ambient turbulence, temperature layers, and actual wake vortex behavior. The sensors feed their data into a series of Sun Microsystems workstations, which, adding in information about aircraft size and weight, calculate wake motion and decay and provide controllers with real-time advisories on aircraft spacing.

The algorithms used in Avoss were developed by working with numerical supercomputer models of wake behavior in the atmosphere, then were validated with data collected since the mid-'90s by lidars, at various airports. Tests over the past year have shown a possible 6 percent increase in aircraft landings, rising to as much as 11 percent in certain kinds of weather, according to David Hinton, an Avoss principal investigator.

"Such a system could start test operation within a year," Hinton said. "And it could be used for separating traffic within three to five years. It depends on the interest and will of the users."

Another approach is being taken by some FAA-sponsored researchers. They are trying to determine the turbulence patterns of wake vortices from acoustic sensors rather than lidar reflections. To this end, they install systems alongside runways to sense nearby wake turbulence.

"It is well-established that wake vortices generate low-frequency sound [1500 Hz and below]," said Sam Kovnat, chief executive officer of Flight Safety Technologies Inc., in New London, Conn. Another well-established fact is that the speed of a laser beam is affected by sound waves. So Kovnat's researchers are beaming a low-power laser with a 1.3-µm wavelength across a runway to rebound off a mirror on the other side. By comparing the actual return time to a calibrated time, Kovnat's team can determine the change in laser beam speed caused by encounters with sound waves. A group of digital signal-processing chips demodulates the pattern of changes in the light speed into an acoustic signature of the sound.
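To make that demodulation step concrete, here is a toy sketch, emphatically not the company's actual (and unpublished) processing: it fabricates a stream of round-trip-time readings modulated by a low-frequency tone and recovers the tone with a plain discrete Fourier transform over the band the text mentions. The sample rate, the 300-Hz tone, and the modulation depth are all invented for illustration.

```python
# Illustrative only: recover a low-frequency acoustic tone from simulated
# fluctuations in a laser beam's round-trip time. The 300-Hz tone and the
# picosecond-scale modulation depth are invented, not measured values.
import math

SAMPLE_RATE = 8000   # round-trip-time readings per second
N = 800              # readings in one analysis window (0.1 s)
TONE_HZ = 300.0      # pretend wake-vortex sound, well below 1500 Hz

# Deviations of the measured round-trip time from the calibrated value.
samples = [1e-12 * math.sin(2 * math.pi * TONE_HZ * n / SAMPLE_RATE)
           for n in range(N)]

def dft_magnitude(x, k):
    """Magnitude of the k-th discrete-Fourier-transform bin (no numpy)."""
    re = sum(v * math.cos(2 * math.pi * k * n / len(x)) for n, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * k * n / len(x)) for n, v in enumerate(x))
    return math.hypot(re, im)

# Search only the band the article mentions: 1500 Hz and below.
max_bin = int(1500 * N / SAMPLE_RATE)
peak = max(range(1, max_bin + 1), key=lambda k: dft_magnitude(samples, k))
print(f"dominant acoustic component: {peak * SAMPLE_RATE / N:.0f} Hz")
```

In a real installation the dedicated DSP chips would do this continuously and over many beams at once, and the resulting spectra, not a single peak, would constitute the acoustic signature.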

As a proof of principle, Flight Safety Technologies did a two-week test at New York City's John F. Kennedy International Airport in 1998. The first operational test was also to last two weeks, at Dallas/Fort Worth Airport in late August. The company is continuing to refine its algorithms and is planning another test, in cooperation with United Airlines, at San Francisco Airport in mid-2001.

"We aim to start permanent airport installations in 2003," Kovnat told Spectrum. The installations will consist of arrays of eight to 16 lasers in the approach path, stretching out approximately 4.5 km from the airport.

Kovnat said that, theoretically, acoustic technology could be used to detect all forms of atmospheric turbulence, not just that generated by wake vortices. Others disagree.

Cornman of the National Center for Atmospheric Research, for one, concedes that the technology has had some modest success detecting wake vortices, but said, "I'm skeptical whether it could be used to detect other types of turbulence. First, we don't know a lot about the acoustic structure of turbulence. And I think other acoustic sources, like atmospheric temperature fluctuations, could overwhelm the system. But I'm open to being proven wrong."

In addition to efforts to detect and avoid turbulence, researchers are investigating how pilots can counteract the phenomenon while in flight. The possibility of utilizing aircraft control systems, old and new, to cushion passengers against turbulence is being studied at NASA, according to the agency's Colantonio. Under investigation will be simple refinements to the software used to control the aircraft, such as the autopilot algorithms, coupled with input from turbulence sensors into the software instruction set that specifies how an airplane's flaps and other control features should operate.

Also to be investigated are possible modifications of the aerodynamic characteristics of the aircraft, typically coupling some kind of fast-actuating direct-lift flap system with some type of pitch control mechanism. But this project will not bear fruit in the near future. "We hope to have demonstrated something in the air by 2004," Colantonio told Spectrum, "and have something available for airlines by 2007."

Looks like rain

Like turbulence, other forms of bad weather—including thunderstorms, icing conditions, and volcanic ash—are better flown around than flown through. Unfortunately, the type of weather information currently available to a commercial pilot is scanty: a sheet of weather data printed out before takeoff, maybe outdated and of minimal use.

The sheet lists such information as temperature at the surface, wind speed, cloud height at departure and arrival airports, and thunderstorm warnings. The pilot also gets updates from his airline's operation center, sent to a text printer in the cockpit by a 2400-baud modem, which also sends him severe weather advisories emanating from the National Weather Service, in Silver Spring, Md. He may also hear anecdotal reports from other pilots in the area. "The information is sparse, hard to decipher, usually out of date, and not very accurate," Cornman said.

Because of the dearth of comprehensive weather information, it is not unheard of for a pilot to walk back into the plane's first class section and ask a laptop-using passenger to surf the Web for a weather update. That information is likely to be more up to date than what is available in the cockpit.

The goals were spelled out for Spectrum by Stough, who is manager of NASA's aviation weather information systems (AWIN) project at the Langley Research Center. They are to provide weather information relative to the pilot's flight path, present it to the pilot in the cockpit in an easy-to-interpret graphical format, and give him decision-making aids to help him use that information—a tool, for example, to design an optimal flight path around storm cells.

To give pilots that capability, several research projects are under way. Projects for improving forecasts of thunderstorms and visibility, along with the turbulence detection research just discussed, are intended to provide data for a cockpit weather display, once it is developed. One of this summer's experiments was to take existing data from ground-based weather radar and combine it with information from on-board weather radar—the better to picture weather in the flight path.

Another research effort concerns the development of a type of synthetic vision for pilots. A view of the ground as they would see it on a cloudless day, no matter what the weather, would be derived from a database of terrain features.

"The terrain information has to be in the on-board system," NASA's Colantonio said, "because we don't want a pilot diverting around a thunderstorm only to fly into a mountain."

Here, communications is an issue. Weather graphics are dense with data. Getting all those bits into the cockpit fast enough to be of use is going to take a lot more than a 2400-baud modem.

Many technologies for higher-speed data link communications are available in the consumer world (wireless phones and satellite television, for example), but they have yet to be adapted to aviation's unique requirements. The job bristles with difficulties. It is a matter of determining not only just how big the pipe needs to be, but also whether it will be robust, timely, and secure enough to serve aviation.

"We are investigating a broad variety of potential data link technologies," said Gus Martzaklis, a project manager at NASA's Glenn Research Center. These include ground-based communications in the VHF (118-137 MHz) and UHF (300-1000 MHz) bands, and satellite-based communications in the L (1-2 GHz), S (2-4 GHz), Ku (12-18 GHz), and K/Ka (18-40 GHz) bands.

Key tradeoffs are the quality of the displayed images, the ability to reach aircraft anywhere in the sky, and cost. Generally, ground-based communications are bandwidth limited, can have coverage problems at the higher altitudes and on the surface, and are subject to interference because of the congestion of the available aeronautical frequencies.

In contrast, satellite-based communications have better coverage and are less subject to interference, but historically have been much more expensive to implement. Martzaklis said that researchers are now hoping to achieve data rates as high as 5 Mb/s to the cockpit with a return data rate of 2 Mb/s to the ground. The first major series of experiments of high-bit-rate data-link technology will run in the summer of 2002 and a second series in the summer of 2004.

For the near term, a VHF data link running at 31.5 kb/s has been tested and may be operational as early as next year. For comparison, DSL Internet service typically delivers 700-1500 kb/s downstream, and digital satellite television services send images down at a rate of 2-6 Mb/s.
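Some quick arithmetic shows why the bandwidth matters. Assuming a hypothetical 200-kilobyte compressed radar image (the size is an illustration, not a figure from the research) and treating 2400 baud as 2400 b/s, the transfer times at the rates quoted in the text work out as follows:

```python
# Transfer time for one weather graphic at the link speeds quoted in the
# text. The 200-kilobyte image size is an illustrative assumption.

IMAGE_BITS = 200 * 1024 * 8  # hypothetical compressed radar image

links_bps = {
    "2400-baud cockpit text modem": 2_400,
    "near-term 31.5-kb/s VHF link": 31_500,
    "hoped-for satellite downlink": 5_000_000,
}

for name, bps in links_bps.items():
    print(f"{name:30s} {IMAGE_BITS / bps:8.1f} s")
```

At 2400 b/s a single image would tie up the link for better than ten minutes, which is why today's cockpit printer carries only terse text advisories.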

Whatever technology is selected, the equipment will not be cheap. So NASA is investigating leveraging additional uses for the data link, including providing passengers with entertainment and Internet access. The hope here is that such flexibility will make budgeting the money for the devices easier on airlines.

Figuring out how to present all that weather information to the pilot is perhaps the biggest job this project faces. Human factor issues will be critical, and are just beginning to be addressed.

"The current weather products—information about ceiling, visibility, wind speeds, air temperature, and storm cells—are designed mostly for use by the pilot while he's still on the ground," NASA's Stough said. "The question is how do you combine the new kinds of weather information with navigation and air traffic information in a way that the pilot can readily understand?"

Figure 3. The Honeywell Weather Information Network display (above) presents pilots with easy-to-understand information in the cockpit based on data from NASA's Aviation Weather Information project. A newer version of the system is to be evaluated by NASA this fall. Photo: NASA Langley Research Center

In August, a prototype cockpit system that included aviation weather information, enhanced on-board radar for weather detection, turbulence detection, and synthetic vision, was to be tested in the air [Fig. 3]. In 2002, NASA intends to demonstrate prototype systems that provide weather information, turbulence warnings, and synthetic vision and can operate anywhere in the United States. In 2004, NASA's goal is to make those capabilities function worldwide.

The icing factor

Besides the weather information that already exists and simply needs to be better communicated to the pilot, there are other types of weather "nowcasts" (information about current conditions) and forecasts that could improve aviation safety. One such type of weather product, as the industry refers to it, is the "Weather Support to Deicing Decision Making System." This system "provides accurate and timely nowcasts and forecasts of weather conditions that affect ice accumulation on aircraft and on runways," said Warren Fellner, a project leader for Systems Resources Corp., Washington, D.C.

Information about icing—particularly, how fast airplane wings can in freezing weather be expected to become covered with ice and how thick that ice will be—is important because airport maintenance crews must make decisions about how often and how long to de-ice a plane before takeoff. As little as 0.8 mm of ice on a wing surface can dramatically decrease lift, increase drag, and destabilize the aircraft—which was part of the cause of the 1982 Air Florida crash into the Potomac River at Washington National Airport and nine other accidents of commercial aircraft during takeoff between 1978 and 1997.

Previously, airlines relied on the National Weather Service and on pilot estimates of snow intensity, which were predicated on prevailing visibility. If the flakes are big and visibility is poor, they rate snow intensity as high. If the flakes are little and visibility is fair, they rate snow intensity as low.

However, FAA-funded research in the '90s ascertained that visibility is misleading: large, fluffy flakes hamper visibility but hold relatively little water, whereas smaller flakes, though easier to see through, are wetter and deposit more liquid on an aircraft.

Under an FAA contract, the National Center for Atmospheric Research installed snow gauges as part of a prototype icing information system around the Chicago, Denver, and LaGuardia (New York) airports and operated them for five years, from 1995 to 1999. The snow gauges automatically fed a measurement of the water in the falling snow to Sun workstations, which were also getting information from Doppler radars, surface weather radars, and such weather information as pressure, dewpoint, and wind statistics.

The system then output the liquid snowfall equivalent (a total for water fallen in the previous half hour) and forecast the amount of water likely to fall in the next half hour. Airport ground crews used the information to determine when and how often to de-ice aircraft; airport authorities used it to schedule snow removal and determine the appropriate method (sweeping, blowing, or plowing).
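The bookkeeping behind that half-hour total is straightforward to sketch. A minimal version, rolling per-minute gauge readings into a trailing 30-minute liquid-equivalent total (the readings are invented; the actual system also folds in radar and surface observations):

```python
# Sketch of the half-hour bookkeeping described in the text: roll per-minute
# snow-gauge readings (liquid water equivalent, in mm) into a trailing
# 30-minute total. The readings below are invented for illustration.
from collections import deque

WINDOW_MINUTES = 30

def half_hour_totals(per_minute_lwe_mm):
    """Yield the trailing 30-minute liquid-equivalent total after each reading."""
    window = deque(maxlen=WINDOW_MINUTES)  # oldest reading drops off automatically
    for reading in per_minute_lwe_mm:
        window.append(reading)
        yield sum(window)

# 45 minutes of steady snowfall at 0.05 mm of liquid water per minute.
readings = [0.05] * 45
totals = list(half_hour_totals(readings))
print(f"total after 30 min: {totals[29]:.2f} mm")
print(f"total after 45 min: {totals[44]:.2f} mm")  # window holds only the last 30
```

The forecasting half of the system, predicting the water likely to fall in the next half hour, is the hard part, and is where the Doppler radar and surface weather inputs come in.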

Last winter the system went into operation for the first time at the three New York-area airports, which were reportedly pleased with the results. A fourth airport in the United States is expected to be added this winter. Because airline decision-makers would like icing forecasts as far out as 4 hours into the future, research continues.

Icing is a hazard for planes in the air as well as those preparing to take off. For aircraft in flight, the National Center for Atmospheric Research has created what it terms a diagnosis of the icing environment. In other words, it has developed an algorithm combining forecasts from the National Weather Service with data from the sensors aboard weather satellites, data from ground-based weather radars, surface weather measurements, and pilot reports of icing in the air. The diagnosis gives icing information at different altitudes along any route across the United States, and is currently being posted by the center on the Aviation Digital Data Service Web page [Fig. 4]. The postings are labeled experimental, but can be used by any pilot who wants them for preflight planning.

Next year, the center will begin development of radar systems to analyze ice. Using a polarized, short-wavelength (8.6-mm) radar, researchers believe they can distinguish drizzle drops, which are nearly spherical, from flat or irregular ice crystals and use that data to determine just how hazardous icing conditions are. A future project will match a short-wavelength radar with a long-wavelength radar and use the reflectivity difference to measure the water from within clouds and precipitation.

This winter, the center also plans to utilize ice accretion detectors aboard aircraft, which flag the pilot in the cockpit to turn on his ice protection system (which either heats the leading edges of the wings or inflates a bladder that cracks off the ice). The center will demonstrate a product that will take data from the detectors and automatically downlink it to the ground-based computer system that is creating the icing diagnosis.

"With that information," said Marcia Politovich, the project scientist in charge of the center's flight icing product development team, "you can build a display that depicts in real time where an airplane is actually encountering icing. Our goal is to have a map of this information displayed in the cockpit within three to five years."

Waiting out the fog

Fog can contribute to disaster. In last summer's tragedy in which the plane piloted by John F. Kennedy Jr. plunged into the Atlantic, the cause was determined to be pilot disorientation because of fog or low clouds in the approach path.

For commercial aviation, though, fog does not typically lead to death, merely inconvenience, sometimes in the extreme. San Francisco-bound passengers, in particular, can be trapped on the ground at their departure airports for hours and hours, as controllers institute programs of planned delays, such as holding aircraft at the gate. (In making these judgments, the flow controllers at the FAA Air Traffic Control System Command Center in Herndon, Va., look at the U.S. air traffic system as a whole, including looming traffic jams in the sky, to determine when planes should be held at their departure gates, engines off to avoid burning fuel. They also decide when such delay programs can be canceled.)

While fog and low ceilings can cause delays anywhere, they are a particular problem in San Francisco, where runways are parallel and closely spaced, so that pilots must maintain visual separation during their final approach. Low ceilings, due to marine stratus clouds, cut San Francisco's peak arrivals from 55 per hour to 30 per hour on an average of 70 mornings each summer [Fig. 5], according to Wes Wilson, a member of the technical staff of the Massachusetts Institute of Technology's Lincoln Laboratory, Lexington, Mass.

Figure 5. Taken from a plane landing at San Francisco Airport, this image shows the marine stratus cloud layer, which dramatically cuts landing rates at that airport for 70 days each summer. Photo: Gary Randall/FPG International

Researchers can't make fog lift sooner. But if controllers knew just when the fog was going to lift, they could release planes from gate holds much sooner and bring the landing rate back up to full capacity the moment the approach clears.

"Right now," Wilson said, "if San Francisco waits until someone looks at the sky and sees that it's clear before releasing flights, the airport loses the benefit of available full capacity for an additional hour, since the nearest major airports (like Los Angeles) are at least an hour away. That comes out to 25 landing slots, which are worth about $200 000 in operational costs to the airlines. In addition, there are the costs of passenger inconvenience."

San Francisco now relies on human weather forecasters to make such decisions. Some days the forecasters are right; other days they are wrong.

"When they're wrong," Wilson said, "and the stratus clears much later than expected, there is a huge burden on the controllers who are handling all those extra planes up there."

It's not only San Francisco that faces the summer marine stratus problem regularly. Marine stratus also impacts Los Angeles, San Diego, Seattle, and Portland. "It's primarily a West Coast problem," Wilson told Spectrum. "And until you fly up and down the West Coast, you have no idea how devastating this problem can be."

Wilson, who heads up Lincoln Laboratory's Marine Stratus Project under an FAA contract, is striving to develop an automated forecast guidance system. "With an accurate forecast of the time of clearing, they can recover their available landing capacity more quickly," he said. This automated forecast guidance would not stand alone but would provide new information to the operational (human) forecaster. Wilson expects it to increase forecast accuracy dramatically.

Lincoln Laboratory researchers began their efforts on this project in 1995. For support, they enlisted the help of weather experts in the San Francisco Bay Area, including professors of meteorology Peter Lester and Doug Sinton at San Jose State University. The experts were asked what information, now lacking, would help them make better marine stratus forecasts.

One missing link turned out to be the height of the inversion base. At this altitude, temperatures, which until then have been decreasing with altitude, suddenly start increasing instead, by as much as 10 degrees in a few hundred feet. The inversion traps a pool of cool air near the ground, suppressing vertical mixing and preventing the fog from dissipating.

To measure the height of the inversion base, Wilson's team installed sonic detection and ranging (Sodar) instruments at San Francisco and San Carlos Airports. The Sodar uses 2-kHz sound waves to measure the refractive return from the inversion layer. The lower edge of the intensity spike marks the base of the inversion.

"This is a classic instrument used for studying atmospheric surface layers," Wilson explained. "We have developed algorithms, running on workstations, which measure and trace the evolution of the inversion layer, key information for the improvement of the stratus forecast."

In another group effort, a team of researchers at the University of Quebec in Montreal is developing a high-resolution one-dimensional model to analyze a vertical column of the lowest part of the atmosphere—specifically, the processes in the column that control the heating, mixing, and evaporation of liquid water. Because this model does not include explicit information about horizontal changes, researchers must merge this vertical analysis with regional weather influences, such as cooling breezes from the ocean, to provide an accurate forecast.
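To make the single-column idea concrete, here is a deliberately toy time-stepping sketch, in no way the Quebec group's model: each step, surface heating warms the lowest layer, diffusive mixing spreads heat upward, and any warming evaporates some of a layer's liquid water. All constants and the initial profile are invented for illustration.

```python
def step_column(temps_c, liquid_g_kg, heating=0.2, mix=0.1, evap=0.05):
    """One time step of a toy single-column stratus model.

    - Surface heating warms the lowest layer (index 0).
    - Diffusive mixing nudges adjacent layers toward each other.
    - Warming evaporates liquid water, thinning the cloud.
    Illustrative only; real models solve the full thermodynamics.
    """
    temps, liquid = list(temps_c), list(liquid_g_kg)
    temps[0] += heating                       # sun heats the surface layer
    for k in range(1, len(temps)):            # simple vertical mixing
        d = temps[k - 1] - temps[k]
        temps[k] += mix * d
        temps[k - 1] -= mix * d
    for k in range(len(liquid)):              # warmer air evaporates cloud
        warming = max(0.0, temps[k] - temps_c[k])
        liquid[k] = max(0.0, liquid[k] - evap * warming)
    return temps, liquid

# Step until the cloud water is gone: a crude "time of clearing."
T = [14.0, 13.5, 13.0, 12.5]     # temperatures by layer, deg C
L = [0.0, 0.3, 0.3, 0.2]         # liquid water by layer, g/kg
steps = 0
while any(l > 0 for l in L) and steps < 5000:
    T, L = step_column(T, L)
    steps += 1
```

The point of the real model is the same as this caricature: integrate the column processes forward to estimate when the liquid water, and hence the stratus deck, disappears.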

Finally, it is necessary to combine these results to provide a single consensus forecast.
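A consensus forecast could be as simple as a skill-weighted average of the individual predictions. This is a hypothetical sketch, not the project's actual combination scheme; the source names and weights are invented.

```python
def consensus(forecasts_min, weights):
    """Weighted consensus of clearing-time forecasts, in minutes after
    midnight, from independent sources (e.g. Sodar trend, column model,
    climatology). In practice weights would reflect historical skill."""
    total = sum(weights)
    return sum(f * w for f, w in zip(forecasts_min, weights)) / total

# Hypothetical inputs: Sodar says 9:40, the 1-D model 10:00,
# climatology 9:50; the Sodar trend is trusted most.
blended = consensus([580, 600, 590], [0.5, 0.3, 0.2])  # → 588.0 (9:48 a.m.)
```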

"Our goal," Wilson said, "is to be able to state that the stratus will clear at a specific time, and to provide an indication of the confidence of our prediction." The level of confidence may end up being expressed as a time window (plus or minus some number of minutes), as a percentage confidence level (say, clearing at 10 a.m. with 85 percent confidence), or as betting odds (clearing before 10 a.m., with a 5-to-1 chance of being right). It is important to provide this information in the way that is the most useful.

The Marine Stratus Project began giving information to forecasters this past summer as an operational demonstration. By the end of the summer of 2001, Wilson expects to have determined how well these automated forecast systems work and just how useful they are. At that point, the next step will be implementation at San Francisco and, eventually, at other affected airports.

Forecast: still speculative

Indeed, while many of these technologies will be ready for prime time in a few years, when they will come into wide use is mostly a matter of conjecture. Some of the algorithms developed with FAA funding will be implemented on existing systems and rolled out nationally. But the prospects for implementation of algorithms that must be customized for each location are less clear.

While the Marine Stratus Project will be implemented in the San Francisco Bay Area, plans for rollout to other locations are uncertain. In effect, the burden of funding on-board systems that can make use of the new data will fall on the airlines, which must decide when to upgrade their systems. As for when off-the-shelf systems will be available from aircraft equipment manufacturers like Honeywell, Boeing, and Rockwell, that is anybody's guess.

To some extent, implementation depends on the flying public. For example, would you be willing to pay a few dollars more to fly on a plane that has a 95 percent chance of staying out of turbulence? After my Caribbean encounter, I know I would.
