Software for Optical Systems Spells the End of Blur

NASA software that calculates optical aberrations will sharpen images from space and could redefine perfect vision for humans


Photo: Palomar Observatory
Night Vision: Earth's rotation produced these star trails behind Palomar Observatory's Hale Telescope.

Soon after the Hubble Space Telescope settled into orbit following its launch in 1990, astronomers discovered a big problem. The images Hubble sent back to Earth were blurry—embarrassingly, disappointingly blurry.

Several crack teams of engineers and scientists from NASA, industry, and academia worked frantically to resolve this predicament. The problem, they discovered, was that the primary mirror was misshapen, and the offending curve focused the incoming light incorrectly.

To rescue Hubble, experts worked for months to build new optics to intercept the light bouncing off the telescope's main mirror and correct its aberrant shape. By 1993, they had what they wanted. Astronauts spacewalked out from the space shuttle Endeavour to the malfunctioning telescope and replaced two of Hubble's original cameras with new optical systems. Hubble's images suddenly snapped into focus, and one of those new cameras ended up becoming the most productive astronomical instrument of the last 50 years.

What started out as NASA's greatest embarrassment set in motion a new wave of optics research with broad implications for astronomy and vision science. It took NASA three years, a space shuttle launch, and US $700 million to fix Hubble. We're hoping to eliminate all that trouble for future telescopes by measuring and analyzing blur in real time, using just the data in an image.

In our work at the Jet Propulsion Laboratory (JPL) at Caltech, in Pasadena, Calif., under a contract with NASA, we have built software to help us fix all kinds of blur, including but not limited to the type that afflicted Hubble.

The general idea is simple. We're taking advantage of the fact that our future telescopes will include flexible mirrors that bend and move upon command. By understanding the deficiencies in an image, we can compensate for them by remote control—no astronauts needed. The power of this method lies in its ability to use an optical system's existing camera as a sensor to detect its own error, without installing any separate devices. This software-based approach has already extended our telescopes' ability to peer into the darkness of the universe. On Earth, we believe the software could enable vision scientists to enhance human eyesight beyond "perfect" 20/20 vision, opening up the possibility of "superhuman" vision. 

Most adults don't have 20/20 vision. Imperfect vision is caused by aberrations in the way the lens of the eye transmits light to the retina. Those deviations occur in an important aspect of light called the wavefront, a set of points that are all in the same phase. You might be familiar with the term from ads for LASIK eye surgery. The vision-correcting procedure uses a beam of laser light that penetrates the eye, reflects off the retina, and travels back through the eye, capturing and mapping the errors in the way the wavefront strikes the eye.

What causes the wavefront to become distorted? Objects emit or reflect light in spherical waves. Our eyes intercept a small portion of that wave surface, and at great distances this surface is considered basically flat. To form a perfect image on your retina, your eye forces these flat waves to curve inward, so that the waves converge at one point on the focal plane behind your eye's lens. If the converging waves are not perfectly curved, not all of the light will come into focus at a single point on your retina. The result is a blurry or distorted image. All the points of deviation from the perfect spherical wave shape are called wavefront error. Plotted on a 2-D map, the error would look like mountainous terrain, with peaks and valleys corresponding to each deviation from the ideal flat surface.
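
To make the idea concrete, a wavefront error map is just a 2-D array of optical-path deviations from the ideal surface, and its severity is often summarized by a root-mean-square value. Here is a minimal NumPy sketch of such a map; the grid size, aberration terms, and coefficients are all invented for illustration and are not drawn from the authors' software.

```python
import numpy as np

# Sample the pupil on a 2-D grid (coordinates normalized to the aperture radius).
n = 256
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
pupil = (x**2 + y**2) <= 1.0                      # circular aperture mask

# A hypothetical aberrated wavefront, in nanometers of optical path difference:
# a little defocus plus some astigmatism (coefficients chosen arbitrarily).
ideal = np.zeros((n, n))                          # the perfect wavefront is flat
aberrated = 150.0 * (2 * (x**2 + y**2) - 1) + 60.0 * (x**2 - y**2)

wavefront_error = np.where(pupil, aberrated - ideal, 0.0)

# One common summary statistic: RMS wavefront error over the aperture.
rms_nm = np.sqrt(np.mean(wavefront_error[pupil] ** 2))
print(f"RMS wavefront error: {rms_nm:.1f} nm")
```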

Photo: Palomar Observatory
Star Power: Hale Telescope, at Palomar Observatory, has a 5.1-meter primary mirror.

The aperture of a telescope works much the same way as your eye, and the causes and fixes for blurry images are also similar. If we know the shape of the wavefront error, we can manufacture an optic with the opposite shape and cancel the imperfections. Scientists did just that for Hubble. LASIK operations are similar: Ophthalmologists calculate the opposite pattern of the aberrant light, and then lasers carve that shape into the cornea. It sounds gruesome, but it works.

A more exciting application is to alter the shape of a flexible lens or mirror to fine-tune the optical system on the fly. The next generation of space telescopes will be equipped with changeable or "active" optics, and many existing observatories on the ground have already upgraded the rigid, fixed optical systems of the past. The main feature is a deformable mirror, which is a thin, reflective surface that can be made from a variety of materials, including glass, beryllium, molybdenum, and composites. The mirror is mounted on a grid of actuators that are lined up like a bed of nails. By moving these actuators, we can morph the mirror to cancel the wavefront error exactly and produce a clear, sharp image. When tackling the rapidly changing wavefront error caused by turbulence in the atmosphere, we can even take the "twinkle" out of stars.
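
As a rough sketch of the cancellation step: at normal incidence, a reflective surface displaced by some amount changes the optical path by twice that amount, so the mirror is commanded to half the measured error with the opposite sign. The error pattern below is invented for illustration.

```python
import numpy as np

n = 64
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
pupil = (x**2 + y**2) <= 1.0

# Measured wavefront error in nanometers (a made-up defocus pattern).
error = np.where(pupil, 200.0 * (2 * (x**2 + y**2) - 1), 0.0)

# Reflection doubles the effect of a surface displacement, so command the
# mirror surface to half the error, with the opposite sign.
mirror_surface = -error / 2.0

# Residual wavefront after reflection off the deformed mirror.
residual = error + 2.0 * mirror_surface
print("peak residual (nm):", np.abs(residual).max())   # ~0 for a perfect correction
```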

The foundation for this wavefront-sensing technique emerged in 1904, when astrophysicist Johannes Hartmann mounted a giant screen across the aperture of the telescope at the Great Refractor observatory, in Potsdam, Germany. The screen was punctured with several holes. Light rays passing through each hole hit photographic plates both before and after they came into focus, producing two spot diagrams. Because he knew the distance between the two plates, Hartmann could determine the locations where these bundles of rays came into focus. This gave Hartmann a basic estimate of the aberrations in the telescope's primary lens.
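
The geometry behind Hartmann's measurement is simple: a spot's positions on two plates a known distance apart give the slope of that bundle of rays, and extrapolating to where the rays cross the optical axis locates their focus. A toy calculation, with all numbers invented:

```python
# Positions of one spot (distance from the optical axis, in mm) on two
# photographic plates placed before and after focus, and the plate locations
# along the optical axis (mm). All values are invented for illustration.
x1, z1 = 2.0, 0.0      # spot on the first plate
x2, z2 = -1.0, 150.0   # spot on the second plate

slope = (x2 - x1) / (z2 - z1)          # ray slope between the plates
z_focus = z1 - x1 / slope              # where the ray crosses the axis

print(f"ray slope: {slope:.4f}, focus at z = {z_focus:.1f} mm")
# Comparing z_focus for rays from different holes reveals how the focal
# position varies across the aperture -- that is, the lens's aberrations.
```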

Astronomers working on large telescopes used Hartmann's technique for more than 60 years. Then, in the late 1960s, the U.S. Air Force became interested in using ground-based telescopes to keep watch over satellites in orbit. They wanted to improve the images of satellites by simultaneously measuring the wavefront. But satellites are dim to begin with, and most of the light coming into the telescope was needed just to produce a basic image of the satellite. That posed a problem for the Hartmann screen, which blocks most of the incident light to produce the separate ray bundles.

Images: Sidd Bikkannavar/JPL
Wavefront Wipeout: The map of the wavefront's deviation becomes flat, and a sharper image [top] is produced.

Roland Shack, at the University of Arizona Optical Sciences Center, in Tucson, figured out a way to replace the Hartmann screen with a grid of tiny lenses, or lenslets, that could capture all the incoming light. He used a beam splitter to allow some of the incident light to pass through to the imaging camera and redirected the rest of the light to a new wavefront sensor. The lenslets focused light into hundreds or even thousands of tiny spots on the semiconductor photodetector arrays of a charge-coupled device camera. A flat, undistorted wavefront would form spots in a perfect grid pattern, but a distorted wavefront would result in a distorted grid. By examining how the erroneous spots deviated from the ideal, Shack could reconstruct a model of the wavefront error. Adding more lenslets to the array produced more spots, which led to more precise estimates of the error. On the other hand, using fewer lenslets allowed the wavefront to be computed more rapidly—fast enough to keep up with the atmospheric turbulence that ground-based observatories must contend with.
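
In essence, each lenslet's spot displacement, divided by the lenslet focal length, gives the local slope of the wavefront, and a least-squares fit turns those slopes into a wavefront model. The sketch below fits only tilt and defocus terms to keep things short; the lenslet layout, focal length, and coefficients are invented, and real reconstructors use far richer models.

```python
import numpy as np

# Lenslet centers across the pupil (normalized coordinates) and focal length (m).
xs, ys = np.meshgrid(np.linspace(-1, 1, 8), np.linspace(-1, 1, 8))
xs, ys = xs.ravel(), ys.ravel()
f_lenslet = 5e-3

# Spot displacements (meters), here simulated from a known wavefront
# W(x, y) = a*x + b*y + c*(x^2 + y^2), whose slopes are (a + 2cx, b + 2cy).
a_true, b_true, c_true = 1e-6, -2e-6, 3e-6
dx = (a_true + 2 * c_true * xs) * f_lenslet
dy = (b_true + 2 * c_true * ys) * f_lenslet

# Convert displacements back to slopes and solve the least-squares problem.
sx, sy = dx / f_lenslet, dy / f_lenslet
A_x = np.column_stack([np.ones_like(xs), np.zeros_like(xs), 2 * xs])
A_y = np.column_stack([np.zeros_like(ys), np.ones_like(ys), 2 * ys])
A = np.vstack([A_x, A_y])
rhs = np.concatenate([sx, sy])
a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
print(a, b, c)   # recovers a_true, b_true, c_true
```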

Versions of the Shack-Hartmann sensor are now used in the adaptive optics systems of most advanced telescopes. The results can be extraordinary. In the summer of 2007, astronomers at Palomar Observatory, in California, used an adaptive optics system built by our group to create the highest-resolution direct images of space ever recorded in visible light. Using a Shack-Hartmann wavefront sensor, a deformable mirror, and a camera called LuckyCam, built by the Institute of Astronomy in Cambridge, England, the astronomers captured images of the M13 globular cluster, about 25,100 light-years from Earth, that were twice the resolution of what Hubble can produce. In other words, a ground-based observatory outperformed its space-based cousin—an astonishing feat considering the significant distortions created by Earth's atmosphere. The combination of these technologies was named one of Time magazine's Best Inventions of 2007.

Images: New Eyes and Messier 100: ESA Hubble Space Telescope/NASA (4); M13 and Cat’s-Eye: Caltech/JPL/Cambridge/Palomar Observatory (4)
Blur Busters: Adaptive-optics software that corrects wavefront error cleared up these fuzzy images.

The Shack-Hartmann wavefront sensor isn't perfect, however. Its hardware introduces small flaws into the error measurement, and the number of lenslets in the sensor limits its resolution. A better approach is to measure the error right where a camera perceives it, without introducing extra equipment. For the slowly changing wavefront error that telescopes experience in space, we can do this, and at a much higher resolution than the Shack-Hartmann sensor can.

Let's say you're observing a certain star. No matter how perfect the optical system—be it a ground-based observatory, a space telescope, or the human eye—the optics will distort the light in some way, which degrades the image. But that degradation is also a signature imprinted on the image. We can decode that signature to reveal the shape of the wavefront error. This is the function of our software—it processes images to decode the underlying wavefront errors.

Light is an electromagnetic wave, so it has both an amplitude, which we see as the intensity of light, and a phase, which neither our eyes nor a camera can see directly. We want to detect the phase because it's essentially equal to the wavefront error. We'd like to detect it at a particular location, namely at the aperture of the telescope, which is located at the large primary mirror. To do so we must use an indirect "phase retrieval" method, where we process image data together with a computational model of the optical system. Engineers Ralph Gerchberg and W. Owen Saxton proposed this technique in the 1970s while working on electron microscopes.

Phase retrieval begins with the shape of the telescope's aperture, which sets the parameters for our solution. We set the amplitude to match this aperture shape, and we set the phase randomly. Then we turn to our computational model, which calculates how a wave moves between the aperture and the image plane of an optical system.

The model sends the phase and amplitude information through the optical system to the image plane, where the camera resides. This transformation adjusts the values of the amplitude and phase to correspond with the new plane. Then we use a real image, taken by the camera, and replace the old amplitude with a new value from that image.

Next, we use the model in reverse, to send the light back to the aperture. Once again, we use the aperture to constrain the result, giving us a new estimate of the phase. This basic algorithm is repeated until the image generated by the computer model agrees with the image data measured by the camera.
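
A bare-bones version of this back-and-forth, the classic Gerchberg-Saxton iteration, can be written with a pair of Fourier transforms standing in for the propagation model. Everything below, from the grid size to the test aberration used to fake a "measured" image, is illustrative only and is not the authors' code.

```python
import numpy as np

n = 128
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
aperture = ((x**2 + y**2) <= 1.0).astype(float)     # known pupil amplitude

# Simulate a "measured" image from a hidden wavefront error (for the demo only).
true_phase = 1.5 * (x**2 - y**2) * aperture
measured_amp = np.abs(np.fft.fft2(aperture * np.exp(1j * true_phase)))

# Gerchberg-Saxton: alternate between the aperture and image-plane constraints.
phase = np.random.uniform(-np.pi, np.pi, (n, n))     # random starting phase
for _ in range(200):
    pupil_field = aperture * np.exp(1j * phase)                       # aperture constraint
    image_field = np.fft.fft2(pupil_field)                            # model: pupil -> image
    image_field = measured_amp * np.exp(1j * np.angle(image_field))   # impose measured amplitude
    back = np.fft.ifft2(image_field)                                  # model in reverse
    phase = np.angle(back)                                            # new phase estimate

estimate = phase * aperture
```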

The method generates wavefront estimates that are consistent with a single image, but it does not resolve all ambiguities. Worse, it fails completely if the wavefront error exceeds the wavelength of light. To fix those shortcomings, we must process multiple images at the same time. The Hubble analysis of the early 1990s incorporated this realization, and we've been working on this problem ever since.

Our software runs multiple loops, each processing one of many images taken at different focus settings. The results from all these loops are combined to create a joint estimate of the wavefront error. We then run all those loops again, this time using the joint estimate to set the starting phase for each loop. We repeat the operation until the joint estimate converges on a single wavefront estimate.
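
A schematic of that outer loop might look like the following, where each image carries a known amount of added defocus (the "focus setting") and the per-image phase estimates are averaged into the joint estimate that seeds the next round. This sketch ignores phase-wrapping subtleties and uses invented defocus amounts and aberrations; it illustrates the loop structure, not the authors' actual software.

```python
import numpy as np

n = 128
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
aperture = ((x**2 + y**2) <= 1.0).astype(float)
defocus = (2 * (x**2 + y**2) - 1) * aperture          # unit defocus shape

def gs_pass(measured_amp, extra_phase, start_phase, iters=50):
    """One inner loop: estimate the common phase, given a known added defocus."""
    phase = start_phase.copy()
    for _ in range(iters):
        field = aperture * np.exp(1j * (phase + extra_phase))
        img = np.fft.fft2(field)
        img = measured_amp * np.exp(1j * np.angle(img))
        back = np.fft.ifft2(img)
        phase = (np.angle(back) - extra_phase) * aperture
    return phase

# Simulated measurements at several focus settings (demo only).
true_phase = 1.0 * (x**2 - y**2) * aperture
diversities = [-1.5 * defocus, 0.0 * defocus, 1.5 * defocus]
amps = [np.abs(np.fft.fft2(aperture * np.exp(1j * (true_phase + d)))) for d in diversities]

# Outer loop: each image gets its own pass; the results are averaged into a
# joint estimate that seeds the next round.
joint = np.zeros((n, n))
for _ in range(10):
    estimates = [gs_pass(a, d, joint) for a, d in zip(amps, diversities)]
    joint = aperture * np.mean(estimates, axis=0)
```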

Our main innovation at JPL was to improve the model of the true optical system while it was running. This has allowed us to compensate for one of the original method's main failings. Recall that a perfect wavefront is essentially flat, and an aberrant wavefront has high peaks and deep valleys. As we mentioned earlier, if the length of a peak or valley exceeds the wavelength of light, the Gerchberg-Saxton algorithm breaks down. Now we have a workaround. Early runs of our algorithm provide guesses as to what is really happening in these problem spots, and we can use that information to guide the solution.

By adapting our optical model, we can also reduce inconsistencies in the data. What if the telescope moved while taking some of the images? Or what if the focus changed? The software tracks these things and puts them in the model. With each iteration of the algorithm, the model updates and improves—and the wavefront-error solution becomes more accurate. To see just how accurate it has become, consider that the test mirror that caused Hubble's problems was misaligned by just the thickness of a human hair. Using our algorithms, we've demonstrated wavefront-sensing accuracy to 2 billionths of a human hair.

The major benefit of our software, called the modified Gerchberg-Saxton algorithm, is that it can work with almost any type of optical hardware or deformable mirror. If an observatory has only a single mechanically movable lens, the software will calculate how the lens's position must change to best focus the final image. If a more sophisticated deformable mirror is in place, the algorithm can determine the precise voltages needed to adjust the mirror's actuators.

To train the software to make those corrections, we must toggle each of the optical system's movable components by a small amount. The software measures how each movement affects the wavefront error. It then reverses the error to find the precise movements necessary to create the ideal wavefront shape for that particular optical system.
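
One common way to express this calibration is as an influence matrix: poke each actuator, record the change it produces in the measured wavefront, stack those responses into a matrix, and then solve a least-squares problem for the commands that cancel a given error. The sketch below uses a made-up linear system as a stand-in for the real sensor and optics; the function name `measure_wavefront` and all dimensions are invented.

```python
import numpy as np

n_pix, n_act = 1024, 32          # wavefront samples and actuator count (invented)
rng = np.random.default_rng(0)

# The "real" optical system, unknown to the controller (simulation stand-ins).
true_influence = rng.normal(size=(n_pix, n_act))   # how each actuator moves the wavefront
aberration = rng.normal(size=n_pix)                # the error we want to cancel

def measure_wavefront(commands):
    """Stand-in for the wavefront sensor (e.g., the phase-retrieval software)."""
    return aberration + true_influence @ commands

# Calibration: toggle each actuator by a small amount and difference the wavefronts.
poke = 1e-2
baseline = measure_wavefront(np.zeros(n_act))
influence = np.zeros((n_pix, n_act))
for k in range(n_act):
    cmd = np.zeros(n_act)
    cmd[k] = poke
    influence[:, k] = (measure_wavefront(cmd) - baseline) / poke

# Control: solve for the commands that best cancel the measured error.
commands = np.linalg.lstsq(influence, -baseline, rcond=None)[0]
print("residual RMS:", np.sqrt(np.mean(measure_wavefront(commands) ** 2)))
```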

The software has already contributed to science at Palomar, where it is now used with the observatory's existing adaptive-optics system. Researchers have also begun to apply this technique to a wide range of endeavors in space science.

Let's say you're peering through a telescope in search of undiscovered planets. One challenge is that the brightness of a star may obscure interesting objects near it. But if you can alter the shape of the wavefront, you can in effect redistribute the stellar light away from the orbiting companion, without blocking, filtering, or otherwise losing any of the light. Two exoplanet astronomers at Palomar Observatory have begun testing this light-altering technique to image candidate stars in search of faint companions.

Our wavefront-sensing algorithm has another valuable feature. In the previous example, we described how the technique can change the optics before an image is made. But we can also use it to fix existing blurry pictures. The estimate the algorithm produces can be transformed to what's called a point-spread function, which tells us how the error creates blur around a perfect point of light. We can plug this information into a number of advanced image-processing techniques to achieve "superresolution" in an image. Superresolution refers to the ability to obtain information beyond what was recorded by a camera—in other words, the algorithm can help us restore image data that was lost by a camera's imperfect optical system.
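
For the common case of incoherent imaging, the point-spread function is the squared magnitude of the Fourier transform of the pupil field, so a retrieved wavefront error leads directly to a PSF, which in turn can drive a standard deconvolution. The sketch below uses a simple Wiener filter as the image-processing step; the aberration, test scene, and regularization constant are invented, and the superresolution methods described in the article are more sophisticated than this.

```python
import numpy as np

n = 128
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
aperture = ((x**2 + y**2) <= 1.0).astype(float)
wavefront_error = 1.2 * (x**2 - y**2) * aperture       # retrieved phase, radians (invented)

# Point-spread function: squared magnitude of the Fourier transform of the pupil field.
pupil_field = aperture * np.exp(1j * wavefront_error)
psf = np.abs(np.fft.fft2(pupil_field)) ** 2
psf /= psf.sum()

# Blur a test scene with that PSF, then sharpen it with a simple Wiener filter.
scene = np.zeros((n, n))
scene[40, 40], scene[80, 90] = 1.0, 0.5                # two point sources
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

H = np.fft.fft2(psf)
wiener = np.conj(H) / (np.abs(H) ** 2 + 1e-3)          # regularized inverse filter
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * wiener))
```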

We tried out this application with data from NASA's Deep Impact mission, launched in 2005, which sent a probe to crash into a comet named 9P/Tempel to stir up dust for astronomers to observe. As the probe journeyed through space, its science team discovered that the spacecraft's high-resolution imager was out of focus. On a whim, we asked the researchers for the data from a picture taken before the impact. We used the modified Gerchberg-Saxton algorithm to calculate the imager's wavefront error and then superresolved an image of Earth's moon. Sure enough, it snapped into focus.

Photo: Palomar Observatory
Sharp Shooter: The high-speed LuckyCam uses adaptive-optics software to beat atmospheric blur.

The next generation of space telescopes will make good use of the clarifying powers of our software. To fit inside a rocket, newer space telescopes will have mirrors that are segmented and deployed on orbit, where they will unfurl like the petals of a flower. By taking a few images and checking them, the modified Gerchberg-Saxton software can assess and correct the space telescope's wavefront error on orbit. We demonstrated the feasibility of this approach during the early stages of the James Webb Space Telescope project, which will be able to look much deeper into the universe than Hubble can. For the many contributions we've been able to make to the nation's space program, our group received NASA's Software of the Year Award in 2007.

We're far from finished with our work. Although we designed the software to sense wavefront error in light, the calculations are valid for any part of the electromagnetic spectrum. The Deep Space Network, a ground-based system of three 70-meter satellite dishes that JPL runs to communicate with NASA's spacecraft, suffers from the same basic problem of wavefront error. Our algorithm could be adapted to correct the network's aberrations.

Using Heat to Defeat Blur

One unexpected way to fix blurry images involves heaters. Spacecraft instruments undergo large temperature swings, depending on their place in orbit and the angle of the sun. As a result, they tend to be kept under careful thermal control. Using our algorithm, we can figure out how to operate a nearby heater to change the optics to correct wavefront error.

Imagine, for example, that some component in an optical system changes its shape over time—a primary mirror warps under mechanical strain, causing light to bend in unintended ways and producing unfocused images. We could apply heat to expand the support structures, changing the distance between two mirrors in a space telescope.
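
The arithmetic involved is back-of-the-envelope: linear thermal expansion gives ΔL = αLΔT, so the temperature change needed for a desired change in mirror spacing follows directly. The material, strut length, and target adjustment below are invented for illustration.

```python
# Back-of-the-envelope thermal actuation (all values invented for illustration).
alpha = 23e-6      # coefficient of thermal expansion for aluminum, 1/K
length = 1.5       # length of the support structure between mirrors, m
delta_l = 5e-6     # desired change in mirror spacing, m (5 micrometers)

delta_t = delta_l / (alpha * length)
print(f"required temperature change: {delta_t:.2f} K")   # about 0.14 K
```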

We could also use heaters mounted to a particular mirror to bend it in a desirable way. We actually did this on the 1992 Mars Observer mission prior to its unfortunate demise; we are now building mirrors with heaters deliberately placed so that we can use them to control the wavefront.

But there's more. Our own eyes may one day also become beneficiaries of the software. Prescription eyeglasses correct for focus and perhaps astigmatism, but those are just two ways that wavefronts can be distorted. In reality, there are many other subtle, higher-order effects that can degrade your vision. Correcting these effects will lead to what vision science researchers have termed superhuman vision, the enhancement of visual acuity beyond normal 20/20 vision.

Vision scientists already use techniques for wavefront sensing and control. Most modern LASIK procedures, for example, use Shack-Hartmann wavefront sensors to determine eye aberrations. But as we discussed earlier, the sensor's resolution is limited by the density of its lenslet array. The modified Gerchberg-Saxton algorithm, by contrast, is limited only by the resolution of the camera used to capture the points of light, which is in the realm of tens of millions of pixels. Some vision scientists have proposed the possibility of replacing their aberration-measuring equipment with our computations for the human eye.

We're also working on cutting down the amount of time the algorithm needs to complete the complex calculations. Our software performs a massive number of calculations, which can take a long time on a regular microprocessor. By translating these computations into a graphics-processing problem, we can offload much of the work from the CPU to run on high-performance graphics cards, in effect turning our desktop PCs into supercomputers. Now we can generate wavefront-error results in seconds, down from about 5 minutes. Evolving graphics technology will help the software come closer to running in real time.
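
One way to get this kind of speedup today is to move the Fourier transforms at the heart of the iteration onto the GPU with an array library such as CuPy, which mirrors much of NumPy's interface. The sketch below assumes a CUDA-capable GPU and the `cupy` package, and reuses the loop structure of the earlier Gerchberg-Saxton example with a placeholder measurement; it is not the authors' implementation.

```python
import cupy as cp    # GPU-backed, drop-in replacement for much of NumPy

n = 1024
y, x = cp.mgrid[-1:1:1j * n, -1:1:1j * n]
aperture = ((x**2 + y**2) <= 1.0).astype(cp.float64)
measured_amp = cp.abs(cp.fft.fft2(aperture))        # placeholder "measurement"

phase = cp.random.uniform(-cp.pi, cp.pi, (n, n))
for _ in range(200):
    field = aperture * cp.exp(1j * phase)
    img = cp.fft.fft2(field)                        # FFTs run on the GPU
    img = measured_amp * cp.exp(1j * cp.angle(img))
    phase = cp.angle(cp.fft.ifft2(img))

result = cp.asnumpy(phase * aperture)               # copy the answer back to the CPU
```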

Ultimately, our dream is to be able to correct astronomical images "on sky," with the software running in real time and using only camera imagery to continuously determine and correct wavefront error. On Earth, our goal is for crisp, unprecedented visual clarity. In all, we're helping to write a new legacy for Hubble, and we're turning a boondoggle into a boon for the entire electromagnetic spectrum.

The authors would like to thank their collaborators, especially Scott Basinger, Joseph Green, Catherine Ohara, and Fang Shi.

This article originally appeared in print as "The End of Blur".

About the Author

Sidd Bikkannavar and David Redding work at NASA's Jet Propulsion Laboratory. Bikkannavar first encountered IEEE Spectrum's staff in the Australian outback, where he was helping to run a solar-car race the magazine was covering. Both authors say they stumbled into careers in optics. Redding did so while working on control systems for positioning laser beams and "discovered we didn't really know how to point mirrors." He got curious and, as he puts it, "one day I woke up an optical engineer."
