16 November 2011—All cameras capture the intensity of light as it strikes their imaging chips. Color filters provide a second set of data, sorting the rays into different wavelengths. But new devices—including one produced commercially and others still in the lab—are starting to capture a third piece of information: angle. This allows cameras to go beyond focusing on a single plane to measuring images at many different depths of a scene at once. Cameras that capture both intensity and angle will allow for refocusing already-snapped pictures, lensless cameras, and the creation of 3-D images with a single camera, according to researchers in several competing groups.
Alyosha Molnar, assistant professor of electrical and computer engineering at Cornell University, in Ithaca, N.Y., has developed angle-sensitive pixels for CMOS imagers. Each pixel is made up of a photodiode beneath two layers of diffraction grating, one of which is slightly out of alignment with the other. The top grating creates an interference pattern on the grating beneath it. Depending on how that pattern of light and darkness lines up with the second grating, light will either pass through to the photodiode below or be blocked. So whether the diode sees bright or dim light depends on the angle at which the light arrives.
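That angle dependence can be caricatured in a few lines. This is a toy model of my own, not Molnar's device physics: the sinusoidal response, the `sensitivity` scale factor, and the `grating_offset_deg` parameter are all illustrative assumptions standing in for the real interference behavior.

```python
import numpy as np

# Toy model: the top grating's interference pattern shifts laterally as
# the incidence angle changes, and transmission to the photodiode depends
# on how that shifted pattern lines up with the second grating. A
# sinusoidal response with a built-in grating offset captures the idea.

def pixel_response(angle_deg, grating_offset_deg=0.0, sensitivity=10.0):
    # Peak transmission when the shifted pattern aligns with the gaps of
    # the lower grating; minimum when it aligns with the bars.
    phase = np.deg2rad(sensitivity * angle_deg - grating_offset_deg)
    return 0.5 * (1.0 + np.cos(phase))

print(pixel_response(0.0))   # aligned with the grating gaps: bright
print(pixel_response(18.0))  # shifted by half a period: dark
```

Pixels fabricated with different `grating_offset_deg` values would peak at different angles, which is why an array of differently oriented pixels can sample the angular structure of the light.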
Forming an image requires a detector with pixels in many different orientations and gratings with different spacings. Taken together, all the pixels produce a series of measurements that can be processed using a mathematical function known as the Fourier transform to create an image with any focus the user chooses. “The chip gives you the transformed version of a standard bitmap,” Molnar says. “You transform it back.” And because the camera can use the same data to produce two images at different depths, it can also generate a 3-D image. Molnar says that smartphones may one day make 3-D movies using this technology.
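A minimal sketch of the "transform it back" step, under a deliberately simplified assumption: that the chip's output is just the 2-D Fourier transform of the scene. The real reconstruction and refocusing math is more involved, but the round trip below shows the basic idea of recovering a bitmap from transformed measurements.

```python
import numpy as np

# Stand-in "scene": a simple 2-D intensity pattern.
scene = np.zeros((64, 64))
scene[24:40, 24:40] = 1.0  # a bright square

# Simplified model of what the chip reports: the scene's 2-D Fourier
# transform rather than a directly viewable bitmap.
chip_output = np.fft.fft2(scene)

# "You transform it back": invert to recover the standard bitmap.
recovered = np.real(np.fft.ifft2(chip_output))

print(np.allclose(recovered, scene))  # prints True
```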
According to Molnar, camera chips with angle-sensitive pixels—known as light-field imagers—should be cheap to produce, because the gratings are inscribed in the layers of wires that are already built into the detector. The diffraction gratings can be added simply by changing the interconnect layout in an image chip’s design. “The result is, it’s basically free,” he says.
A newer approach, which Molnar is presenting in December at the IEEE International Electron Devices Meeting, would inscribe the gratings in the glass on the surface of the detector. That generates the same diffraction patterns while blocking less light, making the device more sensitive.
Molnar says his approach should be cheaper than a similar technology already hitting the market. Lytro, a start-up in Mountain View, Calif., has introduced a consumer camera that allows users to snap pictures first and focus later. The company started taking orders for the cameras in October and promises to start shipping them in 2012. The product is based on technology developed at Stanford by Lytro founder and CEO Ren Ng.
The heart of the device is an array of microlenses that lie over the detector. The microlenses focus light rays from different angles on different pixels in the detector, which yields images at many depths but leads to fairly low resolution. “You lose a lot of resolution, but you get freedom of refocusing afterwards,” says Ramesh Raskar, head of the Camera Culture group at the MIT Media Lab, who is familiar with the technology but not involved with Lytro.
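The refocus-after-the-fact idea behind the microlens approach can be sketched with the classic "shift-and-add" scheme: each microlens yields one sub-aperture view of the scene, and refocusing at a chosen depth amounts to shifting each view in proportion to its angular position and averaging. The function names, the 3-by-3 angular grid, and the `depth` parameter below are illustrative, not Lytro's actual pipeline.

```python
import numpy as np

def refocus(views, depth):
    """Shift-and-add refocusing.

    views: dict mapping angular offset (u, v) -> 2-D sub-aperture image.
    depth: pixel shift per unit angular offset; choosing it selects
           which scene depth comes into focus.
    """
    acc = None
    for (u, v), img in views.items():
        # Shift each view by an amount proportional to its angle.
        shifted = np.roll(np.roll(img, int(round(u * depth)), axis=0),
                          int(round(v * depth)), axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(views)

# Toy light field: a 3x3 grid of views of a flat gray scene.
base = np.full((8, 8), 0.5)
views = {(u, v): base.copy() for u in (-1, 0, 1) for v in (-1, 0, 1)}
refocused = refocus(views, depth=2)
print(refocused.shape)  # (8, 8)
```

The resolution loss Raskar mentions falls out of this layout: the sensor's pixels are divided among the sub-aperture views, so each refocused image has only a fraction of the sensor's native pixel count.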
Raskar has come up with a different approach. He places a patterned piece of glass between the camera’s lens and the image sensor chip. Raskar likens the effect to light passing through a screen door: the screen attenuates the light rays entering the lens, with the amount of attenuation depending on the angle of the ray. He says his method is similar to the microlens technique but with no loss in resolution. However, for consumers, the Lytro approach might be preferable, Raskar says, because it doesn’t require as much computing power as his technique.
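The screen-door intuition can be sketched in a few lines. This is an illustrative toy, not Raskar's actual mathematics: each ray is attenuated by the mask value at the point where it crosses the mask plane, so rays arriving at the same sensor pixel from different angles are coded differently.

```python
import numpy as np

# A periodic "screen door" pattern in the mask plane.
mask = np.array([1.0, 0.2, 1.0, 0.2])

def attenuation(x_sensor, angle, mask_distance=1.0):
    # A ray reaching sensor position x_sensor with the given slope
    # crossed the mask plane at x_sensor + angle * mask_distance.
    crossing = int(round(x_sensor + angle * mask_distance)) % len(mask)
    return mask[crossing]

# Two rays hitting the same pixel from different angles pick up
# different attenuations -- that angular coding is what later
# processing exploits to recover the full light field.
print(attenuation(0, 0), attenuation(0, 1))  # 1.0 0.2
```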
Molnar thinks the existence of three technologies—Lytro’s, Raskar’s, and his—will help push the concept of computational photography forward. And Raskar agrees. “It’s redefining the notion of a camera,” he says. “When we went from film to digital, we really didn’t see any variations in the technology of photography itself. I think now we’re seeing a second generation of camera technology—that you will actually be able to go beyond what a film camera could do.”
A version of this article appeared in the December 2011 print edition of IEEE Spectrum.
About the Author
Neil Savage writes about strange semiconductors and amazing optoelectronics from Lowell, Mass. In October 2011 he reported on a laser-powered mechanical memory chip.