Leonardo da Vinci sketched out tanks, helicopters, and mechanical calculators centuries before the first examples were built. Now another of his flights of imagination has finally been realized—an imaging device capable of capturing every optical aspect of the scene before it.
Lytro, a Silicon Valley start‑up, has just launched the world’s first consumer light-field camera, which shoots pictures that can be focused long after they’re captured, either on the camera itself or online. Lytro promises no more blurry subjects, and no shutter lag waiting for the camera’s lens to focus. A software update to the camera, coming soon, will even let you produce 3-D images.
Light-field technology heralds one of the biggest changes to imaging since 1826, when Joseph-Nicéphore Niépce made the first permanent photograph of a scene from nature. A single light-field snapshot can provide photos where focus, exposure, and even depth of field are adjustable after the picture is taken. And that’s just for starters. The next generation of light-field optical wizardry promises ultra-accurate facial-recognition systems, personalized 3-D televisions, and cameras that provide views of the world that are indistinguishable from what you’d see out a window.
But light-field cameras also demand serious computing power, challenge existing assumptions about resolution and image quality, and are forcing manufacturers to rethink standards and usability. Perhaps most important, these cameras require a fundamental shift in the way people think about the creative act of taking a photo.
In his manuscripts on painting, Leonardo wrote, “The air is full of an infinite number of radiant pyramids caused by the objects located in it. These pyramids intersect and interweave without interfering with each other. … The semblance of a body is carried by them as a whole into all parts of the air, and each smallest part receives into itself the image that has been caused.”
Nowadays, scientists and engineers prefer to think in terms of light rays rather than Leonardo’s more poetic “radiant pyramids.” But light-field photography is based precisely on his idea that the light arriving at any point—what he called the “smallest part” of the air—carries all the information necessary to reproduce any view that can be had from that position.
Doesn’t an ordinary camera do that? Not at all. In a conventional digital camera, the light rays hitting each point on the image sensor combine. The sensor records the total intensity of the light rays landing on each point, or photosite, but in the process loses directional information about where the different rays came from. So the best a typical camera can provide is the familiar two-dimensional photograph, which has a fixed point of view and a focus determined entirely by how the lens was set when the photo was snapped.
Light-field photography is far more ambitious. Instead of merely recording the sum of all the light rays falling on each photosite, a light-field camera aims to measure the intensity and direction of every incoming ray. With that information, you can generate not just one but every possible image of whatever is within the camera’s field of view at that moment. For example, a portrait photographer often adjusts the lens of the camera so that the subject’s face is in focus, leaving what’s behind purposefully blurry. Others might want to blur the face and make a tree in the background razor sharp. With light-field photography, you can attain either effect from the very same snapshot.
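One standard way to generate those different images is shift-and-sum refocusing: each directional view of the scene is translated in proportion to its position in the aperture, then all views are averaged. The sketch below is a minimal illustration, assuming a hypothetical layout in which the light field is stored as a 4-D array of small sub-aperture images; a real camera's processing pipeline is considerably more involved.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-sum synthetic refocusing (a minimal sketch).

    light_field: 4-D array indexed [u, v, y, x] -- one small image per
                 direction sample across the aperture (assumed layout).
    alpha: focus parameter; 0 keeps the original focal plane, while
           larger magnitudes pull focus nearer or farther.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view by an amount proportional to its
            # offset from the center of the aperture.
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Sweeping `alpha` through a range of values produces the familiar Lytro effect: the same snapshot rendered with focus at different depths.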
The information a light-field camera records is, mathematically speaking, part of something that optics specialists call the plenoptic function. This function describes the totality of light rays filling a given region of space at any one moment. It’s a function of five dimensions, because you need three (x, y, and z) to specify the position of each vantage point, plus two more (often denoted θ and φ) for the angle of every incoming ray.
When measuring light in a region that’s free of any obstructions, you have to keep track of only four dimensions rather than five. Think about it: If you know that a ray isn’t blocked, it’s simple to follow where it goes. Record where it hits one plane (x and y) and the angle at which it hits (θ and φ) and you can work out where it came from and where it’s headed. The same is true for any other ray hitting that plane at any angle. So with just the knowledge of the light crossing a single plane, you can calculate the position and direction of the rays filling the surrounding region, so long as there are no obstructions present. This four-dimensional function is called the light field (hence the term light-field camera).
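The free-space argument above can be made concrete in a few lines of code. Assuming a simple parameterization in which θ and φ are the ray's angles from the plane's normal (measured in the x–z and y–z planes respectively), propagating a ray to a parallel plane a distance dz away leaves the angles unchanged and merely shifts the intersection point:

```python
import math

def propagate(x, y, theta, phi, dz):
    """Carry a ray recorded at one plane to a parallel plane dz away.

    In unobstructed space a ray travels in a straight line, so its
    angles stay fixed and its intersection point shifts by dz times
    the tangent of each angle -- which is why the same four numbers
    (x, y, theta, phi) describe the ray everywhere along its path.
    Angles are in radians; the parameterization is an assumption
    chosen for illustration.
    """
    return (x + dz * math.tan(theta),
            y + dz * math.tan(phi),
            theta,
            phi)
```

A ray hitting the reference plane head-on stays put no matter how far it is propagated, while a tilted ray drifts sideways at a rate set by its angle; nothing beyond the four recorded numbers is ever needed.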
All this has been known for many years. Back in 1908—the same year he won the Nobel Prize in physics for color photography—the French scientist Gabriel Lippmann invented something he called “the integral camera.” His idea was to use an array of tiny lenses to project a scene onto a single sheet of film. The multiple views these lenses recorded could then be reconstituted into a 3-D image by viewing the processed film through an identical lens array. Three years later, Russian physicist P. P. Sokolov constructed the first integral camera using a pinhole array instead of the harder-to-fabricate lenses that Lippmann envisioned. Building the Lytro camera, however, required technologies that would not be realized for almost another century.