A camera system that captures a snapshot of overlapping light waves in a tiny fraction of a second could lead to new methods for imaging, allowing scientists to watch the brain’s neurons interacting or see neutrinos colliding with matter.
The camera system took snapshots at a rate of 100 billion frames per second, fast enough to capture a pulse of laser light spreading out in a Mach cone, the optical equivalent of the sonic boom created by an airplane traveling faster than the speed of sound.
“You can think of the laser source as the supersonic jet and everything is dragged behind. Instead of generating a sound, we’re generating a scattered wavelet,” says Jinyang Liang, a postdoctoral research associate in Lihong Wang’s Optical Imaging Lab at Washington University, in St. Louis. The researchers and their collaborators from Tsinghua University in China and the University of Illinois at Urbana-Champaign describe their work in today’s issue of Science Advances.
An airplane creates a Mach cone when it exceeds Mach 1, the speed of sound. Because the source of the noise, the plane's engines, is moving faster than the sound waves it emits, those waves can't outrun the plane; their wavefronts pile up along a cone-shaped envelope trailing the aircraft. The same thing can happen to light.
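The cone's geometry follows from a simple ratio: the sine of the cone's half-angle equals the wave speed divided by the source speed. A minimal sketch of that relation (the numbers below are illustrative, not from the paper):

```python
import math

def mach_angle(wave_speed, source_speed):
    """Half-angle of the Mach cone, in degrees: sin(theta) = wave_speed / source_speed.
    Only defined when the source outruns its own waves."""
    if source_speed <= wave_speed:
        raise ValueError("No cone forms: source must move faster than the waves")
    return math.degrees(math.asin(wave_speed / source_speed))

# A jet at Mach 2, with sound traveling ~343 m/s in air:
print(round(mach_angle(343.0, 2 * 343.0), 1))  # 30.0 degrees
```

The faster the source relative to its waves, the narrower the cone, which is why the trailing wake in the camera footage sharpens as the effective speed mismatch grows.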
To generate their optical Mach cone, the researchers made two silicone display panels, which they laced with aluminum oxide powder to scatter light toward the cameras. They placed the panels on opposite sides of an air-filled tunnel, then threw in a chunk of dry ice to create a light-scattering fog. The researchers then fired a laser beam through the tunnel. Because silicone has a higher index of refraction than air, the scattered light inside the panels travels more slowly than the laser pulse racing through the fog, so the source of the scattered wavelets outruns the waves themselves, just as the supersonic jet outruns its own sound.
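The speed mismatch comes straight from the refractive indices: light in a medium travels at c/n, so a higher index means slower waves. A quick sketch with illustrative index values (the paper's exact materials may differ):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def light_speed(n):
    """Phase velocity of light in a medium with refractive index n."""
    return C / n

# Illustrative indices: air is very close to 1, silicone rubber roughly 1.4
n_air, n_silicone = 1.0003, 1.4
v_tunnel = light_speed(n_air)       # laser pulse in the air-filled tunnel
v_panel = light_speed(n_silicone)   # scattered wavelets inside the silicone panels

# The pulse in the tunnel outruns the wavelets it spawns in the panels,
# so the "source" is effectively superluminal relative to the panel medium:
print(v_tunnel > v_panel)  # True
```

This is the same condition a supersonic jet satisfies in air, just transplanted to light in two media with different indices.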
Gif: Liang et al./Science Advances
To capture an image of the light waves, the team used three charge-coupled-device (CCD) cameras. One was a streak camera, which converts incoming photons into electrons and sweeps them between two plates across which the voltage is rapidly ramping. The rising voltage bends the electrons' paths more and more over time, so where each electron lands along the streak on the detector reveals when it passed between the plates, allowing the movement of a wave to be re-created. Streak cameras have been in use for a while, but they normally provide only a narrow, one-dimensional view of a phenomenon. In this case the researchers opened the streak camera's slit wider than normal to get a 2D view.
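The streak camera's core trick is a time-to-space mapping: electrons that arrive later see a higher deflection voltage and land farther along the detector. A minimal linear model of that mapping (the sweep rate and gain here are made-up illustrative parameters, not the instrument's real ones):

```python
def streak_position(arrival_time, sweep_rate, gain=1.0):
    """Map an electron's arrival time to its landing position on the detector.
    The deflection voltage ramps linearly with time (sweep_rate), so later
    electrons are bent more; gain converts deflection into detector position."""
    return gain * sweep_rate * arrival_time

# Electrons produced by three successive moments of a light pulse
# (arrival times in picoseconds, arbitrary position units):
times_ps = [0.0, 10.0, 20.0]
positions = [streak_position(t, sweep_rate=0.5) for t in times_ps]
print(positions)  # later arrivals land farther along the streak: [0.0, 5.0, 10.0]
```

Reading position along the streak thus recovers timing, which is how a single exposure can encode the evolution of a wave.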
They also used a patterned filter to impress a series of what were essentially bar codes on the image. Just as a CT scanner builds up a 3D picture of an organ from many X-ray slices, these “bar codes” allowed a computer to divide the single snapshot into slices and rebuild them into a three-dimensional data cube, separated in time and space, giving shape to what would otherwise have been just a smudge of light. The system also included two external cameras that did not use the streak approach; they provided different perspectives and increased the final resolution of the image.
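The encoding step can be sketched as a toy forward model: each frame of the scene is multiplied by a fixed pseudorandom "bar code" mask, sheared by the streak sweep so successive frames land on successive detector rows, and summed into one 2D snapshot. Everything below (sizes, mask, moving-spot scene) is an invented illustration of that idea, not the paper's actual reconstruction pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "movie": T frames of an H x W scene, a bright spot moving rightward
T, H, W = 3, 4, 4
movie = np.zeros((T, H, W))
for t in range(T):
    movie[t, 1, t] = 1.0

# Pseudorandom binary "bar code" mask, fixed across all frames
mask = rng.integers(0, 2, size=(H, W)).astype(float)

# Forward model: encode each frame with the mask, shear it down by one
# detector row per frame (the streak sweep), and integrate onto one image
detector = np.zeros((H + T - 1, W))
for t in range(T):
    detector[t:t + H, :] += mask * movie[t]

print(detector.shape)  # (6, 4): one 2D snapshot encoding the whole 3D data cube
```

Because the mask pattern is known, a computer can run this model in reverse (in practice with a sparsity-regularized solver) to untangle the summed snapshot back into its time slices.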
Whereas existing imaging technologies let scientists see small clusters of neurons firing, or view a larger neural network without resolving individual activity, this method may give them both a broad and a detailed view simultaneously, Liang says.