Color Night Vision Brought to You by AI

New method detects visible colors in pitch darkness

2 min read

Conventional visible-spectrum photographs are composed of red, green, and blue channels in the visible spectrum. The authors used infrared images of three different wavelengths and deep learning to predict the visible-spectrum image.

Browne Lab/UC Irvine Department of Ophthalmology

Night-vision systems that use infrared light to see in the dark typically render views as monochromatic images. Now, in a new study, with the help of deep-learning AI, scientists have devised a way for infrared night vision to reveal the visible colors of a scene even in the absence of visible light.

“A lot of the world is color-coded in ways that we rely on to make decisions—think of traffic signals, for instance,” says study lead author Andrew Browne, an engineer, surgeon, and vision scientist at the University of California, Irvine.

Night-vision systems that work by illuminating the world with infrared light typically render scenes only in green, failing to reveal the colors that are visible under normal light. Newer systems employ ultrasensitive cameras to amplify visible light, but these do little to reveal the colors in pitch-black settings that have no light to amplify.

In the new study, the scientists reasoned that each dye and pigment that gives an object a visible color not only reflects a set of visible wavelengths but may also reflect a set of infrared wavelengths. If they could train a night-vision system to identify the infrared fingerprint of each dye and pigment, they could display images using the visible colors linked with each of those compounds.

“This began when I was sitting and thinking, ‘Is there a way to do everything in the dark?’” Browne says.

Working together with computer scientist Pierre Baldi at the University of California, Irvine, Browne and his colleagues experimented with a camera that was sensitive to both visible and near-infrared light and connected to a convolutional neural network, a type of deep-learning model often used in image recognition. They used this system to analyze 140 images of faces printed using cyan, magenta, yellow, and black inks and lit under multispectral illumination spanning both visible and infrared wavelengths.

The scientists found that when tested with 20 printed faces that were lit only with invisible near-infrared light, their system could successfully predict and reconstruct how these faces would look in visible light.
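The idea of learning a mapping from a few infrared "fingerprint" channels to visible color can be sketched in miniature. The toy below is not the authors' convolutional network: it substitutes a simple linear least-squares fit on synthetic per-pixel reflectances, assuming (purely for illustration) that each pigment's three near-infrared responses determine its visible RGB color.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each row is one pigment sample.
# Inputs: responses at three near-infrared wavelengths;
# targets: the pigment's visible RGB color under normal light.
n_train = 140                                  # mirrors the 140 training images
A = rng.uniform(size=(3, 3)) + np.eye(3)       # hidden IR-to-RGB relationship
ir_train = rng.uniform(size=(n_train, 3))      # IR reflectance "fingerprints"
rgb_train = ir_train @ A                       # visible colors they map to

# Learn the IR-to-RGB mapping (least squares stands in for the
# study's deep network, which learns a far richer mapping).
M, *_ = np.linalg.lstsq(ir_train, rgb_train, rcond=None)

# "Color night vision": predict visible colors for 20 unseen
# samples observed only under invisible near-infrared light.
ir_test = rng.uniform(size=(20, 3))
rgb_pred = ir_test @ M
rgb_true = ir_test @ A
print(np.max(np.abs(rgb_pred - rgb_true)))     # tiny for this toy setup
```

In the real study the relationship between infrared fingerprints and visible color is not a clean linear map, which is why a convolutional network trained on whole images, rather than a per-pixel regression, is needed.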

“You can imagine this having military applications, or just recreational applications to see at night,” Browne says. “You can also imagine this finding use in chemical processing, where people need to work in places that are sensitive to visible light, or in medical or research situations where one has to work with light-sensitive retinal tissue.”

Future research can analyze more pigments, dyes, and objects in visible and infrared light to help this system develop better color vision in the dark.

“How well the model functions all depends on the quality and quantity of the data with which you train it, and the limited context in which you train it,” Browne says. “We just worked with pictures of human faces printed on paper with four inks, and it performed very well. If you used another context—say, photographs of flowers—you’d want to make sure you had a broad enough number of species of flowers and acquired enough data to perform well.”

The scientists detailed their findings online 6 April in the journal PLOS ONE.


Will AI Steal Submarines’ Stealth?

Better detection will make the oceans transparent—and perhaps doom mutually assured destruction

11 min read

The Virginia-class fast attack submarine USS Virginia cruises through the Mediterranean in 2010. Back then, it could effectively disappear just by diving.

U.S. Navy

Submarines are valued primarily for their ability to hide. The assurance that submarines would likely survive the first missile strike in a nuclear war and thus be able to respond by launching missiles in a second strike is key to the strategy of deterrence known as mutually assured destruction. Any new technology that might render the oceans effectively transparent, making it trivial to spot lurking submarines, could thus undermine the peace of the world. For nearly a century, naval engineers have striven to develop ever-faster, ever-quieter submarines. But they have worked just as hard at advancing a wide array of radar, sonar, and other technologies designed to detect, target, and eliminate enemy submarines.

The balance seemed to turn with the emergence of nuclear-powered submarines in the early 1960s. In a 2015 study for the Center for Strategic and Budgetary Assessments, Bryan Clark, a naval specialist now at the Hudson Institute, noted that the ability of these boats to remain submerged for long periods made them “nearly impossible to find with radar and active sonar.” But even these stealthy submarines produce subtle, very-low-frequency noises that can be picked up from far away by networks of acoustic hydrophone arrays mounted to the seafloor.
