Color Night Vision Brought to You by AI

New method detects visible colors in pitch darkness


Conventional visible-spectrum photographs are composed of red, green, and blue channels in the visible spectrum. The authors used infrared images of three different wavelengths and deep learning to predict the visible-spectrum image.

Browne Lab/UC Irvine Department of Ophthalmology

Night-vision systems that use infrared light to see in the dark typically render views as monochromatic images. Now, in a new study, scientists using deep learning have devised a way for infrared vision to reveal a scene's visible colors in the absence of visible light.

“A lot of the world is color-coded in ways that we rely on to make decisions—think of traffic signals, for instance,” says study lead author Andrew Browne, an engineer, surgeon, and vision scientist at the University of California, Irvine.

Night-vision systems that work by illuminating the world with infrared light typically render scenes only in green, failing to reveal the colors that are visible under normal light. Newer systems employ ultrasensitive cameras to amplify visible light, but these do little to reveal the colors in pitch-black settings that have no light to amplify.

In the new study, the scientists reasoned that each dye and pigment that gives an object a visible color not only reflects a set of visible wavelengths but may also reflect a set of infrared wavelengths. If they could train a night-vision system to identify the infrared fingerprint of each dye and pigment, they could display images using the visible colors linked with each of those compounds.

“This began when I was sitting and thinking, ‘Is there a way to do everything in the dark?’” Browne says.

Working together with computer scientist Pierre Baldi at the University of California, Irvine, Browne and his colleagues experimented with a camera sensitive to both visible and near-infrared light, connected to a convolutional neural network, a type of deep-learning model often used in image recognition. They used this system to analyze 140 images of faces printed with cyan, magenta, yellow, and black inks and lit under multispectral illumination spanning both visible and infrared wavelengths.

The scientists found that when tested with 20 printed faces that were lit only with invisible near-infrared light, their system could successfully predict and reconstruct how these faces would look in visible light.
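The study used a deep convolutional network trained on photographs; as a much simpler, hedged illustration of the underlying spectral-fingerprint idea, the sketch below fits a per-pixel affine map from three near-infrared reflectances to RGB on synthetic data. All ink spectra and parameters here are invented for the demo and are not from the paper.

```python
import numpy as np

# Toy sketch (NOT the authors' CNN): if each ink has a distinctive
# near-infrared "fingerprint", even a per-pixel affine map from three
# NIR reflectances to RGB can recover visible color. All spectra below
# are hypothetical values chosen for illustration.
rng = np.random.default_rng(0)

# Four hypothetical inks: visible RGB color and NIR reflectance
# at three invented wavelengths.
inks_rgb = np.array([[0.00, 0.70, 0.90],    # cyan
                     [0.90, 0.00, 0.60],    # magenta
                     [1.00, 0.90, 0.00],    # yellow
                     [0.05, 0.05, 0.05]])   # black
inks_nir = np.array([[0.80, 0.30, 0.10],
                     [0.20, 0.90, 0.40],
                     [0.50, 0.20, 0.90],
                     [0.10, 0.10, 0.10]])

# Simulate a "photo": each pixel is a random mixture of the four inks,
# observed only at the three NIR wavelengths (with a little noise).
n_pixels = 5000
weights = rng.dirichlet(np.ones(4), size=n_pixels)   # ink mixture per pixel
nir = weights @ inks_nir + 0.01 * rng.standard_normal((n_pixels, 3))
rgb_true = weights @ inks_rgb                        # ground-truth color

# "Train": least-squares fit of an affine map (a 1x1 convolution plus
# bias, in effect) from NIR observations to visible RGB.
features = np.hstack([nir, np.ones((n_pixels, 1))])
M, *_ = np.linalg.lstsq(features, rgb_true, rcond=None)

# "Test": predict visible color from the NIR channels alone.
rgb_pred = features @ M
print("mean abs error:", np.abs(rgb_pred - rgb_true).mean())
```

A real scene needs a deep network because dyes mix nonlinearly and spatial context matters, but this per-pixel map shows why distinct infrared fingerprints make color prediction possible at all.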

“You can imagine this having military applications, or just recreational applications to see at night,” Browne says. “You can also imagine this finding use in chemical processing, where people need to work in places that are sensitive to visible light, or in medical or research situations where one has to work with light-sensitive retinal tissue.”

Future research could analyze more pigments, dyes, and objects under visible and infrared light to help the system develop better color vision in the dark.

“How well the model functions all depends on the quality and quantity of the data with which you train it, and the limited context in which you train it,” Browne says. “We just worked with pictures of human faces printed on paper with four inks, and it performed very well. If you used another context—say, photographs of flowers—you’d want to make sure you had a broad enough range of flower species and acquired enough data to perform well.”

The scientists detailed their findings online 6 April in the journal PLOS ONE.
