Optical AI Could Feed Voracious Data Needs

To power the 3D-printed, multiplexing system, just shine light on it


When scientists developed a 3D-printed, multiplexing optical neural network, they calculated it could handle as many as 2,000 data channels simultaneously.


A brain-imitating neural network that employs photons instead of electrons could rapidly analyze vast amounts of data by running many computations simultaneously using thousands of wavelengths of light, a new study finds.

Artificial neural networks are increasingly finding use in applications such as analyzing medical scans and supporting autonomous vehicles. In these artificial-intelligence systems, components known as neurons are fed data and cooperate to solve a problem, such as recognizing faces. A neural network is dubbed “deep” if it possesses multiple layers of neurons.

As neural networks grow in size and power, they are becoming more energy hungry when run on conventional electronics. That is why some scientists have been investigating optical computing as a promising next-generation AI medium: it uses light instead of electricity to perform computations more quickly and with less power than electronic counterparts.

For example, a diffractive optical neural network is composed of a stack of layers, each possessing thousands of pixels that can diffract, or scatter, light. These diffractive features serve as the neurons in a neural network. Deep learning is used to design each layer so that when input, in the form of light, shines on the stack, the output light encodes the results of complex tasks such as image classification or image reconstruction. All this computing “does not consume power, except for the illumination light,” says study senior author Aydogan Ozcan, an optical engineer at the University of California, Los Angeles.

Such diffractive networks could analyze large amounts of data at the speed of light to perform tasks such as identifying objects. For example, they could help autonomous vehicles instantly recognize pedestrians or traffic signs, or help medical diagnostic systems quickly identify evidence of disease. Conventional electronics need to first image those items, then convert those signals to data, and finally run programs to figure out what those objects are. In contrast, diffractive networks only need to receive light reflected off or otherwise arriving from those items—they can identify an object because the light from it gets mostly diffracted toward a single pixel assigned to that kind of object.

Previously, Ozcan and his colleagues designed a monochromatic diffractive network using a series of thin 64-square-centimeter polymer wafers fabricated using 3D printing. When illuminated with a single wavelength or color of light, this diffractive network could implement a single matrix multiplication operation. These calculations, which involve multiplying grids of numbers known as matrices, are key to many computational tasks, including operating neural networks.
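The idea that a stack of printed layers amounts to one matrix multiplication can be sketched in a few lines of code. This is my own toy illustration, not the authors' model: each layer is represented as an elementwise phase mask, diffraction between layers is approximated by a unitary DFT mixing matrix, and the whole cascade collapses into a single complex matrix applied to the input light field.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                  # number of "pixels" per layer (toy size)

def phase_mask(n):
    # random phases standing in for a trained, 3D-printed layer
    return np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, n)))

# A unitary DFT matrix models diffraction/mixing between layers
F = np.fft.fft(np.eye(n)) / np.sqrt(n)

# Three layers, as in the 3D-printed prototype
layers = [phase_mask(n) for _ in range(3)]

# Compose masks and propagation into one overall matrix A
A = np.eye(n, dtype=complex)
for L in layers:
    A = F @ L @ A

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # input light field
y_optical = A @ x          # the whole stack acts as one matrix multiplication

# The same result, computed step by step as light traverses the stack
y_stepwise = x
for L in layers:
    y_stepwise = F @ (L @ y_stepwise)
assert np.allclose(y_optical, y_stepwise)
```

The point of the sketch is only that masking and propagation are both linear, so however many layers light passes through, the net effect on the input field is a single matrix product.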

Now the researchers have developed a broadband diffractive optical processor that can accept multiple input wavelengths of light at once for up to thousands of matrix multiplication operations “executed simultaneously at the speed of light,” Ozcan says.

In the new study, the scientists 3D-printed three diffractive layers, each with 14,400 diffractive features. Their experiments showed the diffractive network could successfully operate using two submillimeter-wavelength terahertz-frequency channels. Their computer models suggested these diffractive networks could accept up to roughly 2,000 wavelength channels simultaneously.
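Because diffraction is wavelength dependent, each illumination color effectively sees its own matrix, which is what lets one broadband pass perform many multiplications at once. The following is a hedged sketch of that multiplexing idea; the wavelength-dependence model (phase delay scaling as feature thickness over wavelength) is my simplification for illustration, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_channels = 8, 4        # the study's simulations scale to ~2,000 channels

wavelengths = np.linspace(1.0, 2.0, n_channels)   # arbitrary units
base_thickness = rng.uniform(0, 1, n)             # one printed layer's features

F = np.fft.fft(np.eye(n)) / np.sqrt(n)            # diffraction/mixing model

def effective_matrix(wl):
    # phase delay of a printed feature scales as thickness / wavelength,
    # so each wavelength channel gets a distinct effective matrix
    mask = np.diag(np.exp(2j * np.pi * base_thickness / wl))
    return F @ mask

# One broadband illumination = n_channels matrix products, one per color
inputs = rng.standard_normal((n_channels, n))
outputs = np.array([effective_matrix(wl) @ x
                    for wl, x in zip(wavelengths, inputs)])

# The channels are genuinely distinct operations
assert not np.allclose(effective_matrix(wavelengths[0]),
                       effective_matrix(wavelengths[1]))
print(outputs.shape)       # (4, 8): four parallel matrix products
```

In the physical device all channels propagate simultaneously, so the loop above happens "for free" in a single pass of light.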

“We demonstrated the feasibility of massively parallel optical computing by employing a wavelength multiplexing scheme,” Ozcan says.

The scientists note it should prove possible to build diffractive networks that operate at visible and other frequencies beyond the terahertz range. Such optical neural networks could also be manufactured from a wide variety of materials and with a wide variety of techniques.

All in all, they “may find applications in various fields, including, for example, biomedical imaging, remote sensing, analytical chemistry, and materials science,” Ozcan says.

The scientists detailed their findings 9 January in the journal Advanced Photonics.
