
Here's How This Metasurface Lens Could Improve Imaging

Metasurface performs the mathematical operations necessary for edge detection

Tilted SEM image of the Si metasurface performing the second-derivative operation. The scale bar is 400 nm.
Image: Nano Letters

Processing images to allow self-driving cars to see where they’re going could get easier thanks to a specially sculpted lens that does the work of a computer.

Dutch and American researchers say they can use a metasurface to passively detect the edges of objects in video. Computers can perform such edge detection for autonomous vehicles or virtual reality applications, but that uses power and is not instantaneous. “If you want to do that digitally, it takes time for the computer to compute,” says Andrea Cordaro, a PhD student at AMOLF, a scientific research institute in Amsterdam, the Netherlands.

In a paper in Nano Letters, Cordaro and colleagues, including Albert Polman, who heads the Light Management in New Photovoltaic Materials group at AMOLF, and Andrea Alù at the City University of New York, describe how their material performs the mathematical operations necessary for edge detection.
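The operation the metasurface implements is essentially a spatial second derivative, which is close to zero wherever an image is smooth and spikes wherever the brightness jumps. As a purely digital illustration of that math (the pixel values and code below are made up for this sketch, not the researchers' own), a one-dimensional second derivative can be written as a tiny convolution:

```python
import numpy as np

# A 1-D row of pixel intensities with one sharp brightness step (made-up values).
row = np.array([10, 10, 10, 10, 80, 80, 80, 80], dtype=float)

# Discrete second derivative: f''(x) ≈ f(x-1) - 2*f(x) + f(x+1)
second_derivative = np.convolve(row, [1, -2, 1], mode="same")

print(second_derivative)
# Aside from boundary artifacts at the two ends, the output is ~0 in the flat
# regions and spikes at indices 3-4, exactly where the edge sits. The
# metasurface performs the equivalent operation directly on the light field.
```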

They built a metasurface: a surface studded with tiny pillars, smaller than the wavelength of light, that manipulate light in unusual ways depending on their size and arrangement. In this case, they started with a thin sheet of sapphire, less than half a millimeter thick, and added silicon pillars 206 nm wide, 142 nm tall, and spaced 300 nm apart.

When placed on the surface of a standard CCD chip, the metasurface acts like a lens, passing light that strikes it at steep angles but filtering out light hitting it at very slight angles. The features of an image are built from combinations of different light waves, and the waves that get filtered out carry the smooth, slowly varying parts of the image, leaving only the sharper components, such as the edges of a person’s face against the whiteboard behind her.
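One way to see why blocking the light that arrives near normal incidence leaves the edges behind is to mimic the filtering numerically: near-normal rays correspond to the low spatial frequencies (the smooth parts of the image), steeper rays to the high ones. The sketch below applies an idealized hard high-pass mask in the Fourier domain to a made-up one-dimensional image row; the real device’s angular response is shaped by its pillar resonances rather than a sharp cutoff, so the cutoff value here is arbitrary.

```python
import numpy as np

# A 1-D image row: smooth background plus one sharp edge at x = 0.5 (made-up values).
x = np.linspace(0, 1, 256)
row = np.where(x < 0.5, 0.2, 0.8) + 0.05 * np.sin(2 * np.pi * x)

# Decompose into spatial frequencies. Low frequencies play the role of light
# near normal incidence; high frequencies that of steeper angles.
spectrum = np.fft.fft(row)
freqs = np.fft.fftfreq(row.size, d=x[1] - x[0])

# Idealized high-pass mask: block the lowest spatial frequencies, pass the rest.
cutoff = 10.0  # cycles per unit length, chosen only for this demo
spectrum[np.abs(freqs) < cutoff] = 0.0

filtered = np.fft.ifft(spectrum).real
# `filtered` is close to zero where the row is smooth and large near x = 0.5,
# so only the edge survives the filtering.
```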

Depending on the computer and the size of the image, it might take several milliseconds to process this information digitally. With the analog approach, the only limiting factor is the thickness of the metasurface. “It’s just the time light takes to travel 150 nm, which is basically nothing,” Cordaro says.
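As a rough back-of-the-envelope check (taking the vacuum speed of light), covering 150 nm takes about 150 × 10⁻⁹ m ÷ 3 × 10⁸ m/s ≈ 5 × 10⁻¹⁶ s, or half a femtosecond; even allowing for light’s slower speed inside silicon, that is more than ten orders of magnitude shorter than a millisecond of digital processing.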

It’s also a passive technique. “It’s just a piece of glass, so you don’t need to give it power,” he says. Of course, the digital camera and a computer would still have a role, but Cordaro says this hybrid approach should be more efficient.

The researchers would like to try other materials, such as titanium oxide or silicon nitride, to see if they can get even better results. And while this metasurface captures edges in one dimension, they’d like to try two-dimensional designs, so they can capture edges at different orientations.

This post was updated on 7 February 2020. 


3 Ways 3D Chip Tech Is Upending Computing

AMD, Graphcore, and Intel show why the industry’s leading edge is going vertical

Intel; Graphcore; AMD

A crop of high-performance processors is showing that the new direction for continuing Moore’s Law is all about up. Each generation of processor needs to perform better than the last, and, at its most basic, that means integrating more logic onto the silicon. But there are two problems: One is that our ability to shrink transistors and the logic and memory blocks they make up is slowing down. The other is that chips have reached their size limits. Photolithography tools can pattern only an area of about 850 square millimeters, which is about the size of a top-of-the-line Nvidia GPU.

For a few years now, developers of systems on chips have been breaking up their ever-larger designs into smaller chiplets and linking them together inside the same package to effectively increase the silicon area, among other advantages. In CPUs, these links have mostly been so-called 2.5D, where the chiplets are set beside each other and connected using short, dense interconnects. Momentum for this type of integration will likely only grow now that most of the major manufacturers have agreed on a 2.5D chiplet-to-chiplet communications standard.
