AI-Powered Microdisplay Adapts to Users’ Eyesight

“NeuralDisplay” could make AR less squinty, blurry, and nausea-inducing

Photo-illustration of a close-up of an eye with a circle of measurements over it. iStock

Modern augmented reality (AR) and virtual reality (VR) headsets achieve pixel counts in the tens of millions. Yet that doesn’t guarantee a crisp image. Headsets often leave users squinting to focus on a blurry mess or, in some cases, fighting a creeping sense of nausea.

Microdisplay manufacturer KOPIN, of Westborough, Mass., working in partnership with MIT’s Computer Science & Artificial Intelligence Laboratory, may have a solution: the NeuralDisplay. It combines eye tracking with machine learning to compensate for a user’s vision on the fly, without additional optics.

“We asked, how can we change the technology for the user, and not make the user try to change themselves, to use the technology? And the answer that came back was AI,” says Michael Murray, CEO of KOPIN.

A display that adjusts for your eyes

The first NeuralDisplay is a 1.5-inch square micro-OLED with a resolution of 3,840 x 3,840 and a maximum brightness of 10,000 candelas per square meter. These specifications place it in league with other leading micro-OLED displays, like Sony’s 1.3-inch 4K micro-OLED. It also has an unusual quad-pixel arrangement that places red, blue, and green sub-pixels alongside a fourth sub-pixel containing a pixel imager.

A slide from a PowerPoint presentation describing the NeuralDisplay, a microdisplay with a built-in AI accelerator. The display is attached to a circuit board but is not displaying an image. NeuralDisplay packs 3,840 x 3,840 resolution and an onboard AI accelerator. KOPIN

The pixel imager doesn’t function as a display element. It has a different task: to measure the light reflected by the user’s eyes. It’s similar in concept to a digital camera, but simpler in execution, as the imagers operate in monochrome and concentrate on measuring brightness. That’s enough to deduce details about a user’s eyes, including the direction of their gaze, their eye position in relation to the screen, and the dilation of their pupils.
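KOPIN hasn’t published how those brightness readings are turned into eye measurements, but the general idea resembles classic dark-pupil tracking: the pupil reflects far less light than the surrounding iris and sclera, so its position and size can be read off a monochrome brightness map. The Python sketch below is a minimal illustration of that idea only; the function name, threshold, and toy data are hypothetical, not KOPIN’s algorithm.

```python
import numpy as np

def estimate_eye_state(brightness: np.ndarray, pupil_threshold: float = 0.2):
    """Estimate pupil center and dilation from a monochrome brightness map.

    `brightness` is a 2-D array of normalized readings (0 = dark, 1 = bright),
    standing in for the in-pixel imagers. The pupil reflects very little light,
    so the darkest region is treated as the pupil.
    """
    pupil_mask = brightness < pupil_threshold       # dark pixels -> likely pupil
    ys, xs = np.nonzero(pupil_mask)
    if xs.size == 0:
        return None                                 # eye closed or not detected
    center = (xs.mean(), ys.mean())                 # proxy for gaze / eye position
    dilation = xs.size / brightness.size            # pupil area as a fraction of the frame
    return {"center": center, "dilation": dilation}

# Toy reading: a bright field with a dark 6 x 6 "pupil" near the upper left.
frame = np.ones((32, 32))
frame[8:14, 10:16] = 0.05
print(estimate_eye_state(frame))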

Those measurements feed into an AI model that learns to compensate for the quirks of each user’s vision by adjusting the brightness and contrast of the display. “Think of it like two knobs, brightness and contrast, that we can turn in real-time,” says Murray. The pixel imagers continue to take readings, which are fed back into the machine learning algorithm to continually adjust the image. “The eye tracking piece is to have a feedback loop in the system. Did these changes make any difference, and what’s the user experience like?”
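As a rough illustration of that feedback loop, the sketch below takes an eye measurement, nudges the brightness knob, and re-renders the frame, with a toy proportional update standing in for the on-display machine-learning model. The target dilation, gain, and knob ranges are assumptions for illustration, not KOPIN figures.

```python
import numpy as np

def adjust_frame(frame: np.ndarray, brightness: float, contrast: float) -> np.ndarray:
    """Apply the two 'knobs': scale contrast around mid-gray, then shift brightness."""
    return np.clip((frame - 0.5) * contrast + 0.5 + brightness, 0.0, 1.0)

def update_knobs(knobs, eye_state, target_dilation=0.04, gain=0.5):
    """Toy stand-in for the on-display ML model: nudge brightness toward a
    comfortable pupil dilation. (Contrast is left unchanged in this sketch.)"""
    brightness, contrast = knobs
    error = eye_state["dilation"] - target_dilation  # pupil wider than target -> scene too dim
    brightness = float(np.clip(brightness + gain * error, -0.5, 0.5))
    return brightness, contrast

# One pass of the loop: measure the eye, update the knobs, re-render the frame.
knobs = (0.0, 1.0)                                   # (brightness offset, contrast scale)
frame = np.random.rand(32, 32)                       # stand-in for the rendered image
eye_state = {"center": (16, 16), "dilation": 0.08}   # would come from the pixel imagers
knobs = update_knobs(knobs, eye_state)
next_frame = adjust_frame(frame, *knobs)
```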

The data isn’t sent to the cloud, or even to a connected device, but instead handled by an onboard AI accelerator integrated into the display. Keeping data local is necessary to process it with the speed human vision requires. Murray says the human brain can interpret what it’s seeing in as little as 500 microseconds, after which problems caused by a headset’s optics are noticeable. Placing the AI accelerator onboard keeps latency in check and ensures the AI model is reliably fed new data.
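A back-of-the-envelope comparison shows why that matters. The latencies in the sketch below are illustrative assumptions (only the roughly 500-microsecond budget comes from Murray), but they make the point: any path that leaves the display module has little chance of fitting.

```python
# Illustrative latency budget; the path latencies are assumptions, not measured figures.
BUDGET_US = 500                       # the ~500-microsecond window Murray cites

paths = {
    "on-display accelerator": 50,     # assumed on-chip inference time, in microseconds
    "tethered host device": 2_000,    # assumed round trip to a connected device
    "cloud inference": 50_000,        # assumed network round trip plus inference
}

for name, latency_us in paths.items():
    verdict = "fits" if latency_us <= BUDGET_US else "misses"
    print(f"{name}: {latency_us} µs -> {verdict} the {BUDGET_US} µs budget")
```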

Demystifying headset customization

The NeuralDisplay isn’t a panacea, its makers confess, and its ability to compensate for near- or farsightedness is uncertain. “We’re still testing that part,” says Murray. But the display can help with other common problems that lead to nausea, discomfort, and disorientation while using an AR/VR headset. “My problem with the Apple Vision Pro, for example, is my right eye is dominant in everything that I do. If you put two perfect displays in front of my eyes, I just get nauseous, because my brain can’t deal with that level of detail in my left eye.”

That’s a common problem. Nearly everyone has a dominant eye, with right eye dominance occurring more often than left. And it’s not the only aspect of vision AR/VR headsets struggle to accommodate. Headset engineers must also account for differences in interpupillary distance (the distance between the center of each pupil), eye depth, and face shape.

A Meta Quest 3 headset viewed from the front; the headset's interpupillary distance dial is located in the lower left-hand corner. Most headsets have a dial or knob to manually adjust interpupillary distance, which can lead to fuzzy or otherwise less-than-pristine image quality. Meta

Most current headsets attempt to correct for these differences with physical adjustments that change the shape and size of a headset. Owners of the Meta Quest 3 can turn a knob to adjust the headset for an interpupillary distance between 53 and 75 millimeters (Meta claims this accommodates 95 percent of users). It also has “eye-relief” buttons that adjust for eye depth.

These features are helpful but cumbersome, and they cover only a limited range of adjustment. They also rely on the user turning a dial, or on simple automatic systems that still require some idea of how the adjustment works. Those who don’t understand how to adjust a headset, or don’t know it’s possible, are likely to experience discomfort. NeuralDisplay’s automated, display-level adjustment could eliminate the need for user intervention, making AR/VR headsets more accessible.

“Ultimately, putting more lenses into a piece of electronics [to correct a user's vision] breaks the laws of consumer electronics, which are size, weight, power, and usability,” says Murray. “We’re trying to rewrite that equation.”

The Conversation (1)
sarat poluri, 17 Dec 2023

If you have iris recognition, you can detect who the user is. Per user, you perform a calibration step the first time they use it and each year after that, to adjust for changes in vision with age etc. Calibration should help with dominant eye issues, near and far sightedness, inter-pupillary distance etc.