Mirror Arrays Make Augmented Reality More Realistic

Relying on a collection of fast-moving mirrors, this AR system can control how much of the real and virtual world users perceive


Photo-illustration of a man touching virtual reality objects. Photo-illustration: Shutterstock

In the world of augmented reality (AR), real and virtual images are combined to create immersive environments for users. While the technology is advancing, it has remained challenging to make virtual images appear “solid” in front of real-world objects, an effect that’s essential for achieving accurate depth perception.

Now, a team of researchers has developed a compact AR system that, using an array of miniature mirrors that switch positions tens of thousands of times per second, can create this elusive effect. They describe their new system in a study published February 13 in IEEE Transactions on Visualization and Computer Graphics.

Occlusion occurs when light from objects in the foreground blocks light from objects farther away. Commercial AR systems offer limited occlusion because they tend to rely on a single spatial light modulator (SLM) to create the virtual images, while allowing natural light from real-world objects to also reach the user’s eye.
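To make the distinction concrete, here is a minimal compositing sketch (not from the paper, with made-up radiance values): a single-SLM display can only add virtual light on top of the real scene, whereas an occlusion-capable display can first block the real light behind the virtual object.

```python
import numpy as np

# Toy single-pixel example (hypothetical RGB radiances, not from the study).
real_light = np.array([0.8, 0.7, 0.9])     # bright real-world background
virtual_light = np.array([0.2, 0.3, 0.1])  # dimmer virtual object

# Conventional single-SLM AR: virtual light is simply added on top of the
# real scene, so the virtual object looks transparent and washed out.
additive = real_light + virtual_light

# Occlusion-capable AR: a per-pixel mask blocks the real light wherever the
# virtual object should appear solid before the virtual light is shown.
mask = 1.0  # 1 = fully occlude the real world at this pixel
occluded = (1.0 - mask) * real_light + mask * virtual_light

print("additive composite:", additive)   # virtual content appears ghostly
print("occluded composite:", occluded)   # virtual content appears solid
```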

“As can be seen with many commercial devices such as the Microsoft HoloLens or Magic Leap, not blocking high levels of incoming light [from real-world objects] means that virtual content becomes so transparent it becomes difficult to even see,” explains Brooke Krajancich, a researcher at Stanford University. This lack of occlusion can interfere with the user’s depth perception, which would be problematic in tasks requiring precision, such as AR-assisted surgery.

To achieve better occlusion, some researchers have been exploring the possibility of a second SLM that controls the incoming light of real-world objects. However, incorporating two SLMs into one system involves a lot of hardware, making it bulky. Instead, Krajancich and her colleagues developed a new design that combines virtual projection and light-blocking abilities into one element.

Their design relies on a dense array of miniature mirrors that can be individually flipped between two states—one that allows light through and one that reflects light—at a rate of up to tens of thousands of times per second.

In this set of demonstration images, a physical scene is combined with a digital image to form a target composition; the composition captured with the new technology shows significant improvements in both light blocking and color fidelity. Images: Stanford University/IEEE

“Our system uses these mirrors to switch between a see-through state, which allows the user to observe a small part of the real world, and a reflective state, where the same mirror blocks light from the scene in favor of an [artificial] light source,” explains Krajancich. The system computes the optimal arrangement for the mirrors and adjusts accordingly.
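Because each mirror is binary at any instant, intermediate levels of occlusion have to come from how long a mirror spends in each state over a frame. The sketch below is a hypothetical illustration of that time-multiplexing idea, not the authors’ optimization: the function name, flip count, and radiance values are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_multiplexed_composite(real, virtual, occlusion, flips_per_frame=1000):
    """Approximate a graded occlusion level with binary mirror states.

    On each flip a mirror is either see-through (passing real-world light)
    or reflective (showing the artificial light source); averaging many
    fast flips over one frame approximates the desired fractional occlusion.
    Hypothetical sketch under assumed parameters.
    """
    # Binary state per flip: True = reflective (virtual), False = see-through.
    states = rng.random(flips_per_frame) < occlusion
    per_flip = np.where(states, virtual, real)
    return per_flip.mean()  # what the eye integrates over the frame

real, virtual = 0.8, 0.3  # toy single-channel radiances
for occ in (0.0, 0.5, 1.0):
    perceived = time_multiplexed_composite(real, virtual, occ)
    target = (1 - occ) * real + occ * virtual
    print(f"occlusion={occ:.1f}  perceived={perceived:.3f}  target={target:.3f}")
```

With enough flips per frame the perceived value converges to the target blend, which is one way to read the reported switching rates of tens of thousands of flips per second.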

Krajancich notes trade-offs with this approach, including challenges in rendering colors properly. It also demands substantial computing power, and may therefore consume more power than other AR systems. While commercialization of this system is a possibility in the future, she says, the approach is still in the early research stages.
