CES 2020 News: Delta Air Lines Dips Into Parallel Realities at Detroit Metro Airport

It seems like magic, but hundreds of people in airports will soon be able to look at the same screen and see completely different things

A personalized message appears on a screen as the writer, Tekla, walks through a simulated terminal at CES 2020.
Photo: Tekla Perry

A useful, though somewhat eerie, technology will start guiding passengers through airline terminals later this year. Developed by Misapplied Sciences, based in Redmond, Wash., and being rolled out by Delta Air Lines, Parallel Reality technology will replace the typical information-packed airport display, which lists every departing flight, with individual messages customized for each traveler. The real magic? All of those messages will appear simultaneously on a single large screen, but you’ll see only the one intended for you.

This wall of mirrors reflects one large-screen display; each mirror represents a person standing in a different place in the same room.
Photo: Tekla Perry

The technology took some five years to develop, says Misapplied Sciences chief operating and creative officer Dave Thompson. It involves hardware that splits the light from each pixel in the display, sending different colors and brightnesses in a variety of directions, adjusting the angles on the fly. The demo I saw at CES 2020 allowed up to 100 people to view personalized images—each seeing their own name and flight information in their language of choice—but Thompson says the technology can accommodate thousands of simultaneous viewers. Travelers scan their boarding passes—it’s an opt-in system—and then cameras follow their movements around the terminal to ensure that when they look at a display, they see a personalized message.
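To make the mechanism concrete, here is a highly simplified sketch, not Misapplied Sciences’ actual software, of how per-pixel beam steering could pair tracked viewers with personalized content. Everything in it (the Viewer record, the 120-degree field of view, the beam_index_for_viewer and assign_beams helpers, and the discrete-beam model itself) is an illustrative assumption rather than anything the company has described.

```python
# Hypothetical model: each "multi-view" pixel can emit a different color along
# each of a number of discrete beam directions, so every tracked viewer can be
# served the beam that points toward them.
import math
from dataclasses import dataclass

@dataclass
class Viewer:
    viewer_id: str        # assigned when a traveler opts in by scanning a boarding pass
    x: float              # tracked position in the terminal, meters
    y: float
    content_color: tuple  # this viewer's personalized message, rendered as a pixel color

def beam_index_for_viewer(pixel_x: float, pixel_y: float,
                          viewer: Viewer, num_beams: int,
                          fov_degrees: float = 120.0) -> int:
    """Map the angle between the display's normal and the line from this pixel
    to the viewer onto one of the pixel's discrete emission beams."""
    # Assume the display lies along the x-axis at y = pixel_y and faces the -y direction.
    off_axis = math.degrees(math.atan2(viewer.x - pixel_x, pixel_y - viewer.y))
    off_axis = max(-fov_degrees / 2, min(fov_degrees / 2, off_axis))  # clamp to the FOV
    return min(num_beams - 1,
               int((off_axis + fov_degrees / 2) / fov_degrees * num_beams))

def assign_beams(pixel_x: float, pixel_y: float,
                 viewers: list, num_beams: int) -> dict:
    """Return {beam_index: color} for one pixel; beams nobody occupies stay dark."""
    assignment = {}
    for v in viewers:
        beam = beam_index_for_viewer(pixel_x, pixel_y, v, num_beams)
        assignment[beam] = v.content_color
    return assignment

# Example: two travelers standing in different spots receive different colors
# from the very same pixel.
viewers = [Viewer("tekla", x=2.0, y=-3.0, content_color=(255, 255, 255)),
           Viewer("other", x=-4.0, y=-3.0, content_color=(0, 128, 255))]
print(assign_beams(0.0, 0.0, viewers, num_beams=100))
```

In the real system the per-pixel directions are presumably set optically and updated continuously as the cameras track each traveler; the sketch is only meant to show the core mapping from viewing angle to personalized content.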

Delta plans to start using the technology this summer at the Detroit Metro Airport.


Deep Learning Could Bring the Concert Experience Home

The century-old quest for truly realistic sound production is finally paying off

Illustration: Stuart Bradford

Now that recorded sound has become ubiquitous, we hardly think about it. From our smartphones, smart speakers, TVs, radios, disc players, and car sound systems, it’s an enduring and enjoyable presence in our lives. In 2017, a survey by the polling firm Nielsen suggested that some 90 percent of the U.S. population listens to music regularly and that those listeners spend, on average, 32 hours per week doing so.

Behind this free-flowing pleasure are enormous industries applying technology to the long-standing goal of reproducing sound with the greatest possible realism. From Edison’s phonograph and the horn speakers of the 1880s, successive generations of engineers in pursuit of this ideal invented and exploited countless technologies: triode vacuum tubes, dynamic loudspeakers, magnetic phonograph cartridges, solid-state amplifier circuits in scores of different topologies, electrostatic speakers, optical discs, stereo, and surround sound. And over the past five decades, digital technologies, like audio compression and streaming, have transformed the music industry.
