Ultrasonic beams fired through a 3D-printed metasurface can create localized pockets of sound that are inaudible to passers-by. The technique could be used to create private speech zones for secure communications or to deliver personalized audio in public places and vehicles.
The ability to deliver sounds to a specific listener without the need for headphones, known as directional sound, has been a long-standing area of research in audio engineering. But achieving this typically requires large and complicated sound sources, and it is often possible to hear the audio signal along the path of the beam.
A new approach from researchers at Pennsylvania State University gets around these limitations by combining a compact array of ultrasonic emitters with a specially patterned 2D structure, which is designed to manipulate the properties of waves. This structure, known as a metasurface, creates “self-bending” ultrasound beams that are inaudible to humans and can steer around obstacles. When two of these beams cross paths, they interact in a way that generates sound in a range audible to humans but confined to a spot just a few centimeters across, which the researchers call an “audible enclave.”
“The key innovation is that sound is only generated where two beams intersect, making it possible to deliver audio to a precise spot while keeping the beams themselves silent,” says Jia-Xin Zhong, a postdoctoral researcher at Penn State and lead author of a paper in the Proceedings of the National Academy of Sciences that describes the new approach.
Previous research has demonstrated audible self-bending beams that can curve around obstacles. But the long wavelengths of audible sound mean the sources typically have to be on the scale of meters, and it is possible to hear the signal anywhere along the path of the beam.
The Penn State team's new technique instead relies on ultrasound beams, which can't be heard by humans and can be produced with much smaller hardware. To make the beams self-bend, the researchers 3D-printed a 16- by 8-centimeter grid-shaped metasurface and placed it in front of an array of ultrasonic emitters, where it precisely controls the phase of the outgoing beam. “These metasurfaces act like an acoustic lens, controlling the wavefront so that the beams curve as they propagate,” says Zhong.
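The article doesn't spell out the phase pattern the metasurface imprints, but the self-bending idea itself can be illustrated with a small simulation. The sketch below is a hedged illustration rather than the authors' design: it launches an Airy-type pressure profile, a standard choice for self-bending beams, from a line aperture and propagates it with the angular-spectrum method. Apart from the 40-kilohertz carrier, every parameter is an assumed value.

```python
# A minimal numerical sketch (not the authors' code) of why a shaped wavefront
# can bend on its own: launch an Airy-type pressure profile from a line
# aperture and propagate it with the angular-spectrum method. The carrier
# frequency matches the experiment; the transverse scale, apodization, and
# grid are assumptions chosen for illustration.

import numpy as np
from scipy.special import airy

c = 343.0                       # speed of sound in air, m/s
f = 40e3                        # ultrasonic carrier, Hz
k = 2 * np.pi * f / c           # wavenumber, rad/m

# Transverse coordinate across the beam (1D cross-section)
N = 8192
x = np.linspace(-1.0, 0.5, N)   # metres
dx = x[1] - x[0]

# Finite-energy Airy profile: Ai(x/x0) * exp(a*x/x0)
x0 = 0.01                       # transverse scale, m (assumed)
a = 0.1                         # apodization so the beam carries finite energy
field0 = airy(x / x0)[0] * np.exp(a * x / x0)

# Angular-spectrum propagation to a few distances z
kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)
kz = np.sqrt((k**2 - kx**2).astype(complex))    # evanescent components decay
spectrum = np.fft.fft(field0)

for z in (0.0, 0.1, 0.2, 0.3):                  # propagation distance, m
    field_z = np.fft.ifft(spectrum * np.exp(1j * kz * z))
    peak = x[np.argmax(np.abs(field_z))]
    print(f"z = {z:.1f} m: main lobe at x = {100 * peak:+.2f} cm")

# The main lobe shifts sideways by an amount that grows faster than linearly
# with distance: the beam traces a curved path even though nothing in the
# source is aimed along that path.
```

In the experiment, the printed metasurface plays the role of the shaping step: each cell of the grid delays the ultrasound by a different amount, so the beam leaves the array with the curved-trajectory wavefront already imprinted.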
The team demonstrated that two of these beams could be curved around a dummy’s head and made to intersect just in front of its face. When two sound waves interact nonlinearly, they can generate a secondary wave whose frequency equals the difference between the frequencies of the original waves. Using a pair of beams at 40 and 39.5 kilohertz, the researchers created a 500-hertz pocket of sound, well within the audible range, just a few centimeters across in front of the dummy’s head.
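A quick way to see where the 500 hertz comes from is to push the sum of two ultrasonic tones through a quadratic nonlinearity, a crude stand-in for the nonlinear response of air rather than a model of the actual acoustics in the paper. The carrier frequencies below follow the article; the sample rate and duration are arbitrary choices.

```python
# A toy illustration of difference-frequency generation, not a model of the
# acoustics in the paper: pass the sum of two ultrasonic tones through a
# quadratic nonlinearity (a crude stand-in for the nonlinear response of air)
# and look for what lands in the audible band.

import numpy as np

fs = 400_000                        # sample rate, Hz (high enough for the 80 kHz harmonics)
t = np.arange(20_000) / fs          # 50 ms of signal

f1, f2 = 40_000.0, 39_500.0         # the two ultrasonic beams, Hz
p = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

p_nl = p**2                         # quadratic mixing: yields f1-f2, f1+f2, 2*f1, 2*f2, and DC

spec = np.abs(np.fft.rfft(p_nl))
freqs = np.fft.rfftfreq(len(p_nl), d=1 / fs)
audible = (freqs > 20) & (freqs < 20_000)             # ignore DC, keep the audible band
peak = freqs[audible][np.argmax(spec[audible])]
print(f"strongest audible component: {peak:.0f} Hz")  # 500 Hz = 40 kHz - 39.5 kHz
```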
The researchers showed that by varying the frequency of one of the beams they could use the same metasurface to generate audio across six octave bands of the audible range, from 125 Hz to 4 kHz. While these experiments involved simple single-frequency tones, the researchers also demonstrated that the approach works with a 9-second burst of the “Hallelujah Chorus” from Handel’s Messiah, which spans a wide range of frequencies.
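How the sweep was driven isn't detailed in the article, but the difference-frequency arithmetic implies a simple tuning rule, sketched below under the assumption that one carrier stays fixed at 40 kilohertz: detune the other beam by exactly the audible frequency you want.

```python
# Hypothetical tuning rule inferred from the difference-frequency arithmetic,
# not a detail taken from the paper's methods: hold one beam at 40 kHz and
# detune the other so their difference equals the target audible frequency.
f1 = 40_000.0                                        # fixed ultrasonic carrier, Hz
for f_target in (125, 250, 500, 1000, 2000, 4000):   # octave-spaced targets, Hz
    f2 = f1 - f_target
    print(f"target {f_target:>4} Hz  ->  second beam at {f2 / 1000:.3f} kHz")
```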
Intersecting Ultrasound Beams
The main problem with the current approach, says Zhong, is that the interaction between the two beams can generate distortions that muddy the audio signal. However, the researchers think this can be resolved with new signal-processing techniques, including deep-learning algorithms that learn to compensate for the distortion.
For now, the trajectories of the self-bending beams are fixed, which means that the placement of the sound source has to be precise in order to avoid obstacles. Zhong says the group hopes to eventually create reconfigurable beams that can dynamically adjust to avoid different objects.
“We envision using adaptive processing algorithms to dynamically adjust beam trajectories in real time, allowing the beams to intelligently navigate around obstacles based on environmental feedback,” he says. “This would make the technology even more versatile for real-world applications where obstacles may move or change position.”
The potential applications of the technology are broad, says Zhong. They include personalized audio in public spaces, for instance, museum audio tours that don’t require headphones. The approach could also let different passengers in a car listen to separate audio without interfering with one another’s experience, he adds, or create private speech zones for confidential conversations. Projecting audio that cancels out an existing sound field could also enable localized noise cancellation.
Edd Gent is a freelance science and technology writer based in Bengaluru, India. His writing focuses on emerging technologies across computing, engineering, energy, and bioscience.