New AI Dupes Humans Into Believing Synthesized Sound Effects Are Real

Using machine learning, AutoFoley determines what actions are taking place in a video clip and creates realistic sound effects to match

4 stills from a video in which the AutoFoley program analyzed the movements of a horse and created sound effects to match the scene.
Images: Sanchita Ghose and Jeff Prevost

Imagine you are watching a scary movie: The heroine creeps through a dark basement, on high alert. Suspenseful music plays in the background while some unseen, sinister creature lurks in the shadows…and then, BANG! It knocks over an object.

Such scenes would hardly be as captivating and scary without the intense, but perfectly timed sound effects, like the loud bang that sent our main character wheeling around in fear. Usually these sound effects are recorded by Foley artists in the studio, who produce the sounds using oodles of objects at their disposal. Recording the sound of glass breaking may involve actually breaking glass repeatedly, for example, until the sound closely matches the video clip.

In a more recent plot twist, researchers have created an automated program that analyzes the movement in video frames and creates its own artificial sound effects to match the scene. In a survey, a majority of respondents said they believed the fake sound effects were real. The model, AutoFoley, is described in a study published 25 June in IEEE Transactions on Multimedia.

“Adding sound effects in postproduction using the art of Foley has been an intricate part of movie and television soundtracks since the 1930s,” explains Jeff Prevost, a professor at the University of Texas at San Antonio who cocreated AutoFoley. “Movies would seem hollow and distant without the controlled layer of a realistic Foley soundtrack. However, the process of Foley sound synthesis therefore adds significant time and cost to the creation of a motion picture.”

Intrigued by the thought of an automated Foley system, Prevost and his Ph.D. student, Sanchita Ghose, set about creating a multilayered machine-learning program. They created two different models that could be used in the first step, which involves identifying the actions in a video and determining the appropriate sound.

The first machine-learning model extracts image features (such as color and motion) from the frames of fast-moving action clips to determine an appropriate sound effect.
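As a rough illustration of that first step, the sketch below uses a small convolutional network that pools per-frame color features together with frame-difference (motion) features and maps them to a sound-effect class. The layer sizes, the class count, and the frame-differencing motion cue are illustrative assumptions, not the architecture from the paper.

```python
# Hypothetical sketch (not the paper's architecture): a small CNN that
# combines per-frame color information with frame differences as a crude
# motion cue, then averages over time to predict a sound-effect class.
import torch
import torch.nn as nn

class FrameSoundClassifier(nn.Module):
    def __init__(self, num_classes: int = 12):  # class count is an assumption
        super().__init__()
        # 6 input channels: 3 RGB channels + 3 frame-difference channels
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W)
        motion = frames[:, 1:] - frames[:, :-1]          # consecutive-frame differences
        x = torch.cat([frames[:, 1:], motion], dim=2)    # (batch, T-1, 6, H, W)
        b, t, c, h, w = x.shape
        feats = self.features(x.reshape(b * t, c, h, w)).flatten(1)
        feats = feats.reshape(b, t, -1).mean(dim=1)      # average features over time
        return self.classifier(feats)                    # sound-class logits

# Example: two 8-frame clips at 64x64 resolution
logits = FrameSoundClassifier()(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 12])
```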

The second model analyzes the temporal relationship of an object in separate frames. By using relational reasoning to compare different frames across time, the second model can anticipate what action is taking place in the video.
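The sketch below shows the flavor of that relational reasoning: per-frame features are compared in ordered pairs across time, and the pooled pair-wise relations feed an action prediction, in the spirit of a temporal relation network. The dimensions and layer choices are again assumptions for illustration, not the authors' model.

```python
# Hypothetical sketch of relational reasoning over time: per-frame features
# are compared in ordered pairs, and the pooled relations yield an action
# prediction.
import itertools
import torch
import torch.nn as nn

class PairwiseTemporalRelation(nn.Module):
    def __init__(self, feat_dim: int = 64, num_actions: int = 12):
        super().__init__()
        # g(.): reasons about one ordered pair of frame features
        self.relation = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        # h(.): maps the pooled pair-wise relations to action logits
        self.head = nn.Linear(128, num_actions)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, time, feat_dim), e.g. per-frame CNN features
        b, t, d = frame_feats.shape
        relations = []
        for i, j in itertools.combinations(range(t), 2):  # every pair with i < j
            pair = torch.cat([frame_feats[:, i], frame_feats[:, j]], dim=1)
            relations.append(self.relation(pair))
        pooled = torch.stack(relations, dim=1).mean(dim=1)  # average over all pairs
        return self.head(pooled)

# Example: two clips, 8 frames each, 64-dimensional features per frame
logits = PairwiseTemporalRelation()(torch.randn(2, 8, 64))
print(logits.shape)  # torch.Size([2, 12])
```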

In a final step, sound is synthesized to match the activity or motion predicted by one of the models. Prevost and Ghose used AutoFoley to create sound for 1,000 short movie clips capturing a number of common actions, like falling rain, a galloping horse, and a ticking clock.
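As a toy stand-in for that final step, the snippet below pairs a predicted class label with a generated waveform of the clip's duration. AutoFoley synthesizes its audio with a learned model; the procedural "ticking clock" and "rain" here are hypothetical placeholders meant only to show how a class prediction maps to a soundtrack.

```python
# Toy placeholder for the synthesis step: map a predicted class label to a
# waveform of the clip's duration. AutoFoley uses a learned sound-synthesis
# model; the procedural audio below only illustrates the pairing.
import numpy as np

SAMPLE_RATE = 16_000  # Hz, an assumed output rate

def synthesize_for_class(label: str, seconds: float) -> np.ndarray:
    t = np.linspace(0.0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    if label == "ticking_clock":
        # one short, decaying click per second
        phase = t % 1.0
        return (np.exp(-60.0 * phase) * np.sin(2 * np.pi * 2000 * t)).astype(np.float32)
    if label == "falling_rain":
        # broadband noise as a rough stand-in for rain
        noise = np.random.default_rng(0).standard_normal(t.size)
        return (0.2 * noise).astype(np.float32)
    return np.zeros(t.size, dtype=np.float32)  # silence for unknown labels

clip_audio = synthesize_for_class("ticking_clock", seconds=3.0)
print(clip_audio.shape)  # (48000,)
```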

Unsurprisingly, analysis shows that AutoFoley is best at producing sounds whose timing doesn't need to align perfectly with the video (such as falling rain or a crackling fire). The program is more likely to fall out of sync with the video when scenes contain irregular actions with variation in time (such as typing or thunderstorms).

Next, Prevost and Ghose surveyed 57 local college students on which movie clips they thought included original soundtracks. In assessing soundtracks generated by the first model, 73 percent of students surveyed chose the synthesized AutoFoley clip as the original piece, over the true original sound clip. In assessing the second model, 66 percent of respondents chose the AutoFoley clip over the original sound clip.

“One limitation in our approach is the requirement that the subject of classification is present in the entire video frame sequence,” says Prevost, also noting that AutoFoley currently relies on a data set with limited Foley categories. While a patent for AutoFoley is still in the early stages, Prevost says these limitations will be addressed in future research.

Will AI Steal Submarines’ Stealth?

Better detection will make the oceans transparent—and perhaps doom mutually assured destruction

A photo of a submarine in the water under a partly cloudy sky.

The Virginia-class fast attack submarine USS Virginia cruises through the Mediterranean in 2010. Back then, it could effectively disappear just by diving.

U.S. Navy

Submarines are valued primarily for their ability to hide. The assurance that submarines would likely survive the first missile strike in a nuclear war and thus be able to respond by launching missiles in a second strike is key to the strategy of deterrence known as mutually assured destruction. Any new technology that might render the oceans effectively transparent, making it trivial to spot lurking submarines, could thus undermine the peace of the world. For nearly a century, naval engineers have striven to develop ever-faster, ever-quieter submarines. But they have worked just as hard at advancing a wide array of radar, sonar, and other technologies designed to detect, target, and eliminate enemy submarines.

The balance seemed to turn with the emergence of nuclear-powered submarines in the early 1960s. In a 2015 study for the Center for Strategic and Budgetary Assessments, Bryan Clark, a naval specialist now at the Hudson Institute, noted that the ability of these boats to remain submerged for long periods of time made them “nearly impossible to find with radar and active sonar.” But even these stealthy submarines produce subtle, very-low-frequency noises that can be picked up from far away by networks of acoustic hydrophone arrays mounted to the seafloor.
