
Echolocation by Smartphone Possible

A burst of sound and a pair of microphones may be enough to map simple rooms


Photo: Robert Harding World Imagery / Alamy

Submarines, bats, and even humans can echolocate, but they need high-end acoustic gear, brainpower, or training in order to do it. Now electrical engineer Ivan Dokmanić, of the École Polytechnique Fédérale de Lausanne (EPFL), in Switzerland, could bring that capability to smartphones. He has used echolocation combined with a simple algorithm and off-the-shelf microphones to map part of a complex structure—the Lausanne Cathedral. Used in reverse, this kind of technology could one day help smartphones find their location inside buildings.

Echolocation at its most basic consists of sending a sound toward an item of interest and timing its return. If you know the medium, you also know how fast it will carry the sound. Solve a simple equation and you have the distance to the item.
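For a single, clean echo, that equation is just distance equals the speed of sound times the round-trip time, divided by two. Here is a minimal sketch, assuming sound traveling through air at roughly 343 meters per second (an illustration, not code from the researchers):

```python
# Minimal time-of-flight sketch (assumption: sound in air at ~343 m/s).
SPEED_OF_SOUND_M_PER_S = 343.0

def distance_from_echo(round_trip_seconds: float) -> float:
    """Distance to a reflector, given the round-trip time of a single echo."""
    # The pulse travels out and back, so halve the total path length.
    return SPEED_OF_SOUND_M_PER_S * round_trip_seconds / 2.0

# Example: an echo returning after 20 milliseconds implies a reflector
# about 3.4 meters away.
print(distance_from_echo(0.020))
```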

But mapping even the simplest room, let alone a cathedral, is more complex. The first sound reflects from all the room’s surfaces, flooding the listener with signals from many directions. Even after passing the microphone the first time, those first sound waves can reflect on opposing walls and return to the microphone a second time, adding secondary reflections to the already confusing signal. “You need somehow a way to tell, ‘This group of echoes corresponds to one wall, and another group of echoes corresponds to another wall,’ ” Dokmanić says.
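The standard image-source picture makes that mixing easy to see: each wall acts like a mirror that creates a phantom copy of the source, and a lone microphone simply hears the resulting delays sorted in time, with no labels attached. The sketch below assumes a simple two-dimensional rectangular room, purely for illustration:

```python
# Image-source sketch for an assumed 2-D rectangular room (illustration only).
SPEED_OF_SOUND = 343.0  # m/s, assumed

def first_order_delays(room_w, room_h, src, mic):
    """Echo delay contributed by each wall, via the mirror image of the source."""
    sx, sy = src
    mx, my = mic
    images = {
        "left wall":  (-sx, sy),
        "right wall": (2 * room_w - sx, sy),
        "near wall":  (sx, -sy),
        "far wall":   (sx, 2 * room_h - sy),
    }
    return {wall: ((ix - mx) ** 2 + (iy - my) ** 2) ** 0.5 / SPEED_OF_SOUND
            for wall, (ix, iy) in images.items()}

# A single microphone hears only the sorted delays; the wall labels are
# exactly what an echolocation algorithm has to recover.
delays = first_order_delays(6.0, 4.0, src=(1.0, 1.5), mic=(4.0, 2.5))
for wall, t in sorted(delays.items(), key=lambda kv: kv[1]):
    print(f"{wall}: {1000 * t:.1f} ms")
```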

Some solutions involve sending sound from multiple known locations at different times. Other solutions involve using multiple microphones. Dokmanić, who says he has a taste for simplicity, once tried to calculate a hypothetical room’s geometry using just one sound source and one microphone [PDF]. This system worked on paper for some kinds of rooms in noiseless environments, but in the real world, noise is everywhere. “Maybe you’ll have some spurious spikes in your signal,” Dokmanić says, “so you also need a way to discard these.”

Dokmanić’s method, published online this week in the Proceedings of the National Academy of Sciences, uses a mathematical tool called a Euclidean distance matrix, which helps sort the reflected sounds along a timeline. But he conceded a point to complexity and used multiple microphones—although only one sound source.
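The paper's procedure is more involved, but the role the distance matrix plays can be sketched roughly as follows: pick one candidate echo per microphone, convert the delays into distances, append them to the matrix of known inter-microphone distances, and keep the combination only if the augmented matrix is still a valid Euclidean distance matrix for points in three dimensions. The code below is an illustrative version of that consistency test, with assumed microphone positions and a hand-picked numerical tolerance, not the published implementation:

```python
import numpy as np

def is_consistent_echo_set(mic_positions, echo_distances, tol=1e-6):
    """Could these echo path lengths (one per microphone) all come from a
    single image source, i.e. the same wall?"""
    mics = np.asarray(mic_positions, dtype=float)
    n = len(mics)
    # Squared-distance matrix over the microphones plus one extra point:
    # the hypothetical image source implied by the candidate echoes.
    D = np.zeros((n + 1, n + 1))
    for i in range(n):
        for j in range(n):
            D[i, j] = np.sum((mics[i] - mics[j]) ** 2)
        D[i, n] = D[n, i] = echo_distances[i] ** 2
    # Classical test: D is a Euclidean distance matrix of points in 3-D iff the
    # double-centered Gram matrix is positive semidefinite with rank at most 3.
    m = n + 1
    J = np.eye(m) - np.ones((m, m)) / m
    gram = -0.5 * J @ D @ J
    eig = np.linalg.eigvalsh(gram)
    scale = max(np.abs(eig).max(), 1.0)
    return eig.min() > -tol * scale and (np.abs(eig) > tol * scale).sum() <= 3

# Example with four assumed microphone positions and one image source.
mics = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (0.0, 0.6, 0.0), (0.0, 0.0, 0.4)]
image_source = np.array([3.0, 2.0, 1.0])
true_echoes = [float(np.linalg.norm(image_source - np.array(m))) for m in mics]
print(is_consistent_echo_set(mics, true_echoes))        # True: same wall
print(is_consistent_echo_set(mics, true_echoes[::-1]))  # False: mismatched echoes
```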

Electrical engineering researcher Flavio P. Ribeiro, of Microsoft’s Applied Sciences Group, in Redmond, Wash., calls this application of Euclidean distance matrices “useful” but notes that Dokmanić’s algorithm assumes tidier environments than exist in the real world, such as rooms with no furniture or other clutter that might complicate the sound signal. Such clutter creates “sound shadows” that would require more computing power to untangle.

Other algorithms, including one created by electrical engineer Sakari Tervo of Aalto University, in Finland, and a colleague, seek to reconstruct a room’s geometry even in the absence of some of the initial sound reflections, although these algorithms rely on multiple microphones. Dokmanić’s latest system, by contrast, assumes that all the first reflections have been captured before it filters out the secondary reflections and noise.

Tervo also worries that Dokmanić’s algorithm will not translate to more complex settings. In their paper, Dokmanić and his colleagues note that their map of the cathedral is imperfect due to reflections from figurines, columns, and curved surfaces. They were unable to distinguish between some of the smaller walls and the secondary reflections from bigger walls, he says. They achieved much better accuracy when they mapped a simple classroom with a fifth wall made of stacked tables.

Even so, the experiments inspired Dokmanić to explore hiring a developer who could help create smartphone applications using his algorithm. In a room with known dimensions, a pair of sound-emitting devices might be able to calculate their positions in the room, he suggests. The algorithm might also help improve teleconferencing sound quality. Electrical engineer Fabio Antonacci, of the Politecnico di Milano, in Italy, says he and others aim to improve teleconferencing too. They presented a paper last year in which they tried to identify sound sources at multiple locations in order to focus the listening devices on all of them at once, in much the same way that recent experimental light-field cameras allow users to refocus images at multiple depths.
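As a loose illustration of that reverse idea (an assumption-laden sketch, not a design from the article): suppose a device with a co-located speaker and microphone sits in a rectangular room of known size, and suppose the first echo from each wall can already be told apart, which is the hard part in practice. The round-trip delays then give the device's distances to the walls, and hence its coordinates:

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed

def position_from_wall_echoes(room_w, room_d, delays):
    """delays: round-trip echo times (s) from the left, right, near, and far
    walls, in that order (a strong assumption for illustration)."""
    d_left, d_right, d_near, d_far = (SPEED_OF_SOUND * t / 2.0 for t in delays)
    # Each pair of opposite walls should account for the full room dimension;
    # average the two estimates to absorb a little measurement noise.
    x = (d_left + (room_w - d_right)) / 2.0
    y = (d_near + (room_d - d_far)) / 2.0
    return x, y

# Example: in a 6 m x 4 m room, echoes at 11.7, 23.3, 8.7, and 14.6 ms put the
# device near (2.0, 1.5).
print(position_from_wall_echoes(6.0, 4.0, (0.0117, 0.0233, 0.0087, 0.0146)))
```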

Achieving those goals will take “smarter algorithms,” Dokmanić says, but after this experiment, he is optimistic: “It is kind of surprising that you can do it with so little infrastructure.”

About the Author

Lucas Laursen has contributed to the chicken and human-robot interaction beats for IEEE Spectrum since 2010. In April 2013 he reported for us on body-fluid-powered microrockets for drug delivery.
