Silicon Cochlea Mimics Human Hearing

Low-power, multichannel listening device could put computers closer to perceiving the world the way we do

Photo: Katherine Bourzac

Cameras and audio equipment are getting better all the time, but mostly through brute force: more pixels, more sensors, and better post-processing. Mammalian eyes and ears beat them handily when it comes to efficiency and the ability to focus only on what's interesting or important.

Neuromorphic engineers, who try to mimic the strengths of biological systems in manmade ones, have made big strides in recent years, especially with vision. Researchers have made machine-vision systems that only take pictures of moving objects, for example. Instead of taking many images at a steady, predetermined rate, these kinds of cameras monitor for changes in a scene and only record those. This strategy, called event-based sampling, saves a lot of energy and can also enable higher resolution.

One example is a silicon retina made by Tobi Delbrück of the Institute of Neuroinformatics in Zurich; it was used as the eyes in a robotic soccer goalie. This design, made in 2007, has a 3-millisecond reaction time.

Last week, at the IEEE International Solid-State Circuits Conference in San Francisco, another group showed how this approach can also work for hearing. Shih-Chii Liu, co-leader of the Sensors Group at the Institute of Neuroinformatics, described a silicon cochlea that uses just 55 microwatts of power (three orders of magnitude less than previous versions of the system) to detect sound in a humanlike way.

The neuromorphic auditory system uses two “ears,” each of which can be moved independently of the other. The difference in timing between sound waves reaching the two ears makes it possible to locate the origin of a sound, says Liu. Each silicon ear has 64 channels, each of which responds to a different frequency band, from low pitches to high. These channels mimic the cells in the human cochlea, which also respond to different frequencies (there are about a thousand in the real thing).
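The timing trick Liu describes can be written down compactly. In the standard far-field model, a sound source at angle θ off the forward axis arrives at the nearer ear earlier by Δt = d·sin(θ)/c, where d is the ear spacing and c the speed of sound, so the angle can be recovered from the measured delay. The sketch below is ours, not the device's algorithm; the 0.2-meter ear spacing and the plane-wave assumption are invented for illustration.

```python
import math

# Hypothetical interaural-time-difference (ITD) localization sketch.
# Assumes a far-field (plane-wave) source and a 0.2 m ear spacing;
# neither number comes from the silicon cochlea itself.

SPEED_OF_SOUND = 343.0  # meters per second, in air


def bearing_from_itd(delta_t, ear_spacing=0.2):
    """Angle in degrees of the source off the forward axis.

    delta_t is the arrival-time difference between the two ears,
    in seconds (positive when the sound reaches one ear first).
    """
    x = SPEED_OF_SOUND * delta_t / ear_spacing
    x = max(-1.0, min(1.0, x))  # clamp against measurement noise
    return math.degrees(math.asin(x))
```

With a 0.2-meter spacing the maximum possible delay is about 0.2/343 ≈ 583 microseconds, corresponding to a source directly to one side (90 degrees); a zero delay puts the source straight ahead.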

Liu connects the silicon cochlea to her laptop and shows what it’s recording with a graph of frequency over time. When we’re quiet, there’s no activity. When one of us speaks into the microphone, there are spikes around the 100-to-200-hertz range; the other channels, which together span 20 Hz to 20 kilohertz, register nothing.
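One plausible way to carve 20 Hz to 20 kHz into 64 channels is logarithmic spacing, which is also roughly how frequencies are laid out along the biological cochlea. The sketch below is our assumption about the band layout, not a description of Liu's actual filter design:

```python
# Hypothetical channel layout: 64 log-spaced bands covering the
# 20 Hz - 20 kHz range mentioned in the article. The spacing rule
# is our illustration, not the chip's documented design.


def channel_edges(n_channels=64, f_low=20.0, f_high=20_000.0):
    """Return n_channels + 1 logarithmically spaced band edges in hertz."""
    ratio = (f_high / f_low) ** (1.0 / n_channels)
    return [f_low * ratio**i for i in range(n_channels + 1)]


edges = channel_edges()

# Under this layout, speech energy around 100-200 Hz would light up
# only a small cluster of low channels, matching the demo's spikes:
speech_channels = [i for i in range(64)
                   if edges[i] < 200.0 and edges[i + 1] > 100.0]
```

Under these assumed edges, a 100-to-200-hertz voice excites just eight of the 64 channels, which is why the rest of the display stays dark during the demo.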

Liu says her group and Delbrück’s are now working to integrate the silicon cochlea and retina. This could give a humanoid robot a lot more low-power smarts. Besides being more humanlike, multimodal sensing means machines will miss less of what’s going on. This is evident in the way human senses support each other. When you talk to someone in a noisy restaurant, for example, you can’t always hear their every word. But your brain fills in the missing auditory pieces with visual information gathered by watching their lips.

The neuromorphic researchers want to integrate these smart, low-power sensors with processors running deep learning algorithms. This kind of artificial intelligence does a good job of recognizing what’s going on in an image; some versions can even generate a surprisingly accurate sentence describing a scene. Neural networks excel at understanding and generating speech, too. Combining neuromorphic engineering with deep learning could yield computers that mimic human sensory perception better than ever before.

Asked whether this advance would someday help humans who are deaf or hard of hearing, Liu said the current design wouldn’t work for cochlear implants, so that is not an application her group is pursuing. She notes that although it could work in theory, it would probably involve some fundamental changes in hearing aid design that might cost so much to implement that the ultimate payoff would not be worth the effort.

This post was corrected on 4 April to clarify Liu’s name and affiliation.
