Mitsubishi Electric’s AI Can Follow and Separate Simultaneous Speech

The Japanese company believes it has created speech separation technology good enough to solve the cocktail party problem

Photo: John Boyd

The cocktail party problem refers to the challenge of following a single person’s speech in a room full of surrounding chatter and noise. With a little concentration, humans can focus in on what a particular person is saying. But when we want technology to separate the speech of a targeted person from the simultaneous conversations of others—as we do with hands-free telephony when a caller is in a car with kids in the back seat—the results leave much to be desired.

Until now, that is, says Mitsubishi Electric. The company demonstrated its speech separation technology at its annual R&D Open House in Tokyo on 24 May. In one demonstration, two people spoke sentences in different languages simultaneously into a single microphone. The technology separated the two sentences in near real time (a delay of about 3 seconds), then reconstructed and played them back consecutively with impressive accuracy. However, the demonstration took place in a closed room and required silence from all those watching.

A second demonstration used a simulated mix of three speakers. Unsurprisingly, the result was noticeably less accurate.

Mitsubishi claims accuracy of up to 90 percent and 80 percent, respectively, for the two scenarios under ideal conditions of low ambient noise and speakers talking at about the same volume—the best results ever, the company believes. This compares well with conventional technology, which achieves only around 50 percent accuracy for two speakers using a single microphone, says the company.

The technology uses Mitsubishi’s Deep Clustering, a proprietary deep-learning method.

The system has learned how to examine and separate mixed speech data. A deep neural network encodes elements of the speech signal based on each speaker’s tone, pitch, intonation, and so on. The encodings are optimized so that components belonging to the same speaker are similar, while those belonging to different speakers are dissimilar. A clustering algorithm then groups the encodings according to those similarities, and each person’s speech is reconstructed by synthesizing the separated speech components.
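The pipeline described above can be sketched in miniature. This is not Mitsubishi's implementation: it assumes a trained network already maps each time-frequency bin of the mixture spectrogram to an embedding vector (here faked with synthetic data), and it uses a plain k-means step as the clustering stage. The resulting per-speaker binary masks are what would be applied to the mixture spectrogram before inverting it back to audio.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means: returns a cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy "embeddings": in the real system, a trained deep network maps each
# time-frequency bin of the mixture spectrogram to a D-dimensional vector
# such that bins dominated by the same speaker lie close together.
T, F, D = 100, 129, 20                    # time frames, frequency bins, embedding dim
rng = np.random.default_rng(1)
speaker_centers = rng.normal(size=(2, D))           # two speakers
true_owner = rng.integers(0, 2, size=(T, F))        # which speaker dominates each bin
emb = speaker_centers[true_owner] + 0.1 * rng.normal(size=(T, F, D))

# Cluster the per-bin embeddings into two groups, one per speaker.
labels = kmeans(emb.reshape(-1, D), k=2).reshape(T, F)
masks = [labels == j for j in range(2)]   # one binary mask per speaker

# Each mask would be applied to the mixture spectrogram, and each masked
# spectrogram inverted (e.g. with an inverse STFT) to recover one voice.
```

The key property is that the network, not the clustering step, does the hard work: because bins from the same speaker are pushed close together in embedding space, even a generic clustering algorithm can pull the voices apart.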

“Unlike separating a speaker from background noise, separating a speaker’s speech from another speaker talking is most difficult, because they have similar characteristics,” says Anthony Vetro, deputy director at Mitsubishi Electric Research Laboratories in Cambridge, Mass. “You can do it to some degree by using more elaborate set-ups of two or more mics to localize the speakers, but it is very difficult with just one mic.”

The beauty of this system, he adds, is that it is not speaker dependent, so no speaker-specific training is involved. Similarly, it is not language dependent. 

Yohei Okato, senior manager of Mitsubishi Electric’s Natural Language Processing Technology Group in Kamakura, near Tokyo, says the company will use the technology to improve the quality of voice communications and the accuracy of automatic speech recognition in applications such as controlling automobiles and elevators, as well as in the home to operate various appliances and gadgets. “We will be introducing it in the near future,” he adds.


Why Functional Programming Should Be the Future of Software Development

It’s hard to learn, but your code will produce fewer nasty surprises

Illustration: Shira Inbar

You’d expect the longest and most costly phase in the lifecycle of a software product to be the initial development of the system, when all those great features are first imagined and then created. In fact, the hardest part comes later, during the maintenance phase. That’s when programmers pay the price for the shortcuts they took during development.

So why did they take shortcuts? Maybe they didn’t realize that they were cutting any corners. Only when their code was deployed and exercised by a lot of users did its hidden flaws come to light. And maybe the developers were rushed. Time-to-market pressures would almost guarantee that their software would contain more bugs than it otherwise would.
