
During the early stages of some illnesses there may be little overt evidence that a health complication is developing, yet subtle clues can still be present. Two recent studies show that AI algorithms can analyze people’s voices to identify those in the early stages of Parkinson’s disease or a severe COVID-19 infection of the lungs. The researchers incorporated these algorithms into a smartphone app called Aum, which can be used to diagnose and monitor the respective conditions.

“[People’s voices are] routinely used by clinicians to evaluate their patients, especially for neurodegenerative disease, such as Parkinson's disease, and pulmonary disease, whether it be COVID-19 or other lung diseases,” explains Dinesh Kumar, a professor at the Royal Melbourne Institute of Technology (RMIT), in Australia, who was involved in both studies. “This [approach] is not only to detect the disease, but to evaluate the effects of medicine and to optimize the dosage.”

Kumar and his colleagues sought to explore whether these subtleties in a person’s voice could be detected by machine-learning algorithms. In their first study, they recruited 36 people with Parkinson’s disease and 36 healthy volunteers.

Importantly, there are natural differences between people’s voices, which could make it difficult to differentiate who does and does not have an underlying disorder. “To overcome this problem, we looked for nonlanguage sounds that could detect the disease despite the natural differences in voice of people,” explains Kumar.

Therefore, his team asked study participants to say different phonemes that require sounds from the throat (/a/), the mouth (/o/), and the nose (/m/). Recordings were made using an iOS-based smartphone. They then developed and applied a machine-learning algorithm that could differentiate between people with Parkinson’s disease and the healthy volunteers, despite ambient background noise. In their study published 12 September in IEEE Access, they report that the algorithm can identify people in the study population who have Parkinson’s disease with 100 percent accuracy.
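Neither paper’s code is public, but the general shape of such a pipeline, extracting acoustic features from each phoneme recording and then training a binary classifier, can be sketched as follows. The feature values here are synthetic stand-ins, and the support-vector machine is an illustrative assumption, not the authors’ actual model:

```python
# Illustrative sketch of a phoneme-based voice classifier.
# Features and model are assumptions for demonstration; the study's
# exact feature set and algorithm may differ.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for acoustic features (e.g., jitter, shimmer, spectral
# statistics) extracted from /a/, /o/, and /m/ recordings.
n_per_group, n_features = 36, 12
healthy = rng.normal(0.0, 1.0, (n_per_group, n_features))
parkinsons = rng.normal(1.5, 1.0, (n_per_group, n_features))  # shifted cluster

X = np.vstack([healthy, parkinsons])
y = np.array([0] * n_per_group + [1] * n_per_group)  # 0 = healthy, 1 = PD

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

On real recordings, the feature-extraction step (not shown) would matter far more than the classifier choice, and accuracy would have to be validated on held-out speakers rather than held-out samples.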

Video: “An App That Detects Parkinson’s and COVID-19” (www.youtube.com)

“More importantly, we can differentiate between PD with active medication and without active medication,” Kumar notes. “This is very important because the neurologist can evaluate their patients even when the patients are unable to go to the clinic, and hence is suitable for people living in remote regions.”

One of the researchers on this initial study, RMIT associate professor Nemuel Pah, had an opportunity to conduct a similar study in COVID-19 patients in Indonesia during June and July of 2021. In that case, Pah recorded 40 hospitalized COVID-19 patients and 48 healthy subjects over the course of 22 days, asking them to say six phonemes (/a/, /e/, /i/, /o/, /u/, and /m/).

The researchers then applied a different machine-learning algorithm to that data, finding that the features extracted from the vowel /i/ during the first three days after admittance to the hospital were the most effective at differentiating between people with a COVID-19 lung infection and healthy controls. The algorithm did so with 94 percent accuracy. These results were published 20 September in the IEEE Journal of Translational Engineering in Health and Medicine.
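The idea of screening phonemes for discriminative power can be illustrated with a small sketch: train a classifier on features from each phoneme separately and compare cross-validated accuracy. All data below are synthetic, and the logistic-regression model is a placeholder, not the algorithm the study used:

```python
# Sketch of per-phoneme screening: fit a classifier on each phoneme's
# features and compare cross-validated accuracy. Data are synthetic;
# the simulation simply builds in a stronger group difference for /i/.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
phonemes = ["a", "e", "i", "o", "u", "m"]
n_covid, n_healthy, n_features = 40, 48, 8
y = np.array([1] * n_covid + [0] * n_healthy)  # 1 = COVID-19, 0 = healthy

scores = {}
for p in phonemes:
    shift = 1.2 if p == "i" else 0.3  # assumed effect sizes, for illustration
    covid = rng.normal(shift, 1.0, (n_covid, n_features))
    healthy = rng.normal(0.0, 1.0, (n_healthy, n_features))
    X = np.vstack([covid, healthy])
    scores[p] = cross_val_score(LogisticRegression(), X, y, cv=5).mean()

best = max(scores, key=scores.get)
print(f"most discriminative phoneme: /{best}/ ({scores[best]:.2f})")
```

This kind of per-phoneme comparison is how one would arrive at a finding like the study’s, that /i/ recorded in the first days after admission carried the most diagnostic signal.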

An advantage of using an app to identify COVID-19 patients, Kumar says, is that voice recordings do not require doctors to make physical contact with a patient, and therefore would be a suitable way to limit exposure. “Voice recordings can be obtained from patients located remotely,” he says, noting that this approach to phoneme analysis is also not dependent on language.

Kumar says his team is interested in applying these approaches in the real world, but more work needs to be done. Specifically, they are looking to register their technology with Australia’s Therapeutic Goods Administration, the U.S. Food and Drug Administration (FDA), and China’s National Medical Products Administration. “That needs time and money; we are hoping to get funds from [people and] organizations focused on Parkinson’s disease, such as Michael J. Fox, or investors for this purpose,” Kumar says.

The Conversation (1)

R Watkins (10 Nov 2022): Talking into the bottom edge of the handset exposes the rear microphone to one’s voice, thereby defeating the handset’s noise-cancellation software. Probably not optimal for diagnostic purposes.

Will AI Steal Submarines’ Stealth?

Better detection will make the oceans transparent—and perhaps doom mutually assured destruction

The Virginia-class fast attack submarine USS Virginia cruises through the Mediterranean in 2010. Back then, it could effectively disappear just by diving. Photo: U.S. Navy

Submarines are valued primarily for their ability to hide. The assurance that submarines would likely survive the first missile strike in a nuclear war and thus be able to respond by launching missiles in a second strike is key to the strategy of deterrence known as mutually assured destruction. Any new technology that might render the oceans effectively transparent, making it trivial to spot lurking submarines, could thus undermine the peace of the world. For nearly a century, naval engineers have striven to develop ever-faster, ever-quieter submarines. But they have worked just as hard at advancing a wide array of radar, sonar, and other technologies designed to detect, target, and eliminate enemy submarines.

The balance seemed to turn with the emergence of nuclear-powered submarines in the early 1960s. In a 2015 study for the Center for Strategic and Budgetary Assessments, Bryan Clark, a naval specialist now at the Hudson Institute, noted that the ability of these boats to remain submerged for long periods of time made them “nearly impossible to find with radar and active sonar.” But even these stealthy submarines produce subtle, very-low-frequency noises that can be picked up from far away by networks of acoustic hydrophone arrays mounted to the seafloor.
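The principle behind this kind of passive narrowband detection can be shown in a few lines: a machinery tonal far below the per-sample noise floor becomes visible in an averaged power spectrum, because averaging many segments suppresses the noise variance while the coherent spectral line stays put. The sample rate, tonal frequency, and signal-to-noise ratio below are invented for illustration:

```python
# Toy illustration of passive narrowband detection: a faint tonal
# buried in broadband noise emerges as a spectral line once enough
# 1-second segments are averaged (a Welch-style estimate).
import numpy as np

rng = np.random.default_rng(2)
fs = 1000.0                          # sample rate, Hz (assumed)
t = np.arange(0, 60.0, 1.0 / fs)    # 60 s of simulated hydrophone data
tonal_hz = 50.0                      # hypothetical propulsion-related line

signal = 0.1 * np.sin(2 * np.pi * tonal_hz * t)  # weak coherent tonal
noise = rng.normal(0.0, 1.0, t.size)             # broadband sea noise
x = signal + noise                               # tonal is -23 dB per sample

# Split into 1-s windows, window each, and average the periodograms.
seg_len = int(fs)
segs = x[: (x.size // seg_len) * seg_len].reshape(-1, seg_len)
psd = np.mean(
    np.abs(np.fft.rfft(segs * np.hanning(seg_len), axis=1)) ** 2, axis=0)
freqs = np.fft.rfftfreq(seg_len, 1.0 / fs)

peak_hz = freqs[np.argmax(psd)]
print(f"strongest spectral line near {peak_hz:.0f} Hz")
```

Real systems integrate over far longer times, track multiple lines at once, and must contend with propagation effects and moving sources, but the averaging trade-off, listening longer to hear quieter, is the same.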
