Again and again, experts have pleaded that we need more and faster testing to control the coronavirus pandemic—and many have suggested that artificial intelligence (AI) can help. Numerous COVID-19 diagnostics in development use AI to quickly analyze X-ray or CT scans, but these techniques require a chest scan at a medical facility.
Since the spring, research teams have been working toward anytime, anywhere apps that could detect coronavirus in the sound of a cough. In June, a team at the University of Oklahoma showed it was possible to distinguish a COVID-19 cough from coughs due to other infections, and now a paper out of MIT, using the largest cough dataset yet, identifies asymptomatic people with a remarkable 100 percent detection rate.
If approved by the FDA and other regulators, COVID-19 cough apps, in which a person records themselves coughing on command, could eventually be used for free, large-scale screening of the population.
With potential like that, the field is rapidly growing: Teams pursuing similar projects include a Bill and Melinda Gates Foundation-funded initiative, Cough Against Covid, at the Wadhwani Institute for Artificial Intelligence in Mumbai; the Coughvid project out of the Embedded Systems Laboratory of the École Polytechnique Fédérale de Lausanne in Switzerland; and the University of Cambridge’s COVID-19 Sounds project.
The fact that multiple models can detect COVID in a cough suggests that there may be no such thing as a truly asymptomatic coronavirus infection: the virus appears to cause physical changes that alter the way a person produces sound. “There aren’t many conditions that don’t give you any symptoms,” says Brian Subirana, director of the MIT Auto-ID lab and co-author on the recent study, published in the IEEE Open Journal of Engineering in Medicine and Biology.
While human ears cannot distinguish those changes, AI can. Ali Imran, who led the earlier project at the University of Oklahoma’s AI4Networks Research Center, compares the concept to a guitar: If you put objects of different shapes or materials in a guitar, playing the same notes will produce subtly different sounds. “The human ear is capable of distinguishing maybe five to ten different features of cough,” says Imran. “With signal processing and machine learning, we can extract up to 300 different distinct features.”
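To give a flavor of what “extracting features” from a cough means, here is a toy sketch. This is not the AI4Networks pipeline; it is a minimal, NumPy-only illustration of how a few spectral features (centroid, bandwidth, roll-off) might be computed from one audio frame, where a real system would compute hundreds of such numbers. The synthetic “cough” signal is invented for the example.

```python
import numpy as np

def spectral_features(frame, sample_rate=16000):
    """Compute a few illustrative spectral features from one audio frame.

    A real cough-analysis pipeline extracts hundreds of features (MFCCs,
    deltas, and so on); this sketch shows just three simple ones.
    """
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    power = spectrum / (spectrum.sum() + 1e-12)  # normalize to a distribution

    centroid = float((freqs * power).sum())  # "center of mass" of the spectrum
    bandwidth = float(np.sqrt(((freqs - centroid) ** 2 * power).sum()))
    rolloff = float(freqs[np.searchsorted(np.cumsum(power), 0.85)])  # 85% energy point
    return {"centroid": centroid, "bandwidth": bandwidth, "rolloff": rolloff}

# Hypothetical one-second "cough": a noise-modulated tone, purely for illustration.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)
frame = np.sin(2 * np.pi * 440 * t) * rng.random(16000)
features = spectral_features(frame)
```

Feeding vectors of features like these into a classifier is what lets a machine pick up on differences far too fine for the human ear.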
When the pandemic hit, Subirana’s team at MIT had been working on a set of machine learning algorithms to detect Alzheimer’s disease in audio recordings using biomarkers such as vocal cord strength, sentiment, lung performance, and muscular degradation. When it became clear that coughing was a key feature of COVID-19, they quickly pivoted to seeing if it was possible for AI to detect coronavirus infections.
In a crowd-sourcing effort, the team gathered forced-cough recordings via a website between April and May, developing what the team claims is the largest audio COVID-19 dataset to date, with 70,000 recordings, of which 2,680 were submitted by people confirmed to have COVID-19.
Originally, the MIT team developed AI models for the project from scratch, but reached an accuracy ceiling of about 70 percent. As a test one weekend, the researchers trained their existing Alzheimer’s disease AI model with the COVID-19 cough data, and it worked, says Subirana. The model was accurate 98.5 percent of the time at detecting people who had received a positive test result. In detecting individuals with no symptoms at all, that accuracy climbed to 100 percent, with 83.2 percent success identifying negative cases. “It was a bit counterintuitive” that detecting asymptomatic patients was easier than detecting symptomatic ones, says Subirana, but it makes sense that the confounding features of other respiratory infections would make symptomatic COVID-19 coughs harder to pinpoint.
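For readers keeping score, those figures map onto standard screening terms: the share of confirmed-positive coughs the model flags is its sensitivity, and the share of negative coughs it correctly clears is its specificity. The sketch below computes both from a confusion matrix; the counts are invented for illustration and are not the MIT team's data, they merely produce the same percentages.

```python
def screening_metrics(true_pos, false_neg, true_neg, false_pos):
    """Sensitivity = detected positives / all positives;
    specificity = cleared negatives / all negatives."""
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Hypothetical counts for 1,200 screened coughs (invented numbers):
# 200 true positives, 1,000 true negatives in the batch.
sens, spec = screening_metrics(true_pos=197, false_neg=3,
                               true_neg=832, false_pos=168)
# sens = 0.985 and spec = 0.832, the same shape as the reported results.
```

Framing the numbers this way also explains why a screening tool can tolerate a lower specificity: a false alarm sends someone for a confirmatory lab test, while a missed case lets an infection circulate.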
Back in June, Imran and colleagues were able to develop an AI model to identify asymptomatic coughs and sift through those confounding factors to distinguish COVID-19 coughs from the cough sounds of bronchitis, whooping cough, and asthma with 90 percent overall accuracy. “Our goal was to make sure someone who simply has asthma would not be misdiagnosed as having COVID,” says Imran.
Most teams pursuing this work are currently collecting more cough recordings: at workplaces, hospitals, online, and elsewhere. Researchers hope that cough apps will someday be used for daily screenings, such as students or factory workers coughing into their phones before heading to school or work. Eventually, says Subirana, the tool could be part of a true COVID-19 diagnostic, perhaps when used in combination with other biomarkers, such as fever.
Sound-based tools could also be used as an early warning system, in which coughs across a population are detected via hospital recordings or home smart speakers to pick up early signs of infection of a new disease. “This kind of solution can be used to identify unique cough signatures which will not be in the database already,” says Imran. “It can become an alarm system.”
And it’s not the only push to use AI to detect the sounds of COVID-19: A team of researchers from Saudi Arabia, India, and the UK is developing an app to screen for COVID-19 symptoms in an individual’s speech.
Megan is an award-winning freelance journalist based in Boston, Massachusetts, specializing in the life sciences and biotechnology. She was previously a health columnist for the Boston Globe and has contributed to Newsweek, Scientific American, and Nature, among others. She is the co-author of a college biology textbook, “Biology Now,” published by W.W. Norton. Megan received an M.S. from the Graduate Program in Science Writing at the Massachusetts Institute of Technology and a B.A. from Boston College, and worked as an educator at the Museum of Science, Boston.