November 03, 2020
One of the biggest challenges with slowing the spread of COVID-19 is detecting it in people who never show any symptoms.
So far, without getting tested, there is no way to know whether you have been infected. And most people won't get tested unless they start to feel sick or know they have been exposed to the virus.
All of this may change, however, with a new AI algorithm created by Massachusetts Institute of Technology researchers. The model can detect asymptomatic COVID-19 infections just by the sound of a person's cough.
A cough can speak volumes about a person's state of health, but you have to know what you are listening for. The MIT researchers said that the differences between the cough of a healthy person and that of someone with COVID-19 aren't decipherable to the human ear, but they can be picked up by AI.
In their study, AI was able to accurately identify 98.5% of coughs from people with confirmed COVID-19, including 100% of those who were asymptomatic.
The team hopes that their findings can help provide a convenient way to screen asymptomatic people for COVID-19.
For their project, they asked people to voluntarily submit forced-cough recordings through their cellphones and laptops. The team then fed the recordings into the algorithm which is based on tens of thousands of samples of coughs as well as spoken words.
The participants were also asked to fill out a survey indicating any symptoms they were experiencing, whether they had had COVID-19, and whether it was confirmed by a formal test.
Currently, they have collected 200,000 forced-cough audio samples. About 2,500 of them were submitted by people with confirmed COVID-19, including those who were asymptomatic.
The MIT team is in the process of integrating the algorithm into an app. If it receives U.S. Food and Drug Administration approval, it could become a free, noninvasive screening tool to identify people with asymptomatic COVID-19.
The users of the app could log into it daily and simply cough into their phone. The instant results would alert them to a possible infection and the need to get tested.
"The effective implementation of this group diagnostic tool could diminish the spread of the pandemic if everyone uses it before going to a classroom, a factory or a restaurant," said co-author Brian Subirana, a research scientist in MIT's Auto-ID Laboratory.
The idea for this screening tool came from previous research on other diseases.
Pre-pandemic, other research groups had trained algorithms to be able to detect pneumonia and asthma from cellphone recordings of coughs.
The MIT team that worked on the COVID-19 algorithm also developed models to detect signs of Alzheimer's disease through audio recordings. Weakened vocal cords are one sign of this type of dementia.
Subirana and his colleagues first trained two models, one to distinguish different degrees of vocal cord strength and one to detect changes in emotional state through speech.
Alzheimer's patients express frustration, or flat emotion, more frequently than people without the disease, the researchers said.
They then trained another model on a database of coughs to pick up changes in lung and respiratory performance.
These three models, combined with a fourth that detects muscular degradation, proved to be an effective way to identify Alzheimer's using only audio recordings.
The AI model for COVID-19 uses the same four biomarkers — vocal cord strength, sentiment, lung and respiratory performance, and muscular degradation. The COVID-19 model was just tweaked slightly to look for patterns specific to the viral infection.
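To make the ensemble idea concrete — four biomarker sub-models whose outputs feed a single decision — here is a minimal sketch. All names, weights, and the threshold are illustrative assumptions, not the MIT team's actual implementation (which trains neural networks end to end on audio).

```python
from dataclasses import dataclass

@dataclass
class BiomarkerScores:
    """Outputs of four hypothetical sub-models, each an anomaly score in [0, 1]."""
    vocal_cord: float  # vocal cord strength
    sentiment: float   # emotional state inferred from speech
    lung: float        # lung and respiratory performance
    muscular: float    # muscular degradation

def screen(scores: BiomarkerScores, threshold: float = 0.5) -> bool:
    """Flag a cough recording when the mean anomaly score crosses a threshold.

    A real system would learn how to combine the sub-model outputs; a
    simple mean is used here purely to illustrate the ensemble structure.
    """
    mean = (scores.vocal_cord + scores.sentiment +
            scores.lung + scores.muscular) / 4
    return mean >= threshold

# Example: three elevated biomarkers push the mean (0.65) over the threshold.
print(screen(BiomarkerScores(0.8, 0.7, 0.9, 0.2)))  # True
```

In practice the combination would be learned from labeled data rather than fixed, but the structure — several specialized acoustic models feeding one screening decision — mirrors the approach the article describes.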
Subirana and his team's findings are published in the IEEE Journal of Engineering in Medicine and Biology.
Other organizations like Cambridge University, Carnegie Mellon University and Novoic, a UK health start-up, are working on similar screening tools.