Smart Speaker Listens for Audible Signs of Cardiac Arrest

This AI system detects unique gasping sounds that occur when the heart stops beating

3 min read
Image of Alexa sitting on a table next to a bed as someone sleeps.
Photo: Sarah McQuate/University of Washington

When a person’s heart malfunctions and suddenly stops beating, death can occur within minutes—unless someone intervenes. A bystander administering CPR right away can triple a person’s chances of surviving a cardiac arrest.

Last July, we described a smartwatch designed to detect cardiac arrest and summon help. Now, a team at the University of Washington has developed a fully contactless AI system that listens for the telltale sound of agonal breathing—a distinctive guttural gasping made by about 50 percent of cardiac arrest patients.

The smart speaker system, described today in the journal npj Digital Medicine, detected agonal breathing events 97 percent of the time with almost no false alarms in a proof-of-concept study.

The team imagines using the tool—which can run on Amazon’s Alexa or Google Home, among other devices—to passively monitor bedrooms for the sound of agonal breathing and, if detected, set off an alarm.

“Cardiac arrest is one of those events where seconds count,” says study leader Jacob Sunshine, a physician and researcher at the UW School of Medicine and equity holder in a spinout company commercializing the device. He envisions the system alerting someone locally to get to the patient fast, as well as directly calling 911 to get the patient to the hospital quickly.

A recent study in Japan suggests that the bedroom is one of the most common places cardiac arrests happen at home. The team trained a machine learning algorithm to recognize agonal breathing in bedrooms using a unique dataset of real instances captured from 911 calls to Seattle’s Emergency Medical Services. Bystanders often put phones up to patients’ mouths so a dispatcher can determine if the patient needs CPR.

The researchers extracted 236 clips of agonal breathing from 911 calls between 2009 and 2017, then played these clips at varying distances from the smart speaker and mixed in interfering sounds typically found in a bedroom—such as pets, cars honking, and air conditioners. They also included sound samples of snoring and sleep apnea recorded in sleep labs. (They did not test how the algorithm responds to the gasping, moaning sounds of sex. We asked.)
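The paper's training code isn't reproduced here, but the noise-mixing step the team describes—overlaying bedroom sounds on agonal-breathing clips—can be sketched roughly as audio augmentation at a chosen signal-to-noise ratio. This is a minimal illustration, not the authors' implementation; the function name and SNR parameterization are assumptions for the sketch.

```python
import numpy as np

def mix_clip(agonal: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Overlay a background-noise clip on an agonal-breathing clip,
    scaling the noise to hit a target signal-to-noise ratio (in dB)."""
    # Tile or trim the noise so it spans the whole clip.
    reps = int(np.ceil(len(agonal) / len(noise)))
    noise = np.tile(noise, reps)[: len(agonal)]
    # Scale the noise so the mixture has the requested SNR.
    sig_power = np.mean(agonal ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(sig_power / (noise_power * 10 ** (snr_db / 10)))
    return agonal + scale * noise
```

Training on many such mixtures, at different distances and noise levels, is what lets a classifier stay robust in a real bedroom rather than only on clean 911 audio.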

The system correctly detected agonal breathing 97 percent of the time, from up to 6 meters away.

Next, the team had to be sure the algorithm wouldn’t have too many false positives—incorrectly classifying other sounds as agonal breathing. Initially, the algorithm incorrectly categorized sounds from the sleep lab as agonal breathing 0.1 percent of the time and sounds of volunteers sleeping in their own homes 0.2 percent of the time.

But when the algorithm was tweaked to classify a sound as agonal breathing only when detected three times, 10 to 20 seconds apart, the false positive rate dropped to zero for all sounds. Plus, “In a real deployment of this system, we’d envision having a warning system to give you a chance to cancel any false alarm,” says first author Justin Chan, a PhD student at UW.
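The three-detection rule described above is essentially a debounce filter: a single positive classification is ignored, and an alarm fires only after three detections spaced 10 to 20 seconds apart. A minimal sketch of that logic follows—the class name and exact reset behavior are assumptions, since the paper's code isn't shown here.

```python
from dataclasses import dataclass, field

@dataclass
class AgonalAlarm:
    """Fire an alarm only after `required` positive detections,
    each 10-20 seconds after the previous one."""
    min_gap: float = 10.0
    max_gap: float = 20.0
    required: int = 3
    hits: list = field(default_factory=list)

    def update(self, timestamp: float, detected: bool) -> bool:
        """Record one classifier output; return True when the alarm fires."""
        if not detected:
            return False
        if not self.hits:
            self.hits = [timestamp]
        else:
            gap = timestamp - self.hits[-1]
            if gap < self.min_gap:
                return False              # too close: likely the same event
            if gap > self.max_gap:
                self.hits = [timestamp]   # chain broken: restart the count
            else:
                self.hits.append(timestamp)
        if len(self.hits) >= self.required:
            self.hits = []
            return True
        return False
```

Requiring correctly spaced repeats is what drove the false positive rate to zero in the study: an isolated snore or honk may fool the classifier once, but rarely three times on the right schedule.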

Chan says the privacy concerns of such a system are no greater than normally using a smart speaker in a room, listening for a wake word. “Our system is designed in such a way that it only requires local processing, so no data is sent over the Internet or the cloud,” says Chan. “In that sense, it is really privacy preserving.”

In addition to the bedroom, the team imagines the system being used in health facilities such as elder care homes. “A lot of people there are at risk for cardiac events, and what you’d really like is a contactless way to monitor people all the time,” says Chan. This, he adds, would be a better solution than a wearable they’d have to remember to recharge and put on regularly.


Are You Ready for Workplace Brain Scanning?

Extracting and using brain data will make workers happier and more productive, backers say

11 min read
A photo collage showing a man wearing an EEG headset while looking at a computer screen.
Illustration: Nadia Radic

Get ready: Neurotechnology is coming to the workplace. Neural sensors are now reliable and affordable enough to support commercial pilot projects that extract productivity-enhancing data from workers’ brains. These projects aren’t confined to specialized workplaces; they’re also happening in offices, factories, farms, and airports. The companies and people behind these neurotech devices are certain that they will improve our lives. But there are serious questions about whether work should be organized around certain functions of the brain, rather than the person as a whole.

To be clear, the kind of neurotech that’s currently available is nowhere close to reading minds. Sensors detect electrical activity across different areas of the brain, and the patterns in that activity can be broadly correlated with different feelings or physiological responses, such as stress, focus, or a reaction to external stimuli. These data can be exploited to make workers more efficient—and, proponents of the technology say, to make them happier. Two of the most interesting innovators in this field are the Israel-based startup InnerEye, which aims to give workers superhuman abilities, and Emotiv, a Silicon Valley neurotech company that’s bringing a brain-tracking wearable to office workers, including those working remotely.
