Algorithm Aims to Predict Bickering Among Couples

Machine learning uses data from smartphones and wearables to identify signs of relationship conflicts

A man and a woman, face to face, yelling at each other. Photo: iStockphoto

Smartphone apps could eventually predict arguments among couples and help nip them in the bud before they blow up. For the first time outside the lab, artificial intelligence has helped researchers begin looking for patterns in couples’ language and physiological signs that could help predict conflicts in relationships.

Most conflict-monitoring experiments with real-life couples have previously taken place in the controlled settings of psychology labs. Researchers with the Couple Mobile Sensing Project at the University of Southern California, in Los Angeles, took a different approach, studying couples in their normal living conditions and using wearable devices and smartphones to collect data. Their early field trial with 34 couples suggests that the combination of wearable devices and machine learning could lead to future smartphone apps that act as relationship counselors.

“In our current models, we can detect when conflict is occurring, but we haven't yet predicted conflict before it happens,” says Adela Timmons, a doctoral candidate in clinical and quantitative psychology at the University of Southern California (USC). “In our next steps, we hope to predict conflict episodes and to also send real-time prompts, for example prompting couples to take a break or do a meditation exercise, to see if we can prevent or deescalate conflict cycles in couples.”

Trying to predict something as complex as conflict among couples is no easy task in the real world. In that sense, machine learning algorithms that can automatically begin identifying patterns in data could help researchers sift through the language of couples and their different physiological indicators—such as heart rate or skin conductance response—to more accurately identify signs of brewing conflict. The USC team detailed its approach in IEEE Computer.
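The IEEE Computer paper is not reproduced here, so the snippet below is only a rough sketch of the general idea: pour window-level language and physiological features into an off-the-shelf classifier and score it with cross-validation. The feature layout, model choice, and data are illustrative assumptions, not the USC team's actual pipeline.

```python
# A minimal sketch, assuming a window-level feature table: each row is one time
# window for one couple, with physiological features (heart rate, skin
# conductance) and language features (negative-emotion rate, "you"-pronoun
# rate). The data here is random placeholder noise, so accuracy will hover
# near chance; the point is only the shape of the pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))              # placeholder feature matrix
y = rng.integers(0, 2, size=500)           # 1 = self-reported conflict window

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # accuracy on held-out folds
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```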

Before turning their off-the-shelf machine learning algorithm loose on the data, researchers had to identify which key features they should focus on during the experiment to get the best possible predictors of conflict. Past psychology studies have shown that conflict between couples is associated with physiological arousal signs such as raised heart rate and skin conductance level. Arguing couples also tend to use certain wording such as more second-person pronouns (“you”), more negative emotion words, and more certainty words such as “always” or “never,” Timmons explains.
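To make those language cues concrete, here is a minimal word-counting sketch in the spirit of that feature set. The word lists are hypothetical stand-ins rather than the dictionaries the researchers actually used.

```python
# Illustrative word-count features for conflict-related language. The word
# lists below are hypothetical examples, not the study's lexicons.
import re

SECOND_PERSON = {"you", "your", "yours", "yourself"}
NEGATIVE_EMOTION = {"angry", "hate", "annoyed", "upset", "hurt"}
CERTAINTY = {"always", "never", "definitely", "totally"}


def language_features(utterance: str) -> dict:
    words = re.findall(r"[a-z']+", utterance.lower())
    n = max(len(words), 1)  # avoid division by zero on empty input

    def rate(vocab: set) -> float:
        return sum(w in vocab for w in words) / n

    return {
        "second_person_rate": rate(SECOND_PERSON),
        "negative_emotion_rate": rate(NEGATIVE_EMOTION),
        "certainty_rate": rate(CERTAINTY),
    }


print(language_features("You always do this, and you never listen!"))
```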

The 34 couples who participated in the day-long trial were given wearable devices, including a wristband sensor to measure skin conductance, body temperature, and physical activity. A separate sensor worn on the chest measured heart rate. Each romantic partner also received a smartphone to collect audio recordings of their conversations and to allow for GPS tracking. To verify that a conflict had taken place, the smartphone would prompt couples to report whenever they had in fact been arguing. (Of the 34 couples, 19 ended up reporting a conflict during the experiment.)
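One way to picture the resulting dataset is as a series of time windows, each aligning the wearable streams with the smartphone self-reports that serve as ground-truth labels. The record below is a guess at such a schema; the field names and units are assumptions, not the project's actual data format.

```python
# Hypothetical record for one time window, fusing wearable and smartphone data
# with the self-reported conflict label. Field names and units are assumed.
from dataclasses import dataclass
from typing import Optional


@dataclass
class CoupleWindow:
    couple_id: str
    start_epoch_s: float
    heart_rate_bpm: float          # chest-worn sensor
    skin_conductance_us: float     # wristband, in microsiemens
    body_temp_c: float             # wristband
    step_count: int                # wristband activity measure
    audio_path: Optional[str]      # None if recording was switched off
    gps_lat: Optional[float]
    gps_lon: Optional[float]
    reported_conflict: bool        # smartphone prompt: "Did you argue?"


window = CoupleWindow(
    couple_id="couple_07",
    start_epoch_s=1_500_000_000.0,
    heart_rate_bpm=92.0,
    skin_conductance_us=4.1,
    body_temp_c=36.7,
    step_count=120,
    audio_path="audio/couple_07_0001.wav",
    gps_lat=34.02,
    gps_lon=-118.28,
    reported_conflict=True,
)
print(window)
```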

Early results with the small sample size were promising. The findings generally matched what past psychological studies and theories had suggested about conflict in relationships. For example, negative emotion expressed in language identified conflict with 62.3 percent accuracy on its own. When the machine learning algorithm analyzed the data from many different indicators in addition to negative emotion, it accurately identified conflict 79.3 percent of the time.
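The jump from a single cue to the combined feature set is the key pattern, and its shape can be sketched with a toy experiment: train the same classifier once on one feature column and once on all of them. Because the data below is synthetic, it will not reproduce the 62.3 and 79.3 percent figures, only the qualitative gap.

```python
# Toy comparison of a single cue versus a combined feature set. Every column is
# weakly related to the conflict label, so pooling cues improves accuracy; the
# exact numbers are meaningless placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 600
y = rng.integers(0, 2, size=n)                    # 1 = conflict window
X = rng.normal(size=(n, 5)) + 0.4 * y[:, None]    # five weak, noisy cues

single = cross_val_score(LogisticRegression(), X[:, :1], y, cv=5).mean()
combined = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(f"one cue: {single:.2f}   all cues combined: {combined:.2f}")
```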

“These models rely on machine learning,” Timmons says. “To be able to do classification experiments and say with reasonable accuracy whether conflict is occurring or not occurring really requires big data.”

That 79.3 percent accuracy is still short of what a future smartphone app would need in order to provide active counseling or similar interventions for real couples. Incorrectly flagging a conflict could cause unnecessary alarm, says Theodora Chaspari, a doctoral candidate in the Signal Analysis and Interpretation Laboratory (SAIL) at USC and a coauthor of the study. But the higher accuracy gained by combining data from many different features seems to confirm the general approach of using multiple measures to infer the mental state of couples in conflict.

The researchers also face challenges in cleaning up the real-life data collected from the couples, which was much messier than the data that can be collected in the controlled confines of a lab. They sometimes encountered missing data segments, such as when some couples turned off their smartphone audio recordings at certain times for privacy. Still, Chaspari expects a larger dataset collected from additional couples to help the machine learning algorithm smooth out some of these wrinkles.
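One common way to cope with such gaps, sketched below with pandas, is to flag the missing modality and impute a neutral value so the window is not discarded. This is an assumption about how missing segments could be handled, not a description of the team's pipeline.

```python
# Keep windows with a missing modality: flag the gap explicitly and fill the
# feature with a neutral value so the classifier can still use the rest.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "heart_rate_bpm": [88.0, 95.0, 91.0],
    "negative_emotion_rate": [0.04, np.nan, 0.12],  # NaN: audio switched off
})

df["language_missing"] = df["negative_emotion_rate"].isna().astype(int)
df["negative_emotion_rate"] = df["negative_emotion_rate"].fillna(
    df["negative_emotion_rate"].mean()              # simple mean imputation
)
print(df)
```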

Eventually, the USC team hopes to use its system to collect enough data on individual couples to identify the personal quirks in their conflict patterns—something that could go a long way toward boosting the system’s accuracy in identifying conflicts for each couple. “We now have a generalized system that works, but the challenge is how to make the system specific for a couple or certain clusters of couples,” Chaspari says.
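One plausible route from a generalized model to a couple-specific one is to warm-start from the pooled model and continue training on that couple's own windows. The sketch below uses scikit-learn's partial_fit to illustrate the idea; it is not the USC team's method.

```python
# Start from a model trained on pooled data from many couples, then keep
# updating it with one couple's own windows via incremental (online) learning.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
X_pool, y_pool = rng.normal(size=(1000, 4)), rng.integers(0, 2, size=1000)
X_couple, y_couple = rng.normal(size=(40, 4)), rng.integers(0, 2, size=40)

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_pool, y_pool, classes=np.array([0, 1]))  # generalized model
for _ in range(10):                                          # personalize
    model.partial_fit(X_couple, y_couple)
print(model.predict(X_couple[:5]))
```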

Accurate identification of conflicts could eventually enable the algorithm to predict conflicts before couples are even aware they have begun fighting. The USC team’s next steps will involve collecting additional data to boost the accuracy of its current algorithm. For example, wearable and smartphone technologies could help researchers collect data on a variety of other factors, such as phone usage, time spent on the Internet, or how much light exposure couples receive during the course of their day—all theoretically plausible, if subtle, predictors of conflict, given, for instance, how light exposure can affect individual mood.

“Part of what helps these models work well and to have high classification accuracy is to have a lot of data and a lot of features,” Timmons says. “In our next steps we’re going to be including more predictors of conflict.”


Will AI Steal Submarines’ Stealth?

Better detection will make the oceans transparent—and perhaps doom mutually assured destruction

The Virginia-class fast attack submarine USS Virginia cruises through the Mediterranean in 2010. Back then, it could effectively disappear just by diving. Photo: U.S. Navy

Submarines are valued primarily for their ability to hide. The assurance that submarines would likely survive the first missile strike in a nuclear war and thus be able to respond by launching missiles in a second strike is key to the strategy of deterrence known as mutually assured destruction. Any new technology that might render the oceans effectively transparent, making it trivial to spot lurking submarines, could thus undermine the peace of the world. For nearly a century, naval engineers have striven to develop ever-faster, ever-quieter submarines. But they have worked just as hard at advancing a wide array of radar, sonar, and other technologies designed to detect, target, and eliminate enemy submarines.

The balance seemed to turn with the emergence of nuclear-powered submarines in the early 1960s. In a 2015 study for the Center for Strategic and Budgetary Assessments, Bryan Clark, a naval specialist now at the Hudson Institute, noted that the ability of these boats to remain submerged for long periods of time made them “nearly impossible to find with radar and active sonar.” But even these stealthy submarines produce subtle, very-low-frequency noises that can be picked up from far away by networks of acoustic hydrophone arrays mounted to the seafloor.
