
Communications of the ACM

ACM News

Spotting High-Risk Behavior Online



Posts on social media could conceal clues about mental health problems or high-risk behaviors.

Credit: Lara Antal/Verywell

Social media is used widely to share experiences with friends, or to join like-minded communities to discuss common interests. Yet people's posts also could conceal clues about mental health problems or high-risk behaviors that, if recognized early enough, could help save lives.

"Depression, anxiety, suicidal behaviors and some disordered eating behaviors are difficult to detect in person and it's unlikely that people are going to go to a clinic because of how stigmatized these conditions are," says Stevie Chancellor, a researcher at Northwestern University in Evanston, IL. "If we could use social media data as a way to understand these behaviors, perhaps we could use that information to assist them."

Chancellor and other researchers are investigating how machine learning could be harnessed to identify signs of dangerous behavior on social media. Around half of the world's population, roughly three billion people, now use social media platforms including Facebook, Twitter, and Reddit, so there is a wealth of data available. "It allows us to target a lot more people at a greater level than we've ever been able to before to understand these populations," says Benjamin Ricard, a Ph.D. student at the Geisel School of Medicine at Dartmouth College in Hanover, NH.

Some research examines what people write on social media, attempting to predict risky behavior from the language used. Another approach involves looking at information related to posts, such as how much a person shares, at what time of day, and whether that individual's posting habits have changed. "A common symptom of depression is insomnia, so if your posting history over time starts shifting later, that might indicate that you are struggling with insomnia, which could relate to depression," says Chancellor.
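To make that idea concrete, here is a minimal Python sketch of how a posting-time drift signal might be computed. It is not the researchers' actual code; the timestamps, the two comparison windows, and the use of a circular mean over posting hours are all illustrative assumptions.

```python
# Hypothetical sketch: measure whether a user's posts are drifting later in
# the day. A circular mean is used so 23:00 and 01:00 average to roughly
# midnight rather than noon.
import numpy as np
import pandas as pd

def mean_posting_hour(timestamps: pd.Series) -> float:
    """Circular mean of posting hours, in [0, 24)."""
    radians = timestamps.dt.hour / 24.0 * 2 * np.pi
    mean_angle = np.arctan2(np.sin(radians).mean(), np.cos(radians).mean())
    return (mean_angle % (2 * np.pi)) / (2 * np.pi) * 24

# Invented post history: early-evening posts drifting toward late night.
posts = pd.Series(pd.to_datetime([
    "2021-01-03 19:10", "2021-01-15 20:05", "2021-01-28 21:30",
    "2021-03-02 23:45", "2021-03-17 01:20", "2021-03-29 02:05",
]))

early = mean_posting_hour(posts[posts < "2021-02-01"])
late = mean_posting_hour(posts[posts >= "2021-02-01"])
print(f"mean posting hour, January: {early:.1f}; March: {late:.1f}")
# A sustained shift like this could feed a model as one insomnia-related signal.
```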

A related strategy analyzes what a person's friends and followers post. Signs of depression, for example, may be hard to gauge solely from a sufferer's own posts. Past research has shown that people suffering from depression may not always be in a low mood, but may instead experience a wider range of highs and lows compared to healthy individuals. "Depressed people don't post depressive things all the time, so it's not so easy," says Ricard.

In recent work, Ricard and his colleagues examined posts on Instagram to investigate whether this wider community could help predict depression. Participants completed a clinically validated depression questionnaire to assess their mental health. Then, using information from the feeds, such as the language used in captions and comments, emojis, and the number of 'likes', the team built several machine learning models. One model was trained solely on data from an individual user, a second combined data generated by a user with data from their wider community, and a third focused only on posts from friends and followers.
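A schematic Python sketch of that three-way comparison might look like the following. The features, labels, and model family are random stand-ins, not the study's actual Instagram data or methods; the point is only the experimental design of training on user, combined, and community feature sets.

```python
# Illustrative sketch: compare models trained on a user's own features,
# on community (friends/followers) features, and on both combined.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
user_feats = rng.normal(size=(n, 5))       # stand-in for a user's own posts
community_feats = rng.normal(size=(n, 5))  # stand-in for friends/followers
depressed = (rng.random(n) < 0.3).astype(int)  # stand-in questionnaire labels

for name, X in [
    ("user only", user_feats),
    ("user + community", np.hstack([user_feats, community_feats])),
    ("community only", community_feats),
]:
    score = cross_val_score(LogisticRegression(max_iter=1000),
                            X, depressed, cv=5).mean()
    print(f"{name}: mean accuracy {score:.2f}")
```

With real features, the interesting question is whether the combined model beats the user-only model, which would indicate the community data carries signal of its own.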

Based on their results, Ricard and his team think models that leverage information from friends and followers can help predict depression. Further, the community data provided new information that didn't overlap with what could be gleaned from an individual user's own posts.

Munmun De Choudhury, an associate professor at the Georgia Institute of Technology, finds the usefulness of community-generated data revealed in this work interesting, but says it isn't clear why this data helps detect depression, or whether it will aid early screening for the condition. "(Community signals) likely represent social support in response to already prevalent depression in the poster, rather than something about the poster's mental state," she says.

De Choudhury and her colleagues were among the first to study whether a person's mental health state could be assessed from their social media posts using machine learning. In early work from 2013, they found that changes in behavior, such as a decrease in social engagement, could be gleaned from a person's posts in the year preceding a diagnosis of depression.

Targeting specific support groups on social media also can provide insight into a condition or behavior. In one large-scale study, Chancellor and her colleagues were interested in learning about alternative treatments used by recovering opioid addicts, such as untested drugs and medications prescribed by doctors for other conditions. The use of these substances is controversial, since they can have dangerous side effects. However, some people claim they help control withdrawal symptoms, and hence assist recovery. "Our hope is that individuals' self-described experiences can inform new clinical experiments and trials to better evaluate these substances," says Chancellor.

With help from a substance-abuse researcher, Chancellor and her colleagues identified sub-communities on Reddit likely to include recovering opioid users, rather than recreational users or current addicts, by searching for terms, such as 'non-opioid pain relievers', that would relate to them. Machine learning techniques were then used to pick out recovery-related posts in those groups.
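An illustrative sketch of those two steps might look as follows in Python. The recovery terms, example posts, and tiny training set are invented placeholders, and a simple bag-of-words classifier stands in for whatever models the study actually used.

```python
# Hypothetical two-stage pipeline: keyword-based selection of candidate
# communities, then a text classifier to pick out recovery-related posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

RECOVERY_TERMS = {"non-opioid pain relievers", "tapering", "withdrawal"}

def candidate_community(description: str) -> bool:
    """Stage 1: flag communities whose description mentions recovery terms."""
    text = description.lower()
    return any(term in text for term in RECOVERY_TERMS)

# Stage 2: placeholder training set for a recovery-post classifier.
posts = [
    "day 12 of tapering, the withdrawal headaches are easing",
    "looking for non-opioid pain relievers that actually work",
    "anyone watch the game last night?",
    "selling two tickets for saturday",
]
labels = [1, 1, 0, 0]  # 1 = recovery-related, 0 = not

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(posts, labels)
print(clf.predict(["day three off opioids, the withdrawal is rough"]))
```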

Deep learning also was used to identify the specific unconventional substances those users were taking. "It's a cool use of machine learning and language processing to extract new information that clinicians didn't know," says Chancellor.

Part of the success of their models may come down to having addiction experts involved. Chancellor is a big proponent of a human-centered approach to machine learning, where computer scientists partner with experts in the field (in this case, substance-abuse experts) when working on problems that have an impact on people. From screening the data collected to giving feedback on how algorithms are being trained, such experts can help ensure algorithms are making relevant predictions. "Human-centeredness starts to try and put the human as the final outcome of the system," says Chancellor. "The question is, are we actually being respectful of what people need and what they want when they're struggling?"

In addition, interpretable machine learning models are preferable, so medical professionals can better understand and trust the results; the social implications of predictions also need to be considered.
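One common way to make such models inspectable, sketched below for a linear classifier like the one above, is to surface which features push a prediction up or down. The feature names, data, and labels here are invented; this illustrates the general idea, not the researchers' tooling.

```python
# Toy interpretability sketch: rank features of a linear model by weight.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["late-night posting", "negative words", "likes received"]
X = np.array([[0.9, 0.8, 0.1], [0.8, 0.9, 0.2],
              [0.1, 0.2, 0.9], [0.2, 0.1, 0.8]])
y = np.array([1, 1, 0, 0])  # invented labels: 1 = at risk

model = LogisticRegression().fit(X, y)
for name, weight in sorted(zip(feature_names, model.coef_[0]),
                           key=lambda p: -abs(p[1])):
    print(f"{name}: {weight:+.2f}")  # signed weights a clinician can read
```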

Chancellor thinks extracting information from social media could be useful during triage at clinics, since it could provide clues about the severity of a person's condition. It also could help point people towards less-risky behaviors via online nudges, although in some cases more serious interventions could be needed. She thinks it will be tricky to determine who should intervene (the police, a friend, or a family member), and whether social networks would be blamed if risky behavior was missed and a person harmed themselves, for example. "You get into this weird social responsibility question that I think is actually going to be the sticking point of applying these systems," says Chancellor.

However, follow-up work on the opioid study is showing that machine learning analyses of social media also can have less-obvious applications. Chancellor and her colleagues are now developing a way to retrieve, from recovering addicts' posts, the dosages of the alternative treatments they use. The names of the substances, as well as the quantities used, could then be handed over to medical researchers to evaluate whether they have potential as new approved therapies.
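As a rough illustration of what dosage retrieval involves, a first-pass extractor might pull amounts and units out of free text, as in the Python sketch below. The pattern and the example post are invented, and a real system would need far more robust language processing than a regular expression.

```python
# Toy sketch: pull (amount, unit) dosage mentions out of a post's text.
import re

DOSE_PATTERN = re.compile(r"(\d+(?:\.\d+)?)\s*(mg|mcg|g|ml)\b", re.IGNORECASE)

post = "I take 300 mg in the morning and 1.5 g of the powder before bed."
for amount, unit in DOSE_PATTERN.findall(post):
    print(amount, unit.lower())
# prints 300 mg, then 1.5 g; pairing doses with substance names is the harder step
```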

"I'm totally hooked and very invested in the mental health space," says Chancellor. "I see a lot of promise because of people's use of natural conversations and their honesty on social media."

Sandrine Ceurstemont is a freelance science writer based in London, U.K.


 
