Can artificial intelligence help prevent suicides?

Daniel Druhora and Amy Blumenthal | December 12, 2019 

New tool from the Center for Artificial Intelligence in Society at USC aims to prevent suicide among youth

According to the CDC, the suicide rate among individuals aged 10–24 increased 56% between 2007 and 2017. And compared with the general population, more than half of people experiencing homelessness have had thoughts of suicide or have attempted suicide, the National Health Care for the Homeless Council reported.

Phebe Vayanos, assistant professor of Industrial and Systems Engineering and Computer Science at the USC Viterbi School of Engineering, has been enlisting the help of a powerful ally, artificial intelligence, to help mitigate the risk of suicide.

“In this research, we wanted to find ways to mitigate suicidal ideation and death among youth. Our idea was to leverage real-life social network information to build a support network of strategically positioned individuals who can ‘watch out’ for their friends and refer them to help as needed,” Vayanos said.

Vayanos, an associate director of USC’s Center for Artificial Intelligence in Society (CAIS), and her team have spent the last couple of years designing an algorithm that identifies who in a given real-life social group would be the best people to train as “gatekeepers”: individuals able to recognize the warning signs of suicide and respond appropriately.

Vayanos and Ph.D. candidate Aida Rahmattalabi, the lead author of the study “Exploring Algorithmic Fairness in Robust Graph Covering Problems,” investigated the potential of social connections such as friends, relatives, and acquaintances to help mitigate the risk of suicide. Their paper will be presented at the Thirty-third Conference on Neural Information Processing Systems (NeurIPS) in December 2019.

“We want to ensure that a maximum number of people are being watched out for, taking into account resource limitations and the uncertainties of open-world deployment. For example, if some of the people in the network are not able to make it to the gatekeeper training, we still want to have a robust support network,” Vayanos said.
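To see what “robust” means in practice, consider a toy version of the problem. The sketch below is ours, not the researchers’ code: on an invented five-person friendship network, it measures how many people still have a trained friend in the worst case when some chosen trainees miss the training.

```python
# Toy sketch of the "robust" requirement; the network and names are invented.
from itertools import combinations

def coverage(graph, trained):
    """Count the nodes that have at least one trained friend."""
    return sum(1 for v in graph if graph[v] & trained)

def worst_case_coverage(graph, gatekeepers, max_dropouts):
    """Lowest coverage if up to `max_dropouts` trainees miss the training."""
    worst = coverage(graph, gatekeepers)
    for k in range(1, max_dropouts + 1):
        for absent in combinations(gatekeepers, k):
            worst = min(worst, coverage(graph, gatekeepers - set(absent)))
    return worst

# node -> set of friends (undirected, so entries mirror each other)
graph = {
    "a": {"b", "c"}, "b": {"a", "d"}, "c": {"a", "d"},
    "d": {"b", "c", "e"}, "e": {"d"},
}
print(coverage(graph, {"b", "c"}))                # 2: a and d each have a trained friend
print(worst_case_coverage(graph, {"b", "c"}, 1))  # 2: either trainee alone still covers both
```

Brute-forcing dropout scenarios like this only works on toy examples; handling such worst cases at the scale of real networks is the harder problem the paper takes on.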

For this study, Vayanos and Rahmattalabi looked at the web of social relationships of young people experiencing homelessness in Los Angeles, given that 1 in 2 youth who are homeless have considered suicide.

“Our algorithm can improve the efficiency of suicide prevention trainings for this particularly vulnerable population,” Vayanos said.

For Vayanos, efficiency translates into a model and algorithm that stretch limited resources as far as they will go. In this scenario, the limited resource is the human gatekeepers: the algorithm plans where in the network these individuals should be positioned, and who should be trained, so that they can best watch out for others.
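As a rough illustration of that strategic placement, a simple greedy heuristic already shows why position in the network matters. This is only a sketch on a made-up network; the study’s actual method solves the much harder robust formulation.

```python
# Greedy sketch of "strategic positioning" under a training budget.
# Illustrative only: the network is invented and this heuristic is not
# the robust optimization method used in the study.

def greedy_gatekeepers(graph, budget):
    """Pick `budget` trainees so as many nodes as possible gain a trained friend."""
    trained, covered = set(), set()
    for _ in range(budget):
        candidates = [v for v in graph if v not in trained]
        if not candidates:
            break
        # Choose the node whose friends add the most new coverage.
        best = max(candidates, key=lambda v: len(graph[v] - covered))
        trained.add(best)
        covered |= graph[best]
    return trained, covered

graph = {
    "a": {"b", "c"}, "b": {"a", "d", "e"}, "c": {"a"},
    "d": {"b"}, "e": {"b", "f"}, "f": {"e"},
}
trained, covered = greedy_gatekeepers(graph, budget=2)
print(trained)  # {'b', 'a'}: two well-placed trainees...
print(covered)  # {'a', 'b', 'c', 'd', 'e'}: ...give five of six youths a trained friend
```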

“If you are strategic,” says Vayanos, “you can cover more people and you can have a more robust network of support.”

“Through this study, we can also help inform policymakers who are making decisions regarding funding for suicide prevention initiatives; for example, by sharing with them the minimum number of people who need to receive gatekeeper training to ensure that all youth have at least one trained friend who can watch out for them,” Vayanos said.
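That minimum-number question is itself a covering problem: find the smallest set of trainees such that every youth is either trained or has a trained friend. On a toy network it can be answered by exhaustive search, as in the hypothetical sketch below; on real networks the problem is NP-hard, which is part of why the researchers’ optimization machinery is needed.

```python
# Exact brute-force sketch of the "minimum number of trainees" question.
# Fine for a toy network; intractable at real-network scale. Invented data.
from itertools import combinations

def min_gatekeepers(graph):
    """Smallest set such that every node is trained or has a trained friend."""
    nodes = list(graph)
    for size in range(1, len(nodes) + 1):
        for chosen in map(set, combinations(nodes, size)):
            if all(v in chosen or graph[v] & chosen for v in nodes):
                return chosen
    return set(nodes)

graph = {
    "a": {"b", "c"}, "b": {"a", "d", "e"}, "c": {"a"},
    "d": {"b"}, "e": {"b", "f"}, "f": {"e"},
}
print(min_gatekeepers(graph))  # {'a', 'b', 'e'}: three trainings suffice here
```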

“Our aim is to protect as many youth as possible,” said lead author Rahmattalabi.

An important goal when deploying this A.I. system is to ensure fairness and transparency.

“We often work in environments that have limited resources, and this tends to disproportionately affect historically marginalized and vulnerable populations,” said study co-author Anthony Fulginiti, an assistant professor of social work at the University of Denver. Fulginiti received his Ph.D. from USC, where he began this research with Eric Rice, founding director of USC CAIS.

“This algorithm can help us find a subset of people in a social network that gives us the best chance that youth will be connected to someone who has been trained when dealing with resource constraints and other uncertainties,” said Fulginiti.

This work is especially important for vulnerable populations, the researchers say, and for youth experiencing homelessness in particular.

“One of the surprising things we discovered in our experiments on social networks of homeless youth is that existing A.I. algorithms, if deployed without customization, result in discriminatory outcomes, with up to a 68% difference in protection rate across races. The goal is to make this algorithm as fair as possible and to adjust it to protect those groups that are worse off,” Rahmattalabi said.
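The disparity Rahmattalabi describes can be made concrete with a per-group measurement: each group’s protection rate, the fraction of its members with at least one trained friend, and the gap between groups. The sketch below uses invented group labels and an invented network.

```python
# Hypothetical sketch of measuring protection-rate disparity across groups.

def protection_rates(graph, trained, group_of):
    """Fraction of each group with at least one trained friend."""
    totals, protected = {}, {}
    for v, friends in graph.items():
        g = group_of[v]
        totals[g] = totals.get(g, 0) + 1
        protected[g] = protected.get(g, 0) + (1 if friends & trained else 0)
    return {g: protected[g] / totals[g] for g in totals}

# A four-person chain split into two demographic groups (invented).
graph = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
group_of = {"a": "group1", "b": "group1", "c": "group2", "d": "group2"}

rates = protection_rates(graph, trained={"a"}, group_of=group_of)
print(rates)                                      # {'group1': 0.5, 'group2': 0.0}
print(max(rates.values()) - min(rates.values()))  # 0.5: the gap a fairer plan would shrink
```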

The USC CAIS researchers want to ensure that “gatekeeper” coverage of the more vulnerable groups is as high as possible. Their algorithm reduced the bias in coverage in real-life social networks of homeless youth by as much as 20%.

Said Rahmattalabi: “Not only does our solution advance the field of computer science by addressing a computationally hard problem, but it also pushes the boundaries of social work and risk management science by bringing computational methods into the design and deployment of prevention programs.”

The research is supported by the National Science Foundation and the Army Research Office. Collaborators include Eric Rice, associate professor at the USC Suzanne Dworak-Peck School of Social Work and founding director of the USC Center for Artificial Intelligence in Society; Anthony Fulginiti, assistant professor at the University of Denver Graduate School of Social Work; Milind Tambe, professor of computer science at Harvard University; Bryan Wilder, Ph.D. student in the Department of Computer Science at Harvard; and Amulya Yadav (’18 Ph.D.), assistant professor of computer science at Penn State University.
