AI System Helps Doctors Identify Patients at Risk for Suicide

A new study from Vanderbilt University Medical Center shows that clinical alerts driven by artificial intelligence (AI) can help doctors identify patients at risk for suicide, potentially improving prevention efforts in routine medical settings.

A team led by Colin Walsh, MD, MA, associate professor of Biomedical Informatics, Medicine and Psychiatry, tested whether their AI system, called the Vanderbilt Suicide Attempt and Ideation Likelihood model (VSAIL), could effectively prompt doctors in three neurology clinics at VUMC to screen patients for suicide risk during regular clinic visits.

The study, reported in JAMA Network Open, compared two approaches: automatic pop-up alerts that interrupted the doctor's workflow, and a more passive system that simply displayed risk information in the patient's electronic chart.

The study found that the interruptive alerts were far more effective, leading doctors to conduct suicide risk assessments after 42% of interruptive alerts, compared with just 4% of the passive alerts.

"Most people who die by suicide have seen a health care provider in the year before their death, often for reasons unrelated to mental health," Walsh said. "But universal screening isn't practical in every setting. We developed VSAIL to help identify high-risk patients and prompt focused screening conversations."

Suicide rates have been rising in the U.S. for a generation; suicide is estimated to claim the lives of 14.2 of every 100,000 Americans each year, making it the nation's 11th leading cause of death. Studies have shown that 77% of people who die by suicide have contact with primary care providers in the year before their death.

Calls to improve risk screening have led researchers to explore ways to identify patients most in need of assessment. The VSAIL model, which Walsh's team developed at Vanderbilt, analyzes routine information from electronic health records to calculate a patient's 30-day risk of suicide attempt. In earlier prospective testing, where VUMC patient records were flagged but no alerts were fired, the model proved effective at identifying high-risk patients, with one in 23 individuals flagged by the system later reporting suicidal thoughts.

In the new study, when patients identified as high-risk by VSAIL came for appointments at Vanderbilt's neurology clinics, their doctors were randomly assigned to receive either the interruptive or the non-interruptive alert. The research focused on neurology clinics because certain neurological conditions are associated with increased suicide risk.

The researchers suggested that similar systems could be tested in other medical settings.

"The automated system flagged only about 8% of all patient visits for screening," Walsh said. "This selective approach makes it more feasible for busy clinics to implement suicide prevention efforts."

The study involved 7,732 patient visits over six months, prompting 596 total screening alerts. During the 30-day follow-up period, a review of VUMC health records found that no patients in either randomized alert group had experienced episodes of suicidal ideation or attempted suicide. While the interruptive alerts were more effective at prompting screenings, they could also contribute to "alert fatigue," in which doctors become overwhelmed by frequent automated notifications. The researchers noted that future studies should examine this concern.
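As a quick sanity check, the reported alert counts line up with the roughly 8% flagging rate Walsh describes. A minimal sketch using only the figures quoted in the study (the variable names are illustrative, not from the paper):

```python
# Figures reported in the study
total_visits = 7732      # patient visits over six months
screening_alerts = 596   # total screening alerts fired

# Fraction of visits flagged for screening by the VSAIL-driven alerts
flag_rate = screening_alerts / total_visits
print(f"{flag_rate:.1%}")  # 7.7%, i.e. "about 8%" of visits
```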

"Health care systems need to balance the effectiveness of interruptive alerts against their potential downsides," Walsh said. "But these results suggest that automated risk detection combined with well-designed alerts could help us identify more patients who need suicide prevention services."

Walsh CG, Ripperger MA, Novak L, Reale C, Anders S, Spann A, Kolli J, Robinson K, Chen Q, Isaacs D, Acosta LMY, Phibbs F, Fielstein E, Wilimitis D, Musacchio Schafer K, Hilton R, Albert D, Shelton J, Stroh J, Stead WW, Johnson KB.
Risk Model-Guided Clinical Decision Support for Suicide Screening: A Randomized Clinical Trial.
JAMA Netw Open. 2025 Jan 2;8(1):e2452371. doi: 10.1001/jamanetworkopen.2024.52371
