New Framework for Using AI in Health Care Considers Medical Knowledge, Practices, Procedures, Values

Health care organizations are looking to artificial intelligence (AI) tools to improve patient care, but their translation into clinical settings has been inconsistent, in part because evaluating AI in health care remains challenging. In a new article, researchers propose a framework for using AI that includes practical guidance for applying values and that incorporates not just the tool's properties but the systems surrounding its use.

The article was written by researchers at Carnegie Mellon University, The Hospital for Sick Children, the Dalla Lana School of Public Health, Columbia University, and the University of Toronto. It is published in Patterns.

"Regulatory guidelines and institutional approaches have focused narrowly on the performance of AI tools, neglecting knowledge, practices, and procedures necessary to integrate the model within the larger social systems of medical practice," explains Alex John London, K&L Gates Professor of Ethics and Computational Technologies at Carnegie Mellon, who coauthored the article. "Tools are not neutral - they reflect our values - so how they work reflects the people, processes, and environments in which they are put to work."

London is also Director of Carnegie Mellon's Center for Ethics and Policy and Chief Ethicist at Carnegie Mellon's Block Center for Technology and Society as well as a faculty member in CMU's Department of Philosophy.

London and his coauthors advocate for a conceptual shift in which AI tools are viewed as parts of a larger "intervention ensemble," a set of knowledge, practices, and procedures that are necessary to deliver care to patients. In previous work with other colleagues, London has applied this concept to pharmaceuticals and to autonomous vehicles. The approach treats AI tools as "sociotechnical systems," and the authors' proposed framework seeks to advance the responsible integration of AI systems into health care.

Previous work in this area has been largely descriptive, explaining how AI systems interact with human systems. The framework proposed by London and his colleagues is proactive, providing guidance to designers, funders, and users on how to ensure that AI systems can be integrated into workflows with the greatest potential to help patients. Their approach can also inform regulation and institutional oversight, as well as the responsible and ethical appraisal, evaluation, and use of AI tools. To illustrate the framework, the authors apply it to AI systems developed for diagnosing more than mild diabetic retinopathy.

"Only a small majority of models evaluated through clinical trials have shown a net benefit," says Melissa McCradden, a Bioethicist at the Hospital for Sick Children and Assistant Professor of Clinical and Public Health at the Dalla Lana School of Public Health, who coauthored the article. "We hope our proposed framework lends precision to evaluation and interests regulatory bodies exploring the kinds of evidence needed to support the oversight of AI systems."

Melissa D. McCradden, Shalmali Joshi, James A. Anderson, Alex John London. A normative framework for artificial intelligence as a sociotechnical system in healthcare. Patterns, 2023. doi: 10.1016/j.patter.2023.100864
