"Medical professionals early in their career may face challenges in performing the appropriate patient-tailored physical exam because of their limited experience or other context-dependent factors, such as lower-resourced settings," said senior author Marc D. Succi, MD, strategic innovation leader at Mass General Brigham Innovation, associate chair of innovation and commercialization for enterprise radiology, and executive director of the Medically Engineered Solutions in Healthcare (MESH) Incubator at Mass General Brigham. "LLMs have the potential to serve as a bridge and, in parallel, support physicians and other medical professionals with physical exam techniques and enhance their diagnostic abilities at the point of care."
Succi and his colleagues prompted GPT-4 to recommend physical exam instructions based on a patient’s primary symptom, for example, a painful hip. GPT-4’s responses were then rated by three attending physicians on a scale of 1 to 5 points for accuracy, comprehensiveness, readability, and overall quality. They found that GPT-4 performed well at providing instructions, earning at least 80% of the possible points. The highest score was for "Leg Pain Upon Exertion" and the lowest was for "Lower Abdominal Pain."
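The scoring described above reduces to simple arithmetic: with three raters each scoring four criteria on a 1-to-5 scale, a response's result can be expressed as a percentage of the maximum possible points. As a rough sketch of that calculation (the function name and the example ratings below are invented for illustration, not taken from the study):

```python
def percent_of_max(ratings, scale_max=5):
    """Express rater scores as a percentage of the maximum possible points.

    ratings: one list per rater, each holding that rater's per-criterion
    scores (each between 1 and scale_max).
    """
    total = sum(sum(rater_scores) for rater_scores in ratings)
    max_total = scale_max * sum(len(rater_scores) for rater_scores in ratings)
    return 100 * total / max_total

# Hypothetical example: three raters, four criteria each
# (accuracy, comprehensiveness, readability, overall quality)
example = [
    [4, 5, 4, 5],  # rater 1
    [5, 4, 4, 4],  # rater 2
    [4, 4, 5, 5],  # rater 3
]
print(f"{percent_of_max(example):.1f}% of possible points")  # 88.3%
```

A response clearing the 80% threshold reported in the study would, under this scheme, need an average criterion score above 4 out of 5 across all raters.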
"GPT-4 performed well in many respects, yet its occasional vagueness or omissions in critical areas, like diagnostic specificity, remind us of the necessity of physician judgment to ensure comprehensive patient care," said lead author Arya Rao, a student researcher in the MESH Incubator attending Harvard Medical School.
Although GPT-4 provided detailed responses, the researchers found that it occasionally left out key instructions or was overly vague, indicating the need for a human evaluator. According to the researchers, the LLM’s strong performance suggests its potential as a tool to help fill gaps in physicians’ knowledge and aid in diagnosing medical conditions in the future.
Rao AS, et al. A Large Language Model-Guided Approach to the Focused Physical Exam. Journal of Medical Artificial Intelligence. 2024. doi: 10.21037/jmai-24-275