Are You Eligible for a Clinical Trial? ChatGPT Can Find Out

A new study in the academic journal Machine Learning: Health finds that ChatGPT can accelerate patient screening for clinical trials, showing promise for reducing delays and improving trial success rates.

Researchers at UT Southwestern Medical Center used ChatGPT to assess whether patients were eligible to take part in clinical trials, and were able to identify suitable candidates within minutes.

Clinical trials, which test new medications and procedures in volunteers, are vital for developing and validating new treatments. But many trials struggle to enrol enough participants. According to a recent study, up to 20% of National Cancer Institute (NCI)-affiliated trials fail because of low enrolment. This not only inflates costs and delays results, but also undermines the reliability of the evidence behind new treatments.

Currently, screening patients for trials is a manual process. Researchers must review each patient’s medical records to determine if they meet eligibility criteria, which takes around 40 minutes per patient. With limited staff and resources, this process is often too slow to keep up with demand.

Part of the problem is that valuable patient information contained in electronic health records (EHRs) is often buried in unstructured text, such as doctors’ notes, which traditional machine learning software struggles to decipher. As a result, many eligible patients are overlooked because there simply isn’t enough capacity to review every case. This contributes to low enrolment rates, trial delays and even cancellations, ultimately slowing down access to new therapies.

To address this problem, the researchers explored ways of speeding up the screening process using ChatGPT. They used GPT-3.5 and GPT-4 to analyse data from 74 patients to see whether they qualified for a head and neck cancer trial.

Three ways of prompting the AI were tested (a rough illustrative sketch follows the list):

  • Structured Output (SO): asking for answers in a set format.
  • Chain of Thought (CoT): asking the model to explain its reasoning.
  • Self-Discover (SD): letting the model figure out what to look for.
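
To make the Structured Output idea concrete, here is a rough sketch of how such a prompt could be sent to GPT-4 through the OpenAI Python client. This is an illustration only, not the study's actual pipeline; the model name, prompt wording, patient note and JSON format are assumptions made for the example.

    # Illustrative Structured Output (SO) style eligibility check.
    # Not the study's code: prompt wording, criterion, patient note and
    # JSON format are assumptions made for this sketch.
    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    criterion = "18 years or older with histologically confirmed head and neck cancer"
    patient_note = "62-year-old male, biopsy-confirmed squamous cell carcinoma of the larynx."

    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "You screen patients for clinical trial eligibility. "
                        'Reply only with JSON: {"eligible": true|false, "reason": "..."}'},
            {"role": "user",
             "content": f"Criterion: {criterion}\n\nPatient record:\n{patient_note}"},
        ],
    )

    result = json.loads(response.choices[0].message.content)
    print(result["eligible"], "-", result["reason"])

In practice, each eligibility criterion would be checked in this way for every patient, with a human reviewer confirming the model's answers.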

The results were promising. GPT-4 was more accurate than GPT-3.5, though slightly slower and more expensive. Screening times ranged from 1.4 to 12.4 minutes per patient, with costs between $0.02 and $0.27.
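
As a rough back-of-the-envelope comparison (our own illustration, not a figure from the paper), those per-patient numbers suggest a substantial saving over the roughly 40-minute manual review described above:

    # Back-of-the-envelope totals for the 74-patient cohort, using the
    # per-patient times and costs reported above (illustration only).
    n_patients = 74
    manual_hours = n_patients * 40 / 60           # about 49 hours of manual review
    llm_hours = (n_patients * 1.4 / 60,           # about 1.7 hours at the fastest setting
                 n_patients * 12.4 / 60)          # about 15.3 hours at the slowest setting
    api_cost = (n_patients * 0.02,                # about $1.50 in API fees
                n_patients * 0.27)                # about $20 in API fees
    print(manual_hours, llm_hours, api_cost)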

"LLMs like GPT-4 can help screen patients for clinical trials, especially when using flexible criteria," said Dr. Mike Dohopolski, lead author of the study. "They’re not perfect, especially when all rules must be met, but they can save time and support human reviewers."

This research highlights the potential for AI to support faster, more efficient clinical trials, bringing new treatments to patients sooner.

The study is one of the first articles published in IOP Publishing's Machine Learning series™, the world’s first open access journal series dedicated to the application and development of machine learning (ML) and artificial intelligence (AI) for the sciences.

The same research team have also developed a method that allows clinicians to adjust patients' radiation therapy in real time whilst they are still on the treatment table. Using a deep learning system called GeoDL, the AI delivers precise 3D dose estimates from CT scans and treatment data in just 35 milliseconds. This could make adaptive radiotherapy faster and more efficient in real clinical settings.

Jacob Beattie et al.
ChatGPT augmented clinical trial screening.
Mach. Learn.: Health, 2025. doi: 10.1088/3049-477X/adbd47
