ChatGPT Candidate Performs Well in Obstetrics and Gynaecology Clinical Examination

In a study examining how ChatGPT (Chat Generative Pre-trained Transformer) would fare in medical specialist examinations compared with human candidates, without any additional training, the artificial intelligence chatbot outperformed human candidates in a mock Obstetrics and Gynaecology (O&G) specialist clinical examination, which is used to assess candidates' eligibility to become O&G specialists. The mock examination results also showed that ChatGPT achieved high scores in empathetic communication, information gathering and clinical reasoning.

The tabulated results showed that ChatGPT attained a higher average score of 77.2%, compared with the human candidates' average of 73.7%. ChatGPT took an average of 2 minutes and 54 seconds to complete each station, well within the stipulated 10 minutes. Despite this speed, ChatGPT did not outperform every individual in each cohort. To minimise bias, the responses of all three candidates were submitted to the examination panel with ChatGPT's identity concealed.

For the study, the team selected seven stations from objective structured clinical examinations (OSCEs) that had been run in actual mock examinations over the previous two years, all similar in scope and difficulty. Stations requiring visual interpretation were excluded, to accommodate ChatGPT's limitations at the time of the study. Each station has multiple layers of evolving questions based on the initial data presented and the candidate's subsequent responses. The OSCE is a criterion-based assessment in which each candidate's clinical competencies are evaluated across a series of circuit stations in a simulated environment.

Given 10 minutes to complete each station, the candidate is introduced to an unfamiliar clinical scenario, together with the information needed to make an informed clinical decision. The candidate is expected to articulate a care plan while demonstrating competencies such as communication, information gathering, application of clinical knowledge and patient safety within the time limit. The stations were presented in an identical format and in the same order to two human candidates, Candidates A and B, and to ChatGPT, designated Candidate C.

The study team from the Department of O&G at the Yong Loo Lin School of Medicine, National University of Singapore (NUS Medicine), led by Associate Professor Mahesh Choolani, Head of the Department of O&G, also analysed the answers and found that ChatGPT scored very well in the empathetic communication domain. It skilfully and rapidly generated factually accurate and contextually relevant answers to evolving clinical questions based on unfamiliar data, a feat that would take a person of average intelligence more than 10 years of clinical training to match in examinations of this complexity.

It is remarkable that generative AI, still in its infancy, can quickly consolidate and interpret large volumes of general content and organise it into coherent, concise, conversational responses, something that does not come naturally to non-native English speakers or to candidates under examination stress. Despite best efforts to blind the examination panel, examiners were generally, though not always, able to identify ChatGPT's responses.

The answers from the human candidates and ChatGPT were transcribed verbatim and assessed by 14 trained clinician examiners. Although English was used throughout, the human candidates' responses were liberally infused with Singlish, including words borrowed from Malay, Tamil and Chinese dialects. This intonation and vocabulary are familiar and endearing to Singaporeans and long-term residents of Singapore, and can serve as a bridge to build closeness and trust while easing patients' nervousness, in contrast to ChatGPT's more polished, scripted answers. This lack of local cultural knowledge is one of ChatGPT's major limitations, alongside its lack of up-to-date medical references and data, which can cause it to hallucinate and at times produce irrelevant or incorrect answers and conclusions.

Crucially, the study results also revealed that ChatGPT is less able to handle questions involving multiple scenario changes that require open interpretation. Stations with such shifting scenarios would demand additional training in context-specific medical knowledge on highly specialised topics. These are manageable for a highly trained human candidate who has cultivated the higher-level discernment and flexible reasoning needed to resolve the ambiguities in these questions. ChatGPT was found to outperform the human candidates in several knowledge areas, including labour management, gynaecological oncology and postoperative care, topics and stations largely centred on standard, protocol-driven decision-making, but not in highly contextual situations.

"The arrival and increased use of ChatGPT has proven that it can be a viable resource in guiding medical education, possibly provide adjunct support for clinical care in real time, and even support the monitoring of medical treatment in patients. In an era where accurate knowledge and information is instantly accessible, and these capabilities could be embedded within appropriate context by Generative AI in the foreseeable future, the need for future generations of medical doctors to clearly demonstrate the value and importance of the human touch is now saliently obvious. As doctors and medical educators, we need to strongly emphasise and exemplify the use of soft skills, compassionate communication and knowledge application in medical training and clinical care," said Associate Professor Mahesh Choolani.

Li SW, Kemp MW, Logan SJS, Dimri PS, Singh N, Mattar CNZ, Dashraath P, Ramlal H, Mahyuddin AP, Kanayan S, Carter SWD, Thain SPT, Fee EL, Illanes SE, Choolani MA; National University of Singapore Obstetrics and Gynecology Artificial Intelligence (NUS OBGYN-AI) Collaborative Group.
ChatGPT outscored human candidates in a virtual objective structured clinical examination in obstetrics and gynecology.
Am J Obstet Gynecol. 2023 Apr 22:S0002-9378(23)00251-X. doi: 10.1016/j.ajog.2023.04.020
