ChatGPT Candidate Performs Well in Obstetrics and Gynaecology Clinical Examination

In a study examining how ChatGPT (the Chat Generative Pre-Trained Transformer) would fare in medical specialist examinations against human candidates, without any additional training, the artificial-intelligence chatbot outperformed human candidates in a mock Obstetrics and Gynaecology (O&G) specialist clinical examination, which is used to assess candidates' eligibility to become O&G specialists. ChatGPT also achieved high scores in empathetic communication, information gathering and clinical reasoning.

The tabulated results showed that ChatGPT attained an average score of 77.2%, higher than the human candidates' average of 73.7%. ChatGPT also took an average of 2 minutes and 54 seconds to complete each station, well within the stipulated 10 minutes. Despite this speed, ChatGPT did not outperform every individual in each cohort. To minimise bias, the responses of all three candidates were submitted to the examination panel with ChatGPT's identity concealed.

In the study, the team selected seven objective structured clinical examination (OSCE) stations that had been used in actual mock examinations over the previous two years, all similar in scope and difficulty. No stations requiring visual interpretation were included, to accommodate ChatGPT's limitations at the time of the study. Each station has multiple layers of evolving questions based on the initial data presented and the candidate's subsequent responses. The OSCE is a criterion-based assessment in which each candidate's clinical competencies are assessed across a series of circuit stations in a simulated environment.

Given 10 minutes to complete each station, the candidate is introduced to an unfamiliar clinical scenario, along with the information needed to make an informed clinical decision. The candidate is expected to articulate a care plan while demonstrating competencies such as communication, information gathering, application of clinical knowledge and patient safety within the time limit. The stations were presented in an identical format and in the same order to the two human candidates (Candidates A and B) and ChatGPT (Candidate C).

The study team from the Department of O&G at the Yong Loo Lin School of Medicine (NUS Medicine), led by Associate Professor Mahesh Choolani, Head of the Department of O&G, also analysed the answers and found that ChatGPT scored very well in the empathetic communication domain. It skilfully and rapidly generated factually accurate and contextually relevant answers to evolving clinical questions based on unfamiliar data, a feat that would ordinarily require more than 10 years of clinical training for a person of average intelligence to understand such highly complex examination questions and answer them appropriately.

It is remarkable that generative AI, still in its infancy, can quickly consolidate and interpret large volumes of general content and organise it into coherent, concise, conversational responses, something that does not come naturally to non-native English speakers or candidates under examination stress. Despite best efforts to blind the examination panel, examiners were generally, though not always, able to identify ChatGPT's responses.

The answers from the human candidates and ChatGPT were transcribed verbatim and assessed by 14 trained clinician examiners. Although English was used throughout, the human candidates' responses were liberally infused with Singlish and loanwords from Malay, Tamil and Chinese dialects. This intonation and vocabulary are familiar and endearing to Singaporeans and long-term residents of Singapore, and such communication can build closeness and trust while easing patients' nervousness, in contrast to ChatGPT's more articulately scripted answers. This lack of local cultural knowledge is one of ChatGPT's major limitations, alongside its lack of up-to-date medical references and data, which can cause it to hallucinate and at times produce irrelevant or incorrect answers and conclusions.

Crucially, the study results also revealed that ChatGPT is less able to handle questions in which the scenario changes multiple times and requires open interpretation. Stations with such shifting scenarios demand additional training in context-specific medical knowledge on highly specialised topics; a highly trained human candidate, with the discernment and flexible reasoning needed to tackle these ambiguities, would manage them better. ChatGPT did outperform human candidates in several knowledge areas, including labour management, gynaecologic oncology and postoperative care, stations that largely focused on standard, protocol-driven decision-making, but not in highly contextual situations.

"The arrival and increased use of ChatGPT has proven that it can be a viable resource in guiding medical education, possibly provide adjunct support for clinical care in real time, and even support the monitoring of medical treatment in patients. In an era where accurate knowledge and information is instantly accessible, and these capabilities could be embedded within appropriate context by Generative AI in the foreseeable future, the need for future generations of medical doctors to clearly demonstrate the value and importance of the human touch is now saliently obvious. As doctors and medical educators, we need to strongly emphasise and exemplify the use of soft skills, compassionate communication and knowledge application in medical training and clinical care," said Associate Professor Mahesh Choolani.

Li SW, Kemp MW, Logan SJS, Dimri PS, Singh N, Mattar CNZ, Dashraath P, Ramlal H, Mahyuddin AP, Kanayan S, Carter SWD, Thain SPT, Fee EL, Illanes SE, Choolani MA; National University of Singapore Obstetrics and Gynecology Artificial Intelligence (NUS OBGYN-AI) Collaborative Group.
ChatGPT outscored human candidates in a virtual objective structured clinical examination in obstetrics and gynecology.
Am J Obstet Gynecol. 2023 Apr 22:S0002-9378(23)00251-X. doi: 10.1016/j.ajog.2023.04.020
