Creating Exam Questions with ChatGPT

For the study, the UKB (Universitätsklinikum Bonn) researchers created two sets of 25 multiple-choice questions (MCQs), each with five possible answers, one of which was correct. The first set was written by an experienced medical lecturer; the second was created by ChatGPT. A total of 161 students answered all questions in random order. For each question, students also indicated whether they thought it had been created by a human or by ChatGPT.

Matthias Laupichler, one of the study authors and research associate at the Institute for Medical Didactics at the UKB, explains: "We were surprised that the difficulty of human-generated and ChatGPT-generated questions was virtually identical. Even more surprising for us, however, was that the students were unable to correctly identify the origin of the question in almost half of the cases. Although the results obviously need to be replicated in further studies, the automated generation of exam questions using ChatGPT and co. appears to be a promising tool for medical studies."

His colleague and co-author of the study Johanna Rother adds: "Lecturers can use ChatGPT to generate ideas for exam questions, which are then checked and, if necessary, revised by the lecturers. In our opinion, however, students in particular benefit from the automated generation of medical practice questions, as it has long been known that self-testing one's own knowledge is very beneficial for learning."

Tobias Raupach, Director of the Institute of Medical Didactics, continues: "We knew from previous studies that language models such as ChatGPT can answer the questions in medical state examinations. We have now been able to show for the first time that the software can also be used to write new questions that hardly differ from those of experienced teachers."

Tizian Kaiser, who is studying human medicine in his seventh semester, comments: "When working on the mock exam, I was quite surprised at how difficult it was for me to tell the questions apart. My approach was to differentiate between the questions based on their length, the complexity of their sentence structure and the difficulty of their content. But to be honest, in some situations I simply had to guess, and the evaluation showed that I was barely able to differentiate between them. This leads me to the conviction that a meaningful assessment of knowledge, as in this exam, is also possible using questions generated exclusively by the AI."

He is convinced that ChatGPT has great potential for student learning, as it allows students to revisit what they have learned in different ways again and again. "There is the option of being quizzed by the AI on predefined topics, having mock exams designed or simulating oral exams in writing. The repetition of the material is thus tailored to the exam concept and the training possibilities are endless," says the study participant, while also qualifying: "However, I would only use ChatGPT for this purpose and not beforehand in the learning process, in which the study topics have to be worked through and summarized. Because while ChatGPT is excellent for repetition, I fear that errors can occur when preparing learning content. I wouldn't notice these errors without a prior overview of the topic."

It is known from other studies that regular testing - even and especially without grading - helps students retain learning content more durably. Such tests can now be created with little effort. However, the current findings should first be replicated in other contexts (e.g., other subjects, semesters, and countries), and it should be investigated whether ChatGPT can also write question formats other than the multiple-choice questions commonly used in medicine.

Laupichler MC, Rother JF, Grunwald Kadow IC, Ahmadi S, Raupach T.
Large Language Models in Medical Education: Comparing ChatGPT- to Human-Generated Exam Questions.
Acad Med. 2023 Dec 28. doi: 10.1097/ACM.0000000000005626
