With concerns mounting that artificial intelligence (AI) could have a profound impact on traditional teaching in academic settings, many question the role of ChatGPT, a sophisticated AI language model that can generate content that mimics human conversation. 

ChatGPT offers the potential to assist or take over the student writing process with the capability of authoring everything from college admissions essays to term papers. But, can it also be used to aid the prodigious, sometimes daunting learning process in the medical school curriculum?

Researchers from Boston University Chobanian & Avedisian School of Medicine used ChatGPT to create multiple-choice questions, along with explanations of the correct and incorrect choices, for a graduate and medical school immunology course taught by faculty in the school's department of pathology & laboratory medicine. They found that the AI language model wrote acceptable questions but failed to produce appropriate answers.


“Unfortunately, ChatGPT only generated correct questions and answers with explanations in 32% of the questions (19 out of 60 individual questions). In many instances, ChatGPT failed to provide an explanation for the incorrect answers. An additional 25% of the questions had answers that were either wrong or misleading,” explained corresponding author Daniel Remick, MD, professor of pathology & laboratory medicine at the school.

According to the researchers, students appreciate practice exams they can use to study for their actual exams. These practice exams have even greater utility when explanations for the answers are included, since students learn the rationale for the correct answer and see why the incorrect answers are wrong.

Since ChatGPT generated questions with vague or confusing question stems and poor explanations of the answer choices, this study tool may not be entirely viable. “These types of misleading questions may create further confusion about the topics, especially since the students have not gained expertise and may not be able to find errors in the questions. However, despite the issues we encountered, instructors may still find ChatGPT useful for creating practice exams with explanations – with the caveat that extensive editing may be required,” added Remick.



