With concerns mounting that artificial intelligence (AI) could have a profound impact on traditional teaching in academic settings, many question the role of ChatGPT, a sophisticated AI language model that can generate content that mimics human conversation.

ChatGPT offers the potential to assist or take over the student writing process, with the capability of authoring everything from college admissions essays to term papers. But can it also be used to aid the prodigious, sometimes daunting learning process of the medical school curriculum?

Researchers from Boston University Chobanian & Avedisian School of Medicine used ChatGPT to create multiple-choice questions, along with explanations of correct and incorrect choices, for a graduate and medical school immunology class taught by faculty in the school's department of pathology & laboratory medicine. They found the AI language model wrote acceptable questions but failed to produce appropriate answers.


"Unfortunately, ChatGPT only generated correct questions and answers with explanations in 32% of the questions (19 out of 60 individual questions). In many instances, ChatGPT failed to provide an explanation for the incorrect answers. An additional 25% of the questions had answers that were either wrong or misleading," explained corresponding author Daniel Remick, MD, professor of pathology & laboratory medicine at the school.

According to the researchers, students appreciate practice exams they can use to study for their actual exams. These practice exams have even greater utility when explanations are included, since students learn the rationale behind the correct answer and understand why the other choices are wrong.

Since ChatGPT generated questions with vague or confusing question stems and poor explanations of the answer choices, this study tool may not be entirely viable. "These types of misleading questions may create further confusion about the topics, especially since the students have not gained expertise and may not be able to find errors in the questions. However, despite the issues we encountered, instructors may still find ChatGPT useful for creating practice exams with explanations, with the caveat that extensive editing may be required," added Remick.



