A study of ChatGPT found the artificial intelligence (AI) tool answered less than half of the questions correctly in an exam resource commonly used by trainees preparing for ophthalmology certification.
The study, published in JAMA Ophthalmology and led by St Michael's Hospital in Toronto, Canada, found ChatGPT correctly answered 46% of questions when the test was initially conducted in January 2023. When researchers repeated the same test one month later, the bot's score improved by more than 10 percentage points.
The researchers noted that the potential of AI in medicine and exam preparation has garnered excitement since ChatGPT became publicly available in November 2022. It has also raised concerns about the potential for incorrect information and cheating in academia. ChatGPT is free, available to anyone with an internet connection, and works in a conversational manner.
“ChatGPT may have an increasing role in medical education and clinical practice over time; however, it is important to stress the responsible use of such AI systems,” said Dr Rajeev Muni, principal investigator of the study and a researcher at the Li Ka Shing Knowledge Institute at St Michael’s.
“ChatGPT, as used in this investigation, did not answer a sufficient number of multiple-choice questions correctly for it to provide substantial assistance in preparing for board certification at this time.”
Researchers used a dataset of practice multiple-choice questions from the free trial of OphthoQuestions, a common resource for board certification exam preparation. To ensure ChatGPT’s responses were not influenced by concurrent conversations, prior entries and conversations were cleared before each question was input, and a new ChatGPT account was used. Questions that used images or videos were excluded because ChatGPT accepts only text input.
Of 125 text-based multiple-choice questions, ChatGPT answered 58 (46%) correctly when the study was first conducted in January 2023. Researchers repeated the analysis in February 2023, and performance improved to 58%.
“ChatGPT is an artificial intelligence system that has tremendous promise in medical education. Though it provided incorrect answers to board certification questions in ophthalmology about half the time, we anticipate that ChatGPT’s body of knowledge will rapidly evolve,” said Dr Marko Popovic, a co-author of the study and a resident physician in the Department of Ophthalmology and Vision Sciences at the University of Toronto.
ChatGPT closely matched how trainees answer questions, selecting the same multiple-choice response as the most common answer provided by ophthalmology trainees 44% of the time. ChatGPT selected the response that was least popular among ophthalmology trainees 11% of the time, second least popular 18% of the time, and second most popular 22% of the time.
“ChatGPT performed most accurately on general medicine questions, answering 79% of them correctly. On the other hand, its accuracy was considerably lower on questions for ophthalmology subspecialties,” said Mr Andrew Mihalache, lead author of the study and an undergraduate student at Western University.
“For instance, the chatbot answered 20% of questions correctly on oculoplastics and 0% correctly in the subspecialty of retina. The accuracy of ChatGPT will likely improve most in niche subspecialties in the future.”
More reading
Bringing telehealth and artificial intelligence into real-world ophthalmology practice
Decoding artificial intelligence in eyecare