Introduction
Artificial intelligence models can provide textual answers to a wide range of questions, including medical questions. Recently, these models have also gained the ability to interpret images and answer image-based questions, including questions about radiological images. The main objective of this study is to analyse the performance of ChatGPT-4o compared with that of third-year medical students in a Radiology and Applied Physics in Medicine practical exam. We also aim to assess ChatGPT's capacity to interpret medical images and answer related questions.
Materials and method
Thirty-three students sat an exam of 10 questions on radiological and nuclear medicine images. The same exam, in the same format, was given to ChatGPT (GPT-4o) without prior training. The exam responses were marked by professors who were blinded to whether each exam had been completed by a student or by ChatGPT. The Mann-Whitney U test was used to compare the results of the two groups.
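For illustration only, a minimal sketch of the group comparison described above, assuming the per-exam scores of each group are available as simple lists (the score values below are hypothetical placeholders, not the study data):

```python
# Hedged sketch of the two-group comparison; scores are hypothetical,
# not the data reported in this study.
from scipy.stats import mannwhitneyu

student_scores = [7.5, 8.0, 6.5, 9.0, 7.0, 8.5, 7.8]  # hypothetical
chatgpt_scores = [6.0, 6.5, 5.5, 6.2]                  # hypothetical

# Two-sided Mann-Whitney U test: non-parametric comparison of two
# independent groups, the test named in the methods above.
stat, p_value = mannwhitneyu(student_scores, chatgpt_scores,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```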
Results
The students outperformed ChatGPT on 8 of the 10 questions. The students' average final score was 7.78, while ChatGPT's was 6.05, placing it in the 9th percentile of the students' grade distribution.
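As a hedged sketch, a percentile rank like the one reported above can be computed by locating a single score within the students' grade distribution; the grade list below is a hypothetical placeholder, not the study data:

```python
# Hedged sketch: percentile rank of one score within a grade distribution.
# The student grades are hypothetical, not the distribution from the study.
from scipy.stats import percentileofscore

student_grades = [6.2, 6.8, 7.1, 7.4, 7.6, 7.8, 8.0, 8.3, 8.7, 9.1]  # hypothetical
chatgpt_grade = 6.05

pct = percentileofscore(student_grades, chatgpt_grade, kind="rank")
print(f"ChatGPT's grade falls at the {pct:.0f}th percentile of this sample")
```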
Discussion
ChatGPT demonstrated competent performance in several areas, but the students achieved higher grades, especially in image interpretation and contextualised clinical reasoning, where their training and practical experience play an essential role. Artificial intelligence models still need to improve before they can match human performance in interpreting radiological images and integrating clinical information.
