{"title":"The performance of large language models in intercollegiate Membership of the Royal College of Surgeons examination.","authors":"J Chan, T Dong, G D Angelini","doi":"10.1308/rcsann.2024.0023","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Large language models (LLM), such as Chat Generative Pre-trained Transformer (ChatGPT) and Bard utilise deep learning algorithms that have been trained on a massive data set of text and code to generate human-like responses. Several studies have demonstrated satisfactory performance on postgraduate examinations, including the United States Medical Licensing Examination. We aimed to evaluate artificial intelligence performance in Part A of the intercollegiate Membership of the Royal College of Surgeons (MRCS) examination.</p><p><strong>Methods: </strong>The MRCS mock examination from Pastest, a commonly used question bank for examinees, was used to assess the performance of three LLMs: GPT-3.5, GPT 4.0 and Bard. Three hundred mock questions were input into the three LLMs, and the responses provided by the LLMs were recorded and analysed. The pass mark was set at 70%.</p><p><strong>Results: </strong>The overall accuracies for GPT-3.5, GPT 4.0 and Bard were 67.33%, 71.67% and 65.67%, respectively (<i>p</i> = 0.27). The performances of GPT-3.5, GPT 4.0 and Bard in Applied Basic Sciences were 68.89%, 72.78% and 63.33% (<i>p</i> = 0.15), respectively. Furthermore, the three LLMs obtained correct answers in 65.00%, 70.00% and 69.17% of the Principles of Surgery in General questions (<i>p</i> = 0.67). There were no differences in performance in the overall and subcategories among the three LLMs.</p><p><strong>Conclusions: </strong>Our findings demonstrated satisfactory performance for all three LLMs in the MRCS Part A examination, with GPT 4.0 the only LLM that achieved the pass mark set.</p>","PeriodicalId":8088,"journal":{"name":"Annals of the Royal College of Surgeons of England","volume":null,"pages":null},"PeriodicalIF":1.1000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11528401/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annals of the Royal College of Surgeons of England","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1308/rcsann.2024.0023","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/3/6 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"SURGERY","Score":null,"Total":0}
Citations: 0
Abstract
Introduction: Large language models (LLMs), such as Chat Generative Pre-trained Transformer (ChatGPT) and Bard, utilise deep learning algorithms trained on massive datasets of text and code to generate human-like responses. Several studies have demonstrated satisfactory LLM performance on postgraduate examinations, including the United States Medical Licensing Examination. We aimed to evaluate artificial intelligence performance in Part A of the intercollegiate Membership of the Royal College of Surgeons (MRCS) examination.
Methods: The MRCS mock examination from Pastest, a question bank commonly used by examinees, was used to assess the performance of three LLMs: GPT-3.5, GPT-4.0 and Bard. Three hundred mock questions were input into each of the three LLMs, and the responses were recorded and analysed. The pass mark was set at 70%.
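To illustrate the scoring procedure described above, the sketch below marks one model's responses against an answer key and checks the result against the 70% pass mark. It is a minimal sketch, not the authors' actual pipeline: the question data and the `ask_model` callable standing in for each LLM interface are hypothetical stand-ins.

```python
from typing import Callable

PASS_MARK = 0.70  # pass threshold stated in the study


def score_model(ask_model: Callable[[str], str],
                questions: list[dict]) -> tuple[float, bool]:
    """Mark one model's answers and test against the pass mark.

    `questions` holds {"stem": ..., "options": ..., "answer": ...} dicts;
    `ask_model` wraps whichever LLM is under test and returns the letter
    of the chosen option (both names are illustrative assumptions).
    """
    correct = 0
    for q in questions:
        prompt = (f"{q['stem']}\nOptions: {q['options']}\n"
                  "Answer with a single letter.")
        response = ask_model(prompt).strip().upper()
        if response.startswith(q["answer"]):
            correct += 1
    accuracy = correct / len(questions)
    return accuracy, accuracy >= PASS_MARK
```

With 300 questions, the 0.70 threshold corresponds to 210 correct answers; GPT-4.0's reported 71.67% (215 correct) is therefore the only result that clears it.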
Results: The overall accuracies of GPT-3.5, GPT-4.0 and Bard were 67.33%, 71.67% and 65.67%, respectively (p = 0.27). In the Applied Basic Sciences questions, GPT-3.5, GPT-4.0 and Bard scored 68.89%, 72.78% and 63.33%, respectively (p = 0.15). In the Principles of Surgery in General questions, the three LLMs answered 65.00%, 70.00% and 69.17% correctly (p = 0.67). There were no statistically significant differences among the three LLMs, either overall or within the subcategories.
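The abstract does not name the statistical test, but the reported overall p = 0.27 is consistent with a chi-squared test of homogeneity on the 3×2 table of correct/incorrect counts implied by the accuracies (202, 215 and 197 correct out of 300 for GPT-3.5, GPT-4.0 and Bard, respectively). A hedged reconstruction using SciPy, assuming that test was used:

```python
from scipy.stats import chi2_contingency

# Correct/incorrect counts reconstructed from the reported accuracies
# (67.33%, 71.67% and 65.67% of 300 questions each).
observed = [
    [202, 98],   # GPT-3.5
    [215, 85],   # GPT-4.0 (the only model above the 70% pass mark)
    [197, 103],  # Bard
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2f}")  # p ≈ 0.27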
Conclusions: Our findings demonstrated satisfactory performance by all three LLMs in the MRCS Part A examination, with GPT-4.0 the only LLM to achieve the pass mark.
About the journal
The Annals of The Royal College of Surgeons of England is the official scholarly research journal of the Royal College of Surgeons and is published eight times a year in January, February, March, April, May, July, September and November.
The main aim of the journal is to publish high-quality, peer-reviewed papers that relate to all branches of surgery. The Annals also includes letters and comments, a regular technical section, controversial topics, CORESS feedback and book reviews. The editorial board is composed of experts from all the surgical specialties.