Performance of a Large Language Model on Japanese Emergency Medicine Board Certification Examinations.
Yutaka Igarashi, Kyoichi Nakahara, Tatsuya Norii, Nodoka Miyake, Takashi Tagami, Shoji Yokobori
Journal of Nippon Medical School, pages 155-161. Published 2024-05-21 (Epub 2024-03-02). DOI: 10.1272/jnms.JNMS.2024_91-205
Background: Emergency physicians need a broad range of knowledge and skills to address critical medical, traumatic, and environmental conditions. Artificial intelligence (AI), including large language models (LLMs), has potential applications in healthcare settings; however, the performance of LLMs in emergency medicine remains unclear.
Methods: To evaluate the reliability of information provided by ChatGPT, an LLM was given the questions set by the Japanese Association of Acute Medicine in its board certification examinations over a period of 5 years (2018-2022) and programmed to answer them twice. Statistical analysis was used to assess agreement of the two responses.
Results: The LLM provided answers to 465 of the 475 text-based questions and achieved an overall correct response rate of 62.3%. For questions without images, the rate of correct answers was 65.9%. For questions with images that were not explained to the LLM, the rate of correct answers was only 52.0%. The annual rates of correct answers to questions without images ranged from 56.3% to 78.8%. Accuracy was better for scenario-based questions (69.1%) than for stand-alone questions (62.1%). Agreement between the two responses was substantial (kappa = 0.70). Factual errors accounted for 82% of the incorrectly answered questions.
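The agreement statistic reported above (kappa = 0.70, "substantial" on the conventional Landis-Koch scale) is Cohen's kappa between the model's two answer runs. A minimal sketch of that calculation, using made-up toy answers rather than the study's data:

```python
# Hypothetical sketch: Cohen's kappa for agreement between two runs of the
# same model on the same multiple-choice questions. The answer lists below
# are invented for illustration; they are not the study's responses.
from collections import Counter

def cohens_kappa(run1, run2):
    """Cohen's kappa between two paired label sequences."""
    assert len(run1) == len(run2) and len(run1) > 0
    n = len(run1)
    # Observed agreement: fraction of questions answered identically twice.
    observed = sum(a == b for a, b in zip(run1, run2)) / n
    # Expected agreement by chance, from each run's marginal label frequencies.
    c1, c2 = Counter(run1), Counter(run2)
    expected = sum(c1[label] * c2[label] for label in set(run1) | set(run2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example: 8 questions, the two runs disagree on two of them.
first  = ["a", "b", "c", "a", "d", "b", "a", "c"]
second = ["a", "b", "c", "a", "c", "b", "a", "d"]
print(round(cohens_kappa(first, second), 2))  # → 0.65
```

Kappa corrects raw agreement for chance: two runs that pick frequent options at random would already match often, so kappa = 0.70 indicates reproducibility well beyond that baseline.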
Conclusion: An LLM performed satisfactorily on an emergency medicine board certification examination in Japanese and without images. However, factual errors in the responses highlight the need for physician oversight when using LLMs.
Journal introduction:
The international effort to understand, treat, and control disease involves clinicians and researchers from many medical and biological science disciplines. The Journal of Nippon Medical School (JNMS) is the official journal of the Medical Association of Nippon Medical School and is dedicated to furthering international exchange of medical science experience and opinion. It provides an international forum for researchers in the fields of basic and clinical medicine to introduce, discuss, and exchange their novel achievements in biomedical science, and a platform for the worldwide dissemination and steering of biomedical knowledge for the benefit of human health and welfare. Properly reasoned discussions, disciplined by appropriate references to existing bodies of knowledge or aimed at motivating the creation of such knowledge, are the aim of the journal.