Xueqi Wang, Jin Tang, Yajing Feng, Cijun Tang, Xuebin Wang
{"title":"Can ChatGPT-4 perform as a competent physician based on the Chinese critical care examination?","authors":"Xueqi Wang, Jin Tang, Yajing Feng, Cijun Tang, Xuebin Wang","doi":"10.1016/j.jcrc.2024.155010","DOIUrl":null,"url":null,"abstract":"<div><h3>Background</h3><div>The use of ChatGPT in medical applications is of increasing interest. However, its efficacy in critical care medicine remains uncertain. This study aims to assess ChatGPT-4's performance in critical care examination, providing insights into its potential as a tool for clinical decision-making.</div></div><div><h3>Methods</h3><div>A dataset from the Chinese Health Professional Technical Qualification Examination for Critical Care Medicine, covering four components—fundamental knowledge, specialized knowledge, professional practical skills, and related medical knowledge—was utilized. ChatGPT-4 answered 600 questions, which were evaluated by critical care experts using a standardized rubric.</div></div><div><h3>Results</h3><div>ChatGPT-4 achieved a 73.5 % success rate, surpassing the 60 % passing threshold in four components, with the highest accuracy in fundamental knowledge (81.94 %). ChatGPT-4 performed significantly better on single-choice questions than on multiple-choice questions (76.72 % vs. 51.32 %, <em>p</em> < 0.001), while no significant difference was observed between case-based and non-case-based questions.</div></div><div><h3>Conclusion</h3><div>ChatGPT demonstrated notable strengths in critical care examination, highlighting its potential for supporting clinical decision-making, information retrieval, and medical education. However, caution is required regarding its potential to generate inaccurate responses. 
Its application in critical care must therefore be carefully supervised by medical professionals to ensure both the accuracy of the information and patient safety.</div></div>","PeriodicalId":15451,"journal":{"name":"Journal of critical care","volume":"86 ","pages":"Article 155010"},"PeriodicalIF":2.9000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of critical care","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0883944124004970","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/2 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"CRITICAL CARE MEDICINE","Score":null,"Total":0}
Abstract
Background
The use of ChatGPT in medical applications is of increasing interest. However, its efficacy in critical care medicine remains uncertain. This study aims to assess ChatGPT-4's performance on a critical care examination, providing insights into its potential as a tool for clinical decision-making.
Methods
A dataset from the Chinese Health Professional Technical Qualification Examination for Critical Care Medicine, covering four components—fundamental knowledge, specialized knowledge, professional practical skills, and related medical knowledge—was utilized. ChatGPT-4 answered 600 questions, which were evaluated by critical care experts using a standardized rubric.
Results
ChatGPT-4 achieved a 73.5 % success rate, surpassing the 60 % passing threshold in all four components, with the highest accuracy in fundamental knowledge (81.94 %). ChatGPT-4 performed significantly better on single-choice questions than on multiple-choice questions (76.72 % vs. 51.32 %, p < 0.001), while no significant difference was observed between case-based and non-case-based questions.
Conclusion
ChatGPT-4 demonstrated notable strengths on the critical care examination, highlighting its potential for supporting clinical decision-making, information retrieval, and medical education. However, caution is required regarding its potential to generate inaccurate responses. Its application in critical care must therefore be carefully supervised by medical professionals to ensure both the accuracy of the information and patient safety.
About the journal:
The Journal of Critical Care, the official publication of the World Federation of Societies of Intensive and Critical Care Medicine (WFSICCM), is a leading international, peer-reviewed journal providing original research, review articles, tutorials, and invited articles for physicians and allied health professionals involved in treating the critically ill. The Journal aims to improve patient care by furthering understanding of health systems research and its integration into clinical practice.
The Journal includes articles that discuss:
All aspects of health services research in critical care
System based practice in anesthesiology, perioperative and critical care medicine
The interface between anesthesiology, critical care medicine and pain
Integrating intraoperative management in preparation for postoperative critical care management and recovery
Optimizing patient management, i.e., exploring the interface between evidence-based principles or clinical insight into management and care of complex patients
The team approach in the OR and ICU
System-based research
Medical ethics
Technology in medicine
Seminars discussing current, state of the art, and sometimes controversial topics in anesthesiology, critical care medicine, and professional education
Residency Education.