A Comparison of ChatGPT and Expert Consensus Statements on Surgical Site Infection Prevention in High-Risk Paediatric Spine Surgery.

Journal of Pediatric Orthopaedics · IF 1.4 · JCR Q3 (Orthopedics) · Pub Date: 2024-08-30 · DOI: 10.1097/BPO.0000000000002781
Aaron N Chester, Shay I Mandler
{"title":"高风险儿科脊柱手术中手术部位感染预防的 ChatGPT 和专家共识声明比较。","authors":"Aaron N Chester, Shay I Mandler","doi":"10.1097/BPO.0000000000002781","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) represents and exciting shift for orthopaedic surgery, where its role is rapidly evolving. ChatGPT is an AI language model which is preeminent among those leading the mass consumer uptake of AI. Artamonov and colleagues compared ChatGPT with orthopaedic surgeons when considering the diagnosis and management of anterior shoulder instability; they found a limited correlation between them. This study aims to further explore how reliable ChatGPT is compared with orthopaedic surgeons.</p><p><strong>Methods: </strong>Twenty-three statements were extracted from the article \"Building Consensus: Development of a Best Practice Guideline (BPG) for Surgical Site Infection (SSI) Prevention in High-risk Pediatric Spine Surgery\" by Vitale and colleagues. These included 14 consensus statements and an additional 9 statements that did not reach consensus. ChatGPT was asked to state the extent to which it agreed with each statement.</p><p><strong>Results: </strong>ChatGPT appeared to demonstrate a fair correlation with most expert responses to the 14 consensus statements. It appeared less emphatic than the experts, often stating that it \"agreed\" with a statement, where the most frequent response from experts was \"strongly agree.\" It reached the opposite conclusion to the majority of experts on a single consensus statement regarding the use of ultraviolet light in the operating theatre; it may have been that ChatGPT was drawing from more up to date literature that was published subsequent to the consensus statement.</p><p><strong>Conclusions: </strong>This study demonstrated a reasonable correlation between ChatGPT and orthopaedic surgeons when providing simple responses. ChatGPT's function may be limited when asked to provide more complex answers. This study adds to a growing body of discussion and evidence exploring AI and whether its function is reliable enough to enter the high-accountability world of health care.</p><p><strong>Clinical relevance: </strong>This article is of high clinical relevance to orthopaedic surgery given the rapidly emerging applications of AI. This creates a need to understand the level to which AI can function in the clinical setting and the risks that would entail.</p>","PeriodicalId":16945,"journal":{"name":"Journal of Pediatric Orthopaedics","volume":null,"pages":null},"PeriodicalIF":1.4000,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Comparison of ChatGPT and Expert Consensus Statements on Surgical Site Infection Prevention in High-Risk Paediatric Spine Surgery.\",\"authors\":\"Aaron N Chester, Shay I Mandler\",\"doi\":\"10.1097/BPO.0000000000002781\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Artificial intelligence (AI) represents and exciting shift for orthopaedic surgery, where its role is rapidly evolving. ChatGPT is an AI language model which is preeminent among those leading the mass consumer uptake of AI. Artamonov and colleagues compared ChatGPT with orthopaedic surgeons when considering the diagnosis and management of anterior shoulder instability; they found a limited correlation between them. 
This study aims to further explore how reliable ChatGPT is compared with orthopaedic surgeons.</p><p><strong>Methods: </strong>Twenty-three statements were extracted from the article \\\"Building Consensus: Development of a Best Practice Guideline (BPG) for Surgical Site Infection (SSI) Prevention in High-risk Pediatric Spine Surgery\\\" by Vitale and colleagues. These included 14 consensus statements and an additional 9 statements that did not reach consensus. ChatGPT was asked to state the extent to which it agreed with each statement.</p><p><strong>Results: </strong>ChatGPT appeared to demonstrate a fair correlation with most expert responses to the 14 consensus statements. It appeared less emphatic than the experts, often stating that it \\\"agreed\\\" with a statement, where the most frequent response from experts was \\\"strongly agree.\\\" It reached the opposite conclusion to the majority of experts on a single consensus statement regarding the use of ultraviolet light in the operating theatre; it may have been that ChatGPT was drawing from more up to date literature that was published subsequent to the consensus statement.</p><p><strong>Conclusions: </strong>This study demonstrated a reasonable correlation between ChatGPT and orthopaedic surgeons when providing simple responses. ChatGPT's function may be limited when asked to provide more complex answers. This study adds to a growing body of discussion and evidence exploring AI and whether its function is reliable enough to enter the high-accountability world of health care.</p><p><strong>Clinical relevance: </strong>This article is of high clinical relevance to orthopaedic surgery given the rapidly emerging applications of AI. This creates a need to understand the level to which AI can function in the clinical setting and the risks that would entail.</p>\",\"PeriodicalId\":16945,\"journal\":{\"name\":\"Journal of Pediatric Orthopaedics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2024-08-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Pediatric Orthopaedics\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1097/BPO.0000000000002781\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ORTHOPEDICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Pediatric Orthopaedics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1097/BPO.0000000000002781","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ORTHOPEDICS","Score":null,"Total":0}
Citations: 0

Abstract


Background: Artificial intelligence (AI) represents an exciting shift for orthopaedic surgery, where its role is rapidly evolving. ChatGPT is an AI language model that is preeminent among those leading the mass consumer uptake of AI. Artamonov and colleagues compared ChatGPT with orthopaedic surgeons on the diagnosis and management of anterior shoulder instability and found only a limited correlation between them. This study aims to further explore how reliable ChatGPT is compared with orthopaedic surgeons.

Methods: Twenty-three statements were extracted from the article "Building Consensus: Development of a Best Practice Guideline (BPG) for Surgical Site Infection (SSI) Prevention in High-risk Pediatric Spine Surgery" by Vitale and colleagues. These included 14 consensus statements and an additional 9 statements that did not reach consensus. ChatGPT was asked to state the extent to which it agreed with each statement.
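For context, a minimal sketch of how this elicitation step could be scripted is shown below. The study queried ChatGPT conversationally, so the model name, prompt wording, and the rate_statement helper are illustrative assumptions rather than the authors' method; the sketch assumes the OpenAI Python SDK.

```python
# Illustrative sketch (not the authors' method): rate one consensus statement
# with an LLM via the OpenAI Python SDK. Model name and prompt wording are assumptions.
from openai import OpenAI

LIKERT = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def rate_statement(statement: str, model: str = "gpt-4o") -> str:
    """Ask the model for a 5-point Likert rating of a single consensus statement."""
    prompt = (
        "Rate your agreement with the following statement about surgical site "
        "infection prevention in high-risk pediatric spine surgery.\n"
        f"Statement: {statement}\n"
        f"Answer with exactly one of: {', '.join(LIKERT)}."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce run-to-run variation
    )
    return response.choices[0].message.content.strip().lower()


# Hypothetical usage; the statement wording here is invented for illustration only.
# rating = rate_statement("Patients should receive a preoperative chlorhexidine wash.")
```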

Results: ChatGPT appeared to demonstrate a fair correlation with most expert responses to the 14 consensus statements. It appeared less emphatic than the experts, often stating that it "agreed" with a statement where the most frequent expert response was "strongly agree." It reached the opposite conclusion to the majority of experts on a single consensus statement, regarding the use of ultraviolet light in the operating theatre; ChatGPT may have been drawing on more up-to-date literature published after the consensus statement.
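To make the notion of a "fair correlation" concrete, the sketch below shows one conventional way to quantify agreement between two sets of Likert-scale responses, a linearly weighted Cohen's kappa. The example responses and the choice of kappa are illustrative assumptions; the published study reports its agreement descriptively.

```python
# Illustrative sketch (not the authors' analysis): quantify model-versus-expert
# agreement on a 5-point Likert scale with a linearly weighted Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

SCALE = {"strongly disagree": 1, "disagree": 2, "neutral": 3, "agree": 4, "strongly agree": 5}

# Invented example data: modal expert response and ChatGPT response per statement.
expert = ["strongly agree", "strongly agree", "agree", "strongly agree"]
chatgpt = ["agree", "strongly agree", "agree", "disagree"]

kappa = cohen_kappa_score(
    [SCALE[r] for r in expert],
    [SCALE[r] for r in chatgpt],
    weights="linear",  # larger Likert disagreements are penalised more heavily
)
print(f"Linearly weighted kappa: {kappa:.2f}")
```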

Conclusions: This study demonstrated a reasonable correlation between ChatGPT and orthopaedic surgeons when providing simple responses. ChatGPT's function may be limited when asked to provide more complex answers. This study adds to a growing body of discussion and evidence exploring AI and whether its function is reliable enough to enter the high-accountability world of health care.

Clinical relevance: This article is of high clinical relevance to orthopaedic surgery given the rapidly emerging applications of AI, which create a need to understand the extent to which AI can function in the clinical setting and the risks this would entail.
