Artificial Intelligence Chatbots' Understanding of the Risks and Benefits of Computed Tomography and Magnetic Resonance Imaging Scenarios.

Canadian Association of Radiologists Journal (IF 2.9; Q2, Radiology, Nuclear Medicine & Medical Imaging; CAS Tier 3, Medicine). Pub Date: 2024-08-01. Epub Date: 2024-01-06. DOI: 10.1177/08465371231220561
Nikhil S Patil, Ryan S Huang, Scott Caterine, Jason Yao, Natasha Larocque, Christian B van der Pol, Euan Stubbs
Citations: 0

Abstract


Purpose: Patients may seek online information to better understand medical imaging procedures. The purpose of this study was to assess the accuracy of information provided by 2 popular artificial intelligence (AI) chatbots pertaining to common imaging scenarios' risks, benefits, and alternatives.

Methods: Fourteen imaging-related scenarios pertaining to computed tomography (CT) or magnetic resonance imaging (MRI) were used. Factors including the use of intravenous contrast, the presence of renal disease, and whether the patient was pregnant were included in the analysis. For each scenario, 3 prompts for outlining the (1) risks, (2) benefits, and (3) alternative imaging choices or potential implications of not using contrast were inputted into ChatGPT and Bard. A grading rubric and a 5-point Likert scale was used by 2 independent reviewers to grade responses. Prompt variability and chatbot context dependency were also assessed.
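The study design above is a simple grid: 14 scenarios, each queried with 3 prompt types per chatbot. A minimal sketch of that prompt grid, where the scenario names and prompt templates are illustrative assumptions (the paper does not publish its exact wording):

```python
# Hypothetical sketch of the study's prompt grid: each CT/MRI scenario is
# paired with three question types (risks, benefits, alternatives).
scenarios = [
    "a CT abdomen with intravenous contrast",
    "a CT abdomen with intravenous contrast in a patient with renal disease",
    "an MRI brain with gadolinium contrast in a pregnant patient",
    # ... the study used 14 such CT/MRI scenarios in total
]
question_types = {
    "risks": "What are the risks of {s}?",
    "benefits": "What are the benefits of {s}?",
    "alternatives": "What are alternative imaging choices to {s}, "
                    "or the implications of not using contrast?",
}
# One (scenario, question type, prompt text) tuple per grid cell.
prompts = [(s, q, template.format(s=s))
           for s in scenarios
           for q, template in question_types.items()]
print(len(prompts))  # 3 question types x 3 listed scenarios -> 9
```

Each resulting prompt string would then be submitted to ChatGPT and Bard and the responses graded independently.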

Results: ChatGPT's performance was superior to Bard's in accurately responding to prompts per Likert grading (4.36 ± 0.63 vs 3.25 ± 1.03, P < .0001). There was substantial agreement between the independent reviewers' grading for ChatGPT (κ = 0.621) and Bard (κ = 0.684). Response text length did not differ significantly between ChatGPT and Bard (2087 ± 256 vs 2162 ± 369 characters, P = .24). Response time was longer for ChatGPT (34 ± 2 vs 8 ± 1 seconds, P < .0001).
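The inter-rater agreement reported above is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch of the computation; the 5-point Likert scores below are invented for illustration and are not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items given identical scores.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of marginal frequencies per category.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical Likert scores from two reviewers over 14 responses.
reviewer_1 = [5, 4, 4, 5, 3, 4, 5, 5, 4, 3, 4, 5, 4, 4]
reviewer_2 = [5, 4, 3, 5, 3, 4, 5, 4, 4, 3, 4, 5, 4, 5]
print(round(cohens_kappa(reviewer_1, reviewer_2), 3))  # -> 0.659
```

By the commonly used Landis and Koch benchmarks, kappa values between 0.61 and 0.80, including the study's 0.621 and 0.684, indicate substantial agreement.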

Conclusions: ChatGPT outperformed Bard at outlining risks, benefits, and alternatives for common imaging scenarios. Generally, context dependency and prompt variability did not change chatbot response content. Because both AI chatbots lack detailed scientific reasoning and cannot provide patient-specific information, they have limitations as a patient information resource.

Source journal metrics:
CiteScore: 6.20
Self-citation rate: 12.90%
Articles per year: 98
Review turnaround: 6-12 weeks
About the journal: The Canadian Association of Radiologists Journal is a peer-reviewed, Medline-indexed publication that presents a broad scientific review of radiology in Canada. The Journal covers such topics as abdominal imaging, cardiovascular radiology, computed tomography, continuing professional development, education and training, gastrointestinal radiology, health policy and practice, magnetic resonance imaging, musculoskeletal radiology, neuroradiology, nuclear medicine, pediatric radiology, radiology history, radiology practice guidelines and advisories, thoracic and cardiac imaging, trauma and emergency room imaging, ultrasonography, and vascular and interventional radiology. Article types considered for publication include original research articles, critically appraised topics, review articles, guest editorials, pictorial essays, technical notes, and letters to the Editor.