Nikhil S Patil, Ryan S Huang, Scott Caterine, Jason Yao, Natasha Larocque, Christian B van der Pol, Euan Stubbs
{"title":"人工智能聊天机器人对计算机断层扫描和磁共振成像场景的风险和益处的理解。","authors":"Nikhil S Patil, Ryan S Huang, Scott Caterine, Jason Yao, Natasha Larocque, Christian B van der Pol, Euan Stubbs","doi":"10.1177/08465371231220561","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>Patients may seek online information to better understand medical imaging procedures. The purpose of this study was to assess the accuracy of information provided by 2 popular artificial intelligence (AI) chatbots pertaining to common imaging scenarios' risks, benefits, and alternatives.</p><p><strong>Methods: </strong>Fourteen imaging-related scenarios pertaining to computed tomography (CT) or magnetic resonance imaging (MRI) were used. Factors including the use of intravenous contrast, the presence of renal disease, and whether the patient was pregnant were included in the analysis. For each scenario, 3 prompts for outlining the (1) risks, (2) benefits, and (3) alternative imaging choices or potential implications of not using contrast were inputted into ChatGPT and Bard. A grading rubric and a 5-point Likert scale was used by 2 independent reviewers to grade responses. Prompt variability and chatbot context dependency were also assessed.</p><p><strong>Results: </strong>ChatGPT's performance was superior to Bard's in accurately responding to prompts per Likert grading (4.36 ± 0.63 vs 3.25 ± 1.03 seconds, <i>P</i> < .0001). There was substantial agreement between independent reviewer grading for ChatGPT (κ = 0.621) and Bard (κ = 0.684). Response text length was not statistically different between ChatGPT and Bard (2087 ± 256 characters vs 2162 ± 369 characters, <i>P</i> = .24). Response time was longer for ChatGPT (34 ± 2 vs 8 ± 1 seconds, <i>P</i> < .0001).</p><p><strong>Conclusions: </strong>ChatGPT performed superior to Bard at outlining risks, benefits, and alternatives to common imaging scenarios. Generally, context dependency and prompt variability did not change chatbot response content. Due to the lack of detailed scientific reasoning and inability to provide patient-specific information, both AI chatbots have limitations as a patient information resource.</p>","PeriodicalId":55290,"journal":{"name":"Canadian Association of Radiologists Journal-Journal De L Association Canadienne Des Radiologistes","volume":" ","pages":"518-524"},"PeriodicalIF":2.9000,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial Intelligence Chatbots' Understanding of the Risks and Benefits of Computed Tomography and Magnetic Resonance Imaging Scenarios.\",\"authors\":\"Nikhil S Patil, Ryan S Huang, Scott Caterine, Jason Yao, Natasha Larocque, Christian B van der Pol, Euan Stubbs\",\"doi\":\"10.1177/08465371231220561\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>Patients may seek online information to better understand medical imaging procedures. The purpose of this study was to assess the accuracy of information provided by 2 popular artificial intelligence (AI) chatbots pertaining to common imaging scenarios' risks, benefits, and alternatives.</p><p><strong>Methods: </strong>Fourteen imaging-related scenarios pertaining to computed tomography (CT) or magnetic resonance imaging (MRI) were used. Factors including the use of intravenous contrast, the presence of renal disease, and whether the patient was pregnant were included in the analysis. 
For each scenario, 3 prompts for outlining the (1) risks, (2) benefits, and (3) alternative imaging choices or potential implications of not using contrast were inputted into ChatGPT and Bard. A grading rubric and a 5-point Likert scale was used by 2 independent reviewers to grade responses. Prompt variability and chatbot context dependency were also assessed.</p><p><strong>Results: </strong>ChatGPT's performance was superior to Bard's in accurately responding to prompts per Likert grading (4.36 ± 0.63 vs 3.25 ± 1.03 seconds, <i>P</i> < .0001). There was substantial agreement between independent reviewer grading for ChatGPT (κ = 0.621) and Bard (κ = 0.684). Response text length was not statistically different between ChatGPT and Bard (2087 ± 256 characters vs 2162 ± 369 characters, <i>P</i> = .24). Response time was longer for ChatGPT (34 ± 2 vs 8 ± 1 seconds, <i>P</i> < .0001).</p><p><strong>Conclusions: </strong>ChatGPT performed superior to Bard at outlining risks, benefits, and alternatives to common imaging scenarios. Generally, context dependency and prompt variability did not change chatbot response content. Due to the lack of detailed scientific reasoning and inability to provide patient-specific information, both AI chatbots have limitations as a patient information resource.</p>\",\"PeriodicalId\":55290,\"journal\":{\"name\":\"Canadian Association of Radiologists Journal-Journal De L Association Canadienne Des Radiologistes\",\"volume\":\" \",\"pages\":\"518-524\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Canadian Association of Radiologists Journal-Journal De L Association Canadienne Des Radiologistes\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1177/08465371231220561\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/1/6 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q2\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Canadian Association of Radiologists Journal-Journal De L Association Canadienne Des Radiologistes","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1177/08465371231220561","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/6 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Artificial Intelligence Chatbots' Understanding of the Risks and Benefits of Computed Tomography and Magnetic Resonance Imaging Scenarios.
Purpose: Patients may seek online information to better understand medical imaging procedures. The purpose of this study was to assess the accuracy of information provided by 2 popular artificial intelligence (AI) chatbots regarding the risks, benefits, and alternatives of common imaging scenarios.
Methods: Fourteen imaging-related scenarios pertaining to computed tomography (CT) or magnetic resonance imaging (MRI) were used. Factors including the use of intravenous contrast, the presence of renal disease, and whether the patient was pregnant were included in the analysis. For each scenario, 3 prompts outlining the (1) risks, (2) benefits, and (3) alternative imaging choices or potential implications of not using contrast were input into ChatGPT and Bard. A grading rubric and a 5-point Likert scale were used by 2 independent reviewers to grade responses. Prompt variability and chatbot context dependency were also assessed.
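A minimal sketch of the prompting workflow described above, assuming programmatic access through the OpenAI Python SDK; the abstract does not state how prompts were submitted (likely the public web interfaces), and the scenario and prompt texts below are illustrative placeholders rather than the study's actual wording:

# Minimal sketch (not the authors' code): for each imaging scenario, three
# prompts covering risks, benefits, and alternatives are sent to a chatbot
# and the replies are stored for later grading.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

scenarios = [  # illustrative placeholders; the study used 14 CT/MRI scenarios
    "CT of the abdomen with intravenous contrast in a patient with renal disease",
    "MRI of the brain with gadolinium contrast in a pregnant patient",
]
prompt_templates = [  # hypothetical phrasings of the study's 3 prompt types
    "What are the risks of the following imaging test: {s}?",
    "What are the benefits of the following imaging test: {s}?",
    "What are the alternatives to the following imaging test, "
    "or the implications of not using contrast: {s}?",
]

responses = {}
for s in scenarios:
    for i, template in enumerate(prompt_templates):
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": template.format(s=s)}],
        )
        responses[(s, i)] = reply.choices[0].message.content  # graded by 2 reviewers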
Results: ChatGPT responded to prompts more accurately than Bard per Likert grading (4.36 ± 0.63 vs 3.25 ± 1.03, P < .0001). There was substantial agreement between the independent reviewers' grades for both ChatGPT (κ = 0.621) and Bard (κ = 0.684). Response text length did not differ significantly between ChatGPT and Bard (2087 ± 256 characters vs 2162 ± 369 characters, P = .24). Response time was longer for ChatGPT (34 ± 2 vs 8 ± 1 seconds, P < .0001).
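The agreement and comparison statistics above can be illustrated in form with a short sketch. The abstract does not name the significance test, so an independent-samples t-test is assumed here, and every grade value below is a made-up placeholder rather than study data:

# Illustrative sketch of the reported statistics: Cohen's kappa between the
# two reviewers' Likert grades, and a t-test comparing chatbot mean grades.
from scipy.stats import ttest_ind
from sklearn.metrics import cohen_kappa_score

# Hypothetical 1-5 Likert grades from the 2 independent reviewers
reviewer1 = [5, 4, 4, 5, 3, 4, 5, 4]  # placeholder grades for ChatGPT responses
reviewer2 = [5, 4, 5, 5, 3, 4, 4, 4]
kappa = cohen_kappa_score(reviewer1, reviewer2)  # 0.61-0.80 = substantial agreement

# Hypothetical per-response mean grades for each chatbot
chatgpt = [4.5, 4.0, 4.5, 5.0, 3.0, 4.0, 4.5, 4.0]
bard = [3.5, 3.0, 4.0, 2.5, 3.0, 3.5, 2.0, 3.5]
t_stat, p_value = ttest_ind(chatgpt, bard)  # assumed test; the paper may have used another
print(f"kappa = {kappa:.3f}, t = {t_stat:.2f}, p = {p_value:.4f}")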
Conclusions: ChatGPT outperformed Bard at outlining the risks, benefits, and alternatives for common imaging scenarios. Generally, context dependency and prompt variability did not change chatbot response content. Due to their lack of detailed scientific reasoning and inability to provide patient-specific information, both AI chatbots have limitations as a patient information resource.
Journal Introduction:
The Canadian Association of Radiologists Journal is a peer-reviewed, Medline-indexed publication that presents a broad scientific review of radiology in Canada. The Journal covers such topics as abdominal imaging, cardiovascular radiology, computed tomography, continuing professional development, education and training, gastrointestinal radiology, health policy and practice, magnetic resonance imaging, musculoskeletal radiology, neuroradiology, nuclear medicine, pediatric radiology, radiology history, radiology practice guidelines and advisories, thoracic and cardiac imaging, trauma and emergency room imaging, ultrasonography, and vascular and interventional radiology. Article types considered for publication include original research articles, critically appraised topics, review articles, guest editorials, pictorial essays, technical notes, and letters to the Editor.