Accuracy and Readability of ChatGPT on Potential Complications of Interventional Radiology Procedures: AI-Powered Patient Interviewing
Esat Kaba, Mehmet Beyazal, Fatma Beyazal Çeliker, İbrahim Yel, Thomas J Vogl
Academic Radiology, published 2024-11-16
DOI: 10.1016/j.acra.2024.10.028 (https://doi.org/10.1016/j.acra.2024.10.028)
Abstract
Rationale and objectives: It is crucial to inform patients about potential complications and to obtain informed consent before interventional radiology procedures. In this study, we investigated the accuracy, reliability, and readability of the information provided by ChatGPT-4 about potential complications of interventional radiology procedures.
Materials and methods: The ChatGPT-4 chatbot was asked about the potential major and minor complications of 25 different interventional radiology procedures (8 non-vascular, 17 vascular). The responses were evaluated by two experienced interventional radiologists (>25 years and 10 years of experience) on a 5-point Likert scale against the Cardiovascular and Interventional Radiological Society of Europe guidelines. Agreement between the two radiologists' scores was assessed with the Wilcoxon signed-rank test, the intraclass correlation coefficient (ICC), and the Pearson correlation coefficient (PCC). In addition, readability and complexity were quantitatively assessed using the Flesch-Kincaid Grade Level, the Flesch Reading Ease score, and the Simple Measure of Gobbledygook (SMOG) index.
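The three readability metrics named above have standard published formulas. The minimal Python sketch below shows how such scores are computed from sentence, word, and syllable counts; the regex-based syllable counter and the sample text are simplifying assumptions for illustration, since the abstract does not state which tool the authors used.

```python
import re

def _syllables(word: str) -> int:
    """Crude syllable estimate: count groups of consecutive vowels (assumption)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    """Standard Flesch-Kincaid Grade Level, Flesch Reading Ease, and SMOG formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    n_sent, n_words = len(sentences), len(words)
    n_syll = sum(_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if _syllables(w) >= 3)

    asl = n_words / n_sent   # average sentence length
    asw = n_syll / n_words   # average syllables per word

    return {
        "fk_grade":    0.39 * asl + 11.8 * asw - 15.59,
        "flesch_ease": 206.835 - 1.015 * asl - 84.6 * asw,
        "smog":        1.0430 * (polysyllables * (30 / n_sent)) ** 0.5 + 3.1291,
    }

# Hypothetical sample text, not taken from the study's data:
print(readability("Embolization may cause post-embolization syndrome. "
                  "Minor complications resolve spontaneously in most patients."))
```

Higher Flesch-Kincaid and SMOG values indicate harder text, while a lower Flesch Reading Ease score does the same; the college-level results reported below follow this convention.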
Results: Interventional radiologist 1 (IR1) and interventional radiologist 2 (IR2) awarded 104 and 109 points, respectively, out of a possible 125 points across all procedures. There was no statistically significant difference between the two IRs' total scores (p = 0.244), and their ratings showed high agreement across all procedures (ICC = 0.928). Both IRs scored the eight non-vascular procedures 34 out of 40 points. The 17 vascular procedures received 70 of 85 points from IR1 and 75 from IR2. Inter-observer agreement was good, with PCC values of 0.908 and 0.896 for non-vascular and vascular procedures, respectively. Readability was low overall: the mean Flesch-Kincaid Grade Level, Flesch Reading Ease score, and SMOG index were 12.51 ± 1.14 (college level), 30.27 ± 8.38 (college level), and 14.46 ± 0.76 (college level), respectively. There was no statistically significant difference in readability between non-vascular and vascular procedures (p = 0.16).
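For context, the sketch below reproduces the three agreement statistics reported above using SciPy's wilcoxon and pearsonr plus a hand-computed ICC. The rating vectors are hypothetical (the per-procedure scores are not given in the abstract), and the ICC(3,1) variant shown is an assumption, as the paper does not state which ICC form was used.

```python
import numpy as np
from scipy.stats import wilcoxon, pearsonr

# Hypothetical Likert ratings for eight procedures from two raters:
ir1 = np.array([5, 4, 5, 3, 4, 5, 4, 4])
ir2 = np.array([5, 4, 5, 4, 4, 5, 5, 4])

# Wilcoxon signed-rank test for a systematic difference between raters.
stat, p = wilcoxon(ir1, ir2)

# Pearson correlation between the two raters' scores.
r, _ = pearsonr(ir1, ir2)

# ICC(3,1): two-way mixed effects, consistency, single rater (Shrout & Fleiss).
scores = np.column_stack([ir1, ir2]).astype(float)
n, k = scores.shape
grand = scores.mean()
ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
sse = ((scores - scores.mean(axis=1, keepdims=True)
               - scores.mean(axis=0, keepdims=True) + grand) ** 2).sum()
ms_err = sse / ((n - 1) * (k - 1))
icc31 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

print(f"Wilcoxon p={p:.3f}, Pearson r={r:.3f}, ICC(3,1)={icc31:.3f}")
```

A non-significant Wilcoxon p-value alongside high ICC and PCC values, as in the study's results, indicates that the two raters scored the responses consistently and without a systematic offset.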
Conclusion: ChatGPT-4 demonstrated remarkable performance, highlighting its potential to improve access to information about interventional radiology procedures and to support the creation of patient education materials. While ChatGPT provided accurate information with no evidence of hallucination, it is important to emphasize that a high level of education and health literacy is required to fully comprehend its responses.
About the journal
Academic Radiology publishes original reports of clinical and laboratory investigations in diagnostic imaging, the diagnostic use of radioactive isotopes, computed tomography, positron emission tomography, magnetic resonance imaging, ultrasound, digital subtraction angiography, image-guided interventions and related techniques. It also includes brief technical reports describing original observations, techniques, and instrumental developments; state-of-the-art reports on clinical issues, new technology and other topics of current medical importance; meta-analyses; scientific studies and opinions on radiologic education; and letters to the Editor.