Can ChatGPT provide high-quality patient information on male lower urinary tract symptoms suggestive of benign prostate enlargement?
Angie K Puerto Nino, Valentina Garcia Perez, Silvia Secco, Cosimo De Nunzio, Riccardo Lombardo, Kari A O Tikkinen, Dean S Elterman
Prostate Cancer and Prostatic Diseases (JCR Q1, Oncology; IF 5.1) | Published 2024-06-13 | DOI: 10.1038/s41391-024-00847-7
Citations: 0
Abstract
Background: ChatGPT has recently emerged as a novel resource for patients' disease-specific inquiries. There is, however, limited evidence assessing the quality of its information. We evaluated the accuracy and quality of ChatGPT's responses to questions on male lower urinary tract symptoms (LUTS) suggestive of benign prostate enlargement (BPE), compared against two reference resources.
Methods: Using patient information websites from the European Association of Urology and the American Urological Association as reference material, we formulated 88 BPE-centric questions for ChatGPT 4.0+. Independently and in duplicate, we compared ChatGPT's responses with the reference material, calculating accuracy through F1 score, precision, and recall metrics. We used a 5-point Likert scale for quality rating. We evaluated examiner agreement using the intraclass correlation coefficient (ICC) and assessed the difference in quality scores with the Wilcoxon signed-rank test.
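For context, these accuracy metrics follow the standard information-retrieval definitions, where TP, FP, and FN denote true positives, false positives, and false negatives. This is a sketch of the usual formulas; the abstract does not specify at what granularity (e.g., per statement of reference material) matches were counted:

$$ \mathrm{precision} = \frac{TP}{TP + FP}, \qquad \mathrm{recall} = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} $$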
Results: ChatGPT addressed all (88/88) LUTS/BPE-related questions. Across the 88 questions, the F1 score was 0.79 (range: 0-1), precision 0.66 (range: 0-1), and recall 0.97 (range: 0-1); the quality score had a median of 4 (range: 1-5). Examiners had a good level of agreement (ICC = 0.86), and we found no statistically significant difference between the examiners' overall quality scores (p = 0.72).
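As a consistency check, the reported precision and recall reproduce the reported F1: 2 × (0.66 × 0.97)/(0.66 + 0.97) ≈ 0.79. The two statistical analyses named above can be illustrated with a minimal sketch on synthetic scores (not the study data; the two-examiner setup, the synthetic Likert values, and the pingouin ICC call are assumptions for illustration only):

```python
# Illustrative sketch of the two statistical analyses named in the
# abstract, run on synthetic data -- NOT the study's actual scores.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)
n_questions = 88

# Hypothetical 1-5 Likert quality scores from two examiners.
examiner_a = rng.integers(3, 6, size=n_questions)
examiner_b = np.clip(examiner_a + rng.integers(-1, 2, size=n_questions), 1, 5)

# Inter-rater agreement: intraclass correlation coefficient,
# computed from a long-format table (one row per rating).
scores = pd.DataFrame({
    "question": np.tile(np.arange(n_questions), 2),
    "examiner": ["A"] * n_questions + ["B"] * n_questions,
    "score": np.concatenate([examiner_a, examiner_b]),
})
icc = pg.intraclass_corr(data=scores, targets="question",
                         raters="examiner", ratings="score")
print(icc[["Type", "Description", "ICC"]])

# Difference in quality scores: paired, non-parametric
# Wilcoxon signed-rank test between the two examiners.
stat, p = wilcoxon(examiner_a, examiner_b)
print(f"Wilcoxon signed-rank statistic = {stat}, p = {p:.2f}")
```

The Wilcoxon signed-rank test fits this design because the paired Likert scores are ordinal, so a parametric paired t-test would be inappropriate.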
Discussion: ChatGPT demonstrated potential utility in educating patients about LUTS/BPE, its prognosis, and its treatment, which can support the decision-making process. Prudence is warranted, however, before recommending it as a patient's sole source of information. Additional studies are needed to fully understand AI's efficacy in delivering patient education in urology.
About the journal:
Prostate Cancer and Prostatic Diseases covers all aspects of prostatic diseases, in particular prostate cancer, the subject of intensive basic and clinical research worldwide. The journal also reports on exciting new developments in diagnosis, surgery, radiotherapy, drug discovery and medical management.
Prostate Cancer and Prostatic Diseases is of interest to surgeons, oncologists and clinicians treating patients, and to those involved in research into diseases of the prostate. The journal covers three main areas: prostate cancer, male LUTS and prostatitis.
Prostate Cancer and Prostatic Diseases publishes original research articles, reviews, topical comment and critical appraisals of scientific meetings and the latest books. The journal also contains a calendar of forthcoming scientific meetings. The Editors and a distinguished Editorial Board ensure that submitted articles receive fast and efficient attention and are refereed to the highest possible scientific standard. A fast track system is available for topical articles of particular significance.