Physician Assessment of ChatGPT and Bing Answers to American Cancer Society's Questions to Ask About Your Cancer.
James R Janopaul-Naylor, Andee Koo, David C Qian, Neal S McCall, Yuan Liu, Sagar A Patel
American Journal of Clinical Oncology-Cancer Clinical Trials, 2024, pp. 17-21 (Epub 2023-10-12). DOI: 10.1097/COC.0000000000001050. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10841271/pdf/
Abstract
Objectives: Artificial intelligence (AI) chatbots are a new, publicly available tool that patients can use to access health care-related information, but their reliability for cancer-related questions is unknown. This study assesses the quality of chatbot responses to common questions from patients with cancer.
Methods: From February to March 2023, we queried chat generative pretrained transformer (ChatGPT) from OpenAI and Bing AI from Microsoft with questions from the American Cancer Society's recommended "Questions to Ask About Your Cancer," customized for all stages of breast, colon, lung, and prostate cancer. Questions were additionally grouped by type (prognosis, treatment, or miscellaneous). The quality of the AI chatbot responses was assessed by an expert panel using the validated DISCERN criteria.
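For readers who want to reproduce this kind of query at scale, the sketch below is illustrative only: the study posed questions through the public ChatGPT and Bing chat interfaces, whereas this snippet assumes the OpenAI Python client, a placeholder model name, and a hypothetical shortened question list rather than the authors' actual procedure.

```python
# Illustrative sketch (not the authors' method): posing ACS-style questions
# to a chat model programmatically via the OpenAI Python client (v1+).
# The model name and the question list are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

acs_questions = [
    "What type of breast cancer do I have?",
    "Has my prostate cancer spread beyond the prostate?",
    "What are the treatment options for stage III colon cancer?",
]

responses = {}
for question in acs_questions:
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; the study used the public chat interface
        messages=[{"role": "user", "content": question}],
    )
    responses[question] = completion.choices[0].message.content

# Print a short preview of each answer for later expert rating.
for q, a in responses.items():
    print(f"Q: {q}\nA: {a[:200]}...\n")
```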
Results: Across the 117 questions presented to ChatGPT and Bing, the average scores were 3.9 and 3.2, respectively (P < 0.001), and the overall DISCERN scores were 4.1 and 4.4, respectively. By disease site, the average scores for ChatGPT and Bing, respectively, were 3.9 and 3.6 for prostate cancer (P = 0.02), 3.7 and 3.3 for lung cancer (P < 0.001), 4.1 and 2.9 for breast cancer (P < 0.001), and 3.8 and 3.0 for colorectal cancer (P < 0.001). By type of question, the average scores for ChatGPT and Bing, respectively, were 3.6 and 3.4 for prognostic questions (P = 0.12), 3.9 and 3.1 for treatment questions (P < 0.001), and 4.2 and 3.3 for miscellaneous questions (P = 0.001). For 3 responses (3%) by ChatGPT and 18 responses (15%) by Bing, at least one panelist rated them as having serious or extensive shortcomings.
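As context for how such paired comparisons can be computed, the sketch below is a hypothetical example: the abstract does not publish per-question ratings or name the statistical test behind the P values, so invented scores and a Wilcoxon signed-rank test are assumed here purely for illustration.

```python
# Hypothetical example of comparing per-question ratings between two chatbots.
# The ratings are invented; the abstract reports only means and P values and
# does not specify which paired test the authors used.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
chatgpt_scores = rng.integers(3, 6, size=117).astype(float)          # ratings in {3, 4, 5}
bing_scores = np.clip(chatgpt_scores - rng.integers(0, 2, size=117), 1, 5)

print(f"ChatGPT mean: {chatgpt_scores.mean():.1f}")
print(f"Bing mean:    {bing_scores.mean():.1f}")

# Paired, non-parametric comparison of the two chatbots on the same questions.
stat, p = wilcoxon(chatgpt_scores, bing_scores)
print(f"Wilcoxon signed-rank: statistic={stat:.1f}, p={p:.3g}")
```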
Conclusions: AI chatbots offer multiple opportunities for innovation in health care. This analysis suggests a critical need, particularly around cancer prognostication, for continual refinement to limit misleading counseling, confusion, and emotional distress for patients and families.
About the journal:
American Journal of Clinical Oncology (AJCO) is a multidisciplinary journal for cancer surgeons, radiation oncologists, medical oncologists, GYN oncologists, and pediatric oncologists.
The emphasis of AJCO is on combined-modality, multidisciplinary, loco-regional management of cancer. The journal also emphasizes translational research, outcome studies, and cost-utility analyses, and includes opinion pieces and review articles.
The editorial board includes a large number of distinguished surgeons, radiation oncologists, medical oncologists, GYN oncologists, pediatric oncologists, and others who are internationally recognized for expertise in their fields.