Julian M M Rogasch, Giulia Metzger, Martina Preisler, Markus Galler, Felix Thiele, Winfried Brenner, Felix Feldhaus, Christoph Wetz, Holger Amthauer, Christian Furth, Imke Schatka
ChatGPT: Can You Prepare My Patients for [18F]FDG PET/CT and Explain My Reports?
Journal of Nuclear Medicine, published 2023-12-01, pages 1876-1879
DOI: 10.2967/jnumed.123.266114
Impact factor 9.1; JCR Q1 (Radiology, Nuclear Medicine & Medical Imaging)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10690125/pdf/
Citations: 0
Abstract
We evaluated whether the artificial intelligence chatbot ChatGPT can adequately answer patient questions related to [18F]FDG PET/CT in common clinical indications before and after scanning. Methods: Thirteen questions regarding [18F]FDG PET/CT were submitted to ChatGPT. ChatGPT was also asked to explain 6 PET/CT reports (lung cancer, Hodgkin lymphoma) and answer 6 follow-up questions (e.g., on tumor stage or recommended treatment). To be rated "useful" or "appropriate," a response had to be adequate by the standards of the nuclear medicine staff. Inconsistency was assessed by regenerating responses. Results: Responses were rated "appropriate" for 92% of 25 tasks and "useful" for 96%. Considerable inconsistencies were found between regenerated responses for 16% of tasks. Responses to 83% of sensitive questions (e.g., staging/treatment options) were rated "empathetic." Conclusion: ChatGPT might adequately substitute for advice given to patients by nuclear medicine staff in the investigated settings. Improving the consistency of ChatGPT would further increase reliability.
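The study assessed inconsistency by regenerating responses and having nuclear medicine staff judge whether the variants still agreed. Purely as an illustration of that idea (not the study's actual method, which relied on expert raters), one could flag a task as potentially inconsistent when two regenerated responses share too little wording; the function names and the Jaccard threshold below are hypothetical choices for this sketch.

```python
from itertools import combinations


def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two responses (0.0-1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0


def flag_inconsistent(responses: list[str], threshold: float = 0.5) -> bool:
    """Flag a task if any pair of regenerated responses falls below the
    similarity threshold -- a crude automated stand-in for the manual
    consistency rating described in the abstract."""
    return any(jaccard(a, b) < threshold for a, b in combinations(responses, 2))
```

A real evaluation would still need human review, since two semantically contradictory answers can share most of their vocabulary; the sketch only shows how regenerated outputs might be screened before expert rating.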
About the journal:
The Journal of Nuclear Medicine (JNM), self-published by the Society of Nuclear Medicine and Molecular Imaging (SNMMI), provides readers worldwide with clinical and basic science investigations, continuing education articles, reviews, employment opportunities, and updates on practice and research. In the 2022 Journal Citation Reports (released in June 2023), JNM ranked sixth in impact among 203 medical journals worldwide in the radiology, nuclear medicine, and medical imaging category.