Role of Natural Language Processing in Automatic Detection of Unexpected Findings in Radiology Reports: A Comparative Study of RoBERTa, CNN, and ChatGPT
Pilar López-Úbeda, PhD; Teodoro Martín-Noguerol, MD; Jorge Escartín, MD; Antonio Luna, MD, PhD
{"title":"Role of Natural Language Processing in Automatic Detection of Unexpected Findings in Radiology Reports: A Comparative Study of RoBERTa, CNN, and ChatGPT","authors":"Pilar López-Úbeda PhD , Teodoro Martín-Noguerol MD , Jorge Escartín MD , Antonio Luna MD, PhD","doi":"10.1016/j.acra.2024.07.057","DOIUrl":null,"url":null,"abstract":"<div><h3>Rationale and Objectives</h3><div>Large Language Models can capture the context of radiological reports, offering high accuracy in detecting unexpected findings. We aim to fine-tune a Robustly Optimized BERT Pretraining Approach (RoBERTa) model for the automatic detection of unexpected findings in radiology reports to assist radiologists in this relevant task. Second, we compared the performance of RoBERTa with classical convolutional neural network (CNN) and with GPT4 for this goal.</div></div><div><h3>Materials and Methods</h3><div>For this study, a dataset consisting of 44,631 radiological reports for training and 5293 for the initial test set was used. A smaller subset comprising 100 reports was utilized for the comparative test set. The complete dataset was obtained from our institution's Radiology Information System, including reports from various dates, examinations, genders, ages, etc. For the study's methodology, we evaluated two Large Language Models, specifically performing fine-tuning on RoBERTa and developing a prompt for ChatGPT. Furthermore, extending previous studies, we included a CNN in our comparison.</div></div><div><h3>Results</h3><div>The results indicate an accuracy of 86.15% in the initial test set using the RoBERTa model. Regarding the comparative test set, RoBERTa achieves an accuracy of 79%, ChatGPT 64%, and the CNN 49%. Notably, RoBERTa outperforms the other systems by 30% and 15%, respectively.</div></div><div><h3>Conclusion</h3><div>Fine-tuned RoBERTa model can accurately detect unexpected findings in radiology reports outperforming the capability of CNN and ChatGPT for this task.</div></div>","PeriodicalId":50928,"journal":{"name":"Academic Radiology","volume":"31 12","pages":"Pages 4833-4842"},"PeriodicalIF":3.8000,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Academic Radiology","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1076633224005622","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
引用次数: 0
Abstract
Rationale and Objectives
Large Language Models can capture the context of radiology reports, offering high accuracy in detecting unexpected findings. We aim to fine-tune a Robustly Optimized BERT Pretraining Approach (RoBERTa) model for the automatic detection of unexpected findings in radiology reports, to assist radiologists in this task. We also compare the performance of RoBERTa with that of a classical convolutional neural network (CNN) and with GPT-4 for this goal.
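To illustrate what fine-tuning RoBERTa as a binary report classifier typically involves, here is a minimal sketch using the HuggingFace Transformers library. The checkpoint name, hyperparameters, and toy reports are assumptions for demonstration and do not reflect the authors' actual configuration.

```python
# Minimal sketch: fine-tune RoBERTa as a binary classifier for
# "unexpected finding present" vs. "absent". Checkpoint, hyperparameters,
# and the toy data are illustrative assumptions, not the paper's setup.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL_NAME = "roberta-base"  # assumed checkpoint; a domain- or language-specific RoBERTa could be used instead

class ReportDataset(Dataset):
    """Wraps tokenized radiology reports and their binary labels."""
    def __init__(self, texts, labels, tokenizer, max_length=512):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_length, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.enc.items()}
        item["labels"] = self.labels[idx]
        return item

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy examples standing in for the 44,631-report training set.
train_texts = ["Chest CT. Incidental 8 mm pulmonary nodule noted.",
               "Abdominal ultrasound. No significant abnormality."]
train_labels = [1, 0]  # 1 = unexpected finding present, 0 = absent

train_ds = ReportDataset(train_texts, train_labels, tokenizer)

args = TrainingArguments(output_dir="roberta-unexpected-findings",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```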
Materials and Methods
For this study, a dataset of 44,631 radiology reports was used for training and 5,293 for the initial test set. A smaller subset of 100 reports was used as the comparative test set. The complete dataset was obtained from our institution's Radiology Information System and covers a range of examination dates, examination types, and patient genders and ages. Methodologically, we evaluated two Large Language Models: we fine-tuned RoBERTa and developed a prompt for ChatGPT. In addition, extending previous studies, we included a CNN in the comparison.
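For the prompt-based arm, a classification call to GPT-4 might look like the sketch below. The prompt wording, model name, and answer parsing are illustrative assumptions, not the authors' actual prompt.

```python
# Minimal sketch: prompt GPT-4 to label a single report, in the spirit of
# the ChatGPT arm of the comparison. Prompt text and parsing are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = (
    "You are assisting a radiologist. Read the radiology report below and "
    "answer with exactly one word, YES or NO: does it describe an unexpected "
    "(incidental) finding?\n\nReport:\n{report}"
)

def classify_report(report_text: str) -> int:
    """Return 1 if the model judges the report to contain an unexpected finding."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # keep the answer as deterministic as possible
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(report=report_text)}],
    )
    answer = response.choices[0].message.content.strip().upper()
    return 1 if answer.startswith("YES") else 0

print(classify_report("Lumbar spine MRI. Incidental left renal cyst identified."))
```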
Results
On the initial test set, the RoBERTa model achieved an accuracy of 86.15%. On the comparative test set, RoBERTa achieved an accuracy of 79%, ChatGPT 64%, and the CNN 49%; that is, RoBERTa outperformed ChatGPT by 15 percentage points and the CNN by 30 percentage points.
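For reference, accuracy here is simply the fraction of reports labelled correctly against the radiologists' gold standard; a minimal sketch is shown below with placeholder label arrays, not the study's data.

```python
# Minimal sketch of the accuracy metric used to compare the three systems.
# The label arrays are placeholders, not the 100-report comparative set.
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0]          # gold labels from radiologists (placeholder)
y_pred_roberta = [1, 0, 1, 0, 0]  # model predictions (placeholder)

acc = accuracy_score(y_true, y_pred_roberta)  # correct predictions / total reports
print(f"accuracy = {acc:.2%}")    # e.g. 79 correct out of 100 reports gives 79%
```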
Conclusion
The fine-tuned RoBERTa model can accurately detect unexpected findings in radiology reports, outperforming both the CNN and ChatGPT on this task.