From Chest X-Rays to Radiology Reports: A Multimodal Machine Learning Approach
Sonit Singh, Sarvnaz Karimi, K. Ho-Shon, Len Hamey
2019 Digital Image Computing: Techniques and Applications (DICTA), pp. 1-8, December 2019
DOI: 10.1109/DICTA47822.2019.8945819
Cited by: 17
Abstract
Interpreting medical images and summarising them in the form of radiology reports is a challenging, tedious, and complex task. A radiologist provides a complete description of a medical image in the form of a radiology report, describing normal or abnormal findings and providing a summary for decision making. Research shows that radiology practice is error-prone due to the limited number of experts, increasing patient volumes, and the subjective nature of human perception. To reduce the number of diagnostic errors and to ease the workload of radiologists, there is a need for a computer-aided report generation system that can automatically generate a radiology report for a given medical image. We propose an encoder-decoder framework that automatically generates radiology reports from medical images. Specifically, we use a Convolutional Neural Network as an encoder coupled with a multi-stage Stacked Long Short-Term Memory network as a decoder. We perform experiments on the Indiana University Chest X-ray collection, a publicly available dataset, to measure the effectiveness of our model. Experimental results demonstrate the effectiveness of our model in automatically generating radiology reports from medical images.
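The following is a minimal sketch of the encoder-decoder idea described in the abstract, not the authors' implementation: a CNN encodes the chest X-ray into a feature vector, and a stacked (multi-layer) LSTM decodes it into report tokens. The choice of a torchvision ResNet-18 backbone, the layer sizes, and the vocabulary size are illustrative assumptions.

```python
# Hedged sketch of a CNN encoder + stacked LSTM decoder for report generation.
# Backbone, dimensions, and vocabulary size are assumptions, not the paper's settings.
import torch
import torch.nn as nn
import torchvision.models as models


class CNNEncoder(nn.Module):
    """Encode an image into a fixed-size feature vector."""

    def __init__(self, embed_size: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)           # any CNN backbone would do
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.fc = nn.Linear(backbone.fc.in_features, embed_size)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        x = self.features(images).flatten(1)               # (B, 512)
        return self.fc(x)                                   # (B, embed_size)


class StackedLSTMDecoder(nn.Module):
    """Generate report tokens from the image embedding with a multi-layer LSTM."""

    def __init__(self, vocab_size: int, embed_size: int = 256,
                 hidden_size: int = 512, num_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, img_feat: torch.Tensor, captions: torch.Tensor) -> torch.Tensor:
        # Prepend the image feature as the first "token" (teacher forcing at train time).
        tokens = self.embed(captions)                       # (B, T, embed_size)
        inputs = torch.cat([img_feat.unsqueeze(1), tokens], dim=1)
        hidden, _ = self.lstm(inputs)                       # (B, T+1, hidden_size)
        return self.out(hidden)                              # logits over the vocabulary


# Toy usage: a batch of 2 images and 10-token target reports.
encoder, decoder = CNNEncoder(), StackedLSTMDecoder(vocab_size=5000)
images = torch.randn(2, 3, 224, 224)
captions = torch.randint(0, 5000, (2, 10))
logits = decoder(encoder(images), captions)                 # shape (2, 11, 5000)
```

At inference time such a decoder would typically be unrolled one token at a time (greedy or beam search) rather than fed the ground-truth report as above.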