{"title":"通过弱监督学习对胸部 X 光片进行特定解剖学进展分类","authors":"Ke Yu, Shantanu Ghosh, Zhexiong Liu, Christopher Deible, Clare B Poynton, Kayhan Batmanghelich","doi":"10.1148/ryai.230277","DOIUrl":null,"url":null,"abstract":"<p><p>Purpose To develop a machine learning approach for classifying disease progression in chest radiographs using weak labels automatically derived from radiology reports. Materials and Methods In this retrospective study, a twin neural network was developed to classify anatomy-specific disease progression into four categories: improved, unchanged, worsened, and new. A two-step weakly supervised learning approach was employed, pretraining the model on 243 008 frontal chest radiographs from 63 877 patients (mean age, 51.7 years ± 17.0 [SD]; 34 813 [55%] female) included in the MIMIC-CXR database and fine-tuning it on the subset with progression labels derived from consecutive studies. Model performance was evaluated for six pathologic observations on test datasets of unseen patients from the MIMIC-CXR database. Area under the receiver operating characteristic (AUC) analysis was used to evaluate classification performance. The algorithm is also capable of generating bounding-box predictions to localize areas of new progression. Recall, precision, and mean average precision were used to evaluate the new progression localization. One-tailed paired <i>t</i> tests were used to assess statistical significance. Results The model outperformed most baselines in progression classification, achieving macro AUC scores of 0.72 ± 0.004 for atelectasis, 0.75 ± 0.007 for consolidation, 0.76 ± 0.017 for edema, 0.81 ± 0.006 for effusion, 0.7 ± 0.032 for pneumonia, and 0.69 ± 0.01 for pneumothorax. For new observation localization, the model achieved mean average precision scores of 0.25 ± 0.03 for atelectasis, 0.34 ± 0.03 for consolidation, 0.33 ± 0.03 for edema, and 0.31 ± 0.03 for pneumothorax. Conclusion Disease progression classification models were developed on a large chest radiograph dataset, which can be used to monitor interval changes and detect new pathologic conditions on chest radiographs. <b>Keywords:</b> Prognosis, Unsupervised Learning, Transfer Learning, Convolutional Neural Network (CNN), Emergency Radiology, Named Entity Recognition <i>Supplemental material is available for this article.</i> © RSNA, 2024 See also commentary by Alves and Venkadesh in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230277"},"PeriodicalIF":8.1000,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11427915/pdf/","citationCount":"0","resultStr":"{\"title\":\"Anatomy-specific Progression Classification in Chest Radiographs via Weakly Supervised Learning.\",\"authors\":\"Ke Yu, Shantanu Ghosh, Zhexiong Liu, Christopher Deible, Clare B Poynton, Kayhan Batmanghelich\",\"doi\":\"10.1148/ryai.230277\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Purpose To develop a machine learning approach for classifying disease progression in chest radiographs using weak labels automatically derived from radiology reports. Materials and Methods In this retrospective study, a twin neural network was developed to classify anatomy-specific disease progression into four categories: improved, unchanged, worsened, and new. 
A two-step weakly supervised learning approach was employed, pretraining the model on 243 008 frontal chest radiographs from 63 877 patients (mean age, 51.7 years ± 17.0 [SD]; 34 813 [55%] female) included in the MIMIC-CXR database and fine-tuning it on the subset with progression labels derived from consecutive studies. Model performance was evaluated for six pathologic observations on test datasets of unseen patients from the MIMIC-CXR database. Area under the receiver operating characteristic (AUC) analysis was used to evaluate classification performance. The algorithm is also capable of generating bounding-box predictions to localize areas of new progression. Recall, precision, and mean average precision were used to evaluate the new progression localization. One-tailed paired <i>t</i> tests were used to assess statistical significance. Results The model outperformed most baselines in progression classification, achieving macro AUC scores of 0.72 ± 0.004 for atelectasis, 0.75 ± 0.007 for consolidation, 0.76 ± 0.017 for edema, 0.81 ± 0.006 for effusion, 0.7 ± 0.032 for pneumonia, and 0.69 ± 0.01 for pneumothorax. For new observation localization, the model achieved mean average precision scores of 0.25 ± 0.03 for atelectasis, 0.34 ± 0.03 for consolidation, 0.33 ± 0.03 for edema, and 0.31 ± 0.03 for pneumothorax. Conclusion Disease progression classification models were developed on a large chest radiograph dataset, which can be used to monitor interval changes and detect new pathologic conditions on chest radiographs. <b>Keywords:</b> Prognosis, Unsupervised Learning, Transfer Learning, Convolutional Neural Network (CNN), Emergency Radiology, Named Entity Recognition <i>Supplemental material is available for this article.</i> © RSNA, 2024 See also commentary by Alves and Venkadesh in this issue.</p>\",\"PeriodicalId\":29787,\"journal\":{\"name\":\"Radiology-Artificial Intelligence\",\"volume\":\" \",\"pages\":\"e230277\"},\"PeriodicalIF\":8.1000,\"publicationDate\":\"2024-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11427915/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Radiology-Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1148/ryai.230277\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiology-Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1148/ryai.230277","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
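The twin ("Siamese") classifier described in Materials and Methods pairs a prior and a current radiograph of the same patient and predicts one of the four progression categories per pathologic observation. The sketch below illustrates that general idea in PyTorch; the backbone choice (ResNet-50), feature concatenation, and per-observation linear heads are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of a twin ("Siamese") progression classifier.
# Assumptions (not from the paper): shared ResNet-50 encoder, concatenated
# features, and one 4-way classification head per pathologic observation.
import torch
import torch.nn as nn
import torchvision.models as models

PROGRESSION_CLASSES = ["improved", "unchanged", "worsened", "new"]
OBSERVATIONS = ["atelectasis", "consolidation", "edema",
                "effusion", "pneumonia", "pneumothorax"]

class TwinProgressionClassifier(nn.Module):
    def __init__(self, n_classes: int = len(PROGRESSION_CLASSES)):
        super().__init__()
        backbone = models.resnet50(weights=None)  # the pretraining step would supply these weights
        backbone.fc = nn.Identity()               # keep the 2048-dim pooled features
        self.encoder = backbone
        # One progression head per pathologic observation.
        self.heads = nn.ModuleDict({
            obs: nn.Linear(2 * 2048, n_classes) for obs in OBSERVATIONS
        })

    def forward(self, prior: torch.Tensor, current: torch.Tensor) -> dict:
        # Both radiographs pass through the same encoder (weight sharing = "twin").
        f_prior = self.encoder(prior)      # (B, 2048)
        f_current = self.encoder(current)  # (B, 2048)
        paired = torch.cat([f_prior, f_current], dim=1)
        return {obs: head(paired) for obs, head in self.heads.items()}

if __name__ == "__main__":
    model = TwinProgressionClassifier()
    prior = torch.randn(2, 3, 224, 224)    # batch of prior studies (grayscale replicated to 3 channels)
    current = torch.randn(2, 3, 224, 224)  # batch of follow-up studies
    logits = model(prior, current)
    print(logits["effusion"].shape)        # torch.Size([2, 4])
```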
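The macro AUC values in the Results section summarize multiclass performance as one-vs-rest AUC averaged over the four progression classes. The snippet below shows one way such a score could be computed with scikit-learn; the labels and probabilities are synthetic, and the paper's exact averaging protocol may differ.

```python
# Illustrative macro AUC computation for a 4-class progression problem.
# Synthetic data; the paper's exact evaluation protocol may differ.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 4  # classes: improved / unchanged / worsened / new

y_true = rng.integers(0, n_classes, size=n_samples)  # ground-truth progression labels
logits = rng.normal(size=(n_samples, n_classes))
y_prob = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax scores

# Macro AUC: one-vs-rest AUC per class, averaged with equal weight.
macro_auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")
print(f"macro AUC = {macro_auc:.3f}")
```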