Deep Learning Applied to Diffusion-weighted Imaging for Differentiating Malignant from Benign Breast Tumors without Lesion Segmentation

Authors: Mami Iima, Ryosuke Mizuno, Masako Kataoka, Kazuki Tsuji, Toshiki Yamazaki, Akihiko Minami, Maya Honda, Keiho Imanishi, Masahiro Takada, Yuji Nakamoto
Journal: Radiology: Artificial Intelligence (JCR Q1, Computer Science, Artificial Intelligence; impact factor 8.1)
DOI: 10.1148/ryai.240206 (https://doi.org/10.1148/ryai.240206)
Published: 2024-11-20 (Journal Article, "Just Accepted"; e240206)

Abstract

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.

Purpose: To evaluate and compare the performance of different artificial intelligence (AI) models in differentiating between benign and malignant breast tumors on diffusion-weighted imaging (DWI), including comparison with radiologist assessments.

Materials and Methods: In this retrospective study, patients with breast lesions underwent 3-T breast MRI from May 2019 to March 2022. In addition to T1-weighted, T2-weighted, and contrast-enhanced imaging, DWI was acquired with five b-values (0, 200, 800, 1000, and 1500 s/mm²). The DWI data, split into training/tuning and test sets, were used to develop and assess the AI models: a small 2D convolutional neural network (CNN), ResNet18, EfficientNet-B0, and a 3D CNN. The performance of the DWI-based models in differentiating benign from malignant breast tumors was compared with that of radiologists assessing standard breast MRI, with diagnostic performance evaluated using receiver operating characteristic (ROC) analysis. The study also examined the effect of data augmentation (A: random elastic deformation; B: random affine transformation/random noise; C: mixup) on model performance.

Results: A total of 334 breast lesions in 293 patients (mean age ± SD, 56.5 ± 15.1 years; all female) were analyzed. The 2D CNN models outperformed the 3D CNN on the test set (area under the ROC curve [AUC] across data augmentation methods: 0.83-0.88 versus 0.75-0.76). There was no evidence of a difference in performance between the small 2D CNN with augmentations A and B (AUC 0.88) and the radiologists (AUC 0.86) on the test set (P = .64), nor in specificity (81.4% versus 72.1%; P = .64) or sensitivity (85.9% versus 98.8%; P = .64).

Conclusion: AI models, particularly a small 2D CNN, showed good performance in differentiating malignant from benign breast tumors on DWI without requiring manual lesion segmentation.

©RSNA, 2024.
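Of the three augmentation strategies compared in the study, mixup (method C) is the only one that blends pairs of training samples rather than perturbing a single image. A minimal NumPy sketch of the general mixup technique follows; this is an illustration, not the authors' implementation, and the `alpha` default is a commonly used value rather than one taken from the paper:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    """Blend two samples and their labels with one Beta-sampled weight."""
    # Draw the mixing coefficient lambda from Beta(alpha, alpha), then
    # apply the same convex combination to both images and (soft) labels.
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x_mixed = lam * x1 + (1 - lam) * x2
    y_mixed = lam * y1 + (1 - lam) * y2
    return x_mixed, y_mixed
```

Because labels are mixed with the same weight as the images, the network is trained against soft targets, which tends to regularize small models trained on limited data, such as the small 2D CNN here.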
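The diagnostic metrics reported above (AUC, sensitivity, specificity) can be computed from model scores and ground-truth labels in a few lines. The scores and labels below are made-up toy values for illustration only, not study data, and the 0.5 threshold is an arbitrary operating point:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy example: hypothetical malignancy probabilities and binary labels
# (1 = malignant, 0 = benign). Illustrative values only.
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores = np.array([0.10, 0.40, 0.35, 0.20, 0.80, 0.30, 0.90, 0.70])

# Area under the ROC curve, the threshold-free summary used in the study.
auc = roc_auc_score(labels, scores)

# Sensitivity and specificity at a fixed operating threshold of 0.5.
pred = scores >= 0.5
sensitivity = np.sum(pred & (labels == 1)) / np.sum(labels == 1)
specificity = np.sum(~pred & (labels == 0)) / np.sum(labels == 0)
```

Note that sensitivity and specificity depend on the chosen threshold, which is why the study compares models and radiologists primarily by AUC.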
Citations: 0