Abstract 181: High-accuracy breast cancer detection in mammography using EfficientNet and end-to-end training

D. Petrini, C. Shimizu, G. Valente, Guilherme Folgueira, Guilherme Apolinario Silva Novaes, M. H. Katayama, P. Serio, R. A. Roela, T. Tucunduva, M. A. K. Folgueira, Hae Yong Kim
{"title":"Abstract 181: High-accuracy breast cancer detection in mammography using EfficientNet and end-to-end training","authors":"D. Petrini, C. Shimizu, G. Valente, Guilherme Folgueira, Guilherme Apolinario Silva Novaes, M. H. Katayama, P. Serio, R. A. Roela, T. Tucunduva, M. A. K. Folgueira, Hae Yong Kim","doi":"10.1158/1538-7445.AM2021-181","DOIUrl":null,"url":null,"abstract":"Background:Breast cancer (BC) is the second most common cancer among women. BC screening is usually based on mammography interpreted by radiologists. Recently, some researchers have used deep learning to automatically diagnose BC in mammography and so assist radiologists. The progress of BC detection algorithms can be measured by their performance on public datasets. The CBIS-DDSM is a widely used public dataset composed of scanned mammographies, equally divided into malignant and non-malignant (benign) images. Each image is accompanied by the segmentation of the lesion. Shen et al. (Nature Sci. Rep., 2019) presented a BC detection algorithm using an “end-to-end” approach to train deep neural networks. In this algorithm, a patch classifier is first trained to classify local image patches. The patch classifier9s weights are then used to initialize the whole image classifier, that is refined using datasets with the cancer status of the whole image. They achieved an AUC of 0.87 [0.84, 0.90] in classifying CBIS-DDSM images, using their best single-model, single-view breast classifier. They used ResNet (He et al., CVPR 2016) as the basis of their algorithm. Our hypothesis was that replacing the old ResNet with the modern EfficientNet (Tan et al., arXiv 2019) and MobileNetV2 (Sandler et al.,CVPR 2018) would result in greater accuracy. Methods:We tested many different models, to conclude that the best model is obtained using EfficientNet-B4 as the base model, with a MobileNetV2 block at the top, followed by a dense layer with two output categories. We trained the patch classifier using 52,528 patches with 224x224 pixels extracted from CBIS-DDSM. From each image, we extracted 20 patches: 10 patches containing the lesion and 10 from the background (without lesion). The patch classifier weights were then used to initialize the whole image classifier, that was trained using the end-to-end approach with CBIS-DDSM images resized to 1152x896 pixels, with data augmentation. The training was performed using a step learning rate of 1e-4 for the first 20 epochs then 1e-5 for the remaining 10 and batch size of 4, using 10-fold cross-validation. We used 81% of the dataset for training, 9% for validation and 10% for testing. Results:We obtained an AUC of 0.8963±0.06, using a single-model, single-view classifier and without test-time data augmentation. Conclusions:Using EfficientNet and MobileNetV2 as the basis of the BC detection algorithm (instead of ResNet), we obtained an improvement in classifying CBIS-DDSM images into malignant/non-malignant: AUC has increased from 0.87 to 0.896. Our AUC is also larger than other recent papers in the literature, such as Shu et al. (IEEE Trans Med. Image, 2020) that achieved an AUC of 0.838 in the same CBIS-DDSM dataset. Citation Format: Daniel G. Petrini, Carlos Shimizu, Gabriel V. Valente, Guilherme Folgueira, Guilherme A. Novaes, Maria L. Katayama, Pedro Serio, Rosimeire A. Roela, Tatiana C. Tucunduva, Maria Aparecida A. Folgueira, Hae Y. Kim. High-accuracy breast cancer detection in mammography using EfficientNet and end-to-end training [abstract]. 
In: Proceedings of the American Association for Cancer Research Annual Meeting 2021; 2021 Apr 10-15 and May 17-21. Philadelphia (PA): AACR; Cancer Res 2021;81(13_Suppl):Abstract nr 181.","PeriodicalId":73617,"journal":{"name":"Journal of bioinformatics and systems biology : Open access","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of bioinformatics and systems biology : Open access","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1158/1538-7445.AM2021-181","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Background: Breast cancer (BC) is the second most common cancer among women. BC screening is usually based on mammography interpreted by radiologists. Recently, some researchers have used deep learning to automatically diagnose BC in mammography and thereby assist radiologists. The progress of BC detection algorithms can be measured by their performance on public datasets. CBIS-DDSM is a widely used public dataset composed of scanned mammograms, equally divided into malignant and non-malignant (benign) images. Each image is accompanied by a segmentation of the lesion. Shen et al. (Nature Sci. Rep., 2019) presented a BC detection algorithm that uses an "end-to-end" approach to train deep neural networks. In this algorithm, a patch classifier is first trained to classify local image patches. The patch classifier's weights are then used to initialize the whole-image classifier, which is refined using datasets labeled only with the cancer status of the whole image. They achieved an AUC of 0.87 [0.84, 0.90] in classifying CBIS-DDSM images, using their best single-model, single-view breast classifier. They used ResNet (He et al., CVPR 2016) as the basis of their algorithm. Our hypothesis was that replacing the older ResNet with the more recent EfficientNet (Tan et al., arXiv 2019) and MobileNetV2 (Sandler et al., CVPR 2018) would result in greater accuracy.

Methods: We tested many different models and concluded that the best model uses EfficientNet-B4 as the base, with a MobileNetV2 block on top, followed by a dense layer with two output categories. We trained the patch classifier using 52,528 patches of 224x224 pixels extracted from CBIS-DDSM. From each image, we extracted 20 patches: 10 containing the lesion and 10 from the background (without lesion). The patch classifier weights were then used to initialize the whole-image classifier, which was trained using the end-to-end approach with CBIS-DDSM images resized to 1152x896 pixels, with data augmentation. Training used a step learning-rate schedule (1e-4 for the first 20 epochs, then 1e-5 for the remaining 10), a batch size of 4, and 10-fold cross-validation. We used 81% of the dataset for training, 9% for validation, and 10% for testing.

Results: We obtained an AUC of 0.8963 ± 0.06, using a single-model, single-view classifier and without test-time data augmentation.

Conclusions: Using EfficientNet and MobileNetV2 as the basis of the BC detection algorithm (instead of ResNet), we obtained an improvement in classifying CBIS-DDSM images into malignant/non-malignant: the AUC increased from 0.87 to 0.896. Our AUC is also higher than those of other recent papers in the literature, such as Shu et al. (IEEE Trans Med Imaging, 2020), who achieved an AUC of 0.838 on the same CBIS-DDSM dataset.

Citation Format: Daniel G. Petrini, Carlos Shimizu, Gabriel V. Valente, Guilherme Folgueira, Guilherme A. Novaes, Maria L. Katayama, Pedro Serio, Rosimeire A. Roela, Tatiana C. Tucunduva, Maria Aparecida A. Folgueira, Hae Y. Kim. High-accuracy breast cancer detection in mammography using EfficientNet and end-to-end training [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2021; 2021 Apr 10-15 and May 17-21. Philadelphia (PA): AACR; Cancer Res 2021;81(13_Suppl):Abstract nr 181.
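To make the patch-sampling scheme in the Methods concrete, the sketch below shows one way to draw 10 lesion-centered and 10 background 224x224 patches per image using the lesion segmentation provided with CBIS-DDSM. The function name, the NumPy implementation, and the simplified lesion/background labels are assumptions for illustration only; the abstract does not describe the authors' code, and the actual patch labels may additionally encode lesion type and malignancy.

```python
import numpy as np

def sample_patches(image, lesion_mask, patch_size=224,
                   n_lesion=10, n_background=10, seed=None):
    """Illustrative sketch (not the authors' code): sample 224x224 patches,
    10 centered on the lesion mask and 10 from lesion-free background."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    ys, xs = np.nonzero(lesion_mask)          # lesion pixel coordinates
    patches, labels = [], []

    def crop(cy, cx):
        # Clamp the crop window so it stays entirely inside the image.
        y0 = int(np.clip(cy - patch_size // 2, 0, h - patch_size))
        x0 = int(np.clip(cx - patch_size // 2, 0, w - patch_size))
        return (image[y0:y0 + patch_size, x0:x0 + patch_size],
                lesion_mask[y0:y0 + patch_size, x0:x0 + patch_size])

    # Lesion patches: center each crop on a randomly chosen lesion pixel.
    for _ in range(n_lesion):
        i = rng.integers(len(ys))
        patch, _ = crop(ys[i], xs[i])
        patches.append(patch)
        labels.append(1)                      # simplified label: lesion present

    # Background patches: resample until the crop contains no lesion pixels.
    n_bg = 0
    while n_bg < n_background:
        patch, mask_crop = crop(rng.integers(h), rng.integers(w))
        if mask_crop.sum() == 0:
            patches.append(patch)
            labels.append(0)                  # simplified label: background
            n_bg += 1

    return np.stack(patches), np.array(labels)
```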
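The whole-image classifier is described as an EfficientNet-B4 base with a MobileNetV2 block on top, followed by a dense layer with two output categories. Below is a minimal Keras sketch of one plausible reading of that description; the inverted-residual block parameters, the global pooling layer, and the exact wiring are assumptions that the abstract does not specify.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inverted_residual_block(x, filters, expansion=6):
    """MobileNetV2-style block: 1x1 expand -> 3x3 depthwise -> 1x1 project."""
    in_channels = x.shape[-1]
    h = layers.Conv2D(in_channels * expansion, 1, padding="same", use_bias=False)(x)
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(max_value=6.0)(h)
    h = layers.DepthwiseConv2D(3, padding="same", use_bias=False)(h)
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(max_value=6.0)(h)
    h = layers.Conv2D(filters, 1, padding="same", use_bias=False)(h)
    h = layers.BatchNormalization()(h)
    if in_channels == filters:
        h = layers.Add()([x, h])              # residual shortcut when shapes match
    return h

def build_whole_image_classifier(input_shape=(1152, 896, 3)):
    """EfficientNet-B4 backbone + one MobileNetV2-style block + 2-way softmax."""
    # weights=None here; in the paper the backbone is initialized from the
    # trained patch classifier rather than from scratch.
    backbone = tf.keras.applications.EfficientNetB4(
        include_top=False, weights=None, input_shape=input_shape)
    x = inverted_residual_block(backbone.output,
                                filters=backbone.output.shape[-1])
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(2, activation="softmax")(x)
    return tf.keras.Model(backbone.input, outputs)
```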
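Finally, the two-stage end-to-end training and the reported schedule (1e-4 for the first 20 epochs, 1e-5 for the remaining 10, batch size 4) can be sketched as follows, reusing `build_whole_image_classifier` from the previous snippet. The optimizer, loss, metric, and the name-based weight-transfer helper are assumptions: the abstract only states that the patch-classifier weights initialize the whole-image classifier. The dummy dataset stands in for a real CBIS-DDSM pipeline.

```python
import tensorflow as tf

def step_lr(epoch, lr):
    # Schedule reported in the abstract: 1e-4 for epochs 0-19, then 1e-5.
    return 1e-4 if epoch < 20 else 1e-5

def transfer_matching_weights(patch_model, whole_image_model):
    """One possible way to realize 'patch-classifier weights initialize the
    whole-image classifier': copy layers that share a name and weight shapes."""
    donors = {l.name: l for l in patch_model.layers}
    for layer in whole_image_model.layers:
        donor = donors.get(layer.name)
        if donor is None:
            continue
        if [w.shape for w in donor.get_weights()] == \
           [w.shape for w in layer.get_weights()]:
            layer.set_weights(donor.get_weights())

model = build_whole_image_classifier()
# transfer_matching_weights(patch_model, model)  # patch_model: stage-1 classifier

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # optimizer assumed
    loss="sparse_categorical_crossentropy",
    metrics=[tf.keras.metrics.AUC(name="auc")])

# Placeholder data; replace with a CBIS-DDSM pipeline of 1152x896 images,
# batch size 4, with data augmentation.
dummy_x = tf.zeros((4, 1152, 896, 3))
dummy_y = tf.zeros((4,), dtype=tf.int32)
train_ds = tf.data.Dataset.from_tensor_slices((dummy_x, dummy_y)).batch(4)

model.fit(
    train_ds,
    validation_data=train_ds,
    epochs=30,
    callbacks=[tf.keras.callbacks.LearningRateScheduler(step_lr)])
```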