Abstract 183: End-to-end training of convolutional network for breast cancer detection in two-view mammography

D. Petrini, C. Shimizu, G. Valente, Guilherme Folgueira, Guilherme Apolinario Silva Novaes, M. H. Katayama, P. Serio, R. A. Roela, T. Tucunduva, M. A. K. Folgueira, Hae Yong Kim
{"title":"Abstract 183: End-to-end training of convolutional network for breast cancer detection in two-view mammography","authors":"D. Petrini, C. Shimizu, G. Valente, Guilherme Folgueira, Guilherme Apolinario Silva Novaes, M. H. Katayama, P. Serio, R. A. Roela, T. Tucunduva, M. A. K. Folgueira, Hae Yong Kim","doi":"10.1158/1538-7445.AM2021-183","DOIUrl":null,"url":null,"abstract":"Background:Early computer-aided detection systems for mammography have failed to improve the performance of radiologists. With the remarkable success of deep learning, some recent studies have described computer systems with similar or even superior performance to that of human experts. Among them, Shen et al. (Nature Sci. Rep., 2019) present a promising “end-to-end” training approach. Instead of training a convolutional net with whole mammograms, they first train a “patch classifier” that recognizes lesions in small subimages. Then, they generalize the patch classifier to “whole image classifier” using the property of fully convolutional networks and the end-to-end approach. Using this strategy, the authors have obtained a per-image AUC of 0.87 [0.84, 0.90] in the CBIS-DDSM dataset. Standard mammography consists of two views for each breast: bilateral craniocaudal (CC) and mediolateral oblique (MLO). The algorithm proposed by Shen et al. processes only single-view mammography. We extend their work, presenting the end-to-end training of convolutional net for two-view mammography. Methods:First, we reproduced Shen et al.9s work, using the CBIS-DDSM dataset. We trained a ResNet50-based net for classifying patches with 224x224 pixels using segmented lesions. Then, the weights of the patch classifier were transferred to the whole image single-view classifier, obtained by removing the dense layers from the patch classifier and stacking one ResNet block at the top. This single-view classifier was trained using full images from the same dataset. 
Trying to replicate Shen et al.9s work, we obtained an AUC of 0.8524±0.0560, less than 0.87 reported in the original paper. We attribute this worsening to the fact that we are using only 2260 images with two views, instead of 2478 images from the original work. Finally, we built the two-view classifier that receives CC and MLO views as input. This classifier has inside two copies of the patch classifier, loaded with the weights from the single-view classifier. The features extracted by the two patch classifiers are concatenated and submitted to the ResNet block. The two-view classifier is end-to-end trained using full images, refining all its weights, including those inside the two patch classifiers. Results:The two-view classifier yielded an AUC of 0.9199±0.0623 in 5-fold cross-validation to classify mammographies into malignant/non-malignant, using single-model and without test-time data augmentation. This is better than the Shen et al.9s AUC (0.87), our single-view AUC (0.85). Zhang et al. (Plos One, 2020) present another two-view algorithm (without end-to-end training) with AUC of 0.95. However, this work cannot directly be compared with ours, as it was tested on a different set of images. Conclusions:We presented end-to-end training of convolutional net for two-view mammography. Our system9s AUC was 0.92, better than the 0.87 obtained by the previous single-view system. Citation Format: Daniel G. Petrini, Carlos Shimizu, Gabriel V. Valente, Guilherme Folgueira, Guilherme A. Novaes, Maria L. Katayama, Pedro Serio, Rosimeire A. Roela, Tatiana C. Tucunduva, Maria Aparecida A. Folgueira, Hae Y. Kim. End-to-end training of convolutional network for breast cancer detection in two-view mammography [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2021; 2021 Apr 10-15 and May 17-21. 
Philadelphia (PA): AACR; Cancer Res 2021;81(13_Suppl):Abstract nr 183.","PeriodicalId":73617,"journal":{"name":"Journal of bioinformatics and systems biology : Open access","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of bioinformatics and systems biology : Open access","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1158/1538-7445.AM2021-183","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Background: Early computer-aided detection systems for mammography failed to improve the performance of radiologists. With the remarkable success of deep learning, some recent studies have described computer systems with performance similar or even superior to that of human experts. Among them, Shen et al. (Nature Sci. Rep., 2019) present a promising "end-to-end" training approach. Instead of training a convolutional net on whole mammograms, they first train a "patch classifier" that recognizes lesions in small subimages. Then they generalize the patch classifier to a "whole-image classifier" using the property of fully convolutional networks and the end-to-end approach. Using this strategy, the authors obtained a per-image AUC of 0.87 [0.84, 0.90] on the CBIS-DDSM dataset. Standard mammography consists of two views of each breast: bilateral craniocaudal (CC) and mediolateral oblique (MLO). The algorithm proposed by Shen et al. processes only single-view mammography. We extend their work, presenting end-to-end training of a convolutional net for two-view mammography.

Methods: First, we reproduced Shen et al.'s work using the CBIS-DDSM dataset. We trained a ResNet50-based net to classify 224x224-pixel patches using segmented lesions. Then the weights of the patch classifier were transferred to the whole-image single-view classifier, obtained by removing the dense layers from the patch classifier and stacking one ResNet block on top. This single-view classifier was trained using full images from the same dataset. Replicating Shen et al.'s work, we obtained an AUC of 0.8524±0.0560, below the 0.87 reported in the original paper. We attribute this drop to the fact that we used only the 2,260 images with two views, instead of the 2,478 images of the original work. Finally, we built the two-view classifier, which receives the CC and MLO views as input. This classifier contains two copies of the patch classifier, loaded with the weights from the single-view classifier. The features extracted by the two patch classifiers are concatenated and fed to the ResNet block. The two-view classifier is trained end to end on full images, refining all of its weights, including those inside the two patch classifiers.

Results: The two-view classifier yielded an AUC of 0.9199±0.0623 in 5-fold cross-validation for classifying mammograms as malignant/non-malignant, using a single model and no test-time data augmentation. This is better than both Shen et al.'s AUC (0.87) and our single-view AUC (0.85). Zhang et al. (PLoS One, 2020) present another two-view algorithm (without end-to-end training) with an AUC of 0.95; however, that work cannot be compared directly with ours, as it was tested on a different set of images.

Conclusions: We presented end-to-end training of a convolutional net for two-view mammography. Our system's AUC was 0.92, better than the 0.87 obtained by the previous single-view system.

Citation Format: Daniel G. Petrini, Carlos Shimizu, Gabriel V. Valente, Guilherme Folgueira, Guilherme A. Novaes, Maria L. Katayama, Pedro Serio, Rosimeire A. Roela, Tatiana C. Tucunduva, Maria Aparecida A. Folgueira, Hae Y. Kim. End-to-end training of convolutional network for breast cancer detection in two-view mammography [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2021; 2021 Apr 10-15 and May 17-21. Philadelphia (PA): AACR; Cancer Res 2021;81(13_Suppl):Abstract nr 183.
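The two-view fusion described in the Methods can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: the real backbones are ResNet50-based patch classifiers initialized from the single-view weights, whereas here a toy convolutional backbone and a single simplified residual block stand in for them; all layer sizes are assumptions.

```python
import torch
import torch.nn as nn


class ResBlock(nn.Module):
    """Simplified residual block standing in for the ResNet block the
    authors stack on top (exact configuration is an assumption)."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # residual connection


class TwoViewClassifier(nn.Module):
    """One backbone copy per view (CC and MLO); their feature maps are
    concatenated channel-wise, passed through a ResNet block, pooled, and
    mapped to a malignant/non-malignant logit. Because everything is one
    module, training it refines all weights end to end, including both
    backbone copies."""

    def __init__(self, channels=64):
        super().__init__()

        def make_backbone():
            # Toy stand-in for the ResNet50 patch classifier with its
            # dense layers removed (hypothetical architecture).
            return nn.Sequential(
                nn.Conv2d(1, channels, 7, stride=4, padding=3),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                ResBlock(channels),
            )

        self.cc_backbone = make_backbone()   # craniocaudal view
        self.mlo_backbone = make_backbone()  # mediolateral oblique view
        self.fusion = ResBlock(2 * channels)  # acts on concatenated features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(2 * channels, 1)  # malignancy logit

    def forward(self, cc, mlo):
        feats = torch.cat([self.cc_backbone(cc), self.mlo_backbone(mlo)], dim=1)
        feats = self.pool(self.fusion(feats)).flatten(1)
        return self.head(feats)


model = TwoViewClassifier()
cc = torch.randn(2, 1, 224, 224)   # batch of CC views
mlo = torch.randn(2, 1, 224, 224)  # matching MLO views
logits = model(cc, mlo)
print(logits.shape)  # torch.Size([2, 1])
```

In the paper's setup each backbone would instead be loaded with the trained single-view weights before the end-to-end fine-tuning pass.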