Improving Classification of Breast Cancer by Utilizing the Image Pyramids of Whole-Slide Imaging and Multi-Scale Convolutional Neural Networks.

Li Tong, Ying Sha, May D Wang
Proceedings: Annual International Computer Software and Applications Conference (COMPSAC), July 2019. DOI: 10.1109/compsac.2019.00105. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7302109/pdf/nihms-1595604.pdf

Abstract

Whole-slide imaging (WSI) is the digitization of conventional glass slides. Automatic computer-aided diagnosis (CAD) based on WSI enables digital pathology and the integration of pathology with other data such as genomic biomarkers. Numerous computational algorithms have been developed for WSI, most of which take image patches cropped from the highest-resolution level as input. However, these models exploit only the local information within each patch and lose the connections between neighboring patches, which may carry important context information. In this paper, we propose a novel multi-scale convolutional network (ConvNet) that utilizes the built-in image pyramids of WSI. For concentric image patches cropped at the same location from different resolution levels, we hypothesize that the extra input images from lower magnifications will provide context information that enhances the prediction for patch images. We build corresponding ConvNets for feature representation and then combine the extracted features by 1) late fusion: concatenating or averaging the feature vectors before classification, or 2) early fusion: merging the ConvNet feature maps. We have applied the multi-scale networks to a benchmark breast cancer WSI dataset. Extensive experiments demonstrate that our multi-scale networks utilizing the WSI image pyramids achieve higher accuracy for the classification of breast cancer. The late fusion method that averages the feature vectors reaches the highest accuracy (81.50%), which is promising for multi-scale analysis of WSI.
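The two late-fusion strategies described in the abstract can be sketched as follows. This is a minimal illustration using plain Python lists in place of real ConvNet feature vectors; the function names (`extract_features`, `late_fusion_average`, `late_fusion_concat`) and the toy feature extractor are hypothetical, not taken from the paper's implementation.

```python
def extract_features(patch, scale):
    # Stand-in for a per-scale ConvNet: returns a fixed-length
    # "feature vector" derived from the patch values. In the paper,
    # each pyramid level has its own ConvNet producing such a vector.
    return [sum(patch) * (scale + 1), min(patch), max(patch)]

def late_fusion_average(feature_vectors):
    # Element-wise average of the per-scale feature vectors before
    # the final classifier (the variant the abstract reports as best,
    # at 81.50% accuracy).
    n = len(feature_vectors)
    return [sum(vals) / n for vals in zip(*feature_vectors)]

def late_fusion_concat(feature_vectors):
    # Concatenate the per-scale feature vectors into one long vector,
    # the alternative late-fusion variant.
    return [v for vec in feature_vectors for v in vec]

# Concentric patches cropped at the same location from three pyramid
# levels (highest magnification first); toy pixel values only.
patches = [[1, 2, 3], [2, 4, 6], [4, 8, 12]]
features = [extract_features(p, s) for s, p in enumerate(patches)]

avg = late_fusion_average(features)   # one vector, same length as each input
cat = late_fusion_concat(features)    # one vector, 3x the per-scale length
```

Early fusion, by contrast, would merge the intermediate feature maps inside the networks rather than the final vectors, so the classifier head sees a jointly learned representation; the averaging variant above has the practical advantage that the fused vector keeps the same dimensionality regardless of how many pyramid levels are used.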
