Improving Classification of Breast Cancer by Utilizing the Image Pyramids of Whole-Slide Imaging and Multi-Scale Convolutional Neural Networks

Li Tong, Ying Sha, May D Wang

Proceedings: Annual International Computer Software and Applications Conference. COMPSAC, vol. 2019, pp. 696-703. Published 2019-07-01 (Epub 2019-07-09).
DOI: 10.1109/compsac.2019.00105
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7302109/pdf/nihms-1595604.pdf
Abstract
Whole-slide imaging (WSI) is the digitization of conventional glass slides. Automatic computer-aided diagnosis (CAD) based on WSI enables digital pathology and the integration of pathology with other data such as genomic biomarkers. Numerous computational algorithms have been developed for WSI, most of which take image patches cropped from the highest-resolution level as input. However, these models exploit only the local information within each patch and lose the connections between neighboring patches, which may carry important context. In this paper, we propose a novel multi-scale convolutional network (ConvNet) that utilizes the built-in image pyramids of WSI. For concentric image patches cropped at the same location from different resolution levels, we hypothesize that the extra input images from lower magnifications provide context that enhances the prediction for each patch. We build a corresponding ConvNet for feature representation at each scale and then combine the extracted features by 1) late fusion: concatenating or averaging the feature vectors before classification, or 2) early fusion: merging the ConvNet feature maps. We apply the multi-scale networks to a benchmark breast cancer WSI dataset. Extensive experiments demonstrate that our multi-scale networks utilizing the WSI image pyramids achieve higher accuracy for the classification of breast cancer. The late fusion method that averages the feature vectors reaches the highest accuracy (81.50%), which is promising for multi-scale analysis of WSI.
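The two fusion strategies described in the abstract can be sketched in a few lines. This is a minimal illustration with NumPy, not the paper's implementation: the feature dimensions, magnification labels, and variable names below are invented for clarity, and the actual ConvNet backbones that would produce these features are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outputs for one patch location at two magnification levels
# (e.g. a high-resolution patch and its concentric lower-magnification
# context patch from the WSI image pyramid).

# --- Late fusion: combine the final feature *vectors* ----------------------
feat_high = rng.random(4)  # feature vector from the highest-resolution patch
feat_low = rng.random(4)   # feature vector from the lower-magnification patch

fused_concat = np.concatenate([feat_high, feat_low])  # 8-dim classifier input
fused_avg = (feat_high + feat_low) / 2.0              # 4-dim classifier input
# (averaging is the variant the abstract reports as most accurate, 81.50%)

# --- Early fusion: merge intermediate feature *maps* -----------------------
# Feature maps of shape (channels, height, width); concatenating along the
# channel axis is one common way to merge maps before later conv layers.
map_high = rng.random((16, 7, 7))
map_low = rng.random((16, 7, 7))
fused_maps = np.concatenate([map_high, map_low], axis=0)  # shape (32, 7, 7)

print(fused_concat.shape, fused_avg.shape, fused_maps.shape)
```

In late fusion the features are combined only after each scale's network has finished, so the fused vector feeds a final classifier; in early fusion the merged maps would still pass through further shared layers.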