
Journal of Pathology Informatics — Latest Publications

Comparing ensemble methods combined with different aggregating models using micrograph cell segmentation as an initial application example
Q2 Medicine Pub Date : 2023-01-01 DOI: 10.1016/j.jpi.2023.100304
St. Göb , S. Sawant , F.X. Erick , C. Schmidkonz , A. Ramming , E.W. Lang , T. Wittenberg , Th.I. Götz

Strategies such as ensemble learning and averaging techniques try to reduce the variance of single deep neural networks. The focus of this study is on ensemble averaging techniques, fusing the results of differently initialized and trained networks. Using micrograph cell segmentation as an application example, various ensembles were initialized and formed during network training with the following methods: (a) random seeds, (b) L1-norm pruning, (c) variable numbers of training examples, and (d) a combination of the latter 2 items. Furthermore, different averaging methods in common use were evaluated in this study: the mean, the median, and the location parameter of an alpha-stable distribution fit to the histograms of class membership probabilities (CMPs), as well as a majority vote of the ensemble members. The performance of these methods is demonstrated and evaluated on a micrograph cell segmentation use case, employing a state-of-the-art deep convolutional neural network (DCNN) architecture based on the common VGG design. The study demonstrates that for this data set, the choice of the ensemble averaging method has only a marginal influence on the evaluation metrics (accuracy and Dice coefficient) used to measure segmentation performance. Nevertheless, for practical applications, a simple and fast estimate of the mean of the distribution is highly competitive with the more sophisticated representation of the CMP distributions by an alpha-stable distribution, and hence seems the most appropriate ensemble averaging method for this application.
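The fusion strategies compared in the abstract can be sketched as follows (an illustration of the general technique, not the authors' code; the alpha-stable location-parameter variant is omitted because it requires a distribution fit, e.g. scipy's `levy_stable`):

```python
import numpy as np

def fuse_ensemble(cmps, method="mean"):
    """Fuse class-membership probabilities (CMPs) from an ensemble.

    cmps: array of shape (n_members, n_classes, H, W) holding per-pixel
    class probabilities predicted by each ensemble member.
    Returns a (H, W) array of fused class labels.
    """
    if method == "mean":
        fused = cmps.mean(axis=0)
    elif method == "median":
        fused = np.median(cmps, axis=0)
    elif method == "majority":
        # each member votes with its argmax class; the fused label is the mode
        votes = cmps.argmax(axis=1)                      # (n_members, H, W)
        n_classes = cmps.shape[1]
        counts = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)])
        return counts.argmax(axis=0)                     # (H, W) labels
    else:
        raise ValueError(f"unknown method: {method}")
    return fused.argmax(axis=0)                          # (H, W) labels
```

As the study notes, for segmentation metrics like the Dice coefficient these variants tend to give very similar labels, which is why the cheap `mean` option comes out as the practical choice.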

Citations: 1
Ex vivo 3D scanning and specimen mapping in anatomic pathology
Q2 Medicine Pub Date : 2023-01-01 DOI: 10.1016/j.jpi.2022.100186
Alexander N. Perez , Kayvon F. Sharif , Erica Guelfi , Sophie Li , Alexis Miller , Kavita Prasad , Robert J. Sinard , James S. Lewis Jr , Michael C. Topf

Structured light three-dimensional (3D) scanning is a ubiquitous mainstay of object inspection and quality control in industrial manufacturing, and has recently been integrated into various medical disciplines. Photorealistic 3D scans can readily be acquired from fresh or formalin-fixed tissue and have potential for use within anatomic pathology (AP) in a variety of scenarios, ranging from direct clinical care to documentation and education. Methods for scanning and post-processing of fresh surgical specimens rely on relatively low-cost and technically simple procedures. Here, we demonstrate potential use of 3D scanning in surgical pathology in the form of a mixed-media pathology report with a novel post-scan virtual inking and marking technique to precisely demarcate areas of tissue sectioning and details of final tumor and margin status. We display a sample mixed-media pathology report (3D specimen map) which integrates 3D and conventional pathology reporting methods. Finally, we describe the potential utility of 3D specimen modeling in both didactic and experiential teaching of gross pathology lab procedures.

Citations: 5
H&E image analysis pipeline for quantifying morphological features
Q2 Medicine Pub Date : 2023-01-01 DOI: 10.1016/j.jpi.2023.100339
Valeria Ariotta , Oskari Lehtonen , Shams Salloum , Giulia Micoli , Kari Lavikka , Ville Rantanen , Johanna Hynninen , Anni Virtanen , Sampsa Hautaniemi

Detecting cell types from histopathological images is essential for various digital pathology applications. However, the large number of cells in whole-slide images (WSIs) necessitates automated analysis pipelines for efficient cell type detection. Herein, we present the hematoxylin and eosin (H&E) Image Processing pipeline (HEIP) for automated analysis of scanned H&E-stained slides. HEIP is flexible and modular open-source software that performs preprocessing, instance segmentation, and nuclei feature extraction. To evaluate its performance, we applied HEIP to extract cell types from ovarian high-grade serous carcinoma (HGSC) patient WSIs. HEIP showed high precision in instance segmentation, particularly for neoplastic and epithelial cells. We also show that there is a significant correlation between genomic ploidy values and morphological features, such as the major axis of the nucleus.
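The nuclear morphology feature mentioned above can be estimated directly from a segmentation mask; a minimal numpy-only sketch of the major-axis measurement (my illustration, not HEIP code — HEIP is a full pipeline built on instance segmentation):

```python
import numpy as np

def major_axis_length(mask):
    """Major-axis length of one segmented nucleus, estimated from the
    ellipse with the same second moments as the pixel cloud (the same
    idea regionprops uses; ddof details are ignored in this sketch)."""
    ys, xs = np.nonzero(mask)
    coords = np.stack([ys, xs], axis=1).astype(float)
    lam = np.linalg.eigvalsh(np.cov(coords, rowvar=False)).max()
    return 4.0 * np.sqrt(lam)

# Correlating such a morphology feature with per-sample genomic ploidy
# (hypothetical arrays) could then use, e.g.:
#   np.corrcoef(mean_major_axis_per_sample, ploidy_per_sample)[0, 1]
```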

Citations: 0
Proposing a hybrid technique of feature fusion and convolutional neural network for melanoma skin cancer detection
Q2 Medicine Pub Date : 2023-01-01 DOI: 10.1016/j.jpi.2023.100341
Md. Mahbubur Rahman , Mostofa Kamal Nasir , Md. Nur-A-Alam , Md. Saikat Islam Khan

Skin cancer is among the most common cancer types worldwide. Automatic identification of skin cancer is complicated because of the poor contrast and apparent resemblance between skin and lesions. The rate of human death can be significantly reduced if melanoma skin cancer is detected quickly using dermoscopy images. This research uses an anisotropic diffusion filtering method on dermoscopy images to remove multiplicative speckle noise. The fast bounding box (FBB) method is then applied to segment the skin cancer region. We also employ 2 feature extractors to represent images: the Hybrid Feature Extractor (HFE) and a VGG19-based convolutional neural network (CNN). The HFE combines 3 feature extraction approaches, namely Histogram of Oriented Gradients (HOG), Local Binary Pattern (LBP), and Speeded Up Robust Features (SURF), into a single fused feature vector. The CNN is also used to extract additional features from the test and training datasets. These 2 feature vectors are then fused to design the classification model. The proposed method is evaluated on 2 datasets, namely ISIC 2017 and the Academic Torrents dataset. Our proposed method achieves 99.85%, 91.65%, and 95.70% in terms of accuracy, sensitivity, and specificity, respectively, making it more successful than previously proposed machine learning algorithms.
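A rough sketch of how such handcrafted descriptors can be fused into one vector (my own simplified stand-ins: a whole-image orientation histogram in place of full block-wise HOG, a basic 8-neighbour LBP, and no SURF or CNN branch):

```python
import numpy as np

def hog_like(img, bins=9):
    """Coarse gradient-orientation histogram (HOG-style, whole image)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-8)

def lbp_hist(img):
    """Histogram of 8-neighbour local binary pattern codes."""
    c = img[1:-1, 1:-1]
    shifts = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        code |= (nb >= c).astype(int) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / (hist.sum() + 1e-8)

def fused_feature(img):
    """Concatenate the handcrafted descriptors into one feature vector."""
    return np.concatenate([hog_like(img), lbp_hist(img)])
```

In the paper's design, this handcrafted vector would be fused with CNN features before classification; that second branch is omitted here.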

Citations: 0
Pathology Informatics Summit 2022 David L. Lawrence Convention Center May 9-12 Pittsburgh, PA
Q2 Medicine Pub Date : 2023-01-01 DOI: 10.1016/j.jpi.2023.100325
Citations: 0
Artificial intelligence-based multi-class histopathologic classification of kidney neoplasms
Q2 Medicine Pub Date : 2023-01-01 DOI: 10.1016/j.jpi.2023.100299
Dibson D. Gondim , Khaleel I. Al-Obaidy , Muhammad T. Idrees , John N. Eble , Liang Cheng

Artificial intelligence (AI)-based techniques are increasingly being explored as an emerging ancillary technique for improving the accuracy and reproducibility of histopathological diagnosis. Renal cell carcinoma (RCC) is a malignancy responsible for 2% of cancer deaths worldwide. Given that RCC is a heterogeneous disease, accurate histopathological classification is essential to separate aggressive subtypes from indolent ones and benign mimickers. There are early promising results using AI to distinguish between 2 and 3 subtypes of RCC. However, it is not clear how an AI-based model designed for multiple RCC subtypes and benign mimickers would perform, a scenario closer to the real practice of pathology. A computational model was created using 252 whole slide images (WSIs) (clear cell RCC: 56, papillary RCC: 81, chromophobe RCC: 51, clear cell papillary RCC: 39, and metanephric adenoma: 6). 298,071 patches (350 × 350 pixels) were used to develop the AI-based image classifier. The model was applied to a secondary dataset and correctly classified 47/55 (85%) WSIs. This computational model showed excellent results except in distinguishing clear cell RCC from clear cell papillary RCC. Further validation using large multi-institutional datasets and prospective studies is needed to determine the potential for translation to clinical practice.
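The patch-based setup described (350 × 350-pixel tiles cut from WSIs) can be illustrated with a simple non-overlapping tiling; the stride and border handling here are my assumptions, not details from the paper:

```python
import numpy as np

def extract_patches(wsi, size=350, stride=350):
    """Tile a WSI-like array (H, W, C) into size×size patches,
    dropping partial tiles at the right/bottom borders."""
    h, w = wsi.shape[:2]
    patches = [
        wsi[y:y + size, x:x + size]
        for y in range(0, h - size + 1, stride)
        for x in range(0, w - size + 1, stride)
    ]
    if not patches:
        return np.empty((0, size, size, wsi.shape[2]), dtype=wsi.dtype)
    return np.stack(patches)
```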

Citations: 2
Imaging bridges pathology and radiology
Q2 Medicine Pub Date : 2023-01-01 DOI: 10.1016/j.jpi.2023.100298
Martin-Leo Hansmann , Frederick Klauschen , Wojciech Samek , Klaus-Robert Müller , Emmanuel Donnadieu , Sonja Scharf , Sylvia Hartmann , Ina Koch , Jörg Ackermann , Liron Pantanowitz , Hendrik Schäfer , Patrick Wurzel

In recent years, medical disciplines have moved closer together and rigid borders have been increasingly dissolved. The synergetic advantage of combining multiple disciplines is particularly important for radiology, nuclear medicine, and pathology to perform integrative diagnostics. In this review, we discuss how medical subdisciplines can be reintegrated in the future using state-of-the-art methods of digitization, data science, and machine learning. Integration of methods is made possible by the digitalization of radiological and nuclear medical images, as well as pathological images. 3D histology can become a valuable tool, not only for integration into radiological images but also for the visualization of cellular interactions, the so-called connectomes. In human pathology, it has recently become possible to image and calculate the movements and contacts of immunostained cells in fresh tissue explants. Recording the movement of a living cell is proving to be informative and makes it possible to study dynamic connectomes in the diagnosis of lymphoid tissue. By applying computational methods including data science and machine learning, new perspectives for analyzing and understanding diseases become possible.

Citations: 0
Validation of Remote Digital Pathology based diagnostic reporting of Frozen Sections from home
Q2 Medicine Pub Date : 2023-01-01 DOI: 10.1016/j.jpi.2023.100312
Rajiv Kumar Kaushal, Subhash Yadav, Ayushi Sahay, Nupur Karnik, Tushar Agrawal, Vinayak Dave, Nikhil Singh, Ashish Shah, Sangeeta B. Desai

Background

Despite the promising applications of whole-slide imaging (WSI) for frozen section (FS) diagnosis, its adoption for remote reporting is limited.

Objective

To assess the feasibility and performance of home-based remote digital consultation for FS diagnosis.

Material & Method

Cases accessioned beyond regular working hours (5 pm–10 pm) were reported simultaneously using optical microscopy (OM) and WSI. Validation of WSI for FS diagnosis from a remote site, i.e., home, was performed by 5 pathologists. Cases were scanned using a portable scanner (Grundium Ocus®40) and previewed on consumer-grade computer devices through a web-based browser (http://grundium.net). Clinical data and diagnostic reports were shared through a Google spreadsheet. The diagnostic concordance, inter- and intra-observer agreement for FS diagnosis by WSI versus OM, and turnaround time (TAT) were recorded.

Results

The overall diagnostic accuracy for OM and WSI (from home) was 98.2% (range 97%–100%) and 97.6% (range 95%–99%), respectively, when compared with the reference standard. Almost perfect inter-observer (k = 0.993) and intra-observer (k = 0.987) agreement for WSI was observed by 4 pathologists. Pathologists used consumer-grade laptops/desktops with an average screen size of 14.58 inches (range = 12.3–17.7 inches) and a network speed of 64 megabits per second (range: 10–90 Mbps). The mean diagnostic assessment time per case for OM and WSI was 1:48 min and 5:54 min, respectively. Mean TAT of 27.27 min per case was observed using WSI from home. Seamless connectivity was observed in approximately 75% of cases.

Conclusion

This study validates the role of WSI for remote FS diagnosis for its safe and efficient adoption in clinical use.
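The agreement values quoted in the Results (k = 0.993, k = 0.987) are kappa statistics; a minimal numpy sketch of Cohen's kappa for two raters' categorical diagnoses (an illustration of the statistic, not the study's analysis code):

```python
import numpy as np

def cohen_kappa(a, b):
    """Cohen's kappa for two raters' categorical calls (e.g., WSI vs. OM).

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance from the raters' marginals.
    (Undefined when p_e == 1, i.e., both raters always pick one class.)
    """
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    p_o = np.mean(a == b)                                   # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in cats)  # chance agreement
    return (p_o - p_e) / (1 - p_e)
```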

Citations: 0
Unsupervised many-to-many stain translation for histological image augmentation to improve classification accuracy
Q2 Medicine Pub Date : 2023-01-01 DOI: 10.1016/j.jpi.2023.100195
Maryam Berijanian , Nadine S. Schaadt , Boqiang Huang , Johannes Lotz , Friedrich Feuerhake , Dorit Merhof

Background

Deep learning tasks, which require large numbers of images, are widely applied in digital pathology. This poses challenges especially for supervised tasks since manual image annotation is an expensive and laborious process. This situation deteriorates even more in the case of a large variability of images. Coping with this problem requires methods such as image augmentation and synthetic image generation. In this regard, unsupervised stain translation via GANs has gained much attention recently, but a separate network must be trained for each pair of source and target domains. This work enables unsupervised many-to-many translation of histopathological stains with a single network while seeking to maintain the shape and structure of the tissues.

Methods

StarGAN-v2 is adapted for unsupervised many-to-many stain translation of histopathology images of breast tissues. An edge detector is incorporated to motivate the network to maintain the shape and structure of the tissues and to achieve an edge-preserving translation. Additionally, a subjective test is conducted with medical and technical experts in the field of digital pathology to evaluate the quality of the generated images and to verify that they are indistinguishable from real images. As a proof of concept, breast cancer classifiers are trained with and without the generated images to quantify the effect of augmentation with the synthesized images on classification accuracy.
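The edge-preserving constraint described in the Methods above can be realized as an auxiliary loss that penalizes differences between the edge maps of the source and translated images. A minimal NumPy illustration using a Sobel gradient magnitude follows; the paper's exact edge detector and loss weighting are not specified here, so this is an assumption-laden sketch rather than the authors' implementation:

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude of a 2D grayscale image via 3x3 Sobel filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def edge_loss(src, translated):
    """L1 distance between edge maps: small when structure is preserved."""
    return np.abs(sobel_edges(src) - sobel_edges(translated)).mean()

# Toy check: identical structure gives zero loss; a pure intensity
# rescaling (a crude stand-in for "recoloring") only scales the edges.
rng = np.random.default_rng(0)
src = rng.random((16, 16))
print(edge_loss(src, src))            # → 0.0 (identical structure)
print(edge_loss(src, 0.5 * src) > 0)  # → True (edge magnitudes rescaled)
```

In GAN training this term would be added, with some weight, to the generator's adversarial and cycle losses, discouraging translations that move or erase tissue boundaries.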

Results

The results show that adding an edge detector helps to improve the quality of translated images and to preserve the general structure of tissues. Quality control and subjective tests with our medical and technical experts show that the real and artificial images cannot be distinguished, thereby confirming that the synthetic images are technically plausible. Moreover, this research shows that, by augmenting the training dataset with the outputs of the proposed stain translation method, the accuracy of breast cancer classifiers with ResNet-50 and VGG-16 improves by 8.0% and 9.3%, respectively.

Conclusions

This research indicates that a translation from an arbitrary source stain to other stains can be performed effectively within the proposed framework. The generated images are realistic and could be employed to train deep neural networks to improve their performance and cope with the problem of insufficient numbers of annotated images.

{"title":"Unsupervised many-to-many stain translation for histological image augmentation to improve classification accuracy","authors":"Maryam Berijanian ,&nbsp;Nadine S. Schaadt ,&nbsp;Boqiang Huang ,&nbsp;Johannes Lotz ,&nbsp;Friedrich Feuerhake ,&nbsp;Dorit Merhof","doi":"10.1016/j.jpi.2023.100195","DOIUrl":"10.1016/j.jpi.2023.100195","url":null,"abstract":"<div><h3>Background</h3><p>Deep learning tasks, which require large numbers of images, are widely applied in digital pathology. This poses challenges especially for supervised tasks since manual image annotation is an expensive and laborious process. This situation deteriorates even more in the case of a large variability of images. Coping with this problem requires methods such as image augmentation and synthetic image generation. In this regard, unsupervised stain translation via GANs has gained much attention recently, but a separate network must be trained for each pair of source and target domains. This work enables unsupervised many-to-many translation of histopathological stains with a single network while seeking to maintain the shape and structure of the tissues.</p></div><div><h3>Methods</h3><p>StarGAN-v2 is adapted for unsupervised many-to-many stain translation of histopathology images of breast tissues. An edge detector is incorporated to motivate the network to maintain the shape and structure of the tissues and to have an edge-preserving translation. Additionally, a subjective test is conducted on medical and technical experts in the field of digital pathology to evaluate the quality of generated images and to verify that they are indistinguishable from real images. 
As a proof of concept, breast cancer classifiers are trained with and without the generated images to quantify the effect of image augmentation using the synthetized images on classification accuracy.</p></div><div><h3>Results</h3><p>The results show that adding an edge detector helps to improve the quality of translated images and to preserve the general structure of tissues. Quality control and subjective tests on our medical and technical experts show that the real and artificial images cannot be distinguished, thereby confirming that the synthetic images are technically plausible. Moreover, this research shows that, by augmenting the training dataset with the outputs of the proposed stain translation method, the accuracy of breast cancer classifier with ResNet-50 and VGG-16 improves by 8.0% and 9.3%, respectively.</p></div><div><h3>Conclusions</h3><p>This research indicates that a translation from an arbitrary source stain to other stains can be performed effectively within the proposed framework. The generated images are realistic and could be employed to train deep neural networks to improve their performance and cope with the problem of insufficient numbers of annotated images.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/a5/6e/main.PMC9947329.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9356483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An interpretable decision-support model for breast cancer diagnosis using histopathology images
Q2 Medicine Pub Date : 2023-01-01 DOI: 10.1016/j.jpi.2023.100319
Sruthi Krishna , S.S. Suganthi , Arnav Bhavsar , Jyotsna Yesodharan , Shivsubramani Krishnamoorthy

Microscopic examination of biopsy tissue slides is perceived as the gold-standard methodology for confirming the presence of cancer cells. Manual analysis of an overwhelming inflow of tissue slides is highly susceptible to misreading by pathologists. A computerized framework for histopathology image analysis is conceived as a diagnostic tool that greatly benefits pathologists, augmenting definitive diagnosis of cancer. Convolutional Neural Networks (CNNs) have turned out to be the most adaptable and effective technique for detecting abnormal pathologic histology. Despite their high sensitivity and predictive power, their clinical translation is constrained by a lack of intelligible insights into the prediction. A computer-aided system that can offer both a definitive diagnosis and interpretability is therefore highly desirable. A conventional visual explanatory technique, Class Activation Mapping (CAM), combined with CNN models offers interpretable decision making. The major challenge with CAM is that it cannot be optimized to create the best visualization map. CAM also decreases the performance of the CNN models.

To address this challenge, we introduce a novel interpretable decision-support model using CNN with a trainable attention mechanism using response-based feed-forward visual explanation. We introduce a variant of DarkNet19 CNN model for the classification of histopathology images. In order to achieve visual interpretation as well as boost the performance of the DarkNet19 model, an attention branch is integrated with DarkNet19 network forming Attention Branch Network (ABN). The attention branch uses a convolution layer of DarkNet19 and Global Average Pooling (GAP) to model the context of the visual features and generate a heatmap to identify the region of interest. Finally, the perception branch is constituted using a fully connected layer to classify images.

We trained and validated our model using more than 7000 breast cancer biopsy slide images from an openly available dataset and achieved 98.7% accuracy in the binary classification of histopathology images. The observations substantiated the enhanced clinical interpretability of the DarkNet19 CNN model conferred by the attention branch, besides delivering a 3%–4% performance boost over the baseline model. The cancer regions highlighted by the proposed model correlate well with the findings of an expert pathologist.

The coalesced approach of unifying an attention branch with the CNN model equips pathologists with augmented diagnostic interpretability of histological images with no detriment to state-of-the-art performance. The model's proficiency in pinpointing the region of interest is an added bonus that can lead to accurate clinical translation of deep learning models that underscore clinical decision support.

{"title":"An interpretable decision-support model for breast cancer diagnosis using histopathology images","authors":"Sruthi Krishna ,&nbsp;S.S. Suganthi ,&nbsp;Arnav Bhavsar ,&nbsp;Jyotsna Yesodharan ,&nbsp;Shivsubramani Krishnamoorthy","doi":"10.1016/j.jpi.2023.100319","DOIUrl":"10.1016/j.jpi.2023.100319","url":null,"abstract":"<div><p>Microscopic examination of biopsy tissue slides is perceived as the gold-standard methodology for the confirmation of presence of cancer cells. Manual analysis of an overwhelming inflow of tissue slides is highly susceptible to misreading of tissue slides by pathologists. A computerized framework for histopathology image analysis is conceived as a diagnostic tool that greatly benefits pathologists, augmenting definitive diagnosis of cancer. Convolutional Neural Network (CNN) turned out to be the most adaptable and effective technique in the detection of abnormal pathologic histology. Despite their high sensitivity and predictive power, clinical translation is constrained by a lack of intelligible insights into the prediction. A computer-aided system that can offer a definitive diagnosis and interpretability is therefore highly desirable. Conventional visual explanatory techniques, Class Activation Mapping (CAM), combined with CNN models offers interpretable decision making. The major challenge in CAM is, it cannot be optimized to create the best visualization map. CAM also decreases the performance of the CNN models.</p><p>To address this challenge, we introduce a novel interpretable decision-support model using CNN with a trainable attention mechanism using response-based feed-forward visual explanation. We introduce a variant of DarkNet19 CNN model for the classification of histopathology images. In order to achieve visual interpretation as well as boost the performance of the DarkNet19 model, an attention branch is integrated with DarkNet19 network forming Attention Branch Network (ABN). 
The attention branch uses a convolution layer of DarkNet19 and Global Average Pooling (GAP) to model the context of the visual features and generate a heatmap to identify the region of interest. Finally, the perception branch is constituted using a fully connected layer to classify images.</p><p>We trained and validated our model using more than 7000 breast cancer biopsy slide images from an openly available dataset and achieved 98.7% accuracy in the binary classification of histopathology images. The observations substantiated the enhanced clinical interpretability of the DarkNet19 CNN model, supervened by the attention branch, besides delivering a 3%–4% performance boost of the baseline model. The cancer regions highlighted by the proposed model correlate well with the findings of an expert pathologist.</p><p>The coalesced approach of unifying attention branch with the CNN model capacitates pathologists with augmented diagnostic interpretability of histological images with no detriment to state-of-art performance. The model’s proficiency in pinpointing the region of interest is an added bonus that can lead to accurate clinical translation of deep learning models that underscore clinical decision support.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10320615/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9806867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0