Learnable color space conversion and fusion for stain normalization in pathology images.

Medical Image Analysis · IF 10.7 · CAS Tier 1 (Medicine) · JCR Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2024-12-24 · DOI: 10.1016/j.media.2024.103424
Jing Ke, Yijin Zhou, Yiqing Shen, Yi Guo, Ning Liu, Xiaodan Han, Dinggang Shen
{"title":"Learnable color space conversion and fusion for stain normalization in pathology images.","authors":"Jing Ke, Yijin Zhou, Yiqing Shen, Yi Guo, Ning Liu, Xiaodan Han, Dinggang Shen","doi":"10.1016/j.media.2024.103424","DOIUrl":null,"url":null,"abstract":"<p><p>Variations in hue and contrast are common in H&E-stained pathology images due to differences in slide preparation across various institutions. Such stain variations, while not affecting pathologists much in diagnosing the biopsy, pose significant challenges for computer-assisted diagnostic systems, leading to potential underdiagnosis or misdiagnosis, especially when stain differentiation introduces substantial heterogeneity across datasets from different sources. Traditional stain normalization methods, aimed at mitigating these issues, often require labor-intensive selection of appropriate templates, limiting their practicality and automation. Innovatively, we propose a Learnable Stain Normalization layer, i.e. LStainNorm, designed as an easily integrable component for pathology image analysis. It minimizes the need for manual template selection by autonomously learning the optimal stain characteristics. Moreover, the learned optimal stain template provides the interpretability to enhance the understanding of the normalization process. Additionally, we demonstrate that fusing pathology images normalized in multiple color spaces can improve performance. Therefore, we extend LStainNorm with a novel self-attention mechanism to facilitate the fusion of features across different attributes and color spaces. Experimentally, LStainNorm outperforms the state-of-the-art methods including conventional ones and GANs on two classification datasets and three nuclei segmentation datasets by an average increase of 4.78% in accuracy, 3.53% in Dice coefficient, and 6.59% in IoU. Additionally, by enabling an end-to-end training and inference process, LStainNorm eliminates the need for intermediate steps between normalization and analysis, resulting in more efficient use of hardware resources and significantly faster inference time, i.e up to hundreds of times quicker than traditional methods. The code is publicly available at https://github.com/yjzscode/Optimal-Normalisation-in-Color-Spaces.</p>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"101 ","pages":"103424"},"PeriodicalIF":10.7000,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical image analysis","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1016/j.media.2024.103424","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Variations in hue and contrast are common in H&E-stained pathology images due to differences in slide preparation across institutions. While such stain variations rarely hinder pathologists in diagnosing biopsies, they pose significant challenges for computer-assisted diagnostic systems and can lead to underdiagnosis or misdiagnosis, especially when stain differences introduce substantial heterogeneity across datasets from different sources. Traditional stain normalization methods, which aim to mitigate these issues, often require labor-intensive selection of appropriate templates, limiting their practicality and automation. We propose a Learnable Stain Normalization layer, LStainNorm, designed as an easily integrable component for pathology image analysis. It minimizes the need for manual template selection by autonomously learning optimal stain characteristics, and the learned optimal stain template adds interpretability by making the normalization process easier to understand. We further demonstrate that fusing pathology images normalized in multiple color spaces can improve performance, and therefore extend LStainNorm with a novel self-attention mechanism that fuses features across different attributes and color spaces. Experimentally, LStainNorm outperforms state-of-the-art methods, including conventional approaches and GAN-based ones, on two classification datasets and three nuclei segmentation datasets, with average gains of 4.78% in accuracy, 3.53% in Dice coefficient, and 6.59% in IoU. Moreover, by enabling end-to-end training and inference, LStainNorm eliminates intermediate steps between normalization and analysis, resulting in more efficient use of hardware resources and significantly faster inference, i.e., up to hundreds of times quicker than traditional methods. The code is publicly available at https://github.com/yjzscode/Optimal-Normalisation-in-Color-Spaces.
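The abstract only outlines the architecture; the authoritative implementation is in the linked repository. Below is a minimal, hypothetical PyTorch sketch of the two ideas described above: a normalization layer with a learnable stain template, and self-attention fusion of features from several color-space views. All class and parameter names here (ColorSpaceNorm, LStainNormSketch, num_spaces, the embedding dimension) are illustrative assumptions, not the authors' API, and the color-space conversions themselves are omitted.

import torch
import torch.nn as nn


class ColorSpaceNorm(nn.Module):
    """Normalize image statistics toward a learnable per-channel stain template."""

    def __init__(self, channels: int = 3):
        super().__init__()
        # Learnable "optimal stain template": target mean and std per channel.
        self.target_mean = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.target_std = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu = x.mean(dim=(2, 3), keepdim=True)
        sigma = x.std(dim=(2, 3), keepdim=True) + 1e-6
        # Re-target each image's channel statistics to the learned template.
        return (x - mu) / sigma * self.target_std + self.target_mean


class LStainNormSketch(nn.Module):
    """Normalize several color-space views, then fuse them with self-attention."""

    def __init__(self, num_spaces: int = 3, channels: int = 3, dim: int = 64):
        super().__init__()
        self.norms = nn.ModuleList([ColorSpaceNorm(channels) for _ in range(num_spaces)])
        self.embed = nn.Conv2d(channels, dim, kernel_size=1)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.proj = nn.Conv2d(dim, channels, kernel_size=1)

    def forward(self, views):
        # `views` holds the same image converted into different color spaces
        # (e.g. RGB, HSV, LAB); the conversions themselves are omitted here.
        b, _, h, w = views[0].shape
        feats = [self.embed(norm(v)) for norm, v in zip(self.norms, views)]
        # Treat the per-pixel features from each color space as attention tokens.
        tokens = torch.stack([f.flatten(2).transpose(1, 2) for f in feats], dim=2)
        tokens = tokens.reshape(b * h * w, len(feats), -1)
        fused, _ = self.attn(tokens, tokens, tokens)
        fused = fused.mean(dim=1).reshape(b, h, w, -1).permute(0, 3, 1, 2)
        return self.proj(fused)


# Toy usage: the three views are stand-ins for real color-space conversions.
x_rgb = torch.rand(2, 3, 64, 64)
views = [x_rgb, x_rgb.clone(), x_rgb.clone()]
print(LStainNormSketch()(views).shape)  # torch.Size([2, 3, 64, 64])

In practice such a layer would sit in front of a downstream classifier or segmenter and be trained jointly with the task loss, which is how the end-to-end property described in the abstract removes the manual template-selection step.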

Source journal: Medical Image Analysis (Engineering & Technology — Engineering: Biomedical)
CiteScore: 22.10
Self-citation rate: 6.40%
Annual publications: 309
Review time: 6.6 months
About the journal: Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.