Learnable color space conversion and fusion for stain normalization in pathology images
Jing Ke, Yijin Zhou, Yiqing Shen, Yi Guo, Ning Liu, Xiaodan Han, Dinggang Shen
Medical Image Analysis, vol. 101, p. 103424. Published 2024-12-24. DOI: 10.1016/j.media.2024.103424
Citations: 0
Abstract
Variations in hue and contrast are common in H&E-stained pathology images due to differences in slide preparation across institutions. Such stain variations, while rarely hindering pathologists in diagnosing a biopsy, pose significant challenges for computer-assisted diagnostic systems and can lead to underdiagnosis or misdiagnosis, especially when stain differences introduce substantial heterogeneity across datasets from different sources. Traditional stain normalization methods, aimed at mitigating these issues, often require labor-intensive selection of appropriate templates, limiting their practicality and automation. We propose a Learnable Stain Normalization layer, LStainNorm, designed as an easily integrable component for pathology image analysis. It minimizes the need for manual template selection by autonomously learning the optimal stain characteristics. Moreover, the learned optimal stain template provides interpretability, enhancing the understanding of the normalization process. We further demonstrate that fusing pathology images normalized in multiple color spaces can improve performance, and therefore extend LStainNorm with a novel self-attention mechanism that fuses features across different attributes and color spaces. Experimentally, LStainNorm outperforms state-of-the-art methods, including conventional approaches and GANs, on two classification datasets and three nuclei segmentation datasets, with average gains of 4.78% in accuracy, 3.53% in Dice coefficient, and 6.59% in IoU. Additionally, by enabling an end-to-end training and inference process, LStainNorm eliminates the intermediate steps between normalization and analysis, resulting in more efficient use of hardware resources and significantly faster inference, i.e., up to hundreds of times quicker than traditional methods.
The code is publicly available at https://github.com/yjzscode/Optimal-Normalisation-in-Color-Spaces.
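To make the core idea concrete, below is a minimal numpy sketch of learnable-template stain normalization and attention-style fusion. This is not the authors' implementation (see the repository above for that): the Reinhard-style per-channel statistics matching, the function names, and the softmax-weighted fusion are illustrative assumptions. In LStainNorm the template statistics and fusion weights would be trainable parameters optimized end-to-end with the downstream task.

```python
import numpy as np

def normalize_to_template(img, t_mean, t_std, eps=1e-6):
    """Per-channel normalization of an image toward a template (Reinhard-style).

    img: float array of shape (H, W, 3); t_mean, t_std: shape (3,).
    In a learnable layer, t_mean and t_std would be trained parameters
    rather than a manually selected template's statistics.
    """
    mean = img.mean(axis=(0, 1))
    std = img.std(axis=(0, 1)) + eps
    return (img - mean) / std * t_std + t_mean

def fuse(features, logits):
    """Softmax-weighted fusion of per-color-space features.

    A stand-in for the paper's self-attention fusion: each normalized
    color-space representation gets a learnable weight.
    """
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return sum(wi * f for wi, f in zip(w, features))

rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))          # toy H&E patch in [0, 1]
t_mean = np.array([0.7, 0.5, 0.6])   # hypothetical learned template mean
t_std = np.array([0.1, 0.15, 0.1])   # hypothetical learned template std

out = normalize_to_template(img, t_mean, t_std)
# Fuse two "color space" views with equal logits (uniform weights).
fused = fuse([out, 0.5 * out], np.zeros(2))
```

After normalization, each channel's statistics match the template, which is what removes inter-slide stain variation; making the template trainable is what removes the manual selection step.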
About the journal:
Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.