Weakly supervised multi-modal contrastive learning framework for predicting the HER2 scores in breast cancer

Jun Shi, Dongdong Sun, Zhiguo Jiang, Jun Du, Wei Wang, Yushan Zheng, Haibo Wu

Computerized Medical Imaging and Graphics, Volume 121, Article 102502. Published 2025-02-03. DOI: 10.1016/j.compmedimag.2025.102502
Abstract
Human epidermal growth factor receptor 2 (HER2) is an important biomarker for prognosis and prediction of treatment response in breast cancer (BC). HER2 scoring is typically performed by pathologists through microscopic observation of immunohistochemistry (IHC) images, which is labor-intensive and subject to observational bias among different pathologists. Most existing methods use hand-crafted features or deep learning models on a single modality (hematoxylin and eosin (H&E) or IHC) to predict HER2 scores through supervised or weakly supervised learning. Consequently, complementary information from the two modalities is not effectively integrated into feature learning, even though such integration could improve HER2 scoring performance. In this paper, we propose a novel weakly supervised multi-modal contrastive learning (WSMCL) framework to predict HER2 scores in BC at the whole slide image (WSI) level. It leverages multi-modal (H&E and IHC) joint learning under the weak supervision of WSI-level labels to achieve HER2 score prediction. Specifically, patch features are extracted from the H&E and IHC WSIs separately, and multi-head self-attention (MHSA) is then used to model the global dependencies among the patches within each modality. The patch features corresponding to the top-k and bottom-k attention scores produced by MHSA in each modality are selected as candidates for multi-modal joint learning. In particular, a multi-modal attentive contrastive learning (MACL) module is designed to guarantee the semantic alignment of the candidate features from the two modalities. Extensive experiments demonstrate that WSMCL achieves better HER2 scoring performance and outperforms state-of-the-art methods. The code is available at https://github.com/HFUT-miaLab/WSMCL.
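To make the described pipeline concrete, below is a minimal PyTorch sketch of the two core steps the abstract outlines: per-modality MHSA over patch features with top-k/bottom-k candidate selection, and a contrastive loss aligning the selected candidates across modalities. The module names, feature dimensions, scoring head, and the symmetric InfoNCE-style loss are illustrative assumptions standing in for the paper's MACL objective, not the authors' actual implementation; see the linked repository for that.

```python
# Hypothetical sketch of the abstract's pipeline; all design choices here are
# assumptions, not the WSMCL reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityBranch(nn.Module):
    """Per-modality branch: MHSA over patch embeddings, then top-k/bottom-k
    candidate selection by a scalar attention score (hypothetical design)."""

    def __init__(self, dim: int = 512, heads: int = 8, k: int = 16):
        super().__init__()
        self.mhsa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)  # scalar saliency score per patch
        self.k = k

    def forward(self, patches: torch.Tensor):
        # patches: (1, N, dim) -- bag of patch features from one WSI
        attended, _ = self.mhsa(patches, patches, patches)  # global dependencies
        scores = self.score(attended).squeeze(-1)           # (1, N)
        top = scores.topk(self.k, dim=1).indices            # most salient patches
        bottom = (-scores).topk(self.k, dim=1).indices      # least salient patches
        idx = torch.cat([top, bottom], dim=1)               # (1, 2k) candidates
        cands = attended.gather(
            1, idx.unsqueeze(-1).expand(-1, -1, attended.size(-1))
        )                                                   # (1, 2k, dim)
        return cands, scores


def info_nce(za: torch.Tensor, zb: torch.Tensor, tau: float = 0.07):
    """Symmetric InfoNCE over paired candidate features from two modalities.
    A generic stand-in for the paper's MACL alignment objective."""
    za, zb = F.normalize(za, dim=-1), F.normalize(zb, dim=-1)
    logits = za @ zb.t() / tau                      # cosine similarities
    labels = torch.arange(za.size(0), device=za.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))


# Toy usage: random bags standing in for H&E and IHC patch features.
he_branch, ihc_branch = ModalityBranch(), ModalityBranch()
he_cands, _ = he_branch(torch.randn(1, 1000, 512))
ihc_cands, _ = ihc_branch(torch.randn(1, 800, 512))
loss = info_nce(he_cands.squeeze(0), ihc_cands.squeeze(0))
```

One plausible rationale for keeping both extremes of the attention ranking, under this reading of the abstract, is that the top-k patches carry the diagnostic evidence while the bottom-k patches provide background context, giving the contrastive module informative positives and hard negatives from each modality.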
Journal introduction:
The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.