Learning robust features alignment for cross-domain medical image analysis
Zhen Zheng, Rui Li, Cheng Liu
Complex & Intelligent Systems (Q1, Computer Science, Artificial Intelligence), published 2023-12-14
DOI: 10.1007/s40747-023-01297-9
Citations: 0
Abstract
Deep learning demonstrates impressive performance in many medical image analysis tasks. However, its reliability depends on labeled medical datasets and on the assumption that the training data (source domain) and the test data (target domain) share the same distribution. Therefore, some unsupervised medical domain adaptation networks transfer knowledge from a source domain with rich labeled data to a target domain with only unlabeled data by learning domain-invariant features. We observe that conventional adversarial-training-based methods focus on global distribution alignment and may overlook class-level information, which leads to negative transfer. In this paper, we attempt to learn robust feature alignment for cross-domain medical image analysis. Specifically, in addition to a discriminator that alleviates the domain shift, we introduce an auxiliary classifier to achieve robust feature alignment using class-level information. We first detect unreliable target samples, which lie far from the source distribution, via diverse training between the two classifiers. Next, a cross-classifier consistency regularization is proposed to align these unreliable samples so that negative transfer can be avoided. In addition, to fully exploit the knowledge in the unlabeled target data, we further propose a within-classifier consistency regularization that improves the robustness of the classifiers in the target domain and also enhances the detection of unreliable target samples. We demonstrate that our proposed dual-consistency regularizations achieve state-of-the-art performance on multiple medical adaptation tasks in terms of both accuracy and Macro-F1. Extensive ablation studies and visualization results are also presented to verify the effectiveness of each proposed module. On the skin adaptation task, our method outperforms the baseline and the second-best method by around 10 and 4 percentage points, respectively. Similarly, on the COVID-19 adaptation task, our model consistently achieves the best performance in terms of both accuracy (96.93%) and Macro-F1 (86.52%).
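The paper itself defines the exact loss formulations; purely as an illustrative sketch of the two kinds of consistency the abstract describes, a cross-classifier discrepancy (between the main and auxiliary classifiers, used to flag unreliable target samples) and a within-classifier consistency (between a target sample and a perturbed view of it) might look as follows. The function names, the L1/L2 distance choices, and the use of plain numpy are our assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(logits):
    """Convert classifier logits to probability distributions (numerically stable)."""
    shifted = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return shifted / shifted.sum(axis=-1, keepdims=True)

def cross_classifier_consistency(p_main, p_aux):
    """Mean L1 discrepancy between the two classifiers' predicted distributions.

    A large per-sample discrepancy would flag an unreliable target sample,
    i.e., one lying far from the source distribution; the regularizer then
    pushes the two classifiers to agree on such samples.
    """
    return np.abs(p_main - p_aux).sum(axis=-1).mean()

def within_classifier_consistency(p_orig, p_perturbed):
    """Mean squared distance between one classifier's predictions on a target
    sample and on a perturbed/augmented view of the same sample."""
    return ((p_orig - p_perturbed) ** 2).sum(axis=-1).mean()
```

Both terms are zero exactly when the compared distributions agree, so minimizing them alongside the adversarial (discriminator) loss would align features at the class level rather than only globally, which is the failure mode of pure global alignment that the abstract highlights.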
Journal Description
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.