Authors: Jiahua Wu, Yuchun Fang
DOI: 10.1016/j.neunet.2025.107129
Journal: Neural Networks, Volume 185, Article 107129
Publication date: 2025-01-08 (Journal Article)
Impact factor: 6.0 · JCR: Q1 (Computer Science, Artificial Intelligence)
Dual-view global and local category-attentive domain alignment for unsupervised conditional adversarial domain adaptation.
Conditional adversarial domain adaptation (CADA) is one of the most commonly used unsupervised domain adaptation (UDA) methods. CADA introduces multimodal information into the adversarial learning process to align the distributions of the labeled source domain and the unlabeled target domain via mode matching. However, because CADA uses classifier predictions as the multimodal information, it supplies incorrect conditioning for challenging target features, leading to distribution mismatch and less robust domain-invariant features. Compared to recent state-of-the-art UDA methods, CADA also exhibits poor discriminability on the target domain. To tackle these challenges, we propose a novel unsupervised CADA framework named dual-view global and local category-attentive domain alignment (DV-GLCA). Specifically, to mitigate distribution mismatch and acquire more robust domain-invariant features, we integrate dual-view information into conditional adversarial domain adaptation and then exploit the substantial feature disparity between the two views to better align the multimodal structures of the source and target distributions. Moreover, to learn more discriminative target-domain features on top of dual-view conditional adversarial domain adaptation (DV-CADA), we further propose global category-attentive domain alignment (GCA). GCA combines coding rate reduction with dual-view centroid alignment to amplify inter-category domain discrepancies while reducing intra-category domain differences globally. Additionally, to handle ambiguous samples during training, we propose local category-attentive domain alignment (LCA), which introduces a new way of using contrastive domain discrepancy to move ambiguous samples closer to their correct categories. Our method achieves leading performance on five UDA benchmarks, and extensive experiments demonstrate its effectiveness.
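The conditioning step the abstract refers to — feeding classifier predictions as multimodal information into the adversarial alignment — is commonly realized as a multilinear (outer-product) map between the feature vector and the softmax prediction, as in CDAN-style methods. The following pure-Python sketch illustrates that step only; all names are illustrative and not taken from the paper.

```python
# Minimal sketch of CDAN-style multilinear conditioning, the standard way
# to inject classifier predictions into conditional adversarial domain
# adaptation. Assumed/illustrative, not the paper's exact implementation.
import math

def softmax(logits):
    """Convert classifier logits into a probability vector."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def multilinear_conditioning(feature, prediction):
    """Flattened outer product f ⊗ g: couples every feature dimension with
    every class probability, so the domain discriminator sees the mode
    (class) structure of the distribution, not just raw features."""
    return [f * p for f in feature for p in prediction]

# Toy example: a 3-dim feature and a 2-class prediction yield a 6-dim
# conditioned input for the domain discriminator.
feature = [0.5, -1.0, 2.0]
prediction = softmax([1.0, 0.0])
conditioned = multilinear_conditioning(feature, prediction)
```

Note how a wrong prediction corrupts the whole conditioned vector — precisely the failure mode the abstract attributes to CADA on hard target samples, and the motivation for its dual-view conditioning.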
Journal introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.