Hui Wang;Liangli Zheng;Hanbin Zhao;Shijian Li;Xi Li
{"title":"无监督领域自适应与类别感知记忆对齐。","authors":"Hui Wang;Liangli Zheng;Hanbin Zhao;Shijian Li;Xi Li","doi":"10.1109/TNNLS.2023.3238063","DOIUrl":null,"url":null,"abstract":"Unsupervised domain adaptation (UDA) is to make predictions on unlabeled target domain by learning the knowledge from a label-rich source domain. In practice, existing UDA approaches mainly focus on minimizing the discrepancy between different domains by mini-batch training, where only a few instances are accessible at each iteration. Due to the randomness of sampling, such a batch-level alignment pattern is unstable and may lead to misalignment. To alleviate this risk, we propose class-aware memory alignment (CMA) that models the distributions of the two domains by two auxiliary class-aware memories and performs domain adaptation on these predefined memories. CMA is designed with two distinct characteristics: class-aware memories that create two symmetrical class-aware distributions for different domains and two reliability-based filtering strategies that enhance the reliability of the constructed memory. We further design a unified memory-based loss to jointly improve the transferability and discriminability of features in the memories. State-of-the-art (SOTA) comparisons and careful ablation studies show the effectiveness of our proposed CMA.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":null,"pages":null},"PeriodicalIF":10.2000,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Unsupervised Domain Adaptation With Class-Aware Memory Alignment\",\"authors\":\"Hui Wang;Liangli Zheng;Hanbin Zhao;Shijian Li;Xi Li\",\"doi\":\"10.1109/TNNLS.2023.3238063\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Unsupervised domain adaptation (UDA) is to make predictions on unlabeled target domain by learning the knowledge from a label-rich source domain. In practice, existing UDA approaches mainly focus on minimizing the discrepancy between different domains by mini-batch training, where only a few instances are accessible at each iteration. Due to the randomness of sampling, such a batch-level alignment pattern is unstable and may lead to misalignment. To alleviate this risk, we propose class-aware memory alignment (CMA) that models the distributions of the two domains by two auxiliary class-aware memories and performs domain adaptation on these predefined memories. CMA is designed with two distinct characteristics: class-aware memories that create two symmetrical class-aware distributions for different domains and two reliability-based filtering strategies that enhance the reliability of the constructed memory. We further design a unified memory-based loss to jointly improve the transferability and discriminability of features in the memories. 
State-of-the-art (SOTA) comparisons and careful ablation studies show the effectiveness of our proposed CMA.\",\"PeriodicalId\":13303,\"journal\":{\"name\":\"IEEE transactions on neural networks and learning systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":10.2000,\"publicationDate\":\"2024-02-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on neural networks and learning systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10454017/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10454017/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Unsupervised Domain Adaptation With Class-Aware Memory Alignment
Unsupervised domain adaptation (UDA) aims to make predictions on an unlabeled target domain by leveraging knowledge learned from a label-rich source domain. In practice, existing UDA approaches mainly focus on minimizing the discrepancy between domains through mini-batch training, where only a few instances are accessible at each iteration. Due to the randomness of sampling, such a batch-level alignment pattern is unstable and may lead to misalignment. To alleviate this risk, we propose class-aware memory alignment (CMA), which models the distributions of the two domains with two auxiliary class-aware memories and performs domain adaptation on these predefined memories. CMA is designed with two distinct characteristics: class-aware memories that create two symmetrical class-aware distributions for the different domains, and two reliability-based filtering strategies that enhance the reliability of the constructed memories. We further design a unified memory-based loss to jointly improve the transferability and discriminability of the features stored in the memories. State-of-the-art (SOTA) comparisons and careful ablation studies demonstrate the effectiveness of the proposed CMA.
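The abstract describes CMA only at a high level. Below is a minimal, illustrative PyTorch-style sketch of how a class-aware memory with reliability-based filtering and a memory-alignment loss could be realized; the class names, the EMA update rule, the 0.95 confidence threshold, and the MSE alignment term are assumptions made for illustration, not the paper's exact formulation.

# Illustrative sketch of class-aware memory alignment (assumptions, not the paper's exact method).
import torch
import torch.nn.functional as F


class ClassAwareMemory:
    """One prototype (feature centroid) per class, updated by an EMA rule (assumed)."""

    def __init__(self, num_classes: int, feat_dim: int, momentum: float = 0.9):
        self.prototypes = torch.zeros(num_classes, feat_dim)
        self.momentum = momentum

    @torch.no_grad()
    def update(self, feats: torch.Tensor, labels: torch.Tensor,
               confidences: torch.Tensor, threshold: float = 0.95):
        # Reliability-based filtering: keep only samples whose prediction
        # confidence exceeds a threshold (threshold value is an assumption).
        keep = confidences >= threshold
        for c in labels[keep].unique():
            mask = keep & (labels == c)
            centroid = feats[mask].mean(dim=0)
            # Exponential moving average keeps the memory stable across mini-batches.
            self.prototypes[c] = (self.momentum * self.prototypes[c]
                                  + (1.0 - self.momentum) * centroid)


def memory_alignment_loss(src_mem: ClassAwareMemory,
                          tgt_mem: ClassAwareMemory) -> torch.Tensor:
    # Align the two class-aware memories class by class; an L2 (MSE) distance
    # is used here purely as a placeholder for the paper's unified memory-based loss.
    return F.mse_loss(src_mem.prototypes, tgt_mem.prototypes)

In such a setup, source-domain labels would be ground truth, while target-domain labels would typically be pseudo-labels whose softmax confidence drives the filtering step; the alignment term would then be combined with a standard source-domain classification loss.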
Journal Introduction:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.