Reserve to Adapt: Mining Inter-Class Relations for Open-Set Domain Adaptation
Yujun Tong; Dongliang Chang; Da Li; Xinran Wang; Kongming Liang; Zhongjiang He; Yi-Zhe Song; Zhanyu Ma
IEEE Transactions on Image Processing, vol. 34, pp. 1382-1397. Published 2025-02-25. DOI: 10.1109/TIP.2025.3534023
https://ieeexplore.ieee.org/document/10902069/
Abstract
Open-Set Domain Adaptation (OSDA) aims to adapt a model trained on a labelled source domain to an unlabelled target domain that is corrupted with unknown classes. The key challenge inherent to this open-set setting is therefore how best to avoid the negative transfer incurred by unknown classes during model adaptation. Most existing works tackle this challenge by simply pushing all unknown classes away as a single group. In this paper, we take a different stance: instead of treating the unknown classes as one entity, we "reserve" in-between spaces for their subsets in the learned embedding. Our key finding is that the inter-class relations learned from the source domain can help to enforce class separation in the target domain, thereby reserving spaces for unknown classes. More specifically, we first prepare the "reservation" by tightening the known-class representations while enlarging their inter-class margins. We then learn soft-label prototypes in the source domain to facilitate the discrimination of known and unknown samples in the target domain. These two steps are iterated at each epoch in a mutually beneficial manner: better discrimination of unknown samples helps with space reservation, and vice versa. We show state-of-the-art results on four standard OSDA datasets (Office-31, Office-Home, VisDA, and ImageCLEF) and conduct further analysis to help understand our method. Code is available at: https://github.com/PRIS-CV/Reserve_to_Adapt
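The abstract alternates two steps: a margin-enlarging objective on source known classes, and a prototype-based score that separates known from unknown target samples. The PyTorch sketch below illustrates one plausible reading of that loop; it is not the paper's implementation (see the linked repository for that). The cosine-prototype classifier, the function names, and the `margin`, `scale`, and `threshold` values are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def margin_tightening_loss(features, labels, prototypes, margin=0.3, scale=16.0):
    """Step 1 (sketch): tighten known-class clusters and widen inter-class gaps.

    Uses an additive-margin softmax over cosine similarities to class
    prototypes; the widened gaps are the 'reserved' spaces for unknowns.
    """
    # Cosine logits between L2-normalised features and known-class prototypes.
    logits = scale * F.normalize(features, dim=1) @ F.normalize(prototypes, dim=1).T
    # Subtract the margin on the true-class logit only (additive-margin softmax).
    one_hot = F.one_hot(labels, num_classes=prototypes.size(0)).float()
    return F.cross_entropy(logits - scale * margin * one_hot, labels)

def split_known_unknown(features, prototypes, threshold=0.5):
    """Step 2 (sketch): soft-assign target samples to source prototypes.

    Samples whose best assignment is weak are treated as unknown; the
    thresholding rule here is a hypothetical stand-in for the paper's
    soft-label prototype discrimination.
    """
    sims = F.normalize(features, dim=1) @ F.normalize(prototypes, dim=1).T
    conf, pseudo_labels = torch.softmax(sims, dim=1).max(dim=1)
    known_mask = conf >= threshold
    return known_mask, pseudo_labels

# Toy usage: 5 known classes, 64-dim embeddings, random stand-in features.
prototypes = torch.randn(5, 64, requires_grad=True)
src_feats, src_labels = torch.randn(8, 64), torch.randint(0, 5, (8,))
loss = margin_tightening_loss(src_feats, src_labels, prototypes)            # step 1
known_mask, pseudo = split_known_unknown(torch.randn(16, 64), prototypes)   # step 2
```

In the abstract's scheme these two steps would run once per epoch, each improving the other: cleaner known/unknown splits sharpen the reserved spaces, and tighter known-class clusters make the split more reliable.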