Reserve to Adapt: Mining Inter-Class Relations for Open-Set Domain Adaptation

Yujun Tong, Dongliang Chang, Da Li, Xinran Wang, Kongming Liang, Zhongjiang He, Yi-Zhe Song, Zhanyu Ma

IEEE Transactions on Image Processing, vol. 34, pp. 1382-1397, published 2025-02-25. DOI: 10.1109/TIP.2025.3534023. https://ieeexplore.ieee.org/document/10902069/

Abstract

Open-Set Domain Adaptation (OSDA) aims to adapt a model trained on a labelled source domain to an unlabelled target domain that is corrupted with unknown classes. The key challenge inherent to this open-set setting is therefore how best to avoid the negative transfer incurred by unknown classes during model adaptation. Most existing works tackle this challenge by simply pushing all unknown classes away as one group. In this paper, we take a different stance – instead of treating these unknown classes as a single entity, we “reserve” in-between spaces for their subsets in the learned embedding. Our key finding is that the inter-class relations learned on the source domain can help to enforce class separations in the target domain – thereby reserving spaces for unknown classes. More specifically, we first prepare the “reservation” by tightening the known-class representations while enlarging their inter-class margins. We then learn soft-label prototypes in the source domain to facilitate the discrimination of known and unknown samples in the target domain. These two steps are iterated at each epoch in a mutually beneficial manner – better discrimination of unknown samples helps with space reservation, and vice versa. We show state-of-the-art results on four standard OSDA datasets – Office-31, Office-Home, VisDA and ImageCLEF – and conduct further analysis to help understand our method. Code is available at: https://github.com/PRIS-CV/Reserve_to_Adapt
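To make the two alternating steps concrete, below is a minimal PyTorch sketch of the general idea rather than the paper's actual losses: a hinge-style prototype loss that tightens each known class around its prototype while enforcing an inter-class margin, and a similarity threshold that flags weakly matched target samples as unknown. The function names (`margin_prototype_loss`, `flag_unknown`) and the margin/threshold values are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def margin_prototype_loss(features, labels, prototypes, margin=0.5):
    """Tighten known classes: pull each feature toward its own class
    prototype while keeping it at least `margin` (in cosine similarity)
    ahead of the hardest competing prototype."""
    feats = F.normalize(features, dim=1)             # (B, D) unit-norm features
    protos = F.normalize(prototypes, dim=1)          # (C, D) unit-norm prototypes
    sims = feats @ protos.t()                        # (B, C) cosine similarities
    pos = sims.gather(1, labels.unsqueeze(1)).squeeze(1)  # similarity to own class
    # hide the own-class column, then take the hardest other class per sample
    neg = sims.scatter(1, labels.unsqueeze(1), float('-inf')).amax(dim=1)
    # hinge: own-class similarity must beat the runner-up by at least `margin`
    return F.relu(margin - (pos - neg)).mean()

def flag_unknown(features, prototypes, threshold=0.5):
    """Mark a target sample as unknown when even its best prototype match
    is weak; otherwise assign the best-matching known class."""
    feats = F.normalize(features, dim=1)
    protos = F.normalize(prototypes, dim=1)
    max_sim, pred = (feats @ protos.t()).max(dim=1)
    return pred, max_sim < threshold                 # class ids, unknown mask

if __name__ == "__main__":
    torch.manual_seed(0)
    feats = torch.randn(8, 16)                       # a batch of 8 source features
    labels = torch.randint(0, 4, (8,))               # 4 known classes
    protos = torch.randn(4, 16, requires_grad=True)  # learnable class prototypes
    loss = margin_prototype_loss(feats, labels, protos)
    loss.backward()                                  # prototypes receive gradients
    preds, unk = flag_unknown(torch.randn(5, 16), protos.detach())
    print(loss.item(), preds.tolist(), unk.tolist())
```

In a training loop, the two functions would alternate per epoch as the abstract describes: the margin loss reserves space between known classes, and the unknown mask keeps weakly matched target samples from collapsing onto known-class prototypes during adaptation.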