Domain Alignment Meets Fully Test-Time Adaptation

Kowshik Thopalli, P. Turaga, Jayaraman J. Thiagarajan
{"title":"Domain Alignment Meets Fully Test-Time Adaptation","authors":"Kowshik Thopalli, P. Turaga, Jayaraman J. Thiagarajan","doi":"10.48550/arXiv.2207.04185","DOIUrl":null,"url":null,"abstract":"A foundational requirement of a deployed ML model is to generalize to data drawn from a testing distribution that is different from training. A popular solution to this problem is to adapt a pre-trained model to novel domains using only unlabeled data. In this paper, we focus on a challenging variant of this problem, where access to the original source data is restricted. While fully test-time adaptation (FTTA) and unsupervised domain adaptation (UDA) are closely related, the advances in UDA are not readily applicable to TTA, since most UDA methods require access to the source data. Hence, we propose a new approach, CATTAn, that bridges UDA and FTTA, by relaxing the need to access entire source data, through a novel deep subspace alignment strategy. With a minimal overhead of storing the subspace basis set for the source data, CATTAn enables unsupervised alignment between source and target data during adaptation. Through extensive experimental evaluation on multiple 2D and 3D vision benchmarks (ImageNet-C, Office-31, OfficeHome, DomainNet, PointDA-10) and model architectures, we demonstrate significant gains in FTTA performance. 
Furthermore, we make a number of crucial findings on the utility of the alignment objective even with inherently robust models, pre-trained ViT representations and under low sample availability in the target domain.","PeriodicalId":119756,"journal":{"name":"Asian Conference on Machine Learning","volume":"117 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Asian Conference on Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2207.04185","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

A foundational requirement of a deployed ML model is to generalize to data drawn from a testing distribution that is different from training. A popular solution to this problem is to adapt a pre-trained model to novel domains using only unlabeled data. In this paper, we focus on a challenging variant of this problem, where access to the original source data is restricted. While fully test-time adaptation (FTTA) and unsupervised domain adaptation (UDA) are closely related, the advances in UDA are not readily applicable to TTA, since most UDA methods require access to the source data. Hence, we propose a new approach, CATTAn, that bridges UDA and FTTA, by relaxing the need to access entire source data, through a novel deep subspace alignment strategy. With a minimal overhead of storing the subspace basis set for the source data, CATTAn enables unsupervised alignment between source and target data during adaptation. Through extensive experimental evaluation on multiple 2D and 3D vision benchmarks (ImageNet-C, Office-31, OfficeHome, DomainNet, PointDA-10) and model architectures, we demonstrate significant gains in FTTA performance. Furthermore, we make a number of crucial findings on the utility of the alignment objective even with inherently robust models, pre-trained ViT representations and under low sample availability in the target domain.
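To make the core idea concrete, here is a minimal sketch of source-free subspace alignment in the spirit described above: only a low-rank basis of the source features is stored (not the source data), and target features are mapped into source-subspace coordinates at test time. This is the classic subspace-alignment recipe (learning a linear map between PCA bases), not the authors' exact CATTAn objective; all variable names and the choice of `k` are illustrative assumptions.

```python
import numpy as np

def subspace_basis(features, k):
    """Top-k right singular vectors of the centered feature matrix (d x k basis)."""
    X = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:k].T  # shape (d, k), columns are orthonormal

# Offline (before deployment): store only the source basis, a k*d-float
# overhead, instead of the full source dataset.
rng = np.random.default_rng(0)
source_feats = rng.normal(size=(500, 64))          # stand-in for source features
target_feats = rng.normal(size=(200, 64)) + 0.5    # shifted "target domain"

k = 16
Bs = subspace_basis(source_feats, k)

# Test time: estimate the target basis from unlabeled target features and
# align it to the stored source basis with the linear map M = Bt^T Bs.
Bt = subspace_basis(target_feats, k)
M = Bt.T @ Bs                                       # (k, k) alignment matrix

# Target features expressed in source-subspace coordinates.
aligned_target = (target_feats - target_feats.mean(axis=0)) @ Bt @ M
print(aligned_target.shape)  # (200, 16)
```

A downstream classifier trained on source-subspace coordinates can then consume `aligned_target` directly; in a deep-network setting the same alignment would be applied to intermediate-layer features during adaptation.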