Domain Confused Contrastive Learning for Unsupervised Domain Adaptation

Quanyu Long, Tianze Luo, Wenya Wang, Sinno Jialin Pan
{"title":"Domain Confused Contrastive Learning for Unsupervised Domain Adaptation","authors":"Quanyu Long, Tianze Luo, Wenya Wang, Sinno Jialin Pan","doi":"10.48550/arXiv.2207.04564","DOIUrl":null,"url":null,"abstract":"In this work, we study Unsupervised Domain Adaptation (UDA) in a challenging self-supervised approach. One of the difficulties is how to learn task discrimination in the absence of target labels. Unlike previous literature which directly aligns cross-domain distributions or leverages reverse gradient, we propose Domain Confused Contrastive Learning (DCCL), which can bridge the source and target domains via domain puzzles, and retain discriminative representations after adaptation. Technically, DCCL searches for a most domain-challenging direction and exquisitely crafts domain confused augmentations as positive pairs, then it contrastively encourages the model to pull representations towards the other domain, thus learning more stable and effective domain invariances. We also investigate whether contrastive learning necessarily helps with UDA when performing other data augmentations. 
Extensive experiments demonstrate that DCCL significantly outperforms baselines, further ablation study and analysis also show the effectiveness and availability of DCCL.","PeriodicalId":382084,"journal":{"name":"North American Chapter of the Association for Computational Linguistics","volume":"103 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"North American Chapter of the Association for Computational Linguistics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2207.04564","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

In this work, we study Unsupervised Domain Adaptation (UDA) through a challenging self-supervised approach. One of the difficulties is learning task discrimination in the absence of target labels. Unlike previous work that directly aligns cross-domain distributions or leverages gradient reversal, we propose Domain Confused Contrastive Learning (DCCL), which bridges the source and target domains via domain puzzles and retains discriminative representations after adaptation. Technically, DCCL searches for the most domain-challenging direction and carefully crafts domain-confused augmentations as positive pairs; it then contrastively encourages the model to pull representations toward the other domain, thus learning more stable and effective domain invariances. We also investigate whether contrastive learning necessarily helps with UDA when other data augmentations are used. Extensive experiments demonstrate that DCCL significantly outperforms baselines; further ablation studies and analyses also show the effectiveness and applicability of DCCL.
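The abstract describes two key ingredients: a contrastive (InfoNCE-style) objective in which each example is paired with a "domain-confused" view of itself, and an augmentation that perturbs a representation along a direction that makes its domain ambiguous. The sketch below is a minimal NumPy illustration of those two ideas only, not the authors' implementation; the function names, the fixed perturbation direction, and the hyperparameter values are all hypothetical.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor's positive is its domain-confused view;
    the other views in the batch serve as in-batch negatives."""
    # L2-normalize so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # matched (anchor, positive) pairs lie on the diagonal
    return -np.mean(np.diag(log_prob))

def domain_confused_view(x, domain_direction, epsilon=0.5):
    """Hypothetical augmentation: nudge a representation along a direction
    that increases domain ambiguity, so the resulting view sits 'between'
    the source and target domains (a stand-in for DCCL's learned puzzles)."""
    d = domain_direction / np.linalg.norm(domain_direction)
    return x + epsilon * d
```

Pulling each anchor toward its domain-confused view while pushing it away from other examples is what encourages representations from the two domains to meet in a shared region without collapsing task-discriminative structure.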