Self-Paced Collaborative and Adversarial Network for Unsupervised Domain Adaptation.

IF 20.8 · JCR Region 1 (Computer Science) · Q1, Computer Science, Artificial Intelligence · IEEE Transactions on Pattern Analysis and Machine Intelligence · Pub Date: 2021-06-01 (Epub: 2021-05-11) · DOI: 10.1109/TPAMI.2019.2962476
Weichen Zhang, Dong Xu, Wanli Ouyang, Wen Li
{"title":"Self-Paced Collaborative and Adversarial Network for Unsupervised Domain Adaptation.","authors":"Weichen Zhang,&nbsp;Dong Xu,&nbsp;Wanli Ouyang,&nbsp;Wen Li","doi":"10.1109/TPAMI.2019.2962476","DOIUrl":null,"url":null,"abstract":"<p><p>This paper proposes a new unsupervised domain adaptation approach called Collaborative and Adversarial Network (CAN), which uses the domain-collaborative and domain-adversarial learning strategies for training the neural network. The domain-collaborative learning strategy aims to learn domain specific feature representation to preserve the discriminability for the target domain, while the domain adversarial learning strategy aims to learn domain invariant feature representation to reduce the domain distribution mismatch between the source and target domains. We show that these two learning strategies can be uniformly formulated as domain classifier learning with positive or negative weights on the losses. We then design a collaborative and adversarial training scheme, which automatically learns domain specific representations from lower blocks in CNNs through collaborative learning and domain invariant representations from higher blocks through adversarial learning. Moreover, to further enhance the discriminability in the target domain, we propose Self-Paced CAN (SPCAN), which progressively selects pseudo-labeled target samples for re-training the classifiers. We employ a self-paced learning strategy such that we can select pseudo-labeled target samples in an easy-to-hard fashion. Additionally, we build upon the popular two-stream approach to extend our domain adaptation approach for more challenging video action recognition task, which additionally considers the cooperation between the RGB stream and the optical flow stream. We propose the Two-stream SPCAN (TS-SPCAN) method to select and reweight the pseudo labeled target samples of one stream (RGB/Flow) based on the information from the other stream (Flow/RGB) in a cooperative way. As a result, our TS-SPCAN model is able to exchange the information between the two streams. Comprehensive experiments on different benchmark datasets, Office-31, ImageCLEF-DA and VISDA-2017 for the object recognition task, and UCF101-10 and HMDB51-10 for the video action recognition task, show our newly proposed approaches achieve the state-of-the-art performance, which clearly demonstrates the effectiveness of our proposed approaches for unsupervised domain adaptation.</p>","PeriodicalId":13426,"journal":{"name":"IEEE Transactions on Pattern Analysis and Machine Intelligence","volume":"43 6","pages":"2047-2061"},"PeriodicalIF":20.8000,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TPAMI.2019.2962476","citationCount":"32","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Pattern Analysis and Machine Intelligence","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TPAMI.2019.2962476","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2021/5/11 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 32

Abstract

This paper proposes a new unsupervised domain adaptation approach called Collaborative and Adversarial Network (CAN), which uses domain-collaborative and domain-adversarial learning strategies to train the neural network. The domain-collaborative learning strategy aims to learn domain-specific feature representations that preserve discriminability for the target domain, while the domain-adversarial learning strategy aims to learn domain-invariant feature representations that reduce the domain distribution mismatch between the source and target domains. We show that these two learning strategies can be uniformly formulated as domain classifier learning with positive or negative weights on the losses. We then design a collaborative and adversarial training scheme, which automatically learns domain-specific representations from the lower blocks of a CNN through collaborative learning and domain-invariant representations from the higher blocks through adversarial learning. Moreover, to further enhance discriminability in the target domain, we propose Self-Paced CAN (SPCAN), which progressively selects pseudo-labeled target samples for re-training the classifiers. We employ a self-paced learning strategy so that pseudo-labeled target samples are selected in an easy-to-hard fashion. Additionally, we build upon the popular two-stream approach to extend our domain adaptation method to the more challenging video action recognition task, additionally considering the cooperation between the RGB stream and the optical flow stream. We propose the Two-stream SPCAN (TS-SPCAN) method to select and reweight the pseudo-labeled target samples of one stream (RGB/Flow) based on the information from the other stream (Flow/RGB) in a cooperative way. As a result, our TS-SPCAN model is able to exchange information between the two streams. Comprehensive experiments on different benchmark datasets, Office-31, ImageCLEF-DA and VISDA-2017 for the object recognition task, and UCF101-10 and HMDB51-10 for the video action recognition task, show that our newly proposed approaches achieve state-of-the-art performance, which clearly demonstrates their effectiveness for unsupervised domain adaptation.
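To make the unified formulation concrete, below is a minimal sketch (not the authors' released code) of the core CAN idea described in the abstract: each CNN block feeds a small domain classifier, and a signed weight on that block's domain loss switches the block between collaborative learning (positive weight, encouraging domain-specific features in lower blocks) and adversarial learning (negative weight, gradient-reversal-style domain confusion in higher blocks). All module sizes, names, and the example weight values are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradScale(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient to the features by `weight`
    in the backward pass. weight > 0 keeps the domain loss collaborative for that
    block, weight < 0 makes it adversarial (gradient reversal)."""

    @staticmethod
    def forward(ctx, x, weight):
        ctx.weight = weight
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output * ctx.weight, None


class DomainClassifier(nn.Module):
    """Small per-block domain discriminator (architecture is an assumption)."""

    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, feat):
        return self.net(feat)


def domain_losses(block_feats, domain_labels, domain_clfs, weights):
    """block_feats: list of globally pooled features (B, C) from lower-to-higher blocks.
    domain_labels: float tensor (B,) with 0 = source, 1 = target.
    weights: signed per-block weights, e.g. [+1.0, +0.5, -0.5, -1.0] (assumed values)."""
    total = 0.0
    for feat, clf, w in zip(block_feats, domain_clfs, weights):
        feat = GradScale.apply(feat, w)   # the sign decides collaborative vs adversarial
        logit = clf(feat).squeeze(1)
        total = total + F.binary_cross_entropy_with_logits(logit, domain_labels)
    return total
```

The self-paced part of SPCAN can be hedged in a similarly compact way: select the most confident pseudo-labeled target samples first and enlarge the selected set as training proceeds, so selection goes easy-to-hard. The keep-fraction schedule below is an illustrative assumption, not the paper's exact criterion.

```python
def select_pseudo_labels(probs, epoch, max_epoch, base_keep=0.2, final_keep=0.8):
    """probs: softmax outputs (B, num_classes) of the current classifier on target data.
    Returns indices of selected target samples and their pseudo labels."""
    keep_frac = base_keep + (final_keep - base_keep) * epoch / max_epoch
    conf, pseudo = probs.max(dim=1)                 # confidence and predicted class
    k = max(1, int(keep_frac * probs.size(0)))      # grow the selected set over time
    idx = conf.topk(k).indices                      # keep the most confident samples
    return idx, pseudo[idx]
```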

Source Journal
CiteScore: 28.40
Self-citation rate: 3.00%
Articles per year: 885
Review time: 8.5 months
Journal Introduction: The IEEE Transactions on Pattern Analysis and Machine Intelligence publishes articles on all traditional areas of computer vision and image understanding, all traditional areas of pattern analysis and recognition, and selected areas of machine intelligence, with a particular emphasis on machine learning for pattern analysis. Areas such as techniques for visual search, document and handwriting analysis, medical image analysis, video and image sequence analysis, content-based retrieval of image and video, face and gesture recognition and relevant specialized hardware and/or software architectures are also covered.