Combining Semantic Self-Supervision and Self-Training for Domain Adaptation in Semantic Segmentation

J. Niemeijer, J. P. Schäfer
DOI: 10.1109/ivworkshops54471.2021.9669255
Published in: 2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops)
Publication date: 2021-07-11
Citations: 5

Abstract

This work presents a two-stage, unsupervised domain adaptation process for semantic segmentation models that combines a self-training and a self-supervision strategy. Self-training (i.e., training a model on self-inferred pseudo-labels) yields competitive results for domain adaptation in recent research. However, self-training depends on high-quality pseudo-labels. Self-supervision, on the other hand, trains the model on a surrogate task and improves its performance on the target domain without further prerequisites. Our approach therefore improves the model's performance on the target domain with a novel surrogate task. To that end, we continuously determine class centroids of the feature representations in the network's pre-logit layer on the source domain. During both training stages, our surrogate task clusters the pre-logit feature representations on the target domain around these class centroids. After the first stage, the resulting model delivers improved pseudo-labels for the additional self-training in the second stage. We evaluate our method on two different domain adaptations: a real-world domain change from Cityscapes to the Berkeley Deep Drive dataset, and a synthetic-to-real-world domain change from GTA5 to the Cityscapes dataset. For the real-world domain change, the evaluation shows a significant improvement of the model from 46% mIoU to 54% mIoU on the target domain. For the synthetic-to-real-world domain change, we achieve an improvement from 38.8% to 46.42% mIoU on the real-world target domain.
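The core mechanics described in the abstract — maintaining running class centroids from source-domain pre-logit features, clustering target-domain features towards them, and filtering stage-1 predictions into pseudo-labels for stage-2 self-training — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function names, the exponential-moving-average update, and the confidence threshold are assumptions introduced for clarity.

```python
import numpy as np

def update_class_centroids(features, labels, centroids, num_classes, momentum=0.9):
    """EMA update of per-class centroids from pre-logit features on the
    labelled source domain (momentum value is an assumption).

    features:  (N, D) pre-logit feature vectors
    labels:    (N,)   ground-truth class indices
    centroids: (num_classes, D) running centroids; a new array is returned
    """
    centroids = centroids.copy()
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            batch_mean = features[mask].mean(axis=0)
            centroids[c] = momentum * centroids[c] + (1.0 - momentum) * batch_mean
    return centroids

def clustering_loss(features, centroids):
    """Surrogate-task loss: pull each target-domain feature towards its
    nearest source-domain class centroid (mean squared distance)."""
    # pairwise squared distances, shape (N, num_classes)
    d2 = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    nearest = d2.argmin(axis=1)
    return d2[np.arange(len(features)), nearest].mean()

def confident_pseudo_labels(probs, threshold=0.9):
    """Stage-1 to stage-2 hand-off: keep only high-confidence predictions
    as pseudo-labels; low-confidence entries get an ignore index (-1)."""
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    labels[conf < threshold] = -1
    return labels
```

In a full pipeline these pieces would run per batch: centroids are refreshed on source images, the clustering loss is added to the segmentation loss on target images during both stages, and the thresholded pseudo-labels supervise the second stage.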