Revisiting Domain-Adaptive Semantic Segmentation via Knowledge Distillation

Seongwon Jeong, Jiyeong Kim, Sungheui Kim, Dongbo Min
{"title":"Revisiting Domain-Adaptive Semantic Segmentation via Knowledge Distillation","authors":"Seongwon Jeong;Jiyeong Kim;Sungheui Kim;Dongbo Min","doi":"10.1109/TIP.2024.3501076","DOIUrl":null,"url":null,"abstract":"Numerous methods for unsupervised domain adaptation (UDA) have been proposed in semantic segmentation, achieving remarkable improvements. These methods are categorized into an adversarial learning-based approach that utilizes an additional discriminator and image translation model, and a self-supervised approach that uses a teacher model to generate pseudo labels. Among them, the self-supervised UDA approaches based on a self-training show excellent adaptability in semantic segmentation. However, erroneous estimates of the pseudo ground truths (PGTs) used in the self-training may often lead to inaccurate updates in the teacher model. Although several attempts have been made to address this issue, the teacher model updated through exponential moving average (EMA) still has a risk of propagating inaccuracies from the PGTs. Inspired by the fact that UDA shares similar principles with knowledge distillation (KD), we revisit the self-training based UDA approach from the perspective of KD and propose a novel UDA approach that employs two different teacher models. Specifically, we utilize both an EMA-updated teacher model to generate PGTs and a frozen teacher model pretrained with source data to transfer knowledge on a feature space. Since the frozen teacher model has no constraint on the model architecture unlike the EMA updated teacher model, we can effectively leverage a better representation power from the larger frozen teacher. 
Extensive experiments on various backbones (DeepLab-V2 and DAFormer) and scenarios (GTA\n<inline-formula> <tex-math>$5~\\rightarrow $ </tex-math></inline-formula>\n Cityscapes and SYNTHIA \n<inline-formula> <tex-math>$\\rightarrow $ </tex-math></inline-formula>\n Cityscapes) show that the proposed method improves segmentation performance in the target domain with its scalability. In particular, our method achieves comparable or better performance than state-of-the-arts even with a lightweight backbone.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"6761-6773"},"PeriodicalIF":13.7000,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10766055/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Numerous methods for unsupervised domain adaptation (UDA) have been proposed in semantic segmentation, achieving remarkable improvements. These methods are categorized into adversarial learning-based approaches, which utilize an additional discriminator and image translation model, and self-supervised approaches, which use a teacher model to generate pseudo labels. Among them, self-supervised UDA approaches based on self-training show excellent adaptability in semantic segmentation. However, erroneous estimates of the pseudo ground truths (PGTs) used in self-training may often lead to inaccurate updates of the teacher model. Although several attempts have been made to address this issue, the teacher model updated through an exponential moving average (EMA) still risks propagating inaccuracies from the PGTs. Inspired by the fact that UDA shares similar principles with knowledge distillation (KD), we revisit the self-training-based UDA approach from the perspective of KD and propose a novel UDA approach that employs two different teacher models. Specifically, we utilize both an EMA-updated teacher model to generate PGTs and a frozen teacher model pretrained on source data to transfer knowledge in a feature space. Since the frozen teacher model, unlike the EMA-updated teacher, imposes no constraint on the model architecture, we can effectively leverage the better representation power of a larger frozen teacher. Extensive experiments on various backbones (DeepLab-V2 and DAFormer) and scenarios (GTA5 → Cityscapes and SYNTHIA → Cityscapes) show that the proposed method improves segmentation performance in the target domain while remaining scalable. In particular, our method achieves performance comparable to or better than state-of-the-art methods even with a lightweight backbone.
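The three mechanisms the abstract names — the EMA update of the teacher parameters, confidence-thresholded pseudo labels from the EMA teacher, and a feature-space distillation loss against the frozen source-pretrained teacher — can be sketched as follows. This is a minimal illustration in numpy, not the authors' implementation; the threshold `tau` and the MSE form of the distillation loss are assumptions for the sketch.

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.999):
    """EMA update: the teacher slowly tracks the student's parameters."""
    return alpha * teacher_w + (1.0 - alpha) * student_w

def pseudo_labels(teacher_logits, tau=0.9):
    """Per-pixel argmax pseudo labels from the EMA teacher's logits.
    Pixels whose max softmax probability falls below tau are marked
    ignore (-1), so low-confidence PGTs do not drive self-training."""
    shifted = teacher_logits - teacher_logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted)
    probs /= probs.sum(axis=-1, keepdims=True)
    labels = probs.argmax(axis=-1)
    labels[probs.max(axis=-1) < tau] = -1
    return labels

def feature_distill_loss(student_feat, frozen_teacher_feat):
    """Feature-space distillation: penalize the distance between the
    student's features and those of the frozen source-pretrained
    teacher (MSE here, as a stand-in for the paper's loss)."""
    return float(np.mean((student_feat - frozen_teacher_feat) ** 2))
```

The student would be trained on the pseudo-labeled target images plus the distillation term, while `ema_update` is applied after each optimizer step; only the EMA teacher must share the student's architecture, which is why the frozen teacher can be a larger model.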