Modality adaptation via feature difference learning for depth human parsing

Computer Vision and Image Understanding · IF 4.3 · CAS Region 3 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence) · Pub Date: 2024-07-08 · DOI: 10.1016/j.cviu.2024.104070
{"title":"Modality adaptation via feature difference learning for depth human parsing","authors":"","doi":"10.1016/j.cviu.2024.104070","DOIUrl":null,"url":null,"abstract":"<div><p>In the field of human parsing, depth data offers unique advantages over RGB data due to its illumination invariance and geometric detail, which motivates us to explore human parsing with only depth input. However, depth data is challenging to collect at scale due to the specialized equipment required. In contrast, RGB data is readily available in large quantities, presenting an opportunity to enhance depth-only parsing models with semantic knowledge learned from RGB data. However, fully finetuning the RGB-pretrained encoder leads to high training costs and inflexible domain generalization, while keeping the encoder frozen suffers from a large RGB-depth modality gap and restricts the parsing performance. To alleviate the limitations of these naive approaches, we introduce a Modality Adaptation pipeline via Feature Difference Learning (MAFDL) which leverages the RGB knowledge to facilitate depth human parsing. A Difference-Guided Depth Adapter (DGDA) is proposed within MAFDL to learn the feature differences between RGB and depth modalities, adapting depth features into RGB feature space to bridge the modality gap. Furthermore, we also design a Feature Alignment Constraint (FAC) to impose explicit alignment supervision at pixel and batch levels, making the modality adaptation more comprehensive. Extensive experiments on the NTURGBD-Parsing-4K dataset show that our method surpasses previous state-of-the-art approaches.</p></div>","PeriodicalId":50633,"journal":{"name":"Computer Vision and Image Understanding","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Vision and Image Understanding","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1077314224001516","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In the field of human parsing, depth data offers unique advantages over RGB data due to its illumination invariance and geometric detail, which motivates us to explore human parsing with only depth input. However, depth data is challenging to collect at scale due to the specialized equipment required. In contrast, RGB data is readily available in large quantities, presenting an opportunity to enhance depth-only parsing models with semantic knowledge learned from RGB data. However, fully finetuning the RGB-pretrained encoder leads to high training costs and inflexible domain generalization, while keeping the encoder frozen suffers from a large RGB-depth modality gap and restricts the parsing performance. To alleviate the limitations of these naive approaches, we introduce a Modality Adaptation pipeline via Feature Difference Learning (MAFDL) which leverages the RGB knowledge to facilitate depth human parsing. A Difference-Guided Depth Adapter (DGDA) is proposed within MAFDL to learn the feature differences between RGB and depth modalities, adapting depth features into RGB feature space to bridge the modality gap. Furthermore, we also design a Feature Alignment Constraint (FAC) to impose explicit alignment supervision at pixel and batch levels, making the modality adaptation more comprehensive. Extensive experiments on the NTURGBD-Parsing-4K dataset show that our method surpasses previous state-of-the-art approaches.
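The abstract describes the method only at a high level, so the snippet below is a minimal, hypothetical PyTorch sketch of the two ideas it names: an adapter that predicts the RGB-depth feature difference and adds it back to the depth feature (the role of the DGDA), and an alignment loss combining pixel-level and batch-level terms (the role of the FAC). The module structure, tensor shapes, choice of batch statistics, and loss weighting are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only; names, shapes, and loss weights are assumptions,
# not the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DifferenceGuidedDepthAdapter(nn.Module):
    """Hypothetical adapter: predicts the RGB-depth feature difference and
    adds it to the depth feature, pushing it toward the RGB feature space."""

    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        self.diff_predictor = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(hidden, channels, kernel_size=1),
        )

    def forward(self, depth_feat: torch.Tensor) -> torch.Tensor:
        # adapted depth feature = depth feature + predicted modality difference
        return depth_feat + self.diff_predictor(depth_feat)


def feature_alignment_loss(adapted_feat: torch.Tensor,
                           rgb_feat: torch.Tensor,
                           batch_weight: float = 0.1) -> torch.Tensor:
    """One plausible reading of the pixel- and batch-level alignment terms."""
    # Pixel level: per-location distance between adapted depth and RGB features.
    pixel_term = F.mse_loss(adapted_feat, rgb_feat)
    # Batch level: match first- and second-order channel statistics over the batch.
    mean_term = F.mse_loss(adapted_feat.mean(dim=(0, 2, 3)),
                           rgb_feat.mean(dim=(0, 2, 3)))
    std_term = F.mse_loss(adapted_feat.std(dim=(0, 2, 3)),
                          rgb_feat.std(dim=(0, 2, 3)))
    return pixel_term + batch_weight * (mean_term + std_term)


if __name__ == "__main__":
    # Toy B x C x H x W feature maps standing in for the outputs of a frozen
    # RGB-pretrained encoder fed with RGB and depth inputs, respectively.
    rgb_feat = torch.randn(2, 256, 28, 28)
    depth_feat = torch.randn(2, 256, 28, 28)
    adapter = DifferenceGuidedDepthAdapter(channels=256)
    adapted = adapter(depth_feat)
    loss = feature_alignment_loss(adapted, rgb_feat)
    print(adapted.shape, loss.item())
```

In this reading, the pretrained encoder stays frozen and only the lightweight adapter is trained, which matches the abstract's motivation of avoiding both full fine-tuning costs and the frozen-encoder modality gap; the exact adapter architecture and loss terms in the paper may differ.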

Source journal: Computer Vision and Image Understanding (Engineering Technology - Electrical & Electronic Engineering)
CiteScore: 7.80
Self-citation rate: 4.40%
Articles published: 112
Review time: 79 days
Journal description: The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis, from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.
Research areas include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems
Latest articles from this journal:
• Deformable surface reconstruction via Riemannian metric preservation
• Estimating optical flow: A comprehensive review of the state of the art
• A lightweight convolutional neural network-based feature extractor for visible images
• LightSOD: Towards lightweight and efficient network for salient object detection
• Triple-Stream Commonsense Circulation Transformer Network for Image Captioning