Robust auxiliary modality is beneficial for video-based cloth-changing person re-identification

Image and Vision Computing · IF 4.2 · CAS Tier 3 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence) · Pub Date: 2025-02-01 · Epub Date: 2024-12-31 · DOI: 10.1016/j.imavis.2024.105400
Youming Chen, Ting Tuo, Lijun Guo, Rong Zhang, Yirui Wang, Shangce Gao
{"title":"Robust auxiliary modality is beneficial for video-based cloth-changing person re-identification","authors":"Youming Chen ,&nbsp;Ting Tuo ,&nbsp;Lijun Guo ,&nbsp;Rong Zhang ,&nbsp;Yirui Wang ,&nbsp;Shangce Gao","doi":"10.1016/j.imavis.2024.105400","DOIUrl":null,"url":null,"abstract":"<div><div>The core of video-based cloth-changing person re-identification is the extraction of cloth-irrelevant features, such as body shape, face, and gait. Most current methods rely on auxiliary modalities to help the model focus on these features. Although these modalities can resist the interference of clothing appearance, they are not robust against cloth-changing, which affects model recognition. The joint point information of pedestrians was considered to better resist the impact of cloth-changing; however, it contained limited pedestrian discrimination information. In contrast, the silhouettes had rich pedestrian discrimination information and could resist interference from clothing appearance but were vulnerable to cloth-changing. Therefore, we combined these two modalities to construct a more robust modality that minimized the impact of clothing on the model. We designed different usage methods for the temporal and spatial aspects based on the characteristics of the fusion modality to enhance the model for extracting cloth-irrelevant features. Specifically, at the spatial level, we developed a guiding method retaining fine-grained, cloth-irrelevant features while using fused features to reduce the focus on cloth-relevant features in the original image. At the temporal level, we designed a fusion method that combined action features from the silhouette and joint point sequences, resulting in more robust action features for cloth-changing pedestrians. Experiments on two video-based cloth-changing datasets, CCPG-D and CCVID, indicated that our proposed model outperformed existing state-of-the-art methods. Additionally, tests on the gait dataset CASIA-B demonstrated that our model achieved optimal average precision.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"154 ","pages":"Article 105400"},"PeriodicalIF":4.2000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885624005055","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/12/31 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The core of video-based cloth-changing person re-identification is the extraction of cloth-irrelevant features such as body shape, face, and gait. Most current methods rely on auxiliary modalities to help the model focus on these features. Although such modalities resist interference from clothing appearance, they are not robust to clothing changes, which degrades recognition. Pedestrian joint-point (skeleton) information better resists clothing changes but carries limited identity-discriminative information; silhouettes, in contrast, carry rich discriminative information and resist interference from clothing appearance but are vulnerable to clothing changes. We therefore combine these two modalities into a more robust auxiliary modality that minimizes the impact of clothing on the model. Based on the characteristics of the fused modality, we design distinct usage schemes for its spatial and temporal aspects to strengthen the extraction of cloth-irrelevant features. Specifically, at the spatial level, we develop a guiding method that retains fine-grained cloth-irrelevant features while using the fused features to reduce attention to cloth-relevant features in the original image. At the temporal level, we design a fusion method that combines action features from the silhouette and joint-point sequences, yielding action features that are more robust for cloth-changing pedestrians. Experiments on two video-based cloth-changing datasets, CCPG-D and CCVID, show that the proposed model outperforms existing state-of-the-art methods; in addition, tests on the gait dataset CASIA-B show that it achieves the best average precision.
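The fused auxiliary modality lends itself to a concrete reading: encode each frame's silhouette mask and joint-point coordinates separately, fuse them per frame (the spatial level), and aggregate the fused sequence over time (the temporal level). Below is a minimal, self-contained PyTorch sketch of that idea only; the paper's actual architecture is not reproduced on this page, so every module name, dimension, and the GRU-based temporal aggregation are illustrative assumptions, not the authors' design.

import torch
import torch.nn as nn

class FusedAuxiliaryModality(nn.Module):
    """Fuse silhouette and joint-point cues into one cloth-robust
    auxiliary feature, then aggregate it over time (illustrative)."""

    def __init__(self, dim: int = 256, num_joints: int = 17):
        super().__init__()
        # Per-frame silhouette encoder: a binary mask carries body shape
        # but no clothing appearance.
        self.sil_encoder = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=7, stride=4, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Per-frame pose encoder: 2-D joint coordinates carry body
        # structure that clothing changes cannot corrupt.
        self.pose_encoder = nn.Sequential(
            nn.Linear(num_joints * 2, dim),
            nn.ReLU(inplace=True),
        )
        # Spatial level: fuse the two modalities within each frame.
        self.fuse = nn.Linear(2 * dim, dim)
        # Temporal level: action/gait cues emerge from how the fused
        # per-frame feature evolves across the sequence.
        self.temporal = nn.GRU(dim, dim, batch_first=True)

    def forward(self, sil: torch.Tensor, joints: torch.Tensor) -> torch.Tensor:
        # sil:    (B, T, 1, H, W) silhouette masks per frame
        # joints: (B, T, J, 2)    2-D joint coordinates per frame
        B, T = sil.shape[:2]
        s = self.sil_encoder(sil.flatten(0, 1)).view(B, T, -1)     # (B, T, dim)
        p = self.pose_encoder(joints.flatten(2))                   # (B, T, dim)
        fused = torch.relu(self.fuse(torch.cat([s, p], dim=-1)))   # (B, T, dim)
        _, h = self.temporal(fused)                                # (1, B, dim)
        return h.squeeze(0)                                        # video-level feature

# Usage on dummy inputs: two 8-frame clips with 64x32 masks and 17 joints.
model = FusedAuxiliaryModality()
feat = model(torch.rand(2, 8, 1, 64, 32), torch.rand(2, 8, 17, 2))
print(feat.shape)  # torch.Size([2, 256])

The GRU here merely stands in for whichever temporal fusion the paper actually uses; the point of the sketch is that identity-bearing action features come from the fused cloth-robust modality rather than from raw RGB appearance.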
Source journal: Image and Vision Computing (Engineering Technology; Engineering: Electrical & Electronic)
CiteScore: 8.50 · Self-citation rate: 8.50% · Annual articles: 143 · Review time: 7.8 months
Journal description: Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.