Robust auxiliary modality is beneficial for video-based cloth-changing person re-identification

Youming Chen, Ting Tuo, Lijun Guo, Rong Zhang, Yirui Wang, Shangce Gao

Image and Vision Computing, Volume 154, Article 105400, February 2025. DOI: 10.1016/j.imavis.2024.105400
https://www.sciencedirect.com/science/article/pii/S0262885624005055
Citations: 0
Abstract
The core of video-based cloth-changing person re-identification is the extraction of cloth-irrelevant features such as body shape, face, and gait. Most current methods rely on auxiliary modalities to help the model focus on these features. Although these modalities can resist interference from clothing appearance, they are not robust to clothing changes, which degrades recognition. Pedestrian joint-point information better resists the impact of clothing changes but carries limited discriminative information about the pedestrian; silhouettes, in contrast, carry rich discriminative information and resist interference from clothing appearance, but are vulnerable to clothing changes. We therefore combine these two modalities to construct a more robust auxiliary modality that minimizes the impact of clothing on the model. Based on the characteristics of the fused modality, we design different usage methods for the spatial and temporal aspects to strengthen the extraction of cloth-irrelevant features. Specifically, at the spatial level, we develop a guiding method that retains fine-grained, cloth-irrelevant features while using the fused features to reduce the focus on cloth-relevant features in the original image. At the temporal level, we design a fusion method that combines action features from the silhouette and joint-point sequences, yielding more robust action features for cloth-changing pedestrians. Experiments on two video-based cloth-changing datasets, CCPG-D and CCVID, show that the proposed model outperforms existing state-of-the-art methods, and tests on the gait dataset CASIA-B demonstrate that it achieves the best average precision.
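To make the two-modality idea concrete, below is a minimal PyTorch sketch of how per-frame silhouette and joint-point features might be fused into a single auxiliary feature and then used to gate (guide) the RGB stream spatially. All module names, feature dimensions, and the specific fusion and gating operations are illustrative assumptions; the abstract does not specify the actual architecture.

```python
# Sketch only: fusing silhouette and joint-point features into an auxiliary
# modality and using it to guide RGB features. Shapes and ops are assumed.
import torch
import torch.nn as nn


class AuxiliaryFusion(nn.Module):
    """Fuse per-frame silhouette and joint-point features into one auxiliary feature."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(inplace=True),
            nn.Linear(dim, dim),
        )

    def forward(self, sil_feat: torch.Tensor, kpt_feat: torch.Tensor) -> torch.Tensor:
        # sil_feat, kpt_feat: (batch, frames, dim)
        return self.fuse(torch.cat([sil_feat, kpt_feat], dim=-1))


class SpatialGuidance(nn.Module):
    """Use the fused auxiliary feature to down-weight cloth-relevant channels
    of the RGB features (one plausible reading of the 'guiding method')."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.to_gate = nn.Linear(dim, dim)

    def forward(self, rgb_feat: torch.Tensor, aux_feat: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.to_gate(aux_feat))  # per-channel attention in [0, 1]
        return rgb_feat * gate                        # suppress cloth-relevant channels


# Toy usage with random tensors standing in for per-frame features.
if __name__ == "__main__":
    b, t, d = 2, 8, 256
    sil, kpt, rgb = (torch.randn(b, t, d) for _ in range(3))
    aux = AuxiliaryFusion(d)(sil, kpt)       # fused auxiliary modality
    guided = SpatialGuidance(d)(rgb, aux)    # spatially guided RGB features
    video_feat = guided.mean(dim=1)          # simple temporal pooling over frames
    print(video_feat.shape)                  # torch.Size([2, 256])
```

The temporal side of the method (combining action features from silhouette and joint-point sequences) would sit on top of such per-frame features; the mean pooling above is only a placeholder for it.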
Journal introduction:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.