{"title":"CPCL: Cross-Modal Prototypical Contrastive Learning for Weakly Supervised Text-based Person Re-Identification","authors":"Yanwei Zheng, Xinpeng Zhao, Chuanlin Lan, Xiaowei Zhang, Bowen Huang, Jibin Yang, Dongxiao Yu","doi":"arxiv-2401.10011","DOIUrl":null,"url":null,"abstract":"Weakly supervised text-based person re-identification (TPRe-ID) seeks to\nretrieve images of a target person using textual descriptions, without relying\non identity annotations and is more challenging and practical. The primary\nchallenge is the intra-class differences, encompassing intra-modal feature\nvariations and cross-modal semantic gaps. Prior works have focused on\ninstance-level samples and ignored prototypical features of each person which\nare intrinsic and invariant. Toward this, we propose a Cross-Modal Prototypical\nContrastive Learning (CPCL) method. In practice, the CPCL introduces the CLIP\nmodel to weakly supervised TPRe-ID for the first time, mapping visual and\ntextual instances into a shared latent space. Subsequently, the proposed\nPrototypical Multi-modal Memory (PMM) module captures associations between\nheterogeneous modalities of image-text pairs belonging to the same person\nthrough the Hybrid Cross-modal Matching (HCM) module in a many-to-many mapping\nfashion. Moreover, the Outlier Pseudo Label Mining (OPLM) module further\ndistinguishes valuable outlier samples from each modality, enhancing the\ncreation of more reliable clusters by mining implicit relationships between\nimage-text pairs. Experimental results demonstrate that our proposed CPCL\nattains state-of-the-art performance on all three public datasets, with a\nsignificant improvement of 11.58%, 8.77% and 5.25% in Rank@1 accuracy on\nCUHK-PEDES, ICFG-PEDES and RSTPReid datasets, respectively. The code is\navailable at https://github.com/codeGallery24/CPCL.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2401.10011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Weakly supervised text-based person re-identification (TPRe-ID) seeks to retrieve images of a target person from textual descriptions without relying on identity annotations, making it both more challenging and more practical than the fully supervised setting. The primary challenge lies in intra-class differences, which encompass intra-modal feature variations and cross-modal semantic gaps. Prior works have focused on instance-level samples and ignored the prototypical features of each person, which are intrinsic and invariant. To address this, we propose a Cross-Modal Prototypical Contrastive Learning (CPCL) method. CPCL introduces the CLIP model to weakly supervised TPRe-ID for the first time, mapping visual and textual instances into a shared latent space. The proposed Prototypical Multi-modal Memory (PMM) module then captures associations between the heterogeneous modalities of image-text pairs belonging to the same person through a Hybrid Cross-modal Matching (HCM) module, in a many-to-many mapping fashion. Moreover, the Outlier Pseudo Label Mining (OPLM) module distinguishes valuable outlier samples in each modality, mining implicit relationships between image-text pairs to build more reliable clusters. Experimental results demonstrate that CPCL attains state-of-the-art performance on all three public benchmarks, improving Rank@1 accuracy by 11.58%, 8.77%, and 5.25% on CUHK-PEDES, ICFG-PEDES, and RSTPReid, respectively. The code is available at https://github.com/codeGallery24/CPCL.
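
The abstract only outlines the mechanism, so below is a minimal sketch of what a cross-modal prototypical contrastive objective could look like in PyTorch, assuming CLIP-style encoders, pseudo-identity labels obtained from clustering, and momentum-updated prototype memories. The `PrototypeMemory` class, the momentum and temperature values, and the symmetric image-against-text-prototype / text-against-image-prototype loss are illustrative assumptions, not the authors' PMM/HCM implementation (see the repository for that).

```python
import torch
import torch.nn.functional as F

class PrototypeMemory:
    """Illustrative memory bank: one L2-normalized prototype per
    pseudo-identity cluster, updated with momentum. This is an
    assumption about how a prototypical memory could be maintained,
    not the paper's PMM module."""

    def __init__(self, num_clusters: int, dim: int, momentum: float = 0.2):
        self.protos = F.normalize(torch.randn(num_clusters, dim), dim=1)
        self.momentum = momentum

    @torch.no_grad()
    def update(self, feats: torch.Tensor, labels: torch.Tensor):
        # Momentum-update the prototype of each sample's cluster,
        # then re-normalize so cosine similarity stays well-defined.
        for f, y in zip(feats, labels):
            p = self.momentum * self.protos[y] + (1 - self.momentum) * f
            self.protos[y] = F.normalize(p, dim=0)

def prototypical_contrastive_loss(feats, labels, protos, temperature=0.07):
    """InfoNCE against prototypes: each feature is pulled toward the
    prototype of its own pseudo-identity and pushed away from all
    other prototypes."""
    logits = feats @ protos.t() / temperature  # (batch, num_clusters)
    return F.cross_entropy(logits, labels)

# Toy usage with random features standing in for CLIP outputs.
B, D, K = 8, 512, 100  # batch size, feature dim, number of clusters
img_feats = F.normalize(torch.randn(B, D), dim=1)
txt_feats = F.normalize(torch.randn(B, D), dim=1)
pseudo_ids = torch.randint(0, K, (B,))  # shared pseudo-identity labels

img_mem = PrototypeMemory(K, D)
txt_mem = PrototypeMemory(K, D)

# Cross-modal pairing: images are contrasted against *text* prototypes
# and texts against *image* prototypes, tying both modalities to
# shared per-person anchors.
loss = (prototypical_contrastive_loss(img_feats, pseudo_ids, txt_mem.protos)
        + prototypical_contrastive_loss(txt_feats, pseudo_ids, img_mem.protos))

img_mem.update(img_feats, pseudo_ids)
txt_mem.update(txt_feats, pseudo_ids)
```

The point of contrasting against prototypes rather than individual instances is that each feature is anchored to an aggregate, modality-crossed representation of a person, which is more robust to intra-modal variation and noisy pseudo-labels than instance-to-instance matching.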