PERSON RE-IDENTIFICATION BY REFINED ATTRIBUTE PREDICTION AND WEIGHTED MULTI-PART CONSTRAINTS

Xiao Hu, Xiaoqiang Guo, Zhuqing Jiang, Yun Zhou, Zixuan Yang

2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP), November 2018
DOI: 10.1109/GlobalSIP.2018.8646466
Citations: 1
Abstract
Person re-identification (re-id) aims to match person images captured in non-overlapping camera views. Convolutional Neural Networks (CNNs) have proven powerful for pedestrian feature extraction. However, CNN features focus more on global visual information and are sensitive to environmental variations. In comparison, attribute features contain semantic information and prove more stable under cross-view appearance changes. In this paper, we present a novel network that leverages high-level semantic attributes to enhance pedestrian descriptors. By introducing hand-crafted multi-colorspace and texture information to refine CNN features, we acquire a more invariant and reliable feature representation for attribute prediction. The attribute-based stream is further embedded into a part-based CNN branch for re-id. This part-based CNN is trained with a weighted integration of multi-part identification losses. Experiments on two public datasets demonstrate significant performance improvements of our method over state-of-the-art approaches.
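The "weighted integration of multi-part identification losses" described above can be sketched as a weighted sum of per-part identity classification (cross-entropy) losses, one term per body part. This is a minimal illustrative sketch: the part count, weights, and the use of plain softmax cross-entropy are assumptions for exposition, not the paper's exact formulation.

```python
import math

def cross_entropy(logits, label):
    # Softmax cross-entropy over one part's identity logits,
    # computed via the log-sum-exp trick for numerical stability.
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[label]

def multi_part_loss(part_logits, label, weights):
    # Weighted integration of per-part identification losses:
    # each body part contributes its own cross-entropy term,
    # scaled by an (assumed, illustrative) per-part weight.
    assert len(part_logits) == len(weights)
    return sum(w * cross_entropy(logits, label)
               for w, logits in zip(weights, part_logits))

# Hypothetical example: two parts, two identity classes, equal weights.
loss = multi_part_loss([[5.0, 0.0], [0.0, 5.0]], 0, [0.5, 0.5])
```

In practice such weights could be fixed hyperparameters or learned jointly with the network; the sketch only shows how the per-part terms combine into a single training objective.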