GAN-Based Pose-Aware Regulation for Video-Based Person Re-Identification

Alessandro Borgia, Yang Hua, Elyor Kodirov, N. Robertson
{"title":"GAN-Based Pose-Aware Regulation for Video-Based Person Re-Identification","authors":"Alessandro Borgia, Yang Hua, Elyor Kodirov, N. Robertson","doi":"10.1109/WACV.2019.00130","DOIUrl":null,"url":null,"abstract":"Video-based person re-identification deals with the inherent difficulty of matching sequences with different length, unregulated, and incomplete target pose/viewpoint structure. Common approaches operate either by reducing the problem to the still images case, facing a significant information loss, or by exploiting inter-sequence temporal dependencies as in Siamese Recurrent Neural Networks or in gait analysis. However, in all cases, the inter-sequences pose/viewpoint misalignment is considered, and the existing spatial approaches are mostly limited to the still images context. To this end, we propose a novel approach that can exploit more effectively the rich video information, by accounting for the role that the changing pose/viewpoint factor plays in the sequences matching process. In particular, our approach consists of two components. The first one attempts to complement the original pose-incomplete information carried by the sequences with synthetic GAN-generated images, and fuse their features vectors into a more discriminative viewpoint-insensitive embedding, namely Weighted Fusion (WF). Another one performs an explicit pose-based alignment of sequence pairs to promote coherent feature matching, namely Weighted-Pose Regulation (WPR). Extensive experiments on two large video-based benchmark datasets show that our approach outperforms considerably existing methods.","PeriodicalId":436637,"journal":{"name":"2019 IEEE Winter Conference on Applications of Computer Vision (WACV)","volume":"363 11","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Winter Conference on Applications of Computer Vision (WACV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WACV.2019.00130","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 7

Abstract

Video-based person re-identification deals with the inherent difficulty of matching unregulated sequences of different lengths and with incomplete target pose/viewpoint structure. Common approaches operate either by reducing the problem to the still-image case, at the cost of a significant information loss, or by exploiting inter-sequence temporal dependencies, as in Siamese Recurrent Neural Networks or in gait analysis. However, in all cases, the inter-sequence pose/viewpoint misalignment is not considered, and the existing spatial approaches are mostly limited to the still-image context. To this end, we propose a novel approach that exploits the rich video information more effectively by accounting for the role that the changing pose/viewpoint factor plays in the sequence-matching process. In particular, our approach consists of two components. The first, Weighted Fusion (WF), complements the original pose-incomplete information carried by the sequences with synthetic GAN-generated images and fuses their feature vectors into a more discriminative, viewpoint-insensitive embedding. The second, Weighted-Pose Regulation (WPR), performs an explicit pose-based alignment of sequence pairs to promote coherent feature matching. Extensive experiments on two large video-based benchmark datasets show that our approach considerably outperforms existing methods.
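
The abstract describes the two components only at a high level. As a rough illustration of the Weighted Fusion idea, combining features of the real frames with features of GAN-generated pose-completing frames into a single viewpoint-insensitive embedding, here is a minimal NumPy sketch. The mean pooling, the scalar `synth_weight`, and the function name are assumptions made for illustration; they are not the paper's actual formulation.

```python
import numpy as np

def weighted_fusion(real_feats, synth_feats, synth_weight=0.5):
    """Fuse per-frame features of the original sequence with features of
    GAN-generated, pose-completed frames into one sequence embedding.

    real_feats:   (N, D) array of features from the original frames.
    synth_feats:  (M, D) array of features from the synthetic (GAN) frames.
    synth_weight: relative weight given to the synthetic part; a hypothetical
                  knob, not a value taken from the paper.
    """
    real_part = real_feats.mean(axis=0)      # pool real frames (assumption: mean pooling)
    synth_part = synth_feats.mean(axis=0)    # pool synthetic frames
    fused = (1.0 - synth_weight) * real_part + synth_weight * synth_part
    return fused / (np.linalg.norm(fused) + 1e-12)  # L2-normalise the embedding
```

Along the same lines, a hedged sketch of the Weighted-Pose Regulation idea: align two sequences by comparing only frames that share the same discretised pose/viewpoint label before computing a distance. The bucketing scheme, the fallback, and the uniform per-pose weighting below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from collections import defaultdict

def pose_regulated_distance(feats_a, poses_a, feats_b, poses_b):
    """Compare two sequences pose-by-pose instead of frame-by-frame.

    feats_*: (N, D) per-frame features; poses_*: length-N arrays of discrete
    pose/viewpoint labels (e.g. 0..P-1). Illustrative sketch only.
    """
    buckets_a, buckets_b = defaultdict(list), defaultdict(list)
    for f, p in zip(feats_a, poses_a):
        buckets_a[p].append(f)
    for f, p in zip(feats_b, poses_b):
        buckets_b[p].append(f)

    shared = set(buckets_a) & set(buckets_b)
    if not shared:
        # No common viewpoint: fall back to comparing global mean features (assumption).
        return float(np.linalg.norm(feats_a.mean(0) - feats_b.mean(0)))

    dists = []
    for p in shared:
        ca = np.mean(buckets_a[p], axis=0)  # pose-specific centroid, sequence A
        cb = np.mean(buckets_b[p], axis=0)  # pose-specific centroid, sequence B
        dists.append(np.linalg.norm(ca - cb))
    return float(np.mean(dists))  # equal weight per shared pose (assumption)
```

In both sketches the per-frame features are assumed to come from any CNN backbone; the point is only to show how pose-completed fusion and pose-wise alignment fit into a sequence-matching pipeline.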