Feature pyramid random fusion network for visible-infrared modality person re-identification

光电工程 (Opto-Electronic Engineering), JCR Q3 Engineering · Pub Date: 2020-12-22 · DOI: 10.12086/OEE.2020.190669
Wang Ronggui, Wang Jing, Yang Juan, Xue Lixia

Abstract

Existing works in person re-identification consider only extracting invariant feature representations across visible cameras and ignore imaging features in the infrared domain, so there are few studies on the visible-infrared cross-modality setting. Moreover, most works distinguish the two views by computing similarity on feature maps from a single convolutional layer, which weakens feature learning. To handle these problems, we design a feature pyramid random fusion network (FPRnet) that learns discriminative multi-level semantic features by computing similarities between multi-level convolutional features when matching persons. FPRnet not only reduces the negative effect of intra-modality bias, but also narrows the heterogeneity gap between modalities, accounting for infrared images with very different visual properties. Meanwhile, our work integrates the advantages of learning both local and global features, which effectively addresses visible-infrared person re-identification. Extensive experiments on the public SYSU-MM01 dataset, evaluated in terms of mAP and convergence speed, demonstrate the superiority of our approach over state-of-the-art methods. Furthermore, FPRnet achieves competitive results with a 32.12% mAP recognition rate and much faster convergence.
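The abstract describes matching by computing and fusing similarities across multiple pyramid levels rather than a single convolutional layer. The paper's actual fusion scheme is not given here; as a rough, hypothetical illustration only (plain Python, toy feature vectors in place of real convolutional feature maps), the idea of randomly fusing per-level similarities might be sketched as:

```python
import random


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)


def random_fusion_score(visible_levels, infrared_levels, k=2, seed=0):
    """Toy stand-in for pyramid fusion: pick a random subset of k pyramid
    levels, compute the visible-infrared similarity at each chosen level,
    and average them into one matching score.

    visible_levels / infrared_levels: lists of feature vectors, one per
    convolutional level (shallow -> deep). This is an assumption for
    illustration, not the authors' FPRnet architecture.
    """
    rng = random.Random(seed)
    chosen = rng.sample(range(len(visible_levels)), k)
    sims = [cosine_similarity(visible_levels[i], infrared_levels[i])
            for i in chosen]
    return sum(sims) / len(sims)
```

For identical galleries of level features the fused score is 1.0 by construction, while mismatched features at any sampled level pull the score down; randomizing the sampled levels is one way such a scheme could avoid over-relying on a single layer's features.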
Source journal

光电工程 (Opto-Electronic Engineering), Engineering-Electrical and Electronic Engineering
CiteScore: 2.00
Self-citation rate: 0.00%
Articles published: 6622
About the journal: Founded in 1974, Opto-Electronic Engineering is an academic journal under the supervision of the Chinese Academy of Sciences, co-sponsored by the Institute of Optoelectronic Technology of the Chinese Academy of Sciences (IOTC) and the Optical Society of China (OSC). It is a Chinese core journal and a Chinese science and technology core journal, and it is indexed in domestic and international databases such as Scopus, CA, CSCD, CNKI, and Wanfang. Opto-Electronic Engineering is a peer-reviewed journal whose subject areas span not only the basic disciplines of optics and electricity but also engineering research and applications. It mainly publishes scientific research progress, original results, and reviews in the field of optoelectronics, along with special topics on hot issues and frontier subjects. The main directions of the journal include: optical design and optical engineering; photovoltaic technology and applications; lasers, optical fibres, and communications; optical materials and photonic devices; and optical signal processing.