Multi-scale occlusion suppression network for occluded person re-identification

IF 3.9 · CAS Tier 3 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence) · Pattern Recognition Letters · Pub Date: 2024-07-15 · DOI: 10.1016/j.patrec.2024.07.009
Yunzuo Zhang, Yuehui Yang, Weili Kang, Jiawen Zhen
Pattern Recognition Letters, Volume 185, Pages 66-72. Journal Article. Available at: https://www.sciencedirect.com/science/article/pii/S0167865524002125
Citations: 0

Abstract

In practical application scenarios, occlusion caused by various obstacles greatly undermines the accuracy of person re-identification. Most existing methods for occluded person re-identification focus on inferring the visible parts of the body through auxiliary models; this yields inaccurate part-level feature matching and ignores the shortage of occluded training samples, both of which seriously degrade accuracy. To address these issues, we propose a multi-scale occlusion suppression network (MSOSNet) for occluded person re-identification. Specifically, we first propose a dual occlusion augmentation module (DOAM), which combines random occlusion with our novel cross occlusion to generate more diverse occlusion data. Meanwhile, we design a novel occlusion-aware spatial attention module (OSAM) that enables the network to focus on the non-occluded areas of pedestrian images and effectively extract discriminative features. Finally, we propose a part feature matching module (PFMM) that uses a graph matching algorithm to match the non-occluded body parts of pedestrians. Extensive experimental results on both occluded and holistic datasets validate the effectiveness of our method.
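The paper's implementation is not reproduced on this page. As a rough illustration of the occlusion-augmentation idea behind DOAM, here is a minimal NumPy sketch that pairs random rectangular occlusion with a cross-shaped occlusion. All function names, the band-style interpretation of "cross occlusion", and the parameters (`max_frac`, `band_frac`) are assumptions made for illustration, not the authors' actual design.

```python
import numpy as np

def random_occlusion(img, rng, max_frac=0.4):
    """Zero out one random rectangular patch (random-erasing style)."""
    h, w = img.shape[:2]
    ph = rng.integers(1, max(2, int(h * max_frac)))  # patch height
    pw = rng.integers(1, max(2, int(w * max_frac)))  # patch width
    y = rng.integers(0, h - ph + 1)
    x = rng.integers(0, w - pw + 1)
    out = img.copy()
    out[y:y + ph, x:x + pw] = 0
    return out

def cross_occlusion(img, rng, band_frac=0.15):
    """Zero one horizontal and one vertical band, forming a cross."""
    h, w = img.shape[:2]
    bh = max(1, int(h * band_frac))  # horizontal band height
    bw = max(1, int(w * band_frac))  # vertical band width
    y = rng.integers(0, h - bh + 1)
    x = rng.integers(0, w - bw + 1)
    out = img.copy()
    out[y:y + bh, :] = 0
    out[:, x:x + bw] = 0
    return out

def dual_occlusion_augment(img, rng):
    """Pick one of the two occlusion types at random, as a stand-in
    for combining the two augmentations during training."""
    if rng.random() < 0.5:
        return random_occlusion(img, rng)
    return cross_occlusion(img, rng)
```

In a training pipeline, such a function would typically be applied per image before normalization, so that the network sees a broader distribution of occlusion shapes than random erasing alone provides.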

Source journal: Pattern Recognition Letters (Engineering & Technology, Computer Science: Artificial Intelligence)
CiteScore: 12.40
Self-citation rate: 5.90%
Annual articles: 287
Review time: 9.1 months
Journal description: Pattern Recognition Letters aims at rapid publication of concise articles of a broad interest in pattern recognition. Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.
Latest articles in this journal:
- BGI-Net: Bilayer Graph Inference Network for Low Light Image Enhancement
- Pre-image free graph machine learning with Normalizing Flows
- Fast approximate maximum common subgraph computation
- FracNet: An end-to-end deep learning framework for bone fracture detection
- Synthetic image learning: Preserving performance and preventing Membership Inference Attacks