Lightweight pyramid residual features with attention for person re-identification

R. F. Rachmadi, I. Purnama, S. M. S. Nugroho
DOI: 10.26555/ijain.v9i1.702
Journal: International Journal of Advances in Intelligent Informatics
Published: 2023-03-15 (Journal Article)

Abstract

Person re-identification is a computer vision problem that aims to retrieve images of the same person from an image collection (or gallery). It is useful for searching for or tracking people in a closed environment, such as a mall or building. A common issue with person re-identification models is that they are usually designed for accuracy alone rather than for a balance of accuracy and computational cost, which matters for devices with limited computing power. In this paper, we propose a lightweight residual network with pyramid attention for the person re-identification problem. The lightweight residual network, adapted from the residual network (ResNet) model used in CIFAR dataset experiments, contains no more than two million parameters. A pyramid feature extraction network and an attention module are added to the network to improve the classifier's performance. We use a CPFE (Context-aware Pyramid Feature Extraction) network, which applies atrous convolutions with different dilation rates, to extract the pyramid features. In addition, two attention networks are used in the classifier: a channel-wise attention network and a spatial attention network. The proposed classifier is evaluated on the widely used Market-1501 and DukeMTMC-reID person re-identification datasets. Experiments on both datasets show that the proposed classifier performs well and outperforms the same classifier without the CPFE and attention networks. A further ablation study shows that the proposed classifier has higher information density than other person re-identification methods.
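The two building blocks named in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation; it is a simplified, pure-Python illustration of (a) an atrous (dilated) convolution, where the same fixed-size kernel covers a larger receptive field as the dilation rate grows, which is how a CPFE-style branch extracts pyramid features at several scales without adding parameters, and (b) a softmax-over-pooled-channels weighting as a stand-in for channel-wise attention. All function names and the mean-filter kernel are illustrative choices, not from the paper.

```python
import math

def dilated_conv2d(x, kernel, dilation=1):
    """'Same'-padded 2D convolution with a dilation (atrous) rate.

    x and kernel are lists of lists. The effective receptive field is
    dilation * (k - 1) + 1 per side, while the weight count stays k*k.
    """
    H, W = len(x), len(x[0])
    k = len(kernel)
    def px(i, j):  # zero padding outside the feature map
        return x[i][j] if 0 <= i < H and 0 <= j < W else 0.0
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            s = 0.0
            for a in range(k):
                for b in range(k):
                    # sample the input on a grid spaced by the dilation rate
                    s += kernel[a][b] * px(i + dilation * (a - k // 2),
                                           j + dilation * (b - k // 2))
            out[i][j] = s
    return out

def channel_attention(features):
    """Softmax over globally average-pooled channel descriptors; a
    simplified stand-in for the channel-wise attention branch."""
    pooled = [sum(sum(row) for row in f) / (len(f) * len(f[0]))
              for f in features]
    m = max(pooled)
    e = [math.exp(p - m) for p in pooled]          # stable softmax
    z = sum(e)
    return [[[e[c] / z * v for v in row] for row in f]
            for c, f in enumerate(features)]

# Pyramid features: the same 3x3 mean kernel at dilation rates 1, 2, 3
# captures context at three scales; all outputs keep the input size,
# so they can be fused (e.g. concatenated) and re-weighted by attention.
x = [[float(6 * r + c) for c in range(6)] for r in range(6)]
kernel = [[1.0 / 9.0] * 3 for _ in range(3)]
pyramid = [dilated_conv2d(x, kernel, d) for d in (1, 2, 3)]
attended = channel_attention(pyramid)
```

Because the input values vary linearly, the mean filter reproduces the center value at interior pixels for every dilation rate, which makes the scale-invariance of the 'same' padding easy to check by hand.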