Large-Scale Image Retrieval with Deep Attentive Global Features.

IF: 6.6 | Zone 2 (Computer Science) | JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | International Journal of Neural Systems | Pub Date: 2023-03-01 | DOI: 10.1142/S0129065723500132
Yingying Zhu, Yinghao Wang, Haonan Chen, Zemian Guo, Qiang Huang
{"title":"Large-Scale Image Retrieval with Deep Attentive Global Features.","authors":"Yingying Zhu,&nbsp;Yinghao Wang,&nbsp;Haonan Chen,&nbsp;Zemian Guo,&nbsp;Qiang Huang","doi":"10.1142/S0129065723500132","DOIUrl":null,"url":null,"abstract":"<p><p>How to obtain discriminative features has proved to be a core problem for image retrieval. Many recent works use convolutional neural networks to extract features. However, clutter and occlusion will interfere with the distinguishability of features when using convolutional neural network (CNN) for feature extraction. To address this problem, we intend to obtain high-response activations in the feature map based on the attention mechanism. We propose two attention modules, a spatial attention module and a channel attention module. For the spatial attention module, we first capture the global information and model the relation between channels as a region evaluator, which evaluates and assigns new weights to local features. For the channel attention module, we use a vector with trainable parameters to weight the importance of each feature map. The two attention modules are cascaded to adjust the weight distribution for the feature map, which makes the extracted features more discriminative. Furthermore, we present a scale and mask scheme to scale the major components and filter out the meaningless local features. This scheme can reduce the disadvantages of the various scales of the major components in images by applying multiple scale filters, and filter out the redundant features with the <i>MAX-Mask</i>. Exhaustive experiments demonstrate that the two attention modules are complementary to improve performance, and our network with the three modules outperforms the state-of-the-art methods on four well-known image retrieval datasets.</p>","PeriodicalId":50305,"journal":{"name":"International Journal of Neural Systems","volume":"33 3","pages":"2350013"},"PeriodicalIF":6.6000,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Neural Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1142/S0129065723500132","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Obtaining discriminative features is a core problem in image retrieval. Many recent works use convolutional neural networks (CNNs) to extract features; however, clutter and occlusion interfere with the distinguishability of CNN features. To address this problem, we aim to obtain high-response activations in the feature map using an attention mechanism. We propose two attention modules: a spatial attention module and a channel attention module. In the spatial attention module, we first capture global information and model the relations between channels as a region evaluator, which evaluates local features and assigns them new weights. In the channel attention module, we use a vector with trainable parameters to weight the importance of each feature map. The two attention modules are cascaded to adjust the weight distribution of the feature map, which makes the extracted features more discriminative. Furthermore, we present a scale-and-mask scheme that scales the major components and filters out meaningless local features. The scheme mitigates the varying scales of the major components in images by applying multiple scale filters, and removes redundant features with the MAX-Mask. Extensive experiments demonstrate that the two attention modules are complementary in improving performance, and that our network with all three modules outperforms state-of-the-art methods on four well-known image retrieval datasets.
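
The abstract only sketches the two attention modules at a high level. The snippet below is a minimal PyTorch-style illustration of what cascaded spatial and channel attention over a CNN feature map could look like; the class names, the reduction factor, and the 1x1-convolution "region evaluator" are illustrative assumptions and not the authors' exact architecture.

```python
# Minimal sketch of cascaded spatial/channel attention over a CNN feature map,
# in the spirit of the modules described in the abstract. Shapes and module
# designs are assumptions for illustration only.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Weight each feature map (channel) with a trainable importance vector."""

    def __init__(self, channels: int):
        super().__init__()
        # One trainable weight per channel, initialized to 1 (identity weighting).
        self.weights = nn.Parameter(torch.ones(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        return x * self.weights.view(1, -1, 1, 1)


class SpatialAttention(nn.Module):
    """Score each spatial location using cross-channel (global) context."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # 1x1 convolutions play the role of a "region evaluator": they mix
        # channel information at every location and emit one attention score.
        self.evaluator = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.evaluator(x)   # (batch, 1, height, width)
        return x * attn            # re-weight local features


class CascadedAttention(nn.Module):
    """Cascade the two modules to adjust the feature-map weight distribution."""

    def __init__(self, channels: int):
        super().__init__()
        self.spatial = SpatialAttention(channels)
        self.channel = ChannelAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.channel(self.spatial(x))


if __name__ == "__main__":
    feat = torch.randn(2, 512, 28, 28)      # e.g. a CNN backbone feature map
    out = CascadedAttention(512)(feat)
    print(out.shape)                         # torch.Size([2, 512, 28, 28])
```

The re-weighted feature map would then be pooled into a global descriptor for retrieval; the paper's scale-and-mask scheme (multiple scale filters plus the MAX-Mask) is applied at that stage and is not reproduced in this sketch.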

Source journal: International Journal of Neural Systems (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 11.30
Self-citation rate: 28.80%
Publications: 116
Review time: 24 months
Journal description: The International Journal of Neural Systems is a monthly, rigorously peer-reviewed transdisciplinary journal focusing on information processing in both natural and artificial neural systems. Special interests include machine learning, computational neuroscience and neurology. The journal prioritizes innovative, high-impact articles spanning multiple fields, including neurosciences and computer science and engineering. It adopts an open-minded approach to this multidisciplinary field, serving as a platform for novel ideas and enhanced understanding of collective and cooperative phenomena in computationally capable systems.
Latest articles in this journal:
Epileptic Seizure Detection with an End-to-end Temporal Convolutional Network and Bidirectional Long Short-Term Memory Model
A graph-based neural approach to linear sum assignment problems
Automated Quality Evaluation of Large-Scale Benchmark Datasets for Vision-Language Tasks
sEMG-based Inter-Session Hand Gesture Recognition via Domain Adaptation with Locality Preserving and Maximum Margin
Cultural Differences in the Assessment of Synthetic Voices