Image Retrieval Using Multilayer Feature Aggregation Histogram

IF 4.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Cognitive Computation · Pub Date: 2024-08-27 · DOI: 10.1007/s12559-024-10334-9
Fen Lu, Guang-Hai Liu, Xiao-Zhi Gao
Citations: 0

Abstract

Aggregating diverse features into a compact representation is a hot issue in image retrieval; however, aggregating the differing features of multiple layers into a discriminative representation remains challenging. Inspired by value-guided neural mechanisms, a novel representation method, the multilayer feature aggregation histogram, is proposed for image retrieval. It aggregates multilayer features, such as low-, mid-, and high-layer features, into a discriminative yet compact representation by simulating the neural mechanisms that mediate value-guided decision making. The highlights of the proposed method are as follows: (1) A detail-attentive map is proposed to represent the aggregation of low- and mid-layer features; it can be used to evaluate distinguishable detail features. (2) A simple yet straightforward aggregation method is proposed to re-evaluate distinguishable high-layer features; by using a semantic-attentive map, it provides aggregated features covering detail, objects, and semantics. (3) A novel whitening method, difference whitening, is introduced to reduce dimensionality; it does not require a training dataset of semantically similar images and provides a compact yet discriminative representation. Experiments on popular benchmark datasets demonstrate that the proposed method clearly improves retrieval performance in terms of the mAP metric: with a 128-dimensional representation, it achieves mAP scores higher than the DSFH, DWDF, and OSAH methods by 0.083, 0.043, and 0.022 on the Oxford5k dataset and by 0.195, 0.036, and 0.071 on the Paris6k dataset. The difference whitening method also makes it convenient to transfer a deep learning model to a new task. Our method provides competitive performance compared with existing aggregation methods and can retrieve scene images with similar colors, objects, and semantics.
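
The abstract does not describe how the detail-attentive and semantic-attentive maps are computed, so the sketch below only illustrates the general idea of multilayer feature aggregation: activations from low-, mid-, and high-level layers of a pretrained CNN are pooled and concatenated into one compact descriptor. The VGG16 backbone, the layer indices, and global average pooling are assumptions chosen for illustration, not the authors' method.

```python
# Illustrative sketch only: generic multilayer CNN feature aggregation for
# image retrieval. The backbone (VGG16), layer indices, and pooling choice
# are assumptions; the paper's detail-/semantic-attentive maps are not
# described in the abstract and are NOT implemented here.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pretrained backbone used purely as a multilayer feature extractor.
vgg = models.vgg16(weights="IMAGENET1K_V1").features.eval()

# Hypothetical choice of low-, mid-, and high-layer activations
# (relu2_2, relu3_3, relu5_3 in torchvision's VGG16 indexing).
LAYER_IDS = {8: "low", 15: "mid", 29: "high"}

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def multilayer_descriptor(image_path: str) -> torch.Tensor:
    """Pool activations from several layers and concatenate them."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    parts = []
    for idx, layer in enumerate(vgg):
        x = layer(x)
        if idx in LAYER_IDS:
            pooled = x.mean(dim=(2, 3))          # global average pooling
            parts.append(F.normalize(pooled, dim=1))
    # L2-normalized concatenation of the per-layer descriptors.
    return F.normalize(torch.cat(parts, dim=1), dim=1).squeeze(0)
```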

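Difference whitening is not defined in the abstract; as a stand-in for the dimensionality-reduction step, the following sketch applies ordinary PCA whitening to project descriptors to the 128 dimensions used in the reported experiments. This is plain PCA whitening fitted on an unlabeled descriptor set, not the paper's difference whitening.

```python
# Stand-in dimensionality reduction: standard PCA whitening to 128-D.
import numpy as np

def fit_pca_whitening(descriptors: np.ndarray, out_dim: int = 128):
    """Fit PCA whitening on a descriptor matrix of shape (n, d)."""
    mean = descriptors.mean(axis=0)
    centered = descriptors - mean
    cov = centered.T @ centered / (len(descriptors) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:out_dim]     # keep top components
    P = eigvecs[:, order] / np.sqrt(eigvals[order] + 1e-9)
    return mean, P

def project(descriptor: np.ndarray, mean: np.ndarray, P: np.ndarray):
    """Whiten, reduce to out_dim, and re-normalize to unit length."""
    z = (descriptor - mean) @ P
    return z / (np.linalg.norm(z) + 1e-12)
```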
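
Retrieval quality above is reported as mean average precision (mAP). A minimal implementation of the standard AP/mAP definition over ranked result lists is given below; note that the official Oxford5k/Paris6k protocol additionally ignores "junk" images, which this sketch does not handle.

```python
# Minimal mAP evaluation for ranked retrieval results (standard definition).
import numpy as np

def average_precision(ranked_ids, relevant_ids):
    """AP for one query: ranked_ids is the full retrieval order,
    relevant_ids the set of ground-truth matches."""
    relevant = set(relevant_ids)
    hits, precisions = 0, []
    for rank, image_id in enumerate(ranked_ids, start=1):
        if image_id in relevant:
            hits += 1
            precisions.append(hits / rank)   # precision at this recall point
    return sum(precisions) / max(len(relevant), 1)

def mean_average_precision(all_rankings, all_relevant):
    """mAP over a list of queries."""
    aps = [average_precision(r, g) for r, g in zip(all_rankings, all_relevant)]
    return float(np.mean(aps))
```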

Source journal: Cognitive Computation
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; NEUROSCIENCES
CiteScore: 9.30
Self-citation rate: 3.70%
Articles per year: 116
Review time: >12 weeks
Journal description: Cognitive Computation is an international, peer-reviewed, interdisciplinary journal that publishes cutting-edge articles describing original basic and applied work involving biologically-inspired computational accounts of all aspects of natural and artificial cognitive systems. It provides a new platform for the dissemination of research, current practices and future trends in the emerging discipline of cognitive computation that bridges the gap between life sciences, social sciences, engineering, physical and mathematical sciences, and humanities.