An Unsupervised Genetic Algorithm Framework for Rank Selection and Fusion on Image Retrieval

Lucas Pascotti Valem, D. C. G. Pedronette
DOI: 10.1145/3323873.3325022
Published in: Proceedings of the 2019 on International Conference on Multimedia Retrieval, 2019-06-05
Citations: 9

Abstract

Despite major advances in feature development for low- and mid-level representations, a single visual feature is often insufficient to achieve effective retrieval results across different scenarios. Since diverse visual properties provide distinct and often complementary information for the same query, combining different features, including handcrafted and learned ones, has become an established trend in image retrieval. An intrinsically difficult task consists of selecting and combining the features that yield a highly effective result, which is often supported by supervised learning methods. However, in the absence of labeled data, selecting and fusing features in a completely unsupervised fashion becomes an essential, although very challenging, task. In this work, an unsupervised genetic algorithm framework for rank selection and fusion is proposed. The genetic algorithm employs effectiveness estimation measures as fitness functions, making the evolutionary process fully unsupervised. Our approach was evaluated on 3 public datasets and 35 different descriptors, achieving relative gains of up to +53.96% in scenarios with more than 8 billion possible combinations of rankers. The framework was also compared to different baselines, including state-of-the-art methods.
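To make the idea concrete, the sketch below shows one way a genetic algorithm can select a subset of rankers and fuse them without labels. It is an illustrative toy, not the authors' implementation: the Borda-count fusion, the reciprocal-neighbor "authority" fitness, and all parameter values are assumptions standing in for the paper's rank-fusion and effectiveness-estimation measures.

```python
import random

random.seed(0)
N_ITEMS, N_RANKERS, K = 30, 6, 5

# Toy ranked lists: rankers[r][i] = ranking of all items for query item i.
# In a real system each ranker would come from one image descriptor.
rankers = [
    [random.sample(range(N_ITEMS), N_ITEMS) for _ in range(N_ITEMS)]
    for _ in range(N_RANKERS)
]

def fuse(selected):
    """Borda-count fusion of the selected rankers (one simple choice)."""
    fused = []
    for i in range(N_ITEMS):
        score = [0] * N_ITEMS
        for r in selected:
            for pos, item in enumerate(rankers[r][i]):
                score[item] += N_ITEMS - pos
        fused.append(sorted(range(N_ITEMS), key=lambda x: -score[x]))
    return fused

def authority(fused, k=K):
    """Unsupervised effectiveness estimate: fraction of top-k neighbors
    that agree reciprocally (a stand-in for the paper's measures)."""
    topk = [set(lst[:k]) for lst in fused]
    hits = sum(1 for i in range(N_ITEMS) for x in topk[i] if i in topk[x])
    return hits / (N_ITEMS * k)

def fitness(mask):
    """Each individual is a bitmask choosing which rankers to fuse."""
    selected = [r for r, bit in enumerate(mask) if bit]
    return authority(fuse(selected)) if selected else 0.0

def evolve(pop_size=10, generations=5, p_mut=0.2):
    pop = [[random.randint(0, 1) for _ in range(N_RANKERS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_RANKERS)
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < p_mut:         # bit-flip mutation
                j = random.randrange(N_RANKERS)
                child[j] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 3))
```

Because the fitness function is computed purely from the structure of the fused ranked lists, no relevance labels are needed at any point, which is the property the framework exploits.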