Image matching using GPT correlation associated with simplified HOG patterns

Shizhi Zhang, T. Wakahara, Yukihiko Yamashita
DOI: 10.1109/IPTA.2017.8310122
Published in: 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), November 2017
Citations: 4

Abstract

GAT (Global Affine Transformation) and GPT (Global Projection Transformation) matching, proposed by Wakahara and Yamashita, compute the optimal AT (affine transformation) and PT (2D projection transformation), respectively. These matching criteria achieve deformation-tolerant matching by maximizing the normalized cross-correlation between a template and a GAT/GPT-superimposed image. To shorten the computation time, Wakahara and Yamashita also proposed acceleration algorithms for GAT/GPT matching. Later, Wakahara et al. proposed an enhanced GPT matching that computes all of the optimal PT parameters simultaneously, overcoming the incompatibility that arises among parameters during the matching process. Zhang et al. showed that these matching techniques do not account for conservation of the L2 norm, and introduced norm-normalization factors that yield accurate and stable matching. All of these correlation-based techniques are well suited to "whole-to-whole" image matching, but perform poorly in "whole-to-part" matching, where complex backgrounds and noise degrade the correlation. This work first proposes simplified HOG patterns for the enhanced GPT matching with norm normalization, providing robustness against noise and background clutter. Second, it proposes an acceleration algorithm for the proposed matching criterion based on several precomputed reference tables. Experiments on the Graffiti dataset show that the proposed method exhibits outstanding matching ability compared with the original GPT correlation matching and the well-known combination of the SURF feature descriptor and the RANSAC algorithm. Furthermore, the acceleration algorithm reduces the computational complexity of the proposed method significantly, to below double figures.
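The two core ingredients described above — maximizing a normalized cross-correlation between a template and a deformed image, and describing image content with gradient-orientation (HOG-style) patterns — can be illustrated with a minimal numpy sketch. This is not the paper's actual GPT algorithm (which optimizes the projection-transformation parameters and uses the authors' specific simplified HOG patterns and norm-normalization factors); the function names and the single whole-patch histogram below are illustrative assumptions only.

```python
import numpy as np

def normalized_cross_correlation(template, image):
    """Zero-mean normalized cross-correlation between two
    equally sized 2D arrays; returns a value in [-1, 1]."""
    t = template - template.mean()
    f = image - image.mean()
    denom = np.linalg.norm(t) * np.linalg.norm(f)
    if denom == 0.0:
        return 0.0  # constant patch: correlation undefined, report 0
    return float(np.sum(t * f) / denom)

def orientation_histogram(patch, n_bins=8):
    """Toy HOG-like descriptor: one histogram of unsigned gradient
    orientations over the whole patch, weighted by gradient
    magnitude and L2-normalized."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned angle in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins, mag)  # accumulate magnitude into angle bins
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

Because the orientation histogram discards absolute intensity and the NCC is invariant to brightness offset and gain, matching such descriptors (rather than raw pixels) is what gives HOG-based correlation its robustness to illumination changes, noise, and background clutter in the "whole-to-part" setting.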