Boosting the Performance of Scene Recognition via Offline Feature-Shifts and Search Window Weights

Chu-Tak Li, W. Siu, D. Lun
{"title":"通过离线特征转移和搜索窗口权重提高场景识别性能","authors":"Chu-Tak Li, W. Siu, D. Lun","doi":"10.1109/ICDSP.2018.8631883","DOIUrl":null,"url":null,"abstract":"This paper presents a key frame recognition algorithm, using novel offline feature-shifts approach and search window weights. We extract effective feature patches from key frames with an offline feature-shifts approach for real-time key frame recognition. We focus on practical situations in which blurring and shifts in viewpoints occur in our dataset. We compare our method with some conventional keypoint-based matching methods and the newest CNN features for scene recognition. The experimental results illustrate that our method can reasonably preserve the performance in key frame recognition when comparing with methods using online feature-shifts approach. Our proposed method provides larger tolerance of unmatched pairs which is useful for decision making in real-time systems. Moreover, our method is robust to illumination and blurring. We achieve 90% accuracy in a nighttime sequence while CNN approach only attains 60% accuracy. Our method only requires 33.8 ms to match a frame on average using a regular desktop, which is 4 times faster than CNN approach with only CPU mode.","PeriodicalId":218806,"journal":{"name":"2018 IEEE 23rd International Conference on Digital Signal Processing (DSP)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Boosting the Performance of Scene Recognition via Offline Feature-Shifts and Search Window Weights\",\"authors\":\"Chu-Tak Li, W. Siu, D. Lun\",\"doi\":\"10.1109/ICDSP.2018.8631883\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents a key frame recognition algorithm, using novel offline feature-shifts approach and search window weights. We extract effective feature patches from key frames with an offline feature-shifts approach for real-time key frame recognition. We focus on practical situations in which blurring and shifts in viewpoints occur in our dataset. We compare our method with some conventional keypoint-based matching methods and the newest CNN features for scene recognition. The experimental results illustrate that our method can reasonably preserve the performance in key frame recognition when comparing with methods using online feature-shifts approach. Our proposed method provides larger tolerance of unmatched pairs which is useful for decision making in real-time systems. Moreover, our method is robust to illumination and blurring. We achieve 90% accuracy in a nighttime sequence while CNN approach only attains 60% accuracy. 
Our method only requires 33.8 ms to match a frame on average using a regular desktop, which is 4 times faster than CNN approach with only CPU mode.\",\"PeriodicalId\":218806,\"journal\":{\"name\":\"2018 IEEE 23rd International Conference on Digital Signal Processing (DSP)\",\"volume\":\"57 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE 23rd International Conference on Digital Signal Processing (DSP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDSP.2018.8631883\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE 23rd International Conference on Digital Signal Processing (DSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDSP.2018.8631883","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

This paper presents a key frame recognition algorithm that uses a novel offline feature-shifts approach and search window weights. We extract effective feature patches from key frames with the offline feature-shifts approach for real-time key frame recognition, focusing on practical situations in which blurring and viewpoint shifts occur in our dataset. We compare our method with conventional keypoint-based matching methods and recent CNN features for scene recognition. The experimental results show that our method largely preserves key frame recognition performance compared with methods using an online feature-shifts approach. The proposed method also provides a larger tolerance of unmatched pairs, which is useful for decision making in real-time systems, and it is robust to illumination changes and blurring. We achieve 90% accuracy on a nighttime sequence, while the CNN approach attains only 60%. On average, our method requires only 33.8 ms to match a frame on a regular desktop, which is 4 times faster than the CNN approach in CPU-only mode.
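The abstract does not spell out the matching procedure, but the core idea it describes is patch-based matching of offline-extracted feature patches against a query frame, with weights over a local search window and a tolerance for unmatched pairs. The following is a minimal illustrative sketch of that idea; the Gaussian window weights, NCC scoring, and all thresholds and parameter names are assumptions for the example, not the authors' implementation.

```python
"""Illustrative sketch of weighted search-window patch matching (assumed details)."""
import numpy as np


def ncc_scores(patch, window):
    """Normalized cross-correlation of `patch` at every valid position in `window`."""
    ph, pw = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-8)
    out = np.empty((window.shape[0] - ph + 1, window.shape[1] - pw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            w = window[y:y + ph, x:x + pw]
            w = (w - w.mean()) / (w.std() + 1e-8)
            out[y, x] = (p * w).mean()
    return out


def gaussian_weights(shape, sigma):
    """Search-window weights favouring positions near the patch's offline location."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))


def match_key_frame(frame, patches, locations, search_radius=12,
                    sigma=6.0, score_thr=0.6, min_matched_ratio=0.5):
    """Decide whether grayscale `frame` matches a key frame described by
    offline `patches` extracted at `locations` (top-left corners).
    A patch counts as matched when its best weighted score in the local
    search window exceeds `score_thr`; the frame is accepted when enough
    patches match, tolerating some unmatched pairs."""
    matched = 0
    for patch, (py, px) in zip(patches, locations):
        ph, pw = patch.shape
        y0, x0 = max(py - search_radius, 0), max(px - search_radius, 0)
        y1 = min(py + ph + search_radius, frame.shape[0])
        x1 = min(px + pw + search_radius, frame.shape[1])
        window = frame[y0:y1, x0:x1]
        if window.shape[0] < ph or window.shape[1] < pw:
            continue  # patch falls outside the frame; count as unmatched
        scores = ncc_scores(patch, window) * gaussian_weights(
            (window.shape[0] - ph + 1, window.shape[1] - pw + 1), sigma)
        if scores.max() >= score_thr:
            matched += 1
    return matched / max(len(patches), 1) >= min_matched_ratio
```

The `min_matched_ratio` threshold is one simple way to realize the "larger tolerance of unmatched pairs" mentioned in the abstract: a frame is accepted even if some patches fail to match, which keeps the decision stable under blur or partial viewpoint change.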