Better and Faster Deep Image Fusion with Spatial Frequency

Zhuang Miao, Yang Li, Jiabao Wang, Jixiao Wang, Rui Zhang
{"title":"Better and Faster Deep Image Fusion with Spatial Frequency","authors":"Zhuang Miao, Yang Li, Jiabao Wang, Jixiao Wang, Rui Zhang","doi":"10.1109/ICCRD51685.2021.9386515","DOIUrl":null,"url":null,"abstract":"Recent years have witnessed wide application of infrared and visible image fusion. However, most existing deep fusion methods focused primarily on improving the accuracy without taking much consideration of efficiency. In this paper, our goal is to build a better, faster and stronger image fusion method, which can reduce the computation complexity significantly while keep the fusion quality unchanged. To this end, we systematically analyzed the image fusion accuracy for different depth of image features and designed a lightweight backbone network with spatial frequency for infrared and visible image fusion. Unlikely previous methods based on traditional convolutional neural networks, our method can greatly preserve the detail information during image fusion. We analyze the spatial frequency strategy of our prototype and show that it can maintain more edges and textures information during fusion. Furthermore, our method has fewer parameters and lower computation in comparison of state-of-the-art fusion methods. Experiments conducted on benchmarks demonstrate that our method can achieve compelling fusion results over 97.0% decline of parameter size, running 5 times faster than state-of-the-art fusion methods.","PeriodicalId":294200,"journal":{"name":"2021 IEEE 13th International Conference on Computer Research and Development (ICCRD)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 13th International Conference on Computer Research and Development (ICCRD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCRD51685.2021.9386515","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Recent years have witnessed wide application of infrared and visible image fusion. However, most existing deep fusion methods focus primarily on improving accuracy without giving much consideration to efficiency. In this paper, our goal is to build a better, faster and stronger image fusion method that reduces computational complexity significantly while keeping fusion quality unchanged. To this end, we systematically analyze image fusion accuracy for image features at different depths and design a lightweight backbone network with spatial frequency for infrared and visible image fusion. Unlike previous methods based on traditional convolutional neural networks, our method preserves detail information well during image fusion. We analyze the spatial frequency strategy of our prototype and show that it retains more edge and texture information during fusion. Furthermore, our method has fewer parameters and lower computational cost than state-of-the-art fusion methods. Experiments conducted on benchmarks demonstrate that our method achieves compelling fusion results with over a 97.0% reduction in parameter size, running 5 times faster than state-of-the-art fusion methods.
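For context, the spatial frequency mentioned in the abstract is a classical activity measure that combines row-wise and column-wise gradient energy of an image. The sketch below is a minimal illustration of that measure and of a hypothetical patch-wise fusion rule that weights each region toward the source with the higher spatial frequency; it is not the paper's implementation. The function names, the patch size, and the soft weighting scheme are assumptions for illustration, whereas the paper applies spatial frequency to deep features of its lightweight backbone.

```python
import numpy as np

def spatial_frequency(img: np.ndarray) -> float:
    """Classical spatial frequency: sqrt(RF^2 + CF^2), where RF/CF are the
    RMS of horizontal/vertical pixel differences."""
    img = img.astype(np.float64)
    dh = np.diff(img, axis=1)  # horizontal differences (row frequency)
    dv = np.diff(img, axis=0)  # vertical differences (column frequency)
    rf = np.sqrt(np.mean(dh ** 2)) if dh.size else 0.0
    cf = np.sqrt(np.mean(dv ** 2)) if dv.size else 0.0
    return float(np.sqrt(rf ** 2 + cf ** 2))

def sf_weighted_fusion(ir: np.ndarray, vis: np.ndarray, patch: int = 16) -> np.ndarray:
    """Hypothetical fusion rule (assumption, not the paper's method):
    for each patch, weight the two sources by their relative spatial
    frequency, so the source with more edges/texture dominates."""
    assert ir.shape == vis.shape, "inputs must be registered and same size"
    fused = np.zeros(ir.shape, dtype=np.float64)
    h, w = ir.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            a = ir[y:y + patch, x:x + patch].astype(np.float64)
            b = vis[y:y + patch, x:x + patch].astype(np.float64)
            sa, sb = spatial_frequency(a), spatial_frequency(b)
            wa = sa / (sa + sb + 1e-12)  # activity-based weight for the IR patch
            fused[y:y + patch, x:x + patch] = wa * a + (1.0 - wa) * b
    return fused

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ir = rng.random((128, 128))
    vis = rng.random((128, 128))
    print("SF(ir) =", spatial_frequency(ir))
    print("fused shape:", sf_weighted_fusion(ir, vis).shape)
```

Weighting by relative spatial frequency, rather than hard selection, keeps the fused output smooth across patch boundaries while still favoring the source with stronger edges and texture.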