Better and Faster Deep Image Fusion with Spatial Frequency
Zhuang Miao, Yang Li, Jiabao Wang, Jixiao Wang, Rui Zhang
2021 IEEE 13th International Conference on Computer Research and Development (ICCRD), published 2021-01-05
DOI: 10.1109/ICCRD51685.2021.9386515 (https://doi.org/10.1109/ICCRD51685.2021.9386515)
Abstract
Recent years have witnessed wide application of infrared and visible image fusion. However, most existing deep fusion methods focus primarily on improving accuracy with little consideration of efficiency. In this paper, our goal is to build a better, faster, and stronger image fusion method that significantly reduces computational complexity while keeping the fusion quality unchanged. To this end, we systematically analyze image fusion accuracy for image features of different depths and design a lightweight backbone network with spatial frequency for infrared and visible image fusion. Unlike previous methods based on traditional convolutional neural networks, our method largely preserves detail information during image fusion. We analyze the spatial frequency strategy of our prototype and show that it maintains more edge and texture information during fusion. Furthermore, our method has fewer parameters and lower computational cost than state-of-the-art fusion methods. Experiments on benchmarks demonstrate that our method achieves compelling fusion results with over a 97.0% reduction in parameter size, running 5 times faster than state-of-the-art fusion methods.
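The abstract credits spatial frequency for preserving edges and textures during fusion. The sketch below is a minimal NumPy illustration of the standard spatial-frequency metric (SF = sqrt(RF² + CF²), the RMS of row and column first differences) together with a classic patch-wise fusion rule that keeps, per block, the source with the higher SF. The block size and the max-SF selection rule are assumptions for illustration only; the paper applies spatial frequency to deep features inside a lightweight network rather than to raw pixels as done here.

```python
import numpy as np

def spatial_frequency(block: np.ndarray) -> float:
    """Standard spatial frequency: SF = sqrt(RF^2 + CF^2), where RF and CF
    are the RMS first differences along rows and columns, respectively."""
    b = block.astype(np.float64)
    rf2 = np.mean((b[:, 1:] - b[:, :-1]) ** 2)  # row frequency (horizontal differences)
    cf2 = np.mean((b[1:, :] - b[:-1, :]) ** 2)  # column frequency (vertical differences)
    return float(np.sqrt(rf2 + cf2))

def fuse_by_spatial_frequency(ir: np.ndarray, vis: np.ndarray,
                              block: int = 16) -> np.ndarray:
    """Illustrative patch-wise fusion: for each block, keep the pixels from
    whichever registered source (infrared or visible) has the higher spatial
    frequency. A classic SF-based rule, not the network described in the paper."""
    assert ir.shape == vis.shape, "inputs must be registered and equal-sized"
    fused = np.empty_like(ir, dtype=np.float64)
    h, w = ir.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            a = ir[i:i + block, j:j + block]
            b = vis[i:i + block, j:j + block]
            fused[i:i + block, j:j + block] = (
                a if spatial_frequency(a) >= spatial_frequency(b) else b
            )
    return fused

# Usage with random grayscale arrays as stand-ins for a registered IR/visible pair:
ir = np.random.rand(256, 256)
vis = np.random.rand(256, 256)
out = fuse_by_spatial_frequency(ir, vis)
```

Because a block with larger intensity differences between neighboring pixels scores a higher SF, this rule favors whichever source carries more edge and texture detail in each region, which is the intuition the abstract appeals to.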