Fast Multi Focus Image Fusion Using Determinant

Mostafa Amin-Naji, A. Aghagolzadeh, Hami Mahdavinataj
{"title":"Fast Multi Focus Image Fusion Using Determinant","authors":"Mostafa Amin-Naji, A. Aghagolzadeh, Hami Mahdavinataj","doi":"10.1109/MVIP53647.2022.9738555","DOIUrl":null,"url":null,"abstract":"This paper presents fast pixel-wise multi-focus image fusion in the spatial domain without bells and whistles. The proposed method just uses the determinant of the sliding windows from the input images as a metric to create a pixel-wise decision map. The sliding windows of 15 pixels with the stride of 7 pixels are passed through the input images. Then it creates a pixel-wise decision map for fusion multi-focus images. Also, some simple tricks like global image threshold using Otsu’s method and removal of small objects by morphological closing operation are used to refine the pixel-wise decision map. This method is high-speed and can fuse a pair of 512x512 multi-focus images around 0.05 seconds (50 milliseconds) in our hardware. We compared it with 22 prominent methods in the transform domain, spatial domain, and deep learning based methods that their source codes are available, and our method is faster than all of them. We conducted the objective and subjective experiments on the Lytro dataset, and our method can compete with their results. The proposed method may not have the best fusion quality among state-of-the-art methods, but to the best of our knowledge, this is the fastest pixel-wise method and very suitable for real-time image processing. 
All material and source code will be available in https://github.com/mostafaaminnaji/FastDetFuse and http://imagefusion.ir.","PeriodicalId":184716,"journal":{"name":"2022 International Conference on Machine Vision and Image Processing (MVIP)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Machine Vision and Image Processing (MVIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MVIP53647.2022.9738555","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

This paper presents a fast pixel-wise multi-focus image fusion method in the spatial domain, without bells and whistles. The proposed method uses only the determinant of sliding windows from the input images as a metric to create a pixel-wise decision map. Sliding windows of 15×15 pixels with a stride of 7 pixels are passed over the input images, producing a pixel-wise decision map for fusing the multi-focus images. Simple refinements, such as global image thresholding with Otsu's method and removal of small objects by a morphological closing operation, are then applied to the decision map. The method is very fast: on our hardware it fuses a pair of 512×512 multi-focus images in around 0.05 seconds (50 milliseconds). We compared it with 22 prominent transform-domain, spatial-domain, and deep-learning-based methods whose source code is available, and our method is faster than all of them. We conducted objective and subjective experiments on the Lytro dataset, where our method is competitive with their results. The proposed method may not have the best fusion quality among state-of-the-art methods, but to the best of our knowledge it is the fastest pixel-wise method and is well suited to real-time image processing. All material and source code will be available at https://github.com/mostafaaminnaji/FastDetFuse and http://imagefusion.ir.
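The core idea described above can be sketched in a few lines of NumPy. This is a minimal, hypothetical reconstruction from the abstract alone, not the authors' released code: it slides 15×15 windows with stride 7 over each image, uses the absolute determinant of each window as the focus measure, and picks, per pixel, the source image whose window scored higher. The Otsu-thresholding and morphological-closing refinements mentioned in the abstract are omitted here for brevity; all function names below are our own.

```python
import numpy as np

def det_focus_score(img, win=15, stride=7):
    """Per-pixel focus score: |det| of each win x win sliding window.

    A sharply focused patch has high texture, so its pixel matrix is
    close to full rank and |det| tends to be large; a defocused
    (smoothed) patch has nearly dependent rows and |det| near zero.
    Overlapping windows are combined by taking the maximum score.
    """
    h, w = img.shape
    score = np.zeros((h, w), dtype=np.float64)
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = img[y:y + win, x:x + win].astype(np.float64)
            d = abs(np.linalg.det(patch))
            block = score[y:y + win, x:x + win]
            np.maximum(block, d, out=block)  # keep best overlapping score
    return score

def fuse_pair(img_a, img_b, win=15, stride=7):
    """Fuse two registered multi-focus images via a pixel-wise decision map.

    The paper additionally refines the map with Otsu thresholding and
    morphological closing; this sketch uses the raw decision map.
    """
    score_a = det_focus_score(img_a, win, stride)
    score_b = det_focus_score(img_b, win, stride)
    decision = score_a >= score_b          # True where img_a is sharper
    return np.where(decision, img_a, img_b)
```

Using the determinant keeps the per-window cost to a single LU factorization of a 15×15 matrix, which, combined with the stride-7 sampling (each pixel is touched by only a handful of windows), is consistent with the reported ~50 ms fusion time.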