An Efficiency Correlation between Various Image Fusion Techniques

S. BharaniNayagi, T. S. S. Angel
{"title":"各种图像融合技术之间的效率相关性","authors":"S. BharaniNayagi, T. S. S. Angel","doi":"10.1142/s1469026823410109","DOIUrl":null,"url":null,"abstract":"Multi-focus images can be fused by the deep learning (DL) approach. Initially, multi-focus image fusion (MFIF) is used to perform the classification task. The classifier of the convolutional neural network (CNN) is implemented to determine whether the pixel is defocused or focused. The lack of available data to train the system is one of the demerits of the MFIF methodology. Instead of using MFIF, the unsupervised model of the DL approach is affordable and appropriate for image fusion. By establishing a framework of feature extraction, fusion, and reconstruction, we generate a Deep CNN of [Formula: see text] End-to-End Unsupervised Model. It is defined as a Siamese Multi-Scale feature extraction model. It can extract only three different source images of the same scene, which is the major disadvantage of the system. Due to the possibility of low intensity and blurred images, considering only three source images may lead to poor performance. The main objective of the work is to consider [Formula: see text] parameters to define [Formula: see text] source images. Many existing systems are compared to the proposed method for extracting features from images. Experimental results of various approaches show that Enhanced Siamese Multi-Scale feature extraction used along with Structure Similarity Measure (SSIM) produces an excellent fused image. It is determined by undergoing quantitative and qualitative studies. The analysis is done based on objective examination and visual traits. By increasing the parameters, the objective assessment increases in performance rate and complexity with time.","PeriodicalId":422521,"journal":{"name":"Int. J. Comput. Intell. 
Appl.","volume":"147 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"An Efficiency Correlation between Various Image Fusion Techniques\",\"authors\":\"S. BharaniNayagi, T. S. S. Angel\",\"doi\":\"10.1142/s1469026823410109\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multi-focus images can be fused by the deep learning (DL) approach. Initially, multi-focus image fusion (MFIF) is used to perform the classification task. The classifier of the convolutional neural network (CNN) is implemented to determine whether the pixel is defocused or focused. The lack of available data to train the system is one of the demerits of the MFIF methodology. Instead of using MFIF, the unsupervised model of the DL approach is affordable and appropriate for image fusion. By establishing a framework of feature extraction, fusion, and reconstruction, we generate a Deep CNN of [Formula: see text] End-to-End Unsupervised Model. It is defined as a Siamese Multi-Scale feature extraction model. It can extract only three different source images of the same scene, which is the major disadvantage of the system. Due to the possibility of low intensity and blurred images, considering only three source images may lead to poor performance. The main objective of the work is to consider [Formula: see text] parameters to define [Formula: see text] source images. Many existing systems are compared to the proposed method for extracting features from images. Experimental results of various approaches show that Enhanced Siamese Multi-Scale feature extraction used along with Structure Similarity Measure (SSIM) produces an excellent fused image. It is determined by undergoing quantitative and qualitative studies. The analysis is done based on objective examination and visual traits. 
By increasing the parameters, the objective assessment increases in performance rate and complexity with time.\",\"PeriodicalId\":422521,\"journal\":{\"name\":\"Int. J. Comput. Intell. Appl.\",\"volume\":\"147 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-03-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Int. J. Comput. Intell. Appl.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1142/s1469026823410109\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Comput. Intell. Appl.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1142/s1469026823410109","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Multi-focus images can be fused with deep learning (DL). Initially, multi-focus image fusion (MFIF) is framed as a classification task: a convolutional neural network (CNN) classifier determines whether each pixel is focused or defocused. A lack of training data is one of the demerits of the MFIF methodology; an unsupervised DL model is a more affordable and appropriate alternative for image fusion. By establishing a framework of feature extraction, fusion, and reconstruction, we build a deep CNN as an [Formula: see text] end-to-end unsupervised model, defined as a Siamese multi-scale feature extraction model. Its major disadvantage is that it can extract only three different source images of the same scene; because images may be low-intensity or blurred, relying on only three source images can lead to poor performance. The main objective of this work is to consider [Formula: see text] parameters to define [Formula: see text] source images. The proposed feature-extraction method is compared with many existing systems. Experimental results across various approaches show that enhanced Siamese multi-scale feature extraction combined with the Structural Similarity Measure (SSIM) produces an excellent fused image, as determined by quantitative and qualitative studies based on objective examination and visual traits. As the number of parameters increases, the objective assessment improves in performance rate at the cost of greater time complexity.
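The abstract describes deciding, per region, which source image is in focus and taking the fused result from that source. The paper does this with a CNN classifier, which cannot be reconstructed from the abstract alone; as a rough illustration of the same idea, the sketch below uses a classical stand-in focus measure (local variance per patch, a hypothetical choice not taken from the paper) to select the sharpest source for each block:

```python
import numpy as np

def fuse_by_focus(sources: list[np.ndarray], patch: int = 8) -> np.ndarray:
    """Block-wise multi-focus fusion sketch.

    For each patch, copy the block from whichever source image has the
    highest local variance -- a simple proxy for "in focus". The paper's
    method uses a learned CNN focus classifier instead; this is only an
    illustrative classical baseline.
    """
    h, w = sources[0].shape
    fused = np.zeros((h, w), dtype=np.float64)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            blocks = [s[i:i + patch, j:j + patch].astype(np.float64)
                      for s in sources]
            # Sharp (focused) regions have more local contrast, hence
            # higher variance, than defocused/blurred ones.
            best = max(blocks, key=lambda b: b.var())
            fused[i:i + patch, j:j + patch] = best
    return fused
```

Note that, like the model in the abstract, this scheme naturally extends from three source images to [Formula: see text] sources, since the per-patch maximum is taken over however many inputs are supplied.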
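Fused images are evaluated with the Structural Similarity Measure (SSIM) named in the abstract. As a minimal sketch (a single global window over the whole image, rather than the usual sliding local windows, and not the authors' evaluation code), SSIM can be computed as:

```python
import numpy as np

def ssim(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    """Global SSIM between two equally sized grayscale images.

    Implements the standard SSIM formula
        ((2*mu_x*mu_y + C1) * (2*cov_xy + C2)) /
        ((mu_x^2 + mu_y^2 + C1) * (var_x + var_y + C2))
    with C1 = (0.01*L)^2, C2 = (0.03*L)^2 for dynamic range L.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

An SSIM of 1.0 means the fused image is structurally identical to the reference; practical evaluations use windowed SSIM (e.g. `skimage.metrics.structural_similarity`) rather than this single global window.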