Multi-scale image depth fusion method based on superpixel-level convolutional neural network

Xiaojie Chai, Rongsheng Wang, Junming Wang, Riqiang Zhang
{"title":"Multi-scale image depth fusion method based on superpixel-level convolutional neural network","authors":"Xiaojie Chai, Rongsheng Wang, Junming Wang, Riqiang Zhang","doi":"10.3233/jcm-226706","DOIUrl":null,"url":null,"abstract":"In order to improve the image quality, reduce the image noise and improve the image definition, the image depth fusion processing is realized by using the sp CNN network (Super pixel level convolution neural network, sp CNN). The improved non-local mean method is used to de-noise the image to highlight the role of the center pixel of the image block; the de-noised image is segmented by the improved CV model (Chan-Vese, CV), and the globally optimal multi-scale image segmentation result is obtained after optimization; From the perspective of regional features, the similarity measurement of image regions is carried out to realize image preprocessing. The sp-CNN network is constructed, and with the help of the idea of pyramid pooling, the average pooling is used to extract the features of each layer from the global and local levels of the convolutional features, and the training data set is generated for training, thereby realizing multi-scale image fusion. The experimental results show that the optimal value of the root mean square error index of the proposed method is 0.58. The optimal value of structural similarity index is 41.22. On the average slope index, the optimal value is 21.39. The optimal value of cross entropy index is 2.21. This shows that the proposed method has high image definition and good visual effect, which verifies the effectiveness of the method.","PeriodicalId":14668,"journal":{"name":"J. Comput. Methods Sci. Eng.","volume":"48 1","pages":"1237-1250"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"J. Comput. Methods Sci. Eng.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3233/jcm-226706","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

To improve image quality, reduce noise, and increase definition, image depth fusion is performed with a superpixel-level convolutional neural network (sp-CNN). An improved non-local means method first denoises the image, emphasizing the role of the center pixel of each image block. The denoised image is then segmented with an improved Chan-Vese (CV) model, and a globally optimal multi-scale segmentation is obtained after optimization. Similarity measurement between image regions, carried out from the perspective of regional features, completes the preprocessing. The sp-CNN is then constructed: following the idea of pyramid pooling, average pooling extracts features at each layer from both the global and local levels of the convolutional features, and a training data set is generated for training, thereby realizing multi-scale image fusion. Experimental results show that the proposed method achieves an optimal root mean square error of 0.58, an optimal structural similarity value of 41.22, an optimal average slope value of 21.39, and an optimal cross entropy of 2.21, indicating high image definition and good visual quality and verifying the effectiveness of the method.
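The pyramid-pooling step described above can be illustrated with a short sketch: average pooling over several bin sizes captures both global and local statistics of a convolutional feature map, and the pooled branches are upsampled and concatenated for fusion. This is only a minimal illustration of the pyramid-pooling idea under assumed design choices, not the paper's actual sp-CNN; the class name `PyramidAvgPool`, the bin sizes, and the 1x1 channel-compression convolutions are assumptions made for the example.

```python
# Minimal sketch of pyramid average pooling for multi-scale feature extraction.
# Assumption: bin sizes, branch convolutions, and fusion by concatenation are
# illustrative choices, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidAvgPool(nn.Module):
    def __init__(self, in_channels, bin_sizes=(1, 2, 4, 8)):
        super().__init__()
        self.bin_sizes = bin_sizes
        # 1x1 convolutions compress each pooled branch before fusion.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, in_channels // len(bin_sizes), kernel_size=1)
            for _ in bin_sizes
        )

    def forward(self, x):
        h, w = x.shape[2:]
        features = [x]
        for bins, conv in zip(self.bin_sizes, self.branches):
            # Average pooling at coarse bins captures global context,
            # at fine bins captures local statistics.
            pooled = F.adaptive_avg_pool2d(x, output_size=bins)
            pooled = conv(pooled)
            # Upsample back to the input resolution so all scales can be concatenated.
            features.append(F.interpolate(pooled, size=(h, w),
                                          mode="bilinear", align_corners=False))
        return torch.cat(features, dim=1)

if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)          # a convolutional feature map
    out = PyramidAvgPool(64)(feat)
    print(out.shape)                            # torch.Size([1, 128, 32, 32])
```

In this sketch the original feature map is kept alongside the pooled branches, so local detail and multi-scale context are fused by simple concatenation; the abstract does not specify how the paper combines the pooled features, so that fusion step is also an assumption.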