MSFFNet: Multi-stream feature fusion network for underwater image enhancement

IF 3.4 · CAS Tier 2 (Engineering & Technology) · JCR Q1 (Computer Science, Hardware & Architecture) · Displays · Pub Date: 2025-03-18 · DOI: 10.1016/j.displa.2025.103023
Peng Lin, Zihao Fan, Yafei Wang, Xudong Sun, Yuan-Rui Yang, Xianping Fu
{"title":"MSFFNet: Multi-stream feature fusion network for underwater image enhancement","authors":"Peng Lin,&nbsp;Zihao Fan,&nbsp;Yafei Wang,&nbsp;Xudong Sun,&nbsp;Yuán-Ruì Yáng,&nbsp;Xianping Fu","doi":"10.1016/j.displa.2025.103023","DOIUrl":null,"url":null,"abstract":"<div><div>Deep learning-based image processing methods have achieved remarkable success in improving the quality of underwater images. These methods usually extract features from different receptive fields through downsampling operations, and then enhance underwater images through upsampling operations. However, these operations of downsampling and upsampling inevitably disrupt the relations of neighboring pixels in raw underwater images, leading to the loss of image details. Given this, a multi-stream feature fusion network, dubbed MSFFNet, is proposed to enrich details, correct colors, and enhance contrast of degraded underwater images. In MSFFNet, the multi-stream feature estimation block is carefully constructed, which separately takes original resolution feature maps and low-resolution feature maps as inputs. The multi-stream feature estimation block proficiently preserves the details information of the original underwater image while extracting high-level features. Besides, a coordinate residual block is designed to emphasize valuable features and suppress noises based on position knowledge. A local–global feature fusion block is presented for selectively fusing the complementary multi-scale features. Finally, extensive comparative experiments on real underwater images and synthetic underwater images demonstrate that the proposed MSFFNet has superior performance on underwater image enhancement tasks.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 103023"},"PeriodicalIF":3.4000,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Displays","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0141938225000605","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Deep learning-based image processing methods have achieved remarkable success in improving the quality of underwater images. These methods usually extract features from different receptive fields through downsampling operations and then enhance underwater images through upsampling operations. However, downsampling and upsampling inevitably disrupt the relations between neighboring pixels in the raw underwater images, leading to the loss of image detail. Given this, a multi-stream feature fusion network, dubbed MSFFNet, is proposed to enrich details, correct colors, and enhance the contrast of degraded underwater images. In MSFFNet, a multi-stream feature estimation block is carefully constructed that takes original-resolution feature maps and low-resolution feature maps as separate inputs. This block preserves the detail information of the original underwater image while extracting high-level features. In addition, a coordinate residual block is designed to emphasize valuable features and suppress noise based on positional information. A local-global feature fusion block is presented for selectively fusing the complementary multi-scale features. Finally, extensive comparative experiments on real and synthetic underwater images demonstrate that the proposed MSFFNet achieves superior performance on underwater image enhancement tasks.
Source journal
Displays (Engineering & Technology - Engineering: Electrical & Electronic)
CiteScore: 4.60
Self-citation rate: 25.60%
Annual articles: 138
Review time: 92 days
Journal description: Displays is the international journal covering the research and development of display technology, its effective presentation and perception of information, and applications and systems including the display-human interface. Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technology and human factors engineers new to the field, will also occasionally be featured.
Latest articles in this journal
- Automated prompt-guided multi-modality cell segmentation with shape-aware classification and boundary-aware SAM adaptation
- An optimized convolutional neural network based on multi-strategy grey wolf optimizer to identify crop diseases and pests
- Parameter-efficient fine-tuning for no-reference image quality assessment: Empirical studies on vision transformer
- AFFLIE: Adaptive feature fusion for low-light image enhancement
- A polynomial regression-based calibration method for enhancing chromaticity and luminance accuracy at low luminance levels of LCDs with automated sampling and compensation mechanisms