CFPNet: Complementary Feature Perception Network for Underwater Image Enhancement

IEEE Journal of Oceanic Engineering · Pub. Date: 2024-11-15 · DOI: 10.1109/JOE.2024.3463838 · Impact Factor: 3.8 · CAS Region 2 (Engineering & Technology) · JCR Q1 (Engineering, Civil)
Xianping Fu, Wenqiang Qin, Fengqi Li, Fengqiang Xu, Xiaohong Yan
{"title":"CFPNet: Complementary Feature Perception Network for Underwater Image Enhancement","authors":"Xianping Fu;Wenqiang Qin;Fengqi Li;Fengqiang Xu;Xiaohong Yan","doi":"10.1109/JOE.2024.3463838","DOIUrl":null,"url":null,"abstract":"Images shot underwater are usually characterized by global nonuniform information loss due to selective light absorption and scattering, resulting in various degradation problems, such as color distortion and low visibility. Recently, deep learning has drawn much attention in the field of underwater image enhancement (UIE) for its powerful performance. However, most deep learning-based UIE models rely on either pure convolutional neural network (CNN) or pure transformer, which makes it challenging to enhance images while maintaining local representations and global features simultaneously. In this article, we propose a novel complementary feature perception network (CFPNet), which embeds the transformer into the classical CNN-based UNet3+. The core idea is to fuse the advantages of CNN and transformer to obtain satisfactory high-quality underwater images that can naturally perceive local and global features. CFPNet employs a novel dual encoder structure of the CNN and transformer in parallel, while the decoder is composed of one trunk decoder and two auxiliary decoders. First, we propose the regionalized two-stage vision transformer that can progressively eliminate the variable levels of degradation in a coarse-to-fine manner. Second, we design the full-scale feature fusion module to explore sufficient information by merging the multiscale features. In addition, we propose an auxiliary feature guided learning strategy that utilizes reflectance and shading maps to guide the generation of the final results. The advantage of this strategy is to avoid repetitive and ineffective learning of the model, and to accomplish color correction and deblurring tasks more efficiently. 
Experiments demonstrate that our CFPNet can obtain high-quality underwater images and show superior performance compared to the state-of-the-art UIE methods qualitatively and quantitatively.","PeriodicalId":13191,"journal":{"name":"IEEE Journal of Oceanic Engineering","volume":"50 1","pages":"150-163"},"PeriodicalIF":3.8000,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Oceanic Engineering","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10754901/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, CIVIL","Score":null,"Total":0}
引用次数: 0

Abstract

Images captured underwater are usually characterized by global, nonuniform information loss caused by selective light absorption and scattering, resulting in degradation problems such as color distortion and low visibility. Recently, deep learning has drawn much attention in the field of underwater image enhancement (UIE) for its powerful performance. However, most deep learning-based UIE models rely on either a pure convolutional neural network (CNN) or a pure transformer, which makes it challenging to enhance images while simultaneously maintaining local representations and global features. In this article, we propose a novel complementary feature perception network (CFPNet), which embeds a transformer into the classical CNN-based UNet3+. The core idea is to fuse the advantages of the CNN and the transformer to obtain high-quality underwater images that naturally reflect both local and global features. CFPNet employs a novel dual-encoder structure that runs a CNN and a transformer in parallel, while the decoder is composed of one trunk decoder and two auxiliary decoders. First, we propose a regionalized two-stage vision transformer that progressively eliminates variable levels of degradation in a coarse-to-fine manner. Second, we design a full-scale feature fusion module that exploits sufficient information by merging multiscale features. In addition, we propose an auxiliary feature-guided learning strategy that uses reflectance and shading maps to guide the generation of the final results. This strategy avoids repetitive and ineffective learning of the model and accomplishes the color-correction and deblurring tasks more efficiently. Experiments demonstrate that CFPNet produces high-quality underwater images and outperforms state-of-the-art UIE methods both qualitatively and quantitatively.
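The abstract's central idea, complementary local and global feature perception via parallel CNN and transformer branches, can be illustrated with a toy sketch. This is not the authors' implementation: the box-filter "CNN branch", the single-head pixel-wise self-attention "transformer branch", and the channel-wise fusion are all simplified stand-ins chosen only to make the dual-encoder concept concrete.

```python
import numpy as np

def local_features(img, k=3):
    """CNN-like branch (illustrative): a k x k box filter capturing local texture."""
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def global_features(img):
    """Transformer-like branch (illustrative): scaled dot-product self-attention
    over flattened pixels, so every output pixel aggregates the whole image."""
    x = img.reshape(-1, 1)                        # tokens with a 1-dim embedding
    scores = (x @ x.T) / np.sqrt(x.shape[1])      # scaled dot-product scores
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # row-wise softmax
    return (attn @ x).reshape(img.shape)

def fuse(img):
    """Complementary fusion: stack local and global maps channel-wise,
    mimicking the merge of the two parallel encoder streams."""
    return np.stack([local_features(img), global_features(img)], axis=0)

img = np.random.rand(8, 8).astype(np.float32)
feats = fuse(img)
print(feats.shape)  # (2, 8, 8): one local channel, one global channel
```

In the real CFPNet the two streams are deep encoders whose multiscale outputs feed a UNet3+-style trunk decoder and two auxiliary decoders; here a single stacking step stands in for that fusion.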
Source journal

IEEE Journal of Oceanic Engineering (Engineering & Technology – Ocean Engineering)
CiteScore: 9.60
Self-citation rate: 12.20%
Articles per year: 86
Review time: 12 months
About the journal: The IEEE Journal of Oceanic Engineering (ISSN 0364-9059) is the online-only quarterly publication of the IEEE Oceanic Engineering Society (IEEE OES). The scope of the Journal is the field of interest of the IEEE OES, which encompasses all aspects of science, engineering, and technology that address research, development, and operations pertaining to all bodies of water. This includes the creation of new capabilities and technologies from concept design through prototypes, testing, and operational systems to sense, explore, understand, develop, use, and responsibly manage natural resources.