JOE Call for Papers - Special Issue on Maritime Informatics and Robotics: Advances from the IEEE Symposium on Maritime Informatics & Robotics
Pub Date : 2025-01-13  DOI: 10.1109/JOE.2025.3527081
IEEE Journal of Oceanic Engineering, vol. 50, no. 1, pp. 421-422
JOE Call for Papers - Special Issue on the IEEE 2026 AUV Symposium
Pub Date : 2025-01-13  DOI: 10.1109/JOE.2025.3527079
IEEE Journal of Oceanic Engineering, vol. 50, no. 1, pp. 419-420
Combined Texture Continuity and Correlation for Sidescan Sonar Heading Distortion
Chao Huang;Jianhu Zhao;Yongcan Yu;Hongmei Zhang
Pub Date : 2024-11-22  DOI: 10.1109/JOE.2024.3474741
IEEE Journal of Oceanic Engineering, vol. 50, no. 1, pp. 338-353

Sidescan sonar (SSS) creates images by interpolating scan lines. Instability of the transducer position, caused by vessel turns and the push of swells, produces misaligned, overlapping, and unevenly spaced scan lines (heading distortion), a problem largely overlooked in SSS data processing. Traditional interpolation tends to create severe mosaicking and overlapping-texture artifacts that interfere with subsequent image analysis, while simply cutting out and discarding distorted segments wastes data. To improve data usability, this article uses a deep convolutional neural network (DCNN) to learn correlations between textures, recasting heading-anomaly correction as misalignment fusion in overlapping areas plus texture filling in gaps, and providing a practical scheme for detecting scan-line heading anomalies and filling the resulting gaps. Because textures repaired by a DCNN lose continuity across larger gaps, a continuity-guided branch network is proposed to help the main repair network preserve texture continuity. Quantitative evaluation against real sonar images and qualitative evaluation without a reference validate the method's effectiveness in filling scan-line gaps across varying degrees of anomaly. For regions with minor heading anomalies, the method matches traditional interpolation. In regions with large anomalies, it improves on the best traditional method by over 5% in peak signal-to-noise ratio, over 20% in structural similarity, and over 8% in the naturalness image quality evaluator index, greatly enhancing the data's usability.
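The abstract reports gains in peak signal-to-noise ratio (PSNR) against real sonar images used as reference. As a reference point, here is a minimal PSNR implementation; the metric itself is standard, though the authors' exact evaluation pipeline (bit depth, cropping, color handling) is not specified in the abstract and is assumed here to be single-channel 8-bit.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A uniform error of 10 gray levels on an 8-bit image, for example, yields roughly 28.1 dB; a "5% PSNR improvement" at that level is about 1.4 dB.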
Sea Surface Floating Small Target Detection Based on a Priori Feature Distribution and Multiscan Iteration
Shuwen Xu;Tian Zhang;Hongtao Ru
Pub Date : 2024-11-22  DOI: 10.1109/JOE.2024.3474748
IEEE Journal of Oceanic Engineering, vol. 50, no. 1, pp. 94-119

Conventional sea-surface target detectors deteriorate severely when only a short train of accumulated pulses is available. This article designs a feature detection method based on an a priori feature distribution and multiscan iteration, which strengthens the feature extraction of existing feature-based detectors. First, kernel density estimation is used to fit the a priori feature distribution model. Next, the original feature vectors of the current scan are iterated against this model to obtain improved feature vectors. Once the iteration for the current scan is complete, its original feature vectors are merged into the historical features to generate an updated distribution model. The improved feature vectors then train the decision region and detect targets via the convex hull algorithm. The method makes the detection features more stable and reliable, widening the separation between sea-clutter and target-return features in the feature space. Results on the measured IPIX data sets and Naval Aviation University X-band data sets demonstrate that the method effectively improves the detection performance of existing multifeature-based detectors under short accumulated pulses.
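The first step above, fitting the a priori feature distribution with kernel density estimation, can be sketched in one dimension as follows. This is a generic Gaussian KDE, not the authors' code; the bandwidth and the feature being modeled are illustrative assumptions.

```python
import numpy as np

def gaussian_kde_pdf(samples: np.ndarray, grid: np.ndarray, bandwidth: float) -> np.ndarray:
    """Evaluate a 1-D Gaussian kernel density estimate on `grid`.

    A minimal stand-in for the KDE step that fits the a priori feature
    distribution from historical clutter/target features.
    """
    diffs = (grid[:, None] - samples[None, :]) / bandwidth   # (grid, samples)
    kernels = np.exp(-0.5 * diffs ** 2) / np.sqrt(2.0 * np.pi)
    return kernels.mean(axis=1) / bandwidth                  # average of kernels
```

In the paper's setting the estimate would be refit each scan as the current scan's feature vectors are folded into the historical pool.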
Histoformer: Histogram-Based Transformer for Efficient Underwater Image Enhancement
Yan-Tsung Peng;Yen-Rong Chen;Guan-Rong Chen;Chun-Jung Liao
Pub Date : 2024-11-21  DOI: 10.1109/JOE.2024.3474919
IEEE Journal of Oceanic Engineering, vol. 50, no. 1, pp. 164-177

Images taken underwater often have low contrast and color distortion, since light passing through water suffers absorption, scattering, and attenuation, making the scene hard to see clearly. To address this, we propose an effective model for underwater image enhancement using a histogram-based transformer (Histoformer), which learns the histogram distributions of high-contrast, color-corrected underwater images to produce the desired histogram and improve the visual quality of underwater images. Furthermore, we integrate the Histoformer with a generative adversarial network for pixel-based quality refinement. Experimental results demonstrate that the proposed model performs favorably against state-of-the-art underwater image restoration and enhancement approaches, both quantitatively and qualitatively.
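Histoformer learns the target histogram rather than fixing one in advance, but the remapping it implies, pushing an image toward a desired histogram, is classical histogram matching. A minimal sketch under assumptions the abstract does not state (8-bit single channel, 256 bins, CDF-lookup remap); the paper's transformer replaces the hand-picked target used here.

```python
import numpy as np

def match_histogram(source: np.ndarray, target_hist: np.ndarray) -> np.ndarray:
    """Remap a uint8 image so its histogram approximates `target_hist` (256 bins)."""
    src_hist = np.bincount(source.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    tgt_cdf = np.cumsum(target_hist.astype(np.float64))
    tgt_cdf /= tgt_cdf[-1]
    # For each gray level, find the target level with the closest CDF value.
    lut = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[source]
```

Matching a low-contrast image against a flat (uniform) target histogram is ordinary histogram equalization, the crudest version of the contrast stretch the paper learns end to end.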
Generation Mechanism of Acoustic Doppler Velocity Measurement Bias
Xuesong Li;Dajun Sun;Zhongyi Cao
Pub Date : 2024-11-21  DOI: 10.1109/JOE.2024.3483219
IEEE Journal of Oceanic Engineering, vol. 50, no. 1, pp. 1-12

A Doppler velocity log (DVL) is a sonar attached to a vehicle that, while underway, transmits pulses at regular intervals and measures the Doppler frequency of the seafloor echo to determine the vehicle's velocity relative to the Earth. The DVL's velocity measurement bias, the deviation of the average measured velocity from the true value, is a quantitative measure of accuracy that can be used to evaluate DVL performance. Designers must trade off velocity measurement bias against requirements such as size and power. To date, DVL measurement bias has received little attention, and its underlying physical mechanism has not been fully elucidated. In this article, the DVL echo is modeled as a linear time-varying channel, and an analytical expression for the Doppler spectrum of the seafloor echo is derived from the echo's statistical properties. This expression is used to explain the physical mechanism of the bias, and from it an analytical equation for predicting the bias is proposed. Compared with the bias prediction method of Taudien and Bilén (2018), the proposed equation has equivalent predictive power but a clear physical meaning, and provides a means to predict the bias.
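The basic measurement being biased is the inversion of the narrowband Doppler relation fd = (2 f0 / c) v cos(theta). A one-line sketch of that inversion (the beam angle, carrier frequency, and sound speed below are illustrative values, not numbers from the article, and a real DVL combines four Janus beams and attitude corrections that this ignores):

```python
import math

def dvl_velocity(f_doppler: float, f_carrier: float,
                 beam_angle_deg: float, sound_speed: float = 1500.0) -> float:
    """Invert fd = (2*f0/c) * v * cos(theta) for the velocity component v,
    where theta is the angle between the sonar beam and the direction of motion."""
    return (sound_speed * f_doppler
            / (2.0 * f_carrier * math.cos(math.radians(beam_angle_deg))))
```

The bias the article analyzes arises because the echo's Doppler spectrum is broadened and skewed, so the estimated f_doppler, and hence the inverted velocity, is systematically offset from the true value.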
Simulation of an Autonomous Surface Vehicle With Colocated Tidal Turbine
Linnea Weicht;Sarmad Hanif;Craig Bakker;Taiping Wang;Nolann Williams;Robert J. Cavagnaro
Pub Date : 2024-11-18  DOI: 10.1109/JOE.2024.3428605
IEEE Journal of Oceanic Engineering, vol. 50, no. 1, pp. 294-306

Utility-class autonomous surface vehicles (ASVs) are small watercraft that can carry environmental sensors to collect data in coastal and marine locations. Their operation is constrained by energy storage limits, but where the resource is adequate, marine energy offers a way to provide power in remote locations. To demonstrate the feasibility of using tidal energy to support ASV operations, we created a MATLAB-Simulink modeling tool that simulates an ASV performing surveys and charging at a nearby tidal turbine. Model components include the tidal turbine, generator, battery storage dynamics, ASV kinetics, and ASV control schemes. We refined the tool using data collected experimentally in tidal-resource-rich Sequim Bay, which has been proposed for tidal energy testing, to empirically identify the vehicle's hydrodynamic drag and inertial coefficients. We then used the model to simulate a resource characterization survey in Sequim Bay under varying environmental conditions and survey parameters. Results indicate that a tidal turbine can support continuous ASV operation in scenarios with low tidal speeds or low target survey speeds, and we suggest improvements to the model.
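The energy balance such a tool evaluates can be illustrated with a toy Euler integration: the ASV drains its battery while surveying (hotel load plus a cubic-in-speed propulsion term) and recharges at the turbine between legs. Every parameter value below is a hypothetical placeholder, not a number from the paper or the Sequim Bay deployment, and the real MATLAB-Simulink model resolves turbine, generator, and vehicle dynamics this sketch collapses into constants.

```python
def simulate_asv(hours: float, dt_s: float = 60.0,
                 turbine_w: float = 200.0, hotel_w: float = 30.0,
                 drag_w_per_mps3: float = 25.0, speed_mps: float = 1.0,
                 capacity_wh: float = 1000.0, soc0: float = 0.5):
    """Euler-integrate the ASV battery: drain while surveying, recharge at
    the turbine. Returns (fraction of time surveying, final state of charge)."""
    energy_wh = soc0 * capacity_wh
    surveying, survey_steps = True, 0
    steps = int(hours * 3600.0 / dt_s)
    for _ in range(steps):
        if surveying:
            energy_wh -= (hotel_w + drag_w_per_mps3 * speed_mps ** 3) * dt_s / 3600.0
            survey_steps += 1
            if energy_wh <= 0.2 * capacity_wh:   # low battery: dock and charge
                surveying = False
        else:
            energy_wh += (turbine_w - hotel_w) * dt_s / 3600.0
            if energy_wh >= 0.95 * capacity_wh:  # nearly full: resume survey
                surveying = True
        energy_wh = min(energy_wh, capacity_wh)
    return survey_steps / steps, energy_wh / capacity_wh
```

With these placeholder numbers the steady-state survey duty cycle settles near the ratio of net charge power to total cycle power, about 75%; lowering the survey speed raises it, which is the qualitative trend the paper reports.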
An AUV-Enabled Dockable Platform for Long-Term Dynamic and Static Monitoring of Marine Pastures
Zhuoyu Zhang;Mingwei Lin;Dejun Li;Rundong Wu;Ri Lin;Canjun Yang
Pub Date : 2024-11-18  DOI: 10.1109/JOE.2024.3455411
IEEE Journal of Oceanic Engineering, vol. 50, no. 1, pp. 276-293

Environmental monitoring plays a crucial role in developing marine ranches and in the surveillance of underwater aquaculture organisms. To combine the real-time, long-term, static observation capability of seabed networks with the dynamic, large-scale monitoring potential of underwater vehicles, a novel mobile platform for ocean ranches is proposed, comprising a floating platform, a docking station, and an autonomous underwater vehicle (AUV). The floating platform is a versatile ocean testing platform that can be securely anchored close to the designated observation area. For static monitoring alongside it, a lightweight connection station built from polyvinyl chloride pipes accompanies the platform. The AUV performs dynamic monitoring and is linked to the other components through docking technology, so the integrated system achieves both dynamic and static observations centered on a movable floating platform. Field experiments in lakes and at sea validate the system in multiple scenarios, on the surface and underwater, demonstrating its ability to dock autonomously, transmit wireless signals and power, sustain long-term static observation of fixed nodes, and cruise autonomously for dynamic monitoring.
CFPNet: Complementary Feature Perception Network for Underwater Image Enhancement
Xianping Fu;Wenqiang Qin;Fengqi Li;Fengqiang Xu;Xiaohong Yan
Pub Date : 2024-11-15  DOI: 10.1109/JOE.2024.3463838
IEEE Journal of Oceanic Engineering, vol. 50, no. 1, pp. 150-163

Images captured underwater typically suffer global, nonuniform information loss due to selective light absorption and scattering, producing degradations such as color distortion and low visibility. Deep learning has recently drawn much attention in underwater image enhancement (UIE) for its strong performance, but most deep learning-based UIE models rely on either a pure convolutional neural network (CNN) or a pure transformer, making it difficult to enhance images while maintaining local representations and global features simultaneously. In this article, we propose a novel complementary feature perception network (CFPNet), which embeds a transformer into the classical CNN-based UNet3+. The core idea is to fuse the advantages of CNNs and transformers to obtain high-quality underwater images that naturally perceive both local and global features. CFPNet employs a novel dual-encoder structure running the CNN and transformer in parallel, while the decoder comprises one trunk decoder and two auxiliary decoders. First, we propose a regionalized two-stage vision transformer that progressively eliminates variable levels of degradation in a coarse-to-fine manner. Second, we design a full-scale feature fusion module that exploits sufficient information by merging multiscale features. In addition, we propose an auxiliary feature-guided learning strategy that uses reflectance and shading maps to guide the generation of the final results, avoiding repetitive and ineffective learning and accomplishing color correction and deblurring more efficiently. Experiments demonstrate that CFPNet produces high-quality underwater images and outperforms state-of-the-art UIE methods qualitatively and quantitatively.
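Stripped of learned weights, the core operation of full-scale feature fusion in the UNet3+ style is upsampling every scale's feature map to the finest resolution and concatenating along channels. A minimal numpy sketch (square maps and power-of-two scale ratios are assumed; the paper's module additionally applies learned convolutions that are omitted here):

```python
import numpy as np

def fuse_full_scale(features: list) -> np.ndarray:
    """Upsample each (C, H, W) feature map to the finest resolution by
    nearest-neighbor repetition, then concatenate along the channel axis."""
    target_h = max(f.shape[1] for f in features)
    upsampled = []
    for f in features:
        factor = target_h // f.shape[1]            # integer scale ratio assumed
        up = f.repeat(factor, axis=1).repeat(factor, axis=2)
        upsampled.append(up)
    return np.concatenate(upsampled, axis=0)
```

Fusing a 4-channel 8x8 map with an 8-channel 4x4 map, for instance, yields a 12-channel 8x8 tensor that a decoder block can then convolve.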
DAPNet: Dual Attention Probabilistic Network for Underwater Image Enhancement
Xueyong Li;Rui Yu;Weidong Zhang;Huimin Lu;Wenyi Zhao;Guojia Hou;Zheng Liang
Pub Date : 2024-11-14  DOI: 10.1109/JOE.2024.3458351
IEEE Journal of Oceanic Engineering, vol. 50, no. 1, pp. 178-191

Underwater images frequently suffer from color casts, loss of contrast, and overall blurring caused by light attenuation and scattering. To tackle these degradations, we present DAPNet, a highly efficient and robust method for enhancing underwater images. Specifically, we integrate an extended information block into the encoder to minimize information loss during downsampling, and incorporate a dual attention module that sharpens the network's sensitivity to critical spatial locations and essential channels while utilizing codecs for feature reconstruction. Simultaneously, we employ adaptive instance normalization to transform the output features and generate multiple samples, and use Monte Carlo likelihood estimation to obtain a stable enhancement result from this sample space, ensuring the consistency and reliability of the final enhanced image. Experiments on three underwater image data sets validate the method's effectiveness. Moreover, the method generalizes well beyond underwater image enhancement, performing strongly on related tasks such as low-light image enhancement and image dehazing.
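Adaptive instance normalization, which the method uses to transform output features into multiple samples, renormalizes each content channel to a target channel's statistics. A generic sketch of the operation itself (the (C, H, W) layout and epsilon value are assumptions; how DAPNet sources the target statistics is part of its learned pipeline, not shown here):

```python
import numpy as np

def adain(content: np.ndarray, style: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Adaptive instance normalization over (C, H, W) feature maps: normalize
    each content channel, then rescale to the style channel's mean and std."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mean) / (c_std + eps) + s_mean
```

Varying the style statistics yields the family of samples over which a Monte Carlo estimate, as in the abstract, can then select or average a stable result.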