
Latest publications in IEEE Geoscience and Remote Sensing Letters: A Publication of the IEEE Geoscience and Remote Sensing Society

YOLOv8 Neural Network Application for Noncollaborative Vessel Detection Using Sentinel-1 SAR Data: A Case Study
Camilla Caricchio;Luis Felipe Mendonça;Carlos A. D. Lentini;André T. C. Lima;David O. Silva;Pedro H. Meirelles e Góes
Noncollaborative vessels are usually involved in illegal activities, and actively monitoring these vessels is one of the most challenging tasks. This study introduces a methodology that combines automatic identification system (AIS) data and SAR images in a YOLOv8 + slicing-aided hyper inference (SAHI) approach, serving as a decision-aid tool for noncooperative vessel detection and improving maritime domain awareness. A total of 1958 augmented images were used to custom-train the YOLOv8 neural network. For the case study, 16 Sentinel-1 high-resolution ground range detected (GRDH) interferometric wide (IW) swath SAR images were used. During training, the custom model achieved excellent performance with satisfactory statistical results (mAP@0.5: 94.3%, precision: 92.5%, and recall: 91.9%), especially when compared to similar previous studies. The model correctly distinguished vessels from nonvessel features such as islands, rivers, and coastlines. In the case study, the false negative (FN) detection rate was 95.4%, similar to the mAP@0.5 obtained in the training and validation step, and the recall was 95.6%, both considered excellent results. The recall improvement in the case study shows that the model's performance in real-world scenarios is better than initially expected for application in noncollaborative vessel detection systems. The model showed very promising results for the operational detection of dark ships using SAR images and AIS data simultaneously.
{"title":"YOLOv8 Neural Network Application for Noncollaborative Vessel Detection Using Sentinel-1 SAR Data: A Case Study","authors":"Camilla Caricchio;Luis Felipe Mendonça;Carlos A. D. Lentini;André T. C. Lima;David O. Silva;Pedro H. Meirelles e Góes","doi":"10.1109/LGRS.2024.3508462","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3508462","url":null,"abstract":"Noncollaborative vessels are usually involved in illegal activities and actively monitoring these vessels is one of the most challenging task. This study introduces a methodology that combines automatic identification system (AIS) data and SAR images into a YOLOv8+ slicing-aided hyper inference (SAHI)-based approach, as a decision aid tool for noncooperative vessel detection, to improve maritime domain awareness. It was used 1958 augmented images to custom train the YOLOv8 neural network. For the study case, 16 Sentinel high-resolution ground range detected (GRDH)- interferometric wide (IW) SAR images were used. During the training, the custom model achieved excellent performance with satisfactory statistical results (mAP@.5: 94.3%, precision: 92.5%, and recall: 91.9%), especially when compared to similar previous studies. The model was able to correctly distinguish between vessels and nonvessel features, such as islands, rivers, or coastlines. In the study case, the false negative (FN) detection rate was 95.4%, similar to mAp@0.5 results found at the training and validation step and the Recall was 95.6%, considered excellent results. The recall improvement in the study case shows that the model’s performance in real-world scenarios is better than initially expected for application in noncollaborative vessel detection systems. The model presented showed very promising results for the operational detection of darkships using, simultaneous, SAR images and AIS data.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142858938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
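To make the detection pipeline concrete, below is a minimal sketch of YOLOv8 inference wrapped in SAHI's slicing-aided hyper inference, using the public `sahi` API; the weight path, image name, slice size, and thresholds are illustrative assumptions, not the authors' settings (the accepted `model_type` strings also vary across `sahi` releases).

```python
# Hypothetical YOLOv8 + SAHI sliced inference over a large SAR scene.
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Load custom-trained YOLOv8 weights (the path is an assumption).
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",              # "ultralytics" in newer sahi releases
    model_path="vessel_yolov8.pt",
    confidence_threshold=0.25,
    device="cuda:0",
)

# Slice the scene into overlapping tiles and merge the tile detections.
result = get_sliced_prediction(
    "sentinel1_grdh_scene.png",       # assumed preprocessed SAR image
    detection_model,
    slice_height=640,
    slice_width=640,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

# Detections without a matching AIS track would be flagged as dark ships.
for pred in result.object_prediction_list:
    print(pred.score.value, pred.bbox.minx, pred.bbox.miny,
          pred.bbox.maxx, pred.bbox.maxy)
```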
Efficient End-to-End Diffusion Model for One-Step SAR-to-Optical Translation
Jiang Qin;Bin Zou;Haolin Li;Lamei Zhang
The undesirable distortions of synthetic aperture radar (SAR) images pose a challenge to intuitive SAR interpretation. SAR-to-optical (S2O) image translation provides a feasible solution for easier interpretation of SAR imagery and supports multisensor analysis. Currently, diffusion-based S2O models are emerging and have achieved remarkable performance in terms of perceptual metrics and fidelity. However, the numerous iterative sampling steps and slow inference speed of these diffusion models (DMs) limit their potential for practical applications. In this letter, an efficient end-to-end diffusion model (E3Diff) is developed for real-time one-step S2O translation. E3Diff not only samples as fast as generative adversarial network (GAN) models but also retains the powerful image synthesis capability of DMs, achieving high-quality S2O translation in an end-to-end manner. Specifically, SAR spatial priors are first incorporated to provide enriched conditional clues and more precise control at the feature level when synthesizing optical images. E3Diff is then accelerated by a hybrid refinement loss, which effectively integrates the advantages of both GAN and diffusion components to achieve efficient one-step sampling. Experiments show that E3Diff achieves real-time inference (0.17 s per image on an A6000 GPU) and significant image-quality improvements (35% and 27% lower Fréchet inception distance (FID) on the UNICORN and SEN12 datasets, respectively) compared to existing state-of-the-art (SOTA) diffusion-based S2O methods. These advances highlight E3Diff's potential to enhance SAR interpretation and cross-modal applications. The code is available at https://github.com/DeepSARRS/E3Diff.
{"title":"Efficient End-to-End Diffusion Model for One-Step SAR-to-Optical Translation","authors":"Jiang Qin;Bin Zou;Haolin Li;Lamei Zhang","doi":"10.1109/LGRS.2024.3506566","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3506566","url":null,"abstract":"The undesirable distortions of synthetic aperture radar (SAR) images pose a challenge to intuitive SAR interpretation. SAR-to-optical (S2O) image translation provides a feasible solution for easier interpretation of SAR and supports multisensor analysis. Currently, diffusion-based S2O models are emerging and have achieved remarkable performance in terms of perceptual metrics and fidelity. However, the numerous iterative sampling steps and slow inference speed of these diffusion models (DMs) limit their potential for practical applications. In this letter, an efficient end-to-end diffusion model (E3Diff) is developed for real-time one-step S2O translation. E3Diff not only samples as fast as generative adversarial network (GAN) models, but also retains the powerful image synthesis performance of DMs to achieve high-quality S2O translation in an end-to-end manner. To be specific, SAR spatial priors are first incorporated to provide enriched conditional clues and achieve more precise control from the feature level to synthesize optical images. Then, E3Diff is accelerated by a hybrid refinement loss, which effectively integrates the advantages of both GAN and diffusion components to achieve efficient one-step sampling. Experiments show that E3Diff achieves real-time inference speed (0.17 s per image on an A6000 GPU) and demonstrates significant image-quality improvements (35% and 27% improvement in Frechet inception distance (FID) on the UNICORN and SEN12 dataset, respectively) compared to existing state-of-the-art (SOTA) diffusion S2O methods. This advancement of E3Diff highlights its potential to enhance SAR interpretation and cross-modal applications. The code is available at \u0000<uri>https://github.com/DeepSARRS/E</uri>\u00003Diff.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142777871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
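The hybrid refinement loss is the key to one-step sampling. Below is a minimal PyTorch sketch of one plausible form, combining a diffusion-style reconstruction term with a GAN adversarial term; the L1 reconstruction, the discriminator, and the weight `lambda_adv` are assumptions for illustration, not the paper's exact formulation (which is in the linked repository).

```python
# Minimal sketch of a hybrid refinement loss in the spirit of E3Diff:
# a reconstruction term (stand-in for the diffusion denoising objective)
# plus a GAN adversarial term for one-step generator training.
import torch
import torch.nn.functional as F

def hybrid_refinement_loss(generator, discriminator, sar, optical,
                           lambda_adv=0.1):
    """sar, optical: paired (B, C, H, W) training images."""
    fake_optical = generator(sar)                 # one-step S2O translation

    # Reconstruction term: pull the one-step sample toward the target.
    rec = F.l1_loss(fake_optical, optical)

    # Adversarial term: push the discriminator to rate fakes as real.
    logits_fake = discriminator(fake_optical)
    adv = F.binary_cross_entropy_with_logits(
        logits_fake, torch.ones_like(logits_fake))

    return rec + lambda_adv * adv
```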
ReFocal: Addressing Learning Imbalances for Accurate Tiny Object Detection in Aerial Imagery
Zijuan Chen;Chang Xu;Haoran Zhu;Yuxin Li;Wen Yang
Tiny objects in aerial imagery usually occupy an extremely limited number of pixels, which significantly hampers an object detection model's learning process. While existing research has attempted to increase the number of positive samples for tiny objects to achieve scale-balanced learning, the primary focus lies at the object level. We argue that mitigating learning imbalance requires comprehensive consideration encompassing object-level, sample-level, and feature-level improvements. To this end, we propose ReFocal, a learning strategy comprising ReFocal Loss and a ReFocal feature pyramid network (FPN), to mitigate imbalances across these three levels. ReFocal Loss utilizes a magnitude factor to regulate the learning magnitude of objects with varying sample counts and a novel focal rate adjuster to differentiate sample quality at the sample level, enabling the detector to prioritize high-quality samples within each object. ReFocal FPN employs a refocusing mechanism to dynamically enhance detailed information in high-level feature maps without introducing additional computational cost, thus addressing the feature-level imbalance. Extensive experiments on the AI-TOD-v2 and TinyPerson datasets demonstrate the superiority of our proposed method over previous single-stage methods, particularly for very tiny objects.
{"title":"ReFocal: Addressing Learning Imbalances for Accurate Tiny Object Detection in Aerial Imagery","authors":"Zijuan Chen;Chang Xu;Haoran Zhu;Yuxin Li;Wen Yang","doi":"10.1109/LGRS.2024.3507209","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3507209","url":null,"abstract":"Tiny objects in aerial imagery usually exhibit an extremely limited number of pixels, significantly affecting the object detection model’s learning process. While existing research has attempted to improve tiny objects’ positive sample quantity for scale-balanced learning, the primary focus lies on the object level. We argue that mitigating learning imbalance requires a comprehensive consideration encompassing object-level, sample-level, and feature-level improvements. To this end, we propose ReFocal, a learning strategy comprised of ReFocal Loss and ReFocal feature pyramid network (FPN), to mitigate imbalances across these three levels. ReFocal Loss utilizes a magnitude factor to regulate the learning magnitude of objects with varying sample counts and a novel focal rate adjuster to differentiate sample quality at the sample level, enabling the detector to prioritize high-quality samples within each object. ReFocal FPN employs a refocusing mechanism to dynamically enhance detailed information in high-level feature maps without introducing additional computational cost, thus addressing the feature-level imbalance. Extensive experiments on AI-TOD-v2 and TinyPerson datasets demonstrate the superiority of our proposed method over previous single-stage methods, particularly for very tiny objects.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142789030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
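A hedged PyTorch sketch of the two ReFocal Loss ingredients named above, a per-object magnitude factor and a quality-dependent focal rate. The concrete formulas below (inverse sample count, gamma scaled by 1 - quality) are our assumptions for illustration, not the paper's definitions.

```python
# Illustrative sketch: magnitude factor + quality-dependent focal rate,
# applied over positive samples only.
import torch
import torch.nn.functional as F

def refocal_loss(pred_logits, targets, sample_counts, quality,
                 gamma0=2.0, eps=1e-6):
    """
    pred_logits:   (N,) classification logits for positive samples
    targets:       (N,) binary labels (here assumed all 1)
    sample_counts: (N,) positive-sample count of the object each sample belongs to
    quality:       (N,) sample quality in [0, 1], e.g. IoU with the GT box
    """
    p = torch.sigmoid(pred_logits)
    ce = F.binary_cross_entropy_with_logits(
        pred_logits, targets.float(), reduction="none")

    # Magnitude factor: objects with few samples get proportionally more weight.
    magnitude = 1.0 / (sample_counts.float() + eps)

    # Focal rate adjuster: high-quality samples are down-weighted less.
    gamma = gamma0 * (1.0 - quality)
    focal = (1.0 - p).clamp(min=eps) ** gamma

    return (magnitude * focal * ce).sum() / magnitude.sum().clamp(min=eps)
```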
PPMamba: Enhancing Semantic Segmentation in Remote Sensing Imagery by SS2D
Juwei Mu;Shangbo Zhou;Xingjie Sun
Remote sensing semantic segmentation is a critical technology in remote sensing image processing, with broad applications in environmental monitoring, urban planning, disaster assessment, and resource exploration. Despite the transformative impact of convolutional neural networks (CNNs) on this domain, CNN-based methods are often limited by their localized receptive fields, which struggle to capture the global context necessary for accurate segmentation of complex remote sensing imagery. In this letter, a novel approach to remote sensing semantic segmentation is presented using a Mamba-based model named PPMamba. The PPMamba model integrates Resblock and PPMamba modules within an encoder-decoder framework to effectively capture both local and global contextual information from high-resolution remote sensing images. Leveraging the strengths of the Mamba architecture, our model employs selective scanning to efficiently process long sequences, overcoming the limitations of traditional CNNs and transformers in handling large-scale images with complex scenes. Extensive experiments on two benchmark datasets (Potsdam and Vaihingen) demonstrate the superiority of PPMamba over state-of-the-art models, achieving significant improvements in segmentation results. The code will be available at https://github.com/Jerrymo59/PPMambaSeg.
{"title":"PPMamba: Enhancing Semantic Segmentation in Remote Sensing Imagery by SS2D","authors":"Juwei Mu;Shangbo Zhou;Xingjie Sun","doi":"10.1109/LGRS.2024.3507033","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3507033","url":null,"abstract":"Remote sensing semantic segmentation is a critical technology in the field of remote sensing image processing, with broad applications in environmental monitoring, urban planning, disaster assessment, and resource exploration. Despite the transformative impact of convolutional neural networks (CNNs) on this domain, CNN-based methods often encounter limitations due to their localized receptive fields, which struggle to capture the global context necessary for accurate segmentation in complex remote sensing imagery. In this letter, a novel approach is presented for remote sensing semantic segmentation using a mamba-based model named PPmamba. The PPmamba model integrates Resblock and PPmamba within an encoder-decoder framework to effectively capture both local and global contextual information from high-resolution remote sensing images. Leveraging the strengths of the Mamba architecture, our model employs selective scanning to efficiently process long sequences, overcoming the limitations of traditional CNNs and transformers in handling large-scale images with complex scenes. Extensive experiments on two benchmark datasets (Potsdam and Vaihingen) demonstrate the superiority of our PPmamba model against state-of-the-art models, achieving significant improvements in segmentation results. The codes will be available at \u0000<uri>https://github.com/Jerrymo59/PPMambaSeg</uri>\u0000.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142825921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
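To illustrate the selective-scanning idea behind SS2D-style blocks: the 2-D feature map is unrolled into four 1-D sequences (row-major, column-major, and their reverses), each processed by a sequence model, then folded back and merged. In the sketch below, `seq_model` is a placeholder for the selective-scan (Mamba) kernel, and the averaging merge is an assumption.

```python
# Conceptual cross-scan sketch for SS2D-style 2-D sequence modeling.
import torch

def ss2d_cross_scan(x, seq_model):
    """x: (B, C, H, W) feature map; seq_model: shape-preserving callable on (B, C, L)."""
    B, C, H, W = x.shape
    hw = x.flatten(2)                              # row-major scan
    wh = x.transpose(2, 3).flatten(2)              # column-major scan
    scans = [hw, hw.flip(-1), wh, wh.flip(-1)]     # four scan directions

    outs = [seq_model(s) for s in scans]
    outs[1] = outs[1].flip(-1)                     # undo the reversals
    outs[3] = outs[3].flip(-1)
    # Fold the column-major results back into row-major order.
    outs[2] = outs[2].view(B, C, W, H).transpose(2, 3).flatten(2)
    outs[3] = outs[3].view(B, C, W, H).transpose(2, 3).flatten(2)

    merged = sum(outs) / 4.0                       # assumed merge: simple average
    return merged.view(B, C, H, W)
```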
A Two-Stage Oil Spill Detection Method Based on an Improved Superpixel Module and DeepLab V3+ Using SAR Images
Lingxiao Cheng;Ying Li;Kangjia Zhao;Bingxin Liu;Yuanheng Sun
The application of deep learning to synthetic aperture radar (SAR) oil spill detection often faces challenges such as speckle noise and limited data volume. To address these issues, this article proposes a two-stage oil spill detection method, SD-OIL, which consists of a superpixel generation module (S3G) and a semantic segmentation model, DeepLab V3+ (the implementation is available at https://github.com/GeminiCheng/ResearchCode). The first stage focuses on superpixel generation, where S3G innovatively employs social support analysis and spectral angle mapping to develop a pixel-based social support quantification model that considers both individual and community perspectives, facilitating effective superpixel generation. In the semantic segmentation stage, the output of S3G enhances the segmentation performance of DeepLab V3+. Experimental results show that SD-OIL surpasses numerous existing segmentation-based oil spill detection methods, achieving an mIoU of 91.69%. The results also indicate that the S3G module significantly improves the accuracy of oil spill detection.
{"title":"A Two-Stage Oil Spill Detection Method Based on an Improved Superpixel Module and DeepLab V3+ Using SAR Images","authors":"Lingxiao Cheng;Ying Li;Kangjia Zhao;Bingxin Liu;Yuanheng Sun","doi":"10.1109/LGRS.2024.3508020","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3508020","url":null,"abstract":"The application of deep learning in synthetic aperture radar (SAR) oil spill detection often faces challenges such as speckle noise and limited data volume. To address these issues, this article proposes a two-stage oil spill detection method, SD-OIL, which consists of a superpixel generation module (S3G), and a semantic segmentation model, DeepLab V3+ (the implementation process can be seen at \u0000<uri>https://github.com/GeminiCheng/ResearchCode</uri>\u0000). The first stage emphasizes superpixel generation, where S3G innovatively employs social support analysis and spectral angle mapping to develop a pixel-based social support quantification model that considers both individual and community perspectives, facilitating effective superpixel generation. In the semantic segmentation stage, the output from S3G enhances the segmentation performance of DeepLab V3+. Experimental results show that SD-OIL surpasses numerous existing segmentation-based oil spill detection methods, achieving an mIoU of 91.69%. The results also indicate that the S3G module significantly improves the accuracy of oil spill detection.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
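Spectral angle mapping, one of the two measures S3G builds on, compares pixel signatures by the angle between their feature vectors; a minimal NumPy sketch follows. The example vectors are made up, and how S3G folds the angle into its social-support quantification is not shown here.

```python
# Spectral angle between two pixel feature vectors: small angle = similar spectra.
import numpy as np

def spectral_angle(a, b, eps=1e-12):
    """a, b: 1-D feature vectors for one pixel each; returns the angle in radians."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Example with invented backscatter signatures:
oil = np.array([0.12, 0.10, 0.08])
water = np.array([0.45, 0.40, 0.38])
print(spectral_angle(oil, oil * 1.1))   # ~0: scaled copies point the same way
print(spectral_angle(oil, water))       # larger angle: dissimilar signatures
```

Note that the angle is invariant to overall brightness scaling, which is one reason SAM-style measures are robust to multiplicative speckle effects.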
AMMUNet: Multiscale Attention Map Merging for Remote Sensing Image Segmentation
Yang Yang;Shunyi Zheng;Xiqi Wang;Wei Ao;Zhao Liu
The advancement of deep learning has driven notable progress in remote sensing semantic segmentation. Multihead self-attention (MSA) mechanisms have been widely adopted in semantic segmentation tasks. Network architectures exemplified by Vision Transformers implement window-based operations in the spatial domain to reduce computational costs. However, this approach comes at the expense of a weakened capacity to capture long-range dependencies, potentially limiting its efficacy in remote sensing image processing. In this letter, we propose AMMUNet, a UNet-based framework that employs multiscale attention map (AM) merging and comprises two key innovations: the attention map merging mechanism (AMMM) and granular multihead self-attention (GMSA). AMMM effectively combines multiscale AMs into a unified representation using a fixed mask template, enabling the modeling of a global attention mechanism. By integrating AMs precomputed in preceding layers, AMMM reduces computational costs while preserving global correlations. GMSA efficiently acquires global information while substantially reducing computational costs compared with the global MSA mechanism; this is accomplished through the strategic alignment of granularity and a reduction of relative position bias parameters. Experimental evaluations highlight the superior performance of our approach, which achieves mean intersection over union (mIoU) scores of 75.48% on the challenging Vaihingen dataset and 77.90% on the Potsdam dataset, demonstrating its effectiveness for precise remote sensing semantic segmentation. Codes are available at https://github.com/interpretty/AMMUNet.
{"title":"AMMUNet: Multiscale Attention Map Merging for Remote Sensing Image Segmentation","authors":"Yang Yang;Shunyi Zheng;Xiqi Wang;Wei Ao;Zhao Liu","doi":"10.1109/LGRS.2024.3506718","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3506718","url":null,"abstract":"The advancement of deep learning has driven notable progress in remote sensing semantic segmentation. Multihead self-attention (MSA) mechanisms have been widely adopted in semantic segmentation tasks. Network architectures exemplified by Vision Transformers have implemented window-based operations in the spatial domain to reduce computational costs. However, this approach comes at the expense of a weakened capacity to capture long-range dependencies, potentially limiting their efficacy in remote sensing image processing. In this letter, we propose AMMUNet, a UNet-based framework that employs multiscale attention map (AM) merging, comprising two key innovations: the attention map merging mechanism (AMMM) module and the granular multihead self-attention (GMSA). AMMM effectively combines multiscale AMs into a unified representation using a fixed mask template, enabling the modeling of a global attention mechanism. By integrating precomputed AMs in preceding layers, AMMM reduces computational costs while preserving global correlations. The proposed GMSA efficiently acquires global information while substantially mitigating computational costs in contrast to the global MSA mechanism. This is accomplished through the strategic alignment of granularity and the reduction of relative position bias parameters, thereby optimizing computational efficiency. Experimental evaluations highlight the superior performance of our approach, achieving remarkable mean intersection over union (mIoU) scores of 75.48% on the challenging Vaihingen dataset and an exceptional 77.90% on the Potsdam dataset, demonstrating the superiority of our method in precise remote sensing semantic segmentation. Codes are available at \u0000<uri>https://github.com/interpretty/AMMUNet</uri>\u0000.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142777872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
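A rough sketch of the attention-map-merging idea as we read it: an attention map precomputed at a coarser stage is upsampled and spliced into the current window attention through a fixed mask template, approximating global attention without recomputing it. All shapes, the mask semantics, and the final renormalization are assumptions rather than the paper's exact mechanism.

```python
# Rough sketch: merge a precomputed coarse attention map into local attention
# via a fixed binary mask template.
import torch
import torch.nn.functional as F

def merge_attention_maps(local_attn, coarse_attn, mask):
    """
    local_attn:  (B, heads, N, N) window attention at the current scale
    coarse_attn: (B, heads, n, n) attention from a coarser stage (n < N)
    mask:        (N, N) fixed template; 1 where the coarse map supplies context
    """
    N = local_attn.shape[-1]
    up = F.interpolate(coarse_attn, size=(N, N), mode="bilinear",
                       align_corners=False)
    merged = torch.where(mask.bool(), up, local_attn)
    return merged.softmax(dim=-1)   # renormalize the mixed attention rows
```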
A Late-Stage Bitemporal Feature Fusion Network for Semantic Change Detection
Chenyao Zhou;Haotian Zhang;Han Guo;Zhengxia Zou;Zhenwei Shi
Semantic change detection (SCD) is an important task in geoscience and Earth observation. By producing a semantic change map for each temporal phase, both the land use land cover (LULC) categories and the change information can be interpreted. Recently, some multitask learning-based SCD methods have been proposed that decompose the task into semantic segmentation (SS) and binary change detection (BCD) subtasks. However, previous works entangle three branches, which may not be optimal and makes it hard to adopt foundation models. Moreover, the lack of explicit refinement of bitemporal features during fusion may lead to low accuracy. In this letter, we propose a novel late-stage bitemporal feature fusion network to address these issues. Specifically, we propose a local-global attentional aggregation module to strengthen feature fusion and a local-global context enhancement module to highlight pivotal semantics. Comprehensive experiments are conducted on two public datasets, SECOND and Landsat-SCD. Quantitative and qualitative results show that our proposed model achieves new state-of-the-art performance on both datasets.
{"title":"A Late-Stage Bitemporal Feature Fusion Network for Semantic Change Detection","authors":"Chenyao Zhou;Haotian Zhang;Han Guo;Zhengxia Zou;Zhenwei Shi","doi":"10.1109/LGRS.2024.3507292","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3507292","url":null,"abstract":"Semantic change detection (SCD) is an important task in geoscience and Earth observation. By producing a semantic change map for each temporal phase, both the land use land cover (LULC) categories and change information can be interpreted. Recently some multitask learning-based SCD methods have been proposed to decompose the task into semantic segmentation (SS) and binary change detection (BCD) subtasks. However, previous works comprise triple branches in an entangled manner, which may not be optimal and hard to adopt foundation models. Besides, lacking explicit refinement of bitemporal features during fusion may cause low accuracy. In this letter, we propose a novel late-stage bitemporal feature fusion network to address the issue. Specifically, we propose local–global attentional aggregation module to strengthen feature fusion, and propose local global context enhancement module to highlight pivotal semantics. Comprehensive experiments are conducted on two public datasets, including SECOND and Landsat-SCD. Quantitative and qualitative results show that our proposed model achieves new state-of-the-art performance on both datasets.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
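A minimal sketch of the late-stage fusion pattern: features from the two dates are concatenated and reweighted channel-wise by an attention vector before projection. The squeeze-style gate and layer sizes below are placeholders in the spirit of the description above, not the paper's actual modules.

```python
# Late-stage bitemporal fusion with channel-wise attention reweighting.
import torch
import torch.nn as nn

class BitemporalFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(            # channel attention over the pair
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels, 1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, f_t1, f_t2):
        x = torch.cat([f_t1, f_t2], dim=1)    # (B, 2C, H, W)
        x = x * self.gate(x)                  # reweight each channel
        return self.proj(x)                   # fused change feature

# Usage with dummy encoder outputs:
fuse = BitemporalFusion(64)
out = fuse(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
```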
Accurate Tide Monitoring Using Shipborne GNSS-R Phase Altimetry: A Case Study
Yunqiao He;Fan Gao;Tianhe Xu;Xinyue Meng;Nazi Wang
Tidal information is valuable for scientific studies and navigational safety. In addition to traditional tide stations and satellite altimeters, shipborne Global Navigation Satellite System (GNSS) altimeters provide an alternative method for instantaneous measurements. However, due to ship hydrodynamics and draft variations, especially for large vessels, the baseline between the GNSS positioning antenna and the sea surface is often unavailable or inaccurate. This case study presents a novel ship-based altimetry method using GNSS-R phase altimetry, capable of accurately monitoring tides from a moving ship platform. The delay difference between the direct and reflected GNSS paths is obtained from the signal phase difference generated by open-loop tracking in a software-defined receiver. Spectral analysis is used to resolve the integer ambiguity of the phase measurements, and accurate tidal information is then obtained based on high-precision GNSS positioning. To evaluate the performance of the system, a shipborne experiment was conducted. The results show that the ship-based GNSS-R altimetry system can accurately measure sea surface height variation, with a root-mean-squared error (RMSE) within 3.0 cm of the in situ value. This case study demonstrates the potential of shipborne GNSS-R phase altimetry as an effective and accurate method for tidal monitoring in dynamic maritime environments.
{"title":"Accurate Tide Monitoring Using Shipborne GNSS-R Phase Altimetry: A Case Study","authors":"Yunqiao He;Fan Gao;Tianhe Xu;Xinyue Meng;Nazi Wang","doi":"10.1109/LGRS.2024.3506661","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3506661","url":null,"abstract":"Tidal information is a valuable parameter for scientific studies and navigational safety. Despite traditional tide stations and satellite altimeters, shipborne Global Navigation Satellite System (GNSS) altimeters can provide an alternative method for instantaneous measurements. However, due to ship hydrodynamics and draft variations, especially for large vessels, the baseline between the GNSS positioning antenna and the sea surface is always unavailable or less accurate. This case study presents a novel ship-based altimetry method using GNSS-R phase altimetry, which is capable of accurately monitoring tidal information on a moving ship platform. The delay difference between the direct and reflected GNSS paths is obtained from the signal phase difference generated by open-loop tracking through a software-defined receiver. Spectral analysis was used to further solve the integer ambiguity problem of phase measurements, and then, accurate tidal information was obtained based on high-precision GNSS positioning. To evaluate the performance of the system, a case study of a shipborne experiment was conducted. The results show that the ship-based GNSS-R altimetry system can accurately measure the sea surface height variation. The root-mean-squared error (RMSE) is within 3.0 cm compared to the in situ value. This case study demonstrates the potential of ship-borne GNSS-R phase altimetry as an effective and accurate method for tidal monitoring in dynamic maritime environments.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142777688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
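As a worked illustration of the geometry behind phase altimetry: for an antenna at height h above the sea surface, the direct-reflected excess path is 2·h·sin(elevation), so the unwrapped carrier-phase difference (once the integer ambiguity is resolved, as described above) yields h. The wavelength and numbers below are illustrative; the letter's full processing chain (open-loop tracking, spectral analysis) is not reproduced.

```python
# Antenna height above the sea surface from the unwrapped direct-reflected
# carrier-phase difference, assuming a flat specular reflecting surface.
import numpy as np

GPS_L1_WAVELENGTH = 0.1903  # meters

def antenna_height(delta_phase_cycles, elevation_deg,
                   wavelength=GPS_L1_WAVELENGTH):
    """delta_phase_cycles: unwrapped direct-reflected phase difference (cycles)."""
    delta_path = delta_phase_cycles * wavelength          # excess path length (m)
    return delta_path / (2.0 * np.sin(np.radians(elevation_deg)))

# Example: 100 cycles of phase difference at 35 deg satellite elevation.
h = antenna_height(100.0, 35.0)
print(f"antenna height above sea surface: {h:.2f} m")     # ~16.6 m
# The tide level then follows from the antenna's precise GNSS height minus h.
```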
Diff-HRNet: A Diffusion Model-Based High-Resolution Network for Remote Sensing Semantic Segmentation
Zhipeng Wu;Chang Liu;Bingze Song;Huaxin Pei;Pinjie Li;Mengshuo Chen
Semantic segmentation methods based on deep neural networks predominantly employ supervised learning and rely heavily on the quantity and quality of annotated samples. Due to the complexity of high-resolution remote sensing imagery, obtaining sufficient and precise pixel-level labeled data is highly challenging. This letter introduces a novel self-supervised learning method that uses a pretrained denoising diffusion probabilistic model (DDPM) to leverage semantic information from large-scale unlabeled remote sensing imagery. Building on this, a multistage scheme for fusing pretrained features with high-resolution features is proposed, enabling the network to learn more effective strategies for leveraging the prior information provided by the pretrained model while preserving the rich semantic details of high-resolution images. Experimental results on two remote sensing semantic segmentation datasets show that the proposed Diff-HRNet outperforms all compared methods, demonstrating the potential of pretrained diffusion models for extracting crucial feature representations for semantic segmentation tasks.
{"title":"Diff-HRNet: A Diffusion Model-Based High-Resolution Network for Remote Sensing Semantic Segmentation","authors":"Zhipeng Wu;Chang Liu;Bingze Song;Huaxin Pei;Pinjie Li;Mengshuo Chen","doi":"10.1109/LGRS.2024.3505552","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3505552","url":null,"abstract":"The semantic segmentation methods based on deep neural networks predominantly employ supervised learning, relying heavily on the quantity and quality of annotated samples. Due to the complexity of high-resolution remote sensing imagery, obtaining sufficient and precise pixel-level labeled data is highly challenging. This letter introduces a novel self-supervised learning method using a pretrained denoising diffusion probabilistic model (DDPM) to leverage semantic information from large-scale unlabeled remote sensing imageries. Building on this, a multistage fusion scheme between pretrained features and high-resolution features is proposed, enabling the network to learn more effective strategies to leverage prior information provided by the pretrained model while preserving the rich semantic details of high-resolution images. Experimental results on two remote sensing semantic segmentation datasets show that the proposed Diff-HRNet outperforms all compared methods, demonstrating the potential of pretrained diffusion models in extracting crucial feature representations for semantic segmentation tasks.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142777665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
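One way to read the "pretrained DDPM as feature extractor" idea in code: noise the input image to a chosen timestep, run the frozen denoising UNet, and capture intermediate activations with forward hooks for the downstream segmentation head. The UNet call signature, the choice of tapped layers, and the single-timestep setup are assumptions, not Diff-HRNet's actual architecture.

```python
# Sketch: tap a pretrained denoising UNet for self-supervised features.
import torch

@torch.no_grad()
def ddpm_features(unet, alphas_cumprod, image, t, layers):
    """
    unet:            frozen pretrained denoising network, called as unet(x, t)
    alphas_cumprod:  (T,) cumulative product of the noise schedule
    image:           (B, C, H, W) input, scaled to the DDPM's training range
    layers:          modules whose outputs to capture via forward hooks
    """
    feats = []
    hooks = [m.register_forward_hook(lambda _m, _i, o: feats.append(o))
             for m in layers]

    # Forward diffusion: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * noise.
    a_bar = alphas_cumprod[t]
    noisy = a_bar.sqrt() * image + (1 - a_bar).sqrt() * torch.randn_like(image)

    unet(noisy, torch.tensor([t], device=image.device))
    for h in hooks:
        h.remove()
    return feats   # multiscale pretrained features for the fusion stage
```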
A Hierarchical Local-Global-Aware Transformer With Scratch Learning Capabilities for Change Detection
Ming Chen;Wanshou Jiang
Most transformer-based methods rely on weights pretrained on large datasets such as ImageNet, or on pretraining with specific change detection (CD) datasets followed by fine-tuning on the target dataset. When the target dataset diverges significantly from the pretraining dataset, the model's ability to generalize to remote sensing imagery may be compromised by the domain gap. In this letter, we propose HierFormer, which processes semantic features hierarchically: simple operations for shallow features, spatial position transformation for middle-level features, and channel information interaction for high-level features. In addition, we propose a local-global-aware (LGA) attention block, which reduces the computational overhead of self-attention through sparse attention and increases the locality inductive bias (LIB) of the transformer by focusing attention on the local region and a sparse part of the global region, enabling the model to be trained from scratch on small to medium-sized CD datasets. Finally, a new feature fusion decoder (FFD) is proposed to fuse the bitemporal features, reweighting the features of each channel through an attention mechanism. Compared with other transformer-based or hybrid transformer-CNN networks, our method significantly improves the F1 score, reaching 91.56% and 97.56% on the LEVIR-CD and CDD-CD change detection datasets. Our code is available at https://github.com/WesternTrail/HierFormer.
{"title":"A Hierarchical Local-Global-Aware Transformer With Scratch Learning Capabilities for Change Detection","authors":"Ming Chen;Wanshou Jiang","doi":"10.1109/LGRS.2024.3505253","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3505253","url":null,"abstract":"Most transformer-based methods rely on pretraining weights on large datasets such as Imagenet or pretraining from specific change detection (CD) datasets and then fine-tuning on the target dataset. When the target dataset significantly diverges from the dataset used for pretraining, the model’s ability to generalize to remote sensing imagery may be compromised due to the domain gap. In this letter, we propose HierFormer, which has the advantage of processing semantic features hierarchically, using simple operations for shallow features, spatial position transformation for middle-level features, and channel information interaction for high-level features. In addition, we propose a local-global-aware (LGA) attention block, which reduces the computational overhead of self-attention by sparse attention and increases the locality inductive bias (LIB) of the transformer by focusing attention on the local region and sparse part of the global region, which enables the model to be trained from scratch on small to medium-sized CD datasets. Finally, a new feature fusion decoder (FFD) is proposed to fuse the bitemporal features, which reweights the features of each channel through attention mechanism. Compared with other transformer-based or transformer-CNN-based hybrid networks, our method significantly improves F1, reaching 91.56% and 97.56% on the LEVIR-CD and CDD-CD change detection datasets. Our code is available at \u0000<uri>https://github.com/WesternTrail/HierFormer</uri>\u0000.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
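A simplified sketch of the local-global-aware attention pattern described above: each query attends to its own local window plus a strided subsample of global tokens, which keeps the cost well below full self-attention while injecting locality bias. The window size, the stride, and the single-head formulation are illustrative choices, not the LGA block's exact design.

```python
# Simplified local + sparse-global attention over a token sequence.
import torch
import torch.nn.functional as F

def lga_attention(q, k, v, window=16, global_stride=8):
    """q, k, v: (B, L, D) token sequences; returns (B, L, D)."""
    B, L, D = q.shape
    out = torch.zeros_like(q)
    g = torch.arange(0, L, global_stride, device=q.device)   # sparse global set

    for start in range(0, L, window):
        sl = slice(start, min(start + window, L))
        local = torch.arange(sl.start, sl.stop, device=q.device)
        idx = torch.unique(torch.cat([local, g]))            # local + global keys
        attn = F.softmax(
            q[:, sl] @ k[:, idx].transpose(1, 2) / D ** 0.5, dim=-1)
        out[:, sl] = attn @ v[:, idx]
    return out
```

Restricting each query to its window plus a strided global set makes the per-query key count roughly constant, so the cost grows linearly with sequence length instead of quadratically.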