
Latest articles in IEEE Geoscience and Remote Sensing Letters: A Publication of the IEEE Geoscience and Remote Sensing Society

MCD18 V6.2: A New Version of MODIS Downward Shortwave Radiation and Photosynthetically Active Radiation Products
Ruohan Li;Dongdong Wang;Sadashiva Devadiga;Sudipta Sarkar;Miguel O. Román
This study presents the new version of the MODIS/Terra + Aqua Surface Radiation Daily/3-h downward shortwave radiation (DSR) (MCD18A1 V6.2) and photosynthetically active radiation (PAR) (MCD18A2 V6.2) products, generated by the MODIS Adaptive Processing System (MODAPS) using the latest version of the science algorithm developed by the NASA MODIS land science team. Key improvements in the new algorithm include the use of multiple bands covering the visible, near-infrared, and shortwave infrared to better characterize cloud optical properties, especially over snow-covered surfaces, and the adoption of linear interpolation for temporal scaling from instantaneous to 3-hourly retrievals. Comparative validation against MCD18 V6.1 and the Clouds and the Earth's Radiant Energy System Synoptic (CERES-SYN) product demonstrates that V6.2 significantly improves accuracy at instantaneous, 3-hourly, and daily scales, particularly in snow-covered regions. The root mean square error (RMSE) (relative RMSE, rRMSE) of V6.2 reaches 101.9 W/m² (18.8%) and 48.4 W/m² (20.8%) for instantaneous DSR and PAR, respectively, and 29.9 W/m² (16.9%) and 14.1 W/m² (18.4%) for daily DSR and PAR. When aggregated to 100 km, V6.2 matches CERES-SYN accuracy while using only polar-orbiting satellite data. The study also explores the potential of integrating geostationary observations to further improve accuracy.
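The two quantitative pieces of the abstract, the instantaneous-to-3-hourly temporal scaling by linear interpolation and the RMSE/rRMSE validation metrics, are simple to illustrate. The sketch below is not the MCD18 algorithm itself; the overpass times, retrieval values, and station measurements are hypothetical, and it only shows the form of the computation.

```python
import numpy as np

# Hypothetical instantaneous DSR retrievals (W/m^2) at Terra/Aqua overpass times (UTC hours).
overpass_hours = np.array([10.5, 13.5])      # e.g., one Terra and one Aqua observation
dsr_instant = np.array([620.0, 540.0])       # retrieved instantaneous DSR at those times

# Linear interpolation onto a 3-hourly product grid (restricted here to daytime bins).
grid_hours = np.arange(9.0, 16.0, 3.0)       # 09:00, 12:00, 15:00 UTC
dsr_3hourly = np.interp(grid_hours, overpass_hours, dsr_instant)

# Validation metrics quoted in the abstract: RMSE and relative RMSE against reference data.
def rmse(est, ref):
    return float(np.sqrt(np.mean((est - ref) ** 2)))

def rrmse(est, ref):
    return 100.0 * rmse(est, ref) / float(np.mean(ref))

reference = np.array([600.0, 570.0, 480.0])  # hypothetical station measurements
print(dsr_3hourly, rmse(dsr_3hourly, reference), rrmse(dsr_3hourly, reference))
```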
{"title":"MCD18 V6.2: A New Version of MODIS Downward Shortwave Radiation and Photosynthetically Active Radiation Products","authors":"Ruohan Li;Dongdong Wang;Sadashiva Devadiga;Sudipta Sarkar;Miguel O. Román","doi":"10.1109/LGRS.2024.3507822","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3507822","url":null,"abstract":"This study presents the new version of MODIS/Terra + Aqua Surface Radiation Daily/3-h downward shortwave radiation (DSR) (MCD18A1 V6.2) and photosynthetic active radiation (PAR) (MCD18A2 V6.2) product generated by MODIS adaptive processing system (MODAPS) using the latest version of the science algorithm developed by the NASA MODIS land science team. Key improvements in the new algorithm include using multiple bands covering visible, near-infrared, and shortwave infrared to enhance the capability of characterizing cloud optical characteristics, especially over snow-covered surfaces, and adopting linear interpolation for temporal scaling from instantaneous to 3-hourly retrievals. Comparative validation against MCD18 V6.1 and clouds and the Earth’s radiant energy system synoptic (CERES-SYN) demonstrates that V6.2 significantly improves accuracy at instantaneous, 3-hourly, and daily scales, particularly in snow-covered regions. The root mean square error (RMSE) (relative RMSE: rRMSE) of V6.2 reaches 101.9 W/m2 (18.8%) and 48.4 W/m2 (20.8%) for instantaneous DSR and PAR. The RMSE (rRMSE) reaches 29.9 W/m2 (16.9%) and 14.1 W/m2 (18.4%) for daily DSR and PAR, respectively. Aggregated to 100 km, V6.2 matches CERES-SYN accuracy using only polar-orbiting satellite data. This study also explores the potential for future improvement by integrating geostationary observations to enhance accuracy further.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Efficient Phase Congruency-Based Feature Transform for Rapid Matching of Planetary Remote Sensing Images
Genyi Wan;Rong Huang;Yusheng Xu;Zhen Ye;Qionghua You;Xiongfeng Yan;Xiaohua Tong
Plenty of effort has been devoted to handling the nonlinear radiation distortions (NRDs) in planetary image matching. The mainstream solutions convert multimodal images into a “single” modality, which requires building intermediate modalities of the images. Phase congruency (PC) features have been widely used to construct intermediate modalities due to their excellent structure extraction capability and have proven effective on Earth remote sensing images. However, when dealing with large-scale planetary remote sensing images (PRSIs), traditional PC features built on the log-Gabor filter take considerable time to compute, which is counterproductive for global topographic mapping. To address this efficiency issue, this work proposes a fast planetary image-matching method based on an efficient PC-based feature transform (EPCFT). Specifically, we introduce a method to calculate PC using Gaussian first- and second-order derivatives, called efficient PC (EPC). Unlike the log-Gabor filter, which is sensitive to structures in a single direction, EPC uses circularly symmetric filters that treat changes in all directions equally. Experiments with 100 image pairs show that, compared with other methods, the efficiency of our method is nearly doubled without loss of accuracy.
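The abstract's core idea, replacing log-Gabor filtering with circularly symmetric Gaussian first- and second-order derivatives, can be sketched roughly as follows. This is not the authors' EPCFT implementation; the scale set, the normalization, and the `epc_like_energy` helper are illustrative assumptions built on SciPy's Gaussian-derivative filters.

```python
import numpy as np
from scipy import ndimage

def epc_like_energy(img, sigmas=(1.0, 2.0, 4.0), eps=1e-6):
    """Rough sketch of a PC-style measure built from Gaussian derivatives
    instead of log-Gabor filters (circularly symmetric, direction-agnostic)."""
    img = img.astype(np.float64)
    energy, amplitude = np.zeros_like(img), np.zeros_like(img)
    for s in sigmas:
        # Odd-symmetric response: gradient magnitude of the Gaussian-smoothed image.
        gx = ndimage.gaussian_filter(img, s, order=(0, 1))
        gy = ndimage.gaussian_filter(img, s, order=(1, 0))
        odd = np.hypot(gx, gy)
        # Even-symmetric response: circularly symmetric second derivative (LoG), scale-normalized.
        even = ndimage.gaussian_laplace(img, s) * s**2
        energy += np.sqrt(even**2 + odd**2)
        amplitude += np.abs(even) + odd
    return energy / (amplitude + eps)   # values in [0, 1]; high where structure is phase-congruent

pc_map = epc_like_energy(np.random.rand(128, 128))
```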
{"title":"Efficient Phase Congruency-Based Feature Transform for Rapid Matching of Planetary Remote Sensing Images","authors":"Genyi Wan;Rong Huang;Yusheng Xu;Zhen Ye;Qionghua You;Xiongfeng Yan;Xiaohua Tong","doi":"10.1109/LGRS.2024.3510794","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3510794","url":null,"abstract":"Plenty of effort has been devoted to solving the nonlinear radiation distortions (NRDs) in planetary image matching. The mainstream solutions convert multimodal images into “single” modal images, which requires building the intermediate modalities of images. Phase congruency (PC) features have been widely used to construct intermediate modalities due to their excellent structure extraction capabilities and have proven their effectiveness on Earth remote sensing images. However, when dealing with large-scale planetary remote sensing images (PRSIs), traditional PC features constructed based on the log-Gabor filter take considerable time, counterproductive to global topographic mapping. To address the efficiency issue, this work proposes a fast planetary image-matching method based on efficient PC-based feature transform (EPCFT). Specifically, we introduce a method to calculate PC using Gaussian first- and second-order derivatives, called efficient PC (EPC). Different from the log-Gabor filter, which is sensitive to structures in a single direction, \u0000<inline-formula> <tex-math>$rm EPC$ </tex-math></inline-formula>\u0000 uses circularly symmetric filters to equally process changes in all directions. The experiments with 100 image pairs show that compared with other methods, the efficiency of our method is nearly doubled without loss of accuracy.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142858941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Gaussian-Inspired Attention Mechanism for Hyperspectral Anomaly Detection
Ruike Wang;Jing Hu
Hyperspectral anomaly detection (HAD) aims to identify spectrally distinct pixels within a hyperspectral image (HSI). This task necessitates capturing both local spectral information and spatial smoothness, posing a significant challenge for traditional methods. This letter proposes a novel autoencoder framework that leverages a Gaussian-inspired attention mechanism to address this challenge effectively. Specifically, we introduce a novel Gaussian attention layer embedded within the encoder. This layer utilizes a learnable Gaussian kernel to prioritize the local neighborhood of each pixel. This approach effectively captures fine-grained features crucial for background reconstruction. The learned representations are then passed through a deep autoencoder architecture to reconstruct anomaly-free data. Pixels with significant reconstruction errors are subsequently flagged as anomalies. Experiments on several datasets demonstrate the effectiveness of the proposed approach. Compared to existing methods, our framework achieves superior performance in terms of detection accuracy. This finding highlights the potential of Gaussian-inspired attention mechanisms for enhancing HAD. The code is released at: https://github.com/rk-rkk/Gaussian-Inspired-Attention-Mechanism-for-Hyperspectral-Anomaly-Detection.
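A Gaussian attention layer of the kind described, a learnable Gaussian kernel that prioritizes each pixel's local neighborhood, can be mimicked with a fixed-sigma neighborhood weighting. The sketch below is a toy stand-in (plain NumPy, no learnable parameters, no deep autoencoder); the released repository contains the actual model.

```python
import numpy as np

def gaussian_attention(hsi, sigma=1.5, radius=3):
    """Toy version of Gaussian-shaped spatial attention: each pixel's spectrum is
    replaced by a Gaussian-weighted average of its neighborhood, so nearby pixels
    dominate the representation used for background reconstruction.
    In the letter, the kernel width is learned inside the encoder."""
    h, w, b = hsi.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(hsi, ((radius, radius), (radius, radius), (0, 0)), mode="reflect")
    out = np.zeros_like(hsi, dtype=np.float64)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w, :]
    return out

hsi = np.random.rand(64, 64, 50)                      # hypothetical HSI cube
attended = gaussian_attention(hsi)
anomaly_score = np.linalg.norm(hsi - attended, axis=2)  # stand-in for reconstruction error
```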
{"title":"Gaussian-Inspired Attention Mechanism for Hyperspectral Anomaly Detection","authors":"Ruike Wang;Jing Hu","doi":"10.1109/LGRS.2024.3514166","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3514166","url":null,"abstract":"Hyperspectral anomaly detection (HAD) aims to identify spectrally distinct pixels within a hyperspectral image (HSI). This task necessitates capturing both local spectral information and spatial smoothness, posing a significant challenge for traditional methods. This letter proposes a novel autoencoder framework that leverages a Gaussian-inspired attention mechanism to address this challenge effectively. Specifically, we introduce a novel Gaussian attention layer embedded within the encoder. This layer utilizes a learnable Gaussian kernel to prioritize the local neighborhood of each pixel. This approach effectively captures fine-grained features crucial for background reconstruction. The learned representations are then passed through a deep autoencoder architecture to reconstruct anomaly-free data. Pixels with significant reconstruction errors are subsequently flagged as anomalies. Experiments on several datasets demonstrate the effectiveness of the proposed approach. Compared to existing methods, our framework achieves superior performance in terms of detection accuracy. This finding highlights the potential of Gaussian-inspired attention mechanisms for enhancing HAD. The code is released at: \u0000<uri>https://github.com/rk-rkk/Gaussian-Inspired-Attention-Mechanism-for-Hyperspectral-Anomaly-Detection</uri>\u0000.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142844381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Soft Contrastive Representation Learning for Cloud-Particle Images Captured In-Flight by the New HVPS-4 Airborne Probe
Yousef Yassin;Anthony Fuller;Keyvan Ranjbar;Kenny Bala;Leonid Nichman;James R. Green
Cloud properties underpin accurate climate modeling and are often derived from the individual particles comprising a cloud. Studying these cloud particles is challenging due to their intricate shapes, called “habits,” and manual classification via probe-generated images is time-consuming and subjective. We propose a novel method for habit representation learning that uses minimal labeled data by leveraging self-supervised learning (SSL) with Vision Transformers (ViTs) on a newly acquired dataset of 124,000 images captured by the new high-volume precipitation spectrometer ver. 4 (HVPS-4) probe. Our approach outperforms ImageNet pretraining by 48% on a 293-sample annotated dataset. Notably, we present the first SSL scheme for learning habit representations, leveraging data collected in flight by the probe. Our results demonstrate that self-supervised pretraining significantly improves habit classification even when using single-channel HVPS-4 data. We achieve further gains using sequential views and a soft contrastive objective tailored to sequential, in-flight measurements. Our work paves the way for applying SSL to multiview and multiscale data from advanced cloud-particle imaging probes, enabling comprehensive characterization of the flight environment. We publicly release the data, code, and models associated with this study.
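One plausible reading of a "soft contrastive objective tailored to sequential, in-flight measurements" is a contrastive loss whose targets decay with distance along the acquisition sequence instead of using a single hard positive. The sketch below follows that reading; the temperature, the Gaussian soft-label weighting, and the `soft_contrastive_loss` signature are assumptions, not the authors' loss.

```python
import numpy as np

def soft_contrastive_loss(z1, z2, seq_pos, temperature=0.1, sigma=1.0):
    """Soft contrastive sketch: each anchor from view 1 is softly matched to all
    samples in view 2, with target weights that decay with distance along the
    measurement sequence.  z1, z2: (N, D) L2-normalized embeddings; seq_pos: (N,)."""
    logits = (z1 @ z2.T) / temperature                      # scaled cosine similarities
    d = np.abs(seq_pos[:, None] - seq_pos[None, :])
    targets = np.exp(-d**2 / (2 * sigma**2))
    targets /= targets.sum(axis=1, keepdims=True)           # soft label distribution per anchor
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-(targets * log_probs).sum(axis=1).mean()) # soft cross-entropy

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
print(soft_contrastive_loss(z, z, np.arange(8.0)))
```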
{"title":"Soft Contrastive Representation Learning for Cloud-Particle Images Captured In-Flight by the New HVPS-4 Airborne Probe","authors":"Yousef Yassin;Anthony Fuller;Keyvan Ranjbar;Kenny Bala;Leonid Nichman;James R. Green","doi":"10.1109/LGRS.2024.3506483","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3506483","url":null,"abstract":"Cloud properties underpin accurate climate modeling and are often derived from the individual particles comprising a cloud. Studying these cloud particles is challenging due to their intricate shapes, called “habits,” and manual classification via probe-generated images is time-consuming and subjective. We propose a novel method for habit representation learning that uses minimal labeled data by leveraging self-supervised learning (SSL) with Vision Transformers (ViTs) on a newly acquired dataset of 124000 images captured by the novel high-volume precipitation spectrometer ver. 4 (HVPS-4) probe. Our approach significantly outperforms ImageNet pretraining by 48% on a 293-sample annotated dataset. Notably, we present the first SSL scheme for learning habit representations, leveraging data collected in flight from the probe. Our results demonstrate that self-supervised pretraining significantly improves habit classification even when using single-channel HVPS-4 data. We achieve further gains using sequential views and a soft contrastive objective tailored for sequential, in-flight measurements. Our work paves the way for applying SSL to multiview and multiscale data from advanced cloud-particle imaging probes, enabling comprehensive characterization of the flight environment. We publicly release data, code, and models associated with this study.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
BPN: Building Pointer Network for Satellite Imagery Building Contour Extraction
Xiaodong Ma;Lingjie Zhu;Yuzhou Liu;Zexiao Xie;Xiang Gao;Shuhan Shen
Extracting structured building contours from satellite imagery plays an important role in many geospatial tasks. However, it remains a challenge due to the high cost of manual labeling, and models trained on simple polygons generalize poorly to buildings with more complex shapes. To address this, we propose a novel neural network called the building pointer network (BPN) in this letter, which builds upon a recurrent neural network (RNN) architecture that integrates visual and geometric signals with an input-focused attention mechanism, making it more general across varying shape complexity. Given an RGB satellite image, the model first uses a convolutional neural network (CNN) to obtain the set of key points for each building. Then, the coordinates of the key points and their image features are fused and fed into the RNN, which sequentially predicts the indices of the building corners. Results show that our method generalizes well to building data with complex shapes, provided that a dataset with relatively simple shapes is used as the training set.
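The pointer mechanism at the heart of such a network scores every candidate input keypoint against the current decoder state and emits a distribution over input indices, so the output vocabulary is the set of detected key points themselves. A minimal sketch of one decoding step, with hypothetical dimensions and randomly initialized weights, might look like this.

```python
import numpy as np

def pointer_step(encoded_keypoints, decoder_state, W1, W2, v):
    """One decoding step of a pointer-style network: score every candidate corner
    (encoded CNN keypoint + coordinates) against the current RNN decoder state and
    return a distribution over input indices, i.e., the next predicted building corner."""
    scores = v @ np.tanh(W1 @ encoded_keypoints.T + (W2 @ decoder_state)[:, None])
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()                            # attention over the *input* keypoints

rng = np.random.default_rng(0)
K, D, H = 12, 32, 64                                  # keypoints, feature dim, hidden dim
keypoints = rng.normal(size=(K, D))                   # fused coordinates + image features
state = rng.normal(size=H)                            # current RNN hidden state
W1, W2, v = rng.normal(size=(H, D)), rng.normal(size=(H, H)), rng.normal(size=H)
probs = pointer_step(keypoints, state, W1, W2, v)
next_corner_index = int(np.argmax(probs))
```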
{"title":"BPN: Building Pointer Network for Satellite Imagery Building Contour Extraction","authors":"Xiaodong Ma;Lingjie Zhu;Yuzhou Liu;Zexiao Xie;Xiang Gao;Shuhan Shen","doi":"10.1109/LGRS.2024.3514109","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3514109","url":null,"abstract":"Extracting structured building contours from satellite imagery plays an important role in many geospatial tasks. However, it still remains a challenge due to the high cost of manual labeling, and models trained on simple polygons show poor generalization on buildings with more complex shapes. To deal with this, we propose a novel neural network called building pointer network (BPN) in this letter, which builds upon a recurrent neural network (RNN) architecture that integrates visual and geometric signals with an input-focused attention mechanism, making it more general for various shape complexity. Given an RGB satellite image, the model first uses a convolutional neural network (CNN) to obtain the set of key points for each building. Then, the coordinates of the key points and their image features are fused and fed into the RNN which ultimately predicts the index of the building corners sequentially. Results show that our method has good generalization ability for building data with complex shapes, provided that a dataset with relatively simple shapes is used as the training set.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142858942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Inversion of the Loss Tangent of Martian Regolith From Echoes of Ultrawideband Ground Penetration Radar in the Tianwen-1 Mission
Niutao Liu;Ya-Qiu Jin;Xu Feng
In the Tianwen-1 Mars exploration mission, an ultrawideband radar is carried by the Zhurong Mars rover. Previous studies applied exponential attenuation at the center frequency to invert the loss tangent of the Martian regolith. Ignoring the frequency-dependent absorption across the ultrawide band may cause large errors in the inversion results. Considering the transmitted linear frequency-modulated (LFM) waves, this letter derives an analytical formula for the attenuation of ultrawideband waves, obtained by accumulating the frequency-dependent attenuated spectrum, and uses it to invert the loss tangent of the Martian regolith. The newly inverted loss tangent is much larger than the values inverted with the center frequency alone. In addition, the inversion of the loss tangent from the ultrawideband radar data obtained in the Chang'e-5 lunar program is discussed. This letter presents a corrected inversion method for the loss tangent of regolith from ultrawideband radar echoes.
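For orientation, the standard low-loss result behind such inversions is that a spectral component at frequency f returning after two-way delay τ is attenuated in power by exp(-2π f tanδ τ). The schematic below contrasts the center-frequency approximation with an attenuation accumulated over the LFM bandwidth; it is meant only to convey the distinction the letter draws, not to reproduce the authors' exact analytical formula.

```latex
% Schematic only -- the exact analytical formula is derived in the letter itself.
% Low-loss approximation: two-way power attenuation of a component at frequency f
% after delay \tau in a regolith with loss tangent \tan\delta is e^{-2\pi f \tan\delta\,\tau}.
\[
  P_{\mathrm{narrow}}(\tau) \;\propto\; e^{-2\pi f_c \tan\delta\,\tau}
  \qquad\text{(center-frequency approximation used in earlier studies)}
\]
\[
  P_{\mathrm{UWB}}(\tau) \;\propto\;
  \int_{f_c - B/2}^{\,f_c + B/2} |S(f)|^{2}\,
  e^{-2\pi f \tan\delta\,\tau}\,\mathrm{d}f
  \qquad\text{(attenuation accumulated over the LFM bandwidth } B\text{)}
\]
```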
{"title":"Inversion of the Loss Tangent of Martian Regolith From Echoes of Ultrawideband Ground Penetration Radar in the Tianwen-1 Mission","authors":"Niutao Liu;Ya-Qiu Jin;Xu Feng","doi":"10.1109/LGRS.2024.3513957","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3513957","url":null,"abstract":"In the Tianwen-1 Mars exploration mission, ultrawideband radar is carried by the Zhurong Martian rover. The exponential attenuation at the center frequency was applied to invert the loss tangent of Mars regolith in previous studies. Ignoring the frequency-dependent absorption in the ultrawideband might cause a large error in the inversion results. Considering the transmitted linear frequency-modulated (LFM) waves, in this letter, an analytical formula for the attenuation of ultrawideband waves to invert the loss tangents of Martian regolith is derived with the accumulation of the frequency-dependent attenuated spectrum. The newly inverted loss tangent is much larger than the inverted values with the center frequency. In addition, the inversion of the loss tangent from the ultrawideband radar data obtained in the Chang’e-5 lunar program is discussed. This letter presents a corrected inversion method for the loss tangent of regolith from ultrawideband radar echoes.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142844455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
AM2CFN: Assimilation Modality Mapping Guided Crossmodal Fusion Network for HSI and LiDAR Data Joint Classification
Yinbiao Lu;Wenbo Yu;Xintong Wei;Jiahui Huang
By combining their complementary properties, the joint use of hyperspectral image (HSI) and light detection and ranging (LiDAR) data improves classification performance. Nevertheless, the heterogeneous capturing instruments and distribution characteristics of these two remote sensing (RS) modalities limit their application scope in ground-observation-related domains. This heterogeneity hinders capturing the crossmodal connection needed for discriminant information extraction and exchange. In this letter, we propose an assimilation modality mapping guided crossmodal fusion network (AM2CFN) for joint HSI and LiDAR data classification. Our motivation is to explore an RS assimilation modality (RSAM) by exploiting a latent crossmodal mapping strategy from HSI and LiDAR data simultaneously, removing the effect of modality heterogeneity and facilitating information exchange. AM2CFN constructs a level-wise assimilating encoder to simulate modality heterogeneity and enhance regional consistency. Modality-intrinsic features are captured in this encoder to provide knowledge for modality assimilation. Furthermore, an RSAM balancing HSI and LiDAR properties is explored. AM2CFN constructs an RSAM reconstruction decoder for modality reconstruction and classification. Dual constraints based on the solid angle and the Kullback-Leibler divergence are imposed to steer the information exchange process toward the optimal direction. Experiments show that AM2CFN outperforms several state-of-the-art techniques both qualitatively and quantitatively, increasing the overall accuracy (OA) by 2.46% and 1.62% on average on the Houston and MUUFL datasets, respectively. The codes will be available at https://github.com/GEOywb/AM2CFN
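The Kullback-Leibler part of the dual constraints can be sketched generically as a divergence term that pulls the HSI-derived and LiDAR-derived assimilation-modality representations toward each other. The code below is a loose illustration under that assumption; the `dual_constraint_loss` helper, the weighting `lam`, and the omission of the solid-angle term are all simplifications, not the AM2CFN objective.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def dual_constraint_loss(rsam_from_hsi, rsam_from_lidar, recon, target, lam=0.1):
    """Toy stand-in for a dual-constrained objective: a reconstruction term plus a KL
    term that aligns the HSI- and LiDAR-derived assimilation-modality feature
    distributions (the solid-angle constraint is omitted in this sketch)."""
    recon_term = float(np.mean((recon - target) ** 2))
    p = np.abs(rsam_from_hsi).ravel()
    q = np.abs(rsam_from_lidar).ravel()
    return recon_term + lam * kl_divergence(p, q)

rng = np.random.default_rng(0)
loss = dual_constraint_loss(rng.random((8, 8)), rng.random((8, 8)),
                            rng.random((8, 8, 4)), rng.random((8, 8, 4)))
```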
{"title":"AM2CFN: Assimilation Modality Mapping Guided Crossmodal Fusion Network for HSI and LiDAR Data Joint Classification","authors":"Yinbiao Lu;Wenbo Yu;Xintong Wei;Jiahui Huang","doi":"10.1109/LGRS.2024.3514179","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3514179","url":null,"abstract":"Combining their complementary properties, using hyperspectral image (HSI) and light detection and ranging (LiDAR) data improves classification performance. Nevertheless, the heterogeneous capturing instruments and distribution characteristics of these two remote sensing (RS) modalities always limit their application scopes in on-ground observation-related domains. This heterogeneity hinders capturing the crossmodal connection for discriminant information extraction and exchange. In this letter, we propose an assimilation modality mapping guided crossmodal fusion network (AM2CFN) for HSI and LiDAR data joint classification. Our motivation is to explore one RS assimilation modality (RSAM) by exploiting one latent crossmodal mapping strategy from HSI and LiDAR data simultaneously to remove the effect of modality heterogeneity and contribute to information exchange. AM2CFN constructs one level-wise assimilating encoder to simulate modality heterogeneity and enhance regional consistency. Modality intrinsic features are captured in this encoder to provide knowledge for modality assimilation. Furthermore, one RSAM balancing HS and LiDAR properties is explored. AM2CFN constructs one RSAM reconstruction decoder for modality reconstruction and classification. Dual constraints based on solid angle and Kullback-Leibler divergence are considered to restrain the information exchange process toward the optimal direction. Experiments show that AM2CFN outperforms several state-of-the-art techniques qualitatively and quantitatively. AM2CFN increases the overall accuracy (OA) by 2.46% and 1.62% on average on the Houston and MUUFL datasets. The codes will be available at \u0000<uri>https://github.com/GEOywb/AM2CFN</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142844515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Large-Scale 3-D Building Reconstruction in LoD2 From ALS Point Clouds
Gefei Kong;Chaoquan Zhang;Hongchao Fan
Large-scale 3-D building models are fundamental data for many research fields and applications. The automatic reconstruction of these 3-D models in LoD2 has garnered much attention, and many automatic methods have been proposed. However, most existing solutions require multiple, complicated substeps to reconstruct the structure of a single building. Meanwhile, most of them have not been applied to large-scale reconstruction, which would better support practical applications. Furthermore, some of them rely on input point clouds that already carry building classification information, which limits their generalization. To resolve these issues, in this letter, we propose a workflow that fully automatically reconstructs large-scale 3-D building models in LoD2. The workflow takes airborne laser scanning (ALS) point clouds as input and uses building footprints and a digital terrain model (DTM) as auxiliary data. LoD2 3-D building models are reconstructed by a three-module pipeline: 1) building and roof segmentation; 2) 3-D roof reconstruction; and 3) final top-down extrusion with terrain information. By proposing hybrid deep-learning-based and rule-based methods for the first two modules, we ensure that the reconstructed structures are as accurate as possible. The experimental results on point clouds covering the whole city of Trondheim, Norway, indicate that the proposed workflow can effectively reconstruct large-scale 3-D building models in LoD2 with acceptable RMSE.
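The final top-down extrusion step is the most mechanical part of such a pipeline: lift the footprint to the reconstructed roof, drop it to the DTM elevation, and close the solid with vertical walls. A minimal sketch under those assumptions (flat eave heights, one polygon, no ridge geometry) is given below; the `extrude_lod2` helper and its inputs are illustrative, not the authors' implementation.

```python
def extrude_lod2(footprint, roof_heights, ground_z):
    """Toy top-down extrusion: given a building footprint (CCW list of (x, y)),
    the reconstructed roof height at each footprint vertex, and the terrain height
    sampled from a DTM, build wall quads plus roof and floor polygons for a LoD2 solid."""
    n = len(footprint)
    roof = [(x, y, z) for (x, y), z in zip(footprint, roof_heights)]
    floor = [(x, y, ground_z) for (x, y) in footprint]
    walls = []
    for i in range(n):
        j = (i + 1) % n
        # One vertical quad per footprint edge, from terrain up to the roof edge.
        walls.append([floor[i], floor[j], roof[j], roof[i]])
    return {"roof": roof, "floor": floor, "walls": walls}

# Hypothetical building: 10 m x 6 m footprint, flat eave height of 8 m, terrain at 2.5 m.
building = extrude_lod2(
    footprint=[(0, 0), (10, 0), (10, 6), (0, 6)],
    roof_heights=[8.0, 8.0, 8.0, 8.0],
    ground_z=2.5,
)
```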
{"title":"Large-Scale 3-D Building Reconstruction in LoD2 From ALS Point Clouds","authors":"Gefei Kong;Chaoquan Zhang;Hongchao Fan","doi":"10.1109/LGRS.2024.3514514","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3514514","url":null,"abstract":"Large-scale 3-D building models are a fundamental data of many research and applications. The automatic reconstruction of these 3-D models in LoD2 garners much attention and many automatic methods have been proposed. However, most existing solutions require multiple and complicated substeps for reconstructing the structure of a single building. Meanwhile, most of them have not been applied to large-scale reconstruction to better support the practical applications. Furthermore, some of them rely on the input point clouds with building classification information, thereby affecting their generalization. To resolve these issues, in this letter, we propose a workflow to fully automatically reconstruct large-scale 3-D building models in LoD2. This workflow takes airborne laser scanning (ALS) point clouds as input and uses building footprints and digital terrain model (DTM) as assistance. LoD2 3-D building models are reconstructed by a three-module pipeline: 1) building and roof segmentation; 2) 3-D roof reconstruction; and 3) final top–down extrusion with terrain information. By proposing hybrid deep-learning-based and rule-based methods for the first two modules, we ensure the accurate structure output of reconstruction results as much as possible. The experimental results on point clouds covering the whole city of Trondheim, Norway, indicate that the proposed workflow can effectively reconstruct large-scale 3-D building models in LoD2 with the acceptable RMSE.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142858950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Radar Forward-Looking Imaging Based on Chirp Beam Scanning
Yang Yang;Yongqiang Cheng;Kang Liu;Hao Wu;Hongyan Liu;Hongqiang Wang
In this letter, a novel radar forward-looking imaging technique based on beam pattern modulation and beam scanning is presented. First, a chirp beam, which presents quadratically varying phase within the main lobe, is generated and made to scan, like a chirp pulse propagating along the azimuth direction, by differentially exciting each element of a uniform linear array (ULA). Second, the target distribution is reconstructed using 2-D pulse compression, and a theoretical analysis of the azimuth resolution is conducted. Finally, the sparse representation (SR) technique is employed to enhance the imaging performance. Simulation and experimental results validate the effectiveness and potential of the proposed method for acquiring high-resolution forward-looking images. This work holds promise for advancing the development of radar forward-looking imaging methods and systems.
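The first step, imposing a quadratic phase across the elements of a ULA so that the main lobe carries a chirp-like phase, can be illustrated with a toy array-factor computation. The `beta`, element spacing, and element count below are arbitrary choices for illustration, not the parameters used in the letter.

```python
import numpy as np

def chirp_beam_pattern(n_elems=32, d=0.5, beta=0.02, theta_deg=np.linspace(-30, 30, 601)):
    """Toy array factor of a ULA whose elements carry a quadratic phase taper,
    producing a 'chirp beam' with quadratically varying phase across the main lobe.
    d is the element spacing in wavelengths; beta controls the quadratic phase rate."""
    n = np.arange(n_elems) - (n_elems - 1) / 2.0
    weights = np.exp(1j * beta * n**2)                       # quadratic (chirp-like) excitation
    theta = np.deg2rad(theta_deg)
    steering = np.exp(1j * 2 * np.pi * d * np.outer(np.sin(theta), n))
    af = steering @ weights                                  # far-field array factor
    return theta_deg, 20 * np.log10(np.abs(af) / np.abs(af).max())

angles, pattern_db = chirp_beam_pattern()
```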
{"title":"Radar Forward-Looking Imaging Based on Chirp Beam Scanning","authors":"Yang Yang;Yongqiang Cheng;Kang Liu;Hao Wu;Hongyan Liu;Hongqiang Wang","doi":"10.1109/LGRS.2024.3514192","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3514192","url":null,"abstract":"In this letter, a novel radar forward-looking imaging technique based on beam pattern modulation and beam scanning is presented. First, the chirp beam, which presents quadratic varying phases within the main lobe, is generated and scans as a chirp pulse propagating along the azimuth direction by differentially exciting each element of a uniform linear array (ULA). Second, the target distribution is successfully reconstructed using 2-D pulse compression, and a theoretical analysis of the azimuth resolution is conducted. Finally, the sparse representation (SR) technique is employed to enhance the imaging performance. Simulation and experimental results validate the effectiveness and potential of the proposed method for acquiring high-resolution forward-looking images. This work holds promise for advancing the development of radar forward-looking methods and systems.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142859026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
An Azimuth Ambiguity Identification Method for Ship Detection in Multilook PolSAR Imagery
Wenxing Mu;Ning Wang;Lu Fang;Tao Liu
Azimuth ambiguity is a common issue in polarimetric synthetic aperture radar (PolSAR) imagery, particularly over calm and windless sea surfaces, where it causes numerous false alarms in ship detection. Many methods have been applied to single look complex (SLC) PolSAR imagery to suppress such ambiguities. Nevertheless, identifying and removing azimuth ambiguities in multilook complex (MLC) PolSAR imagery remains an open problem. This letter proposes an azimuth ambiguity identification method for ship detection in multilook PolSAR imagery. The process is divided into two steps: potential target detection and ambiguity identification. First, the four-component scattering model (Y4O) proposed by Yamaguchi is used to decompose the multilook PolSAR image into four dominant scattering categories. Then, constant false alarm rate (CFAR) detection is conducted on the total scattering power to detect all potential targets. Azimuth ambiguities are identified according to the correlation coefficient between the measured and standard scattering power vectors. Finally, the detection map is formed by removing the azimuth ambiguities from the CFAR detection result. The proposed method is validated on RadarSAT-2 and Airborne SAR (AIRSAR) images.
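The ambiguity test in the second step reduces to comparing a candidate's scattering power vector with a reference one, and a simple normalized correlation suffices to illustrate it. In the sketch below, the Y4O component ordering, the reference vector, and the 0.9 threshold are hypothetical, not the values used in the letter.

```python
import numpy as np

def power_vector_correlation(measured, reference):
    """Cosine-style correlation between a candidate target's measured scattering-power
    vector (e.g., the four Yamaguchi Y4O component powers) and a reference vector;
    a low value suggests the detection is an azimuth ambiguity rather than a ship."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(measured @ reference /
                 (np.linalg.norm(measured) * np.linalg.norm(reference) + 1e-12))

# Hypothetical Y4O power vectors [surface, double-bounce, volume, helix]:
ship_like = power_vector_correlation([0.15, 0.60, 0.20, 0.05], [0.10, 0.65, 0.20, 0.05])
ambiguous = power_vector_correlation([0.70, 0.05, 0.20, 0.05], [0.10, 0.65, 0.20, 0.05])
is_ambiguity = ambiguous < 0.9   # threshold is illustrative only
```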
{"title":"An Azimuth Ambiguity Identification Method for Ship Detection in Multilook PolSAR Imagery","authors":"Wenxing Mu;Ning Wang;Lu Fang;Tao Liu","doi":"10.1109/LGRS.2024.3513977","DOIUrl":"https://doi.org/10.1109/LGRS.2024.3513977","url":null,"abstract":"Azimuth ambiguity is a common issue in polarimetric synthetic aperture radar (PolSAR) imagery, particularly on calm and windless maritime surfaces, which causes numerous false alarms in ship detection. Numerous methods have been applied in single look complex (SLC) PolSAR imagery to suppress ambiguities. Nevertheless, identifying and removing azimuth ambiguities in multilook complex (MLC) PolSAR imagery remains an open problem. This letter proposes an azimuth ambiguity identification method for ship detection in multilook PolSAR imagery. The process is divided into two steps: potential target detection and ambiguity identification. First, the four-component scattering model (Y4O) proposed by Yamaguchi is utilized to decompose the multilook PolSAR image into four dominant scattering categories. Then, the constant false alarm rate (CFAR) detection is conducted based on the total scattering power to detect all potential targets. Azimuth ambiguities are identified according to the correlation coefficient between the measured and standard scattering power vectors. Eventually, the detection map is formed by removing azimuth ambiguities from the CFAR detection result. The proposed method is validated on RadarSAT-2 and Airborne SAR (AIRSAR) images.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142844385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0