Pub Date: 2025-09-08 | DOI: 10.1109/LGRS.2025.3606934
Jintong Xu;Xiao Xiao;Jingtian Tang
The controlled-source electromagnetic (CSEM) method is an important geophysical tool for sensing and studying subsurface conductivity structures, and advanced forward modeling techniques are crucial for the inversion and imaging of CSEM data. In this letter, we develop an accurate and efficient 3-D forward modeling algorithm for CSEM problems that combines the spectral element method (SEM) with octree meshes. The SEM, built on high-order basis functions, provides accurate CSEM responses, while the octree meshes enable local refinement, so models can be discretized with fewer elements than the structured hexahedral meshes used in conventional SEM while complex geometries can still be handled. Two synthetic examples verify the accuracy and efficiency of the algorithm, and its practical utility is demonstrated on a realistic model with complex geometry.
{"title":"Three-Dimensional Controlled-Source Electromagnetic Modeling Using Octree-Based Spectral Element Method","authors":"Jintong Xu;Xiao Xiao;Jingtian Tang","doi":"10.1109/LGRS.2025.3606934","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3606934","url":null,"abstract":"The controlled-source electromagnetic (CSEM) method is an important geophysical tool for sensing and studying subsurface conductivity structures. Advanced forward modeling techniques are crucial for the inversion and imaging of CSEM data. In this letter, we develop an accurate and efficient 3-D forward modeling algorithm for CSEM problems, combining spectral element method (SEM) and octree meshes. The SEM based on high-order basis functions can provide accurate CSEM responses, and the octree meshes enable local refinement, allowing for the discretization of models with fewer elements compared to the structured hexahedral meshes used in conventional SEM, while also providing the capability to handle complex models. Two synthetic examples are presented to verify the accuracy and efficiency of the algorithm. The utility of the algorithm is verified by a realistic model with complex geometry.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145078642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-08 | DOI: 10.1109/LGRS.2025.3607097
Yu Wang;Xiao Pan;Kang Shao;Ning Wang;Yuqiang Zhang;Xinyu Zhang;Chaoyang Lei;Xiaotao Wen
The resolution of time–frequency-based seismic attributes depends mainly on the underlying time–frequency analysis tool. This study proposes an improved second-order synchroextracting wavelet transform (SSEWT) that optimizes the scale parameters and the extraction scheme. Time–frequency computation on synthetic data shows a 5% improvement in efficiency. We then apply the proposed transform to fluid mobility calculation on field data, yielding a 5.6% increase in computational efficiency and an 11.26% improvement in resolution. Field data tests confirm that the proposed transform and the resulting fluid mobility attribute outperform conventional methods. Despite remaining computational challenges, the method offers significant advances in reservoir characterization and fluid detection.
Title: Fluid Mobility Attribute Extraction Based on Optimized Second-Order Synchroextracting Wavelet Transform (IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5)
Pub Date: 2025-09-08 | DOI: 10.1109/LGRS.2025.3607205
Xiao Wang;Yisha Sun;Pan He
Convolutional neural network (CNN)-based methods have been widely applied in remote sensing scene classification (RSSC) and have achieved remarkable classification results. However, traditional CNN methods have limitations in extracting global features and capturing image semantics, especially in complex remote sensing (RS) scenes. The Transformer can capture global features directly through its self-attention mechanism, but it is weaker at handling local details. Existing methods that directly combine CNN and Transformer features suffer from feature imbalance and introduce redundant information. To address these issues, we propose AFIMNet, an adaptive feature interaction network for RSSC. First, we use a dual-branch network (based on ResNet34 and Swin-S) to extract local and global features from RS scene images. Second, we design an adaptive feature interaction module (AFIM) that effectively enhances the interaction and correlation between local and global features. Third, we use a spatial-channel fusion module (SCFM) to aggregate the interacted features, further strengthening the feature representation. Our method is validated on three public RS datasets, and experimental results show that AFIMNet has stronger feature representation ability than current popular RS image classification methods, significantly improving classification accuracy. The source code will be publicly accessible at https://github.com/xavi276310/AFIMNet
{"title":"AFIMNet: An Adaptive Feature Interaction Network for Remote Sensing Scene Classification","authors":"Xiao Wang;Yisha Sun;Pan He","doi":"10.1109/LGRS.2025.3607205","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3607205","url":null,"abstract":"Convolutional neural network (CNN)-based methods have been widely applied in remote sensing scene classification (RSSC) and have achieved remarkable classification results. However, traditional CNN methods have certain limitations in extracting global features and capturing image semantics, especially in complex remote sensing (RS) image scenes. The Transformer can directly capture global features through the self-attention mechanism, but its performance is weaker when handling local details. Currently, methods that directly combine CNN and transformer features lead to feature imbalance and introduce redundant information. To address these issues, we propose AFIMNet, an adaptive feature interaction network for RSSC. First, we use a dual-branch network structure (based on ResNet34 and Swin-S) to extract local and global features from RS scene images. Second, we design an adaptive feature interaction module (AFIM) that effectively enhances the interaction and correlation between local and global features. Third, we use a spatial-channel fusion module (SCFM) to aggregate the interacted features, further strengthening feature representation capabilities. Our proposed method is validated on three public RS datasets, and experimental results show that AFIMNet has a stronger feature representation ability compared to current popular RS image classification methods, significantly improving classification accuracy. The source code will be publicly accessible at <uri>https://github.com/xavi276310/AFIMNet</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145061905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-05 | DOI: 10.1109/LGRS.2025.3606521
Runbo Yang;Huiyan Han;Shanyuan Bai;Yaming Cao
Multiscale object detection in remote sensing imagery poses significant challenges, including substantial variations in object size, diverse orientations, and interference from complex backgrounds. To address these issues, we propose the scale-aware detection and feature fusion network (SADFF-Net), a novel detection framework that incorporates a multiscale contextual attention fusion (MCAF) module to enhance information exchange between feature layers and suppress irrelevant feature interference. In addition, SADFF-Net employs an adaptive spatial feature fusion (ASFF) module that improves semantic consistency across feature layers by assigning spatial weights at multiple scales. To improve adaptability to scale variations, the regression head integrates deformable convolution, while the classification head uses depthwise separable convolutions to significantly reduce computational complexity without compromising detection accuracy. Extensive experiments on the DOTAv1 and DIOR_R datasets demonstrate that SADFF-Net outperforms current state-of-the-art methods in multiscale object detection.
{"title":"SADFF-Net: Scale-Aware Detection and Feature Fusion for Multiscale Remote Sensing Object Detection","authors":"Runbo Yang;Huiyan Han;Shanyuan Bai;Yaming Cao","doi":"10.1109/LGRS.2025.3606521","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3606521","url":null,"abstract":"Multiscale object detection in remote sensing imagery poses significant challenges, including substantial variations in object size, diverse orientations, and interference from complex backgrounds. To address these issues, we propose a scale-aware detection and feature fusion network (SADFF-Net), a novel detection framework that incorporates a Multiscale contextual attention fusion (MCAF) module to enhance information exchange between feature layers and suppress irrelevant feature interference. In addition, SADFF-Net employs an adaptive spatial feature fusion (ASFF) module to improve semantic consistency across feature layers by assigning spatial weights at multiple scales. To enhance adaptability to scale variations, the regression head integrates a deformable convolution. In contrast, the classification head utilizes depth-wise separable convolutions to significantly reduce computational complexity without compromising detection accuracy. Extensive experiments on the DOTAv1 and DIOR_R datasets demonstrate that SADFF-Net outperforms current state-of-the-art methods in Multiscale object detection.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145036868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-04 | DOI: 10.1109/LGRS.2025.3605910
Renfang Wang;Kun Yang;Feng Wang;Hong Qiu;Yingying Huang;Xiufeng Liu
Deep learning is a powerful technique for semantic change detection (SCD) of bitemporal remote sensing images. In this work, we propose to improve SCD accuracy using deep learning with frequency feature enhancement (FFE). Specifically, we develop an FFE module that integrates the Fourier transform with attention mechanisms to enhance both binary change detection (BCD) and semantic segmentation, the two key components for obtaining high SCD accuracy. Experimental results on the SECOND and LandSat-SCD datasets demonstrate the effectiveness of the proposed method, which also resolves change boundaries at high resolution.
{"title":"Semantic Change Detection of Bitemporal Remote Sensing Images Using Frequency Feature Enhancement","authors":"Renfang Wang;Kun Yang;Feng Wang;Hong Qiu;Yingying Huang;Xiufeng Liu","doi":"10.1109/LGRS.2025.3605910","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3605910","url":null,"abstract":"Deep learning is a powerful technique for semantic change detection (SCD) of bitemporal remote sensing images. In this work, we propose to improve SCD accuracy using deep learning with frequency feature enhancement (FFE). Specifically, we develop an FFE module that aims to enhance the performance of both binary change detection (BCD) and semantic segmentation, two main key components for obtaining high SCD accuracy, by integrating the Fourier transform and attention mechanisms. Experimental results on the SECOND and LandSat-SCD datasets demonstrate the effectiveness of the proposed method, and it achieves high resolution for change boundaries.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145036861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-04 | DOI: 10.1109/LGRS.2025.3605993
Pengxiong Zhang;Yi Jiang;Xinguo Zhu
Due to its superior recognition accuracy, deep learning has been widely adopted in synthetic aperture radar (SAR) ship detection. Nevertheless, significant variations in ship target scale pose challenges for existing detection architectures, frequently leading to missed detections or false positives. Moreover, high-precision detection models are typically structurally complex and computationally intensive, resulting in substantial hardware resource consumption. In this letter, we introduce LSAR-Det, a novel SAR ship detection network designed to address these challenges. We propose a lightweight residual feature extraction (LRFE) module to construct the backbone network, enhancing feature extraction while reducing the number of parameters and floating-point operations (FLOPs). Furthermore, we design a lightweight cross-space convolution (LCSConv) module to replace the traditional convolution in the neck network. In addition, we incorporate a multiscale bidirectional feature pyramid network (M-BiFPN) to facilitate multiscale feature fusion with fewer parameters. The proposed model contains merely 0.985M parameters and requires only 3.3G FLOPs. Experimental results on the SAR ship detection dataset (SSDD) and the high-resolution SAR images dataset (HRSID) demonstrate that LSAR-Det outperforms other models, achieving detection accuracies of 98.2% and 91.8%, respectively, thereby effectively balancing detection performance and model efficiency.
{"title":"LSAR-Det: A Lightweight YOLOv11-Based Model for Ship Detection in SAR Images","authors":"Pengxiong Zhang;Yi Jiang;Xinguo Zhu","doi":"10.1109/LGRS.2025.3605993","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3605993","url":null,"abstract":"Due to its superior recognition accuracy, deep learning has been widely adopted in synthetic aperture radar (SAR) ship detection. Nevertheless, significant variations in ship target scales pose challenges for existing detection architectures, frequently leading to missed detections or false positives. Moreover, high-precision detection models are typically structurally complex and computationally intensive, resulting in substantial hardware resource consumption. In this letter, we introduce LSAR-Det, a novel SAR ship detection network designed to address these challenges. We propose a lightweight residual feature extraction (LRFE) module to construct the backbone network, enhancing feature extraction capabilities while reducing the number of parameters and floating-point operations per second (FLOPs). Furthermore, we design a lightweight cross-space convolution (LCSConv) module to replace the traditional convolution in the neck network. In addition, we incorporate a multiscale bidirectional feature pyramid network (M-BiFPN) to facilitate multiscale feature fusion with fewer parameters. Our proposed model contains merely 0.985M parameters and requires only 3.3G FLOPs. Experimental results on the SAR ship detection dataset (SSDD) and high-resolution SAR image dataset (HRSID) datasets demonstrate that LSAR-Det outperforms other models, achieving detection accuracies of 98.2% and 91.8%, respectively, thereby effectively balancing detection performance and model efficiency.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145061895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-04 | DOI: 10.1109/LGRS.2025.3605913
Zhuoya Shi;Zemin Wang;Baojun Zhang;Nicholas E. Barrand;Manman Luo;Shuang Wu;Jiachun An;Hong Geng;Haojian Wu
The 11-month data gap between the Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On (GRACE-FO) missions hinders monitoring of long-term ice mass change and its further analysis. While many attempts have been made to bridge water storage gaps, few unified frameworks exist to bridge the ice mass change gaps for both the Greenland ice sheet (GrIS) and the Antarctic ice sheet (AIS). This study combines partial least squares regression (PLSR) with a sparrow search algorithm-optimized backpropagation (SSA-BP) network to fill this gap for the GrIS and AIS. Seasonal autoregressive integrated moving average with exogenous variables (SARIMAX) and multiple linear regression (MLR) models were introduced for comparison, and PLSR is utilized to select key variables for constructing the predictive models. We found that SSA-BP outperformed SARIMAX and MLR, with correlation coefficients (CCs) and root mean square errors (RMSEs) of 0.99 and 39.22 Gt for the GrIS and 0.95 and 189.85 Gt for the AIS within the testing period. SSA-BP produced a reasonable mass change trend with less noise than the other methods, and its reconstruction compares favorably with previously published results. Moreover, the reconstructed seasonal signals highlight the importance of filling the gap, showing decreased mass loss for the GrIS and continued acceleration of mass loss for the AIS after 2016.
Title: A Unified Framework for Bridging the Data Gap Between GRACE/GRACE-FO for Both Greenland and Antarctica (IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5)
Pub Date: 2025-09-04 | DOI: 10.1109/LGRS.2025.3605916
Yuquan Gan;Siyu Wu;Xingyu Li;Zhijie Xu;Yushan Pan
Hyperspectral image (HSI) classification faces critical challenges in effectively modeling intricate spectral–spatial structures and non-Euclidean relationships. Traditional methods often struggle to simultaneously capture local details, global contextual dependencies, and graph-structured correlations, leading to limited classification accuracy. To address these issues, this letter proposes a graph-aware hybrid encoding (GAHE) framework. To fully exploit the spectral–spatial characteristics and graph structural dependencies inherent in HSIs, the proposed method comprises three key components: a multiscale selective graph-aware attention (MSGA) module, a hybrid projection encoding module, and a graph-sensitive aggregation (GSA) module. The three modules work in a complementary manner to progressively refine and enhance feature representations across multiple scales and modalities. Experimental results demonstrate that the proposed GAHE method achieves better classification performance than advanced classification methods.
{"title":"Graph-Aware Hybrid Encoding for Hyperspectral Image Classification","authors":"Yuquan Gan;Siyu Wu;Xingyu Li;Zhijie Xu;Yushan Pan","doi":"10.1109/LGRS.2025.3605916","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3605916","url":null,"abstract":"Hyperspectral image (HSI) classification faces critical challenges in effectively modeling the intricate spectral–spatial structures and non-Euclidean relationships. Traditional methods often struggle to simultaneously capture local details, global contextual dependencies, and graph-structured correlations, leading to limited classification accuracy. To address the above issues, this letter proposes a graph-aware hybrid encoding (GAHE) framework. To fully exploit the spectral–spatial characteristics and graph structural dependencies inherent in HSI, the proposed method is structured into three key components: a multiscale selective graph-aware attention (MSGA) module, a hybrid projection encoding module, and a graph sensitive aggregation (GSA) module. The three modules work in a complementary manner to progressively refine and enhance feature representations across multiple scales and modalities. Compared with advanced classification methods, the experimental results demonstrate that the proposed GAHE method shows better classification performance.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145110260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-04 | DOI: 10.1109/LGRS.2025.3605978
Allan A. Nielsen;Henning Skriver;Knut Conradsen
We report on a complex Wishart distribution-based test statistic $\boldsymbol{Q}$ for block-diagonality in Hermitian matrices such as the ones analyzed in polarimetric synthetic aperture radar (polSAR) image data in the covariance matrix formulation. We also give an improved probability measure $\boldsymbol{P}$ associated with the test statistic. A case with simulated data demonstrates the superiority of the new expression for $\boldsymbol{P}$ and illustrates how the results depend on the choice of covariance matrix, its dimensionality, the equivalent number of looks, and two parameters in the improved $\boldsymbol{P}$ measure. We also give two cases with acquired data: one with airborne F-SAR polarimetric data, where we test for reflection symmetry, and one with (spaceborne) dual-pol Sentinel-1 data, where we test whether the data are diagonal-only. The absence of block-diagonal structure occurs mostly for man-made objects. In the Sentinel-1 example, some objects (e.g., buildings, cars, aircraft, and ships) are detected, while others (e.g., some bridges) are not.
{"title":"A Test Statistic for Block-Diagonal Covariance Matrix Structure in polSAR Data","authors":"Allan A. Nielsen;Henning Skriver;Knut Conradsen","doi":"10.1109/LGRS.2025.3605978","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3605978","url":null,"abstract":"We report on a complex Wishart distribution-based test statistic <inline-formula> <tex-math>$boldsymbol {Q}$ </tex-math></inline-formula> for block-diagonality in Hermitian matrices such as the ones analyzed in polarimetric synthetic aperture radar (polSAR) image data in the covariance matrix formulation. We also give an improved probability measure <inline-formula> <tex-math>$boldsymbol {P}$ </tex-math></inline-formula> associated with the test statistic. This is used in a case with simulated data to demonstrate the superiority of the new expression for <inline-formula> <tex-math>$boldsymbol {P}$ </tex-math></inline-formula> and to illustrate the dependence of results on the choice of covariance matrix, its dimensionality, the equivalent number of looks, and two parameters in the improved <inline-formula> <tex-math>$boldsymbol {P}$ </tex-math></inline-formula> measure. We also give two cases with acquired data. One case is with airborne F-SAR polarimetric data, where we test for reflection symmetry, another case is with (spaceborne) dual-pol Sentinel-1 data, where we test if the data are diagonal-only. The absence of block-diagonal structure occurs mostly for man-made objects. In the example with Sentinel-1 data, some objects (e.g., buildings, cars, aircraft, and ships) are detected, others (e.g., some bridges) are not.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145061854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-03 | DOI: 10.1109/LGRS.2025.3605792
Siyuan Ding;Xun Wang;Deshan Feng;Cheng Chen;Dianbo Li
Ground penetrating radar (GPR) is a powerful tool for exploring the shallow subsurface owing to its effectiveness and noninvasive nature. The accurate, high-resolution characterization of subsurface properties in 3-D GPR investigations calls for a quantitative, high-resolution imaging approach. However, full-waveform inversion (FWI) of GPR data has mostly been performed in 2-D, and the polarizations have rarely been discussed. To fully utilize 3-D GPR polarization data, this letter proposes a frequency-domain FWI algorithm for the simultaneous inversion of both co-polarized and cross-polarized data. The key derivations and essential steps of the inversion workflow are described in detail before the algorithm is applied to numerical experiments, and the potential impacts of the polarizations on the inversion results are analyzed with a synthetic model. Results show that the cross-polarized data are more sensitive than the co-polarized data in the inversion, and inversions of the multipolarized data with different weighting-matrix values suggest that assigning larger weights to the co-polarized data benefits the inversion result.
{"title":"Potential Impacts of 3-D Polarized GPR Data on Full-Waveform Inversion","authors":"Siyuan Ding;Xun Wang;Deshan Feng;Cheng Chen;Dianbo Li","doi":"10.1109/LGRS.2025.3605792","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3605792","url":null,"abstract":"Ground penetrating radar (GPR) is a powerful tool for exploring the shallow subsurface due to its effective and noninvasive features. Recently, the accurate and high-resolution characterization of subsurface properties in 3-D GPR investigations calls for a quantitative and high-resolution imaging approach. However, the full-waveform inversion (FWI) method for GPR data was performed mostly in 2-D and rarely discussed the polarizations. To fully utilize 3-D GPR polarization data, this letter proposes a frequency-domain FWI algorithm for simultaneous inversion of both the co-polarized and cross-polarized data. Detail derivations and vital processes in our inversion workflow were described in detail, before applying it to the numerical experiments and analyzing the potential impacts of the polarizations on inversion results with a synthetic model. Results showed that the cross-polarized data are more sensitive than the co-polarized data in inversion, and the behaviors in the inversion of the multipolarized data with different values in the weighting matrix suggest that larger weights for co-polarized data are of benefit to a better inversion result.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145090173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}