
Latest publications in IEEE Geoscience and Remote Sensing Letters: A Publication of the IEEE Geoscience and Remote Sensing Society

Three-Dimensional Controlled-Source Electromagnetic Modeling Using Octree-Based Spectral Element Method
Jintong Xu;Xiao Xiao;Jingtian Tang
The controlled-source electromagnetic (CSEM) method is an important geophysical tool for sensing and studying subsurface conductivity structures. Advanced forward modeling techniques are crucial for the inversion and imaging of CSEM data. In this letter, we develop an accurate and efficient 3-D forward modeling algorithm for CSEM problems, combining spectral element method (SEM) and octree meshes. The SEM based on high-order basis functions can provide accurate CSEM responses, and the octree meshes enable local refinement, allowing for the discretization of models with fewer elements compared to the structured hexahedral meshes used in conventional SEM, while also providing the capability to handle complex models. Two synthetic examples are presented to verify the accuracy and efficiency of the algorithm. The utility of the algorithm is verified by a realistic model with complex geometry.
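The local refinement that distinguishes an octree discretization from a structured hexahedral grid can be sketched in a few lines; the cell class, refinement criterion, and dimensions below are illustrative assumptions, not the authors' implementation.

```python
# Minimal octree local-refinement sketch (illustrative only; the refinement
# criterion, depths, and geometry are assumptions, not the authors' scheme).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Cell:
    center: tuple          # (x, y, z) cell center in meters
    size: float            # edge length in meters
    children: List["Cell"] = field(default_factory=list)

def refine(cell: Cell, source: tuple, max_depth: int, depth: int = 0) -> None:
    """Subdivide a cell into 8 children while it is close to the source."""
    dist = sum((c - s) ** 2 for c, s in zip(cell.center, source)) ** 0.5
    # Heuristic: refine cells whose edge length is large relative to their
    # distance from the transmitter, mimicking local refinement near sources.
    if depth >= max_depth or cell.size < 0.5 * dist:
        return
    h = cell.size / 4.0
    for dx in (-h, h):
        for dy in (-h, h):
            for dz in (-h, h):
                child = Cell((cell.center[0] + dx,
                              cell.center[1] + dy,
                              cell.center[2] + dz), cell.size / 2.0)
                refine(child, source, max_depth, depth + 1)
                cell.children.append(child)

def leaves(cell: Cell) -> int:
    return 1 if not cell.children else sum(leaves(c) for c in cell.children)

root = Cell((0.0, 0.0, 0.0), 10000.0)        # 10 km model cube
refine(root, source=(0.0, 0.0, -50.0), max_depth=5)
print("leaf cells:", leaves(root))           # far fewer than a uniform 32^3 grid
```

The point of the sketch is only that cells far from the source stop subdividing early, so the leaf count stays well below that of a uniformly fine hexahedral mesh at the same finest resolution.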
DOI: 10.1109/LGRS.2025.3606934 | Vol. 22, pp. 1-5 | Published 2025-09-08
Citations: 0
Fluid Mobility Attribute Extraction Based on Optimized Second-Order Synchroextracting Wavelet Transform
Yu Wang;Xiao Pan;Kang Shao;Ning Wang;Yuqiang Zhang;Xinyu Zhang;Chaoyang Lei;Xiaotao Wen
Resolution of time–frequency-based seismic attributes mainly relies on the time–frequency analysis tool. This study proposes an improved second-order synchroextracting wavelet transform (SSEWT) by optimizing the scale parameters and extraction scheme. Time–frequency computation on synthetic data shows a 5% improvement in efficiency. Then, we apply the proposed transform to fluid mobility calculation on field data, yielding a 5.6% increase in computational efficiency and an 11.26% improvement in resolution, demonstrating its superior performance. Field data tests demonstrate that the proposed transform and the related fluid mobility result outperform conventional methods. Despite remaining computational challenges, the method offers significant advancements in reservoir characterization and fluid detection.
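The synchroextraction idea, retaining only the time-frequency coefficients on the instantaneous-frequency ridge, can be sketched on a plain STFT; this first-order toy version only illustrates the principle, not the optimized second-order wavelet transform of the letter, and the signal and window parameters are assumptions.

```python
# Simplified synchroextraction on an STFT (illustration only; the letter uses
# an optimized second-order synchroextracting *wavelet* transform, SSEWT).
import numpy as np
from scipy.signal import stft

fs = 500.0                                   # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic seismic-like trace: a chirp plus a fixed-frequency component.
x = np.cos(2 * np.pi * (20 * t + 15 * t**2)) + 0.5 * np.cos(2 * np.pi * 80 * t)

f, tau, Z = stft(x, fs=fs, nperseg=128, noverlap=120)

# First-order synchroextraction, simplified: at each time column keep only the
# coefficient at the ridge (maximum magnitude); in this toy version only the
# strongest component per column survives.
S = np.zeros_like(Z)
ridge = np.argmax(np.abs(Z), axis=0)
cols = np.arange(Z.shape[1])
S[ridge, cols] = Z[ridge, cols]

print("STFT energy  :", np.sum(np.abs(Z) ** 2))
print("Ridge energy :", np.sum(np.abs(S) ** 2))  # energy concentrated on the ridge
```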
DOI: 10.1109/LGRS.2025.3607097 | Vol. 22, pp. 1-5 | Published 2025-09-08
Citations: 0
AFIMNet: An Adaptive Feature Interaction Network for Remote Sensing Scene Classification
Xiao Wang;Yisha Sun;Pan He
Convolutional neural network (CNN)-based methods have been widely applied in remote sensing scene classification (RSSC) and have achieved remarkable classification results. However, traditional CNN methods have certain limitations in extracting global features and capturing image semantics, especially in complex remote sensing (RS) image scenes. The Transformer can directly capture global features through the self-attention mechanism, but its performance is weaker when handling local details. Currently, methods that directly combine CNN and transformer features lead to feature imbalance and introduce redundant information. To address these issues, we propose AFIMNet, an adaptive feature interaction network for RSSC. First, we use a dual-branch network structure (based on ResNet34 and Swin-S) to extract local and global features from RS scene images. Second, we design an adaptive feature interaction module (AFIM) that effectively enhances the interaction and correlation between local and global features. Third, we use a spatial-channel fusion module (SCFM) to aggregate the interacted features, further strengthening feature representation capabilities. Our proposed method is validated on three public RS datasets, and experimental results show that AFIMNet has a stronger feature representation ability compared to current popular RS image classification methods, significantly improving classification accuracy. The source code will be publicly accessible at https://github.com/xavi276310/AFIMNet
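A minimal sketch of adaptive interaction between a local (CNN) branch and a global (transformer) branch is given below; the gating design and layer sizes are generic assumptions, not the published AFIM module.

```python
# Generic adaptive interaction between local (CNN) and global (transformer)
# feature maps -- an illustration of the idea, not the published AFIM module.
import torch
import torch.nn as nn

class AdaptiveInteraction(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # The gate predicts a per-pixel, per-channel mixing weight from both branches.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor):
        w = self.gate(torch.cat([local_feat, global_feat], dim=1))
        # Convex combination: w emphasises local detail, (1 - w) global context.
        return w * local_feat + (1.0 - w) * global_feat

local_feat = torch.randn(2, 256, 14, 14)    # e.g. a ResNet34 stage output
global_feat = torch.randn(2, 256, 14, 14)   # e.g. a reshaped Swin stage output
fused = AdaptiveInteraction(256)(local_feat, global_feat)
print(fused.shape)                           # torch.Size([2, 256, 14, 14])
```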
DOI: 10.1109/LGRS.2025.3607205 | Vol. 22, pp. 1-5 | Published 2025-09-08
Citations: 0
SADFF-Net: Scale-Aware Detection and Feature Fusion for Multiscale Remote Sensing Object Detection
Runbo Yang;Huiyan Han;Shanyuan Bai;Yaming Cao
Multiscale object detection in remote sensing imagery poses significant challenges, including substantial variations in object size, diverse orientations, and interference from complex backgrounds. To address these issues, we propose a scale-aware detection and feature fusion network (SADFF-Net), a novel detection framework that incorporates a multiscale contextual attention fusion (MCAF) module to enhance information exchange between feature layers and suppress irrelevant feature interference. In addition, SADFF-Net employs an adaptive spatial feature fusion (ASFF) module to improve semantic consistency across feature layers by assigning spatial weights at multiple scales. To enhance adaptability to scale variations, the regression head integrates a deformable convolution, while the classification head utilizes depth-wise separable convolutions to significantly reduce computational complexity without compromising detection accuracy. Extensive experiments on the DOTAv1 and DIOR_R datasets demonstrate that SADFF-Net outperforms current state-of-the-art methods in multiscale object detection.
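The ASFF step, assigning per-pixel weights to feature maps from several scales, can be sketched as follows; the 1x1 weighting convolutions and tensor sizes are assumptions for illustration, not the paper's exact configuration, and the inputs are assumed to be already resized to a common resolution.

```python
# ASFF-style fusion of three feature maps already resized to a common
# resolution (illustrative; layer sizes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASFFFusion(nn.Module):
    def __init__(self, channels: int, num_levels: int = 3):
        super().__init__()
        # One 1x1 conv per level produces a scalar "importance" map.
        self.weight_convs = nn.ModuleList(
            [nn.Conv2d(channels, 1, kernel_size=1) for _ in range(num_levels)]
        )

    def forward(self, feats):
        # feats: list of tensors [B, C, H, W], one per scale, same shape.
        logits = torch.cat(
            [conv(f) for conv, f in zip(self.weight_convs, feats)], dim=1
        )
        weights = F.softmax(logits, dim=1)            # [B, num_levels, H, W]
        fused = sum(weights[:, i:i + 1] * f for i, f in enumerate(feats))
        return fused

feats = [torch.randn(1, 128, 32, 32) for _ in range(3)]
print(ASFFFusion(128)(feats).shape)                   # torch.Size([1, 128, 32, 32])
```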
DOI: 10.1109/LGRS.2025.3606521 | Vol. 22, pp. 1-5 | Published 2025-09-05
Citations: 0
Semantic Change Detection of Bitemporal Remote Sensing Images Using Frequency Feature Enhancement
Renfang Wang;Kun Yang;Feng Wang;Hong Qiu;Yingying Huang;Xiufeng Liu
Deep learning is a powerful technique for semantic change detection (SCD) of bitemporal remote sensing images. In this work, we propose to improve SCD accuracy using deep learning with frequency feature enhancement (FFE). Specifically, we develop an FFE module that aims to enhance the performance of both binary change detection (BCD) and semantic segmentation, two main key components for obtaining high SCD accuracy, by integrating the Fourier transform and attention mechanisms. Experimental results on the SECOND and LandSat-SCD datasets demonstrate the effectiveness of the proposed method, and it achieves high resolution for change boundaries.
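A minimal sketch of frequency feature enhancement, transforming features to the Fourier domain, reweighting the spectrum with a learned gate, and transforming back, is shown below; the gating design is an assumption for illustration, not the published FFE module.

```python
# Frequency feature enhancement sketch: FFT -> learned spectral gating -> iFFT.
# The gating design is illustrative, not the published FFE module.
import torch
import torch.nn as nn

class FrequencyEnhance(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Attention over the amplitude spectrum, one gate per channel and bin.
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spec = torch.fft.rfft2(x, norm="ortho")          # complex [B, C, H, W//2+1]
        attn = self.gate(spec.abs())                     # emphasise informative bands
        spec = spec * attn                               # complex * real broadcast
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")

x = torch.randn(2, 64, 64, 64)
print(FrequencyEnhance(64)(x).shape)                     # torch.Size([2, 64, 64, 64])
```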
DOI: 10.1109/LGRS.2025.3605910 | Vol. 22, pp. 1-5 | Published 2025-09-04
Citations: 0
LSAR-Det: A Lightweight YOLOv11-Based Model for Ship Detection in SAR Images
Pengxiong Zhang;Yi Jiang;Xinguo Zhu
Due to its superior recognition accuracy, deep learning has been widely adopted in synthetic aperture radar (SAR) ship detection. Nevertheless, significant variations in ship target scales pose challenges for existing detection architectures, frequently leading to missed detections or false positives. Moreover, high-precision detection models are typically structurally complex and computationally intensive, resulting in substantial hardware resource consumption. In this letter, we introduce LSAR-Det, a novel SAR ship detection network designed to address these challenges. We propose a lightweight residual feature extraction (LRFE) module to construct the backbone network, enhancing feature extraction capabilities while reducing the number of parameters and floating-point operations per second (FLOPs). Furthermore, we design a lightweight cross-space convolution (LCSConv) module to replace the traditional convolution in the neck network. In addition, we incorporate a multiscale bidirectional feature pyramid network (M-BiFPN) to facilitate multiscale feature fusion with fewer parameters. Our proposed model contains merely 0.985M parameters and requires only 3.3G FLOPs. Experimental results on the SAR ship detection dataset (SSDD) and high-resolution SAR image dataset (HRSID) datasets demonstrate that LSAR-Det outperforms other models, achieving detection accuracies of 98.2% and 91.8%, respectively, thereby effectively balancing detection performance and model efficiency.
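The parameter savings that lightweight, depthwise-separable style convolutions provide can be seen in a short comparison; the block below is a generic depthwise-separable layer, not the published LRFE or LCSConv design.

```python
# Parameter comparison: standard 3x3 conv vs. a depthwise-separable equivalent
# (generic lightweight block, not the published LRFE/LCSConv modules).
import torch.nn as nn

def count_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

cin, cout = 128, 128
standard = nn.Conv2d(cin, cout, kernel_size=3, padding=1)
separable = nn.Sequential(
    nn.Conv2d(cin, cin, kernel_size=3, padding=1, groups=cin),  # depthwise
    nn.Conv2d(cin, cout, kernel_size=1),                        # pointwise
)

print("standard  :", count_params(standard))    # 147,584 parameters
print("separable :", count_params(separable))   # 17,792 parameters
```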
DOI: 10.1109/LGRS.2025.3605993 | Vol. 22, pp. 1-5 | Published 2025-09-04
Citations: 0
A Unified Framework for Bridging the Data Gap Between GRACE/GRACE-FO for Both Greenland and Antarctica
Zhuoya Shi;Zemin Wang;Baojun Zhang;Nicholas E. Barrand;Manman Luo;Shuang Wu;Jiachun An;Hong Geng;Haojian Wu
The 11-month data gap between the Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On (GRACE-FO) missions hinders the monitoring of long-term ice mass change and its further analysis. While many attempts have been made to bridge water storage gaps, few unified frameworks exist to bridge the ice mass change gaps for both the Greenland ice sheet (GrIS) and the Antarctic ice sheet (AIS). This study combines partial least squares regression (PLSR) and a Sparrow Search Algorithm-optimized back propagation network (SSA-BP) to fill this gap for the GrIS and AIS. Seasonal autoregressive integrated moving average with exogenous variables (SARIMAX) and multiple linear regression (MLR) models are introduced for comparison, and PLSR is utilized to select key variables for constructing the predictive models. We found that SSA-BP outperformed SARIMAX and MLR, with correlation coefficients (CCs) and root mean square errors (RMSEs) of 0.99 and 39.22 Gt for the GrIS, and 0.95 and 189.85 Gt for the AIS, within the testing period. SSA-BP produced a reasonable mass change trend with less noise than the other methods, and its reconstruction compares favorably with previous studies. Moreover, the reconstructed seasonal signals highlight the importance of filling the gap, showing decreased mass loss for the GrIS and continued acceleration of mass loss for the AIS after 2016.
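A minimal sketch of the two-stage idea, PLSR-based predictor screening followed by a small neural-network regressor, is given below on synthetic data; scikit-learn's MLPRegressor stands in for the BP network, the SSA hyperparameter search is omitted, and screening by coefficient magnitude is an illustrative assumption.

```python
# Two-stage sketch: PLS regression to screen predictors, then a small neural
# network regressor (stand-in for the SSA-optimized BP network; the SSA search
# and the paper's exact screening rule are not reproduced here).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                 # 12 candidate climate predictors
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=200)   # toy mass-change series

# Stage 1: PLSR with a simple screening rule (keep the predictors that have the
# largest absolute regression coefficients -- an illustrative criterion only).
pls = PLSRegression(n_components=3).fit(X, y)
coef = np.abs(pls.coef_).ravel()
keep = np.argsort(coef)[-4:]                   # retain the 4 strongest predictors
print("selected predictors:", sorted(keep.tolist()))

# Stage 2: neural-network regression on the screened predictors.
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
mlp.fit(X[:, keep], y)
print("training R^2:", round(mlp.score(X[:, keep], y), 3))
```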
DOI: 10.1109/LGRS.2025.3605913 | Vol. 22, pp. 1-5 | Published 2025-09-04
Citations: 0
Graph-Aware Hybrid Encoding for Hyperspectral Image Classification
Yuquan Gan;Siyu Wu;Xingyu Li;Zhijie Xu;Yushan Pan
Hyperspectral image (HSI) classification faces critical challenges in effectively modeling the intricate spectral–spatial structures and non-Euclidean relationships. Traditional methods often struggle to simultaneously capture local details, global contextual dependencies, and graph-structured correlations, leading to limited classification accuracy. To address the above issues, this letter proposes a graph-aware hybrid encoding (GAHE) framework. To fully exploit the spectral–spatial characteristics and graph structural dependencies inherent in HSI, the proposed method is structured into three key components: a multiscale selective graph-aware attention (MSGA) module, a hybrid projection encoding module, and a graph sensitive aggregation (GSA) module. The three modules work in a complementary manner to progressively refine and enhance feature representations across multiple scales and modalities. Compared with advanced classification methods, the experimental results demonstrate that the proposed GAHE method shows better classification performance.
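The graph-sensitive aggregation idea, letting each node pool information from its neighbors with learned attention weights, can be sketched with a single attention step; the scoring function and dimensions are assumptions, not the published GSA module.

```python
# Single-head graph attention aggregation over node features (generic sketch;
# the scoring function and sizes are assumptions, not the published GSA module).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_nodes, dim = 6, 8
feats = torch.randn(num_nodes, dim)            # e.g. spectral-spatial node features
adj = (torch.rand(num_nodes, num_nodes) > 0.5).float()
adj.fill_diagonal_(1.0)                        # keep self-loops

proj = torch.nn.Linear(dim, dim, bias=False)
h = proj(feats)                                # projected node features

# Attention logits from scaled pairwise dot products, masked to graph edges.
scores = h @ h.t() / dim ** 0.5
scores = scores.masked_fill(adj == 0, float("-inf"))
alpha = F.softmax(scores, dim=-1)              # row-normalised neighbour weights

aggregated = alpha @ h                         # each node mixes its neighbours
print(aggregated.shape)                        # torch.Size([6, 8])
```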
DOI: 10.1109/LGRS.2025.3605916 | Vol. 22, pp. 1-5 | Published 2025-09-04
Citations: 0
A Test Statistic for Block-Diagonal Covariance Matrix Structure in polSAR Data
Allan A. Nielsen;Henning Skriver;Knut Conradsen
We report on a complex Wishart distribution-based test statistic $\boldsymbol{Q}$ for block-diagonality in Hermitian matrices such as the ones analyzed in polarimetric synthetic aperture radar (polSAR) image data in the covariance matrix formulation. We also give an improved probability measure $\boldsymbol{P}$ associated with the test statistic. This is used in a case with simulated data to demonstrate the superiority of the new expression for $\boldsymbol{P}$ and to illustrate the dependence of results on the choice of covariance matrix, its dimensionality, the equivalent number of looks, and two parameters in the improved $\boldsymbol{P}$ measure. We also give two cases with acquired data. One case is with airborne F-SAR polarimetric data, where we test for reflection symmetry; the other is with (spaceborne) dual-pol Sentinel-1 data, where we test if the data are diagonal-only. The absence of block-diagonal structure occurs mostly for man-made objects. In the example with Sentinel-1 data, some objects (e.g., buildings, cars, aircraft, and ships) are detected, others (e.g., some bridges) are not.
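For context, the classical likelihood-ratio criterion for testing block independence in a complex Wishart setting has the form below; this is the textbook form such statistics typically build on, stated here for orientation rather than as the letter's exact $\boldsymbol{Q}$ and improved $\boldsymbol{P}$.

```latex
% Textbook likelihood-ratio criterion for block independence of a complex
% Wishart / Gaussian sample covariance (context only; not the letter's exact Q, P).
\[
  Q \;=\; \left( \frac{\det \hat{\Sigma}}
                      {\prod_{k=1}^{K} \det \hat{\Sigma}_{kk}} \right)^{\! n},
  \qquad
  -2\,\rho \ln Q \;\sim\; \chi^{2}_{f} \quad \text{(approximately)}
\]
```

Here $\hat{\Sigma}$ is the $n$-look sample covariance matrix, $\hat{\Sigma}_{kk}$ are its diagonal blocks, and the correction factor $\rho$ and degrees of freedom $f$ follow from a Box-type approximation.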
DOI: 10.1109/LGRS.2025.3605978 | Vol. 22, pp. 1-5 | Published 2025-09-04
Citations: 0
Potential Impacts of 3-D Polarized GPR Data on Full-Waveform Inversion
Siyuan Ding;Xun Wang;Deshan Feng;Cheng Chen;Dianbo Li
Ground penetrating radar (GPR) is a powerful tool for exploring the shallow subsurface due to its effectiveness and noninvasive nature. The accurate, high-resolution characterization of subsurface properties in 3-D GPR investigations calls for a quantitative imaging approach. However, full-waveform inversion (FWI) of GPR data has mostly been performed in 2-D and has rarely considered the polarizations. To fully utilize 3-D GPR polarization data, this letter proposes a frequency-domain FWI algorithm for simultaneous inversion of both the co-polarized and cross-polarized data. The derivations and key steps of our inversion workflow are described in detail before the algorithm is applied to numerical experiments and the potential impacts of the polarizations on inversion results are analyzed with a synthetic model. Results show that the cross-polarized data are more sensitive than the co-polarized data in inversion, and inversions of the multipolarized data with different values in the weighting matrix suggest that larger weights on the co-polarized data lead to better inversion results.
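The role of the weighting matrix can be made concrete with a generic frequency-domain least-squares misfit over the two polarization channels; the notation below is a standard FWI objective written for illustration, not the authors' exact formulation.

```latex
% Generic frequency-domain FWI misfit with per-polarization weights
% (illustrative notation; not the letter's exact formulation).
\[
  E(\mathbf{m}) \;=\; \frac{1}{2} \sum_{\omega} \sum_{p \,\in\, \{\mathrm{co},\,\mathrm{cross}\}}
    w_{p} \left\lVert \mathbf{d}^{\mathrm{obs}}_{p}(\omega)
      - \mathbf{d}^{\mathrm{syn}}_{p}(\mathbf{m}, \omega) \right\rVert_{2}^{2}
\]
```

Here $\mathbf{m}$ collects the model parameters and $w_{\mathrm{co}}$, $w_{\mathrm{cross}}$ are the polarization weights; a larger $w_{\mathrm{co}}$ corresponds to the heavier weighting of co-polarized data that the letter finds beneficial.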
DOI: 10.1109/LGRS.2025.3605792 | Vol. 22, pp. 1-5 | Published 2025-09-03
Citations: 0