
IEEE Geoscience and Remote Sensing Letters (a publication of the IEEE Geoscience and Remote Sensing Society) — Latest Publications

Improving New Zealand’s Vegetation Mapping Using Weakly Supervised Learning
Brent Martin;Norman W. H. Mason;James D. Shepherd;Jan Schindler
The New Zealand Land Use Carbon Analysis System Land Use Map (LUCAS LUM) is a series of land use layers that map land use classes, including both exotic and native forest, dating back to 1990 and updated every four years since 2008. This map is a rich resource, but the significant effort required to update it means errors may creep in without detection. We trialed whether a deep learning model could be trained on this imperfect data. We found the model predicts exotic forestry nationally to a higher level of accuracy than previously achieved. The resulting layer was used to detect and correct missed exotic forest plantations in the current LUCAS LUM. We also demonstrate that the exotic forestry prediction is sufficiently sensitive to detect wilding conifer infestations and estimate infestation density. Our results highlight the effectiveness of weakly supervised learning, enabling accurate and scalable national land use and land cover mapping while drastically reducing manual labeling efforts.
{"title":"Improving New Zealand’s Vegetation Mapping Using Weakly Supervised Learning","authors":"Brent Martin;Norman W. H. Mason;James D. Shepherd;Jan Schindler","doi":"10.1109/LGRS.2025.3635413","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3635413","url":null,"abstract":"The New Zealand Land Use Carbon Analysis System Land Use Map (LUCAS LUM) is a series of land use layers that map land use classes, including both exotic and native forest, dating back to 1990 and updated every four years since 2008. This map is a rich resource, but the significant effort required to update it means errors may creep in without detection. We trialed whether a deep learning model could be trained on this imperfect data. We found the model predicts exotic forestry nationally to a higher level of accuracy than previously achieved. The resulting layer was used to detect and correct missed exotic forest plantations in the current LUCAS LUM. We also demonstrate that the exotic forestry prediction is sufficiently sensitive to detect wilding conifer infestations and estimate infestation density. Our results highlight the effectiveness of weakly supervised learning, enabling accurate and scalable national land use and land cover mapping while drastically reducing manual labeling efforts.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
YOLO-MFG: Multiscale and Feature-Preserving YOLO With Gated Attention for Remote Sensing Object Detection
HengYu Li;Bo Huang;JianYong Lv
Driven by the increasing demand for intelligent Earth observation and large-scale scene understanding, remote sensing object detection has gained significant academic and practical importance. Despite notable progress in feature extraction and computational efficiency, many recent approaches still struggle to effectively handle issues such as detecting objects at multiple scales and preserving small targets. In this letter, an efficient remote sensing object detector called multiscale and feature-preserving YOLO with gated attention (YOLO-MFG) is proposed to address these challenges. First, a multiscale group shuffle attention (MGSA) module is introduced to adaptively aggregate multiscale spatial features, improving the model’s sensitivity to objects of diverse sizes. Second, the use of feature-preserving downsampling (FPD) enhances the downsampling process by introducing a triple-branch fusion mechanism that mitigates aliasing while jointly preserving semantics, saliency, and geometry. Finally, gated enhanced attention (GEA) is integrated to capture long-range dependencies and contextual cues crucial for remote sensing scenarios. The experimental results demonstrate that the proposed YOLO-MFG achieves a 2.9% improvement in mean average precision at an intersection over union (IoU) threshold of 0.5 (mAP50) on the optical remote sensing dataset SIMD compared with YOLO11. In addition, the mAP50 of detection results is improved by 1.4% and 4.2% on the DIOR and NWPU VHR-10 datasets, respectively.
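The mAP50 figures cited here count a prediction as correct only when its intersection over union with a ground-truth box reaches the 0.5 threshold. A minimal sketch of that IoU test (box layout and function names are illustrative, not from the paper):

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection counts toward mAP50 only if IoU with a ground-truth box >= 0.5;
# this half-overlapping pair scores 1/3 and would be a miss at that threshold.
print(iou([0, 0, 10, 10], [5, 0, 15, 10]))
```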
{"title":"YOLO-MFG: Multiscale and Feature-Preserving YOLO With Gated Attention for Remote Sensing Object Detection","authors":"HengYu Li;Bo Huang;JianYong Lv","doi":"10.1109/LGRS.2025.3634593","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3634593","url":null,"abstract":"Driven by the increasing demand for intelligent Earth observation and large-scale scene understanding, remote sensing object detection has gained significant academic and practical importance. Despite notable progress in feature extraction and computational efficiency, many recent approaches still struggle to effectively handle issues such as detecting objects at multiple scales and preserving small targets. In this letter, an efficient remote sensing object detector called multiscale and feature-preserving YOLO with gated attention (YOLO-MFG) is proposed to address these challenges. First, a multiscale group shuffle attention (MGSA) module is introduced to adaptively aggregate multiscale spatial features, improving the model’s sensitivity to objects of diverse sizes. Second, the use of feature-preserving downsampling (FPD) enhances the downsampling process by introducing a triple-branch fusion mechanism that mitigates aliasing while jointly preserving semantics, saliency, and geometry. Finally, gated enhanced attention (GEA) is integrated to capture long-range dependencies and contextual cues crucial for remote sensing scenarios. The experimental results demonstrate that the proposed YOLO-MFG achieves a 2.9% improvement in mean average precision at an intersection over union (IoU) threshold of 0.5 (mAP50) on the optical remote sensing dataset SIMD compared with YOLO11. 
In addition, the mAP50 of detection results is improved by 1.4% and 4.2% on the DIOR and NWPU VHR-10 datasets, respectively.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145830887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Forest Tree Species Classification Based on Deep Ensemble Learning by Fusing High-Resolution, Multitemporal, and Hyperspectral Multisource Remote Sensing Data
Dengli Yu;Lilin Tu;Ziqing Wei;Fuyao Zhu;Chengjun Yu;Denghong Wang;Jiayi Li;Xin Huang
Forest tree species classification is of great significance for the sustainable development of forest resources. Multisource remote sensing data provide abundant temporal, spatial, and spectral information for tree species classification. However, existing methods do not comprehensively capture and fuse spatio–temporal–spectral information. Therefore, a tree species classification method based on deep ensemble learning of multisource spatio–temporal–spectral remote sensing data is proposed. First, multitemporal, high-resolution, and hyperspectral data are used to train temporal, spatial, and spectral deep networks. Deep ensemble learning is then developed to fuse the spatio–temporal–spectral network outputs, with weighted fusion implemented via dynamic weight optimization based on the spatio–temporal–spectral features. Experimental results indicate that temporal features are more important than spatial information and that spectral networks perform best among all network structures. After spatio–temporal–spectral ensemble learning, tree species classification performance is further improved, and the overall accuracy (OA) of the proposed method exceeds 90%. The proposed algorithm realizes precise, fine-scale tree species classification and provides technical support for the monitoring and conservation of forest resources.
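The dynamic-weight fusion step can be pictured as a convex combination of the three networks' class-probability outputs; the weights below are illustrative stand-ins for the optimized ones:

```python
import numpy as np

def weighted_fusion(prob_maps, weights):
    """Fuse per-network class-probability arrays with normalized weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # keep the combination convex
    return sum(wi * p for wi, p in zip(w, prob_maps))

# Toy 3-class outputs from temporal, spatial, and spectral networks.
p_temporal = np.array([0.6, 0.3, 0.1])
p_spatial  = np.array([0.2, 0.5, 0.3])
p_spectral = np.array([0.7, 0.2, 0.1])
fused = weighted_fusion([p_temporal, p_spatial, p_spectral], [0.4, 0.2, 0.4])
print(fused.argmax())  # index of the fused class decision
```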
{"title":"Forest Tree Species Classification Based on Deep Ensemble Learning by Fusing High-Resolution, Multitemporal, and Hyperspectral Multisource Remote Sensing Data","authors":"Dengli Yu;Lilin Tu;Ziqing Wei;Fuyao Zhu;Chengjun Yu;Denghong Wang;Jiayi Li;Xin Huang","doi":"10.1109/LGRS.2025.3634553","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3634553","url":null,"abstract":"Forest tree species classification has great significance for sustainable development of forest resource. Multisource remote sensing data provide abundant temporal, spatial, and spectral information for tree species classification. However, there lacks tree species classification methods, which comprehensively capture and fuse spatio–temporal–spectral information. Therefore, a tree species classification method based on deep ensemble learning of multisource spatio–temporal–spectral remote sensing data is proposed. First, multitemporal, high-resolution, and hyperspectral data are utilized for training temporal, spatial, and spectral deep networks. Furtherly, deep ensemble learning is developed for the fusion of spatio–temporal–spectral network outputs, where weighted fusion is implemented via dynamic weight optimization based on the spatio–temporal–spatial features. Experimental results indicate that the importance of temporal features is higher than that of spatial information, and spectral networks perform best among all network structures. After the spatio–temporal–spectral ensemble learning, the performance of tree species classification is further improved, and the overall accuracy (OA) of the proposed method reaches above 90%. 
The proposed algorithm realizes precise and fine-scale tree species classification and provides technique support for the monitoring and conservation of forest resource.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Sparse Adaptive Generalized S Transform
Shengyi Wang;Xuehua Chen;Cong Wang;Junjie Liu;Xin Luo
High-resolution time–frequency analysis is crucial for seismic interpretation. Conventional sparse time–frequency transforms, such as the sparse generalized S transform (SGST), are not adaptive to the intrinsic characteristics of the signal. To address this limitation, we propose a sparse adaptive generalized S transform (SAGST). This method incorporates the signal amplitude spectrum into the Gaussian window function, allowing the window to adapt dynamically to the signal characteristics. This adaptive mechanism enables the construction of wavelet bases that are better matched to the signal. We apply the SAGST to the time–frequency analysis of both synthetic signal and field seismic data. The synthetic signal test shows that the SAGST achieves higher energy concentration, superior computational efficiency, and enhanced weak signal extraction compared with the sparse adaptive S transform (SAST) and SGST. A field example demonstrates that the SAGST can be used to indicate low-frequency shadow associated with hydrocarbon reservoirs.
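For context, the S transform uses a Gaussian window whose width is inversely proportional to frequency; SAGST additionally shapes this window with the signal's amplitude spectrum. A simplified sketch of the standard frequency-dependent window only (the abstract does not give the exact form of the adaptive modulation, so that part is not reproduced here):

```python
import numpy as np

def gaussian_window(n, f, alpha=1.0):
    """Standard S-transform window: sigma = 1/|f| (f in cycles/sample),
    so higher frequencies get narrower windows in time."""
    t = np.arange(n) - n // 2
    sigma = 1.0 / (abs(f) * alpha)
    return np.exp(-(t ** 2) / (2 * sigma ** 2))

# Toy use: evaluate the window at a signal's dominant frequency,
# estimated from its amplitude spectrum (the quantity SAGST feeds back
# into the window design).
sig = np.sin(2 * np.pi * 0.1 * np.arange(256))
amp = np.abs(np.fft.rfft(sig))
f_peak = np.argmax(amp) / 256  # ~0.1 cycles/sample
w = gaussian_window(256, f_peak)
print(w.max())  # unit peak at the window center
```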
{"title":"The Sparse Adaptive Generalized S Transform","authors":"Shengyi Wang;Xuehua Chen;Cong Wang;Junjie Liu;Xin Luo","doi":"10.1109/LGRS.2025.3634759","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3634759","url":null,"abstract":"High-resolution time–frequency analysis is crucial for seismic interpretation. Conventional sparse time–frequency transforms, such as the sparse generalized S transform (SGST), are not adaptive to the intrinsic characteristics of the signal. To address this limitation, we propose a sparse adaptive generalized S transform (SAGST). This method incorporates the signal amplitude spectrum into the Gaussian window function, allowing the window to adapt dynamically to the signal characteristics. This adaptive mechanism enables the construction of wavelet bases that are better matched to the signal. We apply the SAGST to the time–frequency analysis of both synthetic signal and field seismic data. The synthetic signal test shows that the SAGST achieves higher energy concentration, superior computational efficiency, and enhanced weak signal extraction compared with the sparse adaptive S transform (SAST) and SGST. A field example demonstrates that the SAGST can be used to indicate low-frequency shadow associated with hydrocarbon reservoirs.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ARTEA: A Multistage Adaptive Preprocessing Algorithm for Subsurface Target Enhancement in Ground Penetrating Radar
Wenqiang Ding;Changying Ma;Xintong Dong;Xuan Li
The heterogeneity of subsurface media induces multipath scattering and dielectric loss in ground penetrating radar (GPR) signal propagation, which results in wavefront distortion and signal attenuation. These effects degrade B-scan profiles by blurring target signatures, hindering automated feature extraction, and reducing the clarity of regions of interest (ROI). To address these issues, we propose the adaptive region target enhancement algorithm (ARTEA), a multistage preprocessing framework. ARTEA integrates dynamic range compression, continuous-scale normalization guided by adaptive sigma maps, and a frequency-domain refinement step. By dynamically adjusting parameters according to local signal characteristics, ARTEA is designed to achieve an effective tradeoff between artifact suppression and target preservation. Experiments on both synthetic and field GPR data demonstrate that ARTEA can enhance target contrast and structural fidelity while suppressing artifacts and preserving essential target features.
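The abstract does not specify ARTEA's exact compression law, but dynamic range compression on a B-scan is commonly a sign-preserving power-law mapping; a hedged sketch (function name and parameters are illustrative):

```python
import numpy as np

def dynamic_range_compress(bscan, gamma=0.5, eps=1e-8):
    """Compress amplitude dynamic range while keeping trace polarity:
    weak late-time reflections are boosted relative to the strong
    direct wave at the top of the B-scan."""
    mag = np.abs(bscan)
    mag = mag / (mag.max() + eps)          # normalize magnitudes to [0, 1]
    return np.sign(bscan) * mag ** gamma   # power-law compression

# Toy B-scan: one dominant arrival and much weaker reflections.
b = np.array([[1000.0, -10.0], [1.0, -0.1]])
c = dynamic_range_compress(b)
print(c)
```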
{"title":"ARTEA: A Multistage Adaptive Preprocessing Algorithm for Subsurface Target Enhancement in Ground Penetrating Radar","authors":"Wenqiang Ding;Changying Ma;Xintong Dong;Xuan Li","doi":"10.1109/LGRS.2025.3634350","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3634350","url":null,"abstract":"The heterogeneity of subsurface media induces multipath scattering and dielectric loss in ground penetrating radar (GPR) signal propagation, which results in wavefront distortion and signal attenuation. These effects degrade B-scan profiles by blurring target signatures, hindering automated feature extraction, and reducing the clarity of regions of interest (ROI). To address these issues, we propose the adaptive region target enhancement algorithm (ARTEA), a multistage preprocessing framework. ARTEA integrates dynamic range compression, continuous-scale normalization guided by adaptive sigma maps, and a frequency-domain refinement step. By dynamically adjusting parameters according to local signal characteristics, ARTEA is designed to achieve an effective tradeoff between artifact suppression and target preservation. Experiments on both synthetic and field GPR data demonstrate that ARTEA can enhance target contrast and structural fidelity while suppressing artifacts and preserving essential target features.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145830889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Lightweight Multifeature Hybrid Mamba for Remote Sensing Image Scene Classification
Huihui Dong;Jingcao Li;Zongfang Ma;Zhijie Li;Mengkun Liu;Xiaohui Wei;Licheng Jiao
Remote sensing (RS) image scene classification has wide applications in the RS field. Although existing methods have achieved remarkable performance, limitations remain in feature extraction and lightweight design. Current multibranch models, although performing well, have large parameter counts and high computational costs, making them difficult to deploy on resource-constrained edge devices such as uncrewed aerial vehicles (UAVs). On the other hand, lightweight models like StarNet have fewer parameters but rely on elementwise multiplication to generate features and fail to capture explicit long-range spatial features, resulting in insufficient classification accuracy. To address these issues, this letter proposes a lightweight Mamba-based hybrid network, namely LMHMamba, whose core is an innovative lightweight multifeature hybrid Mamba (LMHM) module. This module combines the advantage of StarNet in implicitly generating high-dimensional nonlinear features, introduces a lightweight state-space module to enhance spatial feature learning, and then uses local and global attention modules to emphasize local and global features. This enables effective multidimensional feature fusion while maintaining a low parameter count. We validate the LMHMamba model on three RS scene classification datasets and compare it with mainstream lightweight models and the latest methods. Experimental results show that LMHMamba achieves advanced levels of both classification accuracy and computational efficiency, significantly outperforming existing lightweight models and providing an efficient solution for edge deployment. Code is available at https://github.com/yizhilanmaodhh/LMHMamba
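StarNet's elementwise-multiplication ("star") operation mentioned above can be sketched as the product of two linear branches; the quadratic response is what implicitly maps features to a high-dimensional nonlinear space (shapes and names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def star_block(x, w1, b1, w2, b2):
    """'Star' operation: elementwise product of two linear branches."""
    return (x @ w1 + b1) * (x @ w2 + b2)

x = rng.standard_normal((4, 8))            # 4 tokens, 8 channels
w1 = rng.standard_normal((8, 16)); b1 = np.zeros(16)
w2 = rng.standard_normal((8, 16)); b2 = np.zeros(16)
y = star_block(x, w1, b1, w2, b2)
print(y.shape)  # (4, 16)

# With zero biases the map is quadratic: scaling the input by 2
# scales the output by 4 — clearly not a linear feature generator.
```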
{"title":"A Lightweight Multifeature Hybrid Mamba for Remote Sensing Image Scene Classification","authors":"Huihui Dong;Jingcao Li;Zongfang Ma;Zhijie Li;Mengkun Liu;Xiaohui Wei;Licheng Jiao","doi":"10.1109/LGRS.2025.3634398","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3634398","url":null,"abstract":"Remote sensing (RS) image scene classification has wide applications in the field of RS. Although the existing methods have achieved remarkable performance, there are still limitations in feature extraction and lightweight design. Current multibranch models, although performing well, have large parameter counts and high computational costs, making them difficult to deploy on resource-constrained edge devices, such as uncrewed aerial vehicles (UAVs). On the other hand, lightweight models like StarNet, having less parameter, but rely on elementwise multiplication to generate features and lack the capture of explicit long-range spatial feature, resulting in insufficient classification accuracy. To address these issues, this letter proposes a lightweight mamba-based hybrid network, namely LMHMamba, whose core is an innovative lightweight multifeature hybrid Mamba (LMHM) module. This module combines the advantage of StarNet in implicitly generating high-dimensional nonlinear features, introduces a lightweight state-space module to enhance spatial feature learning capabilities, and then uses local and global attention modules to emphasize local and global features. This enables effective multidimensional feature fusion while maintaining low parameter. We validate the performance of LMHMamba model on three RS scene classification datasets and compare it with mainstream lightweight models and the latest methods. Experimental results show that LMHMamba achieves advanced levels in both classification accuracy and computational efficiency, significantly outperforming the existing lightweight models, providing an efficient solution for edge deployment. 
Code is available at <uri>https://github.com/yizhilanmaodhh/LMHMamba</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
WCEDNet: A Weighted Cascaded Encoder–Decoder Network for Hyperspectral Change Detection Based on Spatial–Spectral Difference Features
Bo Zhang;Yaxiong Chen;Ruilin Yao;Shengwu Xiong
The core of hyperspectral change detection lies in accurately capturing spectral feature differences across different temporal phases to determine whether surface objects have changed. Since spectral variations of different ground objects often manifest more prominently in specific wavelength bands, we design a weighted cascaded encoder–decoder network (WCEDNet) based on spatial–spectral difference features for hyperspectral change detection. First, unlike conventional change detection frameworks based on siamese networks, our proposed single-branch approach focuses more intensively on extracting spatial–spectral difference features. Second, the weighted cascaded structure introduced in the encoder stage enables differential attention to different bands, enhancing focus on spectral bands with high responsiveness. Furthermore, we have developed a spatial–spectral cross-attention (SSCA) module to model intrafeature correlations within spatial and spectral domains. Our method was evaluated on three challenging hyperspectral change detection datasets, and experimental results demonstrate its superior performance compared to competitive models. The detailed code has been open-sourced at https://github.com/WUTCM-Lab/WCEDNet
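The single-branch design feeds a per-pixel difference cube instead of passing the two temporal images through a Siamese pair; a toy sketch (names and shapes are illustrative, not from the paper):

```python
import numpy as np

def difference_features(t1, t2):
    """Per-pixel spectral difference cube for a single-branch detector,
    replacing separate Siamese encodings of t1 and t2."""
    return t2.astype(float) - t1.astype(float)

# Toy bi-temporal scene: 5x5 pixels, 20 bands; one pixel changes,
# with the change concentrated in bands 5-9.
t1 = np.zeros((5, 5, 20))
t2 = np.zeros((5, 5, 20))
t2[2, 3, 5:10] = 1.0
d = difference_features(t1, t2)
changed = np.abs(d).sum(axis=2) > 0   # crude per-pixel change map
print(changed[2, 3], changed[0, 0])   # the changed pixel stands out
```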
{"title":"WCEDNet: A Weighted Cascaded Encoder–Decoder Network for Hyperspectral Change Detection Based on Spatial–Spectral Difference Features","authors":"Bo Zhang;Yaxiong Chen;Ruilin Yao;Shengwu Xiong","doi":"10.1109/LGRS.2025.3634345","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3634345","url":null,"abstract":"The core of hyperspectral change detection lies in accurately capturing spectral feature differences across different temporal phases to determine whether surface objects have changed. Since spectral variations of different ground objects often manifest more prominently in specific wavelength bands, we design a weighted cascaded encoder–decoder network (WCEDNet) based on spatial–spectral difference features for hyperspectral change detection. First, unlike conventional change detection frameworks based on siamese networks, our proposed single-branch approach focuses more intensively on extracting spatial–spectral difference features. Second, the weighted cascaded structure introduced in the encoder stage enables differential attention to different bands, enhancing focus on spectral bands with high responsiveness. Furthermore, we have developed a spatial–spectral cross-attention (SSCA) module to model intrafeature correlations within spatial and spectral domains. Our method was evaluated on three challenging hyperspectral change detection datasets, and experimental results demonstrate its superior performance compared to competitive models. 
The detailed code has been open-sourced at <uri>https://github.com/WUTCM-Lab/WCEDNet</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Multiscale Feature Refinement Detector for Small Objects With Ambiguous Boundaries
Weihua Shen;Yalin Li;Xiaohua Chen;Chunzhi Li
There are multiple challenges in small object detection (SOD), including limited instances, insufficient features, diverse scales, uneven distribution, ambiguous boundaries, and complex backgrounds. These issues often lead to high false detection rates and hinder model generalization and convergence. This study proposes a multiscale object detection algorithm that enhances the detection of subtle features through an improved detection head (DH) and a minimum point distance intersection-over-union loss. The enhanced DH improves target representation, enabling more precise localization and classification of small objects. Meanwhile, the new loss (NL) function stabilizes bounding box regression by adaptively adjusting auxiliary bounding box scales. Evaluations on two benchmark datasets demonstrate that our method achieves a 2.6% increase in mAP50 and a 1.8% improvement in mAP50:95 on the satellite imagery multivehicles dataset (SIMD) and a 1.9% increase in mAP50:95 on the DIOR dataset. Furthermore, the model reduces the number of parameters by 2.5% and the computational cost by 1.4%, demonstrating its potential for real-time detection applications.
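A minimum point distance IoU loss of the kind referenced above penalizes plain IoU by normalized corner-point distances; a hedged sketch following the common MPDIoU formulation (the paper's exact variant may differ):

```python
def mpdiou_loss(pred, gt, img_w, img_h):
    """MPDIoU-style loss (sketch): IoU minus the normalized squared
    distances between the boxes' top-left and bottom-right corners.
    Boxes are [x1, y1, x2, y2]."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter)
    diag2 = img_w ** 2 + img_h ** 2               # image diagonal normalizer
    d1 = (pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2  # top-left
    d2 = (pred[2] - gt[2]) ** 2 + (pred[3] - gt[3]) ** 2  # bottom-right
    return 1.0 - (iou - d1 / diag2 - d2 / diag2)

print(mpdiou_loss([0, 0, 10, 10], [0, 0, 10, 10], 100, 100))  # 0.0 for a perfect match
```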
{"title":"A Multiscale Feature Refinement Detector for Small Objects With Ambiguous Boundaries","authors":"Weihua Shen;Yalin Li;Xiaohua Chen;Chunzhi Li","doi":"10.1109/LGRS.2025.3633285","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3633285","url":null,"abstract":"There are multiple challenges in small object detection (SOD), including limited instances, insufficient features, diverse scales, uneven distribution, ambiguous boundaries, and complex backgrounds. These issues often lead to high false detection rates and hinder model generalization and convergence. This study proposes a multiscale object detection algorithm that enhances the detection of subtle features by improving the change detection to DH throughout and incorporating a minimum point distance intersection-over-union loss. The enhanced DH improves target representation, enabling more precise localization and classification of small objects. Meanwhile, the new loss (NL) function stabilizes bounding box regression by adaptively adjusting auxiliary bounding box scales. Evaluations on two benchmark datasets demonstrate that our method achieves a 2.6% increase in mAP50 and a 1.8% improvement in mAP50:95 on the satellite imagery multivehicles dataset (SIMD) and a 1.9% increase in mAP50:95 on the DIOR dataset. 
Furthermore, the model reduces the number of parameters by 2.5% and the computational cost by 1.4%, demonstrating its potential for real-time detection applications.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
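The loss term named in this abstract — a minimum point distance intersection-over-union loss — can be sketched as follows. This is a minimal pure-Python version of the commonly published MPDIoU definition (IoU minus the normalized squared distances between corresponding box corners), not the authors' exact implementation; the function name and the corner-format box convention are illustrative assumptions.

```python
def mpdiou_loss(pred, gt, img_w, img_h):
    """Minimum point distance IoU (MPDIoU) loss for axis-aligned boxes.

    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2. Besides the usual
    IoU overlap term, the loss penalizes the squared distances between the
    two boxes' top-left and bottom-right corners, normalized by the squared
    image diagonal.
    """
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt

    # Plain IoU of the two boxes.
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union if union > 0 else 0.0

    # Squared corner distances, normalized by the image diagonal squared.
    diag2 = img_w ** 2 + img_h ** 2
    d1 = (px1 - gx1) ** 2 + (py1 - gy1) ** 2  # top-left corners
    d2 = (px2 - gx2) ** 2 + (py2 - gy2) ** 2  # bottom-right corners

    mpdiou = iou - d1 / diag2 - d2 / diag2
    return 1.0 - mpdiou


# A perfectly matching box gives zero loss; a shifted box is penalized.
print(mpdiou_loss((10, 10, 50, 50), (10, 10, 50, 50), 640, 640))  # 0.0
```

Because the corner-distance terms stay informative even when two boxes do not overlap (IoU = 0), a loss of this shape gives usable gradients for the small, weakly overlapping boxes the abstract targets.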
Geospatial Domain Adaptation With Truncated Parameter-Efficient Fine-Tuning
Kwonyoung Kim;Jungin Park;Kwanghoon Sohn
Parameter-efficient fine-tuning (PEFT) adapts large pretrained foundation models to downstream tasks, such as remote sensing scene classification, by learning a small set of additional parameters while keeping the pretrained parameters frozen. While PEFT offers substantial training efficiency over full fine-tuning (FT), it still incurs high inference costs due to reliance on both pretrained and task-specific parameters. To address this limitation, we propose a novel PEFT approach with model truncation, termed truncated parameter-efficient fine-tuning (TruncPEFT), enabling efficiency gains to persist during inference. Observing that predictions from final and intermediate layers often exhibit high agreement, we truncate a set of final layers and replace them with a lightweight attention module. Additionally, we introduce a token dropping strategy to mitigate interclass interference, reducing the model’s sensitivity to visual similarities between different classes in remote sensing data. Extensive experiments on seven remote sensing scene classification datasets demonstrate the effectiveness of the proposed method, significantly improving training, inference, and GPU memory efficiencies while achieving comparable or even better performance than prior PEFT methods and full FT.
{"title":"Geospatial Domain Adaptation With Truncated Parameter-Efficient Fine-Tuning","authors":"Kwonyoung Kim;Jungin Park;Kwanghoon Sohn","doi":"10.1109/LGRS.2025.3633718","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3633718","url":null,"abstract":"Parameter-efficient fine-tuning (PEFT) adapts large pretrained foundation models to downstream tasks, such as remote sensing scene classification, by learning a small set of additional parameters while keeping the pretrained parameters frozen. While PEFT offers substantial training efficiency over full fine-tuning (FT), it still incurs high inference costs due to reliance on both pretrained and task-specific parameters. To address this limitation, we propose a novel PEFT approach with model truncation, termed truncated parameter-efficient fine-tuning (TruncPEFT), enabling efficiency gains to persist during inference. Observing that predictions from final and intermediate layers often exhibit high agreement, we truncate a set of final layers and replace them with a lightweight attention module. Additionally, we introduce a token dropping strategy to mitigate interclass interference, reducing the model’s sensitivity to visual similarities between different classes in remote sensing data. 
Extensive experiments on seven remote sensing scene classification datasets demonstrate the effectiveness of the proposed method, significantly improving training, inference, and GPU memory efficiencies while achieving comparable or even better performance than prior PEFT methods and full FT.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
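The two mechanisms this abstract describes — truncating final backbone layers in favor of a lightweight trainable head, and dropping low-value tokens — read, in outline, like the following sketch. Every name here (`truncate_and_adapt`, `drop_tokens`, the scoring rule) is an illustrative assumption; the abstract does not specify the authors' module design or token-scoring criterion.

```python
def truncate_and_adapt(layers, num_truncate, lightweight_head):
    """Drop the final `num_truncate` layers of a frozen backbone and append
    a small trainable head, mirroring the truncation idea described above.
    `layers` is an ordered list of callables; only the head is trained."""
    kept = layers[: len(layers) - num_truncate]
    return kept + [lightweight_head]


def drop_tokens(tokens, scores, keep_ratio):
    """Keep only the highest-scoring fraction of tokens (a generic token
    dropping step; the letter's exact scoring rule is not given here)."""
    k = max(1, int(len(tokens) * keep_ratio))
    ranked = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:k])  # preserve the original token order
    return [tokens[i] for i in keep]


# Toy example: a 12-layer "backbone" of identity layers, truncate the last 4.
backbone = [lambda x: x for _ in range(12)]
model = truncate_and_adapt(backbone, 4, lambda x: x)
print(len(model))  # 9: eight kept layers plus one lightweight head
```

The efficiency claim follows directly from this structure: the truncated layers are never executed at inference time, so their compute and memory cost disappears rather than being merely frozen.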
A Sequential Doppler Offset (SDO) Method for Locating Targets Causing Azimuth Fractional Ambiguity in Spaceborne HRWS-SAR
Yanyan Zhang;Akira Hirose;Ryo Natsuaki
Advanced Land Observing Satellite-4 (ALOS-4) is a spaceborne high-resolution and wide-swath synthetic aperture radar (HRWS-SAR) that uses a variable pulse repetition interval (VPRI) technique to achieve continuous wide imaging. In some ALOS-4 images, azimuth fractional ambiguity caused by the VPRI is observed, and it differs from the usual integer ambiguity, resulting from interchannel errors in that it occurs at smaller intervals. In this letter, we propose a sequential Doppler offset (SDO) method for locating the original target (OT) that causes azimuth fractional ambiguity. First, the ratio of the interval of integer ambiguity to that of fractional ambiguity is obtained, which is used to generate SAR images with different Doppler center frequencies. Second, the coherence between the sum image of the generated images and the image with a zero Doppler center frequency is calculated. Third, some points with coherence greater than a threshold are selected based on the coherence. Finally, the final OT is obtained by detecting the filtered selected points. Some experiments are conducted based on ALOS-4 L1.2 data, and the results demonstrate that the method locates the OT accurately. In short, the proposed method provides a starting point for fractional ambiguity suppression in HRWS-SAR.
{"title":"A Sequential Doppler Offset (SDO) Method for Locating Targets Causing Azimuth Fractional Ambiguity in Spaceborne HRWS-SAR","authors":"Yanyan Zhang;Akira Hirose;Ryo Natsuaki","doi":"10.1109/LGRS.2025.3633588","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3633588","url":null,"abstract":"Advanced Land Observing Satellite-4 (ALOS-4) is a spaceborne high-resolution and wide-swath synthetic aperture radar (HRWS-SAR) that uses a variable pulse repetition interval (VPRI) technique to achieve continuous wide imaging. In some ALOS-4 images, azimuth fractional ambiguity caused by the VPRI is observed, and it differs from the usual integer ambiguity, resulting from interchannel errors in that it occurs at smaller intervals. In this letter, we propose a sequential Doppler offset (SDO) method for locating the original target (OT) that causes azimuth fractional ambiguity. First, the ratio of the interval of integer ambiguity to that of fractional ambiguity is obtained, which is used to generate SAR images with different Doppler center frequencies. Second, the coherence between the sum image of the generated images and the image with a zero Doppler center frequency is calculated. Third, some points with coherence greater than a threshold are selected based on the coherence. Finally, the final OT is obtained by detecting the filtered selected points. Some experiments are conducted based on ALOS-4 L1.2 data, and the results demonstrate that the method locates the OT accurately. 
In short, the proposed method provides a starting point for fractional ambiguity suppression in HRWS-SAR.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
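The second and third steps of the method summarized above — computing coherence between two co-registered complex images, then selecting pixels whose coherence exceeds a threshold — can be sketched with a standard boxcar coherence estimator. The window size, threshold, and function names are illustrative assumptions, not the letter's parameters.

```python
import numpy as np


def coherence_map(a, b, win=3):
    """Windowed coherence magnitude between two co-registered complex SAR
    images, estimated with a win x win boxcar over valid positions."""
    def boxsum(x):
        # Sum over each win x win neighborhood via 2-D cumulative sums
        # (integral image), restricted to fully valid windows.
        pad = np.pad(x, ((1, 0), (1, 0)))
        c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
        return c[win:, win:] - c[:-win, win:] - c[win:, :-win] + c[:-win, :-win]

    num = np.abs(boxsum(a * np.conj(b)))
    den = np.sqrt(boxsum(np.abs(a) ** 2) * boxsum(np.abs(b) ** 2))
    return num / np.maximum(den, 1e-12)


def select_points(coh, threshold=0.8):
    """Indices of pixels whose coherence exceeds the threshold."""
    return np.argwhere(coh > threshold)


# Identical images are fully coherent: every pixel of the map is ~1.
a = (np.arange(25, dtype=float).reshape(5, 5) + 1) * np.exp(1j * 0.3)
print(np.allclose(coherence_map(a, a), 1.0))  # True
```

In the SDO setting, `a` would play the role of the sum of the Doppler-offset images and `b` the zero-Doppler-centroid image; true scene content stays coherent across the offsets while ambiguity responses decorrelate, which is what the threshold isolates.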