
Latest articles from IEEE Geoscience and Remote Sensing Letters: a publication of the IEEE Geoscience and Remote Sensing Society

Bridging Temporal and Spatial–Spectral Features With Satellite Image Time Series: TAS2B-Net for Crop Semantic Segmentation
Xiaohan Luo;Hangyu Dai;Vladimir Lysenko;Jinglu Tan;Ya Guo
Semantic segmentation based on satellite image time series (SITS) is fundamental to a wide range of geospatial applications, including land cover mapping and urban development analysis. By integrating crop phenological dynamics over time, SITS provides richer spatiotemporal information than static satellite imagery. However, existing models fail to effectively process the temporal and spatial–spectral dimensions of SITS independently, leading to reduced segmentation accuracy. In this letter, we propose a temporal aggregation spatial–spectral bridge network (TAS2B-Net), a novel architecture designed to extract fine-grained crop features from SITS. The network consists of two key components: the pixel-aware grouping temporal integrator (PGTI), which captures temporal dependencies within pixel groups, and the edge-aware contextual fusion head (ECFH), which enhances spatial boundary and global structural representation. Additionally, we introduce a lightweight multiscale spectral decoder (LMSD) to aggregate contextual information across multiple spectral scales, further improving feature learning for semantic segmentation. Extensive experiments on the panoptic agricultural satellite time series (PASTIS) and MTLCC datasets show that the proposed network achieves mIoU scores of 68.91% and 84.59%, respectively, outperforming eight state-of-the-art (SOTA) methods and setting new benchmarks for SITS-based semantic segmentation.
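The PGTI, ECFH, and LMSD modules are specific to this letter; as a hedged illustration of the general mechanism the PGTI builds on — collapsing a per-pixel satellite image time series into a single feature map with learned temporal attention — here is a minimal PyTorch sketch (names, shapes, and the module design are illustrative assumptions, not the authors' code):

```python
import torch
import torch.nn as nn

class TemporalAttentionPool(nn.Module):
    """Collapse a satellite image time series (B, T, C, H, W) into one
    feature map (B, C, H, W) using per-pixel learned temporal weights.
    Illustrative only -- the letter's PGTI additionally groups pixels
    before temporal integration."""
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # temporal attention logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = x.shape
        logits = self.score(x.reshape(b * t, c, h, w)).reshape(b, t, 1, h, w)
        weights = torch.softmax(logits, dim=1)   # normalize across acquisition dates
        return (weights * x).sum(dim=1)          # (B, C, H, W)

feats = torch.randn(2, 12, 64, 32, 32)           # 12 acquisition dates
pooled = TemporalAttentionPool(64)(feats)
print(pooled.shape)                              # torch.Size([2, 64, 32, 32])
```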
{"title":"Bridging Temporal and Spatial–Spectral Features With Satellite Image Time Series: TAS2B-Net for Crop Semantic Segmentation","authors":"Xiaohan Luo;Hangyu Dai;Vladimir Lysenko;Jinglu Tan;Ya Guo","doi":"10.1109/LGRS.2025.3603294","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3603294","url":null,"abstract":"Semantic segmentation based on satellite image time series (SITS) is fundamental to a wide range of geospatial applications, including land cover mapping and urban development analysis. By integrating crop phenological dynamics over time, SITS provides richer spatiotemporal information than static satellite imagery. However, existing models fail to effectively process the temporal and spatial–spectral dimensions of SITS independently, leading to reduced segmentation accuracy. In this letter, we propose a temporal aggregation spatial–spectral bridge network (TAS2B-Net), a novel architecture designed to extract fine-grained crop features from SITS. The network consists of two key components: the pixel-aware grouping temporal integrator (PGTI), which captures temporal dependencies within pixel groups, and the edge-aware contextual fusion head (ECFH), which enhances spatial boundary and global structural representation. Additionally, we introduce a lightweight multiscale spectral decoder (LMSD) to aggregate contextual information across multiple spectral scales, further improving feature learning for semantic segmentation. Extensive experiments on the panoptic agricultural satellite time series (PASTIS) and MTLCC datasets show that the proposed network achieves mIoU scores of 68.91% and 84.59%, respectively, outperforming eight state-of-the-art (SOTA) methods and setting new benchmarks for SITS-based semantic segmentation.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145007849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dual Collaborative Sparse and Total Variation Regularization for Unmixing-Based Change Detection
Shile Zhang;Yuxing Zhao;Zhihan Liu;Xiangming Jiang;Maoguo Gong
Hyperspectral change detection is critical for analyzing the temporal evolution of the feature components in multitemporal hyperspectral images. However, existing methods often fall short of fully exploiting the spatiotemporal–spectral correlations within these images, thereby limiting their accuracy and robustness. This letter introduces a novel hyperspectral change detection method, termed dual collaborative sparse unmixing via variable splitting augmented Lagrangian and total variation (DCLSUnSAL-TV). By integrating dual collaborative sparsity and total variation (TV) regularizers, this method capitalizes on the local similarity of changes in the feature components, leveraging the low-rank property of hyperspectral difference images (HSDIs) and their inherent spatial–spectral correlations. A customized abundancewise truncation and ensemble strategy is designed to obtain the change map by aggregating the subpixel-level changes with respect to each endmember. Comprehensive comparison and ablation experiments demonstrate the effectiveness of the proposed method in improving the accuracy of change detection. The source code is available at: https://github.com/2alsbz/DCLSUnSAL_TV
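The letter's exact objective is not reproduced in the abstract; methods in the CLSUnSAL/SUnSAL-TV lineage it extends typically minimize an objective of the following form, with Y the observed difference-image pixels, E the endmember library, and A the abundance matrix — shown here only as a representative formulation of this family:

```latex
\min_{\mathbf{A}\ge 0}\;
\tfrac{1}{2}\,\lVert \mathbf{E}\mathbf{A}-\mathbf{Y}\rVert_F^2
+\lambda_{2,1}\,\lVert \mathbf{A}\rVert_{2,1}
+\lambda_{\mathrm{TV}}\,\mathrm{TV}(\mathbf{A}),
\qquad
\mathrm{TV}(\mathbf{A})=\sum_{(i,j)\in\varepsilon}\lVert \mathbf{a}_i-\mathbf{a}_j\rVert_1
```

Here the ℓ2,1 norm sums the ℓ2 norms of the rows of A, so the same few library endmembers are encouraged to be active across all pixels (collaborative sparsity); ε is the set of horizontal/vertical pixel neighbors, so the TV term enforces the local similarity of changes; and the coupled problem is split into simpler subproblems solved via the augmented Lagrangian (ADMM), matching the "variable splitting augmented Lagrangian" in the method's name.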
{"title":"Dual Collaborative Sparse and Total Variation Regularization for Unmixing-Based Change Detection","authors":"Shile Zhang;Yuxing Zhao;Zhihan Liu;Xiangming Jiang;Maoguo Gong","doi":"10.1109/LGRS.2025.3603339","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3603339","url":null,"abstract":"Hyperspectral change detection is critical for analyzing the temporal evolution of the feature components in multitemporal hyperspectral images. However, existing methods often fall short of fully exploiting the spatiotemporal–spectral correlations within these images, thereby limiting their accuracy and robustness. This letter introduces a novel hyperspectral change detection method, termed dual collaborative sparse unmixing via variable splitting augmented Lagrangian and total variation (DCLSUnSAL-TV). By integrating dual collaborative sparsity and total variation (TV) regularizers, this method capitalizes on the local similarity of changes in the feature components, leveraging the low-rank property of hyperspectral difference images (HSDIs) and their inherent spatial–spectral correlations. A customized abundancewise truncation and ensemble strategy is designed to obtain the change map by aggregating the subpixel-level changes with respect to each endmember. Comprehensive comparison and ablation experiments demonstrate the effectiveness of the proposed method in improving the accuracy of change detection. The source code is available at: <uri>https://github.com/2alsbz/DCLSUnSAL_TV</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PhaseMamba: A Mamba-Based Deep Learning Model for Seismic Phase Picking and Detection
Yunfei Zhou;Haoran Ren;Haofeng Wu
Seismic phase picking is a critical task for earthquake detection and localization, where traditional methods rely on manual parameter tuning and struggle to capture complex temporal features. In this letter, we propose PhaseMamba, an automated seismic phase picking and detection model that leverages deep learning through a U-shaped architecture with skip connections for effective time-domain seismic signal analysis, while incorporating a state-space Mamba model to enhance long-term contextual dependency extraction capabilities. For training, validation, and testing, we utilize the open-source global seismic dataset, Stanford Earthquake Dataset (STEAD), which provides a diverse range of high-quality seismic waveforms. Comprehensive experiments are conducted on this dataset to evaluate the model’s performance. The results demonstrate that PhaseMamba achieves superior performance in P-wave arrival picking compared with all state-of-the-art models (PhaseNet, EQTransformer, and SeisT), while showing comparable or slightly lower performance in S-wave arrival picking. These findings suggest that PhaseMamba is a promising tool for advancing seismic phase picking and contributing to broader seismic research applications.
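PhaseMamba's internals are not public here; as a minimal sketch of the overall shape the abstract describes — a 1-D U-Net over three-component waveforms with skip connections and a long-range sequence model at the bottleneck — the following PyTorch toy uses a bidirectional GRU as a stand-in for the Mamba state-space block (all layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class TinyUNet1D(nn.Module):
    """Minimal 1-D U-shape with one skip connection for phase picking.
    Input: (B, 3, L) three-component waveform; output: (B, 3, L) per-sample
    probabilities for P, S, and noise. The real model replaces the GRU
    bottleneck with Mamba SSM blocks and uses more encoder/decoder stages."""
    def __init__(self, width: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv1d(3, width, 7, padding=3), nn.ReLU())
        self.down = nn.Conv1d(width, width, 4, stride=4)            # L -> L/4
        self.context = nn.GRU(width, width // 2, bidirectional=True,
                              batch_first=True)                     # long-range context
        self.up = nn.ConvTranspose1d(width, width, 4, stride=4)     # L/4 -> L
        self.head = nn.Conv1d(2 * width, 3, 1)                      # skip-concat -> classes

    def forward(self, x):
        e = self.enc(x)                         # (B, W, L)
        z = self.down(e)                        # (B, W, L/4)
        z, _ = self.context(z.transpose(1, 2))  # sequence model over time axis
        z = self.up(z.transpose(1, 2))          # (B, W, L)
        out = self.head(torch.cat([z, e], dim=1))
        return torch.softmax(out, dim=1)        # per-sample class probabilities

wave = torch.randn(2, 3, 6000)                  # 60 s at 100 Hz, as in STEAD
print(TinyUNet1D()(wave).shape)                 # torch.Size([2, 3, 6000])
```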
{"title":"PhaseMamba: A Mamba-Based Deep Learning Model for Seismic Phase Picking and Detection","authors":"Yunfei Zhou;Haoran Ren;Haofeng Wu","doi":"10.1109/LGRS.2025.3603915","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3603915","url":null,"abstract":"Seismic phase picking is a critical task for earthquake detection and localization, where traditional methods rely on manual parameter tuning and have great difficulty to capture complex temporal features. In this letter, we propose PhaseMamba, an automated seismic phase picking and detection model that leverages deep learning through a U-shaped architecture with skip connections for effective time-domain seismic signal analysis, while incorporating a state-space Mamba model to enhance long-term contextual dependency extraction capabilities. For training, validation, and testing, we utilize the open-source global seismic dataset, Stanford Earthquake Dataset (STEAD), which provides a diverse range of high-quality seismic waveforms. Comprehensive experiments are conducted on this dataset to evaluate the model’s performance. The results demonstrate that PhaseMamba achieves superior performance in P-wave arrival picking compared with all state-of-the-art models (PhaseNet, EQTransformer, and SeisT), while showing comparable or slightly lower performance in S-wave arrival picking. These findings suggest that PhaseMamba is a promising tool for advancing seismic phase picking and contributing to broader seismic research applications.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145073174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Super Equatorial Plasma Bubbles Observed Over South America During the October 10 and 11, 2024 Strong Geomagnetic Storm
Yumei Li;Hong Zhang;Fan Xu;Qiong Ding;Long Tang
On October 10, 2024, the second most intense geomagnetic storm of solar cycle 25 to date took place. This storm was triggered by multiple coronal mass ejections (CMEs) that arrived at Earth from October 7 to 9, causing significant geomagnetic disturbances. The geomagnetic Kp index peaked at its highest level (Kp = 9), indicating a red alert status. This study investigated equatorial plasma bubbles (EPBs) over South America during this geomagnetic storm using ground-based Global Navigation Satellite System (GNSS) rate of total electron content index (ROTI) and Global-scale Observations of the Limb and Disk (GOLD) satellite oxygen atom (OI) 135.6-nm radiance wavelength data. The analysis revealed that the EPBs observed in South America lasted for an unusually long duration of approximately 14 h, from around 23:00 UT (18:00 LT) on October 10 to about 14:00 UT (9:00 LT) on October 11. In addition, these super EPBs extended over a wide latitude range, reaching approximately 35°N and down to 50°S, gradually forming an inverted C-shaped pattern. The observed characteristics of the EPBs are likely associated with changes in solar wind parameters and the effects of the prompt penetration electric field (PPEF).
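For reference, ROTI is a standard index (Pi et al., 1997): the rate of change of slant TEC between consecutive epochs, and its standard deviation over a short window, typically 5 min:

```latex
\mathrm{ROT}(t)=\frac{\mathrm{STEC}(t+\Delta t)-\mathrm{STEC}(t)}{\Delta t}\ \ [\mathrm{TECU/min}],
\qquad
\mathrm{ROTI}=\sqrt{\langle \mathrm{ROT}^2\rangle-\langle \mathrm{ROT}\rangle^2}
```

Here Δt is the GNSS sampling interval (commonly 30 s) and ⟨·⟩ denotes the average over the window; elevated ROTI flags the irregularity-induced phase fluctuations that trace EPB structures.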
{"title":"Super Equatorial Plasma Bubbles Observed Over South America During the October 10 and 11, 2024 Strong Geomagnetic Storm","authors":"Yumei Li;Hong Zhang;Fan Xu;Qiong Ding;Long Tang","doi":"10.1109/LGRS.2025.3603418","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3603418","url":null,"abstract":"On October 10, 2024, the second most intense geomagnetic storm of solar cycle 25 to date took place. This storm was triggered by multiple coronal mass ejections (CMEs) that arrived at Earth from October 7 to 9, causing significant geomagnetic disturbances. The geomagnetic Kp index peaked at its highest level (Kp = 9), indicating a red alert status. This study investigated equatorial plasma bubbles (EPBs) over South America during this geomagnetic storm using ground-based Global Navigation Satellite System (GNSS) rate of total electron content index (ROTI) and Global-scale Observations of the Limb and Disk (GOLD) satellite oxygen atom (OI) 135.6-nm radiance wavelength data. The analysis revealed that the EPBs observed in South America lasted for an unusually long duration of approximately 14 h, from around 23:00 UT (18:00 LT) on October 10 to about 14:00 UT (9:00 LT) on October 11. In addition, these super EPBs extended over a wide latitude range, reaching approximately 35°N and down to 50°S, gradually forming an inverted C-shaped pattern. The observed characteristics of the EPBs are likely associated with changes in solar wind parameters and the effects of the prompt penetration electric field (PPEF).","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Compensation Approach to Synchronization Errors in Distributed MIMO-SAR System
Wanqing Ma;Zhong Xu;Jinshan Ding;Ljubisa Stankovic
Distributed multiple-input–multiple-output synthetic aperture radar (MIMO-SAR) provides a new paradigm for radar imaging, which utilizes multiple distributed sensors to improve imaging performance. However, synchronization errors have a significant impact on imaging quality in these systems. The transmitted and received echo signals exhibit reciprocity, which can be exploited to estimate synchronization errors. By comparing echoes between different sensors, the synchronization errors could be estimated and compensated. This work presents a synchronization error-resistant imaging algorithm for distributed MIMO-SAR systems. First, the synchronization errors are estimated in the range domain by comparing the reciprocal echo signal pairs. Then, the errors are compensated during a fast back-projection (BP)-based SAR imaging process. The effectiveness of the proposed algorithm has been verified by experiments.
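The letter's estimator operates on range-compressed reciprocal echo pairs; as a loose NumPy illustration of the underlying idea — with reciprocal propagation, the path delay is common to both directions, so the lag of the cross-correlation peak between the two recordings reflects (twice) the clock offset — consider this toy (the delay-cancellation argument and all signal parameters are illustrative assumptions, not the letter's algorithm):

```python
import numpy as np

def relative_delay(sig_ab: np.ndarray, sig_ba: np.ndarray, fs: float) -> float:
    """Estimate the clock offset between sensors A and B from a reciprocal
    echo pair: sig_ab is what B recorded from A, sig_ba what A recorded
    from B, each timestamped by its local clock. The propagation delay is
    common to both, so the correlation-peak lag is twice the clock offset."""
    corr = np.correlate(sig_ab, sig_ba, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(sig_ba) - 1)   # lag in samples
    return lag / (2.0 * fs)                             # clock offset in seconds

fs = 1e6
t = np.arange(1024) / fs
chirp = np.cos(2 * np.pi * (1e4 * t + 5e7 * t**2))      # toy LFM pulse
delayed = np.concatenate([np.zeros(12), chirp])[:chirp.size]  # 12-sample skew
print(relative_delay(delayed, chirp, fs))               # ~6e-06 s (12/2 samples at fs)
```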
{"title":"Compensation Approach to Synchronization Errors in Distributed MIMO-SAR System","authors":"Wanqing Ma;Zhong Xu;Jinshan Ding;Ljubisa Stankovic","doi":"10.1109/LGRS.2025.3603396","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3603396","url":null,"abstract":"Distributed multiple-input–multiple-output synthe- tic aperture radar (MIMO-SAR) provides a new paradigm for radar imaging, which utilizes multiple distributed sensors to improve imaging performance. However, synchronization errors have a significant impact on imaging quality in these systems. The transmitted and received echo signals exhibit reciprocity, which can be exploited to estimate synchronization errors. By comparing echoes between different sensors, the synchronization errors could be estimated and compensated. This work presents a synchronization error-resistant imaging algorithm for distributed MIMO-SAR systems. First, the synchronization errors are estimated in the range domain by comparing the reciprocal echo signal pairs. Then, the errors are compensated during a fast back-projection (BP)-based SAR imaging process. The effectiveness of the proposed algorithm has been verified by experiments.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
YOLO-ALS: Dynamic Convolution With Adaptive Local Context for Remote Sensing Target Detection
Ruyi Feng;Zhixin Zhao;Tao Zhao;Lizhe Wang
Remote sensing image target detection plays a pivotal role in Earth observation, offering substantial value for applications such as urban planning and environmental monitoring. Due to the significant scale variations among targets, complex backgrounds with dense small object distributions, and strong intertarget scene correlations, existing target detection methods usually fail to effectively model target relationships and contextual information for remote sensing imagery. To address these limitations, we proposed YOLO-ALS, a novel remote sensing target detection network that integrates adaptive local scene context. The proposed framework introduces three key designs. First, a full-dimensional dynamic convolution reconstruction C2f module enhances target feature representation by overcoming local context extraction limitations and target co-occurrence prior deficiencies. Second, an adaptive local scene context module (ALSCM) dynamically integrates multiscale receptive field features through spatial attention, enabling background window adaptive selection and cross-scale feature alignment. Finally, a co-occurrence matrix-integrated classification auxiliary module mines target association rules through data-driven learning, correcting classification probabilities in low-confidence areas by combining high-confidence areas’ co-occurrence information with an optimal threshold, which can significantly reduce missed detection rates. Comprehensive experiments on multiple public remote sensing datasets demonstrate the superiority of the proposed method through extensive ablation studies and comparative analyses. The proposed method has achieved state-of-the-art performance while addressing the unique challenges of remote sensing target detection.
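Full-dimensional dynamic convolution (ODConv) attends over kernel, channel, filter, and spatial dimensions; as a minimal sketch of just the simplest of these — per-sample softmax attention over K candidate kernels, in the CondConv/ODConv family — here is an illustrative PyTorch module (a generic stand-in, not the letter's reconstructed C2f block):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Per-sample mixture of K convolution kernels. A global-pooling +
    linear head scores the K candidates; the input is convolved with the
    attention-weighted kernel sum. Sketch of the kernel-count dimension
    only; ODConv also attends over channels, filters, and positions."""
    def __init__(self, cin, cout, k=3, num_kernels=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_kernels, cout, cin, k, k) * 0.02)
        self.attn = nn.Linear(cin, num_kernels)
        self.pad = k // 2

    def forward(self, x):
        b = x.size(0)
        scores = torch.softmax(self.attn(x.mean(dim=(2, 3))), dim=1)  # (B, K)
        w = torch.einsum("bk,koihw->boihw", scores, self.weight)      # per-sample kernels
        w = w.reshape(-1, *self.weight.shape[2:])                     # (B*cout, cin, k, k)
        out = F.conv2d(x.reshape(1, -1, *x.shape[2:]), w,
                       padding=self.pad, groups=b)                    # grouped-conv trick
        return out.reshape(b, -1, *out.shape[2:])

x = torch.randn(2, 32, 64, 64)
print(DynamicConv2d(32, 64)(x).shape)  # torch.Size([2, 64, 64, 64])
```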
{"title":"YOLO-ALS: Dynamic Convolution With Adaptive Local Context for Remote Sensing Target Detection","authors":"Ruyi Feng;Zhixin Zhao;Tao Zhao;Lizhe Wang","doi":"10.1109/LGRS.2025.3602896","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602896","url":null,"abstract":"Remote sensing image target detection plays a pivotal role in Earth observation, offering substantial value for applications such as urban planning and environmental monitoring. Due to the significant scale variations among targets, complex backgrounds with dense small object distributions, and strong intertarget scene correlations, existing target detection methods usually fail to effectively model target relationships and contextual information for remote sensing imagery. To address these limitations, we proposed YOLO-ALS, a novel remote sensing target detection network that integrates adaptive local scene context. The proposed framework introduces three key points. First, a full-dimensional dynamic convolution reconstruction C2f module enhances target feature representation by overcoming local context extraction limitations and target co-occurrence prior deficiencies. Second, an adaptive local scene context module (ALSCM) dynamically integrates multiscale receptive field features through spatial attention, enabling background window adaptive selection and cross-scale feature alignment. Finally, a co-occurrence matrix-integrated classification auxiliary module mines target association rules through data-driven learning, correcting classification probabilities in low-confidence areas by combining high-confidence areas’ co-occurrence information with an optimal threshold, which can significantly reduce missed detection rates. Comprehensive experiments on multiple public remote sensing datasets demonstrate the superiority of the proposed method through extensive ablation studies and comparative analyses. The proposed method has achieved state-of-the-art performance while addressing the unique challenges of remote sensing target detection.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CAFENet: Change-Aware and Fourier Feature Exchange Network for Cropland Change Detection in Remote Sensing Images
Min Duan;Yuanxu Wang;Lu Bai;Yujiang He;Zhichao Zhao;Yurong Qian;Xuanchen Liu
The accelerated nonagriculturalization of cropland has increasingly highlighted the importance of remote sensing (RS) change detection (CD) for monitoring land-use transitions. However, variations in RS imaging conditions and irregular cropland changes often result in noisy or inaccurate change maps. To address these challenges, we propose a novel deep learning framework named change-aware and Fourier feature exchange network (CAFENet). The method introduces a dedicated change-aware (CA) branch to extract discriminative change cues from pseudo-video sequences and integrates them into the backbone network. A Fourier feature exchange module (FFEM) is designed to reduce brightness, color, and style discrepancies between bitemporal images, thereby enhancing robustness under varying acquisition conditions. Fused features are further refined using an efficient multiscale attention mechanism (EMSA) to capture rich spatial details. In the decoding stage, a dynamic content-aware upsampling module (DCAU), together with skip connections, progressively recovers spatial resolution while preserving structural information. The experimental results on three datasets—CLCD, SW-CLCD, and LuojiaSET-CLCD—demonstrate that CAFENet achieves superior performance over state-of-the-art methods in terms of both accuracy and robustness, particularly in complex agricultural landscapes.
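The FFEM's internals aren't spelled out in the abstract; the classic Fourier-domain way to cancel brightness/style gaps between bitemporal images (FDA-style) is to swap low-frequency amplitude while keeping phase. A minimal NumPy sketch of that general mechanism (the window size `beta` and the one-directional swap are illustrative assumptions — the letter's module is a learned exchange inside the network):

```python
import numpy as np

def swap_low_freq_amplitude(src: np.ndarray, ref: np.ndarray, beta: float = 0.05):
    """Give `src` the low-frequency amplitude spectrum of `ref` while
    keeping src's phase. src/ref: (H, W) float images."""
    fs = np.fft.fftshift(np.fft.fft2(src))
    fr = np.fft.fftshift(np.fft.fft2(ref))
    amp, phase = np.abs(fs), np.angle(fs)
    h, w = src.shape
    bh, bw = int(h * beta), int(w * beta)                 # low-frequency window
    cy, cx = h // 2, w // 2
    amp[cy - bh:cy + bh, cx - bw:cx + bw] = \
        np.abs(fr)[cy - bh:cy + bh, cx - bw:cx + bw]      # exchange style band
    out = np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * phase)))
    return np.real(out)

t1 = np.random.rand(128, 128)
t2 = np.random.rand(128, 128) * 0.5 + 0.3                 # different "style"
print(swap_low_freq_amplitude(t1, t2).shape)              # (128, 128)
```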
{"title":"CAFENet: Change-Aware and Fourier Feature Exchange Network for Cropland Change Detection in Remote Sensing Images","authors":"Min Duan;Yuanxu Wang;Lu Bai;Yujiang He;Zhichao Zhao;Yurong Qian;Xuanchen Liu","doi":"10.1109/LGRS.2025.3602854","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602854","url":null,"abstract":"The accelerated nonagriculturalization of cropland has increasingly highlighted the importance of remote sensing (RS) change detection (CD) for monitoring land-use transitions. However, variations in RS imaging conditions and irregular cropland changes often result in noisy or inaccurate change maps. To address these challenges, we propose a novel deep learning framework named change-aware and Fourier feature exchange network (CAFENet). The method introduces a dedicated change-aware (CA) branch to extract discriminative change cues from pseudo-video sequences and integrates them into the backbone network. A Fourier feature exchange module (FFEM) is designed to reduce brightness, color, and style discrepancies between bitemporal images, thereby enhancing robustness under varying acquisition conditions. Fused features are further refined using an efficient multiscale attention mechanism (EMSA) to capture rich spatial details. In the decoding stage, a dynamic content-aware upsampling module (DCAU), together with skip connections, progressively recovers spatial resolution while preserving structural information. The experimental results on three datasets—CLCD, SW-CLCD, and LuojiaSET-CLCD—demonstrate that CAFENet achieves superior performance over state-of-the-art methods in terms of both accuracy and robustness, particularly in complex agricultural landscapes.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145061831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DL-DSFN: Dual-Layer Dynamic Scattering Filtering for Robust SAR Target Recognition
Yuying Zhu;Qian Wang;Muyu Hou
Despite the impressive performance of deep learning in synthetic aperture radar (SAR) automatic target recognition (ATR), its generalization capability remains a critical concern, particularly when facing domain shifts between training and testing environments. Considering the inherent robustness and interpretability of electromagnetic scattering characteristics, we explore leveraging these properties to guide deep learning training, thereby improving generalization. To this end, we propose a dual-layer dynamic scattering filtering network (DL-DSFN) that leverages external physical priors to guide the learning process. The first layer adaptively generates convolutional kernels conditioned on scattering cues, enabling localized modeling of target-specific scattering phenomena. The second layer establishes a cross-domain mapping from SAR imagery to scattering features, facilitating automatic extraction of salient scattering characteristics. Furthermore, an adaptive mechanism for determining the number of scattering centers is also incorporated. Experiments conducted under significant variations between training and testing sets demonstrate that our method achieves competitive recognition accuracy while maintaining low computational cost, with only approximately 0.16 M parameters and 0.002 G FLOPs.
{"title":"DL-DSFN: Dual-Layer Dynamic Scattering Filtering for Robust SAR Target Recognition","authors":"Yuying Zhu;Qian Wang;Muyu Hou","doi":"10.1109/LGRS.2025.3602769","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602769","url":null,"abstract":"Despite the impressive performance of deep learning in synthetic aperture radar (SAR) automatic target recognition (ATR), its generalization capability remains a critical concern, particularly when facing domain shifts between training and testing environments. Considering the inherent robustness and interpretability of electromagnetic scattering characteristics, we explore leveraging these properties to guide deep learning training, thereby improving generalization. To this end, we propose a dual-layer dynamic scattering filtering network (DL-DSFN) that leverages external physical priors to guide the learning process. The first layer adaptively generates convolutional kernels conditioned on scattering cues, enabling localized modeling of target-specific scattering phenomena. The second layer establishes a cross-domain mapping from SAR imagery to scattering features, facilitating automatic extraction of salient scattering characteristics. Furthermore, an adaptive mechanism for determining the number of scattering centers is also incorporated. Experiments conducted under significant variations between training and testing sets demonstrate that our method achieves competitive recognition accuracy while maintaining low computational cost, with only approximately 0.16 M parameters and 0.002 G FLOPs.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145073160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Aerial Image Semantic Segmentation Method Based on Cross-Modal Hierarchical Feature Fusion
Jinglei Bai;Jinfu Yang;Tao Xiang;Shu Cai
Multimodal aerial image semantic segmentation enables fine-grained land cover classification by integrating data from different sensors, yet it remains challenged by information redundancy, intermodal feature discrepancies, and class confusion in complex scenes. To address these issues, we propose a cross-modal hierarchical feature fusion network (CMHFNet) based on an encoder–decoder architecture. The encoder incorporates a pixelwise attention-guided fusion module (PAFM) and a multistage progressive fusion transformer (MPFT) to suppress redundancy and model long-range intermodal dependencies and scale variations. The decoder introduces a residual information-guided feature compensation mechanism to recover spatial details and mitigate class ambiguity. The experiments on DDOS, Vaihingen, and Potsdam datasets demonstrate that the CMHFNet surpasses state-of-the-art methods, validating its effectiveness and practical value.
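The PAFM's exact design is in the letter; pixelwise attention-guided fusion of two aligned modality streams commonly reduces to a learned per-pixel gate, as in this generic PyTorch stand-in (module name and gate layout are assumptions):

```python
import torch
import torch.nn as nn

class PixelGateFusion(nn.Module):
    """Fuse two aligned modality feature maps with a per-pixel, per-channel
    sigmoid gate predicted from their concatenation. Generic stand-in for
    attention-guided fusion modules such as the letter's PAFM."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, f_opt: torch.Tensor, f_aux: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([f_opt, f_aux], dim=1))  # (B, C, H, W) in [0, 1]
        return g * f_opt + (1.0 - g) * f_aux             # convex per-pixel blend

rgb = torch.randn(2, 64, 128, 128)                       # optical stream
dsm = torch.randn(2, 64, 128, 128)                       # auxiliary modality stream
print(PixelGateFusion(64)(rgb, dsm).shape)               # torch.Size([2, 64, 128, 128])
```

The convex blend keeps the fused response bounded by the two inputs, which suppresses redundancy where the modalities agree and lets the gate arbitrate where they conflict.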
{"title":"Aerial Image Semantic Segmentation Method Based on Cross-Modal Hierarchical Feature Fusion","authors":"Jinglei Bai;Jinfu Yang;Tao Xiang;Shu Cai","doi":"10.1109/LGRS.2025.3602267","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602267","url":null,"abstract":"Multimodal aerial image semantic segmentation enables fine-grained land cover classification by integrating data from different sensors, yet it remains challenged by information redundancy, intermodal feature discrepancies, and class confusion in complex scenes. To address these issues, we propose a cross-modal hierarchical feature fusion network (CMHFNet) based on an encoder–decoder architecture. The encoder incorporates a pixelwise attention-guided fusion module (PAFM) and a multistage progressive fusion transformer (MPFT) to suppress redundancy and model long-range intermodal dependencies and scale variations. The decoder introduces a residual information-guided feature compensation mechanism to recover spatial details and mitigate class ambiguity. The experiments on DDOS, Vaihingen, and Potsdam datasets demonstrate that the CMHFNet surpasses state-of-the-art methods, validating its effectiveness and practical value.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144934365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SOD-Net: A Small Ship Object Detection Network for SAR Images
Junpeng Ai;Liang Luo;Shijie Wang;Liandong Hao
In ship detection using synthetic aperture radar (SAR), small targets and complex background noise remain key challenges that restrict detection performance. In this letter, we propose SOD-Net, a small object detection network for small-target ship detection in SAR images. First, we construct a U-shaped feature preextraction network and adopt a spatial pixel attention (SPA) mechanism to enhance the initial feature representation ability. Second, a pinwheel convolution (PC) convolutional neural network (CNN)-based cross-scale feature fusion (CCFF) module is designed: by expanding the receptive field through asymmetric convolution kernels while reducing the parameter count, it properly captures the features of small targets. Evaluation results show that the proposed SOD-Net achieves 98.4% and 91.0% mean average precision (mAP, at an intersection-over-union threshold of 0.5) on the benchmark SSDD and HRSID datasets, respectively, with only 28 million parameters, thus outperforming state-of-the-art models (e.g., YOLOv8 and D-FINE). Visual analysis confirms that SOD-Net is robust in scenarios including complex sea conditions, dense port berthing, and noise interference, thereby providing an accurate and efficient solution for SAR maritime monitoring.
{"title":"SOD-Net: A Small Ship Object Detection Network for SAR Images","authors":"Junpeng Ai;Liang Luo;Shijie Wang;Liandong Hao","doi":"10.1109/LGRS.2025.3602092","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602092","url":null,"abstract":"In ship detection using synthetic aperture radar (SAR), small targets and complex background noise remain key challenges that restrict the detection performance. In this letter, we propose a small-target ship detection network based on a small object detection network (SOD-Net) using SAR images. First, we construct a U-shaped feature preextraction network and adopt a spatial pixel attention (SPA) mechanism to enhance the initial feature representation ability. Second, a pinwheel convolution (PC) convolutional neural network (CNN)-based cross-scale feature fusion (CCFF) module is designed. By expanding the receptive field through asymmetric convolution kernels and reducing the parameter scale, features of small targets are properly captured. Evaluation results show that the proposed SOD-Net achieves evaluation accuracies of 98.4% and 91.0% on the benchmark SSDD and HRSID datasets (mean average precision (mAP) at an intersection over union of 0.5), respectively, with only 28 million parameters, thus outperforming state-of-the-art models (e.g., YOLOv8 and D-FINE). Visual analysis confirmed that the SOD-Net is robust in scenarios, including complex sea conditions, dense port berthing, and noise interference, thereby providing an accurate and efficient solution for SAR maritime monitoring.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144914215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0