Pub Date : 2025-09-02  DOI: 10.1109/LGRS.2025.3605331
Fan Ye;Xiaoning Zhang;Zhengjie Wang;Yifei Wang;Zhaoyang Peng;Tengying Fu;Ziti Jiao;Yanxuan Wu;Yue Wang
Owing to the simplicity of their flight route planning, orthorectified images obtained from nadir observations are widely used in remote sensing. However, they are often insufficient to represent the anisotropic reflectance and 3-D structural information of objects. Multiangle observations can therefore enrich target information and potentially improve the accuracy of target classification and recognition. In this study, we investigated the potential of anisotropic reflectance information for land cover classification. Using the DJI P4M multispectral observation system, multiangle multispectral reflectance images were captured for five land cover types: bare soil, concrete roads, grassland, apricot trees, and red broom cypress. The anisotropic flat index (AFX)-based bidirectional reflectance distribution function (BRDF) archetype model and the kernel-driven model were then used to reconstruct the BRDF. Finally, land cover classification was performed with three machine learning algorithms over different BRDF features and band combinations. The results indicate that, compared to nadir directional reflectance alone, multiangle feature sets improve overall classification accuracy by up to 24%; compared to single-band information, band combinations improve it by up to 54%. The feature set combining kernel-driven model parameters with nadir reflectance also performed markedly better, reaching 86% overall accuracy with the green-red-near-infrared band combination. This work demonstrates the contribution of multiangle multispectral information to natural and artificial land cover classification.
Title: Application of Optical Multiangle Multispectral Reflectance in Land Cover Classification
IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5
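The kernel-driven BRDF model used above is, in its usual form, a linear combination of an isotropic constant and volumetric/geometric scattering kernels, so its parameters can be recovered from multiangle reflectance by ordinary least squares. A minimal sketch in NumPy; the synthetic kernel values stand in for the Ross-Thick/Li-Sparse kernels the letter presumably evaluates from the viewing geometry:

```python
import numpy as np

def fit_kernel_driven_brdf(refl, k_vol, k_geo):
    """Least-squares fit of the kernel-driven BRDF model
        R(theta_s, theta_v, phi) = f_iso + f_vol * K_vol + f_geo * K_geo
    given multiangle reflectances and precomputed kernel values."""
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    params, *_ = np.linalg.lstsq(A, refl, rcond=None)
    return params  # (f_iso, f_vol, f_geo)

# Synthetic check: recover known parameters from noise-free samples.
rng = np.random.default_rng(0)
k_vol = rng.uniform(-0.3, 0.6, 50)   # placeholder volumetric-kernel values
k_geo = rng.uniform(-1.5, 0.0, 50)   # placeholder geometric-kernel values
true = np.array([0.25, 0.10, 0.05])
refl = true[0] + true[1] * k_vol + true[2] * k_geo
f_iso, f_vol, f_geo = fit_kernel_driven_brdf(refl, k_vol, k_geo)
```

The three fitted parameters (plus derived quantities such as AFX) are exactly the kind of per-band features the classification experiments stack alongside nadir reflectance.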
Pub Date : 2025-09-01  DOI: 10.1109/LGRS.2025.3604651
Yutian Li;Wei Liu;Erzhu Li;Lianpeng Zhang;Xing Li
Remote sensing change detection (RSCD) is a key tool for environmental monitoring and resource management, playing a significant role in tracking dynamic surface changes. Practical applications demand both high precision and efficiency. However, traditional methods tend to involve high technical complexity and large parameter counts, and they are susceptible to interference from complex background noise, leading to poor performance in detecting changed areas. To address these issues, this letter proposes a lightweight RSCD network, LMG-Net. The model uses a lightweight encoder and incorporates a hierarchical transformer module (HTF) to suppress background noise while minimizing the parameter increase, effectively extracting multilevel global features. Additionally, a multidimensional cooperative attention guidance (MAG) mechanism further enhances the ability to detect boundary changes. The model has only 3.29M parameters and a computational cost of 3.89G, demonstrating high applicability, particularly for real-time use in resource-constrained environments. Experimental results show that LMG-Net achieves state-of-the-art (SOTA) F1 scores and IoU values on the WHU-CD, SYSU-CD, and LEVIR-CD+ datasets: (94.79%, 90.09%), (82.29%, 69.90%), and (84.30%, 71.14%), respectively.
Title: LMG-Net: A Lightweight Remote Sensing Change Detection Network With Multilevel Global Features
IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5
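For reference, the paired F1/IoU figures quoted above are both deterministic functions of the binary change map's confusion counts, which is why they move together. A small helper (assumed definitions, not the authors' evaluation code):

```python
def f1_and_iou(tp, fp, fn):
    """Binary change-detection metrics from true positives, false positives,
    and false negatives (true negatives do not enter either score)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)
    return f1, iou

# Illustrative counts, not taken from the letter.
f1, iou = f1_and_iou(tp=90, fp=5, fn=5)
```

Note the identity IoU = F1 / (2 - F1): an F1 of 94.79% always corresponds to an IoU of about 90.09%, matching the WHU-CD pair above.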
Pub Date : 2025-08-29  DOI: 10.1109/LGRS.2025.3604251
Qinfen Cai;Feng Zhou;Iraklis Giannakis;Sijing Liu;Xiangyun Hu
China’s first Mars mission, Tianwen-1 (TW-1), successfully touched down in the Utopia Planitia of Mars carrying a rover subsurface penetrating radar (RoPeR) for exploring the regolith dielectric properties. Hyperbolic fitting is a conventional method for inferring the relative permittivity of subsurface material from ground penetrating radar (GPR) data. However, it is difficult to directly extract valid hyperbolas from the RoPeR data. Inspired by recently developed deep learning-based geophysical inversion methods that estimate subsurface wave velocities from GPR data, an improved deep learning architecture is proposed to infer the Martian regolith relative permittivity from the RoPeR data, with self-attention (SA) and cascade modules introduced into the network. The improved cascade and SA modules improve the inversion efficiency and mitigate the scatter-diffraction effect in the predicted results. The relative permittivity inverted from the first 60 ns of the RoPeR data follows an approximately constant profile with a mean value of 4.73 in the regolith of interest. This very limited fluctuation implies that no explicit stratification exists in the investigated regolith, in agreement with previous studies.
Title: Predicting Martian Regolith Permittivity Using Deep Learning Methods—Revisiting Southern Utopia Planitia
IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5
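The hyperbolic-fitting baseline mentioned above rests on the two-way travel time t(x) = (2/v)·sqrt(d² + (x - x0)²) with v = c/√ε, so t² is quadratic in the antenna position x and ε follows from the fitted leading coefficient. A minimal sketch on synthetic picks (the 0.5 m depth and geometry are illustrative):

```python
import numpy as np

C = 0.3  # free-space speed of light, m/ns

def permittivity_from_hyperbola(x, t):
    """Relative permittivity from a picked diffraction hyperbola.
    Since t(x) = (2/v)*sqrt(d**2 + (x - x0)**2), t**2 is quadratic in x
    with leading coefficient 4/v**2; v then gives eps = (C/v)**2."""
    a = np.polyfit(x, t**2, 2)[0]
    v = 2.0 / np.sqrt(a)
    return (C / v) ** 2

# Synthetic hyperbola over a target 0.5 m deep in eps = 4.73 material.
eps_true = 4.73
v = C / np.sqrt(eps_true)
x = np.linspace(-1.0, 1.0, 21)           # antenna positions, m
t = 2.0 * np.sqrt(0.5**2 + x**2) / v     # two-way travel time, ns
eps_est = permittivity_from_hyperbola(x, t)
```

The letter's point is that clean hyperbolas like this are rarely extractable from RoPeR echoes, which is what motivates the learned inversion.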
Change detection (CD) in remote sensing (RS) imagery remains challenging due to boundary ambiguity and false alarms caused by high foreground–background similarity and insufficient difference representation. To address these issues, we propose an edge-guided difference enhancement network (EGDENet). EGDENet integrates an edge-aware adaptive enhancement module (EAEM) to extract high-frequency edge cues across scales, and a channel-spatial cooperative difference module (CSCDM) to refine change features by jointly leveraging spatial and channel-wise differences. An upsampling feature fusion (UFF) module further enhances robustness to scale variations and improves region consistency. Extensive experiments on two public datasets demonstrate that EGDENet achieves superior performance with clearer boundaries compared to state-of-the-art methods. Our source code is publicly available at https://github.com/adleess/-EGDENet
Title: Enhancing Change Detection With Edge-Guided Difference Modeling in Remote Sensing Imagery
Authors: Pengkai Wang;Fuchao Cheng;Yuan Yao;Liang Liu;Jianwei Zhang;Abdelaziz Bouras;D. Narasimhan;Ling Qin;Shaohua Wang;Chang Liu
Pub Date : 2025-08-29  DOI: 10.1109/LGRS.2025.3604110
IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5
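High-frequency edge cues of the kind EAEM learns can be illustrated with a plain Sobel gradient magnitude (a generic, fixed-kernel stand-in, not the module itself):

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map via 3x3 Sobel kernels, with
    edge-replicated padding so the output matches the input shape."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

img = np.zeros((8, 8))
img[:, 4:] = 1.0            # vertical step edge between columns 3 and 4
edges = sobel_edges(img)
```

Responses concentrate on the step boundary and vanish in flat regions, which is exactly the property that makes edge cues useful for sharpening change-map boundaries.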
Semantic segmentation based on satellite image time series (SITS) is fundamental to a wide range of geospatial applications, including land cover mapping and urban development analysis. By integrating crop phenological dynamics over time, SITS provides richer spatiotemporal information than static satellite imagery. However, existing models fail to effectively process the temporal and spatial–spectral dimensions of SITS independently, leading to reduced segmentation accuracy. In this letter, we propose a temporal aggregation spatial–spectral bridge network (TAS2B-Net), a novel architecture designed to extract fine-grained crop features from SITS. The network consists of two key components: the pixel-aware grouping temporal integrator (PGTI), which captures temporal dependencies within pixel groups, and the edge-aware contextual fusion head (ECFH), which enhances spatial boundary and global structural representation. Additionally, we introduce a lightweight multiscale spectral decoder (LMSD) to aggregate contextual information across multiple spectral scales, further improving feature learning for semantic segmentation. Extensive experiments on the panoptic agricultural satellite time series (PASTIS) and MTLCC datasets show that the proposed network achieves mIoU scores of 68.91% and 84.59%, respectively, outperforming eight state-of-the-art (SOTA) methods and setting new benchmarks for SITS-based semantic segmentation.
Title: Bridging Temporal and Spatial–Spectral Features With Satellite Image Time Series: TAS2B-Net for Crop Semantic Segmentation
Authors: Xiaohan Luo;Hangyu Dai;Vladimir Lysenko;Jinglu Tan;Ya Guo
Pub Date : 2025-08-28  DOI: 10.1109/LGRS.2025.3603294
IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5
Pub Date : 2025-08-28  DOI: 10.1109/LGRS.2025.3603339
Shile Zhang;Yuxing Zhao;Zhihan Liu;Xiangming Jiang;Maoguo Gong
Hyperspectral change detection is critical for analyzing the temporal evolution of the feature components in multitemporal hyperspectral images. However, existing methods often fall short of fully exploiting the spatiotemporal–spectral correlations within these images, thereby limiting their accuracy and robustness. This letter introduces a novel hyperspectral change detection method, termed dual collaborative sparse unmixing via variable splitting augmented Lagrangian and total variation (DCLSUnSAL-TV). By integrating dual collaborative sparsity and total variation (TV) regularizers, this method capitalizes on the local similarity of changes in the feature components, leveraging the low-rank property of hyperspectral difference images (HSDIs) and their inherent spatial–spectral correlations. A customized abundance-wise truncation and ensemble strategy is designed to obtain the change map by aggregating the subpixel-level changes with respect to each endmember. Comprehensive comparison and ablation experiments demonstrate the effectiveness of the proposed method in improving the accuracy of change detection. The source code is available at: https://github.com/2alsbz/DCLSUnSAL_TV
Title: Dual Collaborative Sparse and Total Variation Regularization for Unmixing-Based Change Detection
IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5
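Solvers in the SUnSAL family handle the collaborative (L2,1) sparsity term with a row-wise soft-thresholding proximal step; a minimal sketch of that one operator (not the authors' full ADMM solver):

```python
import numpy as np

def prox_l21(X, lam):
    """Proximal operator of lam * ||X||_{2,1}: shrink each row of X toward
    zero, zeroing rows whose l2 norm falls below lam. With rows indexed by
    endmember and columns by pixel, endmembers whose abundance change is
    weak everywhere are eliminated jointly across all pixels."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return scale * X

X = np.array([[3.0, 4.0],     # strongly active endmember (row norm 5)
              [0.1, 0.1]])    # weakly active endmember (row norm ~0.14)
Y = prox_l21(X, 1.0)
```

The first row survives (shrunk by 1/5), while the second is zeroed entirely, which is the row-sparsity that collaborative regularization buys over plain elementwise L1.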
Pub Date : 2025-08-28  DOI: 10.1109/LGRS.2025.3603915
Yunfei Zhou;Haoran Ren;Haofeng Wu
Seismic phase picking is a critical task for earthquake detection and localization, where traditional methods rely on manual parameter tuning and struggle to capture complex temporal features. In this letter, we propose PhaseMamba, an automated seismic phase picking and detection model that leverages deep learning through a U-shaped architecture with skip connections for effective time-domain seismic signal analysis, while incorporating a state-space Mamba model to enhance long-term contextual dependency extraction. For training, validation, and testing, we use the open-source global Stanford Earthquake Dataset (STEAD), which provides a diverse range of high-quality seismic waveforms. Comprehensive experiments on this dataset evaluate the model’s performance. The results demonstrate that PhaseMamba achieves superior performance in P-wave arrival picking compared with state-of-the-art models (PhaseNet, EQTransformer, and SeisT), while showing comparable or slightly lower performance in S-wave arrival picking. These findings suggest that PhaseMamba is a promising tool for advancing seismic phase picking and contributing to broader seismic research applications.
Title: PhaseMamba: A Mamba-Based Deep Learning Model for Seismic Phase Picking and Detection
IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5
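Pickers of this family (PhaseNet, EQTransformer, and presumably PhaseMamba) emit a per-sample phase-probability trace, and an arrival is declared at a probability maximum above a detection threshold. A schematic version of that post-processing step, with a synthetic probability trace:

```python
import numpy as np

def pick_phase(prob, threshold=0.5):
    """Return the sample index of the pick: the maximum of the
    phase-probability trace, if it exceeds the threshold; else None."""
    i = int(np.argmax(prob))
    return i if prob[i] >= threshold else None

# Synthetic P-probability trace peaked at sample 42.
samples = np.arange(100)
prob = np.exp(-0.5 * ((samples - 42) / 3.0) ** 2)
```

Residual-time statistics between such picks and analyst labels (on STEAD here) are what the precision/recall comparisons in the letter are computed from.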
Pub Date : 2025-08-27  DOI: 10.1109/LGRS.2025.3603418
Yumei Li;Hong Zhang;Fan Xu;Qiong Ding;Long Tang
On October 10, 2024, the second most intense geomagnetic storm of solar cycle 25 to date took place. This storm was triggered by multiple coronal mass ejections (CMEs) that arrived at Earth from October 7 to 9, causing significant geomagnetic disturbances. The geomagnetic Kp index peaked at its highest level (Kp = 9), indicating a red alert status. This study investigated equatorial plasma bubbles (EPBs) over South America during this geomagnetic storm using the ground-based Global Navigation Satellite System (GNSS) rate of total electron content index (ROTI) and Global-scale Observations of the Limb and Disk (GOLD) satellite atomic oxygen (OI) 135.6-nm radiance data. The analysis revealed that the EPBs observed over South America lasted an unusually long duration of approximately 14 h, from around 23:00 UT (18:00 LT) on October 10 to about 14:00 UT (9:00 LT) on October 11. In addition, these super EPBs extended over a wide latitude range, reaching approximately 35°N and down to 50°S, gradually forming an inverted C-shaped pattern. The observed characteristics of the EPBs are likely associated with changes in solar wind parameters and the effects of the prompt penetration electric field (PPEF).
Title: Super Equatorial Plasma Bubbles Observed Over South America During the October 10 and 11, 2024 Strong Geomagnetic Storm
IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5
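The ROTI used above is simply the standard deviation of the rate of TEC change (ROT). A minimal sketch; the 30-s sampling and the single window (rather than the usual sliding 5-min window) are illustrative simplifications:

```python
import numpy as np

def roti(tec, dt_min=0.5):
    """ROTI: standard deviation of the rate of TEC change (ROT, TECU/min).
    tec is sampled every dt_min minutes; operationally ROTI is computed
    over a sliding 5-min window, one window here for simplicity."""
    rot = np.diff(tec) / dt_min
    return float(np.std(rot))

quiet = np.linspace(10.0, 11.0, 11)   # smooth TEC ramp: constant ROT
disturbed = quiet.copy()
disturbed[::2] += 0.5                 # irregularity-driven fluctuations
r_quiet, r_disturbed = roti(quiet), roti(disturbed)
```

A smooth TEC trend yields near-zero ROTI, while plasma-bubble irregularities drive it up, which is how EPB occurrence and duration are mapped from GNSS networks.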
Distributed multiple-input–multiple-output synthetic aperture radar (MIMO-SAR) provides a new paradigm for radar imaging, utilizing multiple distributed sensors to improve imaging performance. However, synchronization errors have a significant impact on imaging quality in these systems. The transmitted and received echo signals exhibit reciprocity, which can be exploited to estimate the synchronization errors: by comparing echoes between different sensors, the errors can be estimated and compensated. This work presents a synchronization error-resistant imaging algorithm for distributed MIMO-SAR systems. First, the synchronization errors are estimated in the range domain by comparing reciprocal echo signal pairs. Then, the errors are compensated during a fast back-projection (BP)-based SAR imaging process. The effectiveness of the proposed algorithm has been verified by experiments.
Title: Compensation Approach to Synchronization Errors in Distributed MIMO-SAR System
Authors: Wanqing Ma;Zhong Xu;Jinshan Ding;Ljubisa Stankovic
Pub Date : 2025-08-27  DOI: 10.1109/LGRS.2025.3603396
IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5
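As a rough illustration of the range-domain comparison step, the timing offset between a reciprocal echo pair can be estimated from the peak of their cross-correlation. This integer-sample, single-offset toy is a generic stand-in, not the letter's estimator:

```python
import numpy as np

def estimate_sync_offset(echo_ab, echo_ba):
    """Estimate the integer-sample timing offset between a reciprocal
    echo pair via the peak of their full cross-correlation."""
    corr = np.correlate(echo_ab, echo_ba, mode="full")
    return int(np.argmax(corr)) - (len(echo_ba) - 1)

# Same windowed pulse, one copy delayed by 7 samples.
pulse = np.sin(2 * np.pi * 0.05 * np.arange(64)) * np.hanning(64)
a = np.zeros(256)
a[40:104] = pulse
b = np.zeros(256)
b[47:111] = pulse
offset = estimate_sync_offset(b, a)
```

Once the offset is known, it can be removed from each sensor's echoes before (or during) the BP imaging step, which is the compensation stage the letter describes.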
Pub Date : 2025-08-26 DOI: 10.1109/LGRS.2025.3602896
Ruyi Feng;Zhixin Zhao;Tao Zhao;Lizhe Wang
Remote sensing image target detection plays a pivotal role in Earth observation, offering substantial value for applications such as urban planning and environmental monitoring. Because of significant scale variations among targets, complex backgrounds with dense distributions of small objects, and strong intertarget scene correlations, existing detection methods often fail to effectively model target relationships and contextual information in remote sensing imagery. To address these limitations, we propose YOLO-ALS, a novel remote sensing target detection network that integrates adaptive local scene context. The proposed framework introduces three key components. First, a C2f module reconstructed with full-dimensional dynamic convolution enhances target feature representation, overcoming the limitations of local context extraction and the lack of target co-occurrence priors. Second, an adaptive local scene context module (ALSCM) dynamically integrates multiscale receptive-field features through spatial attention, enabling adaptive background-window selection and cross-scale feature alignment. Finally, a classification auxiliary module that integrates a co-occurrence matrix mines target association rules through data-driven learning and corrects classification probabilities in low-confidence regions by combining the co-occurrence information of high-confidence regions with an optimal threshold, significantly reducing missed detections. Comprehensive experiments on multiple public remote sensing datasets demonstrate the superiority of the proposed method through extensive ablation studies and comparative analyses. The proposed method achieves state-of-the-art performance while addressing the unique challenges of remote sensing target detection.
{"title":"YOLO-ALS: Dynamic Convolution With Adaptive Local Context for Remote Sensing Target Detection","authors":"Ruyi Feng;Zhixin Zhao;Tao Zhao;Lizhe Wang","doi":"10.1109/LGRS.2025.3602896","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3602896","url":null,"abstract":"Remote sensing image target detection plays a pivotal role in Earth observation, offering substantial value for applications such as urban planning and environmental monitoring. Because of significant scale variations among targets, complex backgrounds with dense distributions of small objects, and strong intertarget scene correlations, existing detection methods often fail to effectively model target relationships and contextual information in remote sensing imagery. To address these limitations, we propose YOLO-ALS, a novel remote sensing target detection network that integrates adaptive local scene context. The proposed framework introduces three key components. First, a C2f module reconstructed with full-dimensional dynamic convolution enhances target feature representation, overcoming the limitations of local context extraction and the lack of target co-occurrence priors. Second, an adaptive local scene context module (ALSCM) dynamically integrates multiscale receptive-field features through spatial attention, enabling adaptive background-window selection and cross-scale feature alignment. Finally, a classification auxiliary module that integrates a co-occurrence matrix mines target association rules through data-driven learning and corrects classification probabilities in low-confidence regions by combining the co-occurrence information of high-confidence regions with an optimal threshold, significantly reducing missed detections. Comprehensive experiments on multiple public remote sensing datasets demonstrate the superiority of the proposed method through extensive ablation studies and comparative analyses. The proposed method achieves state-of-the-art performance while addressing the unique challenges of remote sensing target detection.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144998079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
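The co-occurrence-based probability correction described in the YOLO-ALS abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes a simple scheme in which a class co-occurrence matrix counted from training images supplies a prior that is blended into a low-confidence detection's class probabilities, guided by the classes already detected with high confidence in the same image. The names (`cooccurrence_matrix`, `correct_low_conf`) and the blending weight `alpha` are hypothetical.

```python
import numpy as np

def cooccurrence_matrix(image_labels, num_classes):
    """Count how often class pairs appear in the same training image
    (data-driven association rules), then row-normalize the counts."""
    M = np.zeros((num_classes, num_classes))
    for labels in image_labels:
        present = np.unique(labels)
        for a in present:
            for b in present:
                M[a, b] += 1
    row = M.sum(axis=1, keepdims=True)
    return np.divide(M, row, out=np.zeros_like(M), where=row > 0)

def correct_low_conf(probs, high_conf_classes, M, alpha=0.5):
    """Blend a low-confidence detection's class probabilities with the
    co-occurrence prior implied by the image's high-confidence classes."""
    if len(high_conf_classes) == 0:
        return probs
    prior = M[high_conf_classes].mean(axis=0)
    if prior.sum() > 0:
        prior = prior / prior.sum()
    fused = (1.0 - alpha) * probs + alpha * prior
    return fused / fused.sum()
```

For example, if classes 0 and 1 frequently co-occur in training images, a confident class-0 detection raises the corrected probability of class 1 for nearby low-confidence detections, which is the mechanism the abstract credits with reducing missed detections.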