Pub Date: 2026-02-06 | DOI: 10.1109/JSTARS.2026.3661580
Ruiqing Zhang;Bingbing Lei;Wei Feng;Xue Chai
Hyperspectral–multispectral image fusion (HMIF) aims to achieve hyperspectral image (HSI) super-resolution by integrating the rich spectral information of HSI with the high spatial resolution of multispectral image (MSI). Despite remarkable progress enabled by deep learning, HMIF remains challenging. Conventional fusion networks that rely solely on feature concatenation often fail to leverage the abundant prior knowledge inherent in remote sensing data, limiting their ability to model the complex nonlinear relationships found in real-world scenes. Moreover, shallow cross-modal feature sharing frequently results in edge artifacts or spectral distortions, while decoupled branches hinder the propagation of complementary information across modalities. To address these limitations, we propose the spatial–spectral cross-modal alternating direction method of multipliers (ADMM) unfolding network (SCIAU-Net), an explainable deep learning framework that unfolds the ADMM optimization process. SCIAU-Net reformulates two degradation models, dominated by HSI and MSI, respectively, into a dual-branch neural architecture with dedicated modules designed to solve the corresponding variables. First, dense VRWKV blocks (DVBs) replace handcrafted components, embedding domain knowledge and physical priors of remote sensing images directly into the network. Second, we introduce spatial–spectral cross-modal interaction modules: in the HSI-dominated branch, SpeCIM injects MSI-guided spatial cues via adaptive implicit neural representation to extract spatial details, while in the MSI-dominated branch, SpaCIM employs state space duality to model intergroup spectral dependencies and refine spectral reconstruction. Finally, a principled loss function, comprising a mean squared error term and a Karush–Kuhn–Tucker consistency term, penalizes the ADMM primal and dual residuals, promoting convergence toward physically consistent solutions.
Extensive qualitative and quantitative experiments on five datasets demonstrate that SCIAU-Net achieves state-of-the-art performance in all evaluated scenarios, producing high-resolution HSI with superior spatial and spectral fidelity.
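For readers unfamiliar with ADMM residuals, the quantities that a KKT-consistency loss penalizes can be seen in a toy consensus problem. This is a generic ADMM sketch, not the authors' network; the objective, variable names, and step size are illustrative only.

```python
def admm_consensus(a, b, rho=1.0, iters=300):
    """Scaled-form ADMM for min ||x - a||^2 + ||z - b||^2  s.t.  x = z.

    Returns the solution and the primal/dual residual histories -- the two
    quantities a KKT-consistency loss penalizes to promote convergence.
    """
    x = z = u = 0.0
    primal, dual = [], []
    for _ in range(iters):
        z_old = z
        x = (2 * a + rho * (z - u)) / (2 + rho)  # closed-form x-update
        z = (2 * b + rho * (x + u)) / (2 + rho)  # closed-form z-update
        u += x - z                               # scaled dual (multiplier) update
        primal.append(abs(x - z))                # primal residual r = x - z
        dual.append(rho * abs(z - z_old))        # dual residual s = rho*(z - z_old)
    return x, z, primal, dual

# The optimum of this toy problem is x = z = (a + b) / 2.
x, z, r, s = admm_consensus(1.0, 3.0)
```

Driving both residual histories toward zero is exactly what the residual-penalty term in the loss encourages at each unfolded stage.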
Title: SCIAU-Net: A Spatial-Spectral Cross-Modal Interaction ADMM Unfolding Network for Hyperspectral and Multispectral Image Fusion
Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 8175-8192
Pub Date: 2026-02-06 | DOI: 10.1109/JSTARS.2026.3662146
Zhihui Geng;Jiangtao Wang;Rui Wang
The synergistic application of hyperspectral images combined with light detection and ranging (LiDAR) or synthetic aperture radar (SAR) data is crucial for improving accuracy in multisource remote sensing joint classification. However, existing methods still suffer from limitations in long-range dependency modeling, cross-modal alignment, and the preservation of fine-grained spectral features. This study introduces the tri-complementary Mamba modules (Tri-CoMamba) framework to address these limitations. The proposed network architecture is founded on the state-space model and employs selective scanning Mamba-S6 as its core structure, integrating three complementary modules: complement-and-rectify Mamba (CoRe-Mamba), cross-frequency spectral Mamba (CF-SpecMamba), and modality-aware spatial modulation (MASM). Specifically, CoRe-Mamba mitigates feature mismatch through dual-level spatial and channel rectification, enhancing semantic consistency and directional modeling. CF-SpecMamba introduces bidirectional recurrence and cross-frequency interaction attention to balance low-frequency baselines with high-frequency details in spectral modeling, achieving comprehensive spectral feature enhancement. Furthermore, MASM uses modality-aware dynamic spatial modulation to highlight discriminative regions and suppress background interference, optimizing cross-modal fusion. The synergy of these three modules enables Tri-CoMamba to fully exploit the distinct yet complementary strengths of spatial, spectral, and modal features while preserving computational efficiency, leading to precise classification of multisource data. The effectiveness of this approach was validated on the Berlin, Trento, and Houston2018 datasets, where Tri-CoMamba outperforms various representative methods, achieving overall accuracies of 78.51%, 99.80%, and 92.88%, respectively.
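At its core, an S6-style selective scan is an input-dependent linear recurrence over the sequence. The NumPy sketch below is illustrative only, not the Tri-CoMamba implementation; the shapes and the constant-decay demo values are assumptions.

```python
import numpy as np

def selective_scan(x, A, B, C):
    """Core of an S6-style selective scan: an input-dependent linear
    recurrence h_t = A_t * h_{t-1} + B_t * x_t with readout y_t = C_t . h_t.

    x: (T,) sequence; A, B, C: (T, N) per-step dynamics and projections
    (in Mamba these depend on the input itself -- hence "selective").
    """
    T, N = A.shape
    h = np.zeros(N)
    y = np.empty(T)
    for t in range(T):                 # sequential scan over time steps
        h = A[t] * h + B[t] * x[t]     # state update with per-step dynamics
        y[t] = C[t] @ h                # readout
    return y

rng = np.random.default_rng(0)
T, N = 8, 4
x = rng.standard_normal(T)
A = np.full((T, N), 0.9)               # constant decay just for the demo
B = np.ones((T, N))
C = np.ones((T, N))
y = selective_scan(x, A, B, C)
```

Because A, B, and C vary per step, the recurrence can gate what the state retains, which is what gives the selective scan its long-range modeling capacity.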
Title: Tri-CoMamba: A Tri-Complementary Mamba Framework for Multisource Remote Sensing Image Classification
Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 7170-7187
Pub Date: 2026-02-06 | DOI: 10.1109/JSTARS.2026.3661713
Shuhang Gao;Caiqun Wang;Qiong Hu;Jun Lu;Jianxi Wang;Dan-Xia Song
Timely detection of cropland change at fine spatial scales is essential for sustainable land management and food security. Satellite observations with high spatial and temporal resolutions enable effective cropland monitoring, offering scientific support for decision-making. However, traditional cropland change detection methods often rely on long-term image series, limiting their ability to detect rapid changes over heterogeneous cropping systems. To address these challenges, we develop a novel framework, LSTM-FCN-CVAPS, which integrates the long short-term memory–fully convolutional network (LSTM-FCN) deep learning model with change vector analysis in posterior probability space (CVAPS), a technique designed to reduce errors in postclassification change detection. The framework is applied to 3-m spatial resolution PlanetScope (PS) imagery to monitor cropland changes at high spatiotemporal resolution over Dangyang County, a key agricultural region in Hubei Province, China. The proposed method achieves high classification accuracy (OA = 0.9761), outperforming conventional classification methods. Combining LSTM-FCN with CVAPS yields superior change detection accuracy (OA = 0.9452), effectively capturing temporal dynamics within a 10-month period. Applied in Dangyang, the method reveals a 3.9% net reduction in cropland area from 2022 to 2024. Major transitions include cropland to bare land (14.94 km²), forest/grass (10.03 km²), artificial surfaces (5.62 km²), and water (2.14 km²), with strong seasonal patterns observed in the conversions of cropland to bare land and cropland to artificial surfaces. The cropland changes were concentrated in the central plains, with minimal changes in the southwest.
The proposed method is effective and well-suited to change detection over fragmented croplands, requires short-term time-series input, and is transferable to other agricultural areas, contributing to more informed land use planning and continuous environmental monitoring.
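The CVAPS step compares per-pixel class posterior vectors from two dates and flags pixels whose change vector is long. The sketch below is a minimal generic version; the 0.8 threshold and the toy probabilities are illustrative, not values from the paper.

```python
import numpy as np

def cvaps_change(p_t1, p_t2, threshold=0.8):
    """Change vector analysis in posterior probability space (CVAPS).

    p_t1, p_t2: (H, W, K) per-pixel class posteriors at two dates (e.g.,
    softmax outputs of a classifier). The change magnitude is the Euclidean
    length of the change vector; the threshold here is illustrative.
    """
    delta = p_t2 - p_t1                            # change vector per pixel
    magnitude = np.linalg.norm(delta, axis=-1)     # length in probability space
    return magnitude, magnitude > threshold

# Two-pixel demo: a stable pixel and a cropland -> bare-land transition.
p1 = np.array([[[0.9, 0.1], [0.9, 0.1]]])
p2 = np.array([[[0.9, 0.1], [0.1, 0.9]]])
mag, changed = cvaps_change(p1, p2)
```

Working in posterior space rather than on hard labels is what lets CVAPS suppress the error accumulation typical of postclassification comparison.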
Title: Cropland Change Detection at High Spatial and Temporal Resolutions Based on Short-Term PlanetScope Image Series Using LSTM-FCN-CVAPS Model
Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 8286-8301
In this article, we explore models relating soil bacterial diversity to spectral indices derived from optical satellite remote sensing and to meteorological variables. We computed alpha and beta diversity indices using metabarcoding data generated from 214 cropland soil samples collected in the context of Eurostat’s 2018 pan-European LUCAS Soil module. Subsequently, we derived 12 spectral indices from Sentinel-2 images and monthly meteorological variables from the TerraClimate dataset. We then built models of bacterial diversity using the earth observation and climatic variables, experimenting with different algorithms and predictor time lags from the soil sampling date. Random forest and Cubist regressors yielded MAE ≤ 7% of the observed range and R² = 0.87 for beta diversity indices, while alpha diversity models reached MAE ≈ 10% and R² ≈ 0.15. Feature importance pointed to winter moisture variability as the chief control on richness/evenness, whereas growing-season thermal extremes governed community turnover, with Sentinel-2 indices contributing secondary signals. Overall, our results indicate that freely available satellite multispectral and meteorological data can predict dimensions of cropland soil bacterial diversity, with particularly strong skill for beta diversity axes based on principal coordinates analysis and canonical analysis of principal coordinates.
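The skill measures quoted above (MAE as a share of the observed range, and R²) can be computed as follows. This is a generic metrics sketch, not the paper's modeling pipeline; the toy arrays are illustrative.

```python
import numpy as np

def mae_pct_of_range(y_true, y_pred):
    """MAE expressed as a percentage of the observed range of y_true."""
    mae = np.mean(np.abs(y_true - y_pred))
    return 100.0 * mae / (y_true.max() - y_true.min())

def r2_score(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy values only, to show the two metrics side by side.
y_true = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_pred = np.array([0.1, 0.9, 2.1, 2.9, 4.1])
```

Normalizing MAE by the observed range makes the error comparable across diversity indices with very different scales, which is presumably why the authors report it that way.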
Title: Predicting Bacterial Diversity in European Croplands Using Earth Observation and Meteorological Data
Authors: Dimitrios Bormpoudakis;Pablo Sánchez-Cueto;Soraya González Sánchez;Spyros Theodoridis;Maëva Labouyrie;Alberto Orgiazzi;Panos Panagos;Arwyn Jones;Salvador Lladó;Martin Hartmann;Charalampos Kontoes
Pub Date: 2026-02-06 | DOI: 10.1109/JSTARS.2026.3662435
Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 7560-7567
To improve the accuracy of coastline extraction from polarimetric synthetic aperture radar (PolSAR) images, an adaptive superpixel segmentation-based method is proposed. First, multiple polarimetric and texture features are extracted to characterize the complexity of the PolSAR image. Then, the image complexity and dimensions are used to determine the optimal number of superpixels, followed by superpixel segmentation using the simple linear iterative clustering (SLIC) algorithm. The segmented superpixels are merged to extract the coastlines using the superpixel similarity-based fractal network evolution algorithm (FNEA). Finally, the proposed method is validated using PolSAR images with varying complexity levels. Experimental results demonstrate its effectiveness, achieving an average undersegmentation error of 0.1269 for adaptive superpixel segmentation and a high average Kappa coefficient of 0.9889 for coastline extraction.
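The abstract does not give the formula linking complexity and dimensions to the superpixel count, so the sketch below is a hypothetical stand-in: normalized gray-level entropy plays the role of the paper's polarimetric/texture complexity features, and the scaling law is invented for illustration.

```python
import numpy as np

def adaptive_superpixel_count(img, k_min=100, k_max=2000, bins=64):
    """Pick a SLIC superpixel count from image size and complexity.

    Normalized gray-level entropy stands in for the paper's polarimetric and
    texture complexity features; the scaling law is purely illustrative.
    """
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))        # 0 for flat, log2(bins) for uniform
    complexity = entropy / np.log2(bins)     # normalized to [0, 1]
    scale = min(np.sqrt(img.size) / 512.0, 1.0)  # bigger image -> more superpixels
    return int(round(k_min + (k_max - k_min) * complexity * scale))

flat = np.zeros((128, 128))                          # homogeneous scene
noisy = np.random.default_rng(0).random((128, 128))  # heterogeneous scene
```

A homogeneous scene stays at the minimum count while a textured one is segmented more finely, which is the adaptive behavior the paper targets.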
Title: Adaptive Superpixel Segmentation-Based Coastline Extraction Method for PolSAR Images
Authors: Yu Wang;Zhanying Ma;Mengmeng Li;Yu Li;Xue Shi;Xuemei Zhao
Pub Date: 2026-02-06 | DOI: 10.1109/JSTARS.2026.3662412
Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 7251-7263
Cloud coverage significantly degrades the quality and usability of remote sensing imagery, while also leading to unnecessary bandwidth consumption and power expenditure in satellite payloads. To address this challenge, a high-efficiency hardware–software co-design framework for cloud removal in remote sensing applications is presented. At the algorithmic level, a two-stage model compression strategy is designed, combining structured channel pruning and adaptive unstructured pruning, which reduces model parameters by more than 98% while preserving segmentation accuracy across multisource datasets. At the hardware level, a low-voltage sparse matrix accelerator enhanced with a super balanced path mechanism is designed, enabling stable operation at 0.65 V and achieving 43.2% higher energy efficiency and 11.5% better area efficiency compared to prior designs. Extensive experiments on GF-1 WFV, Sentinel-2 CloudSEN12, and Lilium-1 imagery validate the generalization capability and robustness of the proposed approach. The joint optimization of algorithm and hardware not only delivers state-of-the-art efficiency but also provides a practical pathway for reducing noninformative data transmission and enhancing the sustainability of future satellite missions.
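The unstructured stage of a compression pipeline like the one described is commonly magnitude pruning. The following is a generic sketch at the 98% sparsity level mentioned above, not the paper's adaptive criterion.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.98):
    """Unstructured magnitude pruning: zero the smallest-|w| fraction of
    weights so that at least `sparsity` of them become zero. A generic
    sketch of the unstructured stage, not the paper's adaptive criterion.
    """
    flat = np.abs(weights).ravel()
    k = int(np.ceil(sparsity * flat.size))         # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.random.default_rng(1).standard_normal((64, 64))
w_pruned = magnitude_prune(w, sparsity=0.98)
```

The resulting weight matrices are overwhelmingly zero, which is precisely the sparsity pattern a sparse matrix accelerator such as the one described can exploit.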
Title: Efficient Cloud Removal for Remote Sensing Data Transmission via Model Compression and Sparse Accelerator Design
Authors: Chun-Fu Chen;Pei-Jun Lee;Chun-Han Chen;Shimaa Bergies
Pub Date: 2026-02-04 | DOI: 10.1109/JSTARS.2026.3661035
Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 6389-6402
Conventional empirical global ionospheric models, such as the Klobuchar model and the International Reference Ionosphere (IRI), utilize a limited set of parameters to represent global ionospheric total electron content (TEC) and are extensively employed in Global Navigation Satellite System (GNSS) positioning. However, the accuracy of these models is inherently restricted by their reliance on predefined mathematical functions, particularly during periods of intense space weather activity. While artificial neural networks (ANNs) offer significant advantages in modeling nonlinear relationships and have demonstrated promising results for ionospheric prediction, their efficacy for empirical ionospheric modeling remains largely unexplored. This study fills that gap by proposing a multichannel convolutional neural network (CNN) method for empirical ionospheric modeling. The base architecture, CEIMv1, integrates fundamental solar-geometric parameters. Building upon this foundation, CEIMv2 incorporates solar-geomagnetic indices as additional inputs, while CEIMv3 further extends the input data by including mean global electron content (MGEC) data. Validation against reference products shows that CEIMv2 and CEIMv3 achieve root-mean-square errors (RMSEs) of 6.40 TECU and 4.06 TECU, corresponding to accuracy gains of 26.0% and 17.1% over IRI-2020 and of 46.6% and 49.9% over the Klobuchar model. Notably, CEIMv3 exhibits a variation in precision of merely 3.66 TECU between high- and low-activity solar years, significantly outperforming conventional models. These results demonstrate a shift from traditional function-based methods toward a data-driven, multichannel machine learning strategy, offering significantly improved ionospheric delay correction for single-frequency GNSS users.
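Conceptually, the multichannel input is a stack of per-pixel feature maps, and accuracy is scored as RMSE in TEC units. The channel counts and grid size below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def build_input(solar_geometric, geomagnetic, mgec):
    """Stack per-pixel feature maps into a multichannel CNN input
    (CEIMv3-style inputs: solar-geometric parameters, solar-geomagnetic
    indices, and MGEC). The channel layout here is an assumption.
    """
    channels = list(solar_geometric) + list(geomagnetic) + list(mgec)
    return np.stack(channels, axis=0)           # (C, H, W)

def rmse_tecu(pred, ref):
    """Root-mean-square error in TEC units, the accuracy metric above."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

h, w = 71, 73                                    # a hypothetical global TEC grid
sg = [np.zeros((h, w)) for _ in range(3)]        # e.g., solar-geometric maps
gm = [np.zeros((h, w)) for _ in range(2)]        # e.g., geomagnetic index maps
mg = [np.ones((h, w))]                           # MGEC broadcast to the grid
x = build_input(sg, gm, mg)
```

Each model version (CEIMv1 → v3) then corresponds to feeding a progressively deeper channel stack to the same convolutional backbone.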
Title: A Multichannel CNN for Global Empirical Ionospheric Modeling
Authors: Weitang Wang;Yibin Yao;Qi Zhang;Rong Wang;Liang Zhang
Pub Date: 2026-02-03 | DOI: 10.1109/JSTARS.2026.3660927
Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 7389-7400
Pub Date: 2026-02-03 | DOI: 10.1109/JSTARS.2026.3660688
Yin Jin;Huadong Guo;Hanlin Ye;Mengxiong Zhou;Hairong Wang;Guang Liu
Stray light degrades image quality and may damage the Moon-based multispectral camera. This article focuses on the effects of stray light at the lunar south pole on Earth observation in the Chang'e-8 mission. Compared with spaceborne platforms, the illumination conditions of a Moon-based sensor are more complex, and the extent of their impact on the sensor remains unclear. We constructed a three-dimensional lunar illumination model based on the Hapke radiative transfer model and a Monte Carlo ray tracing algorithm to accurately simulate illumination conditions and analyze the distribution of stray light on the lunar surface. We also analyzed the effects of sunlight striking the camera lens directly and of sunlight entering the sensor through diffuse reflection on Moon-based Earth observation. Based on this model, the working time of the Moon-based multispectral camera can be planned to avoid both solar intrusion and lunar nights, and the impact of stray light on the sensor's entrance pupil under different working environments can be evaluated. The results show that in the Chang'e-8 candidate landing area, solar elevation mainly affects the total radiance at the entrance pupil, while solar azimuth governs the spatial distribution of incident light. These findings provide important insights into timing constraints and optical interference, supporting observation scheduling and performance evaluation for the Chang'e-8 Moon-based Earth observation mission.
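The finding that solar elevation sets the radiance level while azimuth shapes its spatial distribution follows from basic facet-illumination geometry. Below is a standard hillshade-style sketch, not the authors' Hapke/ray-tracing model.

```python
import math

def incidence_cosine(elev, azim, slope, aspect):
    """Cosine of the solar incidence angle on a tilted facet (angles in
    degrees, standard hillshade geometry). For a flat facet it reduces to
    sin(elevation): elevation sets the irradiance level, while azimuth
    only redistributes light once the surface is sloped.
    """
    e, a = math.radians(elev), math.radians(azim)
    s, f = math.radians(slope), math.radians(aspect)
    c = math.cos(s) * math.sin(e) + math.sin(s) * math.cos(e) * math.cos(a - f)
    return max(c, 0.0)                 # facets facing away from the Sun are unlit

flat_facet = incidence_cosine(elev=10.0, azim=90.0, slope=0.0, aspect=0.0)
sunward = incidence_cosine(elev=10.0, azim=90.0, slope=20.0, aspect=90.0)
```

At the low solar elevations typical of the lunar south pole, the azimuth term dominates the contrast between sunward- and antisunward-facing slopes, which is why azimuth governs where stray light comes from.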
Analysis of Lunar Surface Stray Light for Moon-Based Multispectral Camera
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 7295-7304.
Pub Date : 2026-02-03DOI: 10.1109/JSTARS.2026.3659858
Shufang Xu;Yifan Liu;Yiyan Zhang;Hongmin Gao
In recent years, deep learning-based methods have been increasingly applied to hyperspectral image (HSI) anomaly detection. Because the hyperspectral anomaly detection (HAD) task lacks prior supervisory information, existing methods often reconstruct both background and anomalous pixels to some extent. Moreover, neglecting the spatial information in HSI makes it difficult to separate anomalous pixels from background pixels during detection. To address these issues, we propose a novel purification window spectral–spatial self-supervised network that is trained to reconstruct only background pixels while fully leveraging HSI spatial information. The purification window module first cleanses the dataset, significantly mitigating the problem of insufficient supervisory information in HAD; feeding the processed dataset into the network shortens training time while enhancing model performance. The processed image data is then input into a lightweight reconstruction network based on Kolmogorov–Arnold Network (KAN) convolution and depthwise separable convolution, which ensures strong feature representation capability with low computational complexity. We summarize and improve upon previous guided image filtering methods, introducing a new approach to incorporating spatial information that further suppresses the reconstruction of anomalous pixels. The proposed network focuses on spectral information, and its combination with the guided filtering method further improves the accuracy of HAD. Extensive comparative experiments on three datasets demonstrate the effectiveness and superiority of the lightweight KAN convolution spectral–spatial network with a purification window over other popular detectors.
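The abstract does not specify its improved guided filtering step; as background, the classical guided image filter it builds on — a local linear model of the output in the guidance image — can be sketched in NumPy as follows. The window radius `r` and regularizer `eps` are illustrative defaults, not the paper's settings.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def box_mean(x, r):
    """Mean over a (2r+1) x (2r+1) window, with edge padding."""
    xp = np.pad(x, r, mode="edge")
    return sliding_window_view(xp, (2 * r + 1, 2 * r + 1)).mean(axis=(-2, -1))

def guided_filter(I, p, r=2, eps=1e-3):
    """Classical guided filter: edge-preserving smoothing of p, guided by I.
    Within each window, the output is modeled as a*I + b."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mean_I * mean_I
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)   # local linear coefficient
    b = mean_p - a * mean_I      # local offset
    # Average the per-window coefficients before applying them per pixel.
    return box_mean(a, r) * I + box_mean(b, r)
```

Because the output follows the guidance image's local structure, edges present in `I` survive the smoothing of `p` — the property that helps inject spatial information into a spectrally driven detector.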
Lightweight KAN Convolution Spectral–Spatial Network With Purification Window for Hyperspectral Anomaly Detection
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 6892-6906.
Pub Date : 2026-02-03DOI: 10.1109/JSTARS.2026.3660290
Qi He;Xu Liu;Wei Zhao;Yanling Du
Sea surface temperature (SST) plays a central role in regulating ocean–atmosphere interactions and influencing extreme climate events such as marine heatwaves. However, the inherent complexity and nonlinearity of SST dynamics present major challenges for accurate and interpretable forecasting. To address this problem, we propose a novel interpretable framework named Multitemporal Scale Fusion Transformers (MTSFT), which jointly improves prediction accuracy and explanatory power. MTSFT incorporates Enhanced Multitemporal Scale Periodic Features to decouple overlapping temporal patterns at daily, seasonal, and interannual scales, improving the model's ability to capture key temporal structures. Built on an improved Temporal Fusion Transformer, the framework integrates static covariates, historical environmental inputs, and known future indicators into a unified architecture. In addition, MTSFT supports multilevel interpretability by identifying dominant drivers, detecting SST anomalies, and characterizing periodic patterns across various time scales. Experimental results across typical coastal regions of China show that MTSFT consistently achieves reliable prediction performance and offers meaningful scientific insights to support marine risk assessment and climate-informed decision-making.
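The abstract does not detail how the Enhanced Multitemporal Scale Periodic Features are computed. A generic way to separate a daily SST series into interannual, seasonal, and residual components — a rough stand-in for such scale decoupling, with hypothetical function names and window lengths — is successive moving-average detrending:

```python
import numpy as np

def moving_average(x, window):
    """Centered moving average with edge padding, same length as the input."""
    pad = window // 2
    xp = np.pad(np.asarray(x, dtype=float), pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(xp, kernel, mode="valid")[: len(x)]

def decompose_scales(sst, windows=(365, 30)):
    """Split a daily series into slow-to-fast components by repeatedly
    extracting a moving-average trend and detrending the remainder."""
    components = {}
    residual = np.asarray(sst, dtype=float)
    for w in windows:
        trend = moving_average(residual, w)   # e.g. 365 d: interannual; 30 d: seasonal
        components[f"scale_{w}d"] = trend
        residual = residual - trend
    components["residual"] = residual         # fast (e.g. daily) variability
    return components
```

By construction the components sum back to the original series, so each scale can be fed to (and attributed by) a downstream forecaster without losing information.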
Multitemporal Scale Fusion Transformers for Interpretable Sea Surface Temperature Prediction
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 19, pp. 7357-7372.