Pub Date: 2026-01-30. DOI: 10.1016/j.jag.2026.105132
Zihuan Guo, Hong Zhang, Xiao-Ming Li, Yukun Fan, Haoxuan Duan, Qiming Zeng, Ji Ge, Chao Wang
Quad-polarimetric (quad-pol) synthetic aperture radar (SAR) data provides crucial polarimetric information for post-disaster building damage assessment. However, most current spaceborne SAR platforms prioritize dual-polarization (dual-pol) mode, which ensures high temporal and spatial data availability but limits damage analysis accuracy due to the absence of some polarimetric information. Existing methods for reconstructing dual-pol to quad-pol SAR data often fail to ensure that the reconstructed data satisfies fundamental physical properties, while traditional building damage detection methods still struggle to accurately capture complex depolarization effects. To address these challenges, this paper proposes a diffusion model-based method for reconstructing dual-pol data to quad-pol data, applied to post-earthquake building damage analysis. The method introduces a Positive Semi-definite Constraint Module and a Plug-and-Play SVD Parameter Fine-tuning Module to ensure the physical validity and accuracy of the reconstructed data. Additionally, a Stokes vector-based Degree of Polarization frequency analysis method is proposed to enhance the description of depolarization information. A multi-dimensional polarimetric feature combination is constructed for grid-level building damage assessment. Experiments on Gaofen-3, ALOS-2/PALSAR-2, and Sentinel-1 data show that the proposed method performs optimally in complex scenarios, with all pixels meeting the positive semi-definite constraint. Compared to the original dual-pol SAR data, building damage assessment using the reconstructed quad-pol SAR data resulted in F1 score improvements of 16.3% and 8.4% for detecting moderately and severely damaged buildings, respectively. This research provides crucial technical support for fully harnessing the potential of dual-pol SAR data in building damage assessment.
Title: Quad-pol reconstruction of dual-pol SAR data via a physically constrained diffusion model for building damage assessment
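The two physical notions this abstract relies on, the positive semi-definite constraint on a polarimetric covariance matrix and the Stokes vector-based degree of polarization, can be sketched numerically. This is a minimal illustration with synthetic values; the paper's constraint module is a learned network component, not this closed-form test.

```python
import numpy as np

def is_positive_semidefinite(C, tol=1e-10):
    """Physical-validity check for a polarimetric covariance matrix:
    it must be Hermitian with non-negative eigenvalues."""
    if not np.allclose(C, C.conj().T, atol=tol):
        return False
    eigvals = np.linalg.eigvalsh(C)  # real eigenvalues of a Hermitian matrix
    return bool(eigvals.min() >= -tol)

def degree_of_polarization(S):
    """Degree of polarization from a Stokes vector S = [S0, S1, S2, S3]:
    DoP = sqrt(S1^2 + S2^2 + S3^2) / S0, in [0, 1] for physical states."""
    S0, S1, S2, S3 = S
    return np.sqrt(S1**2 + S2**2 + S3**2) / S0

# A covariance matrix built as an outer product k k^H is PSD by construction.
k = np.array([1.0 + 0.5j, 0.2 - 0.1j, 0.8 + 0.0j])
C = np.outer(k, k.conj())
print(is_positive_semidefinite(C))                    # True
print(degree_of_polarization([1.0, 0.6, 0.0, 0.8]))   # 1.0 (fully polarized)
```

Fully depolarized scattering, which the abstract associates with damaged buildings, would instead give S1 = S2 = S3 = 0 and DoP = 0.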
Accurate in-season crop yield prediction is important for farm management, insurance planning, and food security decision making. Many studies have proposed different models and data sources, but their findings are difficult to compare because they focus on different crops, regions, spatial scales, prediction months, and evaluation metrics. This review presents a structured synthesis of in-season crop yield prediction studies and identifies key gaps that limit early and reliable prediction. We conducted a systematic literature review following a defined protocol: we searched multiple databases using targeted search strings and screened studies using clear inclusion and exclusion criteria. Eligible studies focused on corn, soybean, or winter wheat, were published between 2014 and 2025, and predicted crop yield during the growing season. Review papers were excluded. This process identified 170 studies, of which 55 were retained for detailed analysis. For each selected study, we extracted information on modeling approach, input data sources, prediction lead time, spatial scale, and reported evaluation metrics. We synthesized results across three main model types: machine learning models, process-based crop models, and statistical models. This review addresses five research questions: (1) What modeling approaches are most commonly used for in-season crop yield prediction, and how do their predictive performances compare? (2) What input data sources and feature types are most frequently used, and how do they influence prediction accuracy? (3) What prediction lead times are most commonly used for major crops, and how does lead time affect model performance? (4) What are the main challenges and limitations in current in-season crop yield prediction studies? (5) What methodological and data-driven strategies can improve future in-season crop yield predictions? The reviewed literature shows that no single approach consistently outperforms others across all conditions. Model performance depends strongly on data quality, crop type, and regional conditions, with common limitations related to data gaps, cloud contamination in satellite imagery, and limited model interpretability.
Title: In-season crop yield prediction: State of the art and future research direction
Authors: Ziao Liu, Liping Di, Ruixin Yang, Liying Guo, Chen Zhang, Hui Li, Bosen Shao
Pub Date: 2026-01-30. DOI: 10.1016/j.jag.2026.105129
Terrace construction is a crucial human intervention for improving slope productivity and preventing soil erosion, especially in low-latitude hilly regions with heavy and concentrated rainfall. However, existing terrace mapping methods perform poorly in such areas. In this study, we generated the first high-resolution (0.9 m) terrace distribution map for low-latitude hilly regions by integrating deep learning with slope-area threshold denoising. Using Guangdong Province as a case study, we combined Google Earth imagery, SRTM DEM, and Global 10-m land-cover data. The results showed that the optimal 0.9 m resolution achieved 93.34% overall accuracy (OA), 79.18% F1-score, and 65.53% intersection over union (IoU) on our Guangdong terrace dataset, processing 400 km² in 89 min, showing an excellent accuracy–speed trade-off. Statistical analysis revealed that terraces in Guangdong are predominantly small (4725 m² on average) and gently sloped (8.85°). Provincial validation confirmed superior performance, with a producer’s accuracy of 80.38%. Spatially, terraces are mainly clustered in inland hilly areas, especially in northern Guangdong, with additional clusters in the peripheral zones of the Pearl River Delta and in the western and eastern regions. Cities with the largest terrace coverage are Qingyuan (801.98 km²), Zhaoqing (733.11 km²), and Shaoguan (701.53 km²). This high-precision dataset supports agricultural management, soil erosion risk assessment, ecological conservation, and rural revitalisation, while also offering a transferable framework for mapping terrain features in complex landscapes worldwide using remote sensing imagery.
Title: High-resolution (0.9 m) terrace mapping in low-latitude hilly regions using deep learning and area–slope denoising: A case study from Guangdong Province, China
Authors: Yinghai Zhao, Hanquan Cheng, Siyuan Peng, Suhong Liu, Yun Xie, Baoyuan Liu
Pub Date: 2026-01-30. DOI: 10.1016/j.jag.2026.105126
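The reported accuracy figures are internally consistent: for a binary terrace/non-terrace map, OA, F1, and IoU all derive from the same confusion-matrix counts, and IoU and F1 are linked by IoU = F1 / (2 - F1). A small sketch with illustrative counts (not the paper's data):

```python
def binary_metrics(tp, fp, fn, tn):
    """Overall accuracy, F1-score, and IoU from binary confusion-matrix counts."""
    oa = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)
    return oa, f1, iou

print(binary_metrics(50, 10, 10, 30))  # illustrative counts

# Sanity check against the abstract: F1 = 79.18% implies
# IoU = 0.7918 / (2 - 0.7918) ≈ 0.6554, matching the reported 65.53%
# up to rounding of the F1 value itself.
f1 = 0.7918
print(f1 / (2 - f1))
```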
Remote sensing image change detection is critical for monitoring earth surface dynamics. Although deep learning has significantly improved change detection performance, traditional and existing deep super-resolution techniques for cross-resolution change detection often assume bi-temporal images share the same resolution, at least in the training phase. They also suffer from limitations including dependency on expensive high-resolution paired training data, suboptimal performance transfer from super-resolution to change detection accuracy, and heavy reliance on extensive pixel-level annotations. To address these limitations, we propose a novel dual pixel-level and subpatch-level network with cross-temporal super resolution (DPSNet) for change detection across spatial resolutions. Our method, DPSNet, comprises two core components: 1) a Reference Image-Guided Generative Adversarial Network (RefIGM GAN) for cross-temporal super resolution; and 2) a Semi-supervised Dual-Path Network (SDNet) for pixel-level and subpatch-level change detection. A resource-efficient alternating optimization strategy is employed between RefIGM GAN and SDNet, creating a virtuous cycle in which super-resolution improves detection accuracy, and detection results optimize super-resolution reconstruction. Experiments were conducted on three datasets, i.e., CDD, SYSU, and HTCD, each characterized by distinct resolution variations. The CDD and SYSU datasets include bi-temporal images with 4× and 8× resolution differences, respectively, while the HTCD dataset contains both satellite and UAV imagery with inherent resolution disparities. The results demonstrate that by integrating reference-guided super resolution and semi-supervised learning, effective cross-resolution change detection can be achieved with only limited high-resolution data and pixel-level labels, showing great practical significance in scenarios where solely low-resolution historical images are available.
Our source code will be released at https://github.com/Flandre7155/DPSNet.
Title: A dual pixel-level and subpatch-level network with cross-temporal super resolution for change detection across spatial resolutions
Authors: Dawei Wen, Yunlong Zhang, Binqiang Zhang, Deng Chen, Xiaofeng Pan, Xin Huang
Pub Date: 2026-01-30. DOI: 10.1016/j.jag.2026.105134
Pub Date: 2026-01-29. DOI: 10.1016/j.jag.2026.105125
Kai Tang, Zhuo Zheng, Hongruixuan Chen, Xuehong Chen, Jin Chen
The Earth is experiencing continuous anthropogenic and natural changes. Very high resolution (VHR) remote sensing imagery-based change detection provides an effective means to monitor these dynamics at fine spatial scales. Although deep learning has significantly advanced supervised change detection (CD), it heavily relies on large amounts of human-labeled samples. In real-world CD application scenarios, acquiring sufficient change samples is challenging due to the labor-intensive nature of pixel-level labeling. This challenge has motivated the development of unsupervised change detection (UCD). However, existing UCD methods still struggle in complex scenes with bi-temporal domain shifts caused by different imaging conditions. This is largely due to the absence of high-quality samples needed to guide CD-oriented optimization. To address this challenge, we propose DreamCD, a change-label-free framework that synthesizes change samples for UCD. DreamCD consists of: (1) a weakly conditional semantic diffusion model trained with pseudo-semantic masks, (2) a Content-Semantic-Style synthesis strategy that synthesizes realistic pre- and post-event image pairs of the application domain, and (3) an arbitrary contemporal deep change detector trained solely on synthetic samples. We further introduce LsSCD-Ex, a large-scale semantic change detection (SCD) dataset consistent with OpenEarthMap semantics, enabling evaluation of synthetic-sample-based SCD. Experiments on the SECOND and LsSCD-Ex datasets demonstrate that DreamCD achieves state-of-the-art (SOTA) UCD performance, improving the average F1 score by 14.01% over existing methods for binary CD and outperforming the SOTA unsupervised SCD model, Changen2, by 2.15% in F1 and 3.63% in separated kappa coefficient (SCD metric). These results suggest that DreamCD provides a promising and extensible solution for CD in real-world remote sensing applications. 
Code and LsSCD-Ex dataset are available at https://github.com/tangkai-RS/DreamCD.
Title: DreamCD: A change-label-free framework for change detection via a weakly conditional semantic diffusion model in optical VHR imagery
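The core idea of label-free change samples, synthesizing the post-event image so that the mask used for synthesis becomes a free change label, can be caricatured in a few lines of NumPy. This is purely illustrative: DreamCD's synthesis uses a semantic diffusion model and pseudo-semantic masks, not the random textures below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pre-event image (H, W, 3) and a mask marking where a "new building" appears.
pre = rng.random((64, 64, 3))
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 15:35] = True

# Synthesize a post-event image by painting the masked region with a new
# bright, roof-like texture; everywhere else the scene is unchanged.
post = pre.copy()
post[mask] = rng.random((mask.sum(), 3)) * 0.3 + 0.6

# The change label comes for free: it is exactly the synthesis mask.
change_label = mask.astype(np.uint8)
print(change_label.sum())  # number of "changed" pixels: 200
```

A change detector trained on many such (pre, post, change_label) triplets never needs a human-annotated change map, which is the property the abstract exploits.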
Cloud cover in optical remote sensing imagery severely limits its effectiveness for downstream applications. Compared with cloud removal methods that rely on Synthetic Aperture Radar (SAR) data or single cloud-free references, multi-temporal imagery provides richer, more stable, and more reliable auxiliary information. However, existing methods often exploit only limited temporal observations and fail to fully capture the deep synergistic relationships among spatial, spectral, and temporal features, leading to reconstruction results that suffer from detail loss and spectral distortion in cloud-contaminated regions. To address this issue, we propose TSSMamba, a temporal–spectral–spatial state space model for multi-temporal cloud removal. The framework jointly models temporal dynamics, spectral responses and spatial structures of cloud-obscured imagery. A dual-stream architecture is designed to separately learn temporal–spectral and temporal–spatial dependencies, while a cross-dimensional fusion module strengthens interactions between the two streams and improves spatial coherence and spectral consistency in the restored images. Experiments on three public cloud removal datasets demonstrate that TSSMamba significantly outperforms state-of-the-art methods in both quantitative metrics and visual quality. On the STGAN Dataset, Sen2_MTC and SEN12MS-CR-TS, TSSMamba achieves PSNR gains of 0.98, 0.59, and 1.11 dB and SSIM improvements of 0.0123, 0.0130, and 0.0115, respectively, over MSC-GAN, thereby confirming its superior cloud removal capability. Code will be made available at: https://github.com/zhangcy23/TSSMamba.
Title: TSSMamba: A temporal–spectral–spatial state space model for multi-temporal remote sensing cloud removal
Authors: Chengyao Zhang, Fengyan Wang, Xuqing Zhang, Mingchang Wang, Feng Chen, Xiang Wu, Weitong Ma
Pub Date: 2026-01-28. DOI: 10.1016/j.jag.2026.105131
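For context on the reported gains: PSNR is a log-scale metric, so a roughly 1 dB improvement corresponds to cutting mean squared error by a factor of about 10**0.1 ≈ 1.26. A minimal sketch with synthetic images, assuming intensities scaled to [0, 1]:

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between a cloud-free reference image
    and a restored image with intensities in [0, max_val]."""
    mse = np.mean((reference - restored) ** 2)
    return 10.0 * np.log10(max_val**2 / mse)

ref = np.zeros((8, 8))
a = ref + 0.10   # uniform error 0.10 -> MSE = 0.01   -> PSNR = 20 dB
b = ref + 0.05   # uniform error 0.05 -> MSE = 0.0025 -> PSNR ≈ 26.02 dB
print(psnr(ref, a), psnr(ref, b))
```

Halving the pixel error quarters the MSE and adds about 6 dB, which puts the abstract's ~1 dB gains in perspective as a meaningful but sub-halving error reduction.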
Pub Date: 2026-01-28. DOI: 10.1016/j.jag.2026.105130
Chen Feng, Xiuying Zhang, Chenglin Hu, Miaomiao Cheng, Meiling Shi
Gas flaring (GF) releases large amounts of greenhouse gases and pollutants, severely affecting global climate change and regional environmental quality. Quantifying global GF activities is essential to implement emission mitigation policies. Existing methods struggle to detect weak GF activities under complex daytime background conditions. This study proposes a method combining a spectral index with machine learning to detect onshore GF sites (GFs) using images from the Multispectral Instrument (MSI) onboard the Sentinel-2 satellites. First, the Thermal Anomaly Index (TAI, calculated in the near-infrared and short-wave infrared bands) and the TAI increment (ΔTAI) are applied to detect thermal anomalies in MSI images, in which ΔTAI enhances the spatial-contextual background contrast. Then, nine flare-specific features are selected to further identify the GFs from the thermal anomalies using a random forest model. Finally, the detected GFs were validated worldwide, achieving an overall user accuracy of 93.34% and a producer accuracy of 95.31%. This approach enhances the sensitivity to weak flares and effectively addresses interference from other heat sources and highly reflective buildings, providing a universal solution for GF detection in complex onshore scenes.
{"title":"A universal method combining spectral index and machine learning for monitoring onshore gas flaring from Sentinel-2 MSI images","authors":"Chen Feng, Xiuying Zhang, Chenglin Hu, Miaomiao Cheng, Meiling Shi","doi":"10.1016/j.jag.2026.105130","DOIUrl":"https://doi.org/10.1016/j.jag.2026.105130","url":null,"abstract":"Gas flaring (GF) releases large amounts of greenhouse gases and pollutants, severely affecting global climate change and regional environmental quality. Quantifying global GF activities is essential to implement emission mitigation polices. Existing methods struggle with detecting weak GF activities under complex daytime background conditions. This study proposes a method combining the spectral index and machine learning methods, to detect onshore GF sites (GFs) using the images from the Multispectral Instrument (MSI) onboard Sentinel-2 satellites. First, the Thermal Anomaly Index (TAI, calculated in the near-infrared and short-wave infrared bands) and TAI increment (ΔTAI) are applied to detect thermal anomalies from MSI images, in which ΔTAI enhances the spatial-contextual background contrast. Then, nine flare-specific features are selected to further identify the GFs from the thermal anomalies using a random forest model. Finally, the detected GFs were validated worldwide, achieving an overall user accuracy of 93.34% and a producer accuracy of 95.31%. 
This approach enhances the sensitivity to weak flares and effectively addresses the interferences from other heat sources and highly reflective buildings, providing a universal solution for GFs detection in complex onshore scenes.","PeriodicalId":50341,"journal":{"name":"International Journal of Applied Earth Observation and Geoinformation","volume":"72 1","pages":""},"PeriodicalIF":7.5,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146129301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
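The abstract states only that the TAI is computed from the near-infrared and short-wave infrared bands; it does not give the formula. As a minimal sketch of the idea, the snippet below uses a hypothetical normalized-difference index over SWIR/NIR reflectance and a local-background increment loosely analogous in spirit to ΔTAI. The function names, index form, and window size are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def thermal_anomaly_index(swir, nir, eps=1e-6):
    """Hypothetical normalized-difference index: a hot flare raises SWIR
    reflectance relative to NIR, so larger values suggest thermal anomalies.
    NOT the paper's TAI definition -- an illustrative stand-in."""
    return (swir - nir) / (swir + nir + eps)

def index_increment(tai, window=3):
    """Contrast of each pixel's index against its local mean background,
    loosely analogous to the abstract's delta-TAI idea."""
    pad = window // 2
    padded = np.pad(tai, pad, mode="edge")
    local_mean = np.zeros_like(tai)
    h, w = tai.shape
    for i in range(h):
        for j in range(w):
            local_mean[i, j] = padded[i:i + window, j:j + window].mean()
    return tai - local_mean

# Toy scene: uniform background with one bright-SWIR "flare" pixel.
nir = np.full((5, 5), 0.30)
swir = np.full((5, 5), 0.25)
swir[2, 2] = 0.90                       # flare-like anomaly
tai = thermal_anomaly_index(swir, nir)
dtai = index_increment(tai)
flare_loc = np.unravel_index(np.argmax(dtai), dtai.shape)  # peaks at the flare pixel
```

In the full method, such index maps would only supply candidate anomalies; the nine flare-specific features and the random forest then separate true flares from other heat sources and bright buildings.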
Pub Date : 2026-01-27 DOI: 10.1016/j.jag.2026.105121
Jun Xiang, Xiuhao Liang, Jiawei Jiang, Dongxia Zou, Wei Wei, Dengkui Mo, Xiaoming Qiu, Chen Liang, Kai Lu
Eucalyptus plantations, characterized by uniform stand structure and relatively sparse foliage, are highly susceptible to typhoon damage, which poses substantial economic risks to forestry operations and undermines regional ecosystem stability in southern China. This study focuses on windthrown eucalyptus forests in Shangsi and Bobai Counties, Guangxi Zhuang Autonomous Region, following Typhoons Ma-on and Talim. We developed a specialized UAV (unmanned aerial vehicle)-based deep learning dataset and proposed a novel segmentation model, GIBN-Net, which incorporates an asymmetric global attention module to enhance feature extraction. Evaluation results, including ablation and comparative experiments, show that GIBN-Net outperforms mainstream models, achieving optimal accuracy (OA: 0.9405, Precision: 0.8879, Recall: 0.9105, IoU: 0.8248, F1-Score: 0.8722). The study also revealed that tailored preprocessing strategies significantly improved boundary detection, while simply stacking the UAV-derived DSM with RGB imagery decreased model accuracy. This research is the first to develop a dedicated dataset and an optimized deep learning model for detecting typhoon-damaged eucalyptus in southern China. The GIBN-Net model detects wind damage in eucalyptus forests more accurately and efficiently in complex forest environments. The dedicated dataset, combined with spatial context information, improves boundary accuracy and facilitates faster damage mapping. This study enables rapid, large-scale, and automated assessment of typhoon-induced damage in eucalyptus plantations, thereby providing actionable insights for insurance claim verification, post-disaster resource allocation, and ecological restoration planning. It can therefore serve as a timely and effective decision-support tool for forestry management and disaster response.
{"title":"GIBN-Net for automated windthrown tree detection in eucalyptus plantations using UAV imagery","authors":"Jun Xiang, Xiuhao Liang, Jiawei Jiang, Dongxia Zou, Wei Wei, Dengkui Mo, Xiaoming Qiu, Chen Liang, Kai Lu","doi":"10.1016/j.jag.2026.105121","DOIUrl":"https://doi.org/10.1016/j.jag.2026.105121","url":null,"abstract":"Eucalyptus plantations, characterized by uniform stand structure and relatively sparse foliage, are highly susceptible to typhoon damage, which poses substantial economic risks to forestry operations and undermines regional ecosystem stability in southern China. This study focuses on windthrown eucalyptus forests in Shangsi and Bobai Counties, Guangxi Zhuang Autonomous Region, following Typhoons Ma-on and TALIM . We developed a specialized UAV (Unmanned Aerial Vehicle) based deep learning dataset and proposed a novel segmentation model, GIBN-Net, which incorporates an asymmetric global attention module to enhance feature extraction. Evaluation results, including ablation and comparative experiments, show that GIBN-Net outperforms mainstream models, achieving optimal accuracy (OA: 0.9405, Precision: 0.8879, Recall: 0.9105, IoU: 0.8248, F1-Score: 0.8722). The study also revealed that tailored preprocessing strategies significantly improved boundary detection, while simple stacking of the UAV-derived DSM with RGB imagery led to a decrease in model accuracy. This research is the first to develop a dedicated dataset and an optimized deep learning model for typhoon-damaged eucalyptus detection in southern China. The GIBN-Net model is capable of detecting wind damage in eucalyptus forests more accurately and efficiently in complex forest environments. The dedicated dataset, combined with spatial context information, improves boundary accuracy and facilitates faster damage mapping. 
This study enables rapid, large-scale, and automated assessment of typhoon-induced damage in eucalyptus plantations, thereby providing actionable insights for insurance claim verification, post-disaster resource allocation, and ecological restoration planning. Therefore, it can serve as a timely and effective decision-support tool for forestry management and disaster response.","PeriodicalId":50341,"journal":{"name":"International Journal of Applied Earth Observation and Geoinformation","volume":"11 1","pages":""},"PeriodicalIF":7.5,"publicationDate":"2026-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146129303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
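The GIBN-Net scores quoted above (OA, Precision, Recall, IoU, F1) are the standard confusion-matrix metrics for binary segmentation. A self-contained sketch of their definitions on toy masks (the arrays are illustrative, not the paper's data):

```python
import numpy as np

def binary_seg_metrics(pred, truth):
    """Confusion-matrix metrics for a binary segmentation mask.
    pred/truth are boolean arrays; the positive class is the damage class."""
    tp = np.sum(pred & truth)       # predicted damage, truly damaged
    fp = np.sum(pred & ~truth)      # predicted damage, actually intact
    fn = np.sum(~pred & truth)      # missed damage
    tn = np.sum(~pred & ~truth)     # correctly intact
    oa = (tp + tn) / pred.size
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    iou = tp / (tp + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return dict(OA=oa, Precision=precision, Recall=recall, IoU=iou, F1=f1)

# Toy 2x4 masks: TP=3, FP=1, FN=1, TN=3.
truth = np.array([[1, 1, 0, 0], [1, 1, 0, 0]], dtype=bool)
pred = np.array([[1, 1, 1, 0], [1, 0, 0, 0]], dtype=bool)
m = binary_seg_metrics(pred, truth)
```

Note that IoU is always the strictest of these metrics (TP is divided by both kinds of errors at once), which is why segmentation papers typically report it alongside F1.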
Pub Date : 2026-01-26 DOI: 10.1016/j.jag.2026.105124
Yehan Sun, Jun Pan, Lijun Jiang, You Tian, Jiayi Zhang, Kaifeng Liu
Forest fire smoke significantly disrupts regional carbon cycles and degrades air quality, highlighting an urgent need for highly sensitive detection and inversion models to improve early warning systems. This study proposes a physics-based remote sensing framework for smoke detection and concentration inversion, founded on scattering–absorption theory. By employing multi-dimensional spectral point cluster analysis based on a smoke scattering–absorption model, we introduce the Mahalanobis Distance (MD) as a physically interpretable measure of spectral deviation. A physical relationship between MD and smoke concentration is derived, which serves as the foundation for a dual-threshold detection algorithm and an inversion model. The proposed approach is validated through laboratory experiments, UAV-based simulations, and satellite observations. Key findings include: (1) Smoke clusters migrate quasi-linearly from background to the ideal smoke point with increasing concentration, while their spatial extent contracts exponentially; (2) MD exhibits a double-exponential correlation with smoke concentration (RMSE = 0.18 g/m², R² = 0.80); (3) Detection accuracy reaches 0.89–0.97 (UAV) and 0.93 (satellite), with fire localization errors of 0.52 ± 0.35 m and 45.47 ± 42.22 m, respectively. These results confirm that spectral variation is a core parameter for smoke retrieval, and MD provides a robust, physically grounded measure for detection and inversion. The proposed model overcomes key limitations of empirical threshold methods and offers a scalable solution for early forest fire monitoring across platforms.
{"title":"A physics-based remote sensing framework for forest fire smoke detection toward early fire warning","authors":"Yehan Sun, Jun Pan, Lijun Jiang, You Tian, Jiayi Zhang, Kaifeng Liu","doi":"10.1016/j.jag.2026.105124","DOIUrl":"https://doi.org/10.1016/j.jag.2026.105124","url":null,"abstract":"Forest fire smoke significantly disrupts regional carbon cycles and degrades air quality, highlighting an urgent need for highly sensitive detection and inversion models to improve early warning systems. This study proposes a physics-based remote sensing framework for smoke detection and concentration inversion, founded on scattering–absorption theory. By employing multi-dimensional spectral point cluster analysis based on a smoke scattering–absorption model, we introduce the Mahalanobis Distance (MD) as a physically interpretable measure of spectral deviation. A physical relationship between MD and smoke concentration is derived, which serves as the foundation for a dual-threshold detection algorithm and an inversion model. The proposed approach is validated through laboratory experiments, UAV-based simulations, and satellite observations. Key findings include: (1) Smoke clusters migrate quasi-linearly from background to the ideal smoke point with increasing concentration, while their spatial extent contracts exponentially; (2) MD exhibits a double-exponential correlation with smoke concentration (RMSE = 0.18 g/m<ce:sup loc=\"post\">2</ce:sup>, R<ce:sup loc=\"post\">2</ce:sup> = 0.80); (3) Detection accuracy reaches 0.89–0.97 (UAV) and 0.93 (satellite), with fire localization errors of 0.52 ± 0.35 m and 45.47 ± 42.22 m, respectively. These results confirm that spectral variation is a core parameter for smoke retrieval, and MD provides a robust, physically grounded measure for detection and inversion. 
The proposed model overcomes key limitations of empirical threshold methods and offers a scalable solution for early forest fire monitoring across platforms.","PeriodicalId":50341,"journal":{"name":"International Journal of Applied Earth Observation and Geoinformation","volume":"31 1","pages":""},"PeriodicalIF":7.5,"publicationDate":"2026-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146047822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
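The framework's central quantity is the Mahalanobis distance of a pixel spectrum from the background cluster, with a dual-threshold rule applied to that distance. A minimal sketch under simplifying assumptions (synthetic 4-band data and illustrative threshold values; the paper's actual bands, clusters, and thresholds differ):

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance of spectrum x from a cluster with the given
    mean and inverse covariance: sqrt((x - m)^T C^-1 (x - m))."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
# Background cluster: 500 clear-sky "spectra" in 4 bands.
background = rng.normal(loc=[0.1, 0.2, 0.3, 0.4], scale=0.01, size=(500, 4))
mean = background.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(background, rowvar=False))

clear_pixel = np.array([0.10, 0.20, 0.30, 0.40])
smoke_pixel = np.array([0.25, 0.30, 0.32, 0.41])  # scattering lifts the shorter bands

d_clear = mahalanobis(clear_pixel, mean, cov_inv)
d_smoke = mahalanobis(smoke_pixel, mean, cov_inv)

# Dual-threshold rule in the spirit of the abstract (values illustrative):
T_LOW, T_HIGH = 5.0, 15.0
label = "smoke" if d_smoke > T_HIGH else "candidate" if d_smoke > T_LOW else "clear"
```

Because the distance is normalized by the background covariance, a deviation of a few reflectance percent in a low-variance band is weighted far more heavily than the same deviation in a noisy band, which is what makes MD physically interpretable as "spectral departure from clear conditions."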
Pub Date : 2026-01-24 DOI: 10.1016/j.jag.2026.105128
Qinglin Yang, Yao Guan, Haixu He, Suzhen Yang, Haonan Sun, Jining Yan
Mangroves in the Beibu Gulf of the northern South China Sea are vital for coastal protection and carbon storage but have undergone pronounced spatial changes over recent decades due to rapid urbanization and climate variability. Existing monitoring efforts, largely constrained to annual temporal resolution, fail to capture seasonal dynamics and fine-scale transformations induced by human activities. In addition, existing change detection methods are based either on statistical machine learning or on fully convolutional deep learning, both of which have limited ability to detect long-term dynamic changes in mangrove forests. To address these challenges, a temporal semantic segmentation change detection framework with multi-scale dilated convolutions (TSSCD-D) was proposed, and the Landsat time series (1987–2022) was used for high-resolution monitoring of mangrove dynamics. The proposed approach expands the temporal receptive field, enabling improved representation of long-term changes. Model evaluation demonstrates an overall accuracy of 87.3%, with producer's and user's accuracies for mangroves of 97.1% and 96.8%, outperforming other methods (see Table 2). Analysis reveals a sharp decline in mangrove area from 856.1 km² in 1987 to 532.8 km² in 1998, primarily driven by reclamation and infrastructure development, followed by gradual recovery to 824.3 km² by 2022 under conservation policies. Transition mapping shows that conversions with water bodies, marshes, and irrigated croplands account for 80% of total changes, indicating strong hydrological and anthropogenic influences. These findings provide critical insights and fine-scale datasets to support long-term mangrove conservation and management in tropical coastal systems. More details on TSSCD-D can be found at https://github.com/CUG-BEODL/TSSCD-D.
{"title":"Monitoring of mangrove dynamic change with Landsat time series from 1987 to 2022 in the Beibu Gulf, China","authors":"Qinglin Yang, Yao Guan, Haixu He, Suzhen Yang, Haonan Sun, Jining Yan","doi":"10.1016/j.jag.2026.105128","DOIUrl":"https://doi.org/10.1016/j.jag.2026.105128","url":null,"abstract":"Mangroves in the Beibu Gulf of the northern south China sea are vital for coastal protection and carbon storage but have undergone pronounced spatial changes over recent decades due to rapid urbanization and climate variability. Existing monitoring efforts, largely constrained to annual temporal resolution, fail to capture seasonal dynamics and fine-scale transformations induced by human activities. In addition, existing change detection methods are either based on statistical machine learning or fully convolutional deep learning, which have limited ability to detect long-term dynamic changes in mangrove forests. To address these challenges, a temporal semantic segmentation change detection with multi-scale dilated convolutions (TSSCD-D) framework was proposed, and the Landsat time series (1987–2022) was used for high-resolution mangrove dynamics monitoring. The proposed approach expands the temporal receptive field, enabling improved representation of long-term changes. Model evaluation demonstrates an overall accuracy of 87.3%, with producer’s and user’s accuracies for mangroves of 97.1% and 96.8%, outperforming other methods (see Table 2). 
Analysis reveals a sharp decline in mangrove area from 856.1 km<mml:math altimg=\"si1.svg\" display=\"inline\"><mml:msup><mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math> in 1987 to 532.8 km<mml:math altimg=\"si1.svg\" display=\"inline\"><mml:msup><mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math> in 1998, primarily driven by reclamation and infrastructure development, followed by gradual recovery to 824.3 km<mml:math altimg=\"si1.svg\" display=\"inline\"><mml:msup><mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:math> by 2022 under conservation policies. Transition mapping shows that conversions with water bodies, marshes, and irrigated croplands account for 80% of total changes, indicating strong hydrological and anthropogenic influences. These findings provide critical insights and fine-scale datasets to support long-term mangrove conservation and management in tropical coastal systems.More details on TSSCD-D can be found at <ce:inter-ref xlink:href=\"https://github.com/CUG-BEODL/TSSCD-D\" xlink:type=\"simple\">https://github.com/CUG-BEODL/TSSCD-D</ce:inter-ref>.","PeriodicalId":50341,"journal":{"name":"International Journal of Applied Earth Observation and Geoinformation","volume":"143 1","pages":""},"PeriodicalIF":7.5,"publicationDate":"2026-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146047821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
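The claim that multi-scale dilated convolutions "expand the temporal receptive field" follows from simple arithmetic: for a stack of stride-1 1-D convolutions, RF = 1 + Σᵢ (kᵢ − 1)·dᵢ, so doubling the dilation at each layer grows the receptive field exponentially in depth while the parameter count grows only linearly. A quick sketch (kernel sizes and dilation rates are illustrative, not TSSCD-D's actual configuration):

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 1-D convolutions:
    RF = 1 + sum_i (k_i - 1) * d_i."""
    return 1 + sum((k - 1) * d for k, d in zip(kernel_sizes, dilations))

# Four k=3 layers: plain vs. exponentially dilated (1, 2, 4, 8).
plain = receptive_field([3, 3, 3, 3], [1, 1, 1, 1])    # covers 9 time steps
dilated = receptive_field([3, 3, 3, 3], [1, 2, 4, 8])  # covers 31 time steps
```

For a multi-decade Landsat series this matters: a wide temporal receptive field lets a single output position condition on observations years apart, which is what allows abrupt loss events and slow recovery trends to be represented by the same model.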