
Latest articles in International Journal of Applied Earth Observation and Geoinformation: ITC Journal

Integrated and simultaneous mapping of blue carbon ecosystems by using tide-level, phenological, and biophysical features from optical and SAR images
IF 8.6 Q1 REMOTE SENSING Pub Date: 2026-02-01 Epub Date: 2026-01-08 DOI: 10.1016/j.jag.2025.105076
Leping Wang , Qian Zhang , Yangfan Li
Blue carbon ecosystems (BCEs) are nature-based solutions critical for mitigating climate change and biodiversity loss. Accurate mapping of BCEs is fundamental to carbon accounting, maximizing their ecosystem service value, and informing conservation and restoration efforts. Yet, most existing studies focus on single-ecosystem mapping and lack multi-class classification approaches capable of addressing the spectral similarity among different BCEs. To address this issue, we developed a novel algorithm on Google Earth Engine, namely Multi-class Blue Carbon Ecosystem Mapping by integrating Tide-level, Phenological, and Biophysical features (MBCEM-TPB), to simultaneously map mangroves, saltmarshes, and intertidal seagrass meadows, thereby characterizing the full composition of BCEs. Specifically, we first composited multi-temporal imagery under different tidal levels, phenological stages, and biophysical features from Sentinel-1 and Sentinel-2 data. Based on spectral similarity principles, we performed training sample migration and then generated interannual blue carbon maps for 2019, 2021, and 2023 using a Random Forest classifier. The algorithm was evaluated across eight study sites encompassing different BCE combinations (two or three ecosystem types) spanning diverse climate zones, bioregions, and levels of ecosystem complexity. The overall accuracy of the MBCEM-TPB algorithm exceeded 93.65% across the three periods, demonstrating its robustness and generalizability, even in complex intertidal landscapes. This study provides the first unified multi-class classification algorithm for BCEs and offers a generalizable approach applicable at global scales, supporting refined blue carbon accounting and ecosystem management.
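The tide-level compositing step can be illustrated with a minimal NumPy sketch (this is not the authors' GEE implementation; using NDWI extremes as a tide proxy is an assumption here): per pixel, a low-tide composite keeps the observation with the least water, exposing intertidal vegetation such as seagrass.

```python
import numpy as np

def tide_composites(ndwi_stack):
    """Per-pixel low/high-tide composites from a temporal NDWI stack (T, H, W).

    Low tide ~ minimum NDWI (least water), high tide ~ maximum NDWI.
    Illustrative stand-in for the paper's GEE compositing, not its code.
    """
    low_tide = ndwi_stack.min(axis=0)
    high_tide = ndwi_stack.max(axis=0)
    return low_tide, high_tide

# toy stack: 3 acquisition dates over a 2x2 scene
stack = np.array([[[0.2, -0.1], [0.5, 0.0]],
                  [[0.6,  0.3], [0.1, -0.2]],
                  [[0.4,  0.0], [0.3,  0.4]]])
low, high = tide_composites(stack)
```

In practice the low-tide composite would feed the Random Forest classifier alongside phenological and biophysical features.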
Citations: 0
DA-MiTUNet: A Mix Transformer with dual attention embedding in unet for Land-Sea segmentation of remote sensing images
IF 8.6 Q1 REMOTE SENSING Pub Date: 2026-02-01 Epub Date: 2026-01-05 DOI: 10.1016/j.jag.2025.105066
Jiawei Wu , Zijian Liu , Qixiang Tong , Zhipeng Zhu , Hui He , Xinghui Wu , Haihua Xing
Automatic extraction of coastlines from remote sensing images is of great practical importance for coastal risk assessment, ecological environmental protection, and marine economic development. However, the highly dynamic nature of coastlines and the complex, diverse characteristics of land–sea boundaries make precise coastline extraction a challenging task. Although traditional deep learning methods have demonstrated good performance in this respect, they still face notable shortcomings, including high computational costs and incomplete exploitation of multiscale features. In this paper, to address these problems, we propose a novel and efficient land–sea segmentation model for remote sensing imagery based on a classical U-shaped network structure, named DA-MiTUNet. On the one hand, we introduce the convolutional block attention module into the Mix Transformer (MiT), forming a dual-attention encoder in conjunction with an efficient self-attention mechanism. This integration ensures comprehensive extraction of global context and local information, thereby enabling more precise determination of complex land–sea boundary features. On the other hand, we propose an adaptive feature fusion module to further promote the effective fusion of features across different hierarchical levels, achieving more refined land–sea boundary segmentation. Experimental results on the Gaofen-1 Hainan Coastline Dataset (GF–HNCD) and the Benchmark Sea–Land Dataset (BSD) demonstrate that the proposed DA-MiTUNet model outperforms other comparative models in terms of both the average F1 score and the mean Intersection over Union value, while achieving excellent segmentation results with relatively low computational complexity, reflecting the potential of our model for dynamic coastal monitoring during extreme sea level events.
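The dual-attention idea (channel gating from pooled descriptors plus spatial gating from channel-pooled maps, as in convolutional block attention) can be sketched in NumPy; the actual DA-MiTUNet module is a learned network layer, and the weights below are random stand-ins, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention(x, w1, w2):
    """Illustrative channel + spatial attention on features x of shape (C, H, W).

    w1, w2: random stand-ins for the learned channel-branch MLP.
    """
    # channel attention from global average- and max-pooled descriptors
    avg = x.mean(axis=(1, 2))
    mx = x.max(axis=(1, 2))
    ca = sigmoid(w2 @ np.tanh(w1 @ avg) + w2 @ np.tanh(w1 @ mx))  # (C,)
    x = x * ca[:, None, None]
    # spatial attention from channel-wise mean and max maps
    sa = sigmoid(x.mean(axis=0) + x.max(axis=0))                  # (H, W)
    return x * sa[None, :, :]

x = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4))
w2 = rng.standard_normal((4, 2))
y = dual_attention(x, w1, w2)
```

Because both gates lie in (0, 1), the module re-weights rather than amplifies features, which is how attention suppresses background responses along ambiguous land–sea boundaries.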
Citations: 0
Large-scale high-resolution coastal subsidence mapping in eastern China with Sentinel-1 and Sentinel-2: Heterogeneous patterns and primary drivers
IF 8.6 Q1 REMOTE SENSING Pub Date: 2026-02-01 Epub Date: 2026-01-06 DOI: 10.1016/j.jag.2025.105047
Peng Li , Jianbo Bai , Lin Shen , Wei Tang , Cunren Liang , Bin Zhao , Zhenhong Li , Houjie Wang
A majority of the low-lying coastal areas worldwide, where populations are densely concentrated, are confronted with high to extremely high risks of land subsidence. However, a comprehensive detection and quantification of large-scale coastal subsidence patterns and their primary drivers in eastern China remains lacking. In this study, we employed the Time Series Interferometric Synthetic Aperture Radar (TS-InSAR) technique and Sentinel-1 data to derive the vertical deformation pattern at a resolution of 90 m from 2017 to 2024. We developed a novel multi-frame mosaicking method, achieving spatially consistent InSAR observations over the land-sea transition areas. Our findings uncover extensive coastal subsidence in northeastern Shandong and eastern Jiangsu, highlighting several rapid-subsidence funnels with rates exceeding 50 mm/yr for the first time. By integrating Sentinel-2 multispectral imagery, subsidence time series, groundwater level measurements, and principal component analysis (PCA), we further analyzed the spatiotemporal distribution patterns and underlying drivers of these heterogeneous subsidence funnels. Our analysis demonstrates that anthropogenic factors are the dominant drivers of coastal subsidence. Four representative case studies reveal distinct subsidence mechanisms: (1) brine extraction for salt production and aquaculture, (2) excessive freshwater withdrawal for agricultural irrigation and industrial use, (3) groundwater depletion for intensive greenhouse aquaculture, and (4) land reclamation for industrial infrastructure development. In each region, subsidence patterns are predominantly controlled by a single dominant factor. This study is expected to provide valuable insights for monitoring and managing coastal subsidence, enhance our understanding of associated risks, and offer critical guidance for protecting communities in vulnerable coastal areas.
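The PCA step used to separate dominant deformation patterns from InSAR time series can be sketched with an SVD on synthetic data (the displacements below are fabricated for illustration, not the study's measurements):

```python
import numpy as np

def pca_first_component(ts):
    """First principal component of displacement time series ts (n_points, n_epochs).

    Returns (spatial scores, temporal pattern, explained-variance ratio).
    Illustrative only; the study's PCA configuration may differ.
    """
    centered = ts - ts.mean(axis=0, keepdims=True)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = u[:, 0] * s[0]              # spatial weights of PC1
    pattern = vt[0]                      # temporal signature of PC1
    explained = s[0] ** 2 / np.sum(s ** 2)
    return scores, pattern, explained

# synthetic funnel: 50 pixels share one linear subsidence trend with varying amplitude
t = np.linspace(0, 1, 24)
amp = np.linspace(0, -50, 50)            # mm-scale amplitudes
ts = amp[:, None] * t[None, :] + 0.1 * np.random.default_rng(1).standard_normal((50, 24))
scores, pattern, explained = pca_first_component(ts)
```

A high explained-variance ratio for PC1 is the signature of a single dominant driver, matching the study's finding that each funnel is controlled by one mechanism.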
Citations: 0
ForResANeXt: Forest/non-forest segmentation with aggregated residual attention network in satellite imagery
IF 8.6 Q1 REMOTE SENSING Pub Date: 2026-02-01 Epub Date: 2026-01-14 DOI: 10.1016/j.jag.2026.105105
Qianhuizi Guo , Liangzhi Li , Ling Han
Accurate mapping of forest (F) and non-forest (NF) areas is essential for ecological assessment, resource management, and deforestation monitoring. However, complex backgrounds, severe class imbalance and redundant features continue to limit the accuracy and efficiency of network segmentation. To overcome these issues, we present ForResANeXt, a novel semantic segmentation network that uses Sentinel-2 multispectral imagery for forest/non-forest mapping. The model incorporates an AResCAB to enrich contextual feature representations while reducing redundancy and a lightweight embedded attention module to improve positional awareness. Furthermore, attention-gated skip connections suppress background noise and emphasize key spatial information, and a Focal Dice Loss function mitigates the impact of severe class imbalance. Experimental results demonstrate that ForResANeXt achieves a mIoU of 95.31%, surpassing U-Net and mainstream CNN variants in recall and F1 score for the minority non-forest class. It also outperforms several representative advanced CNN architectures and Transformer-based models in terms of Boundary IoU and Small Object Recall. Qualitative comparisons further confirm its superior capability in preserving structural details and delineating complex boundaries with reduced misclassification. Cross-regional transfer experiments validate the model’s robustness and generalization capability across diverse geographical and temporal conditions, and ablation studies confirm the effectiveness of each proposed component. Overall, ForResANeXt shows great promise for efficient and accurate forest cover mapping using multispectral satellite data.
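One common way to combine focal and Dice terms against class imbalance can be sketched as follows; the paper's exact Focal Dice Loss formulation and weights are not given in the abstract, so this blend (and the `alpha`, `gamma`, `lam` values) is an assumption for illustration.

```python
import numpy as np

def focal_dice_loss(p, y, alpha=0.25, gamma=2.0, lam=0.5, eps=1e-7):
    """Hedged sketch: focal binary cross-entropy plus soft Dice.

    p: predicted foreground probabilities in (0, 1); y: binary ground truth.
    """
    p = np.clip(p, eps, 1 - eps)
    # focal term down-weights easy pixels, countering class imbalance
    focal = -(alpha * y * (1 - p) ** gamma * np.log(p)
              + (1 - alpha) * (1 - y) * p ** gamma * np.log(1 - p)).mean()
    # soft Dice term rewards region overlap for the minority class
    inter = (p * y).sum()
    dice = 1 - (2 * inter + eps) / (p.sum() + y.sum() + eps)
    return lam * focal + (1 - lam) * dice

y = np.array([[1, 0], [0, 0]], float)                       # minority foreground
good = focal_dice_loss(np.array([[0.9, 0.1], [0.1, 0.1]]), y)
bad = focal_dice_loss(np.array([[0.1, 0.9], [0.9, 0.9]]), y)
```

The focal exponent shrinks gradients from confidently correct pixels, so rare non-forest pixels dominate training instead of being drowned out.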
Citations: 0
Reverse degradation for remote sensing pan-sharpening
IF 8.6 Q1 REMOTE SENSING Pub Date: 2026-02-01 Epub Date: 2026-01-21 DOI: 10.1016/j.jag.2026.105085
Jiang He, Xiao Xiang Zhu
Accurate pan-sharpening of multispectral images is essential for high-resolution remote sensing, yet supervised methods are limited by the need for paired training data and poor generalization. Existing unsupervised approaches often neglect the physical consistency between degradation and fusion and lack sufficient constraints, resulting in suboptimal performance in complex scenarios. We propose RevFus, a novel two-stage pan-sharpening framework. In the first stage, an invertible neural network models the degradation process and reverses it for fusion with cycle-consistency self-learning, ensuring a physically grounded mapping. In the second stage, structural detail compensation and spatial–spectral contrastive learning alleviate detail loss and enhance spectral–spatial fidelity. To further understand the network’s decision-making, we design a quantitative and systematic measure of model interpretability, the Interpretability Efficacy Coefficient (IEC). IEC integrates multiple statistics derived from SHapley Additive exPlanations (SHAP) values into a single unified score and seeks to evaluate how effectively a model balances spatial detail enhancement with spectral preservation. Experiments on three datasets demonstrate that RevFus outperforms state-of-the-art unsupervised and traditional methods, delivering superior spectral fidelity, enhanced spatial detail, and high model interpretability, thereby validating the effectiveness of the interpretable deep learning framework for robust, high-quality pan-sharpening.
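The cycle-consistency self-learning constraint can be sketched with a toy degradation operator (2x2 average pooling stands in for RevFus's learned invertible degradation network, which is not reproduced here): a fused product, when re-degraded, must match the observed low-resolution band.

```python
import numpy as np

def degrade(hr):
    """Assumed degradation operator: 2x2 average pooling (a stand-in for the
    learned invertible degradation model)."""
    h, w = hr.shape
    return hr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def cycle_consistency_loss(fused, lr):
    """Self-supervised constraint: the re-degraded fusion result must match
    the observed low-resolution multispectral band."""
    return float(np.mean((degrade(fused) - lr) ** 2))

rng = np.random.default_rng(2)
hr_truth = rng.random((8, 8))
lr_obs = degrade(hr_truth)
loss_good = cycle_consistency_loss(hr_truth, lr_obs)        # consistent fusion
loss_bad = cycle_consistency_loss(rng.random((8, 8)), lr_obs)
```

No paired high-resolution ground truth is needed: the low-resolution observation itself supervises the fusion, which is what makes the framework unsupervised yet physically grounded.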
Citations: 0
A global-local interaction and conditional consistency constrained diffusion model for SAR-guided optical image cloud removal
IF 8.6 Q1 REMOTE SENSING Pub Date: 2026-02-01 Epub Date: 2025-12-11 DOI: 10.1016/j.jag.2025.105013
Liwen Cao , Jun Pan , Jiangong Xu , Tao Chen , Qiangqiang Yuan , Jizhang Sang
Cloud cover constitutes a formidable obstacle in the field of optical remote sensing image processing, substantially impeding the extraction and utilization of surface information. Synthetic Aperture Radar (SAR) imagery, serving as a complementary informational resource, is capable of furnishing crucial auxiliary data for optical images. In recent years, diffusion-based cloud removal methodologies have made significant progress. Nevertheless, their inherent generative diversity and randomness pose challenges in meeting the realism requirements for cloud removal in optical remote sensing imagery. To address this, this paper presents a SAR-guided optical imagery cloud removal method based on global–local interaction and conditional consistency-constrained diffusion models (GLCdiffcr). Specifically, the method integrates a multi-scale residual self-attention network in the denoising module. This network captures both global and local details of SAR imagery and the captured details provide precise guidance for cloud removal. Additionally, within the reverse diffusion framework, the method directly predicts cloud-free optical images and iterates over multiple steps, reducing errors caused by generative randomness and improving consistency. Meanwhile, in order to enhance the realism of the generated images, the method employs a novel multi-condition consistency-constrained loss function, which combines pixel-level errors with structural similarity measures. Through this loss function, the gap between the generated images and real-world land cover types is further minimized. Experimental results demonstrate that the proposed method outperforms current state-of-the-art methods in both quantitative metrics and visual quality, particularly in complex regions, with higher accuracy and reliability.
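A loss combining pixel-level error with structural similarity, as the abstract describes, can be sketched with a single-window SSIM (real implementations use local windows, and the paper's exact weighting and constants are assumptions here):

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM over whole images scaled to [0, 1]; illustrative only."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def pixel_ssim_loss(pred, target, lam=0.5):
    """Hedged sketch of a consistency loss mixing L1 error with (1 - SSIM);
    the exact multi-condition formulation in GLCdiffcr may differ."""
    l1 = np.abs(pred - target).mean()
    return lam * l1 + (1 - lam) * (1 - ssim_global(pred, target))

rng = np.random.default_rng(3)
img = rng.random((16, 16))
loss_same = pixel_ssim_loss(img, img)
loss_diff = pixel_ssim_loss(rng.random((16, 16)), img)
```

The SSIM term penalizes structural distortions that a pure pixel loss tolerates, pushing generated cloud-free pixels toward realistic land-cover texture.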
Citations: 0
A geometric consistency constrained hierarchical global SfM for large-scale UAV images 基于几何一致性约束的大尺度无人机图像分层全局SfM
IF 8.6 Q1 REMOTE SENSING Pub Date : 2026-02-01 Epub Date: 2026-01-06 DOI: 10.1016/j.jag.2025.105075
Yan Zhou , Xianwei Zheng , Jinding Gao , Qian Shi , Xiaoping Liu
Efficient and effective pose estimation and 3D point cloud reconstruction from low-altitude UAV images are crucial for digital twins and geospatial applications. However, conventional divide-and-conquer SfM pipelines remain constrained by the high computational cost of incremental reconstruction, while global SfM, though faster, often suffers from instability due to outlier sensitivity in translation averaging. To address these limitations, we propose a feature-track-enhanced global SfM within a divide-and-conquer framework. To deal with outliers, a cluster-consistency-constrained outlier filtering method is proposed, which combines distribution-aware scene partitioning and cluster consistency to remove false matches. Furthermore, we propose a geometry-constrained alignment optimization strategy for the sub-model merging process to eliminate misalignments and ghosting artifacts, yielding complete and accurate 3D models. Extensive experiments on the ETH3D and SYSU UAV large-scale datasets verify that the proposed method outperforms established baselines, with accuracy comparable to incremental SfM at half the computation time. The reconstructed 3D models show notable improvements in both structural integrity and level of detail, demonstrating the method's applicability to large-scale 3D reconstruction tasks. The code and data will be released at https://github.com/BunnyanChou/HieGSfM.
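The abstract does not spell out its cluster-consistency filter; as an illustrative stand-in only, a standard geometric-consistency check used in global SfM pipelines is rotation loop closure over camera triplets: composing the three relative rotations of a clean triplet returns (approximately) the identity, so a large loop residual flags an outlier match.

```python
import numpy as np

def rot_z(deg):
    """Rotation about the z-axis; used here only to build example data."""
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

def rotation_angle_deg(R):
    """Geodesic distance of a rotation matrix from the identity, in degrees."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(c))

def triplet_consistent(R_ij, R_jk, R_ki, tol_deg=2.0):
    """With R_ab the relative rotation from camera a to camera b, a clean
    triplet satisfies R_ki @ R_jk @ R_ij ~ I; a large residual marks at
    least one of the three pairwise estimates as an outlier."""
    residual = rotation_angle_deg(R_ki @ R_jk @ R_ij)
    return residual < tol_deg
```

For example, relative z-rotations of 10°, 20°, and -30° close the loop, while replacing the last with -50° leaves a 20° residual and is rejected.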
{"title":"A geometric consistency constrained hierarchical global SfM for large-scale UAV images","authors":"Yan Zhou ,&nbsp;Xianwei Zheng ,&nbsp;Jinding Gao ,&nbsp;Qian Shi ,&nbsp;Xiaoping Liu","doi":"10.1016/j.jag.2025.105075","DOIUrl":"10.1016/j.jag.2025.105075","url":null,"abstract":"<div><div>Efficient and effective pose estimation and 3D point clouds reconstruction from low-altitude UAV images is crucial for digital twins and geospatial applications. However, conventional divide-and-conquer SfM pipelines remain constrained by the high computational cost of incremental reconstruction. While global SfM, though faster, often suffer from instability due to outlier sensitivity in translation averaging. To address these limitations, we propose a feature track enhanced global SfM within divide-and-conquer framework. To deal with outliers, a cluster consistency constrained outlier filtering method is proposed, which combines distribution-aware scene partition and cluster consistency to remove false matches. Furthermore, we propose a geometry-constrained alignment optimization strategy in sub-models merging process to eliminate misalignments and ghosting artifacts, obtaining complete and accurate 3D models. Extensive experiments on ETH3D and SYSU UAV large-scale datasets verify that the proposed method outperforms established baselines, with comparable accuracy to the incremental SfM but half of the computation time. The reconstructed 3D models demonstrate notable enhancements in both structural integrity and level of detail, proving the method’s high applicability in large-scale 3D reconstruction tasks. 
The code and data will be released: <span><span><u>https://github.com/BunnyanChou/HieGSfM</u></span><svg><path></path></svg></span>.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"146 ","pages":"Article 105075"},"PeriodicalIF":8.6,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
City-scale building instance segmentation from LiDAR point clouds via structure-aware method 基于结构感知方法的LiDAR点云城市尺度建筑实例分割
IF 8.6 Q1 REMOTE SENSING Pub Date : 2026-02-01 Epub Date: 2026-01-06 DOI: 10.1016/j.jag.2026.105086
Jinpeng Li , Yuan Li , Yiping Chen , Hongchao Fan , Ruisheng Wang
Building instance segmentation from city-scale point clouds is of great significance to urban planning and management, disaster response and recovery, and land resource management. However, due to the complexity of urban scenes and the sparse nature of LiDAR data, existing methods are often limited by obscured building boundaries and incomplete building structures, particularly in densely populated urban areas with diverse architectural styles. To address these challenges, we propose a novel method that automatically extracts building instances from airborne LiDAR data and is explicitly aware of building structures. The proposed method encompasses two main stages: building-point semantic segmentation and individual building extraction. First, we design a point cloud semantic segmentation network, VPBE-Net, that innovatively utilizes voxel-point fused features to efficiently extract building points from large-scale point clouds. Second, building instances are automatically and robustly extracted using a graph-based algorithm, SI-DVDC, which comprehensively considers both object-level building structure properties and point-level density accessibility. We evaluate semantic segmentation performance on the DALES and Toronto datasets and building instance segmentation performance on the UrbanBIS and City-BIS datasets. For semantics, Overall Accuracy (OA) and mean Intersection over Union (mIoU) reach 88.96% and 70.28% on the DALES dataset and 89.26% and 75.40% on the Toronto dataset, which is 2.22% and 3.25% higher than the state-of-the-art methods. For building instance extraction, the instance-level quality metric reaches 88.65% on the UrbanBIS dataset and 76.97% on the City-BIS dataset. The experiments verify that the proposed method can extract individual buildings from complex urban and rural environments while remaining aware of diverse building structures, demonstrating remarkable generalization ability. To facilitate future research, we make the source code and dataset available at https://github.com/Lijp411/City-BIS.
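The OA and mIoU figures quoted above are both derived from the class confusion matrix; a minimal sketch of that computation:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Accumulate a (true class, predicted class) count matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    """OA: fraction of points on the diagonal (correctly classified)."""
    return np.trace(cm) / cm.sum()

def mean_iou(cm):
    """mIoU: per-class intersection / union, averaged over classes
    that actually occur (empty classes are skipped)."""
    inter = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    iou = np.where(union > 0, inter / np.maximum(union, 1), np.nan)
    return np.nanmean(iou)
```

For instance, true labels [0, 0, 1, 1] against predictions [0, 1, 1, 1] give OA = 0.75 and mIoU = (1/2 + 2/3) / 2 = 7/12.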
{"title":"City-scale building instance segmentation from LiDAR point clouds via structure-aware method","authors":"Jinpeng Li ,&nbsp;Yuan Li ,&nbsp;Yiping Chen ,&nbsp;Hongchao Fan ,&nbsp;Ruisheng Wang","doi":"10.1016/j.jag.2026.105086","DOIUrl":"10.1016/j.jag.2026.105086","url":null,"abstract":"<div><div>Building instance segmentation from city-scale point cloud is of great significance to urban planning management, disaster response and recovery, and land resource management. However, due to the complexity of urban scenes and sparse nature of LiDAR data, existing methods are often limited by the problems of obscured building boundaries and incomplete building structures, particularly in densely populated urban areas with diverse architectural styles. To address these challenges, we propose a novel method that automatically extracts building instances from airborne LiDAR data and is especially aware of the building structures. The proposed method encompasses two main stages, building points semantic segmentation and individual building extraction. First, we design a point cloud semantic segmentation network, VPBE-Net, that innovatively utilizes voxel-point cloud fused features to efficiently extract building points from large-scale point cloud. Second, building instances are automatically and robustly extracted using a graph-based algorithm SI-DVDC, which comprehensively considers both object-level building structure property and point-level density accessibility. We evaluate the semantic segmentation performance on the DALES and Toronto datasets and the building instance segmentation performance on the UrbanBIS and City-BIS datasets. For the semantics, Overall Accuracy (OA) and mean Intersection over Union (mIoU) metrics reach 88.96 % and 70.28 % on DALES dataset, and 89.26% and 75.40% on Toronto dataset, which is 2.22 % and 3.25 % higher than the state-of-the-art methods. 
For the building instance extraction, the instance-level quality metric reach 88.65 % on UrbanBIS dataset and 76.97 % on City-BIS dataset, respectively. The experiments verify that the proposed method can extract individual buildings from complex urban and rural environments, while being aware of diverse building structures, thereby demonstrating the remarkable generalization ability. To facilitate future research, we make source code and dataset available at https://github.com/Lijp411/City-BIS.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"146 ","pages":"Article 105086"},"PeriodicalIF":8.6,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145926450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Global daily seamless XCO2 Mapping (2016–2020): Spatio-temporal trends and variations during wildfire events 全球每日无缝XCO2制图(2016-2020):野火事件的时空趋势与变化
IF 8.6 Q1 REMOTE SENSING Pub Date : 2026-02-01 Epub Date: 2026-01-13 DOI: 10.1016/j.jag.2026.105092
Jie Li , Ziyi Zhang , Tongwen Li , Qiangqiang Yuan , Liangpei Zhang
Carbon dioxide (CO2) is a dominant greenhouse gas with a considerable effect on climate change. Satellite remote sensing is commonly used to acquire atmospheric CO2 concentrations; however, the limited spatial coverage of a single satellite makes it difficult to obtain full-coverage CO2 data. In this study, a daily dataset of global seamless column-averaged dry-air mole fractions of CO2 (XCO2) at a high spatial resolution of 0.1° was generated for 2016-2020 using a stacking machine learning method. The proposed XCO2 dataset shows satisfactory performance, with a root mean square error (RMSE) of 0.9697 ppm and a correlation coefficient (R) of 0.9868 in 10-fold cross-validation. The spatial validation reveals good generalization ability, with continent-by-continent results showing R greater than 0.93. The dataset also shows high consistency and accuracy in ground-based validation, with an RMSE of 1.0855 ppm; 22 of 24 stations demonstrate an R greater than 0.95. Compared with two XCO2 model simulations, our reconstructions show better consistency with ground observations. Spatial analyses at continental, national, and Chinese provincial levels, and temporal trends at daily, monthly, seasonal, and annual scales, are provided. Furthermore, benefitting from the daily temporal resolution, two typical wildfire events, the Fort McMurray wildfire and the Blue Cut Fire, are evaluated. Our dataset can effectively capture fine-scale XCO2 variations and has the potential to characterize carbon sources and sinks. The dataset can be obtained freely at https://zenodo.org/records/15191247.
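The abstract names a stacking machine-learning method but not its base learners; the sketch below shows the generic two-level idea under assumptions of my own (OLS base models on hypothetical feature subsets, an OLS meta-model on their predictions, and no cross-validated out-of-fold predictions), together with the RMSE metric reported above.

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with an intercept column."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict_linear(w, X):
    return np.column_stack([np.ones(len(X)), X]) @ w

def stack_predict(X_train, y_train, X_test, subsets):
    """Two-level stacking: base models on feature subsets, meta-model on
    the stacked base predictions (simplified: no out-of-fold scheme)."""
    base = [fit_linear(X_train[:, s], y_train) for s in subsets]
    Z_train = np.column_stack([predict_linear(w, X_train[:, s])
                               for w, s in zip(base, subsets)])
    Z_test = np.column_stack([predict_linear(w, X_test[:, s])
                              for w, s in zip(base, subsets)])
    meta = fit_linear(Z_train, y_train)
    return predict_linear(meta, Z_test)

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))
```

On exactly linear data the meta-model can recombine the base predictions into the true response, so the RMSE collapses to numerical noise; real pipelines would use out-of-fold base predictions to avoid overfitting the meta-model.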
{"title":"Global daily seamless XCO2 Mapping (2016–2020): Spatio-temporal trends and variations during wildfire events","authors":"Jie Li ,&nbsp;Ziyi Zhang ,&nbsp;Tongwen Li ,&nbsp;Qiangqiang Yuan ,&nbsp;Liangpei Zhang","doi":"10.1016/j.jag.2026.105092","DOIUrl":"10.1016/j.jag.2026.105092","url":null,"abstract":"<div><div>Carbon dioxide (CO<sub>2</sub>) is a dominant greenhouse gas and has a considerable effect on climate change. Satellite remote sensing is commonly used to acquire atmospheric CO<sub>2</sub> concentrations. However, the limited spatial coverage of a single satellite makes the obtainment of full-coverage CO<sub>2</sub> data difficult. In this study, a daily dataset of global seamless column-averaged dry-air mole fractions of CO<sub>2</sub> (XCO<sub>2</sub>) was generated with a high spatial resolution of 0.1° from 2016 to 2020, by using a stacking machine learning method. The proposed XCO<sub>2</sub> dataset shows a satisfactory performance, with a root mean square error (RMSE) of 0.9697 ppm and correlation coefficient (R) of 0.9868 in the 10-fold cross validation. The spatial validation reveals good generalization ability, with continent-by-continent validation results showing an R greater than 0.93. The proposed dataset reports high consistency and accuracy in the ground-based validation, with an RMSE of 1.0855 ppm. Out of 24 stations, 22 demonstrate a precision of R greater than 0.95. In comparison with two XCO<sub>2</sub> model simulations, our reconstructions show a better consistency with ground observations. Spatial analyses at continent, national, and Chinese provincial levels, and temporal trends at daily, monthly, seasonal, and annual scales, are provided. Furthermore, benefitting from the daily temporal resolution, two typical examples of wildfire events, namely the Fort McMurray wildfire and the Blue Cut Fire, are evaluated. 
Our dataset can effectively capture fine-scale XCO<sub>2</sub> variations and has the potential to characterize carbon sources and sinks. The dataset can be obtained freely at <span><span>https://zenodo.org/records/15191247</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"146 ","pages":"Article 105092"},"PeriodicalIF":8.6,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145957311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A fully automatic and label-free Sentinel-1 SAR framework for green-tide mapping 用于绿潮制图的全自动无标签Sentinel-1 SAR框架
IF 8.6 Q1 REMOTE SENSING Pub Date : 2026-02-01 Epub Date: 2025-12-19 DOI: 10.1016/j.jag.2025.105036
Pengfei Tang , Peijun Du , Shanchuan Guo , Lu Qie , Wei Zhang , Peng Zhang , Mathias Réus , Jocelyn Chanussot
Green tides in the Yellow Sea are recurrent hazardous algal blooms whose optical monitoring is often hindered by cloud cover, while existing SAR approaches remain sensitive to sea state and look-alike targets and frequently require per-scene tuning or curated labels, limiting transferability and temporal consistency. To address this, we develop a label-free, fully automatic Sentinel-1 workflow that operationalizes three empirical signatures of green tides: spatial anomaly pre-location using local standard deviation on VV, edge-guided intensity separation via edge-balanced Otsu, and temporal anomaly screening using Z-scores with adaptive thresholding; an automatic object-level filter then removes non-algal marine targets. Implemented on Google Earth Engine at 10 m resolution, the pipeline delivers rapid processing without manual parameters. Validation shows high mapping accuracy: with a global stratified sample set, F1 equals 0.96 in 2019 and 0.97 in 2021; with a local edge validation set, F1 equals 0.94; in an all-pixel assessment over more than 2.5 billion pixels against a baseline, overall F1 equals 0.91. Qualitative comparisons likewise show fewer omissions of low-contrast filaments and fewer perforations within mats than GA-Net and UDNet. An ablation analysis clarifies the role of each module: the spatial pre-locator supplies contiguous candidates; the edge-guided intensity module sharpens boundaries and limits leakage; the temporal module suppresses transient bright seawater and consolidates persistent mats. Used jointly, the three constraints provide complementary information that yields the most stable cross-year performance and a favorable balance between precision and recall. Overall, the framework offers a simple, scalable, and operational pathway for fine-scale, all-weather monitoring and consistent multi-year assessment of green tides in the Yellow Sea.
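Of the workflow's three signatures, the intensity-separation and temporal-screening steps can be sketched compactly. Note the paper uses an edge-balanced Otsu variant and adaptive Z-score thresholds; this sketch substitutes the plain textbook versions:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Plain Otsu: choose the bin center maximizing between-class variance
    (the paper's edge-balanced variant is not reproduced here)."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # class-0 probability up to each bin
    mu = np.cumsum(p * centers)          # cumulative mean
    mu_t = mu[-1]
    sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0) + 1e-12)
    return centers[np.argmax(sigma_b)]

def temporal_zscore_anomaly(stack, z_thresh=2.0):
    """Flag pixels whose value in the last scene deviates from the per-pixel
    temporal baseline by more than z_thresh standard deviations.
    stack: (t, h, w); slices [0:-1] are the baseline, [-1] the target date."""
    baseline = stack[:-1]
    mu = baseline.mean(axis=0)
    sigma = baseline.std(axis=0) + 1e-6  # guard against zero-variance pixels
    return (stack[-1] - mu) / sigma > z_thresh
```

A bimodal backscatter histogram yields a threshold between the two modes, and a pixel that suddenly brightens relative to its own history is flagged as a temporal anomaly.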
{"title":"A fully automatic and label-free Sentinel-1 SAR framework for green-tide mapping","authors":"Pengfei Tang ,&nbsp;Peijun Du ,&nbsp;Shanchuan Guo ,&nbsp;Lu Qie ,&nbsp;Wei Zhang ,&nbsp;Peng Zhang ,&nbsp;Mathias Réus ,&nbsp;Jocelyn Chanussot","doi":"10.1016/j.jag.2025.105036","DOIUrl":"10.1016/j.jag.2025.105036","url":null,"abstract":"<div><div>Green tides in the Yellow Sea are recurrent hazardous algal blooms whose optical monitoring is often hindered by cloud cover, while existing SAR approaches remain sensitive to sea state and look-alike targets and frequently require per-scene tuning or curated labels, limiting transferability and temporal consistency. To address this, we develop a label-free, fully automatic Sentinel-1 workflow that operationalizes three empirical signatures of green tides: spatial anomaly pre-location using local standard deviation on VV, edge-guided intensity separation via edge-balanced Otsu, and temporal anomaly screening using Z-scores with adaptive thresholding; an automatic object-level filter then removes non-algal marine targets. Implemented on Google Earth Engine at 10 m resolution, the pipeline delivers rapid processing without manual parameters. Validation shows high mapping accuracy: with a global stratified sample set, F1 equals 0.96 in 2019 and 0.97 in 2021; with a local edge validation set, F1 equals 0.94; in an all-pixel assessment over more than 2.5 billion pixels against a baseline, overall F1 equals 0.91. Qualitative comparisons likewise show fewer omissions of low-contrast filaments and fewer perforations within mats than GA-Net and UDNet. An ablation analysis clarifies the role of each module: the spatial pre-locator supplies contiguous candidates; the edge-guided intensity module sharpens boundaries and limits leakage; the temporal module suppresses transient bright seawater and consolidates persistent mats. 
Used jointly, the three constraints provide complementary information that yields the most stable cross-year performance and a favorable balance between precision and recall. Overall, the framework offers a simple, scalable, and operational pathway for fine-scale, all-weather monitoring and consistent multi-year assessment of green tides in the Yellow Sea.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"146 ","pages":"Article 105036"},"PeriodicalIF":8.6,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145791192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0