
Latest Publications in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

A Deep Learning-Based Model for Nowcasting of Convective Initiation Using Infrared Observations
IF 5.3 CAS Tier 2 Earth Science Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2026-01-05 DOI: 10.1109/JSTARS.2025.3650686
Huijie Zhao;Xiaohang Ma;Guorui Jia;Jialu Xu;Yihan Xie;Yujun Zhao
As severe convective weather exerts growing influence on public safety, enhancing forecast accuracy has become critically important. However, the predictive capability remains limited due to insufficient observational coverage in certain regions or variables, as well as the inadequate representation of the fine-scale physical processes responsible for local convective development. In response to these challenges, this study proposes a physically embedded neural network based on heterogeneous meteorological data, which utilizes satellite multispectral images and atmospheric temperature and humidity profiles synergistically retrieved from space-based and ground-based infrared spectral observations, to forecast local convective initiation (CI) within a 6-hour lead time. The core innovation of this study lies in the development of a physically consistent model that explicitly embeds the convective available potential energy equation into the network architecture. By embedding physical information, the model enables the atmospheric thermodynamic feature extraction module to generate physically consistent feature tensors, thereby enhancing the representation of key convective processes. We trained the network using a pretraining and fine-tuning approach, then validated its effectiveness with reanalysis and actual observational data. The results demonstrate that incorporating the retrieved atmospheric profile data leads to a 40% improvement in the 6-hour average critical success index (CSI), increasing from 0.44 to 0.62 relative to forecasts without atmospheric profile input. Furthermore, in validation experiments using reanalysis data and radar observations, the proposed atmospheric profile feature extraction module consistently improves the model’s average forecast CSI by more than 29% compared to models utilizing purely data-driven profile extraction modules.
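For context, the critical success index quoted above is the standard contingency-table skill score, CSI = hits / (hits + misses + false alarms). A minimal sketch, with illustrative event counts rather than values from the paper, shows how the reported rise from 0.44 to 0.62 corresponds to a roughly 40% relative improvement:

```python
# Illustrative CSI computation; the counts are placeholders, not data from the paper.
def csi(hits: int, misses: int, false_alarms: int) -> float:
    """Critical success index = hits / (hits + misses + false_alarms)."""
    return hits / (hits + misses + false_alarms)

baseline = csi(hits=44, misses=36, false_alarms=20)        # 0.44, no profile input
with_profiles = csi(hits=62, misses=24, false_alarms=14)   # 0.62, with retrieved profiles
print(f"relative gain: {(with_profiles - baseline) / baseline:.0%}")  # about 41%
```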
Citations: 0
Few-Shot Object Detection on Remote Sensing Images Based on Decoupled Training, Contrastive Learning, and Self-Training
IF 5.3 CAS Tier 2 Earth Science Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2026-01-01 DOI: 10.1109/JSTARS.2025.3650394
Shun Zhang;Xuebin Zhang;Yaohui Xu;Ke Wang
Few-shot object detection (FSOD) in remote sensing imagery faces two critical challenges compared to general methods trained on large datasets: first, the few labeled instances available as a training set significantly limit the feature representation learning of deep neural networks; second, remote sensing image data contain complicated backgrounds and multiple objects with greatly different sizes in the same image, which leads the detector to produce large numbers of false alarms and missed detections. This article proposes an FSOD framework (called DeCL-Det) that applies self-training to generate high-quality pseudoannotations from unlabeled target domain data. These refined pseudolabels are iteratively integrated into the training set to expand supervision for novel classes. An auxiliary network is introduced to mitigate label noise by rectifying misclassifications in pseudolabeled regions, ensuring robust learning. For multiscale feature learning, we propose a gradient-decoupled framework, GCFPN, combining feature pyramid networks (FPN) with a gradient decoupled layer (GDL). The FPN extracts multiscale feature representations, and the GDL decouples the modules between the region proposal network and the RCNN head into two stages or tasks through gradients. The two modules, FPN and GDL, train Faster R-CNN in a decoupled way to facilitate the multiscale feature learning of novel objects. To further enhance the classification ability, we introduce a supervised contrastive learning head to enhance feature discrimination, reinforcing robustness in FSOD. Experiments on the DIOR dataset indicate that our method performs better than several existing approaches and achieves competitive results.
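A gradient decoupled layer of the kind described here is commonly realized as an identity mapping whose backward pass rescales (or blocks) gradients, so that the region proposal network and the RCNN head are trained in a partially decoupled way. A minimal PyTorch sketch under that reading, with an illustrative scaling constant rather than the paper's setting:

```python
# Minimal sketch of a gradient decoupled layer (GDL): identity in the forward pass,
# gradient scaled by a constant in the backward pass. The value of `lam` is illustrative.
import torch

class GradientDecoupledLayer(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Scale (or stop, when lam == 0) the gradient flowing back to the shared backbone.
        return grad_output * ctx.lam, None

def decouple(x, lam=0.1):
    return GradientDecoupledLayer.apply(x, lam)

feat = torch.randn(2, 256, 32, 32, requires_grad=True)
out = decouple(feat, lam=0.0)   # lam = 0.0 fully blocks gradients to the backbone
out.sum().backward()
print(feat.grad.abs().sum())    # 0 when lam == 0
```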
Citations: 0
Heterogeneous RFI Mitigation in Image-Domain via Subimage Segmentation and Local Frequency Feature Analysis
IF 5.3 CAS Tier 2 Earth Science Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-12-31 DOI: 10.1109/JSTARS.2025.3649816
Siqi Lai;Mingliang Tao;Yanyang Liu;Lei Cui;Jia Su;Ling Wang
Radio frequency interference (RFI) may degrade the quality of remote sensing images acquired by spaceborne synthetic aperture radar (SAR). In the interferometric wide-swath mode of the Sentinel-1 satellite, the SAR receiver may capture multiple types of RFI signals within a single observation period, which is referred to as heterogeneous RFI, increasing the complexity of interference detection and mitigation. This article proposes a heterogeneous interference mitigation method based on subimage segmentation and local spectral feature analysis. The proposed method divides the original single look complex image into multiple subimages along the range direction, enhancing the representation of interference features in the range frequency domain. Spectral analysis is then performed on each subimage to detect and mitigate interference. Finally, the image after RFI mitigation is reconstructed by stitching the subimages together. Experiments were conducted using simulated interference data generated from LuTan-1 and measured interference data from Sentinel-1. The results demonstrate that the proposed method can effectively mitigate RFI artifacts in various typical interference scenarios and restore the obscured ground object information in the images.
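The processing pattern described above can be sketched in a few lines: split the single look complex image into subimages along range, inspect each subimage's range spectrum, notch bins that stand out from the local noise level, and stitch the subimages back together. The block count and threshold below are illustrative choices, not the authors' settings.

```python
# Minimal sketch (not the authors' implementation) of subimage-wise spectral notching.
import numpy as np

def mitigate_rfi(slc: np.ndarray, n_sub: int = 8, k: float = 3.0) -> np.ndarray:
    """slc: complex array of shape (azimuth, range). Threshold factor k is illustrative."""
    blocks = np.array_split(slc, n_sub, axis=1)          # subimages along the range direction
    cleaned = []
    for block in blocks:
        spec = np.fft.fft(block, axis=1)                 # range-frequency spectrum
        power = np.abs(spec).mean(axis=0)                # average spectrum over azimuth
        mask = power > np.median(power) * k              # bins dominated by interference
        spec[:, mask] = 0.0                              # simple notch; the paper refines this
        cleaned.append(np.fft.ifft(spec, axis=1))
    return np.concatenate(cleaned, axis=1)

slc = np.random.randn(64, 512) + 1j * np.random.randn(64, 512)
slc[:, 100:105] += 50.0                                  # synthetic narrowband RFI
print(mitigate_rfi(slc).shape)
```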
Citations: 0
Granularity-Inconsistent Transformer for Unsupervised Hyperspectral Anomaly Detection
IF 5.3 CAS Tier 2 Earth Science Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-12-31 DOI: 10.1109/JSTARS.2025.3647616
Cong Wang;Yunfeng Wang;Yu Wang;Mingming Xu;Leiquan Wang
In hyperspectral anomaly detection (HAD), anomalous pixels typically exhibit a sparsely distributed spatial pattern. Existing deep models often generate backgrounds by reconstructing spectral vectors, yet fail to capture the inherent spatial characteristics of the image. To overcome the semantic and structural information loss caused by neglecting spatial features, we propose the granularity-inconsistent transformer (GIFormer) for unsupervised HAD. Specifically, the interaction between the spatial and spectral dimensions is leveraged to enhance the spatial-spectral feature representation of hyperspectral images, highlighting the differences between background and anomaly features. The GIFormer performs multilevel background reconstruction to detect anomalies. In the encoder, patch-level anomaly elimination masks are applied to reconstruct background features, where spatial correlations of anomalies are utilized to suppress anomalous patterns spanning multiple pixels. The decoder operates at the pixel level, using fine-grained receptive fields for global attention modeling, which enables the model to refine local details that may have been aggregated by the encoder in larger patches, ensuring the final reconstruction retains the intricate structure of the original hyperspectral data. Furthermore, adaptive weight loss is incorporated to guide network training. Extensive experimental results confirm the superior performance of GIFormer.
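The detection principle behind such reconstruction-based methods can be summarized in a short sketch: score each pixel by the error between its original spectrum and the reconstructed background, then threshold the score map. Here a placeholder reconstruction stands in for GIFormer's output, and the threshold rule is illustrative.

```python
# Minimal sketch of reconstruction-error anomaly scoring for a hyperspectral cube.
import numpy as np

def anomaly_score(hsi: np.ndarray, reconstruction: np.ndarray) -> np.ndarray:
    """hsi, reconstruction: (H, W, bands). Returns a per-pixel anomaly map."""
    return np.linalg.norm(hsi - reconstruction, axis=-1)

H, W, B = 64, 64, 100
hsi = np.random.rand(H, W, B)
recon = hsi + 0.01 * np.random.randn(H, W, B)          # placeholder background reconstruction
score = anomaly_score(hsi, recon)
detections = score > score.mean() + 3 * score.std()    # simple global threshold, for illustration
print(detections.sum())
```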
Citations: 0
DARFNet: A Divergence-Aware Reciprocal Fusion Network for Multispectral Feature Alignment and Fusion
IF 5.3 CAS Tier 2 Earth Science Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-12-31 DOI: 10.1109/JSTARS.2025.3647819
Junyu Huang;Jiawei Chen;Renbo Luo;Yongan Lu;Jinxin Yang;Zhifeng Wu
Robust detection of small objects in remote sensing imagery remains a significant challenge due to complex backgrounds, scale variation, and modality inconsistency. In this article, we propose DARFNet, a novel multispectral detection framework that effectively integrates RGB and infrared information for accurate small object localization. DARFNet employs a dual-branch architecture with a dynamic attention-based fusion mechanism to adaptively enhance complementary features. In addition, we incorporate lightweight yet expressive modules, ODConv and ConvNeXtBlock, to boost detection performance while maintaining computational efficiency. Extensive experiments on three widely used benchmarks, including VEDAI, NWPU, and DroneVehicle, demonstrate that DARFNet outperforms state-of-the-art methods in both accuracy and efficiency. Notably, our model shows superior performance in detecting small and densely distributed targets under complex remote sensing conditions.
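As a rough illustration of dynamic attention-based fusion of RGB and infrared features (an assumption about the general pattern, not DARFNet's actual module), per-channel gates can be predicted from the concatenated features and used to blend the two modalities:

```python
# Minimal sketch of attention-gated fusion of RGB and infrared feature maps.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat, ir_feat):
        w = self.gate(torch.cat([rgb_feat, ir_feat], dim=1))  # per-channel weight in [0, 1]
        return w * rgb_feat + (1 - w) * ir_feat

fuse = AttentionFusion(channels=64)
rgb = torch.randn(1, 64, 80, 80)
ir = torch.randn(1, 64, 80, 80)
print(fuse(rgb, ir).shape)  # torch.Size([1, 64, 80, 80])
```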
Citations: 0
Sentinel-2 Multispectral Imagery Case-I Water Semianalytical Bathymetry Retrieval Model Assisted by Satellite-Derived Pixel Substrate Spectrum
IF 5.3 CAS Tier 2 Earth Science Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-12-30 DOI: 10.1109/JSTARS.2025.3649266
Jinshan Zhu;Yu Wang;Ruifu Wang;Yuquan Wen;Cong Jiao;Yina Han;Bopeng Liu
Bathymetry is a crucial topographic element in shallow water. When retrieving bathymetry using the semianalytical (SA) model, the issues of too many unknown parameters and the difficulty of obtaining the substrate spectrum should be addressed. In this article, a Case-I water semianalytical bathymetry retrieval model assisted by a pixel substrate spectrum (SBM-P) is proposed for multispectral images to retrieve bathymetry without prior information. First, the substrate spectrum of each pixel is obtained with the assistance of the Ice, Cloud, and land Elevation Satellite-2 and Sentinel-2 data. Second, the SA bathymetry retrieval model for Case-I water is reparametrized. Third, the optimal objective function is selected in the process of numerical optimization. Finally, the performance of the proposed SBM-P model is evaluated. Two datasets, Oahu Island (OI) and Vieques Island (VI), are prepared for experiments. Results show that the minimum distance and spectral angle matching (MS) objective function has better performance compared to the minimum distance (MD). For example, in the bright substrate of the OI case, compared to MD, the root mean square error (RMSE), mean absolute error (MAE), and mean relative error (MRE) of MS decrease by 2.07 m, 1.61 m, and 43.1%, respectively. Compared to the generally used semianalytical bathymetry retrieval model using a fixed substrate spectrum (SBM-F), the SBM-P demonstrates improvements in the evaluation metrics: for the OI case, the RMSE decreases by 0.75 m, the MAE by 0.67 m, and the MRE by 40.1%; similarly, for the VI case, the RMSE, MAE, and MRE reduce by 0.84, 0.7, and 19%. In conclusion, the proposed SBM-P model is effective and can achieve higher accuracy compared to the generally used SBM-F model.
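A minimal sketch of the two objective functions compared above, assuming the MS cost simply augments the Euclidean (minimum-distance) term with a spectral-angle term whose weight is an illustrative choice; the paper's exact formulation may differ:

```python
# Illustrative MD and MS costs between an observed and a modeled reflectance spectrum.
import numpy as np

def md_cost(observed: np.ndarray, modeled: np.ndarray) -> float:
    return float(np.linalg.norm(observed - modeled))

def spectral_angle(a: np.ndarray, b: np.ndarray) -> float:
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def ms_cost(observed: np.ndarray, modeled: np.ndarray, w: float = 1.0) -> float:
    # Distance term plus spectral-angle term; the weight w is an assumption.
    return md_cost(observed, modeled) + w * spectral_angle(observed, modeled)

obs = np.array([0.012, 0.010, 0.006, 0.002])   # placeholder reflectances for 4 bands
mod = np.array([0.011, 0.011, 0.005, 0.002])
print(md_cost(obs, mod), ms_cost(obs, mod))
```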
Citations: 0
A Lightweight Semantic Reasoning and Fusion Network for Open-Pit Mine Semantic Change Detection in High-Resolution Remote Sensing Images
IF 5.3 CAS Tier 2 Earth Science Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-12-30 DOI: 10.1109/JSTARS.2025.3649267
Zilin Xie;Jinbao Jiang;Kangning Li;Xiaojun Qiao;Jinzhong Yang
Open-pit mine semantic change detection (SCD) using high-resolution remote sensing images is a critical task for both mineral resource management and environmental protection. Traditional approaches usually rely on land cover classification to perform SCD indirectly, a process that often introduces cumulative errors and consequently limits their performance and robustness. While advanced SCD methods using multitask architectures have demonstrated strong performance in other domains, their application to open-pit mines remains unexplored. Moreover, these methods face challenges, including inference conflicts among subtasks, a lack of semantic segmentation labels for unchanged areas, and insufficient exploration of model lightweighting. Therefore, a novel lightweight semantic reasoning and fusion network (LSRFNet) is introduced for open-pit mine SCD. LSRFNet leverages a lightweight convolutional backbone within a multitask framework. Moreover, an improved multitask fusion architecture is proposed, building upon existing multitask frameworks to explicitly optimize the final SCD output by fusing subtask predictions at the decision level, thereby mitigating inference conflicts. Furthermore, a semantic reasoning loss is designed based on pseudolabeling semisupervised learning and the local semantic consistency of land cover. By generating pseudolabels and applying local semantic consistency constraints, LSRFNet can iteratively self-train and progressively infer semantic information in unchanged areas. Experiments confirm that LSRFNet achieves state-of-the-art performance on the open-pit mine SCD task, with OA, mIoU, Sek, and Fscd values of 98.12%, 83.86%, 10.89%, and 82.13%, respectively, with only about 1/50 of the parameters and inference time compared with mainstream SCD methods. LSRFNet shows significance for open-pit mine SCD in high-resolution remote sensing images.
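The pseudolabeling step described here typically keeps only confident predictions as supervision for the next training round. A minimal sketch under that standard recipe (the threshold and ignore index are illustrative assumptions, not the paper's values):

```python
# Minimal sketch of confidence-thresholded pseudolabel generation for self-training.
import numpy as np

def make_pseudolabels(probs: np.ndarray, threshold: float = 0.9, ignore_index: int = 255):
    """probs: (H, W, num_classes) softmax output. Low-confidence pixels are ignored."""
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    labels[conf < threshold] = ignore_index   # only confident pixels supervise the next round
    return labels

probs = np.random.dirichlet(np.ones(6), size=(128, 128))   # fake (H, W, 6) class probabilities
pseudo = make_pseudolabels(probs)
print((pseudo != 255).mean())   # fraction of pixels kept as pseudolabels
```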
Citations: 0
LESMI: Integrating Linear-Exponential Model, Shapelets, and Multirocket for Wetland Vegetation Inundation Monitoring With Time Series SAR
IF 5.3 CAS Tier 2 Earth Science Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-12-30 DOI: 10.1109/JSTARS.2025.3649200
Yuanye Cao;Xiuguo Liu;Yuannan Long;Hui Yang;Shixiong Yan;Qihao Chen
Accurate monitoring of wetland vegetation inundation is crucial for maintaining regional ecological balance and conserving biodiversity, serving as a fundamental prerequisite for wetland environmental monitoring and protection. The complex scattering characteristics of vegetation under different inundation conditions, combined with spatial and seasonal heterogeneity, pose significant challenges to precise vegetation inundation state identification. Therefore, this study proposes a novel approach, named linear-exponential model, shapelets, and multirocket integration (LESMI), for monitoring the inundation state and temporal changes of wetland vegetation using radar backscatter variation patterns. First, a new linear-exponential model is developed to characterize the backscatter-water depth relationship and represent the inundation state characteristics of wetland vegetation. Second, based on the typical inundated state of historical stages determined by the linear-exponential model, the LESMI method innovatively combines Shapelets with multirocket classification to efficiently extract multivariate key time-period features for inundation state identification and achieve large-scale, near real-time inundation state classification. Experimental results in the Dongting Lake wetland show that the proposed method achieves inundation recognition accuracies of 96.84% for reeds and 92.59% for grassland, outperforming traditional methods and LSTM deep learning by average margins of 12.95% and 1.87%, respectively. The linear-exponential model significantly enhances identification performance, improving accuracy by 5.64% and 3.83% compared to linear and normal distribution models. Monitoring from 2019 to 2021 demonstrates that LESMI effectively captures flood peak impacts on vegetation inundation and provides detailed classification of noninundated, shallow inundated, and deep inundated states, offering reliable technical support for dynamic wetland ecosystem monitoring and refined management.
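As an illustration of fitting a linear-exponential backscatter versus water depth relationship, the sketch below assumes a generic functional form sigma(d) = a*d + b*exp(-c*d) + e; the paper defines its own linear-exponential model, so the form and coefficients here are purely hypothetical.

```python
# Illustrative fit of an assumed linear-exponential backscatter vs. depth curve.
import numpy as np
from scipy.optimize import curve_fit

def linear_exponential(depth, a, b, c, e):
    return a * depth + b * np.exp(-c * depth) + e

depth = np.linspace(0.0, 3.0, 30)                          # synthetic water depths (m)
sigma0 = linear_exponential(depth, -0.5, 6.0, 1.2, -14.0)  # synthetic backscatter (dB)
sigma0 += 0.2 * np.random.randn(depth.size)

params, _ = curve_fit(linear_exponential, depth, sigma0, p0=[-1.0, 5.0, 1.0, -10.0])
print(params)   # recovered (a, b, c, e)
```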
Citations: 0
Study on Harmful Algal Blooms in the Waters Near the Yangtze River Estuary Based on Twin Satellites HY-1C/D COCTS Data
IF 5.3 CAS Tier 2 Earth Science Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-12-30 DOI: 10.1109/JSTARS.2025.3649548
Xuan Liu;Lina Cai;Jiahua Li;Tianle Mao
This study examined a novel harmful algal bloom (HAB) inversion model (HABI) using Chinese Ocean Color and Temperature Scanner (COCTS) multispectral data from the HY-1C/D satellites. The model achieves the dual capabilities of HAB presence detection and density quantification, a key advancement over conventional binary classification models that lack the ability to delineate HAB density gradients. Key findings of this article include the following. 1) The HABI model uses spectral bands at 443, 490, and 565 nm, demonstrating superior performance in quantifying HAB density gradients compared to existing methods, with design adaptability to sensors featuring similar spectral configurations. 2) HABI achieved high inversion accuracy (R2 = 0.8682, RMSE = 0.09195, Recall = 0.9300, Precision = 0.949, F1-score = 0.939), showing strong consistency with the Bulletin of China Marine Disaster and in situ HAB measurements in the waters near the Yangtze River Estuary. 3) The distribution of HABs takes on obvious temporal and spatial change characteristics, with high-density clusters localized in coastal zones, peaking in spring/summer, and varying seasonally. The seasonal factors contributing to HAB change mainly include Yangtze River freshwater discharge and coastal upwelling, modulated by physical (e.g., sea surface temperature), anthropogenic (e.g., industrial wastewater), and biogeochemical factors (e.g., dissolved inorganic nitrogen), as well as biodiversity. These findings are conceptually integrated in Fig. 14, synthesizing the model mechanics and spatio-temporal dynamics. The HABI algorithm proposed in this article can be effectively applied to HAB monitoring and quantification, providing technical support for near-shore ecological assessment and management.
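The reported scores are internally consistent: with Precision = 0.949 and Recall = 0.930, the harmonic mean reproduces the stated F1-score of about 0.939.

```python
# Quick consistency check of the reported precision, recall, and F1-score.
precision, recall = 0.949, 0.930
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))   # 0.939
```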
Citations: 0
A Novel Network for Change Detection Based on a Divide-and-Conquer Fusion Strategy
IF 5.3 CAS Tier 2 Earth Science Q1 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2025-12-26 DOI: 10.1109/JSTARS.2025.3648330
Mengmeng Wang;Xu Lin;Yuanxin Ye;Wenhui Wu;Bai Zhu;Yanshuai Dai
Change detection (CD) is a fundamental task that is pivotal in understanding surface changes. Recently, CD methods have advanced rapidly and attained impressive results, driven by deep learning technology. However, existing methods generally employ fusion modules with the same design for multilevel features, overlooking the inherent distinctions between low-level spatial features and deep-level semantic features generated by deep networks. To overcome this limitation, this article proposes a novel CD network, referred to as DACNet. This method introduces a divide-and-conquer fusion strategy designed to fuse multilevel features using different fusion strategies. Specifically, the widely used MobileNetV2 is employed within a dual-branch architecture to extract multilevel features from bitemporal images. Subsequently, the proposed divide-and-conquer fusion strategy comprises two specialized modules: the change region localization module and the edge complementarity module, which are tailored to fuse deep-level semantic features and low-level spatial features, respectively. In addition, to mitigate the unnecessary noise introduced by the conventional UNet architectures, attention gates are introduced into the UNet decoder to enhance the changed information and suppress background noises. Extensive experiments are conducted on three available CD datasets: LEVIR-CD, Google-CD, and MSRS-CD. The proposed network achieved favorable results compared to the nine state-of-the-art methods across all experiments, improving the F1 score by 0.93%, 1.10%, and 0.81% on the LEVIR-CD, Google-CD, and MSRS-CD datasets, respectively.
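Attention gates added to a UNet decoder are typically small gating blocks that reweight encoder skip features with a decoder-derived signal before concatenation, which is how changed information is enhanced and background noise suppressed. A minimal sketch in that spirit (channel sizes, and the assumption that the gating signal is already upsampled to the skip resolution, are illustrative):

```python
# Minimal sketch of an attention gate on a UNet skip connection.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(nn.ReLU(), nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, skip, gate):
        att = self.psi(self.w_skip(skip) + self.w_gate(gate))   # (N, 1, H, W) attention map
        return skip * att                                        # attended skip features

gate = AttentionGate(skip_ch=64, gate_ch=64, inter_ch=32)
skip = torch.randn(1, 64, 128, 128)
g = torch.randn(1, 64, 128, 128)    # gating signal, assumed upsampled to the skip size
print(gate(skip, g).shape)
```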
Citations: 0