IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing: Latest Publications

Dual-Domain Masked Representation Learning for Semantic Segmentation of Remote Sensing Images
IF 5.3 | CAS Q2 (Earth Science) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-20 | DOI: 10.1109/JSTARS.2026.3655583
Yujia Fu;Mingyang Wang;Danfeng Hong;Gemine Vivone
Self-supervised learning (SSL) has emerged as a promising paradigm for remote sensing semantic segmentation, enabling the exploitation of large-scale unlabeled data to learn meaningful representations. However, most existing methods focus solely on the spatial domain, overlooking rich frequency information that is particularly critical in remote sensing images, where fine-grained textures and repetitive structural patterns are prevalent. To address this limitation, we propose a novel dual-domain masked representation (DDMR) learning framework. Specifically, the spatial masking branch simulates partial occlusions and encourages spatial context reasoning by randomly masking regions in the spatial domain. Meanwhile, randomized frequency masking increases input diversity during training and improves generalization. In addition, feature representations are further decoupled into amplitude and phase components in the frequency branch, and an amplitude-phase loss is introduced to encourage fine-grained, frequency-aware learning. By jointly leveraging spatial and frequency masked representation learning, DDMR enhances the robustness and discriminative power of learned features. Extensive experiments on two remote sensing datasets demonstrate that our method consistently outperforms state-of-the-art self-supervised approaches, validating its effectiveness for self-supervised semantic segmentation in complex remote sensing scenarios.
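The two masking branches are concrete enough to sketch. The minimal NumPy example below (patch size, mask ratios, and image size are illustrative assumptions, not values from the paper) shows spatial patch masking alongside frequency masking with an explicit amplitude-phase decoupling:

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_mask(img, patch=16, ratio=0.5):
    """Spatial branch: randomly zero square patches to simulate occlusion."""
    out = img.copy()
    h, w = img.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            if rng.random() < ratio:
                out[y:y + patch, x:x + patch] = 0.0
    return out

def frequency_mask(img, ratio=0.3):
    """Frequency branch: decouple the 2-D spectrum into amplitude and
    phase, randomly suppress amplitude bins, and transform back."""
    spec = np.fft.fft2(img)
    amp, phase = np.abs(spec), np.angle(spec)
    amp *= rng.random(amp.shape) > ratio      # drop ~30% of the bins
    return np.real(np.fft.ifft2(amp * np.exp(1j * phase)))

img = rng.random((64, 64)).astype(np.float32)
x_spatial = spatial_mask(img)    # input to the spatial masking branch
x_freq = frequency_mask(img)     # input to the frequency masking branch
```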
Citations: 0
CosmDiff: Integrating Multitemporal Optical-SAR Data With Conditional Diffusion Models for Optical Satellite Time Series Reconstruction
IF 5.3 | CAS Q2 (Earth Science) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-20 | DOI: 10.1109/JSTARS.2026.3655691
Yuan Yuan;Junhan Zhou;Lei Lin;Ying Yu;Qingshan Liu
Optical satellite time series data play a crucial role in monitoring vegetation dynamics and land surface changes. However, persistent cloud cover often leads to missing data, particularly during critical phenological stages, which significantly diminishes data quality and hinders downstream applications. To address this issue, we present conditional optical-SAR multitemporal diffusion (CosmDiff), a novel framework for reconstructing optical satellite time series by integrating multimodal, multitemporal optical and synthetic aperture radar (SAR) data using conditional diffusion models. In CosmDiff, the reconstruction task is formulated as a multivariate time series imputation problem, where missing values are modeled as conditionally dependent on both cloud-free optical observations and synergistic SAR time series. The framework incorporates a Transformer-based network within the diffusion process, introducing a novel dimensional decomposition attention mechanism that fuses optical-SAR time series across both temporal and feature dimensions. This mechanism enables the dynamic extraction of essential and complementary features from both modalities. In addition, linearly interpolated optical time series are used as auxiliary inputs to further guide the imputation process. Experimental results on Sentinel-1/-2 datasets demonstrate that CosmDiff consistently outperforms both traditional interpolation methods and advanced deep learning approaches, achieving a 3.8% reduction in mean absolute error and a 6.8% improvement in spectral angle mapper compared to competing methods. Furthermore, CosmDiff provides comprehensive uncertainty estimates for its predictions, which are particularly valuable for decision-making applications.
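The imputation formulation can be illustrated with a toy forward-diffusion step. In this sketch (the linear noise schedule, channel counts, and the simple concatenation of conditions are assumptions for illustration), noise is injected only at cloud-covered timesteps, while cloud-free optics and SAR stay clean as conditioning inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy series: T timesteps, C optical bands, C_SAR SAR channels.
T, C, C_SAR, N_STEPS = 12, 4, 2, 100
optical = rng.normal(size=(T, C))        # clean optical series (training target)
sar = rng.normal(size=(T, C_SAR))        # co-registered SAR series (condition)
observed = rng.random(T) > 0.4           # False where clouds hide the optics

def q_sample(x0, t_step):
    """Forward diffusion with a toy linear noise schedule."""
    alpha_bar = 1.0 - t_step / N_STEPS
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise, noise

# One imputation-style training example: noise enters only at cloudy
# timesteps; cloud-free optics and SAR remain clean conditioning signals.
t_step = int(rng.integers(1, N_STEPS))
x_noisy, eps = q_sample(optical, t_step)
x_input = np.where(observed[:, None], optical, x_noisy)
net_input = np.concatenate([x_input, sar], axis=-1)   # shape (T, C + C_SAR)
target = eps[~observed]    # the denoiser is supervised on the gaps only
```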
Citations: 0
SSST-GAN: A Sampling-Based Spatial-Spectral Transformer and Generative Adversarial Network for Hyperspectral Unmixing
IF 5.3 | CAS Q2 (Earth Science) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-19 | DOI: 10.1109/JSTARS.2026.3655512
Yu Zhang;Jiageng Huang;Yefei Huang;Wei Gao;Jie Chen
Transformer-based architectures have shown strong potential in hyperspectral unmixing due to their powerful modeling capabilities. However, most existing transformer-based methods still struggle to effectively capture and fuse spatial–spectral features, and their predominant reliance on reconstruction error further constrains overall unmixing performance. Moreover, they rarely account for the nonlinear correlations that inherently exist between the spatial and spectral domains. To address these challenges, we propose a sampling-based spatial–spectral transformer and generative adversarial network (SSST-GAN). The proposed model employs a dual-branch, sampling-based transformer encoder to independently extract spatial and spectral representations. Specifically, the spatial branch adopts a full-sampling multihead attention mechanism to capture rich contextual dependencies among spatial pixels, while the spectral branch utilizes a sparse sampling strategy to efficiently distill key information from high-dimensional spectral data. A feature enhancement module is introduced to integrate and strengthen the complementary characteristics of spatial and spectral features. To further improve the modeling of complex nonlinear mixing patterns, we incorporate a generalized nonlinear fluctuation model at the decoding stage. In addition, SSST-GAN leverages a generative adversarial learning framework, in which a discriminator evaluates the authenticity of reconstructed pixels, thereby enhancing the fidelity of the unmixing results. Extensive experiments on both synthetic and real-world datasets demonstrate that SSST-GAN consistently outperforms several state-of-the-art methods in terms of unmixing accuracy.
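As a rough illustration of the full- versus sparse-sampling idea, the sketch below contrasts standard scaled dot-product attention over all spectral tokens with attention against a strided subset of keys and values. The strided scheme is an assumption; the abstract does not specify the paper's exact sampling strategy:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention(q, k, v):
    """Plain single-head scaled dot-product attention."""
    s = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

bands, d = 200, 32
tokens = rng.normal(size=(bands, d))   # one embedded token per spectral band

# Full sampling (spatial-branch style): every token attends to every token.
full = attention(tokens, tokens, tokens)            # O(B^2) attention pairs

# Sparse sampling (spectral-branch style): keys/values come from a strided
# subset of bands, distilling the high-dimensional spectrum more cheaply.
stride = 4
subset = tokens[::stride]
sparse = attention(tokens, subset, subset)          # O(B * B/stride) pairs
```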
Citations: 0
Visible-Light-Guided Infrared Image Super Resolution With Dual Amplitude-Phase Optimization
IF 5.3 | CAS Q2 (Earth Science) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-19 | DOI: 10.1109/JSTARS.2026.3655485
Qingwang Wang;Yuhang Wu;Pengcheng Jin;Yan Lin;Zhen Zhang;Tao Shen
Infrared imaging plays a crucial role in applications such as search-and-rescue operations and fire monitoring, due to its robustness under complex environmental conditions. Nevertheless, the inherent low spatial resolution of infrared cameras and the complicated imaging degradation process still constrain the quality of captured images, thereby posing challenges for downstream tasks. Existing infrared image super-resolution methods (e.g., diffusion-based methods) often neglect the unique modality characteristics of infrared images and fail to effectively introduce additional fine-grained information. To address these limitations, we propose a novel framework named visible-light-guided infrared image super resolution with dual amplitude-phase optimization (vap-SR). By leveraging the powerful generative capability of conditional diffusion and fully exploiting the rich structural priors embedded in visible images, vap-SR effectively compensates for the deficiencies of infrared images in terms of details, thereby overcoming the inherent limitations in texture fidelity. Phase and amplitude losses are designed to preserve the physical characteristics of the infrared modality while effectively leveraging the structural information from visible-light images. Extensive experiments demonstrate that vap-SR consistently outperforms state-of-the-art methods in both reconstruction quality and the downstream object detection task, validating its effectiveness for infrared super resolution.
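The phase and amplitude losses admit a compact Fourier-domain sketch. The following NumPy example (the loss weighting and image sizes are illustrative assumptions, not the paper's settings) compares a super-resolved image against its reference in terms of spectral amplitude and wrapped phase difference:

```python
import numpy as np

def amplitude_phase_losses(pred, target):
    """Fourier-domain comparison: amplitude carries texture energy,
    phase carries the structural layout of the infrared image."""
    P, T = np.fft.fft2(pred), np.fft.fft2(target)
    l_amp = np.mean(np.abs(np.abs(P) - np.abs(T)))
    # The complex ratio wraps the phase difference into [-pi, pi].
    l_phase = np.mean(np.abs(np.angle(P * np.conj(T))))
    return l_amp, l_phase

rng = np.random.default_rng(0)
sr, hr = rng.random((64, 64)), rng.random((64, 64))
l_amp, l_phase = amplitude_phase_losses(sr, hr)
total = l_amp + 0.1 * l_phase   # the weighting is an illustrative choice
```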
Citations: 0
Spatial Ergodicity of Doppler Characteristics in Polarimetric Ocean Radar Scattering: A Numerical Study
IF 5.3 | CAS Q2 (Earth Science) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-19 | DOI: 10.1109/JSTARS.2026.3655359
Jianing Shao;Yanlei Du;Xiaofeng Yang;Longxiang Linghu;Jinsong Chong;Jian Yang
This study numerically investigates the spatial ergodicity of Doppler characteristics in polarimetric ocean radar scattering. The full Apel wave spectrum is employed to generate 2-D time-varying sea surfaces that involve all dominant large-scale gravity waves and small-scale capillary waves. By solving the radar scattering from time-varying ocean surfaces with various illumination sizes using the second-order small-slope approximation (SSA-2) model, the Doppler spectra, along with the Doppler shift and width, are thus computed and analyzed. The numerical simulations are conducted at L-band for three typical fully developed sea states. A Doppler shift error threshold is defined based on the accuracy requirements of sea surface current retrieval, and the spatial ergodicity of Doppler shift is evaluated quantitatively. Simulation results indicate that under co-polarization, the Doppler shift manifests spatial ergodicity when the sea surface size illuminated by radar is no less than one-quarter of the largest gravity wave wavelength at the corresponding sea state. For cross-polarization, the spatial ergodicity of the Doppler shift is significantly reduced and is observed only when the illumination size exceeds about one-half of the largest gravity wave wavelength. The results also indicate that wind direction has a limited effect on the spatial ergodicity of the Doppler shift.
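The Doppler shift and width referenced here are conventionally estimated as the first and second moments of the Doppler spectrum. A small NumPy example on a synthetic complex backscatter series (all signal parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy complex backscatter time series from one illuminated ocean patch.
prf, n = 1000.0, 4096                      # pulse rate (Hz), samples
t = np.arange(n) / prf
true_fd = 37.0                             # synthetic Doppler shift (Hz)
z = (np.exp(2j * np.pi * true_fd * t)
     + 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n)))

# Doppler spectrum as the periodogram of the complex time series.
f = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / prf))
S = np.fft.fftshift(np.abs(np.fft.fft(z)) ** 2)

# Doppler shift and width as the first and second spectral moments.
shift = np.sum(f * S) / np.sum(S)
width = np.sqrt(np.sum((f - shift) ** 2 * S) / np.sum(S))
print(f"shift ~ {shift:.1f} Hz, width ~ {width:.1f} Hz")
```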
Citations: 0
CRMF-Net: A Multimodal Fusion Network for Water–Land Classification From Single-Wavelength Bathymetric LiDAR
IF 5.3 | CAS Q2 (Earth Science) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-19 | DOI: 10.1109/JSTARS.2026.3655350
Wenjing Li;Libin Du;Xinglei Zhao
Accurate water–land classification is fundamental for topographic mapping and coastal zone monitoring based on airborne LiDAR bathymetry (ALB). However, due to the limited information content and feature ambiguity of one-dimensional (1-D) waveform signals, accurate classification from single-wavelength ALB data remains challenging. To address this issue, a dual-branch multimodal fusion network (CRMF-Net) is proposed to improve both classification accuracy and robustness. The proposed network consists of a convolutional neural network (CNN) branch and a residual neural network branch optimized with a convolutional block attention module, which are designed to capture complementary temporal and spatial features, respectively. The 1-D green waveform is converted into a 2-D time-frequency representation through the continuous wavelet transform, thereby increasing the dimensions and quantity of waveform features. By jointly exploiting complementary information from waveform signals and their corresponding time-frequency representations, the proposed method enables more effective feature representation without relying on extensive handcrafted analysis. Experiments conducted on CZMIL datasets from Qinshan Island demonstrate that CRMF-Net achieves an overall accuracy of 97.33% with a kappa coefficient of 0.9168, outperforming traditional methods such as fuzzy C-means, support vector machines, and a one-dimensional convolutional neural network approach. These results indicate that the proposed method provides a promising solution for fully automated processing of single-wavelength ALB data.
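The waveform-to-image conversion is the step most easily sketched. The example below implements a direct-convolution Morlet CWT in NumPy; the Morlet wavelet, the scale range, and the toy two-echo waveform are assumptions, since the abstract does not state the wavelet choice:

```python
import numpy as np

def morlet(n, scale, w0=6.0):
    """Complex Morlet wavelet sampled at n points for a given scale."""
    t = (np.arange(n) - n // 2) / scale
    return np.exp(1j * w0 * t) * np.exp(-0.5 * t**2) / np.sqrt(scale)

def cwt(signal, scales, kernel_len=128):
    """Continuous wavelet transform by direct convolution: one row per
    scale, turning a 1-D waveform into a 2-D time-frequency image."""
    out = np.empty((len(scales), signal.size), dtype=complex)
    for i, s in enumerate(scales):
        out[i] = np.convolve(signal, morlet(kernel_len, s), mode="same")
    return np.abs(out)

# Toy LiDAR-like return: a sharp surface echo plus a weaker bottom echo.
t = np.linspace(0.0, 1.0, 512)
wave = np.exp(-((t - 0.3) / 0.01) ** 2) + 0.4 * np.exp(-((t - 0.6) / 0.03) ** 2)
tf_image = cwt(wave, scales=np.geomspace(2, 64, 32))   # 32 x 512 "image"
```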
Citations: 0
Toward Outdoor Population Presence Monitoring With Mobile Network Data and Satellite Imagery
IF 5.3 | CAS Q2 (Earth Science) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-19 | DOI: 10.1109/JSTARS.2026.3655144
Marta Alonso Tubía;Miguel Baena Botana;An Vo Quang;Ana Burgin;Oliva Garcia Cantú-Ros
Dynamic population mapping has become crucial for capturing real-time human movement and behavior, beyond traditional population mapping relying on census data. Differentiating indoor and outdoor activity enhances accuracy for smart city planning, emergency response, public health, or emerging technologies like Innovative Air Mobility, where pedestrian data informs safer, less disruptive flight planning. Data passively collected from mobile networks have proven to be highly effective in accurately capturing population presence and mobility patterns. By enhancing this rich data source with GPS data for spatial accuracy and validating the results with satellite imagery of detected pedestrians, we provide a procedure for indoor and outdoor population detection. The results show agreement between both methodologies. Despite some limitations related to GPS data biases and pedestrian detection issues caused by urban furniture and shadows, the procedure demonstrates strong potential to capture people’s movements, which could ultimately enable near real-time monitoring of population presence on the streets.
Citations: 0
Dual-Perception Detector for Ship Detection in SAR Images
IF 5.3 | CAS Q2 (Earth Science) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-16 | DOI: 10.1109/JSTARS.2026.3654602
Ming Tong;Shenghua Fan;Jiu Jiang;Hezhi Sun;Jisan Yang;Chu He
Recently, deep-learning-based detectors have advanced the state of the art in ship detection in synthetic aperture radar (SAR) images. However, constructing discriminative features against background scattering and precisely delineating ship contours remain challenging due to the inherent scattering mechanism of SAR. In this article, a dual-branch detection framework with perception of scattering characteristics and geometric contours is introduced to address this problem. First, a scattering characteristic perception branch is proposed to fit the scattering distribution of SAR ships through a conditional diffusion model, which introduces learnable scattering features. Second, a convex contour perception branch is designed as a two-stage coarse-to-fine pipeline that delimits the irregular boundary of a ship by learning scattering key points. Finally, a cross-token integration module following a Bayesian framework is introduced to adaptively couple scattering and texture features and learn the construction of discriminative features. Furthermore, comprehensive experiments on three authoritative SAR datasets for oriented ship detection demonstrate the effectiveness of the proposed method.
Citations: 0
Automated Extraction of 3-D Windows From MVS Point Clouds by Comprehensive Fusion of Multitype Features
IF 5.3 | CAS Q2 (Earth Science) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-14 | DOI: 10.1109/JSTARS.2026.3654241
Yuan Li;Tianzhu Zhang;Ziyi Xiong;Junying Lv;Yinning Pang
Detecting three-dimensional (3-D) windows is vital for creating semantic building models with a high level of detail, supporting smart city and digital twin programs. Existing studies on window extraction using street imagery or laser scanning data often rely on limited types of features, resulting in compromised accuracy and completeness due to shadows and geometric decorations caused by curtains, balconies, plants, and other objects. To enhance the effectiveness and robustness of building window extraction in 3-D, this article proposes an automatic method that leverages synergistic information from multiview-stereo (MVS) point clouds through an adaptive divide-and-combine pipeline. Color information inherited from the imagery serves as a main clue to acquire the point clouds of individual building façades that may be coplanar and connected. The geometric information associated with normal vectors is then combined with color to adaptively divide each building façade into an irregular grid that conforms to the window edges. Subsequently, HSV color and depth distances within each grid cell are computed, and the grid cells are encoded to quantify the global arrangement features of windows. Finally, the multitype features are fused in an integer programming model, whose solution yields the optimal combination of grid cells corresponding to windows. Benefiting from the informative MVS point clouds and the fusion of multitype features, our method is able to directly produce 3-D models with high regularity for buildings with different appearances. Experimental results demonstrate that the proposed method is effective in 3-D window extraction while overcoming variations in façade appearance caused by foreign objects and missing data, with a high point-wise precision of 92.7%, recall of 77.09%, IoU of 71.95%, and F1-score of 83.42%. The results also exhibit a high level of integrity, with the accuracy of correctly extracted windows reaching 89.81%. In the future, we will focus on the development of a more universal façade division method to deal with even more complicated windows.
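A minimal sketch of the per-cell feature step, assuming a regular grid and a known dominant façade color (both simplifications of the paper's adaptive, irregular grid); cells scoring high on combined color and depth distance would become the binary variables of the integer program:

```python
import colorsys
import numpy as np

rng = np.random.default_rng(0)

# Toy façade raster: RGB colors plus depth offsets from the façade plane.
H, W = 80, 120
rgb = rng.random((H, W, 3))
depth = rng.normal(0.0, 0.02, (H, W))      # windows are typically recessed

def cell_features(rgb_cell, depth_cell, facade_hsv, facade_depth=0.0):
    """HSV color distance and depth offset of one grid cell relative to
    the dominant façade appearance; large values hint at a window."""
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in rgb_cell.reshape(-1, 3)])
    d_color = np.linalg.norm(hsv.mean(axis=0) - facade_hsv)
    d_depth = abs(depth_cell.mean() - facade_depth)
    return d_color, d_depth

facade_hsv = np.array([0.08, 0.3, 0.7])    # assumed dominant façade color
scores = np.zeros((4, 6))
for i in range(4):
    for j in range(6):
        ys, xs = slice(i * 20, (i + 1) * 20), slice(j * 20, (j + 1) * 20)
        d_c, d_d = cell_features(rgb[ys, xs], depth[ys, xs], facade_hsv)
        scores[i, j] = d_c + 10.0 * d_d    # the weights are illustrative
```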
Citations: 0
Insights on the Working Principles of a CNN for Forest Height Regression From Single-Pass InSAR Data
IF 5.3 | CAS Q2 (Earth Science) | Q1 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-14 | DOI: 10.1109/JSTARS.2026.3654195
Daniel Carcereri;Luca Dell’Amore;Stefano Tebaldini;Paola Rizzoli
The increasing use of artificial intelligence (AI) models in Earth Observation (EO) applications, such as forest height estimation, has led to a growing need for explainable AI (XAI) methods. Despite their high accuracy, AI models are often criticized for their “black-box” nature, making it difficult to understand their inner decision-making process. In this study, we propose a multifaceted approach to XAI for a convolutional neural network (CNN)-based model that estimates forest height from TanDEM-X single-pass InSAR data. By combining domain knowledge, saliency maps, and feature importance analysis through exhaustive model permutations, we provide a comprehensive investigation of the network's working principles. Our results suggest that the proposed model is implicitly capable of recognizing and compensating for SAR acquisition geometry-related distortions. We find that the mean phase center height and its local variability are the most informative predictors. We also find evidence that the interferometric coherence and the backscatter maps capture complementary but equally relevant views of the vegetation. This work contributes to advancing the understanding of the model's inner workings, and targets the development of more transparent and trustworthy AI for EO applications, ultimately leading to improved accuracy and reliability in the estimation of forest parameters.
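The permutation-based importance analysis mentioned above follows a standard recipe: shuffle one input channel across samples and measure how much the regression error degrades. A self-contained sketch with a stand-in linear model (the real study uses the trained CNN and InSAR-derived input channels):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the trained regressor: forest height from a stack of
# InSAR-derived input channels (e.g., phase-center height, coherence, ...).
def model(x):
    # hypothetical fixed weighting; a real CNN would go here
    return 1.5 * x[:, 0] + 0.8 * x[:, 1] + 0.2 * x[:, 2]

X = rng.normal(size=(1000, 3))             # samples x channels
y = model(X) + rng.normal(0, 0.1, 1000)    # reference heights

def permutation_importance(predict, X, y, channel):
    """RMSE increase when one input channel is shuffled across samples,
    breaking its relation to the target while keeping its distribution."""
    base = np.sqrt(np.mean((predict(X) - y) ** 2))
    Xp = X.copy()
    Xp[:, channel] = rng.permutation(Xp[:, channel])
    return np.sqrt(np.mean((predict(Xp) - y) ** 2)) - base

for ch in range(3):
    print(f"channel {ch}: dRMSE = {permutation_importance(model, X, y, ch):.3f}")
```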
Citations: 0