
ISPRS Journal of Photogrammetry and Remote Sensing — Latest Publications

An SW-TES hybrid algorithm for retrieving mountainous land surface temperature from high-resolution thermal infrared remote sensing data
IF 12.2 | CAS Tier 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2026-01-15 | DOI: 10.1016/j.isprsjprs.2026.01.016
Zhi-Wei He , Bo-Hui Tang , Zhao-Liang Li
Mountainous land surface temperature (MLST) is a key parameter for studying the energy exchange between the land surface and the atmosphere in mountainous areas. However, traditional land surface temperature (LST) retrieval methods often neglect the influence of three-dimensional (3D) structures and adjacent pixels caused by rugged terrain. To address this, a mountainous split-window and temperature-emissivity separation (MSW-TES) hybrid algorithm is proposed to retrieve MLST. The hybrid algorithm combines an improved split-window (SW) algorithm with a temperature-emissivity separation (TES) algorithm, accounting for topographic and adjacency effects (T-A effect), to retrieve MLST from the five thermal infrared (TIR) bands of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). Within this hybrid algorithm, an improved mountainous-canopy multiple-scattering TIR radiative transfer model is used to construct the simulation dataset. An improved SW algorithm then builds a 3D lookup table (LUT) of regression coefficients, indexed by the small-scale self-heating parameter (SSP) and the sky-view factor (SVF), to estimate brightness temperature (BT) at ground level. Furthermore, the TES algorithm is refined to account for the influence of rugged terrain within a pixel on mountainous land surface effective emissivity (MLSE) by reconstructing the relationship between minimum emissivity and the maximum-minimum difference (MMD) for different SSPs. Results on simulated data show that the improved SW algorithm increases the accuracy of ground-level BT estimation by up to 0.5 K. The MSW-TES algorithm, when considering the T-A effect, generally retrieves lower LST values than when the effect is ignored. The hybrid algorithm yielded root mean square errors (RMSE) of 0.99 K and 1.83 K for LST retrieval with and without the T-A effect, respectively, with most differences falling between 0.0 K and 3.0 K. The sensitivity analysis indicated that perturbations of the input parameters have little influence on MLST and MLSE, demonstrating the strong robustness of the MSW-TES algorithm. Additionally, the accuracy of MLST retrieval by the MSW-TES algorithm was validated using both discrete anisotropic radiative transfer (DART) model simulations and in-situ measurements. Validation against DART simulations showed biases ranging from −0.13 K to 1.03 K and RMSEs from 0.76 K to 1.29 K across the five ASTER TIR bands, while validation against in-situ measurements yielded a bias of 0.97 K and an RMSE of 1.25 K, demonstrating consistent and reliable results. This study underscores the necessity of accounting for the T-A effect to improve MLST retrieval and provides a promising pathway for global clear-sky high-resolution MLST mapping in upcoming thermal missions. The source code and simulated data are available at https://github.com/hezwppp/MSW-TES.
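As a rough illustration of the SW step described above, the sketch below estimates ground-level BT from two TIR-band brightness temperatures with coefficients drawn from a LUT indexed by SSP and SVF bins. All bin edges and coefficient values here are hypothetical placeholders, not the paper's regressed coefficients:

```python
import numpy as np

# Hypothetical bin edges for the two terrain descriptors; the paper instead
# regresses its 3D LUT from radiative-transfer simulations.
SSP_BINS = np.linspace(0.0, 1.0, 6)   # small-scale self-heating parameter bins
SVF_BINS = np.linspace(0.5, 1.0, 6)   # sky-view factor bins

# Placeholder (a0, a1, a2) coefficients per (SSP, SVF) cell, with a toy
# terrain dependence added through the offset term a0.
COEF_LUT = np.tile(np.array([0.0, 1.0, 2.0]), (5, 5, 1))
COEF_LUT[..., 0] += 0.1 * np.arange(5)[:, None]

def split_window_bt(t11, t12, ssp, svf):
    """Generic split-window estimate of ground-level brightness temperature (K)
    from two TIR-band temperatures, using terrain-dependent LUT coefficients."""
    i = int(np.clip(np.digitize(ssp, SSP_BINS) - 1, 0, 4))
    j = int(np.clip(np.digitize(svf, SVF_BINS) - 1, 0, 4))
    a0, a1, a2 = COEF_LUT[i, j]
    return a0 + a1 * (t11 + t12) / 2.0 + a2 * (t11 - t12) / 2.0
```

For example, `split_window_bt(300.0, 298.0, ssp=0.5, svf=0.9)` evaluates to 301.2 K under these placeholder coefficients.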
Volume 232, Pages 865-889.
Citations: 0
Beyond the surface: machine learning uncovers ENSO’s hidden and contrasting impacts on phytoplankton vertical structure
IF 12.2 | CAS Tier 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2026-01-15 | DOI: 10.1016/j.isprsjprs.2026.01.002
Jing Yang , Yanfeng Wen , Peng Chen , Zhenhua Zhang , Delu Pan
Satellite-based ocean remote sensing is fundamentally limited to observing the ocean surface (top-of-the-ocean), a constraint that severely hinders a comprehensive understanding of how the entire water-column ecosystem responds to climate variability such as the El Niño-Southern Oscillation (ENSO). Surface-only views cannot resolve critical shifts in the subsurface chlorophyll maximum (SCM), a key layer for marine biodiversity and biogeochemical cycles. To overcome this limitation, we develop and validate a novel stacked-generalization ensemble machine learning framework. This framework robustly reconstructs a 25-year (1998–2022) high-resolution 3D chlorophyll-a (Chl-a) field by integrating 133,792 globally distributed Biogeochemical-Argo (BGC-Argo) profiles with multi-source satellite data. The reconstructed 3D Chl-a fields were rigorously validated against both satellite and in-situ observations, achieving strong agreement (R ≥ 0.97, mean absolute percentage error ≤ 27 %) and demonstrating the robustness and reliability of the framework. Applying this framework to two contrasting South China Sea upwelling systems reveals that ENSO phases fundamentally restructure the entire water column. Crucially, we discover that El Niño and La Niña exert opposing effects on the SCM: El Niño events deepen and thin the SCM, decreasing Chl-a by 15–30 %, whereas La Niña events cause it to shoal and thicken, increasing Chl-a by 20–40 %. This vertical restructuring is mechanistically linked to ENSO-driven changes in wind stress curl, Rossby wave propagation, and nitrate availability. Furthermore, we identify a significant subsurface-first response, in which the SCM reacts to ENSO forcing months before significant changes are detectable at the surface. Our findings demonstrate that a three-dimensional perspective, enabled by our novel remote sensing reconstruction framework, is essential for accurately quantifying the biogeochemical consequences of climate variability, revealing that surface-only observations can significantly underestimate the vulnerability and response of marine ecosystems to ENSO events.
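The stacked-generalization idea, base learners whose out-of-fold predictions feed a meta-learner, can be sketched with scikit-learn on toy data. The predictor count, learner choices, and synthetic target below are illustrative stand-ins, not the authors' configuration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Toy stand-ins: six predictors playing the role of surface satellite fields
# and a target playing the role of Chl-a at one depth level.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stacked generalization: out-of-fold predictions from the base learners
# become the inputs of a simple meta-learner.
model = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0))],
    final_estimator=Ridge(),
)
model.fit(X_tr, y_tr)
r = np.corrcoef(model.predict(X_te), y_te)[0, 1]  # held-out correlation
```

The meta-learner sees only base-model predictions, which is what lets the ensemble correct systematic errors of any single learner.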
Volume 232, Pages 890-909.
Citations: 0
DVGBench: Implicit-to-explicit visual grounding benchmark in UAV imagery with large vision–language models
IF 12.2 | CAS Tier 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2026-01-14 | DOI: 10.1016/j.isprsjprs.2026.01.005
Yue Zhou , Jue Chen , Zilun Zhang , Penghui Huang , Ran Ding , Zhentao Zou , PengFei Gao , Yuchen Wei , Ke Li , Xue Yang , Xue Jiang , Hongxin Yang , Jonathan Li
Remote sensing (RS) large vision–language models (LVLMs) have shown strong promise across visual grounding (VG) tasks. However, existing RS VG datasets predominantly rely on explicit referring expressions, such as relative position, relative size, and color cues, thereby constraining performance on implicit VG tasks that require scenario-specific domain knowledge. This article introduces DVGBench, a high-quality implicit VG benchmark for drones covering six major application scenarios: traffic, disaster, security, sport, social activity, and productive activity. Each object is annotated with both explicit and implicit queries. Based on the dataset, we design DroneVG-R1, an LVLM that integrates a novel Implicit-to-Explicit Chain-of-Thought (I2E-CoT) within a reinforcement learning paradigm. This enables the model to exploit scene-specific expertise, converting implicit references into explicit ones and thus reducing grounding difficulty. Finally, an evaluation of mainstream models on both explicit and implicit VG tasks reveals substantial limitations in their reasoning capabilities. These findings provide actionable insights for advancing the reasoning capacity of LVLMs for drone-based agents. The code and datasets will be released at https://github.com/zytx121/DVGBench.
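A common way to score VG outputs, which the sketch below assumes (the benchmark's exact protocol may differ), is accuracy at an IoU threshold of 0.5 between predicted and ground-truth boxes:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def grounding_accuracy(pred_boxes, gt_boxes, thresh=0.5):
    """Fraction of queries whose predicted box matches ground truth at IoU >= thresh."""
    hits = sum(iou(p, g) >= thresh for p, g in zip(pred_boxes, gt_boxes))
    return hits / len(gt_boxes)
```

With one exact hit and one complete miss, `grounding_accuracy` returns 0.5, illustrating how per-query grounding results aggregate into a single benchmark score.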
Volume 232, Pages 831-847.
Citations: 0
Unveiling spatiotemporal forest cover patterns breaking the cloud barrier: Annual 30 m mapping in cloud-prone southern China from 2000 to 2020
IF 12.2 | CAS Tier 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2026-01-14 | DOI: 10.1016/j.isprsjprs.2026.01.015
Peng Qin , Huabing Huang , Jie Wang , Yunxia Cui , Peimin Chen , Shuang Chen , Yu Xia , Shuai Yuan , Yumei Li , Xiangyu Liu
Large-scale, long-term, and high-frequency monitoring of forest cover is essential for sustainable forest management and carbon stock assessment. However, in persistently cloudy regions such as southern China, the scarcity of high-quality remote sensing data and reliable training samples has resulted in forest cover products with limited spatial and temporal resolution. In addition, many existing datasets fail to accurately characterize forest distribution and dynamics, particularly by underestimating forest expansion and overlooking fine-scale, high-frequency changes. To address these limitations, we propose a novel forest–non-forest mapping framework based on reconstructed remote sensing data. First, we achieved large-scale data reconstruction using two deep-learning-based multi-sensor fusion methods across an extensive (2.04 million km²), long-term (2000–2020), persistently cloudy region, generating seamless imagery and NDVI time series that fill extensive spatial and temporal data gaps for forest classification. Next, by combining a spectrally-similar-sample transfer method with existing land cover products, we constructed robust training samples spanning broad spatial and temporal scales. Subsequently, using a random forest classifier, we generated annual 30 m forest cover maps for cloudy southern China, achieving an unprecedented balance between spatial and temporal resolution while improving mapping accuracy. The results demonstrate an overall accuracy of 0.904, surpassing the China Land Cover Dataset (CLCD, 0.889) and the China Annual Tree Cover Dataset (CATCD, 0.850). In particular, our results revealed an overall upward trend in forest area, from 119.84 to 132.09 million hectares (Mha), that was rarely captured in previous studies and closely aligns with National Forest Inventory (NFI) data (R² = 0.86). Finally, by integrating time-series analysis with the classification results, this study shifts forest mapping from a traditional static framework to a dynamic temporal perspective, reducing the uncertainties associated with direct interannual comparisons and estimating forest gains of 23.87 Mha and losses of 12.56 Mha. Notably, the reconstructed data improved forest mapping in terms of completeness, resolution, and accuracy. In Guangxi, the annual product detected 11.24 Mha more forest gain than the 10-year composite, indicating better completeness. It also offered finer spatial resolution (30 m vs. 500 m) and higher overall accuracy (0.879 vs. 0.853) than the widely used cloud-affected annual product. Overall, this study presents a robust framework for precise forest monitoring in cloudy regions.
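The classification step, a random forest over reconstructed NDVI time-series features, can be sketched on synthetic data. The feature design and class statistics below are illustrative assumptions, not the paper's training setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for reconstructed NDVI time series: 12 monthly values per
# pixel. "Forest" pixels stay green year-round; "non-forest" pixels are lower
# and more variable. Real features would come from the fused seamless imagery.
rng = np.random.default_rng(1)
forest = 0.80 + 0.05 * rng.normal(size=(200, 12))
nonforest = 0.30 + 0.15 * rng.normal(size=(200, 12))
X = np.vstack([forest, nonforest])
y = np.array([1] * 200 + [0] * 200)

# Train on every other pixel, evaluate on the rest.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])
```

Applied per year, a classifier like this yields the annual forest/non-forest maps whose sequence is then analyzed for gains and losses.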
Volume 232, Pages 848-864.
Citations: 0
TUM2TWIN: Introducing the large-scale multimodal urban digital twin benchmark dataset
IF 12.2 | CAS Tier 1, Earth Science | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2026-01-13 | DOI: 10.1016/j.isprsjprs.2025.12.013
Olaf Wysocki , Benedikt Schwab , Manoj Kumar Biswanath , Michael Greza , Qilin Zhang , Jingwei Zhu , Thomas Froech , Medhini Heeramaglore , Ihab Hijazi , Khaoula Kanna , Mathias Pechinger , Zhaiyu Chen , Yao Sun , Alejandro Rueda Segura , Ziyang Xu , Omar AbdelGafar , Mansour Mehranfar , Chandan Yeshwanth , Yueh-Cheng Liu , Hadi Yazdi , Boris Jutzi
Urban Digital Twins (UDTs) have become essential for managing cities and integrating complex, heterogeneous data from diverse sources. Creating UDTs involves challenges at multiple process stages, including acquiring accurate 3D source data, reconstructing high-fidelity 3D models, keeping models up to date, and ensuring seamless interoperability with downstream tasks. Current datasets are usually limited to one part of this processing chain, hampering comprehensive UDT validation. To address these challenges, we introduce the first comprehensive multimodal UDT benchmark dataset: TUM2TWIN. This dataset includes georeferenced, semantically aligned 3D models and networks along with various terrestrial, mobile, aerial, and satellite observations, comprising 32 data subsets covering roughly 100,000 m² and currently totaling 767 GB of data. By ensuring georeferenced indoor–outdoor acquisition, high accuracy, and multimodal data integration, the benchmark supports robust sensor analysis and the development of advanced reconstruction methods. Additionally, we explore downstream tasks that demonstrate the potential of TUM2TWIN, including novel view synthesis with NeRF and Gaussian Splatting, solar potential analysis, point cloud semantic segmentation, and LoD3 building reconstruction. We are convinced this contribution lays a foundation for overcoming current limitations in UDT creation, fostering new research directions and practical solutions for smarter, data-driven urban environments. The project is available at https://tum2t.win.
Volume 232, Pages 810-830.
Citations: 0
Crowd detection using Very-Fine-Resolution satellite imagery
IF 12.2 CAS Tier 1 Earth Science Q1 GEOGRAPHY, PHYSICAL Pub Date : 2026-01-13 DOI: 10.1016/j.isprsjprs.2026.01.001
Tong Xiao , Qunming Wang , Ping Lu , Tenghai Huang , Xiaohua Tong , Peter M. Atkinson
Accurate crowd detection (CD) is critical for public safety and historical pattern analysis, yet existing methods relying on ground and aerial imagery suffer from limited spatio-temporal coverage. The development of very-fine-resolution (VFR) satellite sensor imagery (e.g., ∼0.3 m spatial resolution) provides unprecedented opportunities for large-scale crowd activity analysis, but it has never been considered for this task. To address this gap, we proposed CrowdSat-Net, a novel point-based convolutional neural network, which features two innovative components: Dual-Context Progressive Attention Network (DCPAN) to improve feature representation of individuals by aggregating scene context and local individual characteristics, and High-Frequency Guided Deformable Upsampler (HFGDU) that recovers high-frequency information during upsampling through frequency-domain guided deformable convolutions. To validate the effectiveness of CrowdSat-Net, we developed CrowdSat, the first VFR satellite imagery dataset designed specifically for CD tasks, comprising over 120 k manually labeled individuals from multi-source satellite platforms (Beijing-3 N, Jilin-1 Gaofen-04A and Google Earth) across China. In the experiments, CrowdSat-Net was compared with eight state-of-the-art point-based CD methods (originally designed for ground or aerial imagery and satellite-based animal detection) using CrowdSat and achieved the largest F1-score of 66.12 % and Precision of 73.23 %, surpassing the second-best method by 0.80 % and 6.83 %, respectively. Moreover, extensive ablation experiments validated the importance of the DCPAN and HFGDU modules. Furthermore, cross-regional evaluation further demonstrated the spatial generalizability of CrowdSat-Net. This research advances CD capability by providing both a newly developed network architecture for CD and a pioneering benchmark dataset to facilitate future CD development. The source code is available at https://github.com/Tong-777777/CrowdSat-Net.
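The reported Precision and F1-score jointly determine the implied Recall through F1 = 2PR/(P+R). The sketch below is not from the paper; it is simple arithmetic on the reported numbers to recover the recall the scores imply:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def implied_recall(f1_score, precision):
    """Invert F1 = 2PR / (P + R) for R."""
    return f1_score * precision / (2 * precision - f1_score)

# reported CrowdSat-Net scores: F1 = 66.12 %, Precision = 73.23 %
r = implied_recall(0.6612, 0.7323)
print("implied recall ~= %.3f" % r)  # roughly 0.603
```

So the reported F1/Precision pair corresponds to a recall of about 60 %, i.e. precision exceeds recall for this detector.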
{"title":"Crowd detection using Very-Fine-Resolution satellite imagery","authors":"Tong Xiao ,&nbsp;Qunming Wang ,&nbsp;Ping Lu ,&nbsp;Tenghai Huang ,&nbsp;Xiaohua Tong ,&nbsp;Peter M. Atkinson","doi":"10.1016/j.isprsjprs.2026.01.001","DOIUrl":"10.1016/j.isprsjprs.2026.01.001","url":null,"abstract":"<div><div>Accurate crowd detection (CD) is critical for public safety and historical pattern analysis, yet existing methods relying on ground and aerial imagery suffer from limited spatio-temporal coverage. The development of very-fine-resolution (VFR) satellite sensor imagery (e.g., ∼0.3 m spatial resolution) provides unprecedented opportunities for large-scale crowd activity analysis, but it has never been considered for this task. To address this gap, we proposed CrowdSat-Net, a novel point-based convolutional neural network, which features two innovative components: Dual-Context Progressive Attention Network (DCPAN) to improve feature representation of individuals by aggregating scene context and local individual characteristics, and High-Frequency Guided Deformable Upsampler (HFGDU) that recovers high-frequency information during upsampling through frequency-domain guided deformable convolutions. To validate the effectiveness of CrowdSat-Net, we developed CrowdSat, the first VFR satellite imagery dataset designed specifically for CD tasks, comprising over 120 k manually labeled individuals from multi-source satellite platforms (Beijing-3 N, Jilin-1 Gaofen-04A and Google Earth) across China. In the experiments, CrowdSat-Net was compared with eight state-of-the-art point-based CD methods (originally designed for ground or aerial imagery and satellite-based animal detection) using CrowdSat and achieved the largest F1-score of 66.12 % and Precision of 73.23 %, surpassing the second-best method by 0.80 % and 6.83 %, respectively. Moreover, extensive ablation experiments validated the importance of the DCPAN and HFGDU modules. 
Furthermore, cross-regional evaluation further demonstrated the spatial generalizability of CrowdSat-Net. This research advances CD capability by providing both a newly developed network architecture for CD and a pioneering benchmark dataset to facilitate future CD development. The source code is available at https://github.com/Tong-777777/CrowdSat-Net.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"232 ","pages":"Pages 787-809"},"PeriodicalIF":12.2,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145957109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Change tensor: Estimating complex topographic changes from point clouds using Riemann manifold surfaces
IF 12.2 CAS Tier 1 Earth Science Q1 GEOGRAPHY, PHYSICAL Pub Date : 2026-01-12 DOI: 10.1016/j.isprsjprs.2026.01.009
Shoujun Jia , Lotte de Vugt , Andreas Mayr , Katharina Anders , Chun Liu , Martin Rutzinger
Estimating complex 3D topographic surface changes including rigid spatial movement and non-rigid morphological deformation is an essential task to investigate Earth surface dynamics. However, for current 3D point comparison approaches, it is challenging to separate rigid and non-rigid topographic surface changes from multi-temporal 3D point clouds. Additionally, these methods are affected by challenges including topographic surface roughness and point cloud heterogeneities (i.e., discrete and irregular point distributions). To address these challenges, in this paper, we consider the dynamic evolution of topographic surfaces as the geometric changes of Riemann manifold surfaces. By building Euclidean (straight) and non-Euclidean (curved) coordinate systems on Riemann manifold surfaces that are represented from point clouds, the rigid transformation and non-rigid deformation of the Riemann manifold surfaces are solved to conceptualize rigid and non-rigid change tensors, respectively. On this basis, we design rigid (i.e., translation and rotation) and non-rigid (i.e., stretch and distortion) change features to describe various topographic surface changes and quantify the associated uncertainties to capture significant changes. The proposed method is tested on pairwise point clouds with simulated and real topographic surface changes in mountain regions. Simulation experiments demonstrate that the proposed method performed better than the baseline (i.e., M3C2) and state-of-the-art methods (i.e., LOG), with a higher translation accuracy (more than 50% improvement), a lower translation uncertainty (more than 61% reduction), and strong robustness to varying point densities. These results also show that the proposed method accurately quantifies three additional types of change features (i.e., the mean accuracies of rotation, stretch, and distortion are 1.5°, 0.5%, and 3.5°, respectively). 
Moreover, the real-scene experimental results demonstrate the effectiveness and superiority of the proposed method in estimating various topographic changes in real environments, the applicability in analyzing geomorphological processes, and the potential contribution for understanding spatiotemporal patterns of Earth surface dynamics.
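The rigid component of such a surface change can be illustrated with a standard least-squares (Kabsch) fit between two point clouds. This is a generic sketch under toy data, not the paper's Riemann-manifold tensor formulation; the residual left after removing the fitted rigid motion plays the role of the non-rigid remainder:

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping cloud P onto cloud Q."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# toy example: rotate and shift a random cloud, then recover the motion;
# the residual is the non-rigid part (zero here, since the motion is rigid)
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
ang = np.deg2rad(10.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = kabsch(P, Q)
residual = Q - (P @ R.T + t)   # non-rigid remainder after removing rigid motion
print(np.abs(residual).max())
```

With real topographic data the residual would carry the morphological (stretch/distortion) signal that the paper's non-rigid change tensor quantifies.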
{"title":"Change tensor: Estimating complex topographic changes from point clouds using Riemann manifold surfaces","authors":"Shoujun Jia ,&nbsp;Lotte de Vugt ,&nbsp;Andreas Mayr ,&nbsp;Katharina Anders ,&nbsp;Chun Liu ,&nbsp;Martin Rutzinger","doi":"10.1016/j.isprsjprs.2026.01.009","DOIUrl":"10.1016/j.isprsjprs.2026.01.009","url":null,"abstract":"<div><div>Estimating complex 3D topographic surface changes including rigid spatial movement and non-rigid morphological deformation is an essential task to investigate Earth surface dynamics. However, for current 3D point comparison approaches, it is challenging to separate rigid and non-rigid topographic surface changes from multi-temporal 3D point clouds. Additionally, these methods are affected by challenges including topographic surface roughness and point cloud heterogeneities (i.e., discrete and irregular point distributions). To address these challenges, in this paper, we consider the dynamic evolution of topographic surfaces as the geometric changes of Riemann manifold surfaces. By building Euclidean (straight) and non-Euclidean (curved) coordinate systems on Riemann manifold surfaces that are represented from point clouds, the rigid transformation and non-rigid deformation of the Riemann manifold surfaces are solved to conceptualize rigid and non-rigid change tensors, respectively. On this basis, we design rigid (i.e., translation and rotation) and non-rigid (i.e., stretch and distortion) change features to describe various topographic surface changes and quantify the associated uncertainties to capture significant changes. The proposed method is tested on pairwise point clouds with simulated and real topographic surface changes in mountain regions. 
Simulation experiments demonstrate that the proposed method performed better than the baseline (i.e., M3C2) and state-of-the-art methods (i.e., LOG), with a higher translation accuracy (more than 50% improvement), a lower translation uncertainty (more than 61% reduction), and strong robustness to varying point densities. These results also show that the proposed method accurately quantifies three additional types of change features (i.e., the mean accuracies of rotation, stretch, and distortion are 1.5°, 0.5%, and 3.5°, respectively). Moreover, the real-scene experimental results demonstrate the effectiveness and superiority of the proposed method in estimating various topographic changes in real environments, the applicability in analyzing geomorphological processes, and the potential contribution for understanding spatiotemporal patterns of Earth surface dynamics.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"232 ","pages":"Pages 766-786"},"PeriodicalIF":12.2,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145957294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SmartQSM: a novel quantitative structure model using sparse-convolution-based point cloud contraction for reconstruction and analysis of individual tree architecture
IF 12.2 CAS Tier 1 Earth Science Q1 GEOGRAPHY, PHYSICAL Pub Date : 2026-01-10 DOI: 10.1016/j.isprsjprs.2026.01.011
Jie Yang , Huaiqing Zhang , Jinyang Li , Haoyue Yang , Tian Gao , Tingdong Yang , Jiaxin Wang , Xiaoli Zhang , Ting Yun , Yuxin Duanmu , Sihan Chen , Yukai Shi
Tree architecture analysis is fundamental to forestry, but complex trees challenge the accuracy and efficiency of point-cloud-based reconstruction. Here, we present SmartQSM, a novel quantitative structure model designed for reconstructing individual trees and extracting their parameters using ground-based laser scanning data. The method achieves point cloud contraction and forms the thin structures required for skeletonization by iteratively applying a sparse-convolution-based residual U-shaped network (ResUNet) to predict point movement towards the medial axis. This process is integrated with techniques from previous studies to form a complete reconstruction pipeline. Following the organization and QSM-based quantification of 47 individual-scale, 26 organ-scale, and 8 plot-scale parameters, the proposed method provides comprehensive support for extracting these metrics using the input point cloud and its outputs, including the skeleton and mesh. The performance was verified using the two-period leaf-off LiDAR data of a natural coniferous and broad-leaved mixed forest plot (in Qingyuan county, Liaoning province, China), and two open forest datasets. Existing major QSMs were used for comparison. The inference network adopted a three-stage hierarchical spatial compression architecture, initiating with 8 input channels and predicting with a multi-layer perceptron. The reconstruction was insensitive to remaining leaves, and the model showed no apparent distortion. Processing is efficient, at about 12,000 points per second. In terms of major architectural parameters, the R2 scores for trunk length, trunk volume, and bole height on the two-period test data of different tree species in the plot reached 0.97, 0.957, and 0.949, respectively, which were 0.043, 0.114, and 0.029 higher than existing methods. The R2 scores for branch length, branching angle, and tip deflection angle remained around 0.95. 
The overestimation of stem volume or aboveground biomass has been alleviated. The high reconstruction quality, efficiency, rich parameters, and unique visual interaction capabilities of the proposed method offer a novel and practical solution for forestry research and broader domains. The implementation code is currently available at: https://github.com/project-lightlin/SmartQSM.
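The idea of point-cloud contraction can be illustrated without the paper's sparse-convolution network. The toy sketch below simply moves every point part of the way toward the centroid of its k nearest neighbours each iteration, a crude Laplacian-style contraction that thins a cylindrical cloud toward its axis; all parameters (k, step, iteration count) are illustrative, not SmartQSM's:

```python
import numpy as np

def contract(points, k=8, step=0.5, iters=20):
    """Toy contraction: each iteration moves every point toward the
    centroid of its k nearest neighbours (brute-force O(n^2) distances)."""
    pts = points.copy()
    for _ in range(iters):
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)              # exclude self-matches
        nbrs = np.argsort(d, axis=1)[:, :k]      # indices of k nearest points
        centroids = pts[nbrs].mean(axis=1)
        pts += step * (centroids - pts)
    return pts

# points on a unit-radius cylinder surface contract toward its axis,
# forming the kind of thin structure a skeletonization step needs
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 300)
z = rng.uniform(0, 1, 300)
cloud = np.stack([np.cos(theta), np.sin(theta), z], axis=1)
thin = contract(cloud)
print(np.linalg.norm(thin[:, :2], axis=1).mean())  # radial spread shrinks below 1
```

SmartQSM replaces this hand-crafted neighbour averaging with a learned ResUNet prediction of the per-point displacement, which is what lets it handle varying density and fine branches.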
{"title":"SmartQSM: a novel quantitative structure model using sparse-convolution-based point cloud contraction for reconstruction and analysis of individual tree architecture","authors":"Jie Yang ,&nbsp;Huaiqing Zhang ,&nbsp;Jinyang Li ,&nbsp;Haoyue Yang ,&nbsp;Tian Gao ,&nbsp;Tingdong Yang ,&nbsp;Jiaxin Wang ,&nbsp;Xiaoli Zhang ,&nbsp;Ting Yun ,&nbsp;Yuxin Duanmu ,&nbsp;Sihan Chen ,&nbsp;Yukai Shi","doi":"10.1016/j.isprsjprs.2026.01.011","DOIUrl":"10.1016/j.isprsjprs.2026.01.011","url":null,"abstract":"<div><div>Tree architecture analysis is fundamental to forestry, but complex trees challenge the accuracy and efficiency of point-cloud-based reconstruction. Here, we present SmartQSM, a novel quantitative structure model designed for reconstructing individual trees and extracting their parameters using ground-based laser scanning data. The method achieves point cloud contraction and forms the thin structures required for skeletonization by iteratively applying a sparse-convolution-based residual U-shaped network (ResUNet) to predict point movement towards the medial axis. This process is integrated with techniques from previous studies to form a complete reconstruction pipeline. Following the organization and QSM-based quantification of 47 individual-scale, 26 organ-scale, and 8 plot-scale parameters, the proposed method provides comprehensive support for extracting these metrics using the input point cloud and its outputs, including the skeleton and mesh. The performance was verified using the two-period leaf-off LiDAR data of a natural coniferous and broad-leaved mixed forest plot (in Qingyuan county, Liaoning province, China), and 2 open forest datasets. The existing major QSMs was used for comparison. The inference network adopted a three-stage hierarchical spatial compression architecture, initiating with 8 input channels and predicting with multi-layer perceptron. 
The reconstruction was insensitive to remaining leaves and the model did not have apparent distortion. The processing speed is efficient, about 12,000 points per second. In terms of major architectural parameters, the <span><math><msup><mrow><mi>R</mi></mrow><mn>2</mn></msup></math></span> scores for trunk length, trunk volume, and bole height on the tested two period data of different tree species in the plot reached 0.97, 0.957, and 0.949, respectively, which were 0.043, 0.114, 0.029 higher than existing methods. the <span><math><msup><mrow><mi>R</mi></mrow><mn>2</mn></msup></math></span> scores for branch length, branching angle, and tip deflection angle remained around 0.95. The overestimation of stem volume or aboveground biomass has been alleviated. The high reconstruction quality, efficiency, rich parameters, and unique visual interaction capabilities of the proposed method offer a novel and practical solution for forestry research and broader domains. The implementation code is currently available at: <span><span>https://github.com/project-lightlin/SmartQSM</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"232 ","pages":"Pages 712-739"},"PeriodicalIF":12.2,"publicationDate":"2026-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145957295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Complex convolutional sparse coding InSAR phase filtering Incorporating directional gradients and second-order difference regularization
IF 12.2 CAS Tier 1 Earth Science Q1 GEOGRAPHY, PHYSICAL Pub Date : 2026-01-10 DOI: 10.1016/j.isprsjprs.2025.12.016
Pengcheng Hu , Xu Li , Junhuan Peng , Xu Ma , Yuhan Su , Xiaoman Qi , Xinwei Jiang , Wenwen Wang
Interferometric Synthetic Aperture Radar (InSAR) is a technology that can effectively obtain ground information, conduct large-scale topography mapping, and monitor surface deformation. However, InSAR data is interfered by speckle noise caused by radar echo signal fading, ground background clutter, and decoherence, which affects the InSAR interferometric phase quality and thus reduces the accuracy of InSAR results. The existing Complex Convolutional Sparse Coding Gradient Regularization (ComCSC-GR) method incorporates gradient regularization by considering the sparse coefficient matrix’s gradients in both row (azimuth) and column (range) directions. It is an advanced and effective interferogram phase filtering method that can improve the interferogram quality. However, this method does not take into account the variation characteristics of the diagonal gradient and the second-order difference information (caused by edge mutations). As a result, the interferogram still exhibits problems such as staircase artifacts in high-noise and low-coherence areas, uneven interferograms (caused by a large number of residual points), and unclear phase edge structure. This article introduces multiple directional gradients and second-order differential Laplacian operator information, and construct two models: “Complex Convolutional Sparse Coding Model with L2-norm Regularization of Directional Gradients and Laplacian Operator (ComCSC-RCDL) ” and “Complex Convolutional Sparse Coding Model Coupled with L1-norm Total Variation Regularization (ComCSC-RCDL-TV)”. These methods enhance the fidelity of phase texture and edge structure, and improve the quality of InSAR interferogram filtering phase in low-coherence scenarios. 
This article introduces multiple directional gradients and second-order differential Laplacian operator information, and constructs two models: “Complex Convolutional Sparse Coding Model with L2-norm Regularization of Directional Gradients and Laplacian Operator (ComCSC-RCDL)” and “Complex Convolutional Sparse Coding Model Coupled with L1-norm Total Variation Regularization (ComCSC-RCDL-TV)”. These methods enhance the fidelity of phase texture and edge structure, and improve the quality of the filtered InSAR interferometric phase in low-coherence scenarios. Comparative experiments were conducted on simulated data and real data from Sentinel-1 and LuTan-1 (LT-1), against advanced methods including ComCSC-GR and InSAR-BM3D (the real-data experiments included comparisons before and after removing the interferogram orbit error). The results show that the proposed methods perform better than the comparison methods, verifying their effectiveness.
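The effect of a second-order difference (Laplacian) penalty can be seen in a scalar analogue: a closed-form Tikhonov solve of min_x ||x − y||² + λ||D₂x||² on a noisy 1-D profile. This is a toy stand-in, not the ComCSC-RCDL model (which works on complex-valued convolutional sparse codes); the penalty weight λ and the signal are made up for illustration:

```python
import numpy as np

def second_order_smooth(y, lam=10.0):
    """Closed-form minimizer of ||x - y||^2 + lam * ||D2 x||^2,
    where D2 is the second-order difference (1-D Laplacian) operator:
    x = (I + lam * D2^T D2)^{-1} y."""
    n = len(y)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]      # stencil of the 2nd difference
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

# noisy ramp: a crude stand-in for an unwrapped interferometric phase profile;
# the ramp has zero second difference, so the penalty removes noise, not trend
rng = np.random.default_rng(2)
clean = np.linspace(0, 5, 200)
noisy = clean + rng.normal(scale=0.3, size=200)
smooth = second_order_smooth(noisy)
err_noisy = np.abs(noisy - clean).mean()
err_smooth = np.abs(smooth - clean).mean()
print(err_noisy, err_smooth)
```

Because a linear ramp lies in the null space of D₂, the penalty suppresses high-curvature noise while leaving the underlying trend unbiased, which is the intuition behind adding second-order difference regularization to the phase-filtering objective.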
{"title":"Complex convolutional sparse coding InSAR phase filtering Incorporating directional gradients and second-order difference regularization","authors":"Pengcheng Hu ,&nbsp;Xu Li ,&nbsp;Junhuan Peng ,&nbsp;Xu Ma ,&nbsp;Yuhan Su ,&nbsp;Xiaoman Qi ,&nbsp;Xinwei Jiang ,&nbsp;Wenwen Wang","doi":"10.1016/j.isprsjprs.2025.12.016","DOIUrl":"10.1016/j.isprsjprs.2025.12.016","url":null,"abstract":"<div><div>Interferometric Synthetic Aperture Radar (InSAR) is a technology that can effectively obtain ground information, conduct large-scale topography mapping, and monitor surface deformation. However, InSAR data is interfered by speckle noise caused by radar echo signal fading, ground background clutter, and decoherence, which affects the InSAR interferometric phase quality and thus reduces the accuracy of InSAR results. The existing Complex Convolutional Sparse Coding Gradient Regularization (ComCSC-GR) method incorporates gradient regularization by considering the sparse coefficient matrix’s gradients in both row (azimuth) and column (range) directions. It is an advanced and effective interferogram phase filtering method that can improve the interferogram quality. However, this method does not take into account the variation characteristics of the diagonal gradient and the second-order difference information (caused by edge mutations). As a result, the interferogram still exhibits problems such as staircase artifacts in high-noise and low-coherence areas, uneven interferograms (caused by a large number of residual points), and unclear phase edge structure. 
This article introduces multiple directional gradients and second-order differential Laplacian operator information, and construct two models: “Complex Convolutional Sparse Coding Model with <em>L</em><sub>2</sub>-norm Regularization of Directional Gradients and Laplacian Operator (ComCSC-RCDL) ” and “Complex Convolutional Sparse Coding Model Coupled with <em>L</em><sub>1</sub>-norm Total Variation Regularization (ComCSC-RCDL-TV)”. These methods enhance the fidelity of phase texture and edge structure, and improve the quality of InSAR interferogram filtering phase in low-coherence scenarios. Comparative experiments were conducted using simulated data, real data from Sentinel-1 and LuTan-1 (LT-1), and advanced methods including ComCSC-GR and InSAR-BM3D (real data experiments included comparison experiments before and after removing the interferogram orbit error). The results show that the proposed model method performs better than the comparative model, verifying the effectiveness of the proposed model.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"232 ","pages":"Pages 740-765"},"PeriodicalIF":12.2,"publicationDate":"2026-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145957296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Spatial–spectral fusion under highly dynamic ocean conditions based on optical water classification
IF 12.2 CAS Tier 1 Earth Science Q1 GEOGRAPHY, PHYSICAL Pub Date : 2026-01-10 DOI: 10.1016/j.isprsjprs.2025.12.018
Changpeng Li , Bangyi Tao , Yunzhou Li , Yan Wang , Yixian Zhu , Renjie Chen , Haiqing Huang , Delu Pan , Hongtao Wang
Spatial–spectral fusion offers a viable solution for the quantitative inversion of water parameters using multispectral resolution images (MSRIs) and limited bands of high spatial resolution images (HSRIs). Most existing fusion methods assume that the ground coverage type at the same location or the spatial patterns of images do not change over time. However, these fundamental assumptions are not valid under highly dynamic ocean conditions caused by various currents and tides. In this study, we propose a new assumption: the types of optical water bodies remain consistent within a certain time frame and a specific spatial region, and the spectral characteristics of each optical water type remain stable. Subsequently, a new spatial–spectral fusion method, referred to as the optical water classification based data fusion (OWCDF), was developed to realize accurate spatial–spectral fusion in oceanic environments. The OWCDF algorithm comprises three key steps: 1) re-calibration based on a maximum cosine correlation (MCC) match, 2) spectral regression based on optical water classification, and 3) residual compensation. A well-designed scheme was developed to evaluate the performance of OWCDF against existing algorithms by fusing a CZI/HY-1C/D image (HSRI) with time-series GOCI-II images (MSRI), with temporal differences increasing from 1 h to 4 h. The OWCDF algorithm exhibited a substantially better ability to resist the influence of highly dynamic changes in water bodies than other algorithms. Further tests applying OWCDF to the data of OLCI/S3 and CZI/HY-1C/D or CCD/HJ-2A/B confirmed its applicability to polar-orbiting satellites, achieving quantitative observation with a high spatial resolution even 2–3 times a day. In the future, the accuracy of the optical water type classification must be improved, and limitations under poor observation conditions, such as broken clouds and sun glints, should be further considered.
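Cosine-similarity matching of spectra underlies both the MCC re-calibration and the optical-water-type classification steps above. The minimal sketch below assigns each pixel spectrum to the reference water-type spectrum with the largest cosine similarity; the 4-band "clear" and "turbid" spectra are hypothetical values invented for illustration, not the paper's classes:

```python
import numpy as np

def cosine_classify(pixels, refs):
    """Assign each pixel spectrum to the reference spectrum with the
    maximum cosine similarity; returns (labels, best similarities)."""
    P = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    R = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    sims = P @ R.T                       # (n_pixels, n_types) cosine matrix
    return sims.argmax(axis=1), sims.max(axis=1)

# two hypothetical 4-band water-type spectra
refs = np.array([[0.02, 0.05, 0.03, 0.01],    # clear-water-like shape
                 [0.04, 0.08, 0.10, 0.06]])   # turbid-water-like shape
pixels = np.array([[0.021, 0.049, 0.031, 0.012],   # close to type 0
                   [0.050, 0.090, 0.120, 0.070]])  # close to type 1
labels, sims = cosine_classify(pixels, refs)
print(labels)
```

Because cosine similarity compares spectral shape rather than magnitude, the assignment is insensitive to overall brightness differences between sensors, which is what makes it usable across the HSRI/MSRI pair.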
{"title":"Spatial–spectral fusion under highly dynamic ocean conditions based on optical water classification","authors":"Changpeng Li ,&nbsp;Bangyi Tao ,&nbsp;Yunzhou Li ,&nbsp;Yan Wang ,&nbsp;Yixian Zhu ,&nbsp;Renjie Chen ,&nbsp;Haiqing Huang ,&nbsp;Delu Pan ,&nbsp;Hongtao Wang","doi":"10.1016/j.isprsjprs.2025.12.018","DOIUrl":"10.1016/j.isprsjprs.2025.12.018","url":null,"abstract":"<div><div>Spatial–spectral fusion offers a viable solution for the quantitative inversion of water parameters using multispectral resolution images (MSRIs) and limited bands of high spatial resolution images (HSRIs). Most existing fusion methods assume that the ground coverage type at the same location or the spatial patterns of images do not change over time. However, these fundamental assumptions are not valid under highly dynamic ocean conditions caused by various currents and tides. In this study, we propose a new assumption: the types of optical water bodies remain consistent within a certain time frame and a specific spatial region, and the spectral characteristics of each optical water type remain stable. Subsequently, a new spatial–spectral fusion method, referred to as the optical water classification based data fusion (OWCDF), was developed to realize accurate spatial–spectral fusion in oceanic environments. The OWCDF algorithm comprises three key steps: 1) re-calibration based on a maximum cosine correlation (MCC) match, 2) spectral regression based on optical water classification, and 3) residual compensation. A well-designed scheme was developed to evaluate the performance of OWCDF against existing algorithms by fusing a CZI/HY-1C/D image (HSRI) with time-series GOCI-II images (MSRI), with temporal differences increasing from 1 h to 4 h. The OWCDF algorithm exhibited a substantially better ability to resist the influence of highly dynamic changes in water bodies than other algorithms. 
Further tests applying OWCDF to the data of OLCI/S3 and CZI/HY-1C/D or CCD/HJ-2A/B confirmed its applicability to polar-orbiting satellites, achieving quantitative observation with a high spatial resolution even 2–3 times a day. In the future, the accuracy of the optical water type classification must be improved, and limitations under poor observation conditions, such as broken clouds and sun glints, should be further considered.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"232 ","pages":"Pages 689-711"},"PeriodicalIF":12.2,"publicationDate":"2026-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145925512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0