Pub Date: 2026-01-17 | DOI: 10.1016/j.isprsjprs.2026.01.014
Yichi Zhang, Ge Han, Yiyang Huang, Huayi Wang, Hongyuan Zhang, Zhipeng Pei, Yuanxue Pu, Haotian Luo, Jinchun Yi, Tianqi Shi, Siwei Li, Wei Gong
Industrial parks are major sources of greenhouse gas (GHG) emissions and the ultimate entities responsible for implementing mitigation policies. Current satellite remote sensing technologies perform well in reporting localized strong point-source emissions, but face significant challenges in monitoring emissions from multiple densely clustered sources. To address this limitation, we propose an emission allocation framework, EA-MILES, which integrates multi-source hyperspectral data with plume modeling to quantify process-level emissions. Simulation experiments show that with existing hyperspectral satellites, EA-MILES can estimate emissions for sources with intensities above 80 t CO2/h and 100 kg CH4/h, with biases not exceeding 13.60 % and 17.08 %, respectively. A steel and power production park is selected as a case study, where EA-MILES estimates process-level emissions with uncertainties ranging from 26.33 % to 37.78 %. The estimates are consistent with inventory values derived from emission factor methods. The top-down Integrated Mass Enhancement (IME) method is used as an independent comparison; the bias relative to the EA-MILES results did not exceed 16.84 %. According to Climate TRACE, about 32 % of CO2 and 44 % of CH4 point sources worldwide fall within EA-MILES detection coverage, accounting for over 80 % and 55 % of anthropogenic CO2 and CH4 emissions, respectively. Therefore, this study provides a novel satellite-based approach for reporting facility-scale GHG emissions in industrial parks, offering transparent and accurate monitoring data to support mitigation and energy-transition decision-making.
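For context, the Integrated Mass Enhancement (IME) method used here as an independent check has a widely cited generic form, Q = U_eff * IME / L. The sketch below illustrates only that generic formulation; the enhancement array, pixel size, wind speed, and function name are hypothetical inputs and do not reproduce the paper's implementation.

```python
import numpy as np

def ime_emission_rate(delta_omega, pixel_size_m, u_eff_ms):
    """Estimate a point-source emission rate with the generic IME formulation.

    delta_omega : 2-D array of column mass enhancements over the plume (kg/m^2),
                  zero outside the plume mask.
    pixel_size_m : ground sampling distance of one pixel (m).
    u_eff_ms : effective wind speed (m/s). Returns the source rate in kg/h.
    """
    plume = delta_omega > 0
    pixel_area = pixel_size_m ** 2                                  # m^2 per pixel
    ime_kg = np.sum(delta_omega[plume]) * pixel_area                # integrated mass enhancement (kg)
    plume_length = np.sqrt(np.count_nonzero(plume) * pixel_area)    # characteristic length L (m)
    q_kg_per_s = u_eff_ms * ime_kg / plume_length                   # Q = U_eff * IME / L
    return q_kg_per_s * 3600.0                                      # convert to kg/h

# Toy usage: a 3 x 3 plume of 1e-4 kg/m^2 enhancements, 30 m pixels, 3 m/s wind.
toy_plume = np.zeros((10, 10))
toy_plume[4:7, 4:7] = 1e-4
print(f"{ime_emission_rate(toy_plume, 30.0, 3.0):.1f} kg/h")
```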
Title: Attributing GHG emissions to individual facilities using multi-temporal hyperspectral images: Methodology and applications. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 232, pp. 937–956.
Pub Date: 2026-01-16 | DOI: 10.1016/j.isprsjprs.2026.01.019
Hao Wu, Yu Ran, Xiaoxiang Zhang, Xinying Luo, Li Wang, Teng Zhao, Yongcheng Song, Zhijun Zhang, Huisong Zhang, Jin Liu, Jian Li
Visual relocalization estimates the precise pose of a query image within a pre-built visual map, serving as a fundamental component for robot navigation, autonomous driving, surveying and mapping, etc. In the past few decades, significant research efforts have been devoted to achieving high relocalization accuracy. However, challenges remain when the query images exhibit significant changes compared to the reference scene. This paper primarily addresses the problem of verifying and correcting inaccurate pose estimates produced by relocalization. We propose a novel anchor-based visual relocalization framework that achieves robust pose estimation through multi-view co-visibility verification. Our approach further utilizes tightly coupled multi-sensor data fusion for pose refinement. Comprehensive evaluations on large-scale, real-world urban driving datasets (containing frequent dynamic objects, severe occlusions, and long-term environmental changes) demonstrate that our framework achieves state-of-the-art performance. Specifically, compared to traditional SfM-based and Transformer-based methods under these challenging conditions, our approach reduces the translation error by 46.2% and the rotation error by 8.55%.
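The reported improvements are expressed as translation and rotation errors. As a reference for how such metrics are conventionally computed from an estimated and a ground-truth pose, here is a minimal sketch using the standard Euclidean and geodesic definitions; it is not code from the AnchorReF framework.

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Translation error (same unit as t) and rotation error (degrees) between two poses."""
    t_err = np.linalg.norm(t_est - t_gt)                       # Euclidean translation error
    R_rel = R_gt.T @ R_est                                      # relative rotation
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.degrees(np.arccos(cos_angle))                    # geodesic rotation error
    return t_err, r_err

# Toy usage: identical rotations, 0.5 m translation offset -> (0.5, 0.0).
print(pose_errors(np.eye(3), np.array([0.5, 0.0, 0.0]), np.eye(3), np.zeros(3)))
```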
Title: AnchorReF: A novel anchor-based visual re-localization framework aided by multi-sensor data fusion. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 233, pp. 1–13.
With the rapid advancement of low-altitude remote sensing and Vision-Language Models (VLMs), Embodied Agents based on Unmanned Aerial Vehicles (UAVs) have shown significant potential in autonomous tasks. However, current evaluation methods for UAV-Embodied Agents (UAV-EAs) remain constrained by the lack of standardized benchmarks, diverse testing scenarios and open system interfaces. To address these challenges, we propose BEDI (Benchmark for Embodied Drone Intelligence), a systematic and standardized benchmark designed for evaluating UAV-EAs. Specifically, we introduce a novel Dynamic Chain-of-Embodied-Task paradigm based on the perception-decision-action loop, which decomposes complex UAV tasks into standardized, measurable subtasks. Building on this paradigm, we design a unified evaluation framework encompassing six core sub-skills: semantic perception, spatial perception, motion control, tool utilization, task planning and action generation. Furthermore, we develop a hybrid testing platform that incorporates a wide range of both virtual and real-world scenarios, enabling a comprehensive evaluation of UAV-EAs across diverse contexts. The platform also offers open and standardized interfaces, allowing researchers to customize tasks and extend scenarios, thereby enhancing flexibility and scalability in the evaluation process. Finally, through empirical evaluations of several state-of-the-art (SOTA) VLMs, we reveal their limitations in embodied UAV tasks, underscoring the critical role of the BEDI benchmark in advancing embodied intelligence research and model optimization. By filling the gap in systematic and standardized evaluation, BEDI facilitates objective model comparison and lays a robust foundation for future development in this field. Our benchmark is now publicly available at https://github.com/lostwolves/BEDI.
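The Dynamic Chain-of-Embodied-Task paradigm decomposes a UAV task into measurable subtasks around the perception-decision-action loop. The sketch below shows, purely as an assumption-laden illustration, how such a decomposition with per-skill score aggregation could be organized; none of the class or field names come from the BEDI codebase.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical illustration of a perception-decision-action evaluation loop;
# names and structure are assumptions, not the BEDI implementation.

@dataclass
class SubtaskResult:
    skill: str        # e.g. "semantic perception", "motion control"
    score: float      # normalized score in [0, 1] for this subtask

@dataclass
class EmbodiedTask:
    name: str
    subtasks: List[Callable[[], SubtaskResult]]  # each callable runs one loop step

def evaluate(task: EmbodiedTask) -> Dict[str, float]:
    """Run every subtask and aggregate mean scores per sub-skill."""
    per_skill: Dict[str, List[float]] = {}
    for run_subtask in task.subtasks:
        result = run_subtask()
        per_skill.setdefault(result.skill, []).append(result.score)
    return {skill: sum(scores) / len(scores) for skill, scores in per_skill.items()}

# Toy usage with two dummy subtasks.
task = EmbodiedTask("inspect-bridge", [
    lambda: SubtaskResult("semantic perception", 0.8),
    lambda: SubtaskResult("motion control", 0.6),
])
print(evaluate(task))
```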
Title: BEDI: a comprehensive benchmark for evaluating embodied agents on UAVs. Authors: Mingning Guo, Mengwei Wu, Jiarun He, Shaoxian Li, Haifeng Li, Chao Tao. Pub Date: 2026-01-16 | DOI: 10.1016/j.isprsjprs.2026.01.013. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 232, pp. 910–936.
Hyperspectral images (HSIs) contain information across numerous spectral bands, making them valuable in several real-world applications such as environmental monitoring, agriculture, and remote sensing. However, the acquisition process often introduces noise, necessitating effective HSI denoising methods to maintain their applicability. Deep learning (DL) is considered the de facto approach for HSI denoising, but it requires a large number of training samples to optimize network parameters for effective denoising outcomes. Obtaining extensive datasets is challenging in HSI, leading to epistemic uncertainty and thereby deteriorating denoising performance. This paper introduces a novel supervised contrastive learning (SCL) method, RECREATE, to enhance feature learning and mitigate the issue of epistemic uncertainty in HSI denoising. Furthermore, we introduce image inpainting as an auxiliary task to enhance HSI denoising performance. By adding HSI inpainting to contrastive learning, our method enhances HSI denoising by enlarging the training data and enforcing improved feature learning. Experimental outcomes on various HSI datasets validate the efficacy of RECREATE, showcasing its potential for integration with existing HSI denoising techniques to enhance their performance, both qualitatively and quantitatively. This method holds promise for addressing the limitations posed by limited training data and thereby advancing the field toward better HSI denoising methods.
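RECREATE builds on supervised contrastive learning. To make the objective concrete, here is a minimal NumPy sketch of the standard supervised contrastive (SupCon) loss over one batch of embeddings; the temperature and batch construction are illustrative, and the paper's actual loss and training pipeline may differ.

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.1):
    """Standard supervised contrastive loss over L2-normalized embeddings.

    embeddings : (N, D) array; labels : (N,) integer class ids.
    Positives for anchor i are all other samples sharing its label.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                              # pairwise similarities
    n = len(labels)
    mask_self = ~np.eye(n, dtype=bool)                       # exclude the anchor itself
    # log-softmax over all non-anchor samples (numerically stabilized)
    sim_max = np.max(np.where(mask_self, sim, -np.inf), axis=1, keepdims=True)
    exp_sim = np.exp(sim - sim_max) * mask_self
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))
    positives = (labels[:, None] == labels[None, :]) & mask_self
    pos_counts = positives.sum(axis=1)
    valid = pos_counts > 0                                   # anchors that actually have positives
    mean_log_prob_pos = (log_prob * positives).sum(axis=1)[valid] / pos_counts[valid]
    return float(-mean_log_prob_pos.mean())

rng = np.random.default_rng(0)
print(supcon_loss(rng.normal(size=(8, 16)), np.array([0, 0, 1, 1, 2, 2, 3, 3])))
```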
Title: RECREATE: Supervised contrastive learning and inpainting based hyperspectral image denoising. Authors: Aditya Dixit, Anup Kumar Gupta, Puneet Gupta, Ankur Garg. Pub Date: 2026-01-16 | DOI: 10.1016/j.isprsjprs.2026.01.022. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 233, pp. 14–24.
Pub Date: 2026-01-16 | DOI: 10.1016/j.isprsjprs.2026.01.008
Yifan Sun, Chenguang Dai, Wenke Li, Xinpu Liu, Yongqi Sun, Ye Zhang, Weijun Guan, Yongsheng Zhang, Yulan Guo, Hanyun Wang
LiDAR point cloud semantic segmentation is crucial for scene understanding in autonomous driving, yet the sparse and textureless characteristics of point clouds pose major challenges for this task. To address this, numerous studies have explored leveraging the dense color and fine-grained texture of RGB images for multi-modality 3D semantic segmentation. Nevertheless, these methods still encounter limitations in complex scenarios, as RGB images degrade under poor lighting conditions. In contrast, thermal infrared (TIR) images provide thermal radiation information of road objects and are robust to illumination change, offering complementary advantages to RGB images. Therefore, in this work we introduce RTPSeg, the first and only multi-modality dataset to simultaneously provide RGB and TIR images for point cloud semantic segmentation. RTPSeg includes over 3000 synchronized frames collected by an RGB camera, an infrared camera, and LiDAR, providing over 248M pointwise annotations for 18 semantic categories in autonomous driving, covering urban and village scenes during both daytime and nighttime. Based on RTPSeg, we also propose RTPSegNet, a baseline model for point cloud semantic segmentation jointly assisted by RGB and TIR images. Extensive experiments demonstrate that the RTPSeg dataset presents considerable challenges and opportunities for existing point cloud semantic segmentation approaches, and our RTPSegNet exhibits promising effectiveness in jointly leveraging the complementary information between point clouds, RGB images, and TIR images. More importantly, the experimental results also confirm that 3D semantic segmentation can be effectively enhanced by introducing an additional TIR image modality, revealing the promising potential of this line of research and application. We anticipate that RTPSeg will catalyze in-depth research in this field. Both RTPSeg and RTPSegNet will be released at https://github.com/sssssyf/RTPSeg.
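Fusing per-point labels with RGB and TIR imagery generally requires projecting LiDAR points into each camera's image plane using calibrated extrinsics and intrinsics. The sketch below shows that standard pinhole projection step with placeholder calibration values; it is generic geometry, not the RTPSegNet implementation.

```python
import numpy as np

def project_points(points_lidar, R, t, K, image_hw):
    """Project LiDAR points into an image plane with a pinhole camera model.

    points_lidar : (N, 3) points in the LiDAR frame; R (3, 3), t (3,) map LiDAR -> camera;
    K : (3, 3) camera intrinsics; image_hw : (height, width).
    Returns pixel coordinates (M, 2) and the indices of the visible points.
    """
    pts_cam = points_lidar @ R.T + t                      # LiDAR frame -> camera frame
    in_front = pts_cam[:, 2] > 0.1                        # keep points in front of the camera
    uvw = pts_cam[in_front] @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]                         # perspective division
    h, w = image_hw
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    visible_idx = np.flatnonzero(in_front)[inside]
    return uv[inside], visible_idx

# Toy usage with an identity extrinsic and a simple intrinsic matrix (placeholders).
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 10.0], [1.0, -0.5, 5.0], [0.0, 0.0, -2.0]])
print(project_points(pts, np.eye(3), np.zeros(3), K, (480, 640)))
```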
Title: RTPSeg: A multi-modality dataset for LiDAR point cloud semantic segmentation assisted with RGB-thermal images in autonomous driving. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 233, pp. 25–38.
Pub Date: 2026-01-15 | DOI: 10.1016/j.isprsjprs.2026.01.016
Zhi-Wei He, Bo-Hui Tang, Zhao-Liang Li
Mountainous land surface temperature (MLST) is a key parameter for studying the energy exchange between the land surface and the atmosphere in mountainous areas. However, traditional land surface temperature (LST) retrieval methods often neglect the influence of three-dimensional (3D) structures and adjacent pixels caused by rugged terrain. To address this, a mountainous split-window and temperature-emissivity separation (MSW-TES) hybrid algorithm was proposed to retrieve MLST. The hybrid algorithm combines an improved split-window (SW) algorithm with a temperature-emissivity separation (TES) algorithm and accounts for the topographic and adjacent effects (T-A effect) to retrieve MLST from the five thermal infrared (TIR) bands of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). In this hybrid algorithm, an improved mountainous canopy multiple-scattering TIR radiative transfer model was proposed to construct the simulation dataset. Then, an improved SW algorithm was developed to build a 3D lookup table (LUT) of regression coefficients using the small-scale self-heating parameter (SSP) and sky-view factor (SVF) to estimate brightness temperature (BT) at ground level. Furthermore, the TES algorithm was refined to account for the influence of rugged terrain within a pixel on mountainous land surface effective emissivity (MLSE) by reconstructing the relationship between minimum emissivity and the maximum-minimum difference (MMD) for different SSPs. Results from simulated data show that the improved SW algorithm increases the accuracy of ground-level BT estimation by up to 0.5 K. The MSW-TES algorithm, considering the T-A effect, generally retrieves lower LST values than when this effect is ignored. The hybrid algorithm yielded root mean square errors (RMSEs) of 0.99 K and 1.83 K for LST retrieval with and without the T-A effect, respectively, with most differences falling between 0.0 K and 3.0 K. The sensitivity analysis indicated that perturbations of the input parameters have little influence on MLST and MLSE, demonstrating that the MSW-TES algorithm is robust. Additionally, the accuracy of MLST retrieval by the MSW-TES algorithm was validated using both discrete anisotropic radiative transfer (DART) model simulations and in-situ measurements. The DART validation showed biases ranging from −0.13 K to 1.03 K and RMSEs from 0.76 K to 1.29 K across the five ASTER TIR bands, while validation against the in-situ measurements yielded a bias of 0.97 K and an RMSE of 1.25 K, demonstrating consistent and reliable results. This study underscores the necessity of accounting for the T-A effect to improve MLST retrieval and provides a promising pathway for global clear-sky high-resolution MLST mapping in upcoming thermal missions. The source code and simulated data are available at https://github.com/hezwppp/MSW-TES.
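Within the TES step, band emissivities are recovered from the ratio (beta) spectrum through an empirical relation between minimum emissivity and the maximum-minimum difference (MMD). The sketch below uses the classic ASTER TES coefficients (a = 0.994, b = 0.687, c = 0.737) purely as placeholders; the MSW-TES algorithm reportedly refits this relation as a function of SSP, so these numbers are illustrative only.

```python
import numpy as np

def tes_mmd_emissivity(eps_ratio, a=0.994, b=0.687, c=0.737):
    """Classic ASTER TES MMD step: recover band emissivities from the ratio spectrum.

    eps_ratio : (K,) beta spectrum, i.e. per-band emissivity estimates divided by their mean.
    a, b, c   : empirical coefficients (classic ASTER values here; the MSW-TES paper
                refits this relation for different small-scale self-heating parameters).
    """
    mmd = eps_ratio.max() - eps_ratio.min()           # max-min spectral contrast
    eps_min = a - b * mmd ** c                        # empirical minimum emissivity
    return eps_ratio * (eps_min / eps_ratio.min())    # rescale the beta spectrum

# Toy beta spectrum for five TIR bands.
beta = np.array([0.99, 1.00, 1.01, 0.98, 1.02])
print(tes_mmd_emissivity(beta))
```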
Title: An SW-TES hybrid algorithm for retrieving mountainous land surface temperature from high-resolution thermal infrared remote sensing data. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 232, pp. 865–889.
Pub Date: 2026-01-15 | DOI: 10.1016/j.isprsjprs.2026.01.002
Jing Yang, Yanfeng Wen, Peng Chen, Zhenhua Zhang, Delu Pan
Satellite-based ocean remote sensing is fundamentally limited to observing the ocean surface (top-of-the-ocean), a constraint that severely hinders a comprehensive understanding of how the entire water-column ecosystem responds to climate variability such as the El Niño-Southern Oscillation (ENSO). Surface-only views cannot resolve critical shifts in the subsurface chlorophyll maximum (SCM), a key layer for marine biodiversity and biogeochemical cycles. To overcome this limitation, we develop and validate a novel stacked generalization ensemble machine learning framework. This framework robustly reconstructs a 25-year (1998–2022) high-resolution 3D chlorophyll-a (Chl-a) field by integrating 133,792 globally distributed Biogeochemical-Argo (BGC-Argo) profiles with multi-source satellite data. The reconstructed 3D Chl-a fields were rigorously validated against both satellite and in-situ observations, achieving strong agreement (R ≥ 0.97, mean absolute percentage error ≤ 27 %) and demonstrating the robustness and reliability of the framework. Applying this framework to two contrasting South China Sea upwelling systems reveals that ENSO phases fundamentally restructure the entire water column. Crucially, we discover that El Niño and La Niña exert opposing effects on the SCM: El Niño events deepen and thin the SCM, decreasing Chl-a by 15–30 %, whereas La Niña events cause it to shoal and thicken, increasing Chl-a by 20–40 %. This vertical restructuring is mechanistically linked to ENSO-driven changes in wind stress curl, Rossby wave propagation, and nitrate availability. Furthermore, we identify a significant subsurface-first response, in which the SCM reacts to ENSO forcing months before significant changes are detectable at the surface. Our findings demonstrate that a three-dimensional perspective, enabled by our novel remote sensing reconstruction framework, is essential for accurately quantifying the biogeochemical consequences of climate variability, revealing that surface-only observations can significantly underestimate the vulnerability and response of marine ecosystems to ENSO events.
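The reconstruction relies on a stacked generalization (stacking) ensemble, in which base learners are combined by a meta-learner trained on their predictions. A minimal scikit-learn sketch of that general pattern on synthetic data is shown below; the actual base models, predictor variables, and target transformation used in the paper are not specified here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic stand-in for (satellite predictors -> Chl-a at one depth level);
# the real framework uses BGC-Argo profiles plus multi-source satellite features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                        # e.g. surface Chl-a, SST, SLA, ...
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0))],
    final_estimator=Ridge(),                         # meta-learner combines base predictions
)
stack.fit(X_train, y_train)
print(f"held-out R^2: {stack.score(X_test, y_test):.3f}")
```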
Title: Beyond the surface: machine learning uncovers ENSO’s hidden and contrasting impacts on phytoplankton vertical structure. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 232, pp. 890–909.
Pub Date: 2026-01-14 | DOI: 10.1016/j.isprsjprs.2026.01.005
Yue Zhou, Jue Chen, Zilun Zhang, Penghui Huang, Ran Ding, Zhentao Zou, PengFei Gao, Yuchen Wei, Ke Li, Xue Yang, Xue Jiang, Hongxin Yang, Jonathan Li
Remote sensing (RS) large vision–language models (LVLMs) have shown strong promise across visual grounding (VG) tasks. However, existing RS VG datasets predominantly rely on explicit referring expressions – such as relative position, relative size, and color cues – thereby constraining performance on implicit VG tasks that require scenario-specific domain knowledge. This article introduces DVGBench, a high-quality implicit VG benchmark for drones, covering six major application scenarios: traffic, disaster, security, sport, social activity, and productive activity. Each object provides both explicit and implicit queries. Based on the dataset, we design DroneVG-R1, an LVLM that integrates the novel Implicit-to-Explicit Chain-of-Thought (I2E-CoT) within a reinforcement learning paradigm. This enables the model to take advantage of scene-specific expertise, converting implicit references into explicit ones and thus reducing grounding difficulty. Finally, an evaluation of mainstream models on both explicit and implicit VG tasks reveals substantial limitations in their reasoning capabilities. These findings provide actionable insights for advancing the reasoning capacity of LVLMs for drone-based agents. The code and datasets will be released at https://github.com/zytx121/DVGBench.
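DroneVG-R1 couples the I2E-CoT prompting scheme with reinforcement learning on grounding outputs. A common ingredient in such setups is an IoU-based reward that scores a predicted bounding box against the reference box; the sketch below is a generic illustration of that idea, and the reward shape and threshold are assumptions rather than the paper's definition.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def grounding_reward(pred_box, gt_box, threshold=0.5):
    """Hypothetical reward: the IoU itself, plus a bonus when the prediction counts as a hit."""
    score = iou(pred_box, gt_box)
    return score + (1.0 if score >= threshold else 0.0)

# Toy usage: partially overlapping boxes -> IoU about 0.47, no hit bonus.
print(grounding_reward((10, 10, 60, 60), (20, 20, 70, 70)))
```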
Title: DVGBench: Implicit-to-explicit visual grounding benchmark in UAV imagery with large vision–language models. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 232, pp. 831–847.
Pub Date: 2026-01-14 | DOI: 10.1016/j.isprsjprs.2026.01.015
Peng Qin, Huabing Huang, Jie Wang, Yunxia Cui, Peimin Chen, Shuang Chen, Yu Xia, Shuai Yuan, Yumei Li, Xiangyu Liu
Large-scale, long-term, and high-frequency monitoring of forest cover is essential for sustainable forest management and carbon stock assessment. However, in persistently cloudy regions such as southern China, the scarcity of high-quality remote sensing data and reliable training samples has resulted in forest cover products with limited spatial and temporal resolution. In addition, many existing datasets fail to accurately characterize forest distribution and dynamics, particularly by underestimating forest expansion and overlooking fine-scale and high-frequency changes. To address these limitations, we propose a novel forest–non-forest mapping framework based on reconstructed remote sensing data. First, we achieved large-scale data reconstruction using two deep learning-based multi-sensor fusion methods across an extensive (2.04 million km²), long-term (2000–2020), persistently cloudy region, effectively generating seamless imagery and NDVI time series to fill extensive spatial and temporal data gaps for forest classification. Next, by combining a spectrally similar sample transfer method with existing land cover products, we constructed robust training samples spanning broad spatial and temporal scales. Subsequently, using a random forest classifier, we generated annual 30 m forest cover maps for cloudy southern China, achieving an unprecedented balance between spatial and temporal resolution while improving mapping accuracy. The results demonstrate an overall accuracy of 0.904, surpassing that of the China Land Cover Dataset (CLCD, 0.889) and the China Annual Tree Cover Dataset (CATCD, 0.850). In particular, our results revealed an overall upward trend in forest area, from 119.84 to 132.09 million hectares (Mha), that was rarely captured in previous studies and closely aligns with National Forest Inventory (NFI) data (R² = 0.86). Finally, by integrating time-series analysis with classification results, this study shifted forest mapping from a traditional static framework to a dynamic temporal perspective, reducing uncertainties associated with direct interannual comparisons and estimating forest gains of 23.87 Mha and losses of 12.56 Mha. Notably, the reconstructed data improved forest mapping in terms of completeness, resolution, and accuracy. In Guangxi, the annual product detected 11.24 Mha more forest gain than the 10-year composite, indicating better completeness. It also offered finer spatial resolution (30 m vs. 500 m) and higher overall accuracy (0.879 vs. 0.853) compared to the widely used cloud-affected annual product. Overall, this study presents a robust framework for precise forest monitoring in cloudy regions.
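The mapping step applies a random forest classifier to features derived from reconstructed imagery and NDVI time series. The sketch below shows that general pattern with scikit-learn on synthetic NDVI-like features; the real feature construction, sample transfer, and per-year processing are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: per-pixel features from a reconstructed NDVI time series
# (e.g. 12 monthly NDVI values plus a few spectral bands); 1 = forest, 0 = non-forest.
rng = np.random.default_rng(42)
n_pixels, n_features = 2000, 16
X = rng.uniform(0.0, 1.0, size=(n_pixels, n_features))
y = (X[:, :12].mean(axis=1) > 0.55).astype(int)       # toy rule: high mean NDVI -> forest

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print(f"overall accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```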
Title: Unveiling spatiotemporal forest cover patterns breaking the cloud barrier: Annual 30 m mapping in cloud-prone southern China from 2000 to 2020. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 232, pp. 848–864.
Pub Date: 2026-01-13 | DOI: 10.1016/j.isprsjprs.2025.12.013
Olaf Wysocki, Benedikt Schwab, Manoj Kumar Biswanath, Michael Greza, Qilin Zhang, Jingwei Zhu, Thomas Froech, Medhini Heeramaglore, Ihab Hijazi, Khaoula Kanna, Mathias Pechinger, Zhaiyu Chen, Yao Sun, Alejandro Rueda Segura, Ziyang Xu, Omar AbdelGafar, Mansour Mehranfar, Chandan Yeshwanth, Yueh-Cheng Liu, Hadi Yazdi, Boris Jutzi
Urban Digital Twins (UDTs) have become essential for managing cities and integrating complex, heterogeneous data from diverse sources. Creating UDTs involves challenges at multiple process stages, including acquiring accurate 3D source data, reconstructing high-fidelity 3D models, keeping models up to date, and ensuring seamless interoperability with downstream tasks. Current datasets are usually limited to one part of the processing chain, hampering comprehensive UDT validation. To address these challenges, we introduce the first comprehensive multimodal Urban Digital Twin benchmark dataset: TUM2TWIN. This dataset includes georeferenced, semantically aligned 3D models and networks along with various terrestrial, mobile, aerial, and satellite observations, comprising 32 data subsets over roughly 100,000 m² and currently 767 GB of data. By ensuring georeferenced indoor–outdoor acquisition, high accuracy, and multimodal data integration, the benchmark supports robust analysis of sensors and the development of advanced reconstruction methods. Additionally, we explore downstream tasks demonstrating the potential of TUM2TWIN, including novel view synthesis with NeRF and Gaussian Splatting, solar potential analysis, point cloud semantic segmentation, and LoD3 building reconstruction. We are convinced this contribution lays a foundation for overcoming current limitations in UDT creation, fostering new research directions and practical solutions for smarter, data-driven urban environments. The project is available under: https://tum2t.win.
Title: TUM2TWIN: Introducing the large-scale multimodal urban digital twin benchmark dataset. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 232, pp. 810–830.