
ISPRS Journal of Photogrammetry and Remote Sensing: Latest Publications

Set-CVGL: A new perspective on cross-view geo-localization with unordered ground-view image sets
IF 12.2 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2026-01-30 | DOI: 10.1016/j.isprsjprs.2026.01.037
Qiong Wu, Panwang Xia, Lei Yu, Yi Liu, Mingtao Xiong, Liheng Zhong, Jingdong Chen, Ming Yang, Yongjun Zhang, Yi Wan
Cross-view geo-localization (CVGL) has been widely applied in fields such as robotic navigation and geographic information coupling. Existing approaches primarily use single images or fixed-view image sequences as queries, which limits perspective diversity. In contrast, when humans determine their location visually, they typically move around to gather multiple perspectives. This behavior suggests that integrating diverse visual cues can improve geo-localization reliability. Therefore, we propose a novel task: Cross-View Image Set Geo-Localization (Set-CVGL), which gathers multiple images with diverse perspectives as a query set for localization. To support this task, we introduce SetVL-480K, a benchmark comprising 480,000 ground images captured worldwide and their corresponding satellite images, with each satellite image corresponding to an average of 40 ground images from varied perspectives and locations. Furthermore, we propose FlexGeo, a flexible method designed for Set-CVGL that can also adapt to single-image and image-sequence inputs. FlexGeo includes two key modules: the Similarity-guided Feature Fuser (SFF), which adaptively fuses image features without prior content dependency, and the Individual-level Attributes Learner (IAL), which leverages the geo-attributes of each image for comprehensive scene perception. FlexGeo consistently outperforms existing methods on SetVL-480K and four public datasets (VIGOR, University-1652, SeqGeo, and KITTI-CVL), achieving a 2.34× improvement in localization accuracy on SetVL-480K. The code and dataset will be available at https://github.com/Mabel0403/Set-CVGL.
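For illustration, adaptive fusion of an unordered query set's features might look as follows. This is only a sketch of the general idea of similarity-guided fusion; the paper's SFF is a learned module whose actual formulation is not given here, and the softmax-over-mean-similarity weighting below is an assumption.

```python
import numpy as np

def similarity_weighted_fusion(features):
    """Fuse a set of image feature vectors into one set descriptor by
    weighting each feature by its mean cosine similarity to the others,
    so outlier views contribute less. `features`: (N, D), one row per image."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T                                  # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)                     # ignore self-similarity
    scores = sim.sum(axis=1) / (len(f) - 1)        # mean similarity per image
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax weights
    return weights @ features                      # (D,) fused descriptor

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 128))                  # 5 query images, 128-dim features
fused = similarity_weighted_fusion(feats)
print(fused.shape)                                 # (128,)
```

The fused descriptor can then be matched against satellite-image descriptors exactly like a single-image embedding, which is what makes a set query drop-in compatible with retrieval-style CVGL.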
Citations: 0
An advanced decoupled polarimetric calibration method for the LuTan-1 hybrid- and quadrature-polarimetric modes
IF 12.2 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2026-01-30 | DOI: 10.1016/j.isprsjprs.2026.01.035
Lizhi Liu, Lijie Huang, Yiding Wang, Pingping Lu, Bo Li, Liang Li, Robert Wang, Yirong Wu
During solar maximum, low-frequency spaceborne Polarimetric Synthetic Aperture Radar (PolSAR) systems suffer ionosphere-induced distortions that couple with system-induced polarimetric distortions. High-precision decoupled polarimetric calibration is therefore essential for obtaining high-fidelity PolSAR data. Existing point-target calibration methods lack a general approach for unbiased estimation of polarimetric distortion across multiple polarimetric modes and calibrator combinations, particularly under spatiotemporally varying ionospheric conditions. To address this, we derive the necessary conditions for unbiased estimation and propose a General Polarimetric Calibration Method (GPCM) applicable to various configurations. In addition, Enhanced Multi-Look Autofocus (EMLA), a modified STEC inversion method, is introduced for precise inversion of Slant Total Electron Content (STEC), enabling estimation of the spatiotemporally varying Faraday rotation angle for system distortion decoupling and PolSAR data compensation. GPCM applied to LuTan-1 HP and QP data results in HH/VV amplitude and phase imbalances of 0.0433 dB (STD: 0.017) and −0.60° (STD: 1.02°), respectively, measured on trihedral corner reflectors. Calibration results also indicate that QP mode isolation exceeds 39 dB, while estimated axial ratios for HP mode are lower than 0.115 dB. Under comparable conditions, the results of GPCM are consistent with the Freeman analytical method. Furthermore, EMLA outperforms existing STEC inversion methods (COA, MLA, and GIM-based mapping), achieving a mean absolute difference of 1.95 TECU compared with in-situ measurements while demonstrating applicability to general scenes. Overall, the effectiveness of GPCM and EMLA in the LuTan-1 calibration mission is confirmed, indicating their potential for future PolSAR calibration tasks. The primary calibrated experimental dataset is publicly available at https://radars.ac.cn/web/data/getData?dataType=HPSAREADEN&pageType=en.
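Not the paper's GPCM/EMLA implementation, but a minimal numpy sketch of the standard one-way Faraday rotation distortion model (one common sign/ordering convention is assumed here), illustrating why a known rotation angle lets the ionospheric distortion be decoupled from the scattering matrix:

```python
import numpy as np

def faraday_rotation(S, omega):
    """Apply one-way Faraday rotation omega (radians) to a 2x2 scattering
    matrix S: M = R @ S @ R, with R a plane rotation. The convention
    (sign and pre/post ordering) is assumed for illustration."""
    c, s = np.cos(omega), np.sin(omega)
    R = np.array([[c, s],
                  [-s, c]])
    return R @ S @ R

def compensate(M, omega):
    """Undo the distortion by rotating back with -omega, since
    R(-omega) @ R(omega) = I."""
    return faraday_rotation(M, -omega)

S = np.array([[1.0, 0.1],
              [0.1, 0.8]])                 # toy scattering matrix
M = faraday_rotation(S, np.deg2rad(10))    # distorted observation
S_hat = compensate(M, np.deg2rad(10))
print(np.allclose(S_hat, S))               # True
```

In practice the rotation angle is not known and varies across the scene, which is exactly why a precise STEC inversion such as EMLA is needed before this kind of compensation can be applied.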
Citations: 0
RegScorer: Learning to select the best transformation of point cloud registration
IF 12.2 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2026-01-27 | DOI: 10.1016/j.isprsjprs.2026.01.034
Xiaochen Yang, Haiping Wang, Yuan Liu, Bisheng Yang, Zhen Dong
We propose RegScorer, a model that learns to identify the optimal transformation to register unaligned point clouds. Existing registration advancements can generate a set of candidate transformations, which are then evaluated using conventional metrics such as Inlier Ratio (IR), Mean Squared Error (MSE), or Chamfer Distance (CD). The candidate achieving the best score is selected as the final result. However, we argue that these metrics often fail to select the correct transformation, especially in challenging scenarios involving symmetric objects, repetitive structures, or low-overlap regions. This leads to significant degradation in registration performance, a problem that has long been overlooked. The core issue lies in their limited focus on local geometric consistency and inability to capture two key conflict cases of misalignment: (1) point pairs that are spatially close after alignment but have conflicting features, and (2) point pairs with high feature similarity but large spatial distances after alignment. To address this, we propose RegScorer, which models both the spatial and feature relationships of all point pairs. This allows RegScorer to learn to capture the above conflict cases and provides a more reliable score for transformation quality. On the 3DLoMatch and ScanNet datasets, RegScorer demonstrates 19.3% and 14.1% improvements in registration recall, leading to 4.7% and 5.1% accuracy gains in multiview registration. Moreover, when generalized to symmetric and low-texture outdoor scenes, RegScorer achieves a 25% increase in transformation recall over the IR metric, highlighting its robustness and generalizability. The pre-trained model and the complete code repository can be accessed at https://github.com/WHU-USI3DV/RegScorer.
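The conventional candidate-scoring metrics named above can be sketched as follows. This is a toy numpy version assuming row-aligned correspondences and brute-force nearest neighbours, not the paper's code; it shows the purely geometric nature of these scores, which is what RegScorer's joint spatial-plus-feature modeling is meant to fix.

```python
import numpy as np

def apply_tf(T, pts):
    """Apply a 4x4 rigid transform to an (N, 3) point array."""
    return pts @ T[:3, :3].T + T[:3, 3]

def inlier_ratio(src, dst, T, tau=0.1):
    """Fraction of putative correspondences (row i of src matched to row i
    of dst) whose residual after applying T is below tau."""
    res = np.linalg.norm(apply_tf(T, src) - dst, axis=1)
    return float((res < tau).mean())

def chamfer_distance(src, dst, T):
    """Symmetric mean nearest-neighbour distance between aligned clouds."""
    a, b = apply_tf(T, src), dst
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # (N, M) pairwise
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

rng = np.random.default_rng(1)
src = rng.normal(size=(100, 3))
T = np.eye(4)
T[:3, 3] = [0.5, 0.0, 0.0]          # ground-truth translation
dst = apply_tf(T, src)
print(inlier_ratio(src, dst, T))     # 1.0 for the true transform
print(chamfer_distance(src, dst, T))
```

Note that a wrong transform mapping a symmetric object onto itself can also score near-perfectly under both metrics, which is precisely the failure mode the abstract describes.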
Citations: 0
Multispectral airborne laser scanning for tree species classification: A benchmark of machine learning and deep learning algorithms
IF 12.2 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2026-01-27 | DOI: 10.1016/j.isprsjprs.2026.01.031
Josef Taher, Eric Hyyppä, Matti Hyyppä, Klaara Salolahti, Xiaowei Yu, Leena Matikainen, Antero Kukko, Matti Lehtomäki, Harri Kaartinen, Sopitta Thurachen, Paula Litkey, Ville Luoma, Markus Holopainen, Gefei Kong, Hongchao Fan, Petri Rönnholm, Matti Vaaja, Antti Polvivaara, Samuli Junttila, Mikko Vastaranta, Juha Hyyppä
Climate-smart and biodiversity-preserving forestry demands precise information on forest resources, extending to the individual tree level. Multispectral airborne laser scanning (ALS) has shown promise in automated point cloud processing, but challenges remain in leveraging deep learning techniques and identifying rare tree species in class-imbalanced datasets. This study addresses these gaps by conducting a comprehensive benchmark of deep learning and traditional shallow machine learning methods for tree species classification. For the study, we collected high-density multispectral ALS data (>1000 pts/m²) at three wavelengths using the FGI-developed HeliALS system, complemented by existing Optech Titan data (35 pts/m²), to evaluate the species classification accuracy of various algorithms in a peri-urban study area located in southern Finland. We established a field reference dataset of 6326 segments across nine species using a newly developed browser-based crowdsourcing tool, which facilitated efficient data annotation. The ALS data, including a training dataset of 1065 segments, was shared with the scientific community to foster collaborative research and diverse algorithmic contributions. Based on 5261 test segments, our findings demonstrate that point-based deep learning methods, particularly a point transformer model, outperformed traditional machine learning and image-based deep learning approaches on high-density multispectral point clouds. For the high-density ALS dataset, a point transformer model provided the best performance, reaching an overall (macro-average) accuracy of 87.9% (74.5%) with a training set of 1065 segments and 92.0% (85.1%) with a larger training set of 5000 segments. With 1065 training segments, the best image-based deep learning method, DetailView, reached an overall (macro-average) accuracy of 84.3% (63.9%), whereas a shallow random forest (RF) classifier achieved an overall (macro-average) accuracy of 83.2% (61.3%). For the sparser ALS dataset, an RF model topped the list with an overall (macro-average) accuracy of 79.9% (57.6%), closely followed by the point transformer at 79.6% (56.0%). Importantly, the overall classification accuracy of the point transformer model on the HeliALS data increased from 73.0% with no spectral information to 84.7% with single-channel reflectance, and to 87.9% with spectral information of all three channels. Furthermore, we studied the scaling of the classification accuracy as a function of point density and training set size using 5-fold cross-validation of our dataset. Based on our findings, multispectral information is especially beneficial for sparse point clouds with 1–50 pts/m². Moreover, we observed that the classification error follows a power law with respect to the training set size, and that the classification error of the point transformer decreased considerably faster than that of the RF model as the training set size increased.
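The paired "overall (macro-average)" figures reported above diverge under class imbalance because macro-averaging weights rare species equally with common ones. A minimal sketch of the two measures, using hypothetical toy labels:

```python
import numpy as np

def overall_and_macro_accuracy(y_true, y_pred):
    """Overall accuracy: fraction correct over all segments.
    Macro-average accuracy: unweighted mean of per-class recalls,
    so each species counts equally regardless of its frequency."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    overall = float((y_true == y_pred).mean())
    recalls = [float((y_pred[y_true == c] == c).mean())
               for c in np.unique(y_true)]
    return overall, float(np.mean(recalls))

# Imbalanced toy example: 8 'pine' segments, 2 'oak' segments,
# with one rare-class segment misclassified.
y_true = ["pine"] * 8 + ["oak"] * 2
y_pred = ["pine"] * 8 + ["pine", "oak"]
overall, macro = overall_and_macro_accuracy(y_true, y_pred)
print(overall, macro)  # 0.9 0.75
```

A single mistake on the rare class costs 10 percentage points overall but 25 points macro-averaged, which is why the macro figures in the abstract sit well below the overall ones.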
Citations: 0
Satellite-based heat Index estimatioN modEl (SHINE): An integrated machine learning approach for the conterminous United States
IF 12.2 | CAS Tier 1 (Earth Science) | Q1 GEOGRAPHY, PHYSICAL | Pub Date: 2026-01-23 | DOI: 10.1016/j.isprsjprs.2026.01.018
Seyed Babak Haji Seyed Asadollah, Giorgos Mountrakis, Stephen B. Shaw
The accelerating frequency, duration and intensity of extreme heat events demand accurate, spatially complete heat exposure metrics. Here, a modeling approach is presented for estimating the daily-maximum Heat Index (HI) at 1 km spatial resolution. Our study area covered the conterminous United States (CONUS) during the warm season (May to September) between 2003 and 2023. More than 4.6 million observations from approximately 2000 weather stations were paired with weather-related, geographical, land cover and historical climatic factors to develop the proposed Satellite-based Heat Index estimatioN modEl (SHINE). Selected explanatory variables at daily temporal intervals included reanalysis products from Modern-Era Retrospective analysis for Research and Applications (MERRA) and direct satellite products from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor.
The most influential variables for HI estimation were the MERRA surface layer height and specific humidity products and the dual-pass MODIS daily land surface temperatures. These were followed by land cover products capturing water and forest presence, historical norms of wind speed and maximum temperature, elevation information and the corresponding day of year. An Extreme Gradient Boosting (XGBoost) regressor trained with spatial cross-validation explained 93% of the variance (R² = 0.93) and attained a Root Mean Square Error (RMSE) of 1.9°C and a Mean Absolute Error (MAE) of 1.4°C. Comparison of alternative configurations showed that while a MERRA-only model provided slightly higher accuracy (RMSE of 1.8°C), its coarse resolution failed to capture fine-scale heat variations. Conversely, a MODIS-only model offered kilometer-scale spatial resolution but with higher estimation errors (RMSE of 2.9°C). Integrating both MERRA and MODIS sources enabled SHINE to maintain spatial detail and preserve accuracy, underscoring the complementary strengths of reanalysis and satellite products. SHINE also demonstrated resistance to missing MODIS LST observations due to clouds, as the additional RMSE error was approximately 0.5°C in the worst case of missing both morning and afternoon MODIS land surface temperature observations. Spatial error analysis revealed <1.7°C RMSE in arid and Mediterranean zones but larger, more heterogeneous errors in the humid Midwest and High Plains. From the policy perspective, and considering the HI operational range for public-health heat effects, the proposed SHINE approach outperformed typically used proxies, such as land surface and air temperature. The resulting 1 km daily HI estimations can potentially be used as the foundation of the first wall-to-wall, multi-decadal, high resolution heat dataset for CONUS and offer actionable information for public-health heat studies, energy-demand forecasting and environmental-justice implications.
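The abstract does not state its exact HI formulation, but the Heat Index is conventionally computed from air temperature and relative humidity via the NWS Rothfusz regression. A simplified sketch follows; it omits the NWS adjustment terms and the low-HI fallback, so treat it as illustrative rather than as the paper's target variable.

```python
def heat_index_f(T, RH):
    """NWS Rothfusz regression for Heat Index (degrees Fahrenheit).
    T: air temperature in deg F, RH: relative humidity in percent.
    Simplified: valid roughly for HI >= 80 F; the NWS adjustment
    branches for extreme humidity are omitted here."""
    return (-42.379 + 2.04901523 * T + 10.14333127 * RH
            - 0.22475541 * T * RH - 6.83783e-3 * T ** 2
            - 5.481717e-2 * RH ** 2 + 1.22874e-3 * T ** 2 * RH
            + 8.5282e-4 * T * RH ** 2 - 1.99e-6 * T ** 2 * RH ** 2)

hi = heat_index_f(90.0, 70.0)  # hot, humid afternoon
print(round(hi, 1))            # ~105.9: feels 16 F hotter than the air reads
```

The gap between the 90°F air temperature and the ~106°F Heat Index in this example is why the paper argues that HI, rather than air or land surface temperature proxies, is the right target for public-health heat metrics.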
{"title":"Satellite-based heat Index estimatioN modEl (SHINE): An integrated machine learning approach for the conterminous United States","authors":"Seyed Babak Haji Seyed Asadollah,&nbsp;Giorgos Mountrakis,&nbsp;Stephen B. Shaw","doi":"10.1016/j.isprsjprs.2026.01.018","DOIUrl":"10.1016/j.isprsjprs.2026.01.018","url":null,"abstract":"<div><div>The accelerating frequency, duration and intensity of extreme heat events demand accurate, spatially complete heat exposure metrics. Here, a modeling approach is presented for estimating the daily-maximum Heat Index (HI) at 1 km spatial resolution. Our study area covered the conterminous United States (CONUS) during the warm season (May to September) between 2003 and 2023. More than 4.6 million observations from approximately 2000 weather stations were paired with weather-related, geographical, land cover and historical climatic factors to develop the proposed Satellite-based Heat Index estimatioN modEl (SHINE). Selected explanatory variables at daily temporal intervals included reanalysis products from Modern-Era Retrospective analysis for Research and Applications (MERRA) and direct satellite products from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor.</div><div>The most influential variables for HI estimation were the MERRA surface layer height and specific humidity products and the dual-pass MODIS daily land surface temperatures. These were followed by land cover products capturing water and forest presence, historical norms of wind speed and maximum temperature, elevation information and the corresponding day of year. An Extreme Gradient Boosting (XGBoost) regressor trained with spatial cross-validation explained 93 % of the variance (R<sup>2</sup> = 0.93) and attained a Root Mean Square Error (RMSE) of 1.9°C and a Mean Absolute Error (MAE) of 1.4°C. 
Comparison of alternative configurations showed that while a MERRA-only model provided slightly higher accuracy (RMSE of 1.8°C), its coarse resolution failed to capture fine-scale heat variations. Conversely, a MODIS-only model offered kilometer-scale spatial resolution but with higher estimation errors (RMSE of 2.9°C). Integrating both MERRA and MODIS sources enabled SHINE to maintain spatial detail and preserved accuracy, underscoring the complementary strengths of reanalysis and satellite products. SHINE also demonstrated resistance to missing MODIS LST observations due to clouds as the additional RMSE error was approximately 0.5°C in the worst case of missing both morning and afternoon MODIS land surface temperature observations. Spatial error analysis revealed &lt;1.7°C RMSE in arid and Mediterranean zones but larger, more heterogeneous errors in the humid Midwest and High Plains. From the policy perspective and considering the HI operational range for public-health heat effects, the proposed SHINE approach outperformed typically used proxies, such as land surface and air temperature. 
The resulting 1 km daily HI estimations can potentially be used as the foundation of the first wall-to-wall, multi-decadal, high resolution heat dataset for CONUS and offer actionable information for public-health heat studies, energy-demand forecasting and environmental-justice implications.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"233 ","pages":"Pages 209-230"},"PeriodicalIF":12.2,"publicationDate":"2026-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
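The daily-maximum Heat Index that SHINE estimates is conventionally computed from air temperature and relative humidity with the NWS Rothfusz regression; a minimal sketch of that reference formula (the coefficients below are the published Rothfusz values, not something stated in this abstract):

```python
def heat_index_f(t_f, rh):
    """NWS Rothfusz regression: Heat Index in deg F from air temperature
    t_f (deg F) and relative humidity rh (%). Valid roughly for
    t_f >= 80 deg F and rh >= 40 %."""
    return (-42.379
            + 2.04901523 * t_f
            + 10.14333127 * rh
            - 0.22475541 * t_f * rh
            - 6.83783e-3 * t_f ** 2
            - 5.481717e-2 * rh ** 2
            + 1.22874e-3 * t_f ** 2 * rh
            + 8.5282e-4 * t_f * rh ** 2
            - 1.99e-6 * t_f ** 2 * rh ** 2)

# Canonical NWS example: 96 deg F at 65 % humidity feels like about 121 deg F.
print(round(heat_index_f(96, 65)))  # → 121
```

For readings below roughly 80 °F, or at very low humidity, the NWS applies separate adjustment terms that this sketch omits.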
Citations: 0
A weakly supervised approach for large-scale agricultural parcel extraction from VHR imagery via foundation models and adaptive noise correction
IF 12.2 CAS Tier 1 (Earth Science) Q1 GEOGRAPHY, PHYSICAL Pub Date : 2026-01-23 DOI: 10.1016/j.isprsjprs.2026.01.030
Wenpeng Zhao , Shanchuan Guo , Xueliang Zhang , Pengfei Tang , Xiaoquan Pan , Haowei Mu , Chenghan Yang , Zilong Xia , Zheng Wang , Jun Du , Peijun Du
Large-scale and fine-grained extraction of agricultural parcels from very-high-resolution (VHR) imagery is essential for precision agriculture. However, traditional parcel segmentation methods and fully supervised deep learning approaches typically face scalability constraints due to costly manual annotations, while extraction accuracy is generally limited by the inadequate capacity of segmentation architectures to represent complex agricultural scenes. To address these challenges, this study proposes a Weakly Supervised approach for agricultural Parcel Extraction (WSPE), which leverages publicly available 10 m resolution images and labels to guide the delineation of 0.5 m agricultural parcels. The WSPE framework integrates a tabular foundation model (Tabular Prior-data Fitted Network, TabPFN) with a vision foundation model (Segment Anything Model 2, SAM2) to initially generate pseudo-labels with high geometric precision. These pseudo-labels are further refined for semantic accuracy through an adaptive noisy label correction module based on curriculum learning. The refined knowledge is distilled into the proposed Triple-branch Kolmogorov-Arnold enhanced Boundary-aware Network (TKBNet), a prompt-free end-to-end architecture enabling rapid inference and scalable deployment, with outputs vectorized through post-processing. The effectiveness of WSPE was evaluated on a self-constructed dataset from nine agricultural zones in China, the public AI4Boundaries and FGFD datasets, and three large-scale regions: Zhoukou, Hengshui, and Fengcheng. Results demonstrate that WSPE and its integrated TKBNet achieve robust performance across datasets with diverse agricultural scenes, validated by extensive comparative and ablation experiments. The weakly supervised approach achieves 97.7 % of fully supervised performance, and large-scale deployment verifies its scalability and generalization, offering a practical solution for fine-grained, large-scale agricultural parcel mapping.
Code is available at https://github.com/zhaowenpeng/WSPE.
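The abstract does not spell out the adaptive noisy-label correction module, but curriculum-based correction is commonly realized as small-loss selection: trust only the pseudo-labels with the smallest training loss, and discard a growing fraction of them as training progresses. A hedged sketch of that generic idea (the schedule, `warmup_epochs`, and `noise_rate` are illustrative assumptions, not WSPE's actual design):

```python
def select_clean(losses, epoch, warmup_epochs=10, noise_rate=0.3):
    """Small-loss curriculum: keep the samples whose loss is lowest,
    discarding a fraction that grows from 0 up to `noise_rate` over the
    first `warmup_epochs` epochs. Returns the kept sample indices."""
    drop = noise_rate * min(epoch / warmup_epochs, 1.0)
    keep = max(1, int(round(len(losses) * (1.0 - drop))))
    ranked = sorted(range(len(losses)), key=lambda i: losses[i])
    return sorted(ranked[:keep])

# Per-sample losses of six pseudo-labelled parcels after one forward pass.
losses = [0.1, 2.5, 0.3, 0.2, 1.8, 0.4]
print(select_clean(losses, epoch=0))   # → [0, 1, 2, 3, 4, 5] (trust everything early)
print(select_clean(losses, epoch=10))  # → [0, 2, 3, 5] (high-loss labels dropped)
```

The intuition is that noisy labels tend to incur persistently high loss, so delaying their exclusion lets the model first fit the clean majority.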
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 233, Pages 180–208.
Citations: 0
Varying sensitivities of RED-NIR-based vegetation indices to the input reflectance affect the detected long-term trends
IF 12.2 CAS Tier 1 (Earth Science) Q1 GEOGRAPHY, PHYSICAL Pub Date : 2026-01-23 DOI: 10.1016/j.isprsjprs.2026.01.028
Qing Tian , Hongxiao Jin , Rasmus Fensholt , Torbern Tagesson , Luwei Feng , Feng Tian
Widespread vegetation changes have been evidenced by satellite-observed long-term trends over decades in vegetation indices (VIs). However, many issues can affect the derived VIs trends, among which the inherent difference between VIs calculated from the same input reflectance has not been investigated. Here, we compared global long-term trends in six widely used RED-NIR (near-infrared)-based VIs calculated from the MODIS nadir bidirectional reflectance distribution function (BRDF) adjusted product (MCD43A4) during 2000–2023, including normalized difference vegetation index (NDVI), kernel NDVI (kNDVI), 2-band enhanced vegetation index (EVI2), near-infrared reflectance of vegetation (NIRv), difference vegetation index (DVI), and plant phenology index (PPI). We identified two distinct groups of VIs, i.e., (1) NDVI and kNDVI, and (2) EVI2, NIRv, DVI, and PPI, which shared similar trends within the group but showed significant directional differences between groups in 17.4% of the studied area. Only 20.5% of the global land surface showed consistent trends. Based on the radiation transfer model and remote sensing observations, we demonstrated that the two groups of VIs differed in their sensitivities to RED and NIR reflectance. These differences lead to inconsistent long-term trends arising from variations in vegetation type, mixed pixel effects, saturation, and asynchronous changes in vegetation chlorophyll content and structural attributes. Comparisons with ground-observed leaf area index (LAI), flux tower gross primary productivity (GPP), and PhenoCam green chromatic coordinate (GCC) further revealed that the EVI2, NIRv, DVI, and PPI trends corresponded more closely with LAI and GPP trends, whereas the NDVI and kNDVI trends were more strongly associated with GCC trends. 
Our results highlight that long-term vegetation trends derived from different RED–NIR-based VIs must be interpreted by considering their intrinsic sensitivities to biophysical properties, which is essential for reliable assessments of vegetation dynamics.
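All six indices are computed from the same RED and NIR reflectance, so their diverging trends stem purely from functional form. A sketch of the standard definitions for five of them (kNDVI in its common tanh(NDVI²) simplification; PPI is omitted because it additionally requires a site-level maximum DVI):

```python
import math

def red_nir_indices(red, nir):
    """Standard RED-NIR vegetation indices from surface reflectance (0-1)."""
    ndvi = (nir - red) / (nir + red)
    return {
        "NDVI": ndvi,
        "kNDVI": math.tanh(ndvi ** 2),                     # common simplification
        "EVI2": 2.5 * (nir - red) / (nir + 2.4 * red + 1.0),
        "NIRv": ndvi * nir,                                # NDVI-weighted NIR
        "DVI": nir - red,
    }

# Typical green-vegetation reflectance: low RED (chlorophyll absorption), high NIR.
vis = red_nir_indices(red=0.10, nir=0.40)
print({k: round(v, 4) for k, v in vis.items()})
```

Note how NDVI and kNDVI are ratio-based (normalized by total brightness), while EVI2, NIRv, and DVI retain an absolute NIR component, which matches the two groups of trend behavior identified in the study.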
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 233, Pages 247–265.
Citations: 0
Weak supervision makes strong details: fine-grained object recognition in remote sensing images via regional diffusion with VLM
IF 12.2 CAS Tier 1 (Earth Science) Q1 GEOGRAPHY, PHYSICAL Pub Date : 2026-01-23 DOI: 10.1016/j.isprsjprs.2026.01.024
Liuqian Wang, Jing Zhang, Guangming Mi, Li Zhuo
Fine-grained object recognition (FGOR) is gaining increasing attention in automated remote sensing analysis and interpretation (RSAI). However, the full potential of FGOR in remote sensing images (RSIs) is still constrained by several key issues: the reliance on high-quality labeled data, the difficulty of reconstructing fine details in low-resolution images, and the limited robustness of FGOR models in distinguishing similar object categories. In response, we propose an automatic fine-grained object recognition network (AutoFGOR) that follows a hierarchical dual-pipeline architecture for object analysis at global and regional levels. Specifically, Pipeline I is a region detection network, which leverages a geometric invariance module for weakly supervised learning to improve the detection accuracy of sparsely labeled RSIs and extract category-free regions. On top of that, Pipeline II performs regional diffusion with a vision language model (RD-VLM), pioneering the combination of Stable Diffusion XL (SDXL) and the Large Language and Vision Assistant (LLaVA) through a specially designed adaptive resolution adaptor (ARA) for object region super-resolution reconstruction, fundamentally solving the difficulties of feature extraction from low-quality regions and fine-grained feature mining. In addition, we introduce a winner-takes-all (WTA) strategy that utilizes a voting mechanism to enhance the reliability of fine-grained classification in complex scenes. Experimental results on the FAIR1M-v2.0, VEDAI, and HRSC2016 datasets demonstrate that AutoFGOR achieves 31.72%, 80.25%, and 88.05% mAP, respectively, with highly competitive performance. In addition, the ×4 reconstruction results achieve scores of 0.5275 and 0.8173 on the MANIQA and CLIP-IQA indicators, respectively. The code will be available on GitHub: https://github.com/BJUT-AIVBD/AutoFGOR.
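The winner-takes-all (WTA) step aggregates several candidate fine-grained predictions by vote; a minimal sketch of plain majority voting (the tie-breaking rule here is an assumption, as the abstract does not specify one):

```python
from collections import Counter

def winner_takes_all(predictions):
    """Majority vote over candidate fine-grained labels; ties are broken
    by first occurrence (a hypothetical rule, not taken from the paper)."""
    counts = Counter(predictions)
    top = max(counts.values())
    for label in predictions:          # preserve input order for ties
        if counts[label] == top:
            return label

# Three classifications of the same object region, e.g. from augmented views.
print(winner_takes_all(["Boeing737", "Boeing747", "Boeing737"]))  # → Boeing737
```

Voting of this kind tends to suppress isolated misclassifications between visually similar categories, which is the reliability gain the abstract attributes to WTA.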
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 233, Pages 231–246.
Citations: 0
Knowledge distillation with spatial semantic enhancement for remote sensing object detection
IF 12.2 CAS Tier 1 (Earth Science) Q1 GEOGRAPHY, PHYSICAL Pub Date : 2026-01-22 DOI: 10.1016/j.isprsjprs.2026.01.017
Kai Hu , Jiaxin Li , Nan Ji , Xueshang Xiang , Kai Jiang , Xieping Gao
Knowledge distillation is extensively utilized in remote sensing object detection within resource-constrained environments. Among knowledge distillation methods, prediction imitation has garnered significant attention due to its ease of deployment. However, prevailing prediction imitation paradigms, which rely on an isolated, point-wise alignment of prediction scores, neglect the crucial spatial semantic information. This oversight is particularly detrimental in remote sensing images due to the abundance of objects with weak feature responses. To this end, we propose a novel Spatial Semantic Enhanced Knowledge Distillation framework, called S2EKD, for remote sensing object detection. Through two complementary modules, S2EKD shifts the focus of prediction imitation from matching isolated values to learning structured spatial semantic information. First, for classification distillation, we introduce a Weak-feature Response Enhancement Module, which models the structured spatial relationships between objects and their background to establish an initial perception of objects with weak feature responses. Second, to further capture more refined spatial information, we propose a Teacher Boundary Refinement Module for localization distillation. It provides robust boundary guidance by constructing a regression target enriched with more comprehensive spatial information. Furthermore, we introduce a Feature Mapping mechanism to ensure this spatial semantic knowledge is effectively utilized. Through extensive experiments on the DIOR and DOTA-v1.0 datasets, our method’s superiority is consistently demonstrated across diverse architectures, including both single-stage and two-stage detectors. The results show that our S2EKD achieves state-of-the-art results and, in some cases, even surpasses the performance of its teacher model. The code will be available soon.
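The point-wise prediction imitation that S2EKD argues is insufficient is classically implemented as a temperature-softened KL divergence between teacher and student class scores (Hinton-style distillation). A minimal sketch of that baseline, not of S2EKD itself:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Point-wise prediction imitation: KL(teacher || student) on
    temperature-softened class scores, scaled by T^2 as is customary."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * temperature ** 2

# A student that matches the teacher exactly incurs zero imitation loss.
print(kd_loss([2.0, 0.5, 0.1], [2.0, 0.5, 0.1]))  # → 0.0
```

Because this loss aligns each class score independently per location, it carries no notion of the spatial relationships between objects and background, which is the gap the two proposed modules target.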
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 233, Pages 144–157.
Citations: 0
Identifying green leaf and leaf phenology of large trees and forests by time series PlanetScope and Sentinel-2 images and the chlorophyll and green leaf indicator (CGLI)
IF 12.2 CAS Tier 1 (Earth Science) Q1 GEOGRAPHY, PHYSICAL Pub Date : 2026-01-22 DOI: 10.1016/j.isprsjprs.2026.01.027
Baihong Pan , Xiangming Xiao , Li Pan , Andrew D Richardson , Yujie Liu , Yuan Yao , Cheng Meng , Yanhua Xie , Chenchen Zhang , Yuanwei Qin
Plant phenology serves as a vital indicator of plants’ responses to climate variation and change. To date, our knowledge and data products of plant leaf phenology at the scales of large trees and forest stands are very limited, in part due to the lack of time series image data at very high spatial resolution (VHSR, meters). Here, we investigated surface reflectance (BLUE, GREEN, RED) and vegetation indices over a large cottonwood tree, using images from PlanetScope (daily, 3-m) and Sentinel-2A/B (5-day, 10-m) in 2023 and in-situ field photos. At the leaf scale, a green leaf has a spectral signature of BLUE < GREEN > RED, as chlorophyll pigment absorbs more blue and red light than green light, which we name the chlorophyll and green leaf indicator (CGLI); a dead leaf has BLUE < GREEN < RED. At the tree scale, a tree with only branches and a trunk (no green leaves) has BLUE < GREEN < RED, while a tree with green leaves has BLUE < GREEN > RED. We evaluated the start of season (SOS) and end of season (EOS) of the cottonwood tree, derived from (1) vegetation index (VI) data with three methods (VI-slope-, VI-ratio-, and VI-threshold-based methods) and (2) surface reflectance data with the CGLI-based method. To evaluate the broader applicability of the CGLI-based method, we applied the same workflow to five deciduous broadleaf forest sites within the National Ecological Observatory Network, equipped with PhenoCam. At these five sites, we compared phenology metrics (SOS, EOS) derived from the VI- and CGLI-based methods with reference dates derived from PhenoCam Green Chromatic Coordinate (GCC) data. Results show that the CGLI-based method, which classifies each observation as either green leaf or non-green leaf/canopy (binary), is simple and effective in delineating leaf/canopy dynamics and phenology metrics. These findings provide a foundation for monitoring leaf phenology of large trees using satellite data.
ISPRS Journal of Photogrammetry and Remote Sensing, Volume 233, Pages 104–125.
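The CGLI rule described above reduces to an ordering test on the three visible bands; a minimal sketch (the reflectance values in the example are illustrative only):

```python
def cgli_label(blue, green, red):
    """Classify an observation by the BLUE/GREEN/RED reflectance ordering
    described for CGLI: chlorophyll absorbs blue and red more strongly
    than green, so a green leaf/canopy peaks in the GREEN band."""
    if blue < green > red:
        return "green leaf/canopy"                       # BLUE < GREEN > RED
    if blue < green < red:
        return "non-green (dead leaf or leafless tree)"  # BLUE < GREEN < RED
    return "other"

print(cgli_label(blue=0.04, green=0.08, red=0.05))  # → green leaf/canopy
print(cgli_label(blue=0.10, green=0.15, red=0.25))  # → non-green (dead leaf or leafless tree)
```

Applied per acquisition date, this binary label directly yields the green-leaf-on and green-leaf-off transitions used as SOS and EOS.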