ISPRS Journal of Photogrammetry and Remote Sensing: Latest Publications

GN-GCN: Grid neighborhood-based graph convolutional network for spatio-temporal knowledge graph reasoning
IF 10.6 · CAS Tier 1 (Earth Science) · Q1 (GEOGRAPHY, PHYSICAL) · Pub Date: 2025-02-01 · DOI: 10.1016/j.isprsjprs.2025.01.023
Bing Han , Tengteng Qu , Jie Jiang
Because hidden spatio-temporal information is difficult to exploit, spatio-temporal knowledge graph (KG) reasoning in real geographic environments suffers from low accuracy and poor interpretability. This paper proposes a grid neighborhood-based graph convolutional network (GN-GCN) for spatio-temporal KG reasoning. Building on the discretized encoding of spatio-temporal data with the GeoSOT global grid model, the GN-GCN consists of three parts: a static graph neural network, a neighborhood grid calculation, and a time evolution unit, which learn semantic knowledge, spatial knowledge, and temporal knowledge, respectively. The GN-GCN also improves the training accuracy and efficiency of the model through the multiscale aggregation characteristic of GeoSOT and can visualize different probabilities in a spatio-temporal intentional probabilistic grid map. Compared with other existing models (RE-GCN, CyGNet, RE-NET, etc.), GN-GCN reaches a mean reciprocal rank (MRR) of 48.33 and 54.06 on spatio-temporal entity and relation prediction tasks, an increase of 6.32/18.16% and 6.64/15.67%, respectively, achieving state-of-the-art (SOTA) results in spatio-temporal reasoning. The source code of the project is available at https://doi.org/10.18170/DVN/UIS4VC.
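To make the headline metric concrete, below is a minimal sketch (not the authors' code) of how a mean reciprocal rank like the one reported above is computed from a model's score matrix; the toy scores and true indices are hypothetical.

```python
# Illustrative MRR computation for link prediction; toy data, not from the paper.
import numpy as np

def mean_reciprocal_rank(scores: np.ndarray, true_idx: np.ndarray) -> float:
    """scores: (n_queries, n_candidates) model scores for each candidate
    entity/relation; true_idx: (n_queries,) index of the correct answer."""
    true_scores = scores[np.arange(len(true_idx)), true_idx][:, None]
    ranks = 1 + (scores > true_scores).sum(axis=1)  # 1 + strictly-better count
    return float((1.0 / ranks).mean())

rng = np.random.default_rng(0)
scores = rng.random((3, 5))             # 3 queries, 5 candidate answers each
true_idx = np.array([2, 0, 4])
print(f"MRR = {mean_reciprocal_rank(scores, true_idx):.4f}")
```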
{"title":"GN-GCN: Grid neighborhood-based graph convolutional network for spatio-temporal knowledge graph reasoning","authors":"Bing Han ,&nbsp;Tengteng Qu ,&nbsp;Jie Jiang","doi":"10.1016/j.isprsjprs.2025.01.023","DOIUrl":"10.1016/j.isprsjprs.2025.01.023","url":null,"abstract":"<div><div>Owing to the difficulty of utilizing hidden spatio-temporal information, spatio-temporal knowledge graph (KG) reasoning tasks in real geographic environments have issues of low accuracy and poor interpretability. This paper proposes a grid neighborhood-based graph convolutional network (GN-GCN) for spatio-temporal KG reasoning. Based on the discretized process of encoding spatio-temporal data through the GeoSOT global grid model, the GN-GCN consists of three parts: a static graph neural network, a neighborhood grid calculation, and a time evolution unit, which can learn semantic knowledge, spatial knowledge, and temporal knowledge, respectively. The GN-GCN can also improve the training accuracy and efficiency of the model through the multiscale aggregation characteristic of GeoSOT and can visualize different probabilities in a spatio-temporal intentional probabilistic grid map. Compared with other existing models (RE-GCN, CyGNet, RE-NET, etc.), the mean reciprocal rank (MRR) of GN-GCN reaches 48.33 and 54.06 in spatio-temporal entity and relation prediction tasks, increased by 6.32/18.16% and 6.64/15.67% respectively, which achieves state-of-the-art (SOTA) results in spatio-temporal reasoning. The source code of the project is available at <span><span>https://doi.org/10.18170/DVN/UIS4VC</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"220 ","pages":"Pages 728-739"},"PeriodicalIF":10.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143035285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Accurate semantic segmentation of very high-resolution remote sensing images considering feature state sequences: From benchmark datasets to urban applications
IF 10.6 · CAS Tier 1 (Earth Science) · Q1 (GEOGRAPHY, PHYSICAL) · Pub Date: 2025-02-01 · DOI: 10.1016/j.isprsjprs.2025.01.017
Zijie Wang , Jizheng Yi , Aibin Chen , Lijiang Chen , Hui Lin , Kai Xu
Very High-Resolution (VHR) urban remote sensing image segmentation is widely used in ecological environmental protection, urban dynamic monitoring, fine urban management and other related fields. However, the large-scale variation and discrete distribution of objects in VHR images present a significant challenge to accurate segmentation. Existing studies have primarily concentrated on the internal correlations within a single feature, while overlooking the inherent sequential relationships across different feature states. In this paper, a novel Urban Spatial Segmentation Framework (UrbanSSF) is proposed, which fully considers the connections between feature states at different phases. Specifically, the Feature State Interaction (FSI) Mamba, with powerful sequence modeling capabilities, is designed based on state space modules. It effectively facilitates interactions among the information carried by different features. Given the disparate semantic information and spatial details of features at different scales, a Global Semantic Enhancer (GSE) module and a Spatial Interactive Attention (SIA) mechanism are designed. The GSE module operates on the high-level features, while the SIA mechanism processes the middle- and low-level features. To address the computational challenges of large-scale dense feature fusion, a Channel Space Reconstruction (CSR) algorithm is proposed. This algorithm effectively reduces the computational burden while ensuring efficient processing and maintaining accuracy. In addition, the lightweight UrbanSSF-T, the efficient UrbanSSF-S and the accurate UrbanSSF-L are designed to meet different application requirements in urban scenarios. Comprehensive experiments on the UAVid, ISPRS Vaihingen and Potsdam datasets validate the superior performance of the UrbanSSF series. In particular, UrbanSSF-L achieves a mean intersection over union of 71.0% on the UAVid dataset. Code is available at https://github.com/KotlinWang/UrbanSSF.
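As a point of reference for the 71.0% figure, here is a hedged sketch of the mean intersection over union metric computed from a class confusion matrix; the 3x3 matrix below is invented for illustration.

```python
# Illustrative mIoU computation from a confusion matrix; toy values only.
import numpy as np

def mean_iou(conf: np.ndarray) -> float:
    """conf[i, j] = number of pixels of true class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp           # predicted as the class, but wrong
    fn = conf.sum(axis=1) - tp           # pixels of the class that were missed
    iou = tp / np.maximum(tp + fp + fn, 1e-9)
    return float(iou.mean())

conf = np.array([[50, 2, 1],
                 [3, 40, 5],
                 [0, 4, 45]])
print(f"mIoU = {mean_iou(conf):.3f}")
```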
{"title":"Accurate semantic segmentation of very high-resolution remote sensing images considering feature state sequences: From benchmark datasets to urban applications","authors":"Zijie Wang ,&nbsp;Jizheng Yi ,&nbsp;Aibin Chen ,&nbsp;Lijiang Chen ,&nbsp;Hui Lin ,&nbsp;Kai Xu","doi":"10.1016/j.isprsjprs.2025.01.017","DOIUrl":"10.1016/j.isprsjprs.2025.01.017","url":null,"abstract":"<div><div>Very High-Resolution (VHR) urban remote sensing images segmentation is widely used in ecological environmental protection, urban dynamic monitoring, fine urban management and other related fields. However, the large-scale variation and discrete distribution of objects in VHR images presents a significant challenge to accurate segmentation. The existing studies have primarily concentrated on the internal correlations within a single features, while overlooking the inherent sequential relationships across different feature state. In this paper, a novel Urban Spatial Segmentation Framework (UrbanSSF) is proposed, which fully considers the connections between feature states at different phases. Specifically, the Feature State Interaction (FSI) Mamba with powerful sequence modeling capabilities is designed based on state space modules. It effectively facilitates interactions between the information across different features. Given the disparate semantic information and spatial details of features at different scales, a Global Semantic Enhancer (GSE) module and a Spatial Interactive Attention (SIA) mechanism are designed. The GSE module operates on the high-level features, while the SIA mechanism processes the middle and low-level features. To address the computational challenges of large-scale dense feature fusion, a Channel Space Reconstruction (CSR) algorithm is proposed. This algorithm effectively reduces the computational burden while ensuring efficient processing and maintaining accuracy. In addition, the lightweight UrbanSSF-T, the efficient UrbanSSF-S and the accurate UrbanSSF-L are designed to meet different application requirements in urban scenarios. Comprehensive experiments on the UAVid, ISPRS Vaihingen and Potsdam datasets validate the superior performance of UrbanSSF series. Especially, the UrbanSSF-L achieves a mean intersection over union of 71.0% on the UAVid dataset. Code is available at <span><span>https://github.com/KotlinWang/UrbanSSF</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"220 ","pages":"Pages 824-840"},"PeriodicalIF":10.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143072520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Semantic guided large scale factor remote sensing image super-resolution with generative diffusion prior
IF 10.6 · CAS Tier 1 (Earth Science) · Q1 (GEOGRAPHY, PHYSICAL) · Pub Date: 2025-02-01 · DOI: 10.1016/j.isprsjprs.2024.12.001
Ce Wang, Wanjie Sun
In the realm of remote sensing, images captured by different platforms exhibit significant disparities in spatial resolution. Consequently, effective large scale factor super-resolution (SR) algorithms are vital for maximizing the utilization of low-resolution (LR) satellite data captured from orbit. However, existing methods confront challenges such as semantic inaccuracies and blurry textures in the reconstructed images. To tackle these issues, we introduce a novel framework, the Semantic Guided Diffusion Model (SGDM), designed for large scale factor remote sensing image super-resolution. The framework exploits a pre-trained generative model as a prior to generate perceptually plausible high-resolution (HR) images, thereby constraining the solution space and mitigating texture blurriness. We further enhance the reconstruction by incorporating vector maps, which carry structural and semantic cues that improve the reconstruction fidelity of ground objects. Moreover, pixel-level inconsistencies in paired remote sensing images, stemming from sensor-specific imaging characteristics, may hinder the convergence of the model and the diversity of generated results. To address this problem, we develop a method to extract sensor-specific imaging characteristics and model their distribution. The proposed model can decouple imaging characteristics from image content, allowing it to generate diverse super-resolution images based on imaging characteristics provided by reference satellite images or sampled from the imaging characteristic probability distributions. To validate and evaluate our approach, we create the Cross-Modal Super-Resolution Dataset (CMSRD). Qualitative and quantitative experiments on CMSRD showcase the superiority and broad applicability of our method. Experimental results on downstream vision tasks also demonstrate the utility of the generated SR images. The dataset and code will be publicly available at https://github.com/wwangcece/SGDM.
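The diffusion-prior idea can be sketched as a single conditional reverse-sampling step: the upsampled LR image and a semantic embedding are concatenated into the denoiser's input. Everything below (the TinyEps stand-in network, the shapes, and the simple variance choice) is an illustrative assumption, not the SGDM implementation.

```python
# Hedged sketch of one DDPM-style reverse step conditioned on an LR image
# and a semantic (vector-map) embedding. Not the authors' code.
import torch
import torch.nn as nn

class TinyEps(nn.Module):
    """Stand-in noise predictor; a real system would use a timestep-aware U-Net."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.net = nn.Conv2d(c_in, c_out, 3, padding=1)
    def forward(self, x, t):          # t is ignored here for brevity
        return self.net(x)

@torch.no_grad()
def reverse_step(eps_model, x_t, t, lr_up, sem, alpha, alpha_bar):
    """One ancestral sampling step x_t -> x_{t-1}."""
    cond = torch.cat([x_t, lr_up, sem], dim=1)        # condition by channel concat
    eps = eps_model(cond, t)                           # predicted noise
    mean = (x_t - (1 - alpha) / torch.sqrt(1 - alpha_bar) * eps) / torch.sqrt(alpha)
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + torch.sqrt(1 - alpha) * noise        # simple sigma = sqrt(beta)

x = torch.randn(1, 3, 32, 32)                          # current noisy HR estimate
lr_up, sem = torch.randn(1, 3, 32, 32), torch.randn(1, 1, 32, 32)
model = TinyEps(7, 3)
alpha, alpha_bar = torch.tensor(0.99), torch.tensor(0.5)
print(reverse_step(model, x, 10, lr_up, sem, alpha, alpha_bar).shape)
```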
Citations: 0
MLC-net: A sparse reconstruction network for TomoSAR imaging based on multi-label classification neural network
IF 10.6 · CAS Tier 1 (Earth Science) · Q1 (GEOGRAPHY, PHYSICAL) · Pub Date: 2025-02-01 · DOI: 10.1016/j.isprsjprs.2024.11.018
Depeng Ouyang , Yueting Zhang , Jiayi Guo , Guangyao Zhou
Synthetic Aperture Radar tomography (TomoSAR) has garnered significant interest for its capability to achieve three-dimensional resolution along the elevation angle by collecting a stack of SAR images from different cross-track angles. Compressed Sensing (CS) algorithms have been widely introduced into SAR tomography. However, traditional CS-based TomoSAR methods suffer from weak noise resistance, high computational complexity, and insufficient super-resolution capabilities. Addressing the efficient TomoSAR imaging problem, this paper proposes an end-to-end neural network-based TomoSAR inversion method, named the Multi-Label Classification-based Sparse Imaging Network (MLC-net). MLC-net focuses on the ℓ0-norm optimization problem, completely departing from the iterative framework of traditional compressed sensing methods and overcoming the limitations imposed by the ℓ1-norm optimization problem on signal coherence. Simultaneously, the concept of multi-label classification is introduced for the first time in TomoSAR inversion, enabling MLC-net to accurately invert scenarios with multiple scatterers within the same range-azimuth cell. Additionally, a novel evaluation system for TomoSAR inversion results is introduced, transforming inversion results into a 3D point cloud and leveraging mature evaluation methods for 3D point clouds. Under the new evaluation system, the proposed method is more than 30% stronger than existing methods. Finally, by training solely on simulated data, we conducted extensive experimental testing on both simulated and real data, achieving excellent results that validate the effectiveness, efficiency, and robustness of the proposed method. Specifically, the VQA_PC score improved from 91.085 to 92.713. The code of our network is available at https://github.com/OscarYoungDepend/MLC-net.
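The core reformulation described above, recovering several scatterers per range-azimuth cell as multi-label classification over discretized elevation bins, can be sketched as follows; the network, bin count, and stack size are hypothetical placeholders, not the released MLC-net architecture.

```python
# Sketch: TomoSAR inversion as multi-label classification over elevation bins.
# Several scatterers in one cell => several positive labels. Toy model only.
import torch
import torch.nn as nn

n_bins, n_obs = 128, 30                      # elevation bins, images in the stack
model = nn.Sequential(
    nn.Linear(2 * n_obs, 256), nn.ReLU(),    # real+imag parts of the pixel stack
    nn.Linear(256, n_bins),                  # one logit per elevation bin
)
loss_fn = nn.BCEWithLogitsLoss()             # multi-label: independent sigmoids

x = torch.randn(8, 2 * n_obs)                # batch of complex pixel stacks
y = torch.zeros(8, n_bins)
y[:, [20, 85]] = 1.0                         # e.g. two scatterers per cell
loss = loss_fn(model(x), y)
loss.backward()
print(float(loss))
```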
Citations: 0
A real time LiDAR-Visual-Inertial object level semantic SLAM for forest environments
IF 10.6 · CAS Tier 1 (Earth Science) · Q1 (GEOGRAPHY, PHYSICAL) · Pub Date: 2024-11-30 · DOI: 10.1016/j.isprsjprs.2024.11.013
Hongwei Liu, Guoqi Xu, Bo Liu, Yuanxin Li, Shuhang Yang, Jie Tang, Kai Pan, Yanqiu Xing
The accurate positioning of individual trees, the three-dimensional reconstruction of the forest environment, and the identification of tree species distribution are crucial aspects of forestry remote sensing. Simultaneous Localization and Mapping (SLAM) algorithms, primarily based on LiDAR or visual technologies, serve as essential tools for outdoor spatial positioning and mapping, overcoming signal-loss challenges caused by tree canopy obstruction in the Global Navigation Satellite System (GNSS). To address these challenges, a semantic SLAM algorithm called LVI-ObjSemantic is proposed, which integrates visual, LiDAR, IMU and deep learning cues at the object level. LVI-ObjSemantic is capable of performing individual tree segmentation, localization and tree species discrimination tasks in forest environments. The proposed Cluster-Block-single and Cluster-Block-global data structures, combined with the deep learning model, effectively reduce cases of missed and false detections. Owing to the lack of publicly available forest datasets, we validated the proposed algorithm on eight experimental plots. The experimental results indicate that the average root mean square error (RMSE) of the trajectories across the eight plots is 2.7, 2.8, 1.9 and 2.2 times lower than that of LIO-SAM, FAST-LIO2, LVI-SAM and FAST-LIVO, respectively. Additionally, the mean absolute error in tree localization is 0.12 m. Moreover, the mapping drift of the proposed algorithm is consistently lower than that of the aforementioned comparison algorithms.
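For context, the trajectory comparison above boils down to a root mean square error over aligned positions; here is a toy version with synthetic trajectories, not the paper's data or evaluation pipeline.

```python
# Illustrative trajectory RMSE (absolute trajectory error) between an
# estimated and a ground-truth path; synthetic data for demonstration.
import numpy as np

def trajectory_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    """est, gt: (N, 3) arrays of aligned positions in metres."""
    return float(np.sqrt(((est - gt) ** 2).sum(axis=1).mean()))

gt = np.cumsum(np.full((100, 3), 0.1), axis=0)        # straight reference path
est = gt + np.random.default_rng(1).normal(0, 0.05, gt.shape)
print(f"RMSE = {trajectory_rmse(est, gt):.3f} m")
```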
Citations: 0
Location and orientation united graph comparison for topographic point cloud change estimation
IF 10.6 · CAS Tier 1 (Earth Science) · Q1 (GEOGRAPHY, PHYSICAL) · Pub Date: 2024-11-29 · DOI: 10.1016/j.isprsjprs.2024.11.016
Shoujun Jia , Lotte de Vugt , Andreas Mayr , Chun Liu , Martin Rutzinger
3D topographic point cloud change estimation produces fundamental inputs for understanding Earth surface process dynamics. In general, change estimation aims at detecting the largest possible number of points with significance (i.e., difference > uncertainty) and quantifying multiple types of topographic change. However, several complex factors, including the inhomogeneous nature of point cloud data, the high uncertainty in positional changes, and the different ways of quantifying difference, pose challenges for the reliable detection and quantification of 3D topographic changes. To address these limitations, this paper proposes a graph comparison-based method to estimate 3D topographic change from point clouds. First, a graph with both location and orientation representation is designed to aggregate local neighborhoods of topographic point clouds, countering their disordered and unstructured nature. Second, the corresponding graphs between two topographic point clouds are identified and compared to quantify the differences and associated uncertainties in both location and orientation features. In particular, the proposed method unites the significant changes derived from both features (i.e., location and orientation) and captures the location difference (i.e., distance) and the orientation difference (i.e., rotation) for each point with significant change. We tested the proposed method in a mountain region (Sellrain, Tyrol, Austria) covered by three airborne laser scanning point cloud pairs with different point densities and complex topographic changes at intervals of four, six, and ten years. Our method detected significant changes in 91.39%–93.03% of the study area, while a state-of-the-art method (Multiscale Model-to-Model Cloud Comparison, M3C2) identified 36.81%–47.41% significant changes for the same area. Especially for unchanged building roofs, our method measured lower change magnitudes than M3C2. In the case of shallow landslides, our method identified 84 out of a total of 88 reference landslides by analysing change in distance or rotation. Therefore, our method not only detects a large number of significant changes but also quantifies two types of topographic change (distance and rotation), and is more robust against registration errors. It shows large potential for the estimation and interpretation of topographic changes in natural environments.
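A minimal sketch of the significance rule stated above (a change counts only when the measured difference exceeds its uncertainty), using nearest-neighbour distances as a crude stand-in for the paper's graph-based location/orientation comparison; all data below are synthetic.

```python
# Hedged sketch: per-point change significance as |difference| > uncertainty.
import numpy as np
from scipy.spatial import cKDTree

def significant_change(pc1, pc2, uncertainty):
    """pc1: (N,3), pc2: (M,3) point clouds; uncertainty: scalar or (N,) level."""
    tree = cKDTree(pc2)
    dist, _ = tree.query(pc1)           # location difference to nearest point
    return dist, dist > uncertainty     # per-point distance + significance mask

rng = np.random.default_rng(2)
pc1 = rng.random((1000, 3))
pc2 = pc1 + np.array([0.0, 0.0, 0.02])  # simulate 2 cm uplift between epochs
d, sig = significant_change(pc1, pc2, uncertainty=0.01)
print(f"{sig.mean():.1%} of points flagged as significant change")
```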
Citations: 0
MGCNet: Multi-granularity consensus network for remote sensing image correspondence pruning
IF 10.6 · CAS Tier 1 (Earth Science) · Q1 (GEOGRAPHY, PHYSICAL) · Pub Date: 2024-11-28 · DOI: 10.1016/j.isprsjprs.2024.11.011
Fengyuan Zhuang , Yizhang Liu , Xiaojie Li , Ji Zhou , Riqing Chen , Lifang Wei , Changcai Yang , Jiayi Ma
Correspondence pruning aims to remove false correspondences (outliers) from an initial putative correspondence set. This process holds significant importance and serves as a fundamental step in various applications within the fields of remote sensing and photogrammetry. The presence of noise, illumination changes, and small overlaps in remote sensing images frequently results in a substantial number of outliers within the initial set, thereby rendering correspondence pruning notably challenging. Although the spatial consensus of correspondences has been widely used to determine the correctness of each correspondence, achieving uniform consensus can be challenging due to the uneven distribution of correspondences. Existing works have mainly focused on either local or global consensus, with a very small or very large perspective, respectively. They often ignore the moderate perspective between local and global consensus, called group consensus, which serves as a buffering organization from local to global consensus, hence leading to insufficient aggregation of correspondence consensus. To address this issue, we propose a multi-granularity consensus network (MGCNet) to achieve consensus across regions of different scales, which leverages local, group, and global consensus to accomplish robust and accurate correspondence pruning. Specifically, we introduce a GroupGCN module that randomly divides the initial correspondences into several groups, focuses on group consensus, and acts as a buffering organization from local to global consensus. Additionally, we propose a Multi-level Local Feature Aggregation Module that adapts to the size of the local neighborhood to capture local consensus, and a Multi-order Global Feature Module to enhance the richness of the global consensus. Experimental results demonstrate that MGCNet outperforms state-of-the-art methods on various tasks, highlighting the superiority and strong generalization of our method. In particular, we achieve 3.95% and 8.5% mAP5° improvement without RANSAC on the YFCC100M dataset in known and unknown scenes for pose estimation, compared to the second-best models (MSA-LFC and CLNet). Source code: https://github.com/1211193023/MGCNet.
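A minimal sketch of the group-consensus idea as described: randomly partition putative correspondences into groups and mix each correspondence's feature with its group mean. Shapes and the mixing rule are assumptions for illustration, not the released MGCNet code.

```python
# Hedged sketch: random grouping + injection of per-group mean features.
import torch

def group_consensus(feat: torch.Tensor, n_groups: int) -> torch.Tensor:
    """feat: (N, C) per-correspondence features -> group-consensus-mixed features."""
    n, c = feat.shape
    gid = torch.randperm(n) % n_groups                     # random group assignment
    sums = torch.zeros(n_groups, c).index_add_(0, gid, feat)
    counts = torch.zeros(n_groups).index_add_(0, gid, torch.ones(n))
    means = sums / counts.clamp(min=1).unsqueeze(1)        # per-group consensus
    return feat + means[gid]                               # inject group context

x = torch.randn(500, 128)                                  # 500 correspondences
print(group_consensus(x, n_groups=8).shape)                # torch.Size([500, 128])
```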
Citations: 0
Pansharpening via predictive filtering with element-wise feature mixing
IF 10.6 · CAS Tier 1 (Earth Science) · Q1 (GEOGRAPHY, PHYSICAL) · Pub Date: 2024-11-26 · DOI: 10.1016/j.isprsjprs.2024.10.029
Yongchuan Cui , Peng Liu , Yan Ma , Lajiao Chen , Mengzhen Xu , Xingyan Guo
Pansharpening is a crucial technique in remote sensing for enhancing spatial resolution by fusing low spatial resolution multispectral (LRMS) images with high spatial resolution panchromatic (PAN) images. Existing deep convolutional networks often face challenges in capturing fine details due to the homogeneous operation of convolutional kernels. In this paper, we propose a novel predictive filtering approach for pansharpening to mitigate spectral distortions and spatial degradations. By obtaining predictive filters through the fusion of LRMS and PAN and conducting filtering operations using unique kernels assigned to each pixel, our method reduces information loss significantly. To learn more effective kernels, we propose an effective fine-grained fusion method for LRMS and PAN features, namely element-wise feature mixing. Specifically, features of LRMS and PAN are exchanged under the guidance of a learned mask, whose value signifies the extent to which each element is mixed. Extensive experimental results demonstrate that the proposed method achieves better performance than state-of-the-art models with fewer parameters and lower computation. Visual comparisons indicate that our model pays more attention to details, which further confirms the effectiveness of the proposed fine-grained fusion method. Codes are available at https://github.com/yc-cui/PreMix.
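The key operation, filtering each pixel with its own predicted kernel, can be sketched with an unfold-and-weight step; the random kernel logits below are stand-ins for what a fusion network would predict, and the shapes are illustrative assumptions.

```python
# Hedged sketch of per-pixel predictive filtering: every pixel of the MS
# image is filtered with its own k*k kernel. Not the authors' implementation.
import torch
import torch.nn.functional as F

def apply_predicted_filters(img, kernels, k=3):
    """img: (B,C,H,W); kernels: (B, k*k, H, W), softmax-normalized per pixel."""
    B, C, H, W = img.shape
    patches = F.unfold(img, k, padding=k // 2)            # (B, C*k*k, H*W)
    patches = patches.view(B, C, k * k, H, W)
    return (patches * kernels.unsqueeze(1)).sum(dim=2)    # weighted sum per pixel

img = torch.randn(1, 4, 64, 64)                           # 4-band MS image
logits = torch.randn(1, 9, 64, 64)                        # would come from a net
kernels = logits.softmax(dim=1)                           # per-pixel 3x3 kernel
print(apply_predicted_filters(img, kernels).shape)        # (1, 4, 64, 64)
```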
Citations: 0
Field-scale evaluation of a satellite-based terrestrial biosphere model for estimating crop response to management practices and productivity
IF 10.6 · CAS Tier 1 (Earth Science) · Q1 (GEOGRAPHY, PHYSICAL) · Pub Date: 2024-11-26 · DOI: 10.1016/j.isprsjprs.2024.11.008
Jingwen Wang , Jose Luis Pancorbo , Miguel Quemada , Jiahua Zhang , Yun Bai , Sha Zhang , Shanxin Guo , Jinsong Chen
Timely and accurate information on crop productivity is essential for characterizing crop growing status and guiding adaptive management practices to ensure food security. Terrestrial biosphere models forced by satellite observations (satellite-TBMs) are viewed as robust tools for understanding large-scale agricultural productivity, with distinct advantages of generalized input data requirements and comprehensive representation of carbon-water-energy exchange mechanisms. However, it remains unclear whether these models can maintain consistent accuracy at field scale and provide useful information for farmers to make site-specific management decisions. This study aims to investigate the capability of a satellite-TBM to estimate crop productivity at the granularity of individual fields using harmonized Sentinel-2 and Landsat-8 time series. Emphasis was placed on evaluating the model performance in: (i) representing crop response to spatially and temporally varying field management practices, and (ii) capturing the variation in crop growth, biomass and yield under complex interactions among crop genotypes, environment, and management conditions. To achieve the first objective, we conducted on-farm experiments with controlled nitrogen (N) fertilization and irrigation treatments to assess the efficacy of using satellite-retrieved leaf area index (LAI) to reflect the effect of management practices in the TBM. For the second objective, we integrated a yield formation module into the satellite-TBM and compared it with the semi-empirical harvest index (HI) method. The model performance was then evaluated under varying conditions using an extensive dataset consisting of observations from four crop species (soybean, wheat, rice and maize), 42 cultivars and 58 field-years. Results demonstrated that satellite-retrieved LAI effectively captured the effects of N and water supply on crop growth, showing high sensitivity to both the timing and quantity of these inputs. This allowed for a spatiotemporal representation of management impacts, even without prior knowledge of the specific management schedules. The TBM forced by satellite LAI produced biomass dynamics consistent with ground measurements, showing an overall correlation coefficient (R) of 0.93 and a relative root mean square error (RRMSE) of 31.4%. However, model performance declined from biomass to yield estimation, with the HI-based method (R = 0.80, RRMSE = 23.7%) outperforming mechanistic modeling of grain filling (R = 0.43, RRMSE = 43.4%). Model accuracy for winter wheat was lower than that for summer crops such as rice, maize and soybean, suggesting potential underrepresentation of the overwintering processes. This study illustrates the utility of satellite-TBMs in crop productivity estimation at the field level, and identifies existing uncertainties and limitations for future model development.
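The semi-empirical HI method referenced above reduces to multiplying final above-ground biomass by a crop-specific harvest index. A toy sketch follows; the HI values are generic placeholders, not the paper's calibrated numbers.

```python
# Illustrative harvest-index yield estimation; placeholder HI values.
HARVEST_INDEX = {"wheat": 0.45, "maize": 0.50, "rice": 0.45, "soybean": 0.40}

def yield_from_biomass(biomass_t_ha: float, crop: str) -> float:
    """Grain yield (t/ha) = HI * final above-ground biomass (t/ha)."""
    return HARVEST_INDEX[crop] * biomass_t_ha

print(f"{yield_from_biomass(12.0, 'maize'):.1f} t/ha")   # 6.0 t/ha
```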
Citations: 0
A UAV-based sparse viewpoint planning framework for detailed 3D modelling of cultural heritage monuments
IF 10.6 · CAS Tier 1 (Earth Science) · Q1 (GEOGRAPHY, PHYSICAL) · Pub Date: 2024-11-26 · DOI: 10.1016/j.isprsjprs.2024.10.028
Zebiao Wu , Patrick Marais , Heinz Rüther
Creating 3D digital models of heritage sites typically involves laser scanning and photogrammetry. Although laser scan-derived point clouds provide detailed geometry, occlusions and hidden areas often lead to gaps. Terrestrial and UAV photography can largely fill these gaps and also enhance definition and accuracy at edges and corners. Historical buildings with complex architectural or decorative details require a systematically planned combination of laser scanning with handheld and UAV photography. High-resolution photography not only enhances the geometry of 3D building models but also improves their texturing. The use of cameras, especially UAV cameras, requires robust viewpoint planning to ensure sufficient coverage of the documented structure whilst minimising viewpoints for efficient image acquisition and processing economy. Determining ideal viewpoints for detailed modelling is challenging. Existing planners, relying on coarse scene proxies, often miss fine structures, significantly restrict the search space of candidate viewpoints and surface targets due to high computational costs, and are sensitive to surface orientation errors, which limits their applicability in complex scenarios. To address these limitations, we propose a strategy for generating sparse viewpoints from point clouds for efficient and accurate UAV-based modelling. Unlike existing planners, our backward visibility approach enables exploration of the camera viewpoint space at low computational cost and does not require surface orientation (normal vector) estimation. We introduce an observability-based planning criterion, a direction diversity-driven reconstructability criterion, which assesses modelling quality by encouraging global diversity in viewing directions, and a coarse-to-fine adaptive viewpoint search approach that builds on these criteria. The approach was validated on a number of complex heritage scenes. It achieves efficient modelling with minimal viewpoints and accurately captures fine structures, like thin spires, that are problematic for other planners. For our test examples, we achieve at least 98% coverage, using significantly fewer viewpoints, and with a consistently high structural similarity across all models.
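As a simplified analogue of the sparse viewpoint search, the sketch below greedily picks candidate views that add the most uncovered surface points until a coverage target is met. The boolean visibility matrix is random, and the paper's observability and reconstructability criteria are considerably richer than plain coverage.

```python
# Hedged sketch: greedy sparse-viewpoint selection for surface coverage.
import numpy as np

def greedy_views(visible: np.ndarray, target: float = 0.98):
    """visible[v, p] = True if candidate view v sees surface point p."""
    covered = np.zeros(visible.shape[1], dtype=bool)
    chosen = []
    while covered.mean() < target:
        gains = (visible & ~covered).sum(axis=1)   # newly covered points per view
        v = int(gains.argmax())
        if gains[v] == 0:
            break                                   # no remaining view helps
        chosen.append(v)
        covered |= visible[v]
    return chosen, covered.mean()

rng = np.random.default_rng(3)
vis = rng.random((50, 2000)) < 0.08                 # 50 views, 2000 surface points
views, cov = greedy_views(vis)
print(f"{len(views)} views -> {cov:.1%} coverage")
```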
Citations: 0