
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences: Latest Publications

Application of Photogrammetric Computer Vision and Deep Learning in High-Resolution Underwater Mapping: A Case Study of Shallow-Water Coral Reefs
Pub Date: 2024-06-10, DOI: 10.5194/isprs-annals-x-2-2024-247-2024
J. Zhong, Ming Li, A. Gruen, Jianya Gong, Deren Li, Mingjie Li, J. Qin
Abstract. Underwater mapping is vital for engineering applications and scientific research in ocean environments, with coral reefs being a primary focus. Unlike more uniform and predictable terrestrial environments, coral reefs present a unique challenge for 3D reconstruction due to their intricate and irregular structures. Traditional 3D reconstruction methods struggle to accurately capture the nuances of coral reefs. This is primarily because coral reefs exhibit a high degree of spatial heterogeneity, featuring diverse shapes, sizes, and textures. Additionally, the dynamic nature of underwater conditions, such as varying light, water clarity, and movement, further complicates the accurate geometrical estimation of these ecosystems. With the rapid advancement of photogrammetric computer vision and deep learning technologies, emerging methods have the potential to enhance the quality of coral reef 3D reconstruction. In this context, this study formulates a coral reef reconstruction workflow that incorporates these cutting-edge technologies. This workflow is divided into two core stages: sparse reconstruction and dense reconstruction. We separately summarize the relevant research efforts in these two stages and outline the available methods. To assess the specific capabilities of these methods, we apply them to real-world coral reef images and conduct a comprehensive evaluation. Additionally, we analyze the strengths and weaknesses of different methods and identify areas for improvement. We believe this study offers valuable references for future research in underwater mapping.
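The two core stages correspond broadly to classical structure-from-motion (sparse) and multi-view stereo (dense) processing. As illustrative context only, and not the authors' pipeline, a minimal two-view sparse-reconstruction step can be sketched with OpenCV; the image paths and the intrinsic matrix K are placeholders assumed to come from prior camera calibration.

```python
# Minimal two-view sparse reconstruction sketch (illustrative only).
import cv2
import numpy as np

def sparse_two_view(img1_path, img2_path, K):
    """Recover relative pose and a sparse point cloud from two overlapping images.
    K is the 3x3 camera intrinsic matrix (assumed known from calibration)."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    # 1. Feature detection and description (SIFT).
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # 2. Descriptor matching with Lowe's ratio test.
    raw = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in raw if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # 3. Relative orientation via the essential matrix (RANSAC) and pose recovery.
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K)

    # 4. Linear triangulation of inlier correspondences -> sparse point cloud.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inl = pose_mask.ravel().astype(bool)
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
    return (pts4d[:3] / pts4d[3]).T  # Nx3 points in the first camera's frame
```

A full sparse stage chains many such image pairs and refines all poses and points with bundle adjustment; the dense stage then thickens the sparse cloud with multi-view stereo or learned depth estimation.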
Citations: 0
Vehicle Geolocalization from Drone Imagery
Pub Date: 2024-06-10, DOI: 10.5194/isprs-annals-x-2-2024-171-2024
David Novikov, Paul Sotirelis, Alper Yilmaz
Abstract. We have developed a robust, novel, and cost-effective method for determining the geolocation of vehicles observed in drone camera footage. Previous studies in this area have relied on platform GPS and camera geometry to estimate the position of objects in drone footage, which we will refer to as object-to-drone location (ODL). The performance of these techniques degrades with decreasing GPS measurement accuracy and with camera orientation problems. Our method overcomes these shortcomings and reliably geolocates objects on the ground. We refer to our approach as object-to-map localization (OML). The proposed technique determines a transformation between drone camera footage and georectified aerial images, for example, from Google Maps. This transformation is then used to calculate the positions of objects captured in the drone camera footage. We provide an ablation study of our method's configuration parameters, which are: feature extraction methods, key point filtering schemes, and types of transformations. We also conduct experiments with a simulated faulty GPS to demonstrate our method's robustness to poor estimation of the drone's position. Our approach requires only a drone with a camera and a low-accuracy estimate of its geoposition; we do not rely on markers or ground control points. As a result, our method can determine the geolocation of vehicles on the ground in an easy-to-set-up and cost-effective manner, making object geolocalization more accessible to users by decreasing the hardware and software requirements. Our GitHub repository with code can be found at https://github.com/OSUPCVLab/VehicleGeopositioning
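The core of the described OML approach is a transformation between the drone frame and a georectified basemap. The sketch below is a generic illustration, not the authors' code: it assumes a homography as the transformation type (one of the options they ablate) and a GDAL-style affine geotransform for the basemap; the images, vehicle detections, and geotransform are placeholders.

```python
# Illustrative object-to-map localization: register a drone frame to a georeferenced
# basemap with a homography, then map detected vehicle pixels to map coordinates.
import cv2
import numpy as np

def geolocate_vehicles(drone_frame, basemap, vehicle_px, geotransform):
    """vehicle_px: Nx2 pixel coordinates of detected vehicles in the drone frame.
    geotransform: GDAL-style 6-tuple of the basemap (x0, dx, rx, y0, ry, dy)."""
    sift = cv2.SIFT_create()
    kp1, d1 = sift.detectAndCompute(drone_frame, None)
    kp2, d2 = sift.detectAndCompute(basemap, None)

    # Ratio-test filtering of putative matches (one possible key-point filtering scheme).
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Robust homography: drone-frame pixels -> basemap pixels.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Project the vehicle detections into basemap pixel space.
    pts = cv2.perspectiveTransform(np.float32(vehicle_px).reshape(-1, 1, 2), H).reshape(-1, 2)

    # Basemap pixels -> map coordinates via the affine geotransform.
    x0, dx, rx, y0, ry, dy = geotransform
    return [(x0 + c * dx + r * rx, y0 + c * ry + r * dy) for c, r in pts]
```

The feature extractor, the match-filtering scheme, and the transformation model used here are exactly the kinds of configuration parameters the abstract says are ablated.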
Citations: 0
A Method for Roof Wireframe Reconstruction Based on Self-Supervised Pretraining
Pub Date: 2024-06-10, DOI: 10.5194/isprs-annals-x-2-2024-239-2024
Hongxin Yang, Shangfeng Huang, Ruisheng Wang
Abstract. In this paper, we present a two-stage method for roof wireframe reconstruction employing a self-supervised pretraining technique. The initial stage utilizes a multi-scale mask autoencoder to generate point-wise features. The subsequent stage involves three steps for edge parameter regression. Firstly, the initial edge directions are generated under the guidance of edge point identification. The next step employs edge parameter regression and matching modules to extract the parameters (namely, direction vector and length) of edge representation from the obtained edge features. Finally, a specifically designed edge non-maximum suppression and an edge similarity loss function are employed to optimize the representation of the final wireframe models and eliminate redundant edges. Experimental results indicate that the pre-trained self-supervised model, enriched by the roof wireframe reconstruction task, demonstrates superior performance on both the publicly available Building3D dataset and its post-processed iterations, specifically the Dense dataset, outperforming even traditional methods.
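Edge non-maximum suppression over parameterized edges can be pictured as keeping only the highest-scoring edge in each group of near-duplicates. The sketch below is a generic simplification under assumed inputs (edges given as endpoint pairs with confidence scores and a hand-picked distance threshold), not the paper's exact design.

```python
# Illustrative edge non-maximum suppression for wireframe edges.
import numpy as np

def edge_nms(endpoints, scores, dist_thresh=0.5):
    """endpoints: (N, 2, 3) array of 3D edge endpoint pairs; scores: (N,) confidences.
    Returns indices of the edges kept after suppressing near-duplicates."""
    order = np.argsort(-scores)               # most confident edges first
    keep, suppressed = [], np.zeros(len(scores), dtype=bool)
    for i in order:
        if suppressed[i]:
            continue
        keep.append(i)
        a = endpoints[i]
        for j in order:
            if j == i or suppressed[j]:
                continue
            b = endpoints[j]
            # Compare under both endpoint orderings, since an edge is undirected.
            d_same = np.linalg.norm(a - b, axis=1).max()
            d_flip = np.linalg.norm(a - b[::-1], axis=1).max()
            if min(d_same, d_flip) < dist_thresh:
                suppressed[j] = True          # near-duplicate of a better-scoring edge
    return keep
```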
Citations: 0
Deep Learning-based DSM Generation from Dual-Aspect SAR Data
Pub Date: 2024-06-10, DOI: 10.5194/isprs-annals-x-2-2024-193-2024
M. Recla, Michael Schmitt
Abstract. Rapid mapping demands efficient methods for a fast extraction of information from satellite data while minimizing data requirements. This paper explores the potential of deep learning for the generation of high-resolution urban elevation data from Synthetic Aperture Radar (SAR) imagery. In order to mitigate occlusion effects caused by the side-looking nature of SAR remote sensing, two SAR images from opposing aspects are leveraged and processed in an end-to-end deep neural network. The presented approach is the first of its kind to implicitly handle the transition from the SAR-specific slant range geometry to a ground-based mapping geometry within the model architecture. Comparative experiments demonstrate the superiority of the dual-aspect fusion over single-image methods in terms of reconstruction quality and geolocation accuracy. Notably, the model exhibits robust performance across diverse acquisition modes and geometries, showcasing its generalizability and suitability for height mapping applications. The study’s findings underscore the potential of deep learning-driven SAR techniques in generating high-quality urban surface models efficiently and economically.
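As a rough illustration of dual-aspect fusion, and not the architecture proposed in the paper, a two-branch encoder that concatenates features from the two SAR images before regressing a per-pixel height map could be sketched in PyTorch as follows (layer widths and patch size are arbitrary placeholders).

```python
# Two-branch fusion sketch: opposing-aspect SAR images in, height map out.
import torch
import torch.nn as nn

class DualAspectHeightNet(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            )
        self.enc_a = encoder()            # branch for the first aspect
        self.enc_b = encoder()            # branch for the opposing aspect
        self.head = nn.Sequential(        # fuse both branches and regress one height channel
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, 1),
        )

    def forward(self, sar_a, sar_b):
        fused = torch.cat([self.enc_a(sar_a), self.enc_b(sar_b)], dim=1)
        return self.head(fused)           # (B, 1, H, W) predicted heights

# Usage with dummy tensors: two single-channel SAR patches of the same scene.
model = DualAspectHeightNet()
dsm = model(torch.randn(1, 1, 256, 256), torch.randn(1, 1, 256, 256))
```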
Citations: 0
Sat-SINR: High-Resolution Species Distribution Models Through Satellite Imagery
Pub Date: 2024-06-10, DOI: 10.5194/isprs-annals-x-2-2024-41-2024
Johannes Dollinger, Philipp Brun, Vivien Sainte Fare Garnot, Jan Dirk Wegner
Abstract. We propose a deep learning approach for high-resolution species distribution modelling (SDM) at large scale combining point-wise, crowd-sourced species observation data and environmental data with Sentinel-2 satellite imagery. What makes this task challenging is the great variety of controlling factors for species distribution, such as habitat conditions, human intervention, competition, disturbances, and evolutionary history. Experts either incorporate these factors into complex mechanistic models based on presence-absence data collected in field campaigns or train machine learning models to learn the relationship between environmental data and presence-only species occurrence. We extend the latter approach here and learn deep SDMs end-to-end based on point-wise, crowd-sourced presence-only data in combination with satellite imagery. Our method, dubbed Sat-SINR, jointly models the spatial distributions of 5.6k plant species across Europe and increases the spatial resolution by a factor of 100 compared to the current state of the art. We exhaustively test and ablate multiple variations of combining geo-referenced point data with satellite imagery and show that our deep learning-based SDM method consistently shows an improvement of up to 3 percentage points across three metrics. We make all code publicly available at https://github.com/ecovision-uzh/sat-sinr.
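A deep SDM of this kind can be pictured as an image encoder fused with a location/environment encoder feeding a multi-species head. The sketch below is a simplified stand-in rather than the Sat-SINR model: layer sizes, the number of environmental covariates, and the assume-negative loss are all assumptions.

```python
# Simplified deep SDM sketch: Sentinel-2 patch + location/environment -> species logits.
import torch
import torch.nn as nn

class DeepSDM(nn.Module):
    def __init__(self, n_species=5600, n_env=20, bands=12, dim=256):
        super().__init__()
        self.img_enc = nn.Sequential(      # tiny CNN over a multispectral patch
            nn.Conv2d(bands, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.loc_enc = nn.Sequential(      # longitude/latitude plus environmental covariates
            nn.Linear(n_env + 2, dim), nn.ReLU(inplace=True),
        )
        self.head = nn.Linear(64 + dim, n_species)

    def forward(self, s2_patch, loc_env):
        z = torch.cat([self.img_enc(s2_patch), self.loc_enc(loc_env)], dim=1)
        return self.head(z)                # logits; sigmoid gives presence probabilities

# Presence-only training commonly treats unobserved species as (possibly soft) negatives.
model = DeepSDM()
logits = model(torch.randn(4, 12, 64, 64), torch.randn(4, 22))
targets = torch.zeros_like(logits)
targets[torch.arange(4), torch.randint(0, 5600, (4,))] = 1.0   # one observed species per record
loss = nn.functional.binary_cross_entropy_with_logits(logits, targets)
```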
Citations: 0
Airborne LiDAR Point Cloud Filtering Algorithm Based on Supervoxel Ground Saliency
Pub Date: 2024-06-10, DOI: 10.5194/isprs-annals-x-2-2024-73-2024
Weiwei Fan, Xinyi Liu, Yongjun Zhang, Dongdong Yue, Senyuan Wang, Jiachen Zhong
Abstract. Airborne laser scanning (ALS) is able to penetrate sparse vegetation to obtain highly accurate height information on the ground surface. LiDAR point cloud filtering is an important prerequisite for downstream tasks such as digital terrain model (DTM) extraction and point cloud classification. Because existing LiDAR point cloud filtering algorithms are prone to errors in complex terrain environments, this paper proposes an ALS point cloud filtering method based on supervoxel ground saliency (SGSF). Firstly, a boundary-preserving TBBP supervoxel algorithm is utilized to perform supervoxel segmentation of ALS point clouds, and multi-directional scanning strip delineation and ground saliency computation are carried out for the clusters of supervoxel point clouds. Subsequently, an energy function is constructed by introducing the ground saliency, and the optimal filtering plane of each supervoxel is solved using a semi-global optimization strategy to effectively distinguish ground from non-ground points. Experimental results on the ALS point cloud filtering dataset OpenGF indicate that, compared to state-of-the-art surface-based filtering methods, the SGSF algorithm achieves the highest average values across various terrain conditions for multiple evaluation metrics. It also addresses the issue of recessed building structures being prone to misclassification as ground points.
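The plane-based ground/non-ground decision at the heart of such filters can be illustrated with a per-cluster least-squares plane fit. The sketch below omits the ground-saliency weighting and the semi-global optimization of the actual SGSF method, and its distance and slope thresholds are assumed values.

```python
# Illustrative per-cluster ground labelling via least-squares plane fitting.
import numpy as np

def fit_plane(points):
    """Least-squares plane through Nx3 points; returns unit normal n and offset d (n.x + d = 0)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]                                   # direction of least variance = plane normal
    return n, -n.dot(centroid)

def label_ground(points, clusters, dist_thresh=0.15, max_slope_deg=30.0):
    """clusters: list of index arrays, one per supervoxel-like cluster.
    Returns a boolean mask marking ground candidates."""
    ground = np.zeros(len(points), dtype=bool)
    up = np.array([0.0, 0.0, 1.0])
    for idx in clusters:
        n, d = fit_plane(points[idx])
        slope = np.degrees(np.arccos(abs(n.dot(up))))   # plane inclination w.r.t. horizontal
        if slope > max_slope_deg:
            continue                                     # too steep to be bare ground
        dist = np.abs(points[idx] @ n + d)
        ground[idx] = dist < dist_thresh                 # points close to the fitted plane
    return ground
```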
Citations: 0
Exploring the effects of crop growth differences on radar vegetation index response and crop height estimation using dynamic monitoring model
Pub Date: 2024-05-09, DOI: 10.5194/isprs-annals-x-1-2024-225-2024
Bo Wang, Yu Liu, Qinghong Sheng, Jun Li, Shuwei Wang, Yunfeng Qiao, Honglin He
Abstract. Synthetic aperture radar (SAR) has emerged as a promising technology for monitoring crop plant height due to its ability to capture the geometric properties of crops. The radar vegetation index (RVI) has been extensively utilized for qualitative and quantitative remote sensing monitoring of vegetation growth dynamics. However, the combination of crop type, growing environment, and temporal dynamics makes interpreting crop monitoring data a complex task. Despite the relatively simple underlying mechanisms of this phenomenon, more research is still needed to identify the specific vegetation structures that correspond to changes in the response of vegetation indices. Building upon this premise, this study utilized a dynamic monitoring model to monitor plant height for three common crops: rice, wheat, and maize. The findings revealed that (1) models developed for specific spatial and temporal scales of particular crop varieties may not accurately predict crop growth in different regions or with different varieties in a timely manner, due to growth variations; (2) these models maintain accuracy over a range of plant heights, such as rice at around 70 cm, wheat at around 50 cm, and maize at around 150 cm; (3) among the three crops, planting density was identified as the main factor influencing the differences in RVI response. This research contributes to our comprehension of the dynamic response of RVI to different crop growth conditions, and offers valuable insights and references for agricultural monitoring.
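For context, one widely used dual-polarization form of the radar vegetation index is RVI = 4*sigma_VH / (sigma_VV + sigma_VH), computed on linear-scale backscatter; the abstract does not state which RVI variant the study uses, so the sketch below is purely illustrative.

```python
# Dual-polarization RVI from Sentinel-1-style VV/VH backscatter (illustrative).
import numpy as np

def db_to_linear(sigma_db):
    """Backscatter must be converted from dB to linear power before forming ratios."""
    return np.power(10.0, np.asarray(sigma_db, dtype=float) / 10.0)

def dual_pol_rvi(vv_db, vh_db):
    vv, vh = db_to_linear(vv_db), db_to_linear(vh_db)
    return 4.0 * vh / (vv + vh)

# Higher VH relative to VV (more volume scattering from the canopy) raises the RVI.
print(dual_pol_rvi(-10.0, -16.0))   # sparser canopy -> lower RVI (about 0.80)
print(dual_pol_rvi(-10.0, -12.0))   # denser canopy -> higher RVI (about 1.55)
```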
Citations: 0
Research on Hyperspectral Surface Reflectance Dataset of Typical Ore Concentration Area in Hami Remote Sensing Test Field
Pub Date: 2024-05-09, DOI: 10.5194/isprs-annals-x-1-2024-137-2024
Shuneng Liang, Yang Li, Hongyan Wei, Lina Dong, Jiaheng Zhang, Chenchao Xiao
Abstract. Surface reflectance data is the basic data source for hyperspectral parametric remote sensing products and quantitative remote sensing applications, and it is widely used in fields such as natural resources and ecological environment monitoring. At present, multispectral data dominates the commonly available land surface reflectance datasets, and the existing reflectance data mainly covers ground-object types such as farmland, forest land, water bodies, and soil; datasets targeting rock and mineral surface types are comparatively scarce, and reflectance datasets that combine time series with multi-scale satellite-ground observations are scarcer still. To better promote the application of hyperspectral surface reflectance and explore the advantages of jointly applying multi-scale satellite-ground reflectance data, this study generated a comprehensive surface reflectance dataset based on field-measured rock and mineral target spectra, using domestically produced hyperspectral satellite data as the data source and focusing on the typical ore concentration area in the Hami remote sensing test field in Xinjiang. The dataset includes multi-period hyperspectral satellite surface reflectance images, field-measured rock and mineral spectral data, and multi-period sub-pixel spectral data collected at ground spectral measurement points. It can provide significant support for the research and development, accuracy verification, and performance evaluation of algorithms such as surface reflectance inversion, mineral identification, and ground object classification.
Citations: 0
Garbage Monitoring And Management Using Deep Learning
Pub Date: 2024-05-09, DOI: 10.5194/isprs-annals-x-1-2024-163-2024
Charanya Manivannan, Jovina Virgin, Shivaani Suseendran, K. Vani
Abstract. Rapid urbanisation and population growth have led to an unprecedented increase in waste generation. In addition, increasing tourism has made maintaining coastal areas more challenging. Inefficient and inadequate waste management practices pose significant environmental and health hazards to both humans and wildlife. Through deep learning and computer vision techniques, garbage can be identified and its location extracted directly from images. Videos are collected using UAVs. Automatic generation of waste reports and additional services such as chat-bots are also implemented. Furthermore, the system uses OR tools to optimise the routes of garbage-collection vehicles. By minimising travel distances and maximising cleanup efficiency, the system reduces operational costs and enhances the overall effectiveness of beach cleanup initiatives. Predominant garbage spots are analysed, and the nearest dustbins are mapped along with the routes to reach them. The garbage detection model achieved a mAP of 0.845. The silhouette score of clustering was 70.1% for Chameleon and 99.02% for k-means. All of the above modules were integrated and presented in the user interface of the developed application.
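If the OR tools mentioned here refer to Google's OR-Tools library (an assumption on our part), the vehicle-route optimisation step could be sketched as a plain travelling-salesman problem over garbage hotspots; the distance matrix below is a made-up placeholder.

```python
# Route optimisation sketch for one garbage-collection vehicle using Google OR-Tools.
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

def optimise_route(distance_matrix, depot=0):
    n = len(distance_matrix)
    manager = pywrapcp.RoutingIndexManager(n, 1, depot)   # n stops, 1 vehicle, start at depot
    routing = pywrapcp.RoutingModel(manager)

    def cost(from_index, to_index):
        return distance_matrix[manager.IndexToNode(from_index)][manager.IndexToNode(to_index)]

    transit = routing.RegisterTransitCallback(cost)
    routing.SetArcCostEvaluatorOfAllVehicles(transit)

    params = pywrapcp.DefaultRoutingSearchParameters()
    params.first_solution_strategy = routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC

    solution = routing.SolveWithParameters(params)
    route, index = [], routing.Start(0)
    while not routing.IsEnd(index):
        route.append(manager.IndexToNode(index))
        index = solution.Value(routing.NextVar(index))
    return route + [depot]                                # return to the depot at the end

# Depot plus three hotspots with symmetric travel costs in metres (hypothetical values).
dist = [[0, 400, 650, 300], [400, 0, 250, 500], [650, 250, 0, 350], [300, 500, 350, 0]]
print(optimise_route(dist))   # e.g. [0, 1, 2, 3, 0]
```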
Citations: 0
Sparse matching via point and line feature fusion for robust aerial triangulation of photovoltaic power stations’ thermal infrared imagery
Pub Date: 2024-05-09, DOI: 10.5194/isprs-annals-x-1-2024-107-2024
Tao Ke, Zhouyuan Ye, Xiao Zhang, Yifan Liao, Pengjie Tao
Abstract. In this paper, we present a novel matching method tailored for unmanned aerial vehicle (UAV) thermal infrared images of photovoltaic (PV) panels, which are characterized by highly repetitive textures. The method capitalizes on the integration of point and line features within the image to obtain reliable corresponding points. Furthermore, it employs multiple constraints to eliminate mismatched features and suppress the interference of repetitive textures in feature matching. To verify the effectiveness of the proposed method, we used a UAV equipped with the DJI Zenmuse H20T thermal infrared gimbal to capture 3767 images of a PV power station in Guangzhou, China. Experiments demonstrate that, for UAV thermal infrared images of PV panels, our method outperforms state-of-the-art techniques in terms of matching-point density, matching success rate, and matching reliability, consequently leading to robust aerial triangulation results.
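A generic sketch of point-plus-line feature extraction with multi-constraint match filtering (Lowe's ratio test followed by an epipolar consistency check) is given below. It is not the authors' fusion scheme; the FastLineDetector used for the line features requires the opencv-contrib-python package, and the thresholds are assumptions.

```python
# Point + line features with multi-constraint filtering for repetitive-texture imagery.
import cv2
import numpy as np

def match_with_constraints(img1, img2, ratio=0.8, ransac_px=2.0):
    """img1, img2: 8-bit grayscale thermal images. Returns inlier point matches and line sets."""
    sift = cv2.SIFT_create()
    kp1, d1 = sift.detectAndCompute(img1, None)
    kp2, d2 = sift.detectAndCompute(img2, None)

    # Constraint 1: Lowe's ratio test on nearest-neighbour descriptor distances.
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    p1 = np.float32([kp1[m.queryIdx].pt for m in good])
    p2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Constraint 2: epipolar consistency via a RANSAC fundamental matrix, which rejects
    # correspondences that jumped between look-alike PV panels.
    F, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, ransac_px, 0.999)
    inliers = mask.ravel().astype(bool)

    # Line segments (e.g. panel edges) as an additional structural cue for fusion.
    fld = cv2.ximgproc.createFastLineDetector()
    lines1, lines2 = fld.detect(img1), fld.detect(img2)

    return p1[inliers], p2[inliers], lines1, lines2
```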
Citations: 0