
Latest Publications in Photogrammetric Engineering and Remote Sensing

A Real-Time Photogrammetric System for Acquisition and Monitoring of Three-Dimensional Human Body Kinematics
IF 1.3 | Earth Science (CAS Q4) | Q4 GEOGRAPHY, PHYSICAL | Pub Date: 2021-05-01 | DOI: 10.14358/pers.87.5.363
Long Chen, Bo Wu, Yao Zhao, Yuan Li
Real-time acquisition and analysis of three-dimensional (3D) human body kinematics are essential in many applications. In this paper, we present a real-time photogrammetric system consisting of a stereo pair of red-green-blue (RGB) cameras. The system incorporates a multi-threaded and graphics processing unit (GPU)-accelerated solution for real-time extraction of 3D human kinematics. A deep learning approach is adopted to automatically extract two-dimensional (2D) human body features, which are then converted to 3D features based on photogrammetric processing, including dense image matching and triangulation. The multi-threading scheme and GPU-acceleration enable real-time acquisition and monitoring of 3D human body kinematics. Experimental analysis verified that the system processing rate reached ∼18 frames per second. The effective detection distance reached 15 m, with a geometric accuracy of better than 1% of the distance within a range of 12 m. The real-time measurement accuracy for human body kinematics ranged from 0.8% to 7.5%. The results suggest that the proposed system is capable of real-time acquisition and monitoring of 3D human kinematics with favorable performance, showing great potential for various applications.
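The abstract's conversion of matched 2D body features into 3D relies on stereo triangulation. Below is a minimal sketch of linear (DLT) triangulation from a calibrated stereo pair; the projection matrices, baseline, and keypoint coordinates are illustrative assumptions, not the authors' calibration or data.

```python
import numpy as np

def triangulate_dlt(P_left, P_right, x_left, x_right):
    """Linear (DLT) triangulation of one 2D correspondence from a calibrated
    stereo pair. P_left/P_right are 3x4 projection matrices; x_left/x_right
    are pixel coordinates (u, v). Returns the 3D point in world coordinates."""
    u1, v1 = x_left
    u2, v2 = x_right
    # Each view contributes two rows of the homogeneous system A X = 0.
    A = np.stack([
        u1 * P_left[2] - P_left[0],
        v1 * P_left[2] - P_left[1],
        u2 * P_right[2] - P_right[0],
        v2 * P_right[2] - P_right[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Illustrative (made-up) calibration: two identical cameras with a 0.5 m baseline.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# A hypothetical matched body keypoint (e.g., a wrist) seen in both images.
point_3d = triangulate_dlt(P_left, P_right, (700.0, 400.0), (650.0, 400.0))
print(point_3d)  # about (0.6, 0.4, 10.0) m for these made-up inputs
```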
Citations: 0
Comparative Assessment of Target-Detection Algorithms for Urban Targets Using Hyperspectral Data
IF 1.3 | Earth Science (CAS Q4) | Q4 GEOGRAPHY, PHYSICAL | Pub Date: 2021-05-01 | DOI: 10.14358/pers.87.5.349
Shalini Gakhar, K. C. Tiwari
Hyperspectral data present better opportunities to exploit the treasure of spectral and spatial content that lies within their spectral bands. Hyperspectral data are increasingly being considered for exploring levels of urbanization, due to their capability to capture the spectral variability that a modern urban landscape offers. Data and algorithms are two sides of a coin: while the data capture the variations, the algorithms provide suitable methods to extract relevant information. The literature reports a variety of algorithms for extraction of urban information from any given data, with varying accuracies. This article aims to explore the binary-classifier approach to target detection to extract certain features. Roads and roofs are the most common features present in any urban scene. These experiments were conducted on a subset of AVIRIS-NG hyperspectral data from the Udaipur region of India, with roads and roofs as targets. Four categories of target-detection algorithms are identified from a literature survey and our previous experience—distance measures, angle-based measures, information measures, and machine-learning measures—followed by performance evaluation. The article also presents a brief taxonomy of algorithms; explores methods such as the Mahalanobis angle, which has been reported to be effective for extraction of urban targets; and explores newer machine-learning algorithms to increase accuracy. This work is likely to aid in city planning, sustainable development, and various other governmental and nongovernmental efforts related to urbanization.
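As an illustration of the angle-based measures mentioned above, here is a minimal sketch of a spectral-angle target detector, plus one common reading of a "Mahalanobis angle" (the angle computed after whitening by the scene covariance). The cube, target spectrum, and threshold are synthetic stand-ins, and the paper's exact formulations may differ.

```python
import numpy as np

def spectral_angle_map(cube, target):
    """Classic spectral-angle detector: the angle between each pixel spectrum
    and the target spectrum. cube has shape (rows, cols, bands); target has
    shape (bands,). Smaller angles indicate closer matches to the target."""
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    cos = pixels @ target / (np.linalg.norm(pixels, axis=1) * np.linalg.norm(target) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[:2])

def mahalanobis_angle_map(cube, target):
    """Angle computed after whitening both pixels and target by the scene
    (background) covariance, one common reading of a 'Mahalanobis angle';
    the paper's exact formulation may differ."""
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels - mean, rowvar=False)
    w = np.linalg.inv(np.linalg.cholesky(cov + 1e-6 * np.eye(cov.shape[0])))
    pw = (pixels - mean) @ w.T
    tw = (target - mean) @ w.T
    cos = pw @ tw / (np.linalg.norm(pw, axis=1) * np.linalg.norm(tw) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[:2])

# Toy example: a 50x50 cube with 20 bands and a synthetic 'road' spectrum.
rng = np.random.default_rng(0)
cube = rng.random((50, 50, 20))
road_spectrum = np.linspace(0.2, 0.6, 20)
angles_sam = spectral_angle_map(cube, road_spectrum)
angles_mah = mahalanobis_angle_map(cube, road_spectrum)
mask = angles_sam < np.percentile(angles_sam, 5)   # keep the 5% best-matching pixels
```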
Citations: 0
GIS Tips & Tricks—Understanding Aerial Triangulation
IF 1.3 | Earth Science (CAS Q4) | Q4 GEOGRAPHY, PHYSICAL | Pub Date: 2021-05-01 | DOI: 10.14358/pers.87.5.319
D. Maune, A. Karlin
This month’s column is a bit of a twist on the “standard” GIS Tips & Tricks: it focuses on a highly technical area of photogrammetry, namely Aerial Triangulation, and gives a brief history of the technology. Dr. David Maune contributed this column, and he opens up the “black box” for a little trickery that enables low-cost, high-precision imagery. Enjoy. Today, Aerial Triangulation (AT) is performed with “black box” technology that most users don’t understand. My “trick” in teaching AT is to review all the generations of photogrammetry that led to today’s digital photogrammetry and Structure from Motion (SfM).
Citations: 0
Cartography. A Compendium of Design Thinking for Mapmakers
IF 1.3 | Earth Science (CAS Q4) | Q4 GEOGRAPHY, PHYSICAL | Pub Date: 2021-05-01 | DOI: 10.14358/pers.87.5.322
K. Field, Adam Steer
{"title":"Cartography. A Compendium of Design Thinking for Mapmakers","authors":"K. Field, Adam Steer","doi":"10.14358/pers.87.5.322","DOIUrl":"https://doi.org/10.14358/pers.87.5.322","url":null,"abstract":"","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":"2 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88706997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Inversion of Solar-Induced Chlorophyll Fluorescence Using Polarization Measurements of Vegetation
IF 1.3 | Earth Science (CAS Q4) | Q4 GEOGRAPHY, PHYSICAL | Pub Date: 2021-05-01 | DOI: 10.14358/pers.87.5.331
Haiyan Yao, Ziying Li, Yang Han, Haofang Niu, Tianyi Hao, Yuyu Zhou
In vegetation remote sensing, the apparent radiation of the vegetation canopy is commonly treated as the combination of three components that originate from different parts of the vegetation and have different production mechanisms and optical properties: volume scattering Lvol, polarized light Lpol, and chlorophyll fluorescence ChlF. Chlorophyll fluorescence plays a very important role in vegetation remote sensing, and polarization information has become an effective way to characterize the physical characteristics of vegetation. This study analyzes the differences among these three radiation fluxes and utilizes polarization measurements to separate them from the apparent radiation of the vegetation canopy. Specifically, solar-induced chlorophyll fluorescence is extracted from vegetation canopy radiance data using standard Fraunhofer-line discrimination. The results show that polarization measurements can quantitatively separate Lvol, Lpol, and ChlF and extract the solar-induced chlorophyll fluorescence. This study improves our understanding of the light-scattering properties of vegetation canopies and provides insights for model building and algorithm development.
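The standard Fraunhofer-line discrimination named in the abstract can be written compactly. A minimal sketch of the single-line (sFLD) form follows, assuming reflectance and fluorescence are constant across the absorption line; the numerical inputs are made up, and the paper's separation of Lvol and Lpol is not reproduced here.

```python
def sfld_fluorescence(e_in, e_out, l_in, l_out):
    """Standard (single) Fraunhofer-line discrimination. e_* are downwelling
    irradiances and l_* are upwelling canopy radiances measured inside and
    just outside a Fraunhofer/telluric absorption line (e.g., O2-A at 760 nm).
    Assumes reflectance and fluorescence are constant across the line:
    F = (E_out * L_in - E_in * L_out) / (E_out - E_in)."""
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)

# Illustrative numbers only (not the paper's data).
F = sfld_fluorescence(e_in=20.0, e_out=100.0, l_in=3.1, l_out=14.0)
print(F)  # 0.375 with these made-up inputs
```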
Citations: 0
Scene Classification of Remotely Sensed Images via Densely Connected Convolutional Neural Networks and an Ensemble Classifier
IF 1.3 | Earth Science (CAS Q4) | Q4 GEOGRAPHY, PHYSICAL | Pub Date: 2021-04-01 | DOI: 10.14358/PERS.87.3.295
Q. Cheng, Yuan Xu, Peng Fu, Jinling Li, Wen Wang, Y. Ren
Deep learning techniques, especially convolutional neural networks, have greatly boosted performance in analyzing and understanding remotely sensed images. However, existing scene-classification methods generally neglect local and spatial information that is vital to scene classification of remotely sensed images. In this study, a scene-classification method for remotely sensed images based on a pretrained densely connected convolutional neural network combined with an ensemble classifier is proposed to tackle the under-utilization of local and spatial information in image classification. Specifically, we first exploit the pretrained DenseNet and fine-tune it to release its potential for remote-sensing image feature representation. Second, a spatial-pyramid structure and an improved Fisher-vector coding strategy are leveraged to further strengthen the representation capability and robustness of the feature maps captured from the convolutional layers. We then integrate an ensemble classifier into our network architecture, considering the lower attention given to feature descriptors. Extensive experiments are conducted, and the proposed method achieves superior performance on the UC Merced, AID, and NWPU-RESISC45 data sets.
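The DenseNet feature extraction, spatial pyramid, and improved Fisher-vector coding are not reproduced here; the sketch below shows only the final ensemble-classification step on precomputed feature vectors, using scikit-learn's soft-voting ensemble as a stand-in for whatever ensemble the authors use. The feature dimensions, classifiers, and labels are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Stand-in for DenseNet + Fisher-vector features: 1000 scenes x 512-D vectors
# with 10 scene classes. Real features would come from the CNN pipeline.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 512))
labels = rng.integers(0, 10, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)

# Soft-voting ensemble of three conventional classifiers over the same features.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC(kernel="rbf", probability=True)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("accuracy:", ensemble.score(X_test, y_test))
```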
Citations: 2
A Digital Terrain Modeling Method in Urban Areas by the ICESat-2 (Generating precise terrain surface profiles from photon-counting technology)
IF 1.3 | Earth Science (CAS Q4) | Q4 GEOGRAPHY, PHYSICAL | Pub Date: 2021-04-01 | DOI: 10.14358/PERS.87.4.237
Nahed Osama, Bisheng Yang, Yue Ma, Mohamed Freeshah
The Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) can provide new measurements of the Earth's elevations through photon-counting technology. Most research has focused on extracting the ground and the canopy photons in vegetated areas. Yet the extraction of the ground photons from urban areas, where the vegetation is mixed with artificial constructions, has not been fully investigated. This article proposes a new method to estimate the ground surface elevations in urban areas. The ICESat-2 signal photons were detected by the improved Density-Based Spatial Clustering of Applications with Noise algorithm and the Advanced Topographic Laser Altimeter System algorithm. The Advanced Land Observing Satellite-1 PALSAR-derived digital surface model has been utilized to separate the terrain surface from the ICESat-2 data. A set of ground-truth data was used to evaluate the accuracy of these two methods, and the achieved accuracy was up to 2.7 cm, which makes our method effective and accurate in determining the ground elevation in urban scenes.
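The photon-filtering step can be illustrated with a plain DBSCAN on along-track distance and elevation; the paper uses an improved DBSCAN together with the official ATLAS algorithm, so the sketch below is only a baseline, and the eps/min_samples values and toy photon cloud are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def label_signal_photons(along_track_m, elevation_m, eps=5.0, min_samples=10):
    """Density-based separation of ICESat-2 signal photons from background
    noise: signal photons cluster densely along the ground/canopy profile,
    while solar background photons are sparse. eps and min_samples are
    illustrative and would need tuning per beam, scene, and coordinate scaling."""
    pts = np.column_stack([along_track_m, elevation_m])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    return labels != -1   # True = signal, False = noise

# Toy profile: a gently sloping ground track plus uniform noise photons.
rng = np.random.default_rng(1)
x_sig = np.sort(rng.uniform(0, 1000, 2000))
z_sig = 0.02 * x_sig + rng.normal(0, 0.3, x_sig.size)
x_noise = rng.uniform(0, 1000, 500)
z_noise = rng.uniform(-50, 100, 500)
x = np.concatenate([x_sig, x_noise])
z = np.concatenate([z_sig, z_noise])
is_signal = label_signal_photons(x, z)
```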
Citations: 5
GIS Tips & Tricks—Using GIS to Hunt for Easter Eggs
IF 1.3 | Earth Science (CAS Q4) | Q4 GEOGRAPHY, PHYSICAL | Pub Date: 2021-04-01 | DOI: 10.14358/PERS.87.4.225
A. Karlin, K. Patterson, Carly Bradshaw, Savannah Carter, Todd Waldorf
{"title":"GIS Tips & Tricks—Using GIS to Hunt for Easter Eggs","authors":"A. Karlin, K. Patterson, Carly Bradshaw, Savannah Carter, Todd Waldorf","doi":"10.14358/PERS.87.4.225","DOIUrl":"https://doi.org/10.14358/PERS.87.4.225","url":null,"abstract":"","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":"87 1","pages":"225-226"},"PeriodicalIF":1.3,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46410578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Discovering Potential Illegal Construction Within Building Roofs from UAV Images Using Semantic Segmentation and Object-Based Change Detection
IF 1.3 | Earth Science (CAS Q4) | Q4 GEOGRAPHY, PHYSICAL | Pub Date: 2021-04-01 | DOI: 10.14358/PERS.87.4.263
Yang Liu, Yujie Sun, Shikang Tao, Min Wang, Qian Shen, Jiru Huang
A novel potential illegal construction (PIC) detection method by bitemporal unmanned aerial vehicle (UAV) image comparison (change detection) within building roof areas is proposed. In this method, roofs are first extracted from UAV images using a depth-channel improved UNet model. A two-step change detection scheme is then implemented for PIC detection. In the change detection stage, roofs with appearance, disappearance, and shape changes are first extracted by morphological analysis. Subroof primitives are then obtained by roof-constrained image segmentation within the remaining roof areas, and object-based iteratively reweighted multivariate alteration detection (IR-MAD) is implemented to extract the small PICs from the subroof primitives. The proposed method organically combines deep learning and object-based image analysis, which can identify entire roof changes and locate small object changes within the roofs. Experiments show that the proposed method has better accuracy compared with the other counterparts, including the original IR-MAD, change vector analysis, and principal components analysis-K-means.
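The morphological-analysis stage (finding roofs that appear or disappear between epochs) can be sketched with simple mask logic; the UNet roof extraction and object-based IR-MAD stages are not shown, and the mask shapes and minimum-object size are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def roof_change_masks(roof_t1, roof_t2, min_pixels=50):
    """Compare two binary roof masks (epoch 1 vs. epoch 2) and return per-pixel
    appearance and disappearance masks with small connected components removed.
    A simplified stand-in for the paper's roof-change analysis."""
    appeared = np.logical_and(roof_t2, np.logical_not(roof_t1))
    disappeared = np.logical_and(roof_t1, np.logical_not(roof_t2))

    def drop_small(mask):
        labeled, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labeled, index=np.arange(1, n + 1))
        return np.isin(labeled, np.where(sizes >= min_pixels)[0] + 1)

    return drop_small(appeared), drop_small(disappeared)

# Toy example: one roof appears between the two epochs.
t1 = np.zeros((200, 200), dtype=bool)
t2 = t1.copy()
t2[50:120, 60:140] = True
appeared, disappeared = roof_change_masks(t1, t2)
```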
Citations: 0
Parsing of Urban Facades from 3D Point Clouds Based on a Novel Multi-View Domain
IF 1.3 | Earth Science (CAS Q4) | Q4 GEOGRAPHY, PHYSICAL | Pub Date: 2021-04-01 | DOI: 10.14358/PERS.87.4.283
Wei Wang, Yuanzi Xu, Y. Ren, Gang Wang
Recently, performance improvement in facade parsing from 3D point clouds has been brought about by designing more complex network structures, which cost huge computing resources and do not take full advantage of prior knowledge of facade structure. Instead, from the perspective of data distribution, we construct a new hierarchical mesh multi-view data domain based on the characteristics of facade objects to achieve fusion of deep-learning models and prior knowledge, thereby significantly improving segmentation accuracy. We comprehensively evaluate the current mainstream method on the RueMonge 2014 data set and demonstrate the superiority of our method. The mean intersection-over-union index on the facade-parsing task reached 76.41%, which is 2.75% higher than the current best result. In addition, through comparative experiments, the reasons for the performance improvement of the proposed method are further analyzed.
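The reported metric, mean intersection-over-union, is straightforward to compute; a minimal sketch follows, with a toy two-class label map standing in for actual facade-parsing output.

```python
import numpy as np

def mean_iou(pred, truth, num_classes):
    """Mean intersection-over-union across classes, the metric reported for
    the facade-parsing results (76.41% in the paper). pred and truth are
    integer label maps of the same shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union > 0:                      # ignore classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2-class example (e.g., wall vs. window labels on a small facade patch).
truth = np.array([[0, 0, 1], [0, 1, 1]])
pred = np.array([[0, 1, 1], [0, 1, 1]])
print(mean_iou(pred, truth, num_classes=2))  # (2/3 + 3/4) / 2 ≈ 0.708
```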
Citations: 2