
Latest publications in ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences

A METHOD TO GENERATE FLOOD MAPS IN 3D USING DEM AND DEEP LEARNING
Pub Date : 2020-11-17 DOI: 10.5194/isprs-archives-xliv-m-2-2020-25-2020
A. Gebrehiwot, L. Hashemi-Beni
Abstract. High-resolution remote sensing imagery has been increasingly used for flood applications. Different methods have been proposed for flood extent mapping, from creating water indices to classifying high-resolution imagery. Among these methods, deep learning methods have shown promising results for flood extent extraction; however, these two-dimensional (2D) image classification methods cannot directly provide water level measurements. This paper presents an integrated approach to extract the flood extent in three dimensions (3D) from UAV data by combining a 2D deep-learning-based flood map with a 3D point cloud extracted using a Structure from Motion (SfM) method. We fine-tuned a pretrained Visual Geometry Group 16 (VGG-16) based fully convolutional model to create a 2D inundation map. The 2D classified map was overlaid on the SfM-based 3D point cloud to create a 3D flood map. The floodwater depth was estimated by subtracting a pre-flood Digital Elevation Model (DEM) from the SfM-based DEM. The results show that the proposed method is efficient in creating a 3D flood extent map to support emergency response and recovery activities during a flood event.
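The depth-estimation step described in the abstract (subtracting a pre-flood DEM from the SfM-derived DEM within the deep-learning flood mask) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; all array names are illustrative:

```python
import numpy as np

def flood_depth(sfm_dem, preflood_dem, flood_mask):
    """Estimate per-pixel floodwater depth.

    sfm_dem      -- DEM derived from SfM during the flood (metres)
    preflood_dem -- pre-flood reference DEM on the same grid (metres)
    flood_mask   -- boolean array from the 2D deep-learning flood map
    """
    depth = sfm_dem - preflood_dem            # elevation difference
    depth = np.where(flood_mask, depth, 0.0)  # keep flooded pixels only
    return np.clip(depth, 0.0, None)          # clamp negative residuals to 0

# Toy 2x2 grid: water surface at 3 m over 1 m terrain in the flooded column
sfm = np.array([[3.0, 1.0], [3.0, 1.0]])
pre = np.ones((2, 2))
mask = np.array([[True, False], [True, False]])
print(flood_depth(sfm, pre, mask))  # 2 m depth in flooded cells, 0 elsewhere
```

In practice both DEMs would need to be co-registered on the same grid before differencing.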
Pages: 25–28
Citations: 5
THE CASE FOR LOW-COST, PERSONALIZED VISUALIZATION FOR ENHANCING NATURAL HAZARD PREPAREDNESS
Pub Date : 2020-11-17 DOI: 10.5194/isprs-archives-xliv-m-2-2020-37-2020
Peter Gmelch, R. Lejano, Evan O'Keeffe, D. Laefer, Cady Drell, M. Bertolotto, U. Ofterdinger, Jennifer McKinley
Abstract. Each year, lives are needlessly lost to floods due to residents failing to heed evacuation advisories. Risk communication research suggests that flood warnings need to be more vivid, contextualized, and visualizable in order to engage the message recipient. This paper makes the case for the development of a low-cost augmented reality tool that enables individuals to visualize, at close range and in three dimensions, their homes, schools, and places of work and worship subjected to flooding (modeled upon a series of federally expected flood hazard levels). This paper also introduces initial tool development in this area and the related data input stream.
Pages: 37–44
Citations: 4
CNN-BASED PLACE RECOGNITION TECHNIQUE FOR LIDAR SLAM
Pub Date : 2020-11-17 DOI: 10.5194/isprs-archives-xliv-m-2-2020-117-2020
Y. Yang, S. Song, C. Toth
Abstract. Place recognition, or loop closure, is a technique for recognizing landmarks and/or scenes previously visited by a mobile sensing platform in an area. The technique is a key function for robustly performing Simultaneous Localization and Mapping (SLAM) in any environment, including global positioning system (GPS) denied environments, because it enables global optimization to compensate for the drift of dead-reckoning navigation systems. Place recognition in 3D point clouds is a challenging task that is traditionally handled with the aid of other sensors, such as cameras and GPS. Unfortunately, visual place recognition techniques may be impacted by changes in illumination and texture, and GPS may perform poorly in urban areas. To mitigate this problem, state-of-the-art Convolutional Neural Network (CNN) based 3D descriptors may be applied directly to 3D point clouds. In this work, we investigated the performance of different classification strategies utilizing a cutting-edge CNN-based 3D global descriptor (PointNetVLAD) for the place recognition task on the Oxford RobotCar dataset.
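With a global descriptor such as PointNetVLAD (one fixed-length vector per submap), the place-recognition query reduces to nearest-neighbour search in descriptor space. A minimal sketch, using random vectors in place of real network output (the threshold value and dimensions are illustrative):

```python
import numpy as np

def recognize_place(query_desc, database_descs, threshold=0.8):
    """Return the index of the best-matching database descriptor,
    or None if no match is close enough (no loop closure).

    Descriptors are assumed L2-normalized, so Euclidean distance is a
    monotone function of cosine similarity.
    """
    dists = np.linalg.norm(database_descs - query_desc, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None

rng = np.random.default_rng(0)
db = rng.normal(size=(100, 256))                    # 100 stored submaps
db /= np.linalg.norm(db, axis=1, keepdims=True)

# A query near database entry 42 should be recognized as a revisit
query = db[42] + 0.01 * rng.normal(size=256)
query /= np.linalg.norm(query)
print(recognize_place(query, db))  # -> 42, a loop-closure candidate
```

In a real SLAM pipeline the candidate would then be verified geometrically (e.g. by point-cloud registration) before being added as a loop-closure constraint.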
Pages: 117–122
Citations: 2
STUDY OF ACTIVE FARMLAND USE TO SUPPORT AGENT-BASED MODELING OF FOOD DESERTS
Pub Date : 2020-11-17 DOI: 10.5194/isprs-archives-xliv-m-2-2020-9-2020
S. Dhamankar, L. Hashemi-Beni, L. Kurkalova, C. Liang, T. Mulrooney, M. Jha, G. Monty, H. Miao
Abstract. A food desert (FD) is an area with limited access to affordable and nutritious foods such as fresh fruits, vegetables, and other healthful whole foods. FDs are an important socio-economic problem in North Carolina (NC), potentially contributing to obesity in low-income areas. If farmland is available, local vegetable production could potentially help alleviate FDs. However, little is known about land use and land-use transitions (LUTs) in the vicinity of FDs. To fill this knowledge gap, we study farmland use in three NC counties, Bladen, Guilford, and Rutherford, located in the Coastal, Piedmont, and Mountain regions of the state, respectively. The analysis combines the United States Department of Agriculture (USDA) 2015 FD/NFD delineation of census tracts with geospatial soil productivity and 2008–2019 land cover data. The understanding of farmland use is expected to contribute to the development of the LUT components of FD Agent-Based Models (ABMs).
Pages: 9–13
Citations: 1
THE ACCESSIBILITY AND SPATIAL PATTERNS OF GREEN OPEN SPACE BASED ON GIS
Pub Date : 2020-11-17 DOI: 10.5194/isprs-archives-xliv-m-2-2020-55-2020
D. Liu, Y. Shi
Abstract. Studies show that green open space (GOS) is beneficial to visitors' mental and physical health and has positive social values. This study took four global cities as examples, namely Shanghai, Tokyo, New York, and London. The per capita area, the coverage rate, and the availability of GOS were calculated. The GOS was then classified according to scale and morphological features, and the relations between availability and spatial patterns were analyzed. The results showed that the four cities could be divided into two classes: Shanghai and Tokyo are high-population-density cities with medium GOS coverage and availability, while New York and London are medium-population-density cities with high GOS coverage and availability. It was found that a high GOS coverage rate did not necessarily lead to higher availability. Shanghai and London could increase the amount of small GOS to ease the shortage of availability, and London and Tokyo could consider adding linear GOS to improve the connectivity of GOS.
Pages: 55–59
Citations: 0
DEEP LEARNING FOR REMOTE SENSING IMAGE CLASSIFICATION FOR AGRICULTURE APPLICATIONS
Pub Date : 2020-11-17 DOI: 10.5194/isprs-archives-xliv-m-2-2020-51-2020
L. Hashemi-Beni, A. Gebrehiwot
Abstract. This research examines the ability of deep learning methods to classify remote sensing imagery for agriculture applications. U-net and fully convolutional networks are fine-tuned, utilized, and tested for crop/weed classification. The dataset for this study includes 60 top-down images of an organic carrot field, which were collected by an autonomous vehicle and labeled by experts. Using 60 training images, the FCN-8s model achieved 75.1% accuracy in detecting weeds, compared to 66.72% for U-net. However, the U-net model performed better in detecting crops, at 60.48% compared to 47.86% for FCN-8s.
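Per-class detection accuracies of the kind compared above can be computed from predicted and reference label masks. A minimal sketch (class ids and toy masks are illustrative; this measures producer's accuracy, i.e. recall per class):

```python
import numpy as np

def class_accuracy(pred, truth, cls):
    """Fraction of ground-truth pixels of class `cls` that were
    predicted correctly (producer's accuracy / per-class recall)."""
    mask = truth == cls
    return float(np.mean(pred[mask] == cls))

# Toy 2x4 masks: 0 = background, 1 = crop, 2 = weed
truth = np.array([[1, 1, 2, 2],
                  [0, 0, 2, 2]])
pred  = np.array([[1, 0, 2, 2],
                  [0, 0, 2, 0]])
print(class_accuracy(pred, truth, 1))  # crop accuracy: 0.5
print(class_accuracy(pred, truth, 2))  # weed accuracy: 0.75
```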
Pages: 51–54
Citations: 16
DIGITAL SURFACE MODEL DERIVED FROM UAS IMAGERY ASSESSMENT USING HIGH-PRECISION AERIAL LIDAR AS REFERENCE SURFACE
Pub Date : 2020-11-17 DOI: 10.5194/isprs-archives-xliv-m-2-2020-61-2020
J. Lopez, R. Munjy
Abstract. Imagery captured from unmanned aerial systems (UAS) has found significant utility in the field of surveying and mapping as advances in computer vision have been combined with the principles of photogrammetry. Its standing in the remote sensing community has increased as the miniaturization of on-board survey-grade global navigation satellite system (GNSS) receivers has made it possible to achieve high network accuracy, contributing to effective aerotriangulation. UAS photogrammetry has gained much popularity because of its effectiveness, efficiency, and economy, and especially its availability and ease of use. Although photogrammetry has proven to meet and exceed planimetric precision and accuracy requirements, various factors tend to cause deficiencies in the accuracy achieved in the vertical direction. This research aims to demonstrate the achievable overall accuracy of surface modelling through minimization of systematic errors at a significant level, using fixed-wing platforms designed for high-accuracy surveying (the SenseFly eBee Plus and eBee X) equipped with survey-grade GNSS receivers and a 20 MP integrated fixed-focal-length camera. The UAS campaign was flown over a 320 m by 320 m site with 81 surveyed 3D ground control points, whose positions were surveyed to 1.0 cm horizontal and 0.5 cm vertical accuracy using static GNSS methods and digital leveling, respectively. All aerotriangulation (AT) accuracy was based on 75 independent checkpoints. The digital surface model (DSM) was compared to a reference DSM generated from high-precision manned aerial LiDAR collected with an Optech Galaxy scanner. Overall vertical accuracy was at the sub-decimeter level in both commercial software packages used, Pix4Dmapper and Agisoft Metashape.
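Vertical accuracy assessments like the one above are commonly summarized as the RMSE of DSM-minus-reference elevation differences at independent checkpoints. A minimal sketch with illustrative checkpoint values (not the study's data):

```python
import numpy as np

def vertical_rmse(dsm_z, ref_z):
    """Root-mean-square error of DSM elevations against reference
    elevations sampled at the same checkpoint locations (metres)."""
    diff = np.asarray(dsm_z) - np.asarray(ref_z)
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy checkpoint elevations (metres) with +-0.06 m and +-0.08 m errors
dsm = [100.06, 99.94, 100.08, 99.92]
ref = [100.00, 100.00, 100.00, 100.00]
print(vertical_rmse(dsm, ref))  # ~0.071 m: a sub-decimetre result
```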
Pages: 61–67
Citations: 0
EXTRACTING BUILT-UP FEATURES IN COMPLEX BIOPHYSICAL ENVIRONMENTS BY USING A LANDSAT BANDS RATIO
Pub Date : 2020-11-17 DOI: 10.5194/isprs-archives-xliv-m-2-2020-79-2020
A. H. N. Mfondoum, Paul Gérard Gbetkom, R. Cooper, Sofia Hakdaoui, M. B. Mansour Badamassi
Abstract. This paper addresses the challenging remote sensing problem of urban mixed pixels in medium-spatial-resolution satellite data. The tentatively named Normalized Difference Built-up and Surroundings Unmixing Index (NDBSUI) is proposed using Landsat-8 Operational Land Imager (OLI) bands. It uses the Shortwave Infrared 2 (SWIR2) band as the main wavelength, together with the SWIR1 and red bands, for built-up extraction. A ratio is computed based on the normalization process, and the index is applied to six cities with different urban and environmental characteristics. The built-up area of the experimental site of Yaounde is extracted with an overall accuracy of 95.51% and a kappa coefficient of 0.90. The NDBSUI is validated over five other sites, chosen according to Cameroon's bioclimatic zoning. The results are satisfactory for the cities of Yokadouma and Kumba in the bimodal and monomodal rainfall zones, where overall accuracies are up to 98.9% and 97.5%, with kappa coefficients of 0.88 and 0.94 respectively, although these values are close to those of three other indices. However, in the cities of Foumban, Ngaoundere and Garoua, representing the western highlands, the high Guinea savannah and the Sudano-Sahelian zones, where built-up is more easily confused with soil features, overall accuracies of 97.06%, 95.29% and 74.86%, corresponding to kappa coefficients of 0.918, 0.89 and 0.42, were recorded. Differences in accuracy relative to EBBI, NDBI and UI are up to 31.66%, confirming the efficiency of the NDBSUI in automating built-up extraction and unmixing from surrounding noise with fewer biases.
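The abstract does not give the exact NDBSUI formula, but normalized-difference band ratios of this family share the general form (B1 − B2) / (B1 + B2). A generic sketch, assuming that form; the band combination and reflectance values are illustrative, not the published index:

```python
import numpy as np

def normalized_difference(b1, b2, eps=1e-9):
    """Generic normalized-difference band ratio, bounded in [-1, 1].
    `eps` guards against division by zero on dark pixels."""
    b1 = b1.astype(np.float64)
    b2 = b2.astype(np.float64)
    return (b1 - b2) / (b1 + b2 + eps)

# Toy Landsat-8 OLI reflectances for [built-up, vegetation] pixels:
# built-up surfaces tend to reflect more strongly in the SWIR bands
swir2 = np.array([0.30, 0.08])
red   = np.array([0.12, 0.06])
print(normalized_difference(swir2, red))  # built-up pixel scores higher
```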
Pages: 79–85
Citations: 2
EVALUATION OF CONVERTING LANDSAT DN TO TA AND SR VALUES ON SELECT SPECTRAL INDICES
Pub Date : 2020-11-17 DOI: 10.5194/isprs-archives-xliv-m-2-2020-29-2020
A. Gettinger, R. Sivanpillai
Abstract. The complete archive of images collected across all Landsat missions has been reprocessed and categorized by the U.S. Geological Survey (USGS) into a three-tiered architecture: Real-time, Tier-1, and Tier-2. This tiered architecture ensures data compatibility and is convenient for acquiring high-quality scenes for pixel-by-pixel change analyses. However, it is important to evaluate the effects of converting older Landsat images from digital numbers (DN) to top-of-atmosphere (TA) and surface reflectance (SR) values that are equivalent to more recent Landsat data. This study evaluated the effects of this conversion on spectral indices derived from Tier-1 (the highest quality) Landsat 5 and 8 scenes collected at 30 m spatial resolution. Spectral brightness and reflectance of mixed conifers, Northern Mixed Grass Prairie, deep water, shallow water, and edge water were extracted as DN, TA, and SR values, respectively. Spectral indices were estimated and compared to determine whether the analysis of these land cover classes or their conditions would differ depending on which preprocessed image type was used (DN, TA, or SR). Results from this study will be informative for others making use of indices with images from multiple Landsat satellites, as well as for engineers planning to reprocess images for future Landsat collections. This time-series study showed that there was a significant difference between index values derived from the three levels of pre-processing. Average index values of vegetation cover classes were consistently significantly different between levels of pre-processing, whereas average water index values showed inconsistent significant differences between pre-processing levels.
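The DN-to-TA conversion evaluated here follows the standard USGS rescaling for Landsat 8: multiply the quantized DN by a per-band multiplicative factor, add an additive factor, and divide by the sine of the sun elevation. The coefficient defaults below are the typical Landsat 8 reflectance-band values; in practice they are read from the scene's MTL metadata file:

```python
import math
import numpy as np

def dn_to_toa_reflectance(dn, mult=2.0e-5, add=-0.1, sun_elev_deg=45.0):
    """Convert Landsat 8 DN (Qcal) to sun-angle-corrected TOA reflectance.

    mult/add     -- REFLECTANCE_MULT_BAND_x / REFLECTANCE_ADD_BAND_x
                    from the scene's MTL metadata (typical values shown)
    sun_elev_deg -- SUN_ELEVATION from the MTL metadata
    """
    rho = mult * np.asarray(dn, dtype=np.float64) + add  # planetary reflectance
    return rho / math.sin(math.radians(sun_elev_deg))    # sun-angle correction

dn = np.array([10000, 20000, 30000])
print(dn_to_toa_reflectance(dn, sun_elev_deg=60.0))
```

Spectral indices built from band ratios can then be computed on the rescaled values and compared against their DN-based counterparts, as in the study.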
Pages: 29–36
Citations: 0
IDENTIFYING EPIPHYTES IN DRONES PHOTOS WITH A CONDITIONAL GENERATIVE ADVERSARIAL NETWORK (C-GAN)
Pub Date : 2020-11-17 DOI: 10.5194/isprs-archives-xliv-m-2-2020-99-2020
A. Shashank, V. Sajithvariyar, V. Sowmya, K. Soman, R. Sivanpillai, G. Brown
Abstract. Unmanned Aerial Vehicle (UAV) missions often collect large volumes of imagery data. However, not all images will have useful information or be of sufficient quality. Manually sorting these images and selecting useful data is both time consuming and prone to interpreter bias. Deep neural network algorithms are capable of processing large image datasets and can be trained to identify specific targets. Generative Adversarial Networks (GANs) consist of two competing networks, a Generator and a Discriminator, that can analyze, capture, and copy the variations within a given dataset. In this study, we selected a variant of GAN called Conditional-GAN, which incorporates an additional label parameter, for identifying epiphytes in photos acquired by a UAV in forests within Costa Rica. We trained the network with 70%, 80%, and 90% of 119 photos containing the target epiphyte, Werauhia kupperiana (Bromeliaceae), and validated the algorithm's performance using validation data that were not used for training. The accuracy of the output was measured using the structural similarity index measure (SSIM) and the histogram correlation (HC) coefficient. Results obtained in this study indicated that the output images generated by C-GAN were similar (average SSIM = 0.89–0.91 and average HC = 0.97–0.99) to the analyst-annotated images. However, C-GAN had difficulty identifying the target plant when it was far from the camera, poorly lit, or covered by other plants. Results obtained in this study demonstrate the potential of C-GAN to reduce the time botanists spend identifying epiphytes in images acquired by UAVs.
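The two similarity measures the authors report can be reproduced with a short NumPy sketch. The SSIM below is the simplified single-window (global) form of the index — published evaluations, including scikit-image's `structural_similarity`, compute it over local windows and average — and the histogram correlation is the Pearson correlation of the two intensity histograms, in the spirit of OpenCV's `HISTCMP_CORREL`. The sample arrays are synthetic stand-ins, not the study's images:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified global SSIM: one window covering the whole image,
    with the standard luminance/contrast/structure terms and
    stabilizing constants C1, C2."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def hist_correlation(x, y, bins=32, data_range=(0.0, 255.0)):
    """Pearson correlation of the two intensity histograms."""
    hx, _ = np.histogram(x, bins=bins, range=data_range)
    hy, _ = np.histogram(y, bins=bins, range=data_range)
    return np.corrcoef(hx.astype(float), hy.astype(float))[0, 1]

# Synthetic stand-ins for a generated image and its reference.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64)).astype(float)
identical = reference.copy()
inverted = 255.0 - reference

print(global_ssim(reference, identical))    # 1.0 for a perfect copy
print(global_ssim(reference, inverted))     # well below 1.0
print(hist_correlation(reference, identical))
```

A perfect copy scores SSIM = 1 and HC = 1, and any structural degradation lowers SSIM, which is why the paper reports both: HC is sensitive only to the intensity distribution, while SSIM also penalizes spatial differences.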
Pages: 99–104
Citations: 11