Semantic Segmentation for Full-Waveform LiDAR Data Using Local and Hierarchical Global Feature Extraction

T. Shinohara, H. Xiu, M. Matsuoka
DOI: 10.1145/3397536.3422209
Published in: Proceedings of the 28th International Conference on Advances in Geographic Information Systems (November 3, 2020)
Citations: 3

Abstract

During the last few years, in the field of computer vision, sophisticated deep learning methods have been developed to accomplish semantic segmentation tasks on 3D point cloud data. Additionally, many researchers have extended the applicability of these methods, such as PointNet or PointNet++, beyond semantic segmentation of indoor scene data to large-scale outdoor scene data observed using airborne laser scanning systems equipped with light detection and ranging (LiDAR) technology. Most extant studies have investigated only geometric information (x, y, and z, or longitude, latitude, and height) and have omitted rich radiometric information. Therefore, we aim to extend the applicability of deep learning-based models from geometric data to radiometric data acquired with airborne full-waveform LiDAR, without converting the waveform into 2D images or 3D voxels. We simultaneously train two models: a local module for local feature extraction and a global module for acquiring wide receptive fields over the waveform. Furthermore, our proposed model is based on waveform-aware convolutional techniques. We evaluate the effectiveness of the proposed method using benchmark large-scale outdoor scene data. By integrating the two outputs from the local module and the global module, our proposed model achieved a higher mean recall (0.92) than previous methods and higher F1 scores for all six classes than the other 3D deep learning method. Therefore, our proposed network, consisting of the local and global modules, successfully solves the semantic segmentation task for full-waveform LiDAR data without requiring expert knowledge.
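The abstract describes a two-branch design: a local module that extracts per-point features directly from each recorded waveform (via waveform-aware convolution) and a global module that aggregates context hierarchically, with the two outputs fused for per-point classification. The paper itself does not give the architecture's details here, so the following is only a minimal illustrative sketch of that local/hierarchical-global fusion pattern; the helper functions (`conv1d`, `local_feature`, `global_feature`), the kernel, and the toy data are all hypothetical stand-ins, not the authors' actual model.

```python
import random
import statistics

random.seed(0)

def conv1d(x, kernel):
    # valid-mode cross-correlation over a recorded waveform
    k = len(kernel)
    return [sum(a * b for a, b in zip(x[i:i + k], kernel))
            for i in range(len(x) - k + 1)]

def local_feature(waveform, kernel):
    # local-module stand-in: strongest response of a waveform kernel
    return max(conv1d(waveform, kernel))

def global_feature(point_feats, levels=2):
    # hierarchical-global stand-in: average over progressively coarser
    # halves of the scene, then return the scene-level mean, which is
    # broadcast back to every point during fusion
    g = point_feats
    for _ in range(levels):
        half = len(g) // 2
        g = [statistics.mean(g[:half]), statistics.mean(g[half:])]
    return statistics.mean(g)

# toy scene: 8 points, each with a 16-sample full waveform
waveforms = [[random.gauss(0, 1) for _ in range(16)] for _ in range(8)]
kernel = [1.0, -2.0, 1.0]  # hypothetical "waveform-aware" kernel
loc = [local_feature(w, kernel) for w in waveforms]
glb = global_feature(loc)
fused = [(l, glb) for l in loc]  # per-point descriptor: (local, global)
print(len(fused), len(fused[0]))  # 8 2
```

In the real network the fusion feeds a per-point classifier over the six ground-cover classes; here the point is only that each point's descriptor combines a locally extracted waveform response with a shared, hierarchically pooled scene context.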