Fusing point cloud with image for object detection using convolutional neural networks

光电工程 (Opto-Electronic Engineering), Q3 Engineering · Pub Date: 2021-04-15 · DOI: 10.12086/OEE.2021.200325
Zhang Jiesong, Huang Yingping, Zhang Rui
Citations: 0

Abstract

Addressing issues such as varying object scale, complicated illumination conditions, and the lack of reliable distance information in driverless applications, this paper proposes a multi-modal fusion method for object detection using convolutional neural networks. A depth map is generated by mapping the LiDAR point cloud onto the image plane and is fed into the network together with the RGB image. The input data are also processed with a sliding window to reduce information loss. Two feature-extraction networks extract features from the image and the depth map respectively, and the resulting feature maps are fused through a connection layer. Objects are detected by applying position regression and object classification to the fused feature map, and non-maximum suppression is used to refine the detection results. Experimental results on the KITTI dataset show that the proposed method is robust under various illumination conditions and is especially effective at detecting small objects. Compared with other methods, the proposed method offers a good balance of detection accuracy and speed.
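As a rough illustration of the depth-map generation step described in the abstract, the sketch below projects a LiDAR point cloud onto the image plane with a KITTI-style 3×4 camera projection matrix. This is a minimal assumption-laden sketch, not the authors' code: the function name, the projection matrix `P`, the near-plane cutoff, and the nearest-return tie-breaking rule are all illustrative choices.

```python
import numpy as np

def lidar_to_depth_map(points, P, h, w):
    """Project LiDAR points (N, 3) onto an (h, w) image plane using a
    3x4 camera projection matrix P (KITTI-style), producing a sparse
    depth map. Pixels with no LiDAR return stay 0; when several points
    hit the same pixel, the nearest one wins (an illustrative choice)."""
    # Homogeneous coordinates: (N, 3) -> (N, 4)
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    cam = pts_h @ P.T                     # (N, 3) camera-plane coords
    z = cam[:, 2]
    valid = z > 0.1                       # keep points in front of the camera
    u = np.round(cam[valid, 0] / z[valid]).astype(int)
    v = np.round(cam[valid, 1] / z[valid]).astype(int)
    d = z[valid]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, d = u[inside], v[inside], d[inside]
    depth = np.zeros((h, w), dtype=np.float32)
    # Write far-to-near so that nearer returns overwrite farther ones
    order = np.argsort(-d)
    depth[v[order], u[order]] = d[order]
    return depth
```

The resulting single-channel depth image can then be fed to the second feature-extraction branch alongside the RGB image, as the paper describes.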