3D object detection using improved PointRCNN

Kazuki Fukitani, Ishiyama Shin, Huimin Lu, Shuo Yang, Tohru Kamiya, Yoshihisa Nakatoh, Seiichi Serikawa
{"title":"使用改进的PointRCNN进行3D目标检测","authors":"Kazuki Fukitani,&nbsp;Ishiyama Shin,&nbsp;Huimin Lu,&nbsp;Shuo Yang,&nbsp;Tohru Kamiya,&nbsp;Yoshihisa Nakatoh,&nbsp;Seiichi Serikawa","doi":"10.1016/j.cogr.2022.12.001","DOIUrl":null,"url":null,"abstract":"<div><p>Recently, two-dimensional object detection (2D object detection) has been introduced in numerous applications such as building exterior diagnosis, crime prevention and surveillance, and medical fields. However, the distance (depth) information is not enough for indoor robot navigation, robot grasping, autonomous running, and so on, with conventional object detection. Therefore, in order to improve the accuracy of 3D object detection, this paper proposes an improvement of Point RCNN, which is a segmentation-based method using RPNs and has performed well in 3D detection benchmarks on the KITTI dataset commonly used in recognition tasks for automatic driving. The proposed improvement is to improve the network in the first stage of generating 3D box candidates in order to solve the problem of frequent false positives. Specifically, we added a Squeeze and Excitation (SE) Block to the network of pointnet++ that performs feature extraction in the first stage and changed the activation function from ReLU to Mish. Experiments were conducted on the KITTI dataset, which is commonly used in research aimed at automated driving, and an accurate comparison was conducted using AP. The proposed method outperforms the conventional method by several percent on all three difficulty levels.</p></div>","PeriodicalId":100288,"journal":{"name":"Cognitive Robotics","volume":"2 ","pages":"Pages 242-254"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667241322000222/pdfft?md5=976fa9833e04a5bb9d3751cbbe165535&pid=1-s2.0-S2667241322000222-main.pdf","citationCount":"0","resultStr":"{\"title\":\"3D object detection using improved PointRCNN\",\"authors\":\"Kazuki Fukitani,&nbsp;Ishiyama Shin,&nbsp;Huimin Lu,&nbsp;Shuo Yang,&nbsp;Tohru Kamiya,&nbsp;Yoshihisa Nakatoh,&nbsp;Seiichi Serikawa\",\"doi\":\"10.1016/j.cogr.2022.12.001\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Recently, two-dimensional object detection (2D object detection) has been introduced in numerous applications such as building exterior diagnosis, crime prevention and surveillance, and medical fields. However, the distance (depth) information is not enough for indoor robot navigation, robot grasping, autonomous running, and so on, with conventional object detection. Therefore, in order to improve the accuracy of 3D object detection, this paper proposes an improvement of Point RCNN, which is a segmentation-based method using RPNs and has performed well in 3D detection benchmarks on the KITTI dataset commonly used in recognition tasks for automatic driving. The proposed improvement is to improve the network in the first stage of generating 3D box candidates in order to solve the problem of frequent false positives. Specifically, we added a Squeeze and Excitation (SE) Block to the network of pointnet++ that performs feature extraction in the first stage and changed the activation function from ReLU to Mish. Experiments were conducted on the KITTI dataset, which is commonly used in research aimed at automated driving, and an accurate comparison was conducted using AP. 
The proposed method outperforms the conventional method by several percent on all three difficulty levels.</p></div>\",\"PeriodicalId\":100288,\"journal\":{\"name\":\"Cognitive Robotics\",\"volume\":\"2 \",\"pages\":\"Pages 242-254\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2667241322000222/pdfft?md5=976fa9833e04a5bb9d3751cbbe165535&pid=1-s2.0-S2667241322000222-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Robotics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2667241322000222\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Robotics","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667241322000222","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Recently, two-dimensional (2D) object detection has been adopted in numerous applications such as building exterior inspection, crime prevention and surveillance, and the medical field. However, conventional 2D object detection does not provide the distance (depth) information needed for indoor robot navigation, robot grasping, autonomous driving, and similar tasks. To improve the accuracy of 3D object detection, this paper therefore proposes an improvement to PointRCNN, a segmentation-based method built on region proposal networks (RPNs) that performs well on the 3D detection benchmark of the KITTI dataset, which is widely used in recognition tasks for autonomous driving. The proposed improvement targets the first-stage network that generates 3D box candidates, in order to reduce its frequent false positives. Specifically, we add a Squeeze-and-Excitation (SE) block to the PointNet++ network that performs feature extraction in the first stage, and we change the activation function from ReLU to Mish. Experiments were conducted on the KITTI dataset, and an accurate comparison was made using average precision (AP). The proposed method outperforms the conventional method by several percentage points on all three difficulty levels.
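The architectural change described above, an SE block using Mish in place of ReLU inserted into the PointNet++ feature extractor, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation; the channel count, reduction ratio, and insertion point are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a Squeeze-and-Excitation block with
# Mish activation, of the kind that could be inserted after a PointNet++
# set-abstraction layer operating on per-point features of shape (B, C, N).
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation block for point-wise features (B, C, N)."""

    def __init__(self, channels: int, reduction: int = 16):  # reduction ratio is an assumption
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.Mish(),                     # Mish in place of the usual ReLU
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                  # per-channel attention weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze: global average over the N points -> (B, C)
        weights = self.fc(x.mean(dim=-1))
        # Excitation: rescale each feature channel
        return x * weights.unsqueeze(-1)


if __name__ == "__main__":
    feats = torch.randn(2, 128, 1024)      # batch of 2, 128 channels, 1024 points
    print(SEBlock(128)(feats).shape)       # torch.Size([2, 128, 1024])
```

The intuition behind placing such a block in the first stage is that the learned channel weights can suppress feature channels whose responses tend to produce spurious box proposals.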
