Range-Aware Attention Network for LiDAR-Based 3D Object Detection With Auxiliary Point Density Level Estimation

IF 7.1 | CAS Tier 2 (Computer Science) | JCR Q1 (Engineering, Electrical & Electronic) | IEEE Transactions on Vehicular Technology | Publication date: 2024-09-04 | DOI: 10.1109/TVT.2024.3454607
Yantao Lu;Xuetao Hao;Yilan Li;Weiheng Chai;Shiqi Sun;Senem Velipasalar
Journal: IEEE Transactions on Vehicular Technology, vol. 74, no. 1, pp. 292-305.
Article URL: https://ieeexplore.ieee.org/document/10666000/
Citations: 0

Abstract

3D object detection from LiDAR data for autonomous driving has made remarkable strides in recent years. Among state-of-the-art methodologies, encoding point clouds into a bird's eye view (BEV) has been demonstrated to be both effective and efficient. Unlike perspective views, the BEV preserves rich spatial and distance information between objects. Yet, while farther objects of the same type do not appear smaller in the BEV, they contain sparser point cloud features. This weakens BEV feature extraction with shared-weight convolutional neural networks (CNNs). To address this challenge, we propose the Range-Aware Attention Network (RAANet), which extracts effective BEV features and generates superior 3D object detection outputs. The range-aware attention (RAA) convolutions significantly improve feature extraction for near as well as far objects. Moreover, we propose a novel auxiliary loss for point density estimation to further enhance the detection accuracy of RAANet for occluded objects. It is worth noting that the proposed RAA convolution is lightweight and can be integrated into any CNN architecture used for detection from a BEV. Extensive experiments on the nuScenes and KITTI datasets demonstrate that our approach outperforms state-of-the-art methods for LiDAR-based 3D object detection, with real-time inference speeds of 16 Hz for the full version and 22 Hz for the lite version, measured on nuScenes LiDAR frames.
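To make the two ideas in the abstract concrete, the following is a minimal, hypothetical NumPy sketch, not the authors' implementation: (1) a range-aware attention map that re-weights shared-weight BEV convolution features as a function of each cell's distance from the ego sensor, so that far, sparse cells are not systematically under-weighted, and (2) an auxiliary "point density level" target obtained by binning per-cell point counts for a classification loss. The sigmoid gating form, the bin edges, and the grid layout are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of the abstract's two ideas (NOT the paper's code):
# (1) range-aware re-weighting of shared-weight BEV conv features, and
# (2) an auxiliary "point density level" target from per-cell point counts.

def bev_cell_ranges(h, w, cell_size=0.5):
    """Euclidean distance (meters) of each BEV cell center from the ego
    sensor, assuming the sensor sits at the center of the grid."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.hypot(ys - cy, xs - cx) * cell_size

def range_attention(ranges, max_range=50.0):
    """Toy attention map in (0, 1): a sigmoid gate over normalized range
    that boosts far (sparse) cells. The exact functional form used by
    RAANet is not specified here; this gate is an assumption."""
    r = np.clip(ranges / max_range, 0.0, 1.0)
    return 1.0 / (1.0 + np.exp(-4.0 * (r - 0.5)))  # near ~0.12, far ~0.88

def density_level(points_per_cell, edges=(1, 5, 20)):
    """Quantize per-cell LiDAR point counts into discrete density levels
    (0 = empty ... 3 = dense) for an auxiliary classification loss."""
    return np.digitize(points_per_cell, bins=edges)

# Usage: modulate a stand-in shared-conv feature map by the attention map.
rng = bev_cell_ranges(8, 8, cell_size=10.0)   # 80 m x 80 m toy grid
att = range_attention(rng)
feat = np.ones((8, 8))                        # stand-in BEV feature map
reweighted = feat * att                       # range-aware re-weighting
levels = density_level(np.array([0, 3, 12, 40]))
```

Because the gate depends only on cell coordinates, it adds no per-frame cost beyond one elementwise multiply, which is consistent with the abstract's claim that the RAA module is lightweight and can be dropped into any BEV detection backbone.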
Journal metrics:
CiteScore: 6.00
Self-citation rate: 8.80%
Annual article output: 1245
Average review time: 6.3 months
Aims and scope: The scope of the Transactions is threefold (approved by the IEEE Periodicals Committee in 1967) and is published on the journal website as follows: Communications: The use of mobile radio on land, sea, and air, including cellular radio, two-way radio, and one-way radio, with applications to dispatch and control vehicles, mobile radiotelephone, radio paging, and status monitoring and reporting. Related areas include spectrum usage, component radio equipment such as cavities and antennas, computer control for radio systems, digital modulation and transmission techniques, mobile radio circuit design, radio propagation for vehicular communications, effects of ignition noise and radio frequency interference, and consideration of the vehicle as part of the radio operating environment. Transportation Systems: The use of electronic technology for the control of ground transportation systems including, but not limited to, traffic aid systems; traffic control systems; automatic vehicle identification, location, and monitoring systems; automated transport systems, with single and multiple vehicle control; and moving walkways or people-movers. Vehicular Electronics: The use of electronic or electrical components and systems for control, propulsion, or auxiliary functions, including but not limited to: electronic controls for engine, drive train, convenience, safety, and other vehicle systems; sensors, actuators, and microprocessors for onboard use; electronic fuel control systems; vehicle electrical components and systems; collision avoidance systems; electromagnetic compatibility in the vehicle environment; and electric vehicles and controls.
Latest articles in this journal:
- Transparent Transmission in Wall-Embedded Dynamic IOS Assisted Indoor Networks
- Random Access for Semantic Transmission under Finite Buffer and Retransmission in Vehicular Networks
- Multi-Modal Environment Semantics Information Aided UAV Beam Alignment
- On the Robustness of RSMA to Adversarial BD-RIS-Induced Interference
- Resource Allocation for STAR-RIS-enhanced Metaverse Systems with Augmented Reality