HydraFusion: Context-Aware Selective Sensor Fusion for Robust and Efficient Autonomous Vehicle Perception

A. Malawade, Trier Mortlock, M. A. Faruque
{"title":"HydraFusion: Context-Aware Selective Sensor Fusion for Robust and Efficient Autonomous Vehicle Perception","authors":"A. Malawade, Trier Mortlock, M. A. Faruque","doi":"10.1109/iccps54341.2022.00013","DOIUrl":null,"url":null,"abstract":"Although autonomous vehicles (AVs) are expected to revolutionize transportation, robust perception across a wide range of driving contexts remains a significant challenge. Techniques to fuse sensor data from camera, radar, and lidar sensors have been proposed to improve AV perception. However, existing methods are insufficiently robust in difficult driving contexts (e.g., bad weather, low light, sensor obstruction) due to rigidity in their fusion implementations. These methods fall into two broad categories: (i) early fusion, which fails when sensor data is noisy or obscured, and (ii) late fusion, which cannot leverage features from multiple sensors and thus produces worse estimates. To address these limitations, we propose HydraFusion: a selective sensor fusion framework that learns to identify the current driving context and fuses the best combination of sensors to maximize robustness without compromising efficiency. HydraFusion is the first approach to propose dynamically adjusting between early fusion, late fusion, and combinations in-between, thus varying both how and when fusion is applied. We show that, on average, Hydrafusionoutperforms early and late fusion approaches by 13.66% and 14.54%, respectively, without increasing computational complexity or energy consumption on the industry-standard Nvidia Drive PX2 AV hardware platform. We also propose and evaluate both static and deep-learning-based context identification strategies. Our open-source code and model implementation are available at https://github.com/AICPS/hydrafusion.","PeriodicalId":340078,"journal":{"name":"2022 ACM/IEEE 13th International Conference on Cyber-Physical Systems (ICCPS)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 ACM/IEEE 13th International Conference on Cyber-Physical Systems (ICCPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iccps54341.2022.00013","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12

Abstract

Although autonomous vehicles (AVs) are expected to revolutionize transportation, robust perception across a wide range of driving contexts remains a significant challenge. Techniques to fuse sensor data from camera, radar, and lidar sensors have been proposed to improve AV perception. However, existing methods are insufficiently robust in difficult driving contexts (e.g., bad weather, low light, sensor obstruction) due to rigidity in their fusion implementations. These methods fall into two broad categories: (i) early fusion, which fails when sensor data is noisy or obscured, and (ii) late fusion, which cannot leverage features from multiple sensors and thus produces worse estimates. To address these limitations, we propose HydraFusion: a selective sensor fusion framework that learns to identify the current driving context and fuses the best combination of sensors to maximize robustness without compromising efficiency. HydraFusion is the first approach to propose dynamically adjusting between early fusion, late fusion, and combinations in-between, thus varying both how and when fusion is applied. We show that, on average, HydraFusion outperforms early and late fusion approaches by 13.66% and 14.54%, respectively, without increasing computational complexity or energy consumption on the industry-standard Nvidia Drive PX2 AV hardware platform. We also propose and evaluate both static and deep-learning-based context identification strategies. Our open-source code and model implementation are available at https://github.com/AICPS/hydrafusion.
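The abstract describes a gating architecture: a context-identification module scores candidate fusion branches (early, late, and intermediate combinations of camera, radar, and lidar) and executes only the most promising ones for the current driving context. As a rough illustration of that idea only, the minimal PyTorch sketch below gates between stand-in fusion branches and runs just the top-scoring ones. All names here (SelectiveFusion, context_dim, k) are invented for illustration; this is not the authors' implementation, which is available at the linked repository.

```python
# Illustrative sketch of context-aware selective fusion (hypothetical;
# see https://github.com/AICPS/hydrafusion for the real HydraFusion code).
import torch
import torch.nn as nn

class SelectiveFusion(nn.Module):
    """Gate that picks which fusion branches to run for the current context."""

    def __init__(self, num_branches: int, context_dim: int = 64, k: int = 2):
        super().__init__()
        self.k = k  # number of branches to execute per frame
        # Context identifier: maps a context embedding (e.g., weather/light
        # features) to one score per candidate fusion branch.
        self.gate = nn.Sequential(
            nn.Linear(context_dim, 32),
            nn.ReLU(),
            nn.Linear(32, num_branches),
        )

    def forward(self, context: torch.Tensor, branches: list) -> torch.Tensor:
        # Score each branch for this driving context, then run only the
        # top-k branches (saving compute versus running all of them).
        scores = self.gate(context)                 # shape: (num_branches,)
        weights = torch.softmax(scores, dim=-1)
        top = torch.topk(weights, self.k).indices
        outputs = [weights[i] * branches[i]() for i in top]
        return torch.stack(outputs).sum(dim=0)      # fused estimate

# Usage with stand-in branches (each a callable returning a prediction):
ctx = torch.randn(64)
branches = [lambda: torch.randn(10) for _ in range(4)]
out = SelectiveFusion(num_branches=4)(ctx, branches)
```

Passing the branches as callables lets the gate skip the cost of unselected branches entirely, which mirrors the abstract's claim of robustness gains without added computational complexity.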