CASTNet: A Context-Aware, Spatio-Temporal Dynamic Motion Prediction Ensemble for Autonomous Driving

Trier Mortlock, A. Malawade, Kohei Tsujio, M. A. Al Faruque
{"title":"CASTNet: A Context-Aware, Spatio-Temporal Dynamic Motion Prediction Ensemble for Autonomous Driving","authors":"Trier Mortlock, A. Malawade, Kohei Tsujio, M. A. Al Faruque","doi":"10.1145/3648622","DOIUrl":null,"url":null,"abstract":"\n Autonomous vehicles are cyber-physical systems that combine embedded computing and deep learning with physical systems to perceive the world, predict future states, and safely control the vehicle through changing environments. The ability of an autonomous vehicle to accurately predict the motion of other road users across a wide range of diverse scenarios is critical for both motion planning and safety. However, existing motion prediction methods do not explicitly model contextual information about the environment, which can cause significant variations in performance across diverse driving scenarios. To address this limitation, we propose\n CASTNet\n : a dynamic, context-aware approach for motion prediction that (i) identifies the current driving context using a spatio-temporal model, (ii) adapts an ensemble of motion prediction models to fit the current context, and (iii) applies novel trajectory fusion methods to combine predictions output by the ensemble. This approach enables CASTNet to improve robustness by minimizing motion prediction error across diverse driving scenarios. CASTNet is highly modular and can be used with various existing image processing backbones and motion predictors. We demonstrate how CASTNet can improve both CNN-based and graph-learning-based motion prediction approaches and conduct ablation studies on the performance, latency, and model size for various ensemble architecture choices. In addition, we propose and evaluate several attention-based spatio-temporal models for context identification and ensemble selection. We also propose a modular trajectory fusion algorithm that effectively filters, clusters, and fuses the predicted trajectories output by the ensemble. On the nuScenes dataset, our approach demonstrates more robust and consistent performance across diverse, real-world driving contexts than state-of-the-art techniques.\n","PeriodicalId":505086,"journal":{"name":"ACM Transactions on Cyber-Physical Systems","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Cyber-Physical Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3648622","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Autonomous vehicles are cyber-physical systems that combine embedded computing and deep learning with physical systems to perceive the world, predict future states, and safely control the vehicle through changing environments. The ability of an autonomous vehicle to accurately predict the motion of other road users across a wide range of diverse scenarios is critical for both motion planning and safety. However, existing motion prediction methods do not explicitly model contextual information about the environment, which can cause significant variations in performance across diverse driving scenarios. To address this limitation, we propose CASTNet: a dynamic, context-aware approach for motion prediction that (i) identifies the current driving context using a spatio-temporal model, (ii) adapts an ensemble of motion prediction models to fit the current context, and (iii) applies novel trajectory fusion methods to combine predictions output by the ensemble. This approach enables CASTNet to improve robustness by minimizing motion prediction error across diverse driving scenarios. CASTNet is highly modular and can be used with various existing image processing backbones and motion predictors. We demonstrate how CASTNet can improve both CNN-based and graph-learning-based motion prediction approaches and conduct ablation studies on the performance, latency, and model size for various ensemble architecture choices. In addition, we propose and evaluate several attention-based spatio-temporal models for context identification and ensemble selection. We also propose a modular trajectory fusion algorithm that effectively filters, clusters, and fuses the predicted trajectories output by the ensemble. On the nuScenes dataset, our approach demonstrates more robust and consistent performance across diverse, real-world driving contexts than state-of-the-art techniques.
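The pipeline the abstract describes has three stages: identify the driving context from recent spatio-temporal observations, activate the ensemble predictors suited to that context, and then filter, cluster, and fuse their trajectory outputs. The sketch below illustrates that control flow only; it is not the authors' implementation, and every name in it (`identify_context`, `select_ensemble`, `fuse_trajectories`, the dummy predictors) is an illustrative assumption standing in for the paper's learned models.

```python
# Minimal sketch of a CASTNet-style pipeline as described in the abstract:
# (i) context identification, (ii) context-conditioned ensemble selection,
# (iii) filter / cluster / fuse the predicted trajectories.
# All functions here are illustrative placeholders, not the authors' API.
import numpy as np


def identify_context(sensor_history: np.ndarray, n_contexts: int = 3) -> np.ndarray:
    """Stand-in for the spatio-temporal context model: returns a softmax
    distribution over driving contexts from a window of sensor features."""
    logits = sensor_history.mean(axis=0)[:n_contexts]  # placeholder temporal pooling
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()


def select_ensemble(context_probs: np.ndarray, predictors: list, top_k: int = 2) -> list:
    """Activate the k predictors matching the most likely contexts.
    Assumes one specialized predictor per context (an illustrative choice)."""
    ranked = np.argsort(context_probs)[::-1][:top_k]
    return [predictors[i] for i in ranked]


def fuse_trajectories(trajs: np.ndarray, confs: np.ndarray,
                      conf_thresh: float = 0.2, cluster_radius: float = 2.0) -> np.ndarray:
    """Filter low-confidence trajectories, greedily cluster the rest by
    final-position proximity, and return the confidence-weighted mean of
    the largest cluster (a simplified stand-in for the fusion algorithm)."""
    keep = confs >= conf_thresh
    trajs, confs = trajs[keep], confs[keep]
    endpoints = trajs[:, -1, :]
    clusters: list[list[int]] = []
    for i, p in enumerate(endpoints):
        for c in clusters:
            if np.linalg.norm(p - endpoints[c[0]]) < cluster_radius:
                c.append(i)
                break
        else:
            clusters.append([i])
    best = max(clusters, key=len)
    w = confs[best] / confs[best].sum()
    return np.einsum("i,ijk->jk", w, trajs[best])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    history = rng.normal(size=(10, 8))                              # fake spatio-temporal features
    probs = identify_context(history)
    predictors = [lambda i=i: rng.normal(size=(5, 12, 2)) + i       # dummy per-context models
                  for i in range(3)]
    active = select_ensemble(probs, predictors)
    preds = np.concatenate([m() for m in active])                   # (num_trajs, horizon, xy)
    confs = rng.uniform(size=len(preds))
    fused = fuse_trajectories(preds, confs)
    print("context probs:", probs.round(2), "fused trajectory shape:", fused.shape)
```

In this sketch the context model and predictors are random placeholders; in CASTNet these roles would be filled by the attention-based spatio-temporal context classifier and the CNN- or graph-learning-based motion predictors, with fusion applied to their multi-modal trajectory outputs.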