{"title":"从端到端可微分仿真看自主车辆控制器","authors":"Asen Nachkov, Danda Pani Paudel, Luc Van Gool","doi":"arxiv-2409.07965","DOIUrl":null,"url":null,"abstract":"Current methods to learn controllers for autonomous vehicles (AVs) focus on\nbehavioural cloning. Being trained only on exact historic data, the resulting\nagents often generalize poorly to novel scenarios. Simulators provide the\nopportunity to go beyond offline datasets, but they are still treated as\ncomplicated black boxes, only used to update the global simulation state. As a\nresult, these RL algorithms are slow, sample-inefficient, and prior-agnostic.\nIn this work, we leverage a differentiable simulator and design an analytic\npolicy gradients (APG) approach to training AV controllers on the large-scale\nWaymo Open Motion Dataset. Our proposed framework brings the differentiable\nsimulator into an end-to-end training loop, where gradients of the environment\ndynamics serve as a useful prior to help the agent learn a more grounded\npolicy. We combine this setup with a recurrent architecture that can\nefficiently propagate temporal information across long simulated trajectories.\nThis APG method allows us to learn robust, accurate, and fast policies, while\nonly requiring widely-available expert trajectories, instead of scarce expert\nactions. We compare to behavioural cloning and find significant improvements in\nperformance and robustness to noise in the dynamics, as well as overall more\nintuitive human-like handling.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":"6 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Autonomous Vehicle Controllers From End-to-End Differentiable Simulation\",\"authors\":\"Asen Nachkov, Danda Pani Paudel, Luc Van Gool\",\"doi\":\"arxiv-2409.07965\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Current methods to learn controllers for autonomous vehicles (AVs) focus on\\nbehavioural cloning. Being trained only on exact historic data, the resulting\\nagents often generalize poorly to novel scenarios. Simulators provide the\\nopportunity to go beyond offline datasets, but they are still treated as\\ncomplicated black boxes, only used to update the global simulation state. As a\\nresult, these RL algorithms are slow, sample-inefficient, and prior-agnostic.\\nIn this work, we leverage a differentiable simulator and design an analytic\\npolicy gradients (APG) approach to training AV controllers on the large-scale\\nWaymo Open Motion Dataset. Our proposed framework brings the differentiable\\nsimulator into an end-to-end training loop, where gradients of the environment\\ndynamics serve as a useful prior to help the agent learn a more grounded\\npolicy. We combine this setup with a recurrent architecture that can\\nefficiently propagate temporal information across long simulated trajectories.\\nThis APG method allows us to learn robust, accurate, and fast policies, while\\nonly requiring widely-available expert trajectories, instead of scarce expert\\nactions. 
We compare to behavioural cloning and find significant improvements in\\nperformance and robustness to noise in the dynamics, as well as overall more\\nintuitive human-like handling.\",\"PeriodicalId\":501479,\"journal\":{\"name\":\"arXiv - CS - Artificial Intelligence\",\"volume\":\"6 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07965\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07965","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Current methods to learn controllers for autonomous vehicles (AVs) focus on
behavioural cloning. Trained only on exact historical data, the resulting
agents often generalize poorly to novel scenarios. Simulators provide the
opportunity to go beyond offline datasets, but they are still treated as
complicated black boxes, only used to update the global simulation state. As a
result, these RL algorithms are slow, sample-inefficient, and prior-agnostic.
In this work, we leverage a differentiable simulator and design an analytic
policy gradients (APG) approach to training AV controllers on the large-scale
Waymo Open Motion Dataset. Our proposed framework brings the differentiable
simulator into an end-to-end training loop, where gradients of the environment
dynamics serve as a useful prior to help the agent learn a more grounded
policy. We combine this setup with a recurrent architecture that can
efficiently propagate temporal information across long simulated trajectories.
This APG method allows us to learn robust, accurate, and fast policies, while
only requiring widely available expert trajectories, instead of scarce expert
actions. We compare against behavioural cloning and find significant improvements in
performance and robustness to noise in the dynamics, as well as overall more
intuitive human-like handling.
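
The abstract's central mechanism, differentiating a trajectory-matching loss through the simulator's dynamics so the gradient reaches the policy at every step, can be illustrated compactly. Below is a minimal JAX sketch under stated assumptions: a toy kinematic bicycle model stands in for the differentiable simulator, a linear map stands in for the policy (the paper uses a recurrent architecture), and all names (dynamics, rollout, traj_loss) are illustrative, not the authors' implementation.

```python
# Minimal analytic-policy-gradients (APG) sketch: backpropagate a
# trajectory-matching loss through a differentiable simulator.
# Everything here (bicycle model, linear policy, names) is an
# illustrative assumption, not the paper's actual implementation.
import jax
import jax.numpy as jnp

def dynamics(state, action, dt=0.1):
    # Differentiable toy kinematic bicycle step.
    # state = [x, y, heading, speed], action = [acceleration, steering].
    x, y, yaw, v = state
    accel, steer = action
    return jnp.array([
        x + v * jnp.cos(yaw) * dt,
        y + v * jnp.sin(yaw) * dt,
        yaw + v * steer * dt,
        v + accel * dt,
    ])

def policy(params, state):
    # Placeholder linear policy; the paper uses a recurrent network
    # to propagate temporal information across long trajectories.
    return jnp.tanh(params["W"] @ state + params["b"])

def rollout(params, init_state, horizon):
    # Unroll policy + simulator; jax.lax.scan keeps the whole rollout
    # differentiable, so gradients flow through every dynamics step.
    def step(state, _):
        nxt = dynamics(state, policy(params, state))
        return nxt, nxt
    _, states = jax.lax.scan(step, init_state, None, length=horizon)
    return states

def traj_loss(params, init_state, expert_xy):
    # Supervision is an expert *trajectory* (positions), not expert
    # *actions*: match the (x, y) positions the rollout visits.
    states = rollout(params, init_state, expert_xy.shape[0])
    return jnp.mean((states[:, :2] - expert_xy) ** 2)

# The analytic policy gradient: d(loss)/d(params) through the simulator.
grad_fn = jax.jit(jax.grad(traj_loss))

params = {"W": 0.01 * jax.random.normal(jax.random.PRNGKey(0), (2, 4)),
          "b": jnp.zeros(2)}
init_state = jnp.array([0.0, 0.0, 0.0, 5.0])   # start at origin at 5 m/s
expert_xy = jnp.stack([jnp.linspace(0.5, 10.0, 20), jnp.zeros(20)], axis=1)
grads = grad_fn(params, init_state, expert_xy)  # gradients for a GD step
```

In contrast to model-free RL, where the simulator is a black box and the gradient must be estimated from sampled returns, here the chain rule runs through the dynamics function itself; this is the sense in which the environment's gradients act as a prior that grounds the learned policy.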