Cloud-Based Reinforcement Learning in Automotive Control Function Development

Lucas Koch, Dennis Roeser, Kevin Badalian, Alexander Lieb, Jakob Andert
{"title":"基于云的强化学习在汽车控制功能开发中的应用","authors":"Lucas Koch, Dennis Roeser, Kevin Badalian, Alexander Lieb, Jakob Andert","doi":"10.3390/vehicles5030050","DOIUrl":null,"url":null,"abstract":"Automotive control functions are becoming increasingly complex and their development is becoming more and more elaborate, leading to a strong need for automated solutions within the development process. Here, reinforcement learning offers a significant potential for function development to generate optimized control functions in an automated manner. Despite its successful deployment in a variety of control tasks, there is still a lack of standard tooling solutions for function development based on reinforcement learning in the automotive industry. To address this gap, we present a flexible framework that couples the conventional development process with an open-source reinforcement learning library. It features modular, physical models for relevant vehicle components, a co-simulation with a microscopic traffic simulation to generate realistic scenarios, and enables distributed and parallelized training. We demonstrate the effectiveness of our proposed method in a feasibility study to learn a control function for automated longitudinal control of an electric vehicle in an urban traffic scenario. The evolved control strategy produces a smooth trajectory with energy savings of up to 14%. The results highlight the great potential of reinforcement learning for automated control function development and prove the effectiveness of the proposed framework.","PeriodicalId":73282,"journal":{"name":"IEEE Intelligent Vehicles Symposium. IEEE Intelligent Vehicles Symposium","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Cloud-Based Reinforcement Learning in Automotive Control Function Development\",\"authors\":\"Lucas Koch, Dennis Roeser, Kevin Badalian, Alexander Lieb, Jakob Andert\",\"doi\":\"10.3390/vehicles5030050\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Automotive control functions are becoming increasingly complex and their development is becoming more and more elaborate, leading to a strong need for automated solutions within the development process. Here, reinforcement learning offers a significant potential for function development to generate optimized control functions in an automated manner. Despite its successful deployment in a variety of control tasks, there is still a lack of standard tooling solutions for function development based on reinforcement learning in the automotive industry. To address this gap, we present a flexible framework that couples the conventional development process with an open-source reinforcement learning library. It features modular, physical models for relevant vehicle components, a co-simulation with a microscopic traffic simulation to generate realistic scenarios, and enables distributed and parallelized training. We demonstrate the effectiveness of our proposed method in a feasibility study to learn a control function for automated longitudinal control of an electric vehicle in an urban traffic scenario. The evolved control strategy produces a smooth trajectory with energy savings of up to 14%. 
The results highlight the great potential of reinforcement learning for automated control function development and prove the effectiveness of the proposed framework.\",\"PeriodicalId\":73282,\"journal\":{\"name\":\"IEEE Intelligent Vehicles Symposium. IEEE Intelligent Vehicles Symposium\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Intelligent Vehicles Symposium. IEEE Intelligent Vehicles Symposium\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3390/vehicles5030050\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Intelligent Vehicles Symposium. IEEE Intelligent Vehicles Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/vehicles5030050","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Automotive control functions are becoming increasingly complex and their development is becoming more and more elaborate, leading to a strong need for automated solutions within the development process. Here, reinforcement learning offers a significant potential for function development to generate optimized control functions in an automated manner. Despite its successful deployment in a variety of control tasks, there is still a lack of standard tooling solutions for function development based on reinforcement learning in the automotive industry. To address this gap, we present a flexible framework that couples the conventional development process with an open-source reinforcement learning library. It features modular, physical models for relevant vehicle components, a co-simulation with a microscopic traffic simulation to generate realistic scenarios, and enables distributed and parallelized training. We demonstrate the effectiveness of our proposed method in a feasibility study to learn a control function for automated longitudinal control of an electric vehicle in an urban traffic scenario. The evolved control strategy produces a smooth trajectory with energy savings of up to 14%. The results highlight the great potential of reinforcement learning for automated control function development and prove the effectiveness of the proposed framework.
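The abstract describes the overall setup but not how the control task is exposed to the learning library. As a rough illustration of how such a longitudinal-control problem is commonly framed for an open-source reinforcement learning library, the following Gymnasium-style sketch wraps a toy vehicle model in an environment whose reward trades off speed tracking, energy use, and smoothness. The class name, dynamics, observation layout, and reward weights are assumptions made for illustration only; the framework in the paper couples detailed physical component models and a microscopic traffic co-simulation rather than this toy dynamics.

```python
# Minimal, hypothetical sketch of a longitudinal-control RL environment for an
# electric vehicle. All dynamics and reward weights are illustrative assumptions,
# not the authors' implementation.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class LongitudinalControlEnv(gym.Env):
    """Toy environment: the ego vehicle tracks a target speed with low energy use and low jerk."""

    def __init__(self, dt=0.1, v_target=13.9, episode_len=600):
        super().__init__()
        self.dt = dt                    # simulation step in seconds
        self.v_target = v_target        # target speed in m/s (about 50 km/h)
        self.episode_len = episode_len
        # Observation: [current speed, speed error, previous acceleration]
        self.observation_space = spaces.Box(low=-50.0, high=50.0, shape=(3,), dtype=np.float32)
        # Action: normalised acceleration command in [-1, 1], scaled to +/- 3 m/s^2
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.v = 0.0
        self.a_prev = 0.0
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        a = float(np.clip(action[0], -1.0, 1.0)) * 3.0
        # Integrate speed and clamp it so observations stay inside the declared bounds.
        self.v = float(np.clip(self.v + a * self.dt, 0.0, 50.0))
        # Crude surrogate for traction power: acceleration plus a resistance term, times speed;
        # only positive demand is penalised as consumed energy.
        power = max(0.0, (a + 0.05 * self.v) * self.v)
        reward = (
            -abs(self.v - self.v_target)   # speed-tracking error
            - 0.01 * power                 # energy-consumption penalty
            - 0.1 * abs(a - self.a_prev)   # jerk penalty encouraging a smooth trajectory
        )
        self.a_prev = a
        self.steps += 1
        truncated = self.steps >= self.episode_len
        return self._obs(), reward, False, truncated, {}

    def _obs(self):
        return np.array([self.v, self.v_target - self.v, self.a_prev], dtype=np.float32)
```

To hint at the parallelised training the framework enables, such an environment can be trained with vectorised rollout workers in an off-the-shelf library; Stable-Baselines3 and PPO are assumed here purely for illustration, as the abstract does not name the library or algorithm actually used.

```python
# Small-scale stand-in for distributed, parallelised training: Stable-Baselines3
# and PPO are assumptions for this illustration.
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.vec_env import SubprocVecEnv

if __name__ == "__main__":
    # Four environment copies, each collecting experience in its own worker process.
    vec_env = make_vec_env(LongitudinalControlEnv, n_envs=4, vec_env_cls=SubprocVecEnv)
    model = PPO("MlpPolicy", vec_env, verbose=1)
    model.learn(total_timesteps=100_000)
```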