Enhanced deep reinforcement learning model with bird’s eye view design strategy for decision control in vehicle-road collaboration

IF 5.4 | CAS Tier 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Control Engineering Practice | Pub Date: 2025-03-14 | DOI: 10.1016/j.conengprac.2025.106315
Yitao Luo, Runde Zhang, Zhuyun Chen, Chong Xie, Shaowu Zheng, Shanhu Yu, Weihua Li
{"title":"Enhanced deep reinforcement learning model with bird’s eye view design strategy for decision control in vehicle-road collaboration","authors":"Yitao Luo ,&nbsp;Runde Zhang ,&nbsp;Zhuyun Chen ,&nbsp;Chong Xie ,&nbsp;Shaowu Zheng ,&nbsp;Shanhu Yu ,&nbsp;Weihua Li","doi":"10.1016/j.conengprac.2025.106315","DOIUrl":null,"url":null,"abstract":"<div><div>Autonomous driving in complex traffic scenarios is a vital challenge, and deep reinforcement learning (DRL) has been extensively applied to address this issue. The recent advancement of vehicle-to-everything (V2X) technology has provided abundant perceptual information for DRL agents, improving the accuracy and safety of decision control. However, existing research on green wave traffic scenes has difficulty adapting to multi-signal scenarios with single-signal countdown models, which lack complete signal state information. To address this limitation, an enhanced DRL model with bird’s eye view (BEV) design strategy is proposed for vehicle-road collaborative autonomous driving scenarios. The constructed model introduces a state prediction fusion strategy to compensate for state information. Specifically, state information is first predicted by fusing perception results from vehicles and roadside units (RSUs) at different moments. Then, the recommended velocity is derived for green wave passage, called the green wave velocity belt, and incorporate it into the state space as two variables in the state vector. Finally, a relevant reward term in the reward function is designed to guide agent learning strategies. The proposed method is trained on the basis of the parallel DreamerV3 framework. The results show that the proposed approach can effectively integrate multi-source perceptual information, improving training efficiency and control performance, and demonstrating great effectiveness and practical application value.</div></div>","PeriodicalId":50615,"journal":{"name":"Control Engineering Practice","volume":"159 ","pages":"Article 106315"},"PeriodicalIF":5.4000,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Control Engineering Practice","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0967066125000784","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Autonomous driving in complex traffic scenarios is a critical challenge, and deep reinforcement learning (DRL) has been extensively applied to address it. The recent advancement of vehicle-to-everything (V2X) technology provides abundant perceptual information for DRL agents, improving the accuracy and safety of decision control. However, existing research on green-wave traffic scenes relies on single-signal countdown models that lack complete signal state information and therefore adapt poorly to multi-signal scenarios. To address this limitation, an enhanced DRL model with a bird’s eye view (BEV) design strategy is proposed for vehicle-road collaborative autonomous driving scenarios. The constructed model introduces a state prediction fusion strategy to compensate for missing state information. Specifically, state information is first predicted by fusing perception results from vehicles and roadside units (RSUs) at different moments. Then, a recommended velocity range for green-wave passage, called the green wave velocity belt, is derived and incorporated into the state space as two variables in the state vector. Finally, a corresponding reward term is designed in the reward function to guide the agent’s learning strategy. The proposed method is trained on the basis of the parallel DreamerV3 framework. The results show that the proposed approach effectively integrates multi-source perceptual information, improving training efficiency and control performance, and demonstrating its effectiveness and practical application value.
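To make the green wave velocity belt idea more concrete, the sketch below shows one way the recommended speed band could be derived from V2X signal-timing information and folded into the DRL state vector and reward, as the abstract describes. The function names, the belt derivation, and the reward shaping are illustrative assumptions, not the paper's exact formulation.

import numpy as np

# Hypothetical sketch: from the distance to the next signal and the remaining
# green window (e.g. obtained via RSU/V2X messages), derive the band of speeds
# [v_low, v_high] that lets the ego vehicle arrive during the green phase,
# append the band to the state vector, and reward staying inside it.

def green_wave_velocity_belt(dist_to_signal, t_green_start, t_green_end,
                             v_min=0.0, v_max=16.7):
    """Speed band (m/s) that reaches the stop line while the light is green."""
    # Arriving just as green ends needs the lowest speed; arriving as it
    # starts needs the highest. If green has already started, the upper
    # bound is only limited by the speed limit.
    v_low = dist_to_signal / max(t_green_end, 1e-3)
    v_high = dist_to_signal / t_green_start if t_green_start > 0 else v_max
    return float(np.clip(v_low, v_min, v_max)), float(np.clip(v_high, v_min, v_max))

def augment_state(base_state, v_low, v_high):
    """Append the two belt variables to the DRL state vector."""
    return np.concatenate([base_state, [v_low, v_high]])

def belt_reward(ego_speed, v_low, v_high, weight=0.5):
    """Illustrative reward term: zero inside the belt, negative outside it."""
    if v_low <= ego_speed <= v_high:
        return 0.0
    return -weight * min(abs(ego_speed - v_low), abs(ego_speed - v_high))

# Example: 150 m to the next signal, green phase from t = 5 s to t = 25 s.
v_low, v_high = green_wave_velocity_belt(150.0, 5.0, 25.0)
state = augment_state(np.zeros(8), v_low, v_high)
print(v_low, v_high, belt_reward(ego_speed=10.0, v_low=v_low, v_high=v_high))

In the paper this belt-related term is combined with the BEV state prediction fusion strategy and trained under the parallel DreamerV3 framework; the sketch above only illustrates the belt-related pieces under the stated assumptions.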
Source journal: Control Engineering Practice (Engineering & Technology - Engineering: Electrical & Electronic)
CiteScore: 9.20
Self-citation rate: 12.20%
Articles published: 183
Review time: 44 days
Journal description: Control Engineering Practice strives to meet the needs of industrial practitioners and industrially related academics and researchers. It publishes papers which illustrate the direct application of control theory and its supporting tools in all possible areas of automation. As a result, the journal only contains papers which can be considered to have made significant contributions to the application of advanced control techniques. It is normally expected that practical results should be included, but where simulation only studies are available, it is necessary to demonstrate that the simulation model is representative of a genuine application. Strictly theoretical papers will find a more appropriate home in Control Engineering Practice's sister publication, Automatica. It is also expected that papers are innovative with respect to the state of the art and are sufficiently detailed for a reader to be able to duplicate the main results of the paper (supplementary material, including datasets, tables, code and any relevant interactive material can be made available and downloaded from the website). The benefits of the presented methods must be made very clear and the new techniques must be compared and contrasted with results obtained using existing methods. Moreover, a thorough analysis of failures that may happen in the design process and implementation can also be part of the paper. The scope of Control Engineering Practice matches the activities of IFAC. Papers that demonstrate the contribution of automation and control in improving the performance, quality, productivity, sustainability, resource and energy efficiency, and the manageability of systems and processes for the benefit of mankind, and that are relevant to industrial practitioners, are most welcome.
Latest articles from this journal:
Bumpless transfer control for DC-DC buck-boost converter modeled by switched affine systems
Robust temperature control of a diesel oxidation catalyst using continuous terminal sliding mode with extended state observer
Enhanced deep reinforcement learning model with bird’s eye view design strategy for decision control in vehicle-road collaboration
A robust distributed fault detection scheme for interconnected systems based on subspace identification technique
Adaptive aircraft anti-skid braking control for runway disturbance compensation