Enhanced deep reinforcement learning model with bird's eye view design strategy for decision control in vehicle-road collaboration
Yitao Luo, Runde Zhang, Zhuyun Chen, Chong Xie, Shaowu Zheng, Shanhu Yu, Weihua Li
Control Engineering Practice, Volume 159, Article 106315 (published 2025-03-14)
DOI: 10.1016/j.conengprac.2025.106315
URL: https://www.sciencedirect.com/science/article/pii/S0967066125000784
Citations: 0
Abstract
Autonomous driving in complex traffic scenarios is a vital challenge, and deep reinforcement learning (DRL) has been extensively applied to address this issue. The recent advancement of vehicle-to-everything (V2X) technology has provided abundant perceptual information for DRL agents, improving the accuracy and safety of decision control. However, existing research on green wave traffic scenes relies on single-signal countdown models that lack complete signal state information and therefore adapt poorly to multi-signal scenarios. To address this limitation, an enhanced DRL model with a bird's eye view (BEV) design strategy is proposed for vehicle-road collaborative autonomous driving scenarios. The constructed model introduces a state prediction fusion strategy to compensate for missing state information. Specifically, state information is first predicted by fusing perception results from vehicles and roadside units (RSUs) at different moments. Then, a recommended velocity range for green wave passage, called the green wave velocity belt, is derived and incorporated into the state space as two variables in the state vector. Finally, a corresponding term in the reward function is designed to guide the agent's learning strategy. The proposed method is trained within a parallel DreamerV3 framework. The results show that the proposed approach effectively integrates multi-source perceptual information, improving training efficiency and control performance and demonstrating practical application value.
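The green wave velocity belt described in the abstract can be understood as a simple kinematic band: the slowest speed that still reaches the stop line before the current green window closes, and the fastest speed that does not arrive before it opens. The sketch below is illustrative only, not the paper's implementation; the function name, parameters, and limit values are all assumptions.

```python
def green_wave_velocity_belt(distance_m, green_start_s, green_end_s,
                             v_min_limit=5.0, v_max_limit=20.0):
    """Return a (v_low, v_high) band in m/s that lets a vehicle at
    `distance_m` from the stop line arrive during the green window
    [green_start_s, green_end_s], measured in seconds from now.
    Returns None if no admissible speed exists within road limits."""
    # Arriving just as the green ends requires the slowest admissible
    # speed; arriving just as it begins requires the fastest.
    v_low = distance_m / green_end_s
    v_high = distance_m / green_start_s if green_start_s > 0 else v_max_limit
    # Clip to road limits; an empty band means the window is unreachable.
    v_low = max(v_low, v_min_limit)
    v_high = min(v_high, v_max_limit)
    return (v_low, v_high) if v_low <= v_high else None
```

The two returned values correspond to the "two variables in the state vector" mentioned in the abstract; in the paper they would presumably also account for multiple downstream signals rather than a single window.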
About the journal
Control Engineering Practice strives to meet the needs of industrial practitioners and industrially related academics and researchers. It publishes papers which illustrate the direct application of control theory and its supporting tools in all possible areas of automation. As a result, the journal only contains papers which can be considered to have made significant contributions to the application of advanced control techniques. It is normally expected that practical results should be included, but where simulation-only studies are available, it is necessary to demonstrate that the simulation model is representative of a genuine application. Strictly theoretical papers will find a more appropriate home in Control Engineering Practice's sister publication, Automatica. It is also expected that papers are innovative with respect to the state of the art and are sufficiently detailed for a reader to be able to duplicate the main results of the paper (supplementary material, including datasets, tables, code and any relevant interactive material can be made available and downloaded from the website). The benefits of the presented methods must be made very clear, and the new techniques must be compared and contrasted with results obtained using existing methods. Moreover, a thorough analysis of failures that may happen in the design process and implementation can also be part of the paper.
The scope of Control Engineering Practice matches the activities of IFAC.
Papers that demonstrate the contribution of automation and control to improving the performance, quality, productivity, sustainability, resource and energy efficiency, and manageability of systems and processes for the benefit of mankind, and that are relevant to industrial practitioners, are most welcome.