Leveraging reinforcement learning for dynamic traffic control: A survey and challenges for field implementation

Communications in Transportation Research (IF 12.5, Q1 Transportation) · Pub Date: 2023-11-03 · DOI: 10.1016/j.commtr.2023.100104
Yu Han, Meng Wang, Ludovic Leclercq
Cited by: 1

Abstract

In recent years, the advancement of artificial intelligence techniques has led to significant interest in reinforcement learning (RL) within the traffic and transportation community. Dynamic traffic control has emerged as a prominent application field for RL in traffic systems. This paper presents a comprehensive survey of RL studies in dynamic traffic control, addressing the challenges associated with implementing RL-based traffic control strategies in practice, and identifying promising directions for future research. The first part of this paper provides a comprehensive overview of existing studies on RL-based traffic control strategies, encompassing their model designs, training algorithms, and evaluation methods. It is found that only a few studies have isolated the training and testing environments while evaluating their RL controllers. Subsequently, we examine the challenges involved in implementing existing RL-based traffic control strategies. We investigate the learning costs associated with online RL methods and the transferability of offline RL methods through simulation experiments. The simulation results reveal that online training methods with random exploration suffer from high exploration and learning costs. Additionally, the performance of offline RL methods is highly reliant on the accuracy of the training simulator. These limitations hinder the practical implementation of existing RL-based traffic control strategies. The final part of this paper summarizes and discusses a few existing efforts which attempt to overcome these challenges. This review highlights a rising volume of studies dedicated to mitigating the limitations of RL strategies, with the specific aim of enhancing their practical implementation in recent years.
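To make the abstract's point about exploration cost concrete, the following is a minimal, hypothetical sketch of online tabular Q-learning for a toy two-approach intersection. Everything here (state as capped queue lengths, action as which approach gets green, reward as negative total queue) is an illustrative assumption, not the formulation of any surveyed paper. The epsilon-greedy random exploration is the mechanism whose cost the survey's simulation experiments highlight: exploratory actions serve the wrong queue and degrade traffic while the agent is still learning.

```python
import random

def step(state, action, arrival_p=0.5, service=2):
    """Toy queue dynamics: the green approach discharges `service` vehicles;
    each approach gains one arrival with probability `arrival_p`."""
    queues = list(state)
    queues[action] = max(0, queues[action] - service)
    for i in range(2):
        if random.random() < arrival_p:
            queues[i] = min(9, queues[i] + 1)  # cap to keep the state space tiny
    return tuple(queues), -sum(queues)  # reward: negative total queue (delay proxy)

def train(episodes=200, horizon=50, alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    random.seed(seed)
    q = {}  # Q-table: (state, action) -> estimated return
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(horizon):
            if random.random() < epsilon:   # random exploration: the costly part
                action = random.randrange(2)
            else:                           # greedy exploitation of current estimates
                action = max(range(2), key=lambda a: q.get((state, a), 0.0))
            nxt, reward = step(state, action)
            best_next = max(q.get((nxt, a), 0.0) for a in range(2))
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q
```

In an offline variant, `step` would be replaced by a traffic simulator, which is where the survey's second finding applies: the learned policy is only as good as that simulator's fidelity to the field.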
