{"title":"基于双深Q网络的时变交通流智能交通控制","authors":"Priyadharshini Shanmugasundaram, Aakash Sinha","doi":"10.1109/SPIN52536.2021.9565961","DOIUrl":null,"url":null,"abstract":"Reinforcement learning, a sub-field of Machine Learning has been garnering lot of research attention lately. It helps create intelligent agents that can incrementally learn optimal strategies for challenging environments by interacting with it. Such agents are best suited for solving problems like traffic congestion, which demand solutions that eater to dynamic changes in the traffic throughput. Intelligent transportation systems which use deep reinforcement learning can adapt to varying traffic demands and learn to maintain reduced congestion. In this paper, we propose a solution approach to use Double Deep Q Networks for traffic signal control of varied traffic flows in an isolated intersection. To improve the stability of our proposed method we have used target networks, delayed updates and experience replay mechanisms. We evaluate the performance of our method on different time-varying traffic flows and find that our method learns a robust and optimal strategy which reduces vehicle waiting time and queue length significantly. Our method achieved superior performance compared to traditional traffic signal control strategies. The method has been trained and evaluated through simulations of road networks created on Simulation of Urban Mobility (SUMO).","PeriodicalId":343177,"journal":{"name":"2021 8th International Conference on Signal Processing and Integrated Networks (SPIN)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Intelligent Traffic Control using Double Deep Q Networks for time-varying Traffic Flows\",\"authors\":\"Priyadharshini Shanmugasundaram, Aakash Sinha\",\"doi\":\"10.1109/SPIN52536.2021.9565961\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Reinforcement learning, a sub-field of Machine Learning has been garnering lot of research attention lately. It helps create intelligent agents that can incrementally learn optimal strategies for challenging environments by interacting with it. Such agents are best suited for solving problems like traffic congestion, which demand solutions that eater to dynamic changes in the traffic throughput. Intelligent transportation systems which use deep reinforcement learning can adapt to varying traffic demands and learn to maintain reduced congestion. In this paper, we propose a solution approach to use Double Deep Q Networks for traffic signal control of varied traffic flows in an isolated intersection. To improve the stability of our proposed method we have used target networks, delayed updates and experience replay mechanisms. We evaluate the performance of our method on different time-varying traffic flows and find that our method learns a robust and optimal strategy which reduces vehicle waiting time and queue length significantly. Our method achieved superior performance compared to traditional traffic signal control strategies. 
The method has been trained and evaluated through simulations of road networks created on Simulation of Urban Mobility (SUMO).\",\"PeriodicalId\":343177,\"journal\":{\"name\":\"2021 8th International Conference on Signal Processing and Integrated Networks (SPIN)\",\"volume\":\"60 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-08-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 8th International Conference on Signal Processing and Integrated Networks (SPIN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SPIN52536.2021.9565961\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 8th International Conference on Signal Processing and Integrated Networks (SPIN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SPIN52536.2021.9565961","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Reinforcement learning, a sub-field of machine learning, has been garnering a lot of research attention lately. It helps create intelligent agents that incrementally learn optimal strategies for challenging environments by interacting with them. Such agents are well suited to solving problems like traffic congestion, which demand solutions that cater to dynamic changes in traffic throughput. Intelligent transportation systems that use deep reinforcement learning can adapt to varying traffic demands and learn to keep congestion low. In this paper, we propose an approach that uses Double Deep Q Networks for traffic signal control under varied traffic flows at an isolated intersection. To improve the stability of the proposed method, we use target networks, delayed updates, and an experience replay mechanism. We evaluate the performance of our method on different time-varying traffic flows and find that it learns a robust and optimal strategy that significantly reduces vehicle waiting time and queue length. Our method achieves superior performance compared with traditional traffic signal control strategies. The method has been trained and evaluated through simulations of road networks created in Simulation of Urban Mobility (SUMO).
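
To make the learning mechanism described in the abstract concrete, the sketch below shows a generic Double DQN update with a target network, delayed target synchronization, and an experience replay buffer, written in PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation: the state dimension, number of signal phases, network sizes, and hyperparameters are all placeholders.

    # Minimal Double DQN update sketch (illustrative; dimensions and
    # hyperparameters are assumptions, not the paper's configuration).
    import random
    from collections import deque

    import torch
    import torch.nn as nn

    STATE_DIM, N_ACTIONS = 8, 4   # e.g., per-lane queue features -> signal phases (assumed)
    GAMMA = 0.99                  # discount factor (assumed)

    def make_net():
        return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                             nn.Linear(64, 64), nn.ReLU(),
                             nn.Linear(64, N_ACTIONS))

    online_net, target_net = make_net(), make_net()
    target_net.load_state_dict(online_net.state_dict())  # target network starts as a copy
    optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-3)
    replay = deque(maxlen=50_000)                          # experience replay buffer

    def train_step(batch_size=32):
        if len(replay) < batch_size:
            return
        batch = random.sample(replay, batch_size)
        s, a, r, s2, done = map(torch.tensor, zip(*batch))
        s, s2, r, done = s.float(), s2.float(), r.float(), done.float()

        q = online_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            # Double DQN: the online net selects the next action,
            # the target net evaluates it.
            next_a = online_net(s2).argmax(dim=1, keepdim=True)
            next_q = target_net(s2).gather(1, next_a).squeeze(1)
            target = r + GAMMA * (1.0 - done) * next_q

        loss = nn.functional.smooth_l1_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    def sync_target():
        # Delayed update: copy online weights into the target network every N steps.
        target_net.load_state_dict(online_net.state_dict())

In a traffic-signal setting, the state could encode quantities such as per-lane queue lengths, the action would select the next phase, and the reward could penalize waiting time; the exact formulation used in the paper is not reproduced here.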
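
Since evaluation is done in SUMO, the following is a rough sketch of how an agent can observe and actuate a simulated intersection through SUMO's TraCI API. The configuration file name, traffic-light ID, and fixed phase count are hypothetical placeholders, and the phase-selection step is a stand-in for a trained agent.

    # Hypothetical SUMO/TraCI control loop for a single intersection.
    # The scenario file, traffic-light id, and phase count are assumptions.
    import traci

    traci.start(["sumo", "-c", "intersection.sumocfg"])  # assumed SUMO scenario
    TLS_ID = "center"                                    # assumed traffic-light id
    N_PHASES = 4                                         # assumed number of signal phases

    for step in range(3600):
        traci.simulationStep()
        # Observe halted vehicles on incoming lanes as a simple congestion signal.
        lanes = traci.trafficlight.getControlledLanes(TLS_ID)
        queue = [traci.lane.getLastStepHaltingNumber(lane) for lane in lanes]
        # A trained agent would map this observation to the next phase;
        # this placeholder simply cycles through phases every 30 steps.
        if step % 30 == 0:
            traci.trafficlight.setPhase(TLS_ID, (step // 30) % N_PHASES)

    traci.close()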