{"title":"基于深度强化学习的路边通信网络调度","authors":"Ribal Atallah, C. Assi, Maurice J. Khabbaz","doi":"10.23919/WIOPT.2017.7959912","DOIUrl":null,"url":null,"abstract":"The proper design of a vehicular network is the key expeditor for establishing an efficient Intelligent Transportation System, which enables diverse applications associated with traffic safety, traffic efficiency, and the entertainment of commuting passengers. In this paper, we address both safety and Quality-of-Service (QoS) concerns in a green Vehicle-to-Infrastructure communication scenario. Using the recent advances in training deep neural networks, we present a deep reinforcement learning model, namely deep Q-network, that learns an energy-efficient scheduling policy from high-dimensional inputs corresponding to the characteristics and requirements of vehicles residing within a RoadSide Unit's (RSU) communication range. The realized policy serves to extend the lifetime of the battery-powered RSU while promoting a safe environment that meets acceptable QoS levels. Our presented deep reinforcement learning model is found to outperform both random and greedy scheduling benchmarks.","PeriodicalId":6630,"journal":{"name":"2017 15th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt)","volume":"5 1","pages":"1-8"},"PeriodicalIF":0.0000,"publicationDate":"2017-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"58","resultStr":"{\"title\":\"Deep reinforcement learning-based scheduling for roadside communication networks\",\"authors\":\"Ribal Atallah, C. Assi, Maurice J. Khabbaz\",\"doi\":\"10.23919/WIOPT.2017.7959912\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The proper design of a vehicular network is the key expeditor for establishing an efficient Intelligent Transportation System, which enables diverse applications associated with traffic safety, traffic efficiency, and the entertainment of commuting passengers. In this paper, we address both safety and Quality-of-Service (QoS) concerns in a green Vehicle-to-Infrastructure communication scenario. Using the recent advances in training deep neural networks, we present a deep reinforcement learning model, namely deep Q-network, that learns an energy-efficient scheduling policy from high-dimensional inputs corresponding to the characteristics and requirements of vehicles residing within a RoadSide Unit's (RSU) communication range. The realized policy serves to extend the lifetime of the battery-powered RSU while promoting a safe environment that meets acceptable QoS levels. 
Our presented deep reinforcement learning model is found to outperform both random and greedy scheduling benchmarks.\",\"PeriodicalId\":6630,\"journal\":{\"name\":\"2017 15th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt)\",\"volume\":\"5 1\",\"pages\":\"1-8\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-05-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"58\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 15th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/WIOPT.2017.7959912\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 15th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/WIOPT.2017.7959912","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: The proper design of a vehicular network is a key enabler for an efficient Intelligent Transportation System, which supports diverse applications associated with traffic safety, traffic efficiency, and the entertainment of commuting passengers. In this paper, we address both safety and Quality-of-Service (QoS) concerns in a green Vehicle-to-Infrastructure communication scenario. Leveraging recent advances in training deep neural networks, we present a deep reinforcement learning model, namely a deep Q-network, that learns an energy-efficient scheduling policy from high-dimensional inputs describing the characteristics and requirements of the vehicles residing within a RoadSide Unit's (RSU) communication range. The learned policy extends the lifetime of the battery-powered RSU while promoting a safe environment that meets acceptable QoS levels. The presented model outperforms both random and greedy scheduling benchmarks.
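The abstract gives no implementation details, so the following is only a minimal, hypothetical sketch of the kind of DQN-based RSU scheduler it describes, written in PyTorch. The state layout (three assumed features per vehicle plus the RSU's residual energy), the action set (serve one vehicle or stay idle), the network architecture, and every hyperparameter (N_VEHICLES, STATE_DIM, GAMMA, the learning rate, the reward shaping) are illustrative assumptions, not the authors' design; only the generic machinery (epsilon-greedy exploration, experience replay, a target network, and the temporal-difference loss) reflects the standard deep Q-network approach the paper builds on.

```python
# Hypothetical sketch of a DQN scheduler for a battery-powered RSU.
# NOT the authors' implementation: state layout, action set, sizes,
# and hyperparameters are assumptions made for illustration only.
import random
from collections import deque

import torch
import torch.nn as nn

N_VEHICLES = 10                    # assumed max vehicles tracked per slot
STATE_DIM = 3 * N_VEHICLES + 1     # 3 assumed features per vehicle + RSU residual energy
N_ACTIONS = N_VEHICLES + 1         # serve one vehicle, or stay idle to conserve energy
GAMMA = 0.99                       # assumed discount factor


class QNetwork(nn.Module):
    """Small MLP mapping the RSU/vehicle state to one Q-value per action."""

    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


policy_net = QNetwork()
target_net = QNetwork()
target_net.load_state_dict(policy_net.state_dict())
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-4)
replay = deque(maxlen=50_000)      # experience replay buffer of (s, a, r, s', done)


def select_action(state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy choice over the Q-network's outputs."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(policy_net(state).argmax().item())


def train_step(batch_size: int = 64) -> None:
    """One gradient step on the standard DQN temporal-difference loss.

    Rewards stored in the replay buffer are assumed to trade off served
    QoS/safety traffic against the transmission energy spent by the RSU.
    """
    if len(replay) < batch_size:
        return
    batch = random.sample(list(replay), batch_size)
    states, actions, rewards, next_states, dones = zip(*batch)
    states = torch.stack(states)
    actions = torch.tensor(actions)
    rewards = torch.tensor(rewards, dtype=torch.float32)
    next_states = torch.stack(next_states)
    dones = torch.tensor(dones, dtype=torch.float32)

    q_sa = policy_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_states).max(1).values
        target = rewards + GAMMA * q_next * (1.0 - dones)

    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a training loop one would, per scheduling slot, build the state vector from the vehicles currently in range, call select_action, apply the resulting schedule, store the observed transition in replay, call train_step, and periodically copy policy_net's weights into target_net; all of this is standard DQN practice rather than detail taken from the paper.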