Serene Banerjee, Joy Bose, Sleeba Paul Puthepurakel, Pratyush Kiran Uppuluri, Subhadip Bandyopadhyay, Y. S. K. Reddy, Ranjani H. G.
Proceedings of the Second International Conference on AI-ML Systems, published 2022-10-12. DOI: 10.1145/3564121.3564122
Link-Adaptation for Improved Quality-of-Service in V2V Communication using Reinforcement Learning
For autonomous driving, safer travel, and fleet management, Vehicle-to-Vehicle (V2V) communication protocols are an emerging area of research and development. State-of-the-art techniques include machine learning (ML) and reinforcement learning (RL) to adapt modulation and coding rates as the vehicle moves. However, in a V2V scenario, channel state estimates are often inaccurate and change rapidly. We propose a combination of input features, including (a) sensor inputs from other parameters in the vehicle, such as speed and global positioning system (GPS) readings, (b) estimates of interference and load for each vehicle, and (c) channel state estimates, to find the optimal rate that maximizes Quality-of-Service. Our model uses an ensemble of RL agents to predict trends in the input parameters and to find the interdependencies among them. An RL agent then uses these inputs to select the best modulation and coding rate as the vehicle moves. We demonstrate our results through prototype experiments using real data collected from customer networks.
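To make the idea concrete, the following is a minimal sketch (not the authors' code) of RL-driven link adaptation: a tabular Q-learning agent picks a modulation-and-coding-scheme (MCS) index from a coarse state built out of vehicle speed and an SINR estimate. The state discretization, the nominal MCS rates, and the toy reward model are all assumptions made here for illustration; the paper's actual system uses an ensemble of agents over richer inputs.

```python
import random

# Nominal throughput per MCS index (Mbps); values are assumed for illustration.
MCS_RATES = [0.5, 1.0, 2.0, 4.0]

def discretize(speed_kmh, sinr_db):
    """Map continuous vehicle speed and SINR estimate to a small discrete state."""
    speed_bin = min(int(speed_kmh // 30), 3)        # 0..3
    sinr_bin = min(max(int(sinr_db // 5), 0), 3)    # 0..3
    return (speed_bin, sinr_bin)

class LinkAdaptationAgent:
    """Epsilon-greedy tabular Q-learning over (state, MCS-index) pairs."""
    def __init__(self, n_actions=len(MCS_RATES), eps=0.1, alpha=0.2, gamma=0.9):
        self.q = {}  # state -> list of action values
        self.eps, self.alpha, self.gamma = eps, alpha, gamma
        self.n_actions = n_actions

    def _qvals(self, state):
        return self.q.setdefault(state, [0.0] * self.n_actions)

    def act(self, state):
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        return self.best_action(state)

    def best_action(self, state):
        qv = self._qvals(state)
        return qv.index(max(qv))

    def update(self, state, action, reward, next_state):
        qv = self._qvals(state)
        target = reward + self.gamma * max(self._qvals(next_state))
        qv[action] += self.alpha * (target - qv[action])

def simulated_reward(state, action):
    """Toy channel model: an aggressive MCS pays off only when SINR is high,
    and a failed transmission is penalized."""
    _, sinr_bin = state
    success = action <= sinr_bin
    return MCS_RATES[action] if success else -1.0

# Tiny training loop over randomly drawn (speed, SINR) conditions.
random.seed(0)
agent = LinkAdaptationAgent()
for _ in range(5000):
    s = discretize(random.uniform(0, 120), random.uniform(0, 20))
    a = agent.act(s)
    r = simulated_reward(s, a)
    s2 = discretize(random.uniform(0, 120), random.uniform(0, 20))
    agent.update(s, a, r, s2)

# The trained agent should prefer a conservative MCS in low-SINR states
# and a more aggressive MCS in high-SINR states.
good = agent.best_action((0, 3))  # low speed, high SINR
bad = agent.best_action((0, 0))   # low speed, low SINR
print(good, bad)
```

The design choice mirrored here is the core of link adaptation: the reward couples rate to success probability, so the agent learns to back off to robust modulation when the estimated channel is poor rather than maximizing nominal rate.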