Title: Multi-agent Q-learning for autonomous D2D communication
Authors: Alia Asheralieva, Y. Miyanaga
Venue: 2016 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS)
Publication date: 2016-10-01
DOI: 10.1109/ISPACS.2016.7824674
Citations: 7
Abstract
This paper addresses autonomous device-to-device (D2D) communication in cellular networks. The aim of each D2D pair is to maximize its throughput subject to minimum signal-to-interference-plus-noise ratio (SINR) constraints. This problem is represented as a stochastic non-cooperative game in which the players (D2D pairs) have no prior information on the availability and quality of the selected channels. Each player in this game therefore becomes a "learner" that explores all of its possible strategies based on the locally observed throughput and state (defined by the channel quality). Accordingly, we propose a multi-agent Q-learning algorithm based on the players' "beliefs" about the strategies of their counterparts and show its implementation in a Long Term Evolution-Advanced (LTE-A) network. Simulations show that the algorithm achieves near-optimal performance after a small number of iterations.
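The core idea of belief-based multi-agent Q-learning can be sketched as follows. This is an illustrative toy, not the paper's algorithm: the SINR-based state and throughput reward are replaced by a generic reward signal, beliefs are modeled as empirical frequencies of a single counterpart's channel choices, and all class and parameter names (`BeliefQLearner`, `alpha`, `gamma`, `epsilon`) are assumptions for this example.

```python
import random
from collections import defaultdict


class BeliefQLearner:
    """One D2D pair learning which channel to use.

    The agent keeps a Q-table indexed by (state, own action, counterpart
    action) and maintains a "belief" (empirical frequency) over the
    counterpart's actions, so action values can be averaged over what the
    counterpart is expected to play.
    """

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.actions = actions            # available channels
        self.alpha = alpha                # learning rate
        self.gamma = gamma                # discount factor
        self.epsilon = epsilon            # exploration probability
        self.q = defaultdict(float)       # Q[(state, action, other_action)]
        self.counts = defaultdict(int)    # observed counterpart actions

    def belief(self, other_action):
        # Empirical probability that the counterpart plays `other_action`;
        # uniform before any observation.
        total = sum(self.counts.values())
        return self.counts[other_action] / total if total else 1.0 / len(self.actions)

    def expected_q(self, state, action):
        # Q-value of `action`, averaged over beliefs about the counterpart.
        return sum(self.belief(b) * self.q[(state, action, b)] for b in self.actions)

    def choose(self, state):
        # Epsilon-greedy selection on the belief-averaged Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.expected_q(state, a))

    def update(self, state, action, other_action, reward, next_state):
        # Record the counterpart's move, then do a standard Q-update on the
        # joint-action entry using the belief-averaged value of next_state.
        self.counts[other_action] += 1
        best_next = max(self.expected_q(next_state, a) for a in self.actions)
        key = (state, action, other_action)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])


# Toy demo: the counterpart always occupies channel 0, and the learner is
# rewarded (a stand-in for high-SINR throughput) only when it avoids
# interfering, i.e. picks a different channel.
random.seed(0)
learner = BeliefQLearner(actions=[0, 1])
state = "s"  # single dummy state for the demo
for _ in range(500):
    a = learner.choose(state)
    other = 0
    reward = 1.0 if a != other else 0.0
    learner.update(state, a, other, reward, state)
```

After training, the learner's belief concentrates on the counterpart playing channel 0, and the belief-averaged Q-value of channel 1 dominates, so the greedy policy anticoordinates with the counterpart.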