{"title":"Multi-agent deep reinforcement learning for cross-layer scheduling in mobile ad-hoc networks","authors":"Xinxing Zheng, Yu Zhao, Joohyun Lee, Wei Chen","doi":"10.23919/jcc.fa.2022-0496.202308","DOIUrl":null,"url":null,"abstract":"Due to the fading characteristics of wireless channels and the burstiness of data traffic, how to deal with congestion in Ad-hoc networks with effective algorithms is still open and challenging. In this paper, we focus on enabling congestion control to minimize network transmission delays through flexible power control. To effectively solve the congestion problem, we propose a distributed cross-layer scheduling algorithm, which is empowered by graph-based multi-agent deep reinforcement learning. The transmit power is adaptively adjusted in real-time by our algorithm based only on local information (i.e., channel state information and queue length) and local communication (i.e., information exchanged with neighbors). Moreover, the training complexity of the algorithm is low due to the regional cooperation based on the graph attention network. In the evaluation, we show that our algorithm can reduce the transmission delay of data flow under severe signal interference and drastically changing channel states, and demonstrate the adaptability and stability in different topologies. The method is general and can be extended to various types of topologies.","PeriodicalId":9814,"journal":{"name":"China Communications","volume":"1 1","pages":"0"},"PeriodicalIF":3.1000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"China Communications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/jcc.fa.2022-0496.202308","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Abstract
Due to the fading characteristics of wireless channels and the burstiness of data traffic, designing effective congestion-control algorithms for ad-hoc networks remains an open and challenging problem. In this paper, we focus on congestion control that minimizes network transmission delay through flexible power control. To address the congestion problem, we propose a distributed cross-layer scheduling algorithm empowered by graph-based multi-agent deep reinforcement learning. Our algorithm adaptively adjusts the transmit power in real time using only local information (i.e., channel state information and queue length) and local communication (i.e., information exchanged with neighbors). Moreover, the training complexity of the algorithm remains low thanks to regional cooperation based on a graph attention network. In the evaluation, we show that our algorithm reduces the transmission delay of data flows under severe signal interference and drastically changing channel states, and we demonstrate its adaptability and stability across different topologies. The method is general and can be extended to various types of topologies.
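
To make the described architecture concrete, the sketch below illustrates the general pattern the abstract outlines: each node encodes its purely local observation (channel state and queue length), exchanges embeddings with one-hop neighbors through a graph-attention layer, and outputs a discrete transmit-power decision. This is not the authors' implementation; the class names (NeighborAttention, PowerControlActor), layer sizes, the discrete power-level set, and the random toy topology are all illustrative assumptions.

```python
# Minimal sketch of graph-attention-based, per-node power control (assumed design,
# not the paper's code): local observation + neighbor messages -> power level.
import torch
import torch.nn as nn

class NeighborAttention(nn.Module):
    """Single-head attention over one-hop neighbors (regional cooperation)."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim, bias=False)
        self.key = nn.Linear(dim, dim, bias=False)
        self.value = nn.Linear(dim, dim, bias=False)

    def forward(self, h, adj):
        # h: (N, dim) node embeddings; adj: (N, N) 0/1 adjacency (1 = neighbor)
        q, k, v = self.query(h), self.key(h), self.value(h)
        scores = q @ k.t() / h.size(-1) ** 0.5            # attention logits
        scores = scores.masked_fill(adj == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1)              # weights over neighbors only
        return attn @ v                                   # aggregated neighbor message

class PowerControlActor(nn.Module):
    """Per-node actor: (CSI, queue length) + neighbor message -> power level."""
    def __init__(self, obs_dim=2, hidden=32, num_power_levels=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.attention = NeighborAttention(hidden)
        self.head = nn.Linear(2 * hidden, num_power_levels)

    def forward(self, obs, adj):
        h = self.encoder(obs)                             # encode local observation
        msg = self.attention(h, adj)                      # local communication with neighbors
        logits = self.head(torch.cat([h, msg], dim=-1))
        return torch.distributions.Categorical(logits=logits)

# Toy rollout step on a random 6-node topology (illustrative only).
num_nodes = 6
obs = torch.rand(num_nodes, 2)                            # [channel gain, queue length]
adj = (torch.rand(num_nodes, num_nodes) > 0.5).float()    # random neighbor graph
adj.fill_diagonal_(1.0)                                   # each node also attends to itself
actor = PowerControlActor()
dist = actor(obs, adj)
power_levels = dist.sample()                              # discrete power index per node
print(power_levels)
```

In a multi-agent training setup, each node would run such an actor on its own observations, which is consistent with the abstract's claim that decisions rely only on local information and one-hop message exchange.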
About the Journal
China Communications (ISSN 1673-5447) is an English-language monthly journal cosponsored by the China Institute of Communications (CIC) and IEEE Communications Society (IEEE ComSoc). It is aimed at readers in industry, universities, research and development organizations, and government agencies in the field of Information and Communications Technologies (ICTs) worldwide.
The journal's main objective is to promote academic exchange in the ICTs sector and publish high-quality papers to contribute to the global ICTs industry. It provides instant access to the latest articles and papers, presenting leading-edge research achievements, tutorial overviews, and descriptions of significant practical applications of technology.
China Communications has been indexed in SCIE (Science Citation Index-Expanded) since January 2007. Additionally, all articles have been available in the IEEE Xplore digital library since January 2013.