Reinforcement learning vs rule-based dynamic movement strategies in UAV assisted networks

Adel Mounir Said, Michel Marot, Chérifa Boucetta, Hossam Afifi, Hassine Moungla, Gatien Roujanski

Vehicular Communications, vol. 48, Article 100788, published 2024-05-08. DOI: 10.1016/j.vehcom.2024.100788
Since resource allocation in cellular networks is not dynamic, some cells may experience unplanned high traffic demand due to unexpected events. Unmanned aerial vehicles (UAVs) can be used to provide the additional bandwidth required for data offloading.

Considering real-time and non-real-time traffic classes, our work optimizes the placement of UAVs in cellular networks through two approaches. The first is a rule-based, low-complexity method that can be embedded in the UAV; the second uses Reinforcement Learning (RL), formulated as a Markov Decision Process (MDP) to provide optimal results. UAV battery energy and charging-time constraints are taken into account to cover a typical cellular environment consisting of many cells.

We used an open dataset for the Milan cellular network, provided by Telecom Italia, to evaluate the performance of both proposed models. On this dataset, the MDP model outperforms the rule-based algorithm. Nevertheless, the rule-based algorithm requires less processing complexity and can be used immediately without any prior data. This work makes a notable contribution to developing practical and optimal solutions for UAV deployment in modern cellular networks.
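To illustrate the kind of MDP formulation the abstract refers to, the following is a minimal, self-contained value-iteration sketch for a toy UAV-placement problem. All specifics here (the 1-D row of cells, the demand values, the move cost, the discount factor) are invented for demonstration and do not reproduce the paper's actual model, state space, or reward function.

```python
# Toy MDP sketch: a UAV on a 1-D row of cells chooses where to hover to
# serve traffic demand, trading offloaded demand against movement energy.
# All numbers below are hypothetical, chosen only for illustration.

N_CELLS = 5                           # cells the UAV can serve (toy example)
ACTIONS = [-1, 0, +1]                 # move left, hover, move right
DEMAND = [0.2, 0.9, 0.4, 0.8, 0.1]   # hypothetical traffic demand per cell
MOVE_COST = 0.05                      # hypothetical energy cost of moving
GAMMA = 0.9                           # discount factor

def step(cell, action):
    """Deterministic toy transition: clamp the UAV to the cell range."""
    nxt = min(max(cell + action, 0), N_CELLS - 1)
    reward = DEMAND[nxt] - (MOVE_COST if action != 0 else 0.0)
    return nxt, reward

def value_iteration(n_iters=100):
    """Compute state values by repeated Bellman backups, then a greedy policy."""
    V = [0.0] * N_CELLS
    for _ in range(n_iters):
        V = [max(step(s, a)[1] + GAMMA * V[step(s, a)[0]] for a in ACTIONS)
             for s in range(N_CELLS)]
    # Greedy policy: best action per cell under the converged values.
    policy = [max(ACTIONS, key=lambda a: step(s, a)[1] + GAMMA * V[step(s, a)[0]])
              for s in range(N_CELLS)]
    return V, policy

V, policy = value_iteration()
print(policy)  # greedy moves drift toward the high-demand cells
```

In this toy setting the optimal policy steers the UAV toward and keeps it hovering over high-demand cells, which is the intuition behind using an MDP for placement; the paper's model additionally accounts for battery energy and charging-time constraints, which this sketch omits.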
Journal overview:
Vehicular communications is a growing area covering communications between vehicles and with roadside infrastructure. Advances in wireless communications are making it possible to share information in real time between vehicles and infrastructure. This has led to applications that increase vehicle safety and connect passengers to the Internet. Standardization efforts on vehicular communication are also underway to make vehicular transportation safer, greener and easier.
The aim of the journal is to publish high-quality peer-reviewed papers in the area of vehicular communications. The scope encompasses all types of communications involving vehicles, including vehicle-to-vehicle and vehicle-to-infrastructure. The scope includes (but is not limited to) the following topics related to vehicular communications:
Vehicle to vehicle and vehicle to infrastructure communications
Channel modelling, modulation and coding
Congestion control and scalability issues
Protocol design, testing and verification
Routing in vehicular networks
Security issues and countermeasures
Deployment and field testing
Reducing energy consumption and enhancing safety of vehicles
Wireless in–car networks
Data collection and dissemination methods
Mobility and handover issues
Safety and driver assistance applications
UAV
Underwater communications
Autonomous cooperative driving
Social networks
Internet of vehicles
Standardization of protocols.