{"title":"车载和飞行Ad Hoc网络中基于强化学习的路由协议——文献综述","authors":"Pavle D. Bugarčić, N. Jevtic, Marija Z. Malnar","doi":"10.7307/ptt.v34i6.4159","DOIUrl":null,"url":null,"abstract":"Vehicular and flying ad hoc networks (VANETs and FANETs) are becoming increasingly important with the development of smart cities and intelligent transportation systems (ITSs). The high mobility of nodes in these networks leads to frequent link breaks, which complicates the discovery of optimal route from source to destination and degrades network performance. One way to overcome this problem is to use machine learning (ML) in the routing process, and the most promising among different ML types is reinforcement learning (RL). Although there are several surveys on RL-based routing protocols for VANETs and FANETs, an important issue of integrating RL with well-established modern technologies, such as software-defined networking (SDN) or blockchain, has not been adequately addressed, especially when used in complex ITSs. In this paper, we focus on performing a comprehensive categorisation of RL-based routing protocols for both network types, having in mind their simultaneous use and the inclusion with other technologies. A detailed comparative analysis of protocols is carried out based on different factors that influence the reward function in RL and the consequences they have on network performance. Also, the key advantages and limitations of RL-based routing are discussed in detail.","PeriodicalId":54546,"journal":{"name":"Promet-Traffic & Transportation","volume":null,"pages":null},"PeriodicalIF":0.8000,"publicationDate":"2022-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reinforcement Learning-Based Routing Protocols in Vehicular and Flying Ad Hoc Networks – A Literature Survey\",\"authors\":\"Pavle D. Bugarčić, N. Jevtic, Marija Z. Malnar\",\"doi\":\"10.7307/ptt.v34i6.4159\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Vehicular and flying ad hoc networks (VANETs and FANETs) are becoming increasingly important with the development of smart cities and intelligent transportation systems (ITSs). The high mobility of nodes in these networks leads to frequent link breaks, which complicates the discovery of optimal route from source to destination and degrades network performance. One way to overcome this problem is to use machine learning (ML) in the routing process, and the most promising among different ML types is reinforcement learning (RL). Although there are several surveys on RL-based routing protocols for VANETs and FANETs, an important issue of integrating RL with well-established modern technologies, such as software-defined networking (SDN) or blockchain, has not been adequately addressed, especially when used in complex ITSs. In this paper, we focus on performing a comprehensive categorisation of RL-based routing protocols for both network types, having in mind their simultaneous use and the inclusion with other technologies. A detailed comparative analysis of protocols is carried out based on different factors that influence the reward function in RL and the consequences they have on network performance. 
Also, the key advantages and limitations of RL-based routing are discussed in detail.\",\"PeriodicalId\":54546,\"journal\":{\"name\":\"Promet-Traffic & Transportation\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.8000,\"publicationDate\":\"2022-12-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Promet-Traffic & Transportation\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.7307/ptt.v34i6.4159\",\"RegionNum\":4,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"TRANSPORTATION SCIENCE & TECHNOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Promet-Traffic & Transportation","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.7307/ptt.v34i6.4159","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"TRANSPORTATION SCIENCE & TECHNOLOGY","Score":null,"Total":0}
Reinforcement Learning-Based Routing Protocols in Vehicular and Flying Ad Hoc Networks – A Literature Survey
Vehicular and flying ad hoc networks (VANETs and FANETs) are becoming increasingly important with the development of smart cities and intelligent transportation systems (ITSs). The high mobility of nodes in these networks leads to frequent link breaks, which complicates the discovery of the optimal route from source to destination and degrades network performance. One way to overcome this problem is to use machine learning (ML) in the routing process, and the most promising among the different ML types is reinforcement learning (RL). Although there are several surveys on RL-based routing protocols for VANETs and FANETs, the important issue of integrating RL with well-established modern technologies, such as software-defined networking (SDN) or blockchain, has not been adequately addressed, especially in complex ITSs. In this paper, we focus on a comprehensive categorisation of RL-based routing protocols for both network types, having in mind their simultaneous use and their integration with other technologies. A detailed comparative analysis of the protocols is carried out based on the different factors that influence the reward function in RL and the consequences these factors have on network performance. Also, the key advantages and limitations of RL-based routing are discussed in detail.
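To make the survey's comparative axis concrete, the sketch below illustrates a generic Q-routing-style next-hop selection of the kind discussed in such surveys, where the reward blends link-level factors such as estimated link lifetime and per-hop delay. This is a minimal illustration only: the class, factor choice, weights and parameter values are assumptions made for exposition and are not taken from the paper or from any specific surveyed protocol.

```python
# Illustrative Q-routing-style sketch (not from the surveyed paper).
# Each node keeps Q[dest][next_hop]: the expected discounted reward of
# forwarding a packet toward dest via that neighbour. The reward mixes
# hypothetical VANET/FANET factors (link lifetime, per-hop delay); the
# factor set and weights are assumptions chosen for illustration only.
from collections import defaultdict
import random

ALPHA = 0.5    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration probability


class QRoutingAgent:
    def __init__(self):
        # Q-table: destination -> next hop -> value estimate
        self.q = defaultdict(lambda: defaultdict(float))

    def reward(self, link_lifetime_s, hop_delay_s):
        """Hypothetical reward: favour stable links, penalise delay."""
        return 0.7 * min(link_lifetime_s / 10.0, 1.0) - 0.3 * hop_delay_s

    def choose_next_hop(self, dest, neighbors):
        """Epsilon-greedy next-hop selection among current neighbours."""
        if not neighbors:
            return None
        if random.random() < EPSILON:
            return random.choice(neighbors)
        return max(neighbors, key=lambda n: self.q[dest][n])

    def update(self, dest, next_hop, link_lifetime_s, hop_delay_s, neighbor_best_q):
        """One-step Q-learning update after forwarding a packet toward dest."""
        r = self.reward(link_lifetime_s, hop_delay_s)
        old = self.q[dest][next_hop]
        self.q[dest][next_hop] = old + ALPHA * (r + GAMMA * neighbor_best_q - old)


# Usage: forward one packet toward destination "D" and learn from feedback.
agent = QRoutingAgent()
nh = agent.choose_next_hop("D", ["A", "B"])
agent.update("D", nh, link_lifetime_s=4.2, hop_delay_s=0.05, neighbor_best_q=0.0)
```

In practice, protocols of this kind differ mainly in which factors enter the reward (delay, link stability, congestion, position-based progress toward the destination, or residual energy in FANETs) and in how the neighbour's best Q-value is fed back, for example piggybacked on hello or acknowledgement messages.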
Journal introduction:
This scientific journal publishes papers in the technical sciences, specifically in the field of transport and traffic technology.
The basic guidelines of the journal, which support its mission of promoting transport science, are the relevance of published papers, the competence of reviewers, and an established identity in its print and publishing profile, along with other formal and informal standards. The journal's organisation consists of the Editorial Board, the Editors, the Reviewer Selection Committee and the Scientific Advisory Committee.
Received papers undergo peer review in accordance with the recommendations for international scientific journals.
The papers published in the journal are placed in sections which explain their focus in more detail. The sections are: transportation economy, information and communication technology, intelligent transport systems, human-transport interaction, intermodal transport, education in traffic and transport, traffic planning, traffic and environment (ecology), traffic on motorways, traffic in the cities, transport and sustainable development, traffic and space, traffic infrastructure, traffic policy, transport engineering, transport law, safety and security in traffic, transport logistics, transport technology, transport telematics, internal transport, traffic management, science in traffic and transport, traffic engineering, transport in emergency situations, swarm intelligence in transportation engineering.
The journal also publishes information not subject to review, classified under the following headings: book and other reviews, symposia, conferences and exhibitions, scientific cooperation, anniversaries, portraits, bibliographies, publisher information, news, etc.