Combining Software-Defined and Delay-Tolerant Networking Concepts With Deep Reinforcement Learning Technology to Enhance Vehicular Networks

Olivia Nakayima; Mostafa I. Soliman; Kazunori Ueda; Samir A. Elsagheer Mohamed

IEEE Open Journal of Vehicular Technology, vol. 5, pp. 721-736, published 2024-03-03. DOI: 10.1109/OJVT.2024.3396637. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10518068
Ensuring reliable data transmission across all Vehicular Ad-hoc Network (VANET) segments is paramount in modern vehicular communications. Vehicular operations face unpredictable network conditions that affect the adaptiveness of routing protocols. Several solutions have addressed these challenges, but each has noted shortcomings. This work proposes a centralised-controller multi-agent (CCMA) algorithm based on Software-Defined Networking (SDN) and Delay-Tolerant Networking (DTN) principles to enhance VANET performance using Reinforcement Learning (RL). The algorithm is trained and validated in a simulation environment modelling the network nodes, routing protocols and buffer schedules. It optimally deploys DTN routing protocols (Spray and Wait, Epidemic, and PRoPHETv2) and buffer schedules (Random, Defer, Earliest Deadline First, First In First Out, and Largest/Smallest Bundle First) based on network state information (i.e., traffic pattern, buffer size variance, node and link uptime, bundle Time To Live (TTL), and link loss and capacity). These are implemented in three environment types: Advanced Technological Regions, Limited Resource Regions and Opportunistic Communication Regions. The study assesses the performance of the multi-protocol approach using the metrics TTL, buffer management, link quality, delivery ratio, latency and overhead scores. Comparative analysis with single-protocol VANETs, simulated using the Opportunistic Network Environment (ONE), demonstrates improved performance of the proposed algorithm in all VANET scenarios.
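To make the abstract's core idea concrete, the sketch below shows how a centralised controller could use tabular Q-learning to select a (routing protocol, buffer schedule) pair from discretised network-state features such as those the abstract lists. This is a minimal illustrative assumption, not the authors' implementation: the class and function names, the state discretisation thresholds, and the hyperparameters are all hypothetical; a real reward would be derived from the delivery-ratio, latency and overhead metrics named in the paper.

```python
import random
from itertools import product

# Actions: every combination of a DTN routing protocol and a buffer schedule,
# as enumerated in the abstract.
PROTOCOLS = ["SprayAndWait", "Epidemic", "PRoPHETv2"]
SCHEDULES = ["Random", "Defer", "EDF", "FIFO", "LargestFirst", "SmallestFirst"]
ACTIONS = list(product(PROTOCOLS, SCHEDULES))


def discretise_state(traffic, buffer_var, link_loss):
    """Bucket continuous network-state readings into a coarse hashable key.
    Thresholds here are arbitrary illustrative choices."""
    return (
        "high" if traffic > 0.5 else "low",
        "high" if buffer_var > 0.5 else "low",
        "lossy" if link_loss > 0.2 else "stable",
    )


class CentralController:
    """Hypothetical SDN-style controller learning which (protocol, schedule)
    deployment works best in each observed network state."""

    def __init__(self, alpha=0.3, gamma=0.9, epsilon=0.1, seed=0):
        self.q = {}  # (state, action) -> estimated long-run score
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = random.Random(seed)

    def choose(self, state):
        """Epsilon-greedy choice of a deployment for the given state."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        """Standard one-step Q-learning update after observing the reward
        (e.g. a weighted mix of delivery ratio, latency and overhead)."""
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old
        )
```

A usage round would be: observe the region's state, deploy the chosen pair, measure the resulting performance score, then call `update` so future choices in that state improve. The three region types in the paper could simply be three such controllers, or extra components of the state key.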