{"title":"DRL-OR: Deep Reinforcement Learning-based Online Routing for Multi-type Service Requirements","authors":"Chenyi Liu, Mingwei Xu, Yuan Yang, Nan Geng","doi":"10.1109/INFOCOM42981.2021.9488736","DOIUrl":null,"url":null,"abstract":"Emerging applications raise critical QoS requirements for the Internet. The improvements of flow classification technologies, software defined networks (SDN), and programmable network devices make it possible to fast identify users’ requirements and control the routing for fine-grained traffic flows. Meanwhile, the problem of optimizing the forwarding paths for traffic flows with multiple QoS requirements in an online fashion is not addressed sufficiently. To address the problem, we propose DRL-OR, an online routing algorithm using multi-agent deep reinforcement learning. DRL-OR organizes the agents to generate routes in a hop-by-hop manner, which inherently has good scalability. It adopts a comprehensive reward function, an efficient learning algorithm, and a novel deep neural network structure to learn an appropriate routing policy for different types of flow requirements. To guarantee the reliability and accelerate the online learning process, we further introduce safe learning mechanism to DRL-OR. We implement DRL-OR under SDN architecture and conduct Mininet-based experiments by using real network topologies and traffic traces. The results validate that DRL-OR can well satisfy the requirements of latency-sensitive, throughput-sensitive, latency-throughput-sensitive, and latency-loss-sensitive flows at the same time, while exhibiting great adaptiveness and reliability under the scenarios of link failure, traffic change, and partial deployment.","PeriodicalId":293079,"journal":{"name":"IEEE INFOCOM 2021 - IEEE Conference on Computer Communications","volume":"214 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE INFOCOM 2021 - IEEE Conference on Computer Communications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOCOM42981.2021.9488736","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 16
Abstract
Emerging applications impose critical QoS requirements on the Internet. Improvements in flow classification technologies, software-defined networking (SDN), and programmable network devices make it possible to quickly identify users' requirements and control routing for fine-grained traffic flows. However, the problem of optimizing forwarding paths in an online fashion for traffic flows with multiple QoS requirements has not been sufficiently addressed. To tackle this problem, we propose DRL-OR, an online routing algorithm based on multi-agent deep reinforcement learning. DRL-OR organizes the agents to generate routes in a hop-by-hop manner, which is inherently scalable. It adopts a comprehensive reward function, an efficient learning algorithm, and a novel deep neural network structure to learn an appropriate routing policy for different types of flow requirements. To guarantee reliability and accelerate the online learning process, we further introduce a safe learning mechanism into DRL-OR. We implement DRL-OR under an SDN architecture and conduct Mininet-based experiments using real network topologies and traffic traces. The results validate that DRL-OR simultaneously satisfies the requirements of latency-sensitive, throughput-sensitive, latency-throughput-sensitive, and latency-loss-sensitive flows, while exhibiting strong adaptability and reliability under link failures, traffic changes, and partial deployment.
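The hop-by-hop, multi-agent design can be pictured as one lightweight agent per node choosing the next hop for an arriving flow, with a reward that weights the QoS metrics the flow's service type cares about. The sketch below is purely illustrative: the random stand-in policy, the metric weights, and all names are assumptions for exposition, not the paper's actual neural-network implementation.

```python
import random

# Illustrative sketch only: per-node agents route a flow hop by hop and
# receive a reward that weights latency, throughput, and loss according
# to the flow's service type. Weights and names are assumptions.

TYPE_WEIGHTS = {                      # hypothetical per-type reward weights
    "latency":            (1.0, 0.0, 0.0),
    "throughput":         (0.0, 1.0, 0.0),
    "latency-throughput": (0.5, 0.5, 0.0),
    "latency-loss":       (0.5, 0.0, 0.5),
}

def reward(flow_type, delay_ms, throughput_mbps, loss_rate):
    """Combine (roughly normalized) QoS metrics into a scalar reward."""
    w_d, w_t, w_l = TYPE_WEIGHTS[flow_type]
    # Lower delay/loss and higher throughput are better.
    return -w_d * delay_ms / 100.0 + w_t * throughput_mbps / 100.0 - w_l * loss_rate

class NodeAgent:
    """One agent per node; here it picks a random unvisited neighbor,
    standing in for the learned neural-network policy."""
    def __init__(self, node, neighbors):
        self.node = node
        self.neighbors = neighbors

    def next_hop(self, dst, visited):
        if dst in self.neighbors:
            return dst
        candidates = [n for n in self.neighbors if n not in visited]
        return random.choice(candidates) if candidates else None

def route(agents, src, dst, max_hops=16):
    """Generate a path hop by hop, one agent decision per node."""
    path, node = [src], src
    while node != dst and len(path) <= max_hops:
        nxt = agents[node].next_hop(dst, set(path))
        if nxt is None:
            return None          # dead end; a trained policy would avoid this
        path.append(nxt)
        node = nxt
    return path if node == dst else None

if __name__ == "__main__":
    topo = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
    agents = {n: NodeAgent(n, nbrs) for n, nbrs in topo.items()}
    print(route(agents, 0, 3))
    print(reward("latency-throughput", delay_ms=20, throughput_mbps=80, loss_rate=0.01))
```

Because each node only decides its own next hop, the decision space per agent stays small as the network grows, which is the scalability argument behind the hop-by-hop formulation.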