For highly dynamic and complex communication networks, existing DRL-based routing optimization solutions suffer from inefficient training, which degrades network performance. In this paper, we propose an Intelligent Routing Optimization method with Deep Reinforcement Learning and Betweenness Centrality Theory (IROD-BC). This SDN routing solution, based on distributed proximal policy optimization (DPPO), achieves fast training convergence and improves overall network performance. First, before training, we select a set of controlled nodes in the network based on betweenness centrality theory. Second, during training, we adjust the weights of the links incident to these controlled nodes in the weighted shortest-path algorithm, which improves the convergence efficiency of DPPO. The learning agent modifies these link weights based on the network traffic state of the controlled nodes, reducing the agent's dependence on the network topology. We utilize the SDN controller to collect network traffic state information, including packet loss and latency. Ultimately, IROD-BC learns to make better routing control decisions from its own experience by interacting with the network environment until the agent converges and obtains the optimal routing paths. We conducted extensive experiments on three real network topologies to evaluate the performance of IROD-BC. The experimental results show that IROD-BC outperforms existing DRL-based routing solutions and the OSPF algorithm in terms of latency, link throughput, and packet loss.
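The pipeline described above (select high-betweenness controlled nodes, then reweight their incident links before running a weighted shortest-path computation) can be sketched as follows. This is a minimal illustrative sketch, not the paper's method: the betweenness score here is a crude all-pairs shortest-path count rather than the exact Brandes algorithm, and the weight adjustment uses a hypothetical per-node utilization signal in place of the DPPO agent's learned action. All names (`betweenness`, `adjust_weights`, `utilization`) are assumptions for illustration.

```python
import heapq
from itertools import permutations

def dijkstra_path(graph, src, dst):
    """Weighted shortest path on graph = {node: {neighbor: weight}}."""
    dist, prev, visited = {src: 0.0}, {}, set()
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:          # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return path[::-1]

def betweenness(graph):
    """Crude betweenness estimate: count how often each node appears
    as an intermediate hop on shortest paths between all node pairs."""
    score = {n: 0 for n in graph}
    for s, t in permutations(graph, 2):
        for n in dijkstra_path(graph, s, t)[1:-1]:
            score[n] += 1
    return score

def adjust_weights(graph, controlled, utilization):
    """Scale links incident to controlled nodes by (1 + utilization),
    standing in for the DRL agent's weight-adjustment action."""
    g = {u: dict(nbrs) for u, nbrs in graph.items()}
    for n in controlled:
        for v in g[n]:
            w = g[n][v] * (1.0 + utilization.get(n, 0.0))
            g[n][v] = g[v][n] = w
    return g

# Toy 5-node topology; D bridges E to the rest of the network.
graph = {
    "A": {"B": 1, "C": 1},
    "B": {"A": 1, "D": 1},
    "C": {"A": 1, "D": 1},
    "D": {"B": 1, "C": 1, "E": 1},
    "E": {"D": 1},
}
scores = betweenness(graph)
controlled = [max(scores, key=scores.get)]          # top-1 controlled node
adjusted = adjust_weights(graph, controlled, {"D": 0.8})
```

After the adjustment, traffic between B and C is steered around the congested high-betweenness node D (path B-A-C instead of B-D-C), while flows that must traverse D, such as A to E, still do.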
