TCP Congestion Management Using Deep Reinforcement Trained Agent for RED
Majid Hamid Ali, Serkan Öztürk
Concurrency and Computation: Practice and Experience, vol. 36, no. 28, published 2024-10-14
DOI: 10.1002/cpe.8300 (https://onlinelibrary.wiley.com/doi/10.1002/cpe.8300)
Abstract
Increasing data transmission volumes are causing more frequent and more severe network congestion. To absorb spikes in network traffic, substantially larger buffers have been added to network devices. However, larger buffers cause bufferbloat, which exacerbates congestion. Combining the transmission control protocol (TCP) congestion management strategy with active queue management (AQM) can mitigate this issue. As congestion grows, it becomes increasingly difficult to model and fine-tune dynamic AQM/TCP systems to achieve acceptable performance. To shed new light on AQM, we apply deep reinforcement learning (DRL). With a model-free technique such as DRL-AQM, the queue manager can learn an appropriate drop policy from experience, much as a person learns from trial and error. After training in a simple network scenario, DRL-AQM can recognize complex patterns in the traffic and exploit them to improve performance across a wide variety of scenarios. In our approach, offline training precedes deployment, and in many cases the model requires no further parameter tuning after training. The resulting AQM policy remains effective even in highly complex networks. Minimizing buffer occupancy is an important goal of DRL-AQM. It automatically and continually adapts to changes in network conditions.
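For context, the random early detection (RED) scheme named in the title computes a drop probability from an exponentially weighted average of the queue length; the DRL agent described in the abstract would learn or replace this fixed mapping. A minimal sketch of classic RED follows; the threshold, weight, and probability values are illustrative placeholders, not parameters from the paper:

```python
import random

# Illustrative RED parameters (hypothetical values, not from the paper).
MIN_TH, MAX_TH = 5, 15   # queue-length thresholds, in packets
MAX_P = 0.1              # drop probability reached at MAX_TH
W_Q = 0.002              # EWMA weight for the average queue length

def red_drop_probability(avg_queue: float) -> float:
    """Classic RED: probability grows linearly between the two thresholds."""
    if avg_queue < MIN_TH:
        return 0.0
    if avg_queue >= MAX_TH:
        return 1.0  # force-drop region above the upper threshold
    return MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)

def on_packet_arrival(avg_queue: float, instant_queue: int) -> tuple[float, bool]:
    """Update the queue-length EWMA and decide whether to drop the arriving packet.

    A DRL-AQM agent would instead map observed queue state to a drop
    decision using a learned policy rather than this fixed linear rule.
    """
    avg_queue = (1 - W_Q) * avg_queue + W_Q * instant_queue
    drop = random.random() < red_drop_probability(avg_queue)
    return avg_queue, drop
```

The fixed thresholds above are exactly what is hard to tune as congestion dynamics change, which is the motivation the abstract gives for learning the drop policy instead.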
Journal description:
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.