Elastic Network Cache Control Using Deep Reinforcement Learning
Chunglae Cho, Seungjae Shin, H. Jeon, Seunghyun Yoon
2022 13th International Conference on Information and Communication Technology Convergence (ICTC), published 2022-10-19
DOI: 10.1109/ICTC55196.2022.9952648 (https://doi.org/10.1109/ICTC55196.2022.9952648)
Citations: 0
Abstract
Thanks to the development of virtualization technology, content service providers can flexibly lease virtualized resources from infrastructure service providers when deploying cache nodes in edge networks. As a result, they face two competing objectives: maximizing caching utility on the one hand and minimizing the cost of leasing cache storage on the other. This paper presents a caching algorithm based on deep reinforcement learning (DRL) that controls the caching policy through content time-to-live (TTL) values and elastically adjusts the cache size in response to a dynamically changing environment, so as to maximize the utility-minus-cost objective. We show that, under non-stationary traffic, our DRL-based approach outperforms conventional algorithms that are known to be optimal under stationary traffic.
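To make the utility-minus-cost objective concrete, the sketch below shows a toy TTL-cache environment in the style the abstract describes: the action chooses a content TTL and a leased cache size, and the per-step reward is caching utility minus leasing cost. This is a hypothetical illustration, not the paper's algorithm; the class name `TTLCacheEnv`, the Zipf-like request model, and all cost/utility constants are assumptions introduced here, and a random policy stands in for the trained DRL agent.

```python
import random
from dataclasses import dataclass, field

# Hypothetical sketch (not the paper's method): a toy TTL cache whose reward is
# caching utility minus storage-leasing cost. A DRL agent would be trained on
# the (state, reward) transitions returned by step().

@dataclass
class TTLCacheEnv:
    catalog_size: int = 100
    cost_per_slot: float = 0.05      # assumed leasing cost per cache slot per step
    hit_utility: float = 1.0         # assumed utility gained per cache hit
    zipf_alpha: float = 0.8          # assumed popularity skew (non-stationary in the paper)
    cache: dict = field(default_factory=dict)   # content_id -> expiry time
    t: int = 0

    def _request(self) -> int:
        # Zipf-like popularity: lower content ids are requested more often.
        weights = [1.0 / (i + 1) ** self.zipf_alpha for i in range(self.catalog_size)]
        return random.choices(range(self.catalog_size), weights=weights)[0]

    def step(self, ttl: int, cache_size: int):
        """Apply one action (chosen TTL, elastically leased cache size) to one request."""
        self.t += 1
        # Drop expired entries, then enforce the current elastic size limit.
        self.cache = {c: exp for c, exp in self.cache.items() if exp > self.t}
        while len(self.cache) > cache_size:
            self.cache.pop(next(iter(self.cache)))

        content = self._request()
        hit = content in self.cache
        if not hit and len(self.cache) < cache_size:
            self.cache[content] = self.t + ttl       # admit with the chosen TTL

        utility = self.hit_utility if hit else 0.0
        cost = self.cost_per_slot * cache_size       # pay for every leased slot
        reward = utility - cost                      # utility-minus-cost objective
        state = (len(self.cache), cache_size, float(hit))
        return state, reward

# Usage: a random policy standing in for the DRL agent.
env = TTLCacheEnv()
total = 0.0
for _ in range(1000):
    ttl = random.randint(1, 50)        # action component 1: content TTL
    size = random.randint(1, 30)       # action component 2: elastic cache size
    _, r = env.step(ttl, size)
    total += r
print(f"average reward (utility - cost): {total / 1000:.3f}")
```

In this framing, a larger leased cache raises the hit utility but also the per-step cost, which is why an agent that can adapt both the TTL and the cache size to shifting request popularity can outperform a fixed-size, stationary-optimal policy.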