Dimitrios Spatharakis, Ioannis Dimolitsas, E. Vlahakis, Dimitrios Dechouniotis, N. Athanasopoulos, S. Papavassiliou
Title: Distributed Resource Autoscaling in Kubernetes Edge Clusters
DOI: 10.23919/CNSM55787.2022.9965056
Published in: 2022 18th International Conference on Network and Service Management (CNSM)
Publication date: 2022-10-31
Citations: 2
Abstract
Maximizing the performance of modern applications requires timely management of virtualized resources. However, proactively deploying resources to meet specific application requirements under a dynamic workload profile of incoming requests is extremely challenging. To this end, the fundamental problems of task scheduling and resource autoscaling must be jointly addressed. This paper presents a scalable architecture, compatible with the decentralized nature of Kubernetes [1], that solves both. Exploiting the stability guarantees of a novel AIMD-like task scheduling solution, we dynamically redirect incoming requests towards the containerized application. To cope with dynamic workloads, a prediction mechanism estimates the number of incoming requests. Additionally, a Machine Learning (ML)-based application profiling model is introduced to address scaling, by co-designing the theoretically computed service rates obtained from the AIMD algorithm with the current performance metrics. The proposed solution is compared with state-of-the-art autoscaling techniques on a realistic dataset in a small edge infrastructure, and the trade-off between resource utilization and QoS violations is analyzed. Our solution provides better resource utilization, reducing CPU cores by 8% with only an acceptable increase in QoS violations.
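The paper's exact AIMD formulation is not reproduced in this record, but the general AIMD pattern it builds on — additive increase of a service rate while QoS holds, multiplicative decrease on a violation, as in TCP congestion control — can be sketched as follows. The function name, parameters `alpha` and `beta`, and the example trajectory are illustrative assumptions, not the authors' implementation:

```python
def aimd_update(rate: float, violated: bool,
                alpha: float = 1.0, beta: float = 0.5,
                min_rate: float = 1.0) -> float:
    """One AIMD step (illustrative, not the paper's controller):
    additive increase while QoS holds, multiplicative decrease
    on a QoS violation, floored at min_rate."""
    if violated:
        return max(min_rate, rate * beta)  # multiplicative decrease
    return rate + alpha                    # additive increase

# Example trajectory: the rate grows linearly, then halves on a violation.
rate = 4.0
trace = []
for violated in [False, False, False, True, False]:
    rate = aimd_update(rate, violated)
    trace.append(rate)
# trace == [5.0, 6.0, 7.0, 3.5, 4.5]
```

A controller of this shape converges to an efficient, fair operating point without global coordination, which is what makes it attractive for the decentralized Kubernetes setting the paper targets.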