A Performance Modelling Approach for SLA-Aware Resource Recommendation in Cloud Native Network Functions

Michel Gokan Khan, J. Taheri, M. Khoshkholghi, A. Kassler, Carolyn Cartwright, M. Darula, Shuiguang Deng
{"title":"云原生网络功能中sla感知资源推荐的性能建模方法","authors":"Michel Gokan Khan, J. Taheri, M. Khoshkholghi, A. Kassler, Carolyn Cartwright, M. Darula, Shuiguang Deng","doi":"10.1109/netsoft48620.2020.9165482","DOIUrl":null,"url":null,"abstract":"Network Function Virtualization (NFV) becomes the primary driver for the evolution of 5G networks, and in recent years, Network Function Cloudification (NFC) proved to be an inevitable part of this evolution. Microservice architecture also becomes the de facto choice for designing a modern Cloud Native Network Function (CNF) due to its ability to decouple components of each CNF into multiple independently manageable microservices. Even though taking advantage of microservice architecture in designing CNFs solves specific problems, this additional granularity makes estimating resource requirements for a Production Environment (PE) a complex task and sometimes leads to an over-provisioned PE. Traditionally, performance engineers dimension each CNF within a Service Function Chain (SFC) in a smaller Performance Testing Environment (PTE) through a series of performance benchmarks. Then, considering the Quality of Service (QoS) constraints of a Service Provider (SP) that are guaranteed in the Service Level Agreement (SLA), they estimate the required resources to set up the PE. In this paper, we used a machine learning approach to model the impact of each microservice's resource configuration (i.e., CPU and memory) on the QoS metrics (i.e. serving throughput and latency) of each SFC in a PTE. Then, considering an SP's Service Level Objectives (SLO), we proposed an algorithm to predict each microservice's resource capacities in a PE. We evaluated the accuracy of our prediction on a prototype of a cloud native 5G Home Subscriber Server (HSS). Our model showed 95%-78% accuracy in a PE that has 2–5 times more computing resources than the PTE.","PeriodicalId":239961,"journal":{"name":"2020 6th IEEE Conference on Network Softwarization (NetSoft)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"A Performance Modelling Approach for SLA-Aware Resource Recommendation in Cloud Native Network Functions\",\"authors\":\"Michel Gokan Khan, J. Taheri, M. Khoshkholghi, A. Kassler, Carolyn Cartwright, M. Darula, Shuiguang Deng\",\"doi\":\"10.1109/netsoft48620.2020.9165482\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Network Function Virtualization (NFV) becomes the primary driver for the evolution of 5G networks, and in recent years, Network Function Cloudification (NFC) proved to be an inevitable part of this evolution. Microservice architecture also becomes the de facto choice for designing a modern Cloud Native Network Function (CNF) due to its ability to decouple components of each CNF into multiple independently manageable microservices. Even though taking advantage of microservice architecture in designing CNFs solves specific problems, this additional granularity makes estimating resource requirements for a Production Environment (PE) a complex task and sometimes leads to an over-provisioned PE. Traditionally, performance engineers dimension each CNF within a Service Function Chain (SFC) in a smaller Performance Testing Environment (PTE) through a series of performance benchmarks. 
Then, considering the Quality of Service (QoS) constraints of a Service Provider (SP) that are guaranteed in the Service Level Agreement (SLA), they estimate the required resources to set up the PE. In this paper, we used a machine learning approach to model the impact of each microservice's resource configuration (i.e., CPU and memory) on the QoS metrics (i.e. serving throughput and latency) of each SFC in a PTE. Then, considering an SP's Service Level Objectives (SLO), we proposed an algorithm to predict each microservice's resource capacities in a PE. We evaluated the accuracy of our prediction on a prototype of a cloud native 5G Home Subscriber Server (HSS). Our model showed 95%-78% accuracy in a PE that has 2–5 times more computing resources than the PTE.\",\"PeriodicalId\":239961,\"journal\":{\"name\":\"2020 6th IEEE Conference on Network Softwarization (NetSoft)\",\"volume\":\"13 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 6th IEEE Conference on Network Softwarization (NetSoft)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/netsoft48620.2020.9165482\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 6th IEEE Conference on Network Softwarization (NetSoft)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/netsoft48620.2020.9165482","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10

Abstract

Network Function Virtualization (NFV) has become the primary driver for the evolution of 5G networks, and in recent years, Network Function Cloudification (NFC) has proved to be an inevitable part of this evolution. Microservice architecture has also become the de facto choice for designing a modern Cloud Native Network Function (CNF) due to its ability to decouple the components of each CNF into multiple independently manageable microservices. Even though taking advantage of microservice architecture in designing CNFs solves specific problems, this additional granularity makes estimating resource requirements for a Production Environment (PE) a complex task and sometimes leads to an over-provisioned PE. Traditionally, performance engineers dimension each CNF within a Service Function Chain (SFC) in a smaller Performance Testing Environment (PTE) through a series of performance benchmarks. Then, considering the Quality of Service (QoS) constraints of a Service Provider (SP) that are guaranteed in the Service Level Agreement (SLA), they estimate the resources required to set up the PE. In this paper, we used a machine learning approach to model the impact of each microservice's resource configuration (i.e., CPU and memory) on the QoS metrics (i.e., serving throughput and latency) of each SFC in a PTE. Then, considering an SP's Service Level Objectives (SLO), we proposed an algorithm to predict each microservice's resource capacities in a PE. We evaluated the accuracy of our prediction on a prototype of a cloud native 5G Home Subscriber Server (HSS). Our model showed 95%-78% accuracy in a PE that has 2-5 times more computing resources than the PTE.
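
As a rough illustration of the two steps described in the abstract, the Python sketch below fits a regression model on made-up PTE benchmark data, mapping per-microservice CPU/memory allocations to SFC throughput and latency, and then searches a grid of candidate PE configurations for the cheapest one whose predicted QoS satisfies the SLOs. The model choice (a random forest), the benchmark numbers, the SLO thresholds, and the cost proxy are all illustrative assumptions; they are not taken from the paper.

```python
# Minimal sketch (not the paper's actual pipeline): learn a resource-to-QoS model
# from PTE benchmarks, then recommend a PE configuration that meets the SLOs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical PTE benchmark data: each row is [cpu_ms1, mem_ms1, cpu_ms2, mem_ms2]
# (millicores, MiB) for two microservices of one CNF; each target row is
# [throughput_rps, latency_ms] measured for the SFC at that configuration.
X_pte = np.array([
    [500, 512, 500, 512],
    [1000, 1024, 500, 512],
    [1000, 1024, 1000, 1024],
    [2000, 2048, 1000, 1024],
    [2000, 2048, 2000, 2048],
])
y_pte = np.array([
    [120, 45.0],
    [210, 30.0],
    [260, 22.0],
    [430, 15.0],
    [510, 12.0],
])

# Step 1: model the impact of per-microservice resources on the SFC's QoS metrics.
qos_model = RandomForestRegressor(n_estimators=200, random_state=0)
qos_model.fit(X_pte, y_pte)

# Step 2: pick the cheapest candidate PE configuration predicted to satisfy the SLOs.
slo = {"min_throughput_rps": 400, "max_latency_ms": 20.0}

def recommend(candidates):
    """Return the lowest-cost candidate predicted to satisfy the SLOs, or None."""
    feasible = []
    for cfg in candidates:
        throughput, latency = qos_model.predict([cfg])[0]
        if throughput >= slo["min_throughput_rps"] and latency <= slo["max_latency_ms"]:
            # Crude cost proxy: total CPU plus total memory across microservices.
            feasible.append((sum(cfg), cfg))
    return min(feasible)[1] if feasible else None

# Candidate PE configurations: a grid over scaled-up PTE settings (illustrative).
candidates = [
    [c1, m1, c2, m2]
    for c1 in (1000, 2000, 4000) for m1 in (1024, 2048, 4096)
    for c2 in (1000, 2000, 4000) for m2 in (1024, 2048, 4096)
]
print("Recommended PE configuration:", recommend(candidates))
```

In practice, the candidate grid, the learning algorithm, and the cost function would be chosen to match the SP's PE constraints and the benchmark design used in the PTE; the sketch only shows how a learned QoS model can be inverted into an SLO-aware resource recommendation.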