XAI-Enabled Fine Granular Vertical Resources Autoscaler

Mohamed-Anis Mekki, B. Brik, A. Ksentini, C. Verikoukis
{"title":"xai支持细颗粒垂直资源自动缩放","authors":"Mohamed-Anis Mekki, B. Brik, A. Ksentini, C. Verikoukis","doi":"10.1109/NetSoft57336.2023.10175438","DOIUrl":null,"url":null,"abstract":"Fine-granular management of cloud-native computing resources is one of the key features sought by cloud and edge operators. It consists in giving the exact amount of computing resources needed by a microservice to avoid resource over-provisioning, which is, by default, the adopted solution to prevent service degradation. Fine-granular resource management guarantees better computing resource usage, which is critical to reducing energy consumption and resource wastage (vital in edge computing). In this paper, we propose a novel Zero-touch management (ZSM) framework featuring a fine-granular computing resource scaler in a cloud-native environment. The proposed scaler algorithm uses Artificial Intelligence (AI)/Machine Learning (ML) models to predict microservice performances; if a service degradation is detected, then a root-cause analysis is conducted using eXplainable AI (XAI). Based on the XAI output, the proposed framework scales only the needed (exact amount) resources (i.e., CPU or memory) to overcome the service degradation. The proposed framework and resource scheduler have been implemented on top of a cloud-native platform based on the well-known Kubernetes tool. The obtained results clearly indicate that the proposed scheduler with lesser resources achieves the same service quality as the default scheduler of Kubernetes.","PeriodicalId":223208,"journal":{"name":"2023 IEEE 9th International Conference on Network Softwarization (NetSoft)","volume":"112 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"XAI-Enabled Fine Granular Vertical Resources Autoscaler\",\"authors\":\"Mohamed-Anis Mekki, B. Brik, A. Ksentini, C. Verikoukis\",\"doi\":\"10.1109/NetSoft57336.2023.10175438\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Fine-granular management of cloud-native computing resources is one of the key features sought by cloud and edge operators. It consists in giving the exact amount of computing resources needed by a microservice to avoid resource over-provisioning, which is, by default, the adopted solution to prevent service degradation. Fine-granular resource management guarantees better computing resource usage, which is critical to reducing energy consumption and resource wastage (vital in edge computing). In this paper, we propose a novel Zero-touch management (ZSM) framework featuring a fine-granular computing resource scaler in a cloud-native environment. The proposed scaler algorithm uses Artificial Intelligence (AI)/Machine Learning (ML) models to predict microservice performances; if a service degradation is detected, then a root-cause analysis is conducted using eXplainable AI (XAI). Based on the XAI output, the proposed framework scales only the needed (exact amount) resources (i.e., CPU or memory) to overcome the service degradation. The proposed framework and resource scheduler have been implemented on top of a cloud-native platform based on the well-known Kubernetes tool. 
The obtained results clearly indicate that the proposed scheduler with lesser resources achieves the same service quality as the default scheduler of Kubernetes.\",\"PeriodicalId\":223208,\"journal\":{\"name\":\"2023 IEEE 9th International Conference on Network Softwarization (NetSoft)\",\"volume\":\"112 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE 9th International Conference on Network Softwarization (NetSoft)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NetSoft57336.2023.10175438\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 9th International Conference on Network Softwarization (NetSoft)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NetSoft57336.2023.10175438","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Fine-granular management of cloud-native computing resources is one of the key features sought by cloud and edge operators. It consists in giving the exact amount of computing resources needed by a microservice to avoid resource over-provisioning, which is, by default, the adopted solution to prevent service degradation. Fine-granular resource management guarantees better computing resource usage, which is critical to reducing energy consumption and resource wastage (vital in edge computing). In this paper, we propose a novel Zero-touch management (ZSM) framework featuring a fine-granular computing resource scaler in a cloud-native environment. The proposed scaler algorithm uses Artificial Intelligence (AI)/Machine Learning (ML) models to predict microservice performances; if a service degradation is detected, then a root-cause analysis is conducted using eXplainable AI (XAI). Based on the XAI output, the proposed framework scales only the needed (exact amount) resources (i.e., CPU or memory) to overcome the service degradation. The proposed framework and resource scheduler have been implemented on top of a cloud-native platform based on the well-known Kubernetes tool. The obtained results clearly indicate that the proposed scheduler with lesser resources achieves the same service quality as the default scheduler of Kubernetes.
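The abstract describes the decision loop only at a high level (predict performance, explain a detected degradation, then vertically scale only the implicated resource). The following is a minimal, hypothetical sketch of how such a loop could be wired together, assuming SHAP as the XAI explainer, a scikit-learn regressor as the performance-prediction model, and the official Kubernetes Python client for the scaling action; the metric names, SLA threshold, deployment/container name `demo-microservice`, and resource values are illustrative assumptions and not details taken from the paper.

```python
# Hypothetical sketch of an XAI-guided vertical scaling decision.
# Assumptions (not from the paper): SHAP for root-cause attribution, a
# scikit-learn model for latency prediction, synthetic training data, and
# a Kubernetes Deployment patch for the vertical scaling step.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from kubernetes import client, config

FEATURES = ["cpu_usage", "memory_usage", "request_rate"]  # monitored metrics (assumed)
LATENCY_SLA_MS = 200.0                                     # degradation threshold (assumed)

# 1) Performance-prediction model: latency as a function of resource metrics.
#    Synthetic data stands in for historical monitoring samples.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(1000, len(FEATURES)))
y = 150.0 + 120.0 * X[:, 0] ** 2 + 60.0 * X[:, 1] + 10.0 * X[:, 2] + rng.normal(0, 5, 1000)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# 2) Root-cause analysis: SHAP attributes the predicted latency to each metric.
explainer = shap.TreeExplainer(model)

def pick_resource_to_scale(sample: np.ndarray) -> str | None:
    """Return 'cpu' or 'memory' if degradation is predicted, else None."""
    predicted_latency = model.predict(sample.reshape(1, -1))[0]
    if predicted_latency <= LATENCY_SLA_MS:
        return None  # no degradation predicted, nothing to scale
    contrib = explainer.shap_values(sample.reshape(1, -1))[0]
    # Scale only the resource whose metric pushes predicted latency up the most.
    return "cpu" if contrib[0] >= contrib[1] else "memory"

# 3) Vertical scaling: patch only the chosen resource request on the Deployment.
def scale_resource(resource: str, new_value: str,
                   deployment: str = "demo-microservice", namespace: str = "default"):
    config.load_kube_config()  # or load_incluster_config() when running in-cluster
    # Illustrative: assumes the container is named like the deployment.
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": deployment, "resources": {"requests": {resource: new_value}}}]}}}}
    client.AppsV1Api().patch_namespaced_deployment(deployment, namespace, patch)

if __name__ == "__main__":
    current_metrics = np.array([0.95, 0.40, 0.30])  # e.g. CPU near saturation
    target = pick_resource_to_scale(current_metrics)
    if target == "cpu":
        scale_resource("cpu", "750m")
    elif target == "memory":
        scale_resource("memory", "512Mi")
```

In this sketch only the resource singled out by the attribution step is adjusted, which mirrors the paper's stated goal of scaling the exact resource (CPU or memory) needed rather than over-provisioning both.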