AutoDECK: Automated Declarative Performance Evaluation and Tuning Framework on Kubernetes

IEEE Cloud Computing (Q1, Computer Science) · Publication date: 2022-07-01 · DOI: 10.1109/CLOUD55607.2022.00053
Sunyanan Choochotkaew, Tatsuhiro Chiba, Scott Trent, Takeshi Yoshimura, Marcelo Amaral
{"title":"AutoDECK: Automated Declarative Performance Evaluation and Tuning Framework on Kubernetes","authors":"Sunyanan Choochotkaew, Tatsuhiro Chiba, Scott Trent, Takeshi Yoshimura, Marcelo Amaral","doi":"10.1109/CLOUD55607.2022.00053","DOIUrl":null,"url":null,"abstract":"Containerization and application variety bring many challenges in automating evaluations for performance tuning and comparison among infrastructure choices. Due to the tightly-coupled design of benchmarks and evaluation tools, the present automated tools on Kubernetes are limited to trivial microbenchmarks and cannot be extended to complex cloudnative architectures such as microservices and serverless, which are usually managed by customized operators for setting up workload dependencies. In this paper, we propose AutoDECK, a performance evaluation framework with a fully declarative manner. The proposed framework automates configuring, deploying, evaluating, summarizing, and visualizing the benchmarking workload. It seamlessly integrates mature Kubernetes-native systems and extends multiple functionalities such as tracking the image-build pipeline, and auto-tuning. We present five use cases of evaluations and analysis through various kinds of bench-marks including microbenchmarks and HPC/AI benchmarks. The evaluation results can also differentiate characteristics such as resource usage behavior and parallelism effectiveness between different clusters. Furthermore, the results demonstrate the benefit of integrating an auto-tuning feature in the proposed framework, as shown by the 10% transferred memory bytes in the Sysbench benchmark.","PeriodicalId":54281,"journal":{"name":"IEEE Cloud Computing","volume":"102 1","pages":"309-314"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Cloud Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CLOUD55607.2022.00053","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Computer Science","Score":null,"Total":0}
Citations: 1

Abstract

Containerization and application variety pose many challenges for automating evaluations, both for performance tuning and for comparison among infrastructure choices. Because benchmarks and evaluation tools are tightly coupled by design, existing automated tools on Kubernetes are limited to trivial microbenchmarks and cannot be extended to complex cloud-native architectures such as microservices and serverless, which are usually managed by customized operators that set up workload dependencies. In this paper, we propose AutoDECK, a fully declarative performance evaluation framework. The proposed framework automates configuring, deploying, evaluating, summarizing, and visualizing benchmarking workloads. It seamlessly integrates mature Kubernetes-native systems and extends them with functionality such as image-build pipeline tracking and auto-tuning. We present five use cases of evaluation and analysis across various kinds of benchmarks, including microbenchmarks and HPC/AI benchmarks. The evaluation results can also differentiate characteristics such as resource-usage behavior and parallelism effectiveness between different clusters. Furthermore, the results demonstrate the benefit of integrating an auto-tuning feature into the proposed framework, as shown by a 10% improvement in transferred memory bytes in the Sysbench benchmark.
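The abstract describes a fully declarative workflow in which a benchmark run is expressed as a Kubernetes resource and an operator handles deployment, evaluation, summarization, and tuning. The sketch below illustrates that idea in Python with the official Kubernetes client. It is a minimal illustration only: the API group autodeck.example.com, the BenchmarkRun kind, and every spec field are hypothetical placeholders, not the actual AutoDECK resource schema, which the abstract does not specify.

    # Minimal sketch of submitting a declarative benchmark run from Python.
    # The group/version/kind and all spec fields are hypothetical placeholders,
    # not the real AutoDECK CRD.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a cluster
    api = client.CustomObjectsApi()

    benchmark_run = {
        "apiVersion": "autodeck.example.com/v1alpha1",  # hypothetical API group
        "kind": "BenchmarkRun",                          # hypothetical kind
        "metadata": {"name": "sysbench-memory", "namespace": "benchmarks"},
        "spec": {
            # Which benchmark image to build, track, and run.
            "benchmark": {"name": "sysbench", "image": "example/sysbench:latest"},
            # Parameters the framework could sweep or auto-tune.
            "parameters": {"memory-block-size": "1K", "threads": "4"},
            # Ask the operator to tune against a metric instead of fixing values.
            "autoTuning": {"enabled": True, "objective": "maximize:transferred_bytes"},
        },
    }

    # Creating the custom resource hands the rest of the workflow (deploy, run,
    # summarize, visualize) to the in-cluster operator, which is the declarative
    # model the abstract describes.
    api.create_namespaced_custom_object(
        group="autodeck.example.com",
        version="v1alpha1",
        namespace="benchmarks",
        plural="benchmarkruns",
        body=benchmark_run,
    )

Under this hypothetical model, tuning the Sysbench memory workload would amount to editing the declared objective in the manifest rather than rewriting evaluation scripts.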
Source journal
IEEE Cloud Computing (Computer Science - Computer Networks and Communications)
CiteScore: 11.20
Self-citation rate: 0.00%
Articles published: 0

Journal description: Cessation. IEEE Cloud Computing is committed to the timely publication of peer-reviewed articles that provide innovative research ideas, application results, and case studies in all areas of cloud computing. Topics relating to novel theory, algorithms, performance analyses, and applications of techniques are covered. More specifically: Cloud software, Cloud security, Trade-offs between privacy and utility of cloud, Cloud in the business environment, Cloud economics, Cloud governance, Migrating to the cloud, Cloud standards, Development tools, Backup and recovery, Interoperability, Applications management, Data analytics, Communications protocols, Mobile cloud, Private clouds, Liability issues for data loss on clouds, Data integration, Big data, Cloud education, Cloud skill sets, Cloud energy consumption, The architecture of cloud computing, Applications in commerce, education, and industry, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), Business Process as a Service (BPaaS).
Latest articles from this journal
Different in different ways: A network-analysis approach to voice and prosody in Autism Spectrum Disorder.
Layered Contention Mitigation for Cloud Storage
Towards More Effective and Explainable Fault Management Using Cross-Layer Service Topology
Bypass Container Overlay Networks with Transparent BPF-driven Socket Replacement
Event-Driven Approach for Monitoring and Orchestration of Cloud and Edge-Enabled IoT Systems