Queueing-Theoretic Performance Analysis of a Low-Entropy Labeled Network Stack

International Journal of Intelligent Computing and Cybernetics (IF 2.2, Q3, Computer Science, Cybernetics) · Pub Date: 2022-09-05 · DOI: 10.34133/2022/9863054
Hongrui Guo, Wenli Zhang, Zishu Yu, Mingyu Chen
{"title":"Queueing-Theoretic Performance Analysis of a Low-Entropy Labeled Network Stack","authors":"Hongrui Guo, Wenli Zhang, Zishu Yu, Mingyu Chen","doi":"10.34133/2022/9863054","DOIUrl":null,"url":null,"abstract":"Theoretical modeling is a popular method for quantitative analysis and performance prediction of computer systems, including cloud systems. Low entropy cloud (i.e., low interference among workloads and low system jitter) is becoming a new trend, where the Labeled Network Stack (LNS) based server is a good case to gain orders of magnitude performance improvement compared to servers based on traditional network stacks. However, it is desirable to figure out 1) where the low tail latency and the low entropy of LNS mainly come from, compared with mTCP, a typical user-space network stack in academia, and Linux network stack, the mainstream network stack in industry, and 2) how much LNS can be further optimized. Therefore, we propose a queueing theory-based analytical method defining a bottleneck stage to simplify the quantitative analysis of tail latency. Facilitated by the analytical method, we establish models characterizing the change of processing speed in different stages for an LNS-based server, an mTCP-based server, and a Linux-based server, with bursty traffic as an example. Under such traffic, each network service stage's processing speed is obtained by non-intrusive basic tests to identify the slowest stage as the bottleneck according to traffic and system characteristics. Our models reveal that the full-datapath prioritized processing and the full-path zero-copy are primary sources of the low tail latency and the low entropy of the LNS-based server, with 0.8%-24.4% error for the 99th percentile latency. In addition, the model of the LNS-based server can give the best number of worker threads querying a database, improving 2.1×-3.5× in concurrency.","PeriodicalId":45291,"journal":{"name":"International Journal of Intelligent Computing and Cybernetics","volume":null,"pages":null},"PeriodicalIF":2.2000,"publicationDate":"2022-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Computing and Cybernetics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.34133/2022/9863054","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}

Abstract

Theoretical modeling is a popular method for quantitative analysis and performance prediction of computer systems, including cloud systems. The low-entropy cloud (i.e., low interference among workloads and low system jitter) is becoming a new trend, and the server based on the Labeled Network Stack (LNS) is a representative case, gaining orders-of-magnitude performance improvements over servers based on traditional network stacks. However, it is desirable to figure out 1) where the low tail latency and the low entropy of LNS mainly come from, compared with mTCP, a typical user-space network stack from academia, and the Linux network stack, the mainstream stack in industry, and 2) how much further LNS can be optimized. Therefore, we propose a queueing-theory-based analytical method that defines a bottleneck stage to simplify the quantitative analysis of tail latency. Facilitated by this method, we establish models characterizing how the processing speed changes across stages for an LNS-based server, an mTCP-based server, and a Linux-based server, taking bursty traffic as an example. Under such traffic, the processing speed of each network service stage is obtained through non-intrusive basic tests, and the slowest stage is identified as the bottleneck according to the traffic and system characteristics. Our models reveal that full-datapath prioritized processing and full-path zero-copy are the primary sources of the low tail latency and the low entropy of the LNS-based server, with 0.8%-24.4% error for the 99th-percentile latency. In addition, the model of the LNS-based server can give the best number of worker threads for querying a database, improving concurrency by 2.1×-3.5×.
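
The abstract does not reproduce the model's equations, so the following is only a minimal sketch of the bottleneck-stage idea: each service stage is approximated as an M/M/1 queue, the slowest stage is taken as the bottleneck, and its 99th-percentile sojourn time serves as a rough tail-latency estimate. The stage names, rates, and the M/M/1 assumption are illustrative and not taken from the paper.

```python
import math

def p99_sojourn_mm1(arrival_rate, service_rate, p=0.99):
    """p-th percentile sojourn (queueing + service) time of an M/M/1 stage.

    In an M/M/1 queue the sojourn time is exponentially distributed with
    rate (mu - lambda), so the p-th percentile is -ln(1 - p) / (mu - lambda).
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable stage: arrival rate must be below service rate")
    return -math.log(1.0 - p) / (service_rate - arrival_rate)

def bottleneck_stage(service_rates):
    """The slowest stage (smallest service rate) is taken as the bottleneck."""
    return min(service_rates, key=service_rates.get)

# Hypothetical per-stage service rates (requests per ms), as would be measured
# by non-intrusive basic tests; the names and numbers are illustrative only.
stages = {"nic_rx": 4.0, "protocol_processing": 2.5, "app_worker": 1.2, "tx": 3.0}

slow = bottleneck_stage(stages)
print(f"bottleneck: {slow}, "
      f"p99 latency ~ {p99_sojourn_mm1(1.0, stages[slow]):.2f} ms at 1.0 req/ms load")
```

The intuition behind reducing the analysis to a single stage is that under bursty traffic the queue in front of the slowest stage grows fastest, so its sojourn-time percentile dominates the end-to-end tail.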