Enabling Service Cache in Edge Clouds

ACM Transactions on Internet of Things · IF 3.5 · Q2 (Computer Science, Information Systems) · Publication date: 2021-07-01 · DOI: 10.1145/3456564
Chih-Kai Huang, Shan-Hsiang Shen
{"title":"Enabling Service Cache in Edge Clouds","authors":"Chih-Kai Huang, Shan-Hsiang Shen","doi":"10.1145/3456564","DOIUrl":null,"url":null,"abstract":"The next-generation 5G cellular networks are designed to support the internet of things (IoT) networks; network components and services are virtualized and run either in virtual machines (VMs) or containers. Moreover, edge clouds (which are closer to end users) are leveraged to reduce end-to-end latency especially for some IoT applications, which require short response time. However, the computational resources are limited in edge clouds. To minimize overall service latency, it is crucial to determine carefully which services should be provided in edge clouds and serve more mobile or IoT devices locally. In this article, we propose a novel service cache framework called S-Cache, which automatically caches popular services in edge clouds. In addition, we design a new cache replacement policy to maximize the cache hit rates. Our evaluations use real log files from Google to form two datasets to evaluate the performance. The proposed cache replacement policy is compared with other policies such as greedy-dual-size-frequency (GDSF) and least-frequently-used (LFU). The experimental results show that the cache hit rates are improved by 39% on average, and the average latency of our cache replacement policy decreases 41% and 38% on average in these two datasets. This indicates that our approach is superior to other existing cache policies and is more suitable in multi-access edge computing environments. In the implementation, S-Cache relies on OpenStack to clone services to edge clouds and direct the network traffic. We also evaluate the cost of cloning the service to an edge cloud. The cloning cost of various real applications is studied by experiments under the presented framework and different environments.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":null,"pages":null},"PeriodicalIF":3.5000,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Internet of Things","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3456564","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 7

Abstract

The next-generation 5G cellular networks are designed to support internet of things (IoT) networks; network components and services are virtualized and run either in virtual machines (VMs) or containers. Moreover, edge clouds, which are closer to end users, are leveraged to reduce end-to-end latency, especially for IoT applications that require short response times. However, computational resources are limited in edge clouds. To minimize overall service latency, it is crucial to determine carefully which services should be provided in edge clouds so that more mobile or IoT devices can be served locally. In this article, we propose a novel service cache framework called S-Cache, which automatically caches popular services in edge clouds. In addition, we design a new cache replacement policy to maximize cache hit rates. Our evaluations use real log files from Google to form two datasets for measuring performance. The proposed cache replacement policy is compared with other policies such as greedy-dual-size-frequency (GDSF) and least-frequently-used (LFU). The experimental results show that cache hit rates are improved by 39% on average, and the average latency under our cache replacement policy decreases by 41% and 38% on the two datasets, respectively. This indicates that our approach is superior to other existing cache policies and is better suited to multi-access edge computing environments. In the implementation, S-Cache relies on OpenStack to clone services to edge clouds and to direct network traffic. We also evaluate the cost of cloning a service to an edge cloud; the cloning cost of various real applications is studied experimentally under the presented framework and in different environments.
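The abstract compares the proposed replacement policy against classic baselines such as GDSF and LFU. As a reference point only, the sketch below implements a generic greedy-dual-size-frequency cache in Python; the `CachedService` fields and the scoring formula (priority = clock + frequency * cost / size, with the clock advanced to the evicted entry's priority) follow the textbook GDSF definition and are illustrative assumptions, not the S-Cache policy or any code from the article.

```python
# Minimal GDSF-style cache sketch (illustrative; not the S-Cache policy).
# Priority(s) = clock + frequency(s) * cost(s) / size(s); on eviction the
# clock is advanced to the evicted entry's priority ("aging").

from dataclasses import dataclass


@dataclass
class CachedService:
    name: str
    size: float          # e.g., container/VM image size (hypothetical unit)
    cost: float = 1.0    # e.g., cost to clone the service to the edge cloud
    frequency: int = 0   # number of hits so far
    priority: float = 0.0


class GDSFCache:
    def __init__(self, capacity: float):
        self.capacity = capacity
        self.used = 0.0
        self.clock = 0.0
        self.entries: dict[str, CachedService] = {}

    def _score(self, s: CachedService) -> float:
        return self.clock + s.frequency * s.cost / s.size

    def access(self, name: str, size: float, cost: float = 1.0) -> bool:
        """Return True on a cache hit, False on a miss (inserting if possible)."""
        if name in self.entries:
            s = self.entries[name]
            s.frequency += 1
            s.priority = self._score(s)
            return True
        # Miss: evict lowest-priority services until the new one fits.
        while self.used + size > self.capacity and self.entries:
            victim = min(self.entries.values(), key=lambda e: e.priority)
            self.clock = victim.priority          # aging step
            self.used -= victim.size
            del self.entries[victim.name]
        if size <= self.capacity:
            s = CachedService(name, size, cost, frequency=1)
            s.priority = self._score(s)
            self.entries[name] = s
            self.used += size
        return False
```

An LFU baseline corresponds to scoring entries by frequency alone, while the article's own policy replaces this scoring function with one designed to maximize hit rates for edge-hosted services; the abstract does not give its exact form.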
Source journal: ACM Transactions on Internet of Things (CiteScore 5.20, self-citation rate 3.70%)
Latest articles in this journal:
FLAShadow: A Flash-based Shadow Stack for Low-end Embedded Systems
CoSense: Deep Learning Augmented Sensing for Coexistence with Networking in Millimeter-Wave Picocells
CASPER: Context-Aware IoT Anomaly Detection System for Industrial Robotic Arms
Collaborative Video Caching in the Edge Network using Deep Reinforcement Learning
ARIoTEDef: Adversarially Robust IoT Early Defense System Based on Self-Evolution against Multi-step Attacks