SLAP: Segmented Reuse-Time-Label Based Admission Policy for Content Delivery Network Caching

ACM Transactions on Architecture and Code Optimization · Pub Date: 2024-02-09 · DOI: 10.1145/3646550 · IF 1.5 · JCR Q4 (Computer Science, Hardware & Architecture) · CAS Region 3 (Computer Science)
Ke Liu, Kan Wu, Hua Wang, Ke Zhou, Peng Wang, Ji Zhang, Cong Li
{"title":"SLAP: Segmented Reuse-Time-Label Based Admission Policy for Content Delivery Network Caching","authors":"Ke Liu, Kan Wu, Hua Wang, Ke Zhou, Peng Wang, Ji Zhang, Cong Li","doi":"10.1145/3646550","DOIUrl":null,"url":null,"abstract":"<p>“Learned” admission policies have shown promise in improving Content Delivery Network (CDN) cache performance and lowering operational costs. Unfortunately, existing learned policies are optimized with a few fixed cache sizes while in reality, cache sizes often vary over time in an unpredictable manner. As a result, existing solutions cannot provide consistent benefits in production settings. </p><p>We present <i>SLAP</i>, a learned CDN cache admission approach based on segmented object reuse time prediction. <i>SLAP</i> predicts an object’s reuse time range using the Long-Short-Term-Memory model and admits objects that will be reused (before eviction) given the current cache size. <i>SLAP</i> decouples model training from cache size, allowing it to adapt to arbitrary sizes. The key to our solution is a novel segmented labeling scheme that makes <i>SLAP</i> without requiring precise prediction on object reuse time. To further make <i>SLAP</i> a practical and efficient solution, we propose aggressive reusing of computation and training on sampled traces to optimize model training, and a specialized predictor architecture that overlaps prediction computation with miss object fetching to optimize model inference. Our experiments using production CDN traces show that SLAP achieves significantly lower write traffic (38%-59%), longer SSDs lifetime (104%-178%), a consistently higher hit rate (3.2%-11.7%), and requires no effort to adapt to changing cache sizes, outperforming existing policies.</p>","PeriodicalId":50920,"journal":{"name":"ACM Transactions on Architecture and Code Optimization","volume":"80 1","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2024-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Architecture and Code Optimization","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3646550","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

“Learned” admission policies have shown promise in improving Content Delivery Network (CDN) cache performance and lowering operational costs. Unfortunately, existing learned policies are optimized for a few fixed cache sizes, while in reality cache sizes often vary over time in an unpredictable manner. As a result, existing solutions cannot provide consistent benefits in production settings.

We present SLAP, a learned CDN cache admission approach based on segmented object reuse time prediction. SLAP predicts an object’s reuse time range using a Long Short-Term Memory (LSTM) model and admits objects that will be reused (before eviction) given the current cache size. SLAP decouples model training from cache size, allowing it to adapt to arbitrary sizes. The key to our solution is a novel segmented labeling scheme that makes SLAP effective without requiring precise prediction of object reuse times. To further make SLAP a practical and efficient solution, we propose aggressive reuse of computation and training on sampled traces to optimize model training, and a specialized predictor architecture that overlaps prediction computation with the fetching of missed objects to optimize model inference. Our experiments using production CDN traces show that SLAP achieves significantly lower write traffic (38%-59%), longer SSD lifetimes (104%-178%), and a consistently higher hit rate (3.2%-11.7%), requires no effort to adapt to changing cache sizes, and outperforms existing policies.
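To make the admission rule concrete, below is a minimal Python sketch of the segmented-label idea as described in the abstract: reuse times are bucketed into cache-size-independent segments at training time, and at serve time an object is admitted only if its predicted segment closes before the cache's eviction horizon. The segment boundaries, the FIFO-style horizon estimate, and all function names here are illustrative assumptions, not SLAP's actual implementation; in SLAP the segment prediction would come from the LSTM model over the request stream.

```python
import bisect

# Hypothetical segment boundaries (seconds) that bucket a reuse time into
# coarse ranges. Labels depend only on time, not on cache size, which is
# what lets one trained model serve arbitrary cache sizes.
SEGMENT_BOUNDARIES = [60, 600, 3600, 6 * 3600, 24 * 3600]

def reuse_time_to_label(reuse_time_s: float) -> int:
    """Map an observed reuse time to a segment label for training.
    Label i covers reuse times up to SEGMENT_BOUNDARIES[i]; the last
    label means 'reused later than a day, or never'."""
    return bisect.bisect_right(SEGMENT_BOUNDARIES, reuse_time_s)

def eviction_horizon_s(cache_size_bytes: int,
                       write_rate_bytes_per_s: float) -> float:
    """Rough lifetime of a newly admitted object under FIFO-like eviction:
    the time until cache_size_bytes of newer admissions push it out."""
    return cache_size_bytes / write_rate_bytes_per_s

def admit(predicted_segment: int, cache_size_bytes: int,
          write_rate_bytes_per_s: float) -> bool:
    """Admit only if the predicted reuse segment ends strictly before the
    eviction horizon's segment, so the object is expected to be hit."""
    horizon = eviction_horizon_s(cache_size_bytes, write_rate_bytes_per_s)
    horizon_segment = bisect.bisect_right(SEGMENT_BOUNDARIES, horizon)
    return predicted_segment < horizon_segment

if __name__ == "__main__":
    TiB, MiB = 2**40, 2**20
    # Example: a 1 TiB cache absorbing 100 MiB/s of admitted writes evicts
    # a new object after ~10,486 s (~2.9 h), so only objects predicted to
    # be reused within the first three segments (<= 1 hour) are admitted.
    print(admit(2, TiB, 100 * MiB))  # True:  reuse expected within 1 h
    print(admit(3, TiB, 100 * MiB))  # False: reuse expected within 1-6 h
```

Note how the coarse segments do the work the abstract describes: the model never has to predict an exact reuse time, only a range, and the cache-size-dependent part of the decision is deferred to the cheap horizon computation at admission time.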

Source Journal

ACM Transactions on Architecture and Code Optimization
Category: Engineering & Technology - Computer Science: Theory & Methods
CiteScore: 3.60
Self-citation rate: 6.20%
Articles per year: 78
Review time: 6-12 weeks
About the Journal

ACM Transactions on Architecture and Code Optimization (TACO) focuses on hardware, software, and system research spanning the fields of computer architecture and code optimization. Articles that appear in TACO will either present new techniques and concepts or report on experiences and experiments with actual systems. Insights useful to architects, hardware or software developers, designers, builders, and users will be emphasized.