DIDACache

Zhaoyan Shen, Feng Chen, Yichen Jia, Z. Shao
{"title":"DIDACache","authors":"Zhaoyan Shen, Feng Chen, Yichen Jia, Z. Shao","doi":"10.1145/3203410","DOIUrl":null,"url":null,"abstract":"Key-value caching is crucial to today’s low-latency Internet services. Conventional key-value cache systems, such as Memcached, heavily rely on expensive DRAM memory. To lower Total Cost of Ownership, the industry recently is moving toward more cost-efficient flash-based solutions, such as Facebook’s McDipper [14] and Twitter’s Fatcache [56]. These cache systems typically take commercial SSDs and adopt a Memcached-like scheme to store and manage key-value cache data in flash. Such a practice, though simple, is inefficient due to the huge semantic gap between the key-value cache manager and the underlying flash devices. In this article, we advocate to reconsider the cache system design and directly open device-level details of the underlying flash storage for key-value caching. We propose an enhanced flash-aware key-value cache manager, which consists of a novel unified address mapping module, an integrated garbage collection policy, a dynamic over-provisioning space management, and a customized wear-leveling policy, to directly drive the flash management. A thin intermediate library layer provides a slab-based abstraction of low-level flash memory space and an API interface for directly and easily operating flash devices. A special flash memory SSD hardware that exposes flash physical details is adopted to store key-value items. This co-design approach bridges the semantic gap and well connects the two layers together, which allows us to leverage both the domain knowledge of key-value caches and the unique device properties. In this way, we can maximize the efficiency of key-value caching on flash devices while minimizing its weakness. We implemented a prototype, called DIDACache, based on the Open-Channel SSD platform. Our experiments on real hardware show that we can significantly increase the throughput by 35.5%, reduce the latency by 23.6%, and remove unnecessary erase operations by 28%.","PeriodicalId":273014,"journal":{"name":"ACM Transactions on Storage (TOS)","volume":"173 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Storage (TOS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3203410","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Key-value caching is crucial to today’s low-latency Internet services. Conventional key-value cache systems, such as Memcached, rely heavily on expensive DRAM. To lower Total Cost of Ownership, the industry has recently been moving toward more cost-efficient flash-based solutions, such as Facebook’s McDipper [14] and Twitter’s Fatcache [56]. These cache systems typically use commercial SSDs and adopt a Memcached-like scheme to store and manage key-value cache data in flash. Such a practice, though simple, is inefficient due to the huge semantic gap between the key-value cache manager and the underlying flash devices. In this article, we advocate reconsidering the cache system design and directly exposing device-level details of the underlying flash storage to key-value caching. We propose an enhanced flash-aware key-value cache manager, which consists of a novel unified address mapping module, an integrated garbage collection policy, dynamic over-provisioning space management, and a customized wear-leveling policy, to directly drive the flash management. A thin intermediate library layer provides a slab-based abstraction of the low-level flash memory space and an API for directly and easily operating the flash device. Special flash SSD hardware that exposes physical flash details is used to store the key-value items. This co-design approach bridges the semantic gap and tightly connects the two layers, allowing us to leverage both the domain knowledge of key-value caches and the unique device properties. In this way, we can maximize the efficiency of key-value caching on flash devices while minimizing its weaknesses. We implemented a prototype, called DIDACache, on the Open-Channel SSD platform. Our experiments on real hardware show that we can increase throughput by 35.5%, reduce latency by 23.6%, and eliminate 28% of unnecessary erase operations.
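
To make the slab-based design more concrete, the sketch below models the core idea from the abstract in plain C: key-value items are appended log-style into fixed-size slabs that stand in for flash blocks, and a single unified in-memory mapping takes a key directly to a (slab, offset) location, so lookups need no separate FTL-level address translation. All identifiers here (slab_t, kv_put, kv_get) are illustrative assumptions, not DIDACache’s actual interface, and the flash is simulated with in-memory buffers.

/* Minimal, self-contained sketch (not DIDACache's actual code) of the
 * slab-based layout described above: items are appended into fixed-size
 * slabs that stand in for flash blocks, and one unified in-memory mapping
 * takes a key straight to a (slab, offset) pair, with no FTL in between.
 * All identifiers are illustrative assumptions; "flash" is simulated in RAM. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <stddef.h>

#define SLAB_SIZE 4096              /* one slab, aligned to a flash block in the real system */
#define NUM_SLABS 4
#define MAX_KEYS  64

typedef struct { uint8_t data[SLAB_SIZE]; size_t used; } slab_t;
typedef struct { char key[32]; int slab; size_t off; size_t len; } map_entry_t;

static slab_t      slabs[NUM_SLABS];
static map_entry_t mapping[MAX_KEYS];   /* unified key -> (slab, offset) mapping */
static int active_slab = 0;             /* slab currently receiving appends */
static int nkeys = 0;

/* Append a value into the active slab; roll over to the next slab when full
 * (the real cache manager would let its GC policy pick and erase a victim slab). */
static int kv_put(const char *key, const void *val, size_t len)
{
    if (nkeys >= MAX_KEYS || len > SLAB_SIZE)
        return -1;
    if (slabs[active_slab].used + len > SLAB_SIZE) {
        active_slab = (active_slab + 1) % NUM_SLABS;
        slabs[active_slab].used = 0;    /* slab-granularity "erase" */
    }
    slab_t *s = &slabs[active_slab];
    memcpy(s->data + s->used, val, len);
    map_entry_t *e = &mapping[nkeys++];
    snprintf(e->key, sizeof e->key, "%s", key);
    e->slab = active_slab;
    e->off  = s->used;
    e->len  = len;
    s->used += len;
    return 0;
}

/* Look up a key through the unified mapping: one direct read, no extra translation. */
static const void *kv_get(const char *key, size_t *len)
{
    for (int i = nkeys - 1; i >= 0; i--) {      /* newest entry wins */
        if (strcmp(mapping[i].key, key) == 0) {
            *len = mapping[i].len;
            return slabs[mapping[i].slab].data + mapping[i].off;
        }
    }
    return NULL;
}

int main(void)
{
    kv_put("user:42", "alice", 5);
    size_t len = 0;
    const char *val = kv_get("user:42", &len);
    if (val)
        printf("user:42 -> %.*s\n", (int)len, val);
    return 0;
}

In the actual system, the mapping would point at physical flash locations on the Open-Channel SSD, and the integrated garbage-collection, over-provisioning, and wear-leveling policies would decide which slab to erase and where new slabs are placed.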