
ACM Transactions on Storage: Latest Publications

Introduction to the Special Section on USENIX ATC 2022
IF 1.7 | CAS Tier 3 (Computer Science) | Q3 Computer Science | Pub Date: 2023-04-08 | DOI: 10.1145/3582557
J. Schindler, Noa Zilberman
The USENIX Annual Technical Conference (ATC) publishes current computer systems research across system disciplines including networking, storage, security, operating systems, databases, and machine learning. This special section of the ACM Transactions on Storage presents some highlights from the storage-related papers published in the USENIX ATC in 2022. A large proportion of ATC papers have traditionally been related to storage, and ATC ’22 continued this trend. Out of 393 submissions, the authors tagged 124 (32%) with one or more topic labels related to Storage, File Systems, Key-Value Stores, and Data Management Systems. The conference accepted 14 storage-related works (22% of all published submissions). We selected three storage papers. They have been expanded since their publication and re-reviewed by several of their original ATC ’22 reviewers. Collectively, they represent the mission of the USENIX organization: to bring together researchers from academia and systems practitioners working on production systems and/or large installations of cloud service providers. The ATC complements other USENIX venues, including the premier research conference on Operating Systems Design and Implementation (OSDI) as well as the storage- and networked-systems-focused conferences on File and Storage Technologies (FAST) and Networked Systems Design and Implementation (USENIX NSDI), respectively. We are pleased to present these papers, representing this cross section, in their expanded form. The Realizing Strong Determinism Contract on Log-Structured Merge Key-Value Stores paper advocates a hardware and software co-designed framework that advances the state of the art for log-structured merge trees, a widely used persistent data structure, on NVMe SSDs. The ZNSwap: un-Block your Swap paper presents a new approach for Zoned Namespace SSDs that significantly improves the performance of Linux memory swap on SSD devices. Finally, the CacheSack: Theory and Experience of Google’s Admission Optimization for Datacenter Flash Caches paper, submitted to the ATC Operational Systems Track, describes the design of using flash caches to lower I/O access latency, drawing on years of research and experience of the authors. We hope that you will find new insights into the complex world of storage by reading them.
{"title":"Introduction to the Special Section on USENIX ATC 2022","authors":"J. Schindler, Noa Zilberman","doi":"10.1145/3582557","DOIUrl":"https://doi.org/10.1145/3582557","url":null,"abstract":"The USENIX Annual Technical Conference (ATC) publishes current computer systems research across system disciplines including networking, storage, security, operating systems, databases, and machine learning. This special section of the ACM Transactions on Storage presents some highlights from the storage-related papers published in the USENIX ATC in 2022. A large proportion of ATC papers have traditionally been related to storage. ATC ’22 has continued this trend. Out of 393 submissions, the authors tagged 124 (32%) with one or more topic labels related to Storage, File Systems, Key-Value Stores, and Data Management Systems. The conference accepted 14 storage-related works (22% of all published submissions). We selected three storage papers. They have been expanded since their publication and rereviewed by several of their original ATC ’22 reviewers. Collectively, they represent the mission of the USENIX organization: to bring together researchers from academia and systems practitioners working on production systems and/or large installations of cloud services providers. The ATC complements other USENIX venues including the premier research conference on Operating Systems Design and Implementation (OSDI) as well as storageand networked-systems-focused conferences of File and Storage Technologies (FAST) and Networked Systems Design and Implementation (USENIX NSDI), respectively. We are pleased to present these papers representing this cross section in their expanded form. The Realizing Strong Determinism Contract on Log-Structured Merge Key-Value Stores paper advocates for a hardware and software co-designed framework that advances the state-of-the-art of a widely used persistent data structure of log-structured merge trees for NVMe SSDs. The ZNSwap: un-Block your Swap paper presents a new approach for Zoned Namespace SSDs that significantly improves the performance of Linux memory swap on SSD devices. Finally, the CacheSack: Theory and Experience of Google’s Admission Optimization for Datacenter Flash Caches paper, submitted to the ATC Operational Systems Track, describes the design of using Flash caches to lower I/O access latency, drawing on years of research and experiences of the authors. We hope that you will find new insights into the complex world of storage by reading them.","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42438967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Realizing Strong Determinism Contract on Log-Structured Merge Key-Value Stores
IF 1.7 | CAS Tier 3 (Computer Science) | Q3 Computer Science | Pub Date: 2023-03-25 | DOI: https://dl.acm.org/doi/10.1145/3582695
Miryeong Kwon, Seungjun Lee, Hyunkyu Choi, Jooyoung Hwang, Myoungsoo Jung

We propose Vigil-KV, a hardware and software co-designed framework that eliminates long-tail latency almost perfectly by introducing strong latency determinism. To make Get latency deterministic, Vigil-KV first enables a predictable latency mode (PLM) interface on a real datacenter-scale NVMe SSD, drawing on knowledge of the underlying flash technologies. At the system level, Vigil-KV then hides the non-deterministic time window (associated with the SSD’s internal tasks and/or write services) by internally scheduling the different PLM device states across multiple physical functions. Vigil-KV further schedules compaction/flush operations and client requests with awareness of PLM’s restrictions, thereby integrating strong latency determinism into LSM KVs. We implement Vigil-KV on a 1.92 TB NVMe SSD prototype and Linux 4.19.91, but other LSM KVs can adopt its concept. We evaluate diverse Facebook and Yahoo scenarios with Vigil-KV, and the results show that Vigil-KV can reduce the tail latency of a baseline KV system by 3.19× while reducing the average latency by 34%, on average.
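The window-scheduling idea lends itself to a small illustration. The sketch below assumes NVMe Predictable Latency Mode's deterministic (DTWIN) and non-deterministic (NDWIN) windows; the class and method names are invented, and this is not Vigil-KV's actual code:

```python
# A minimal sketch: rotate the NDWIN token across physical functions so Gets
# always land on a function in its deterministic window, while writes,
# flushes, and compactions go to the function absorbing the NDWIN.
from itertools import cycle

class PLMScheduler:
    def __init__(self, num_physical_functions: int):
        assert num_physical_functions >= 2, "need one NDWIN plus DTWIN functions"
        self.functions = list(range(num_physical_functions))
        self._rotation = cycle(self.functions)
        self.ndwin = next(self._rotation)   # function absorbing NDWIN right now

    def rotate_windows(self) -> None:
        """Pass the NDWIN token on; real code would re-arm PLM device states."""
        self.ndwin = next(self._rotation)

    def route_get(self, key: str) -> int:
        """Serve Gets only from functions currently in their DTWIN."""
        dtwin = [f for f in self.functions if f != self.ndwin]
        return dtwin[hash(key) % len(dtwin)]

    def route_write_or_compaction(self) -> int:
        """Writes, flushes, and compactions go to the NDWIN function."""
        return self.ndwin

sched = PLMScheduler(num_physical_functions=4)
print(sched.route_get("user:42"))         # some DTWIN function (1..3)
print(sched.route_write_or_compaction())  # 0, the NDWIN function
sched.rotate_windows()                    # function 1 takes over the NDWIN
```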

{"title":"Realizing Strong Determinism Contract on Log-Structured Merge Key-Value Stores","authors":"Miryeong Kwon, Seungjun Lee, Hyunkyu Choi, Jooyoung Hwang, Myoungsoo Jung","doi":"https://dl.acm.org/doi/10.1145/3582695","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3582695","url":null,"abstract":"<p>We propose <i>Vigil-KV</i>, a hardware and software co-designed framework that eliminates long-tail latency almost perfectly by introducing strong latency determinism. To make Get latency deterministic, Vigil-KV first enables a predictable latency mode (PLM) interface on a real datacenter-scale NVMe SSD, having knowledge about the nature of the underlying flash technologies. Vigil-KV at the system-level then hides the non-deterministic time window (associated with SSD’s internal tasks and/or write services) by internally scheduling the different device states of PLM across multiple physical functions. Vigil-KV further schedules compaction/flush operations and client requests being aware of PLM’s restrictions thereby integrating strong latency determinism into LSM KVs. We implement Vigil-KV upon a 1.92TB NVMe SSD prototype and Linux 4.19.91, but other LSM KVs can adopt its concept. We evaluate diverse Facebook and Yahoo scenarios with Vigil-KV, and the results show that Vigil-KV can reducethe tail latency of a baseline KV system by 3.19× while reducing the average latency by 34%, on average.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138526309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TriCache: A User-Transparent Block Cache Enabling High-Performance Out-of-Core Processing with In-Memory Programs
IF 1.7 | CAS Tier 3 (Computer Science) | Q3 Computer Science | Pub Date: 2023-03-22 | DOI: https://dl.acm.org/doi/10.1145/3583139
Guanyu Feng, Huanqi Cao, Xiaowei Zhu, Bowen Yu, Yuanwei Wang, Zixuan Ma, Shengqi Chen, Wenguang Chen

Out-of-core systems rely on high-performance cache sub-systems to reduce the number of I/O operations. Although the page cache in modern operating systems enables transparent access to memory and storage devices, it suffers from efficiency and scalability issues on cache misses, forcing out-of-core systems to design and implement their own cache components, which is a non-trivial task.

This study proposes TriCache, a cache mechanism that enables in-memory programs to efficiently process out-of-core datasets without requiring any code rewrite. It provides a virtual memory interface on top of the conventional block interface to simultaneously achieve user transparency and sufficient out-of-core performance. A multi-level block cache design is proposed to address the challenge of per-access address translations required by a memory interface. It can exploit spatial and temporal localities in memory or storage accesses to render storage-to-memory address translation and page-level concurrency control adequately efficient for the virtual memory interface.

Our evaluation shows that in-memory systems operating on top of TriCache can outperform Linux OS page cache by more than one order of magnitude, and can deliver performance comparable to or even better than that of corresponding counterparts designed specifically for out-of-core scenarios.
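As a rough illustration of the multi-level design described above, the following toy sketch (invented structure and names, not TriCache's code) puts a small per-thread translation cache in front of a shared LRU block cache, so accesses with spatial or temporal locality resolve their storage-to-memory translation without touching the shared level or its locking:

```python
# Toy two-level block cache; cross-level coherence and eviction callbacks
# are ignored, which a real design must handle.
from collections import OrderedDict

BLOCK_SIZE = 4096

class SharedBlockCache:
    def __init__(self, capacity_blocks: int, backing: dict):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()   # block_id -> bytearray, LRU order
        self.backing = backing        # stands in for the storage device

    def get_block(self, block_id: int) -> bytearray:
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)          # evict LRU block
        blk = self.backing.setdefault(block_id, bytearray(BLOCK_SIZE))
        self.blocks[block_id] = blk
        return blk

class ThreadLocalCache:
    """Private translation cache: virtual address -> already-resolved block."""
    def __init__(self, shared: SharedBlockCache, capacity_blocks: int = 8):
        self.shared = shared
        self.tlb = OrderedDict()
        self.capacity = capacity_blocks

    def read(self, vaddr: int, n: int) -> bytes:
        block_id, off = divmod(vaddr, BLOCK_SIZE)
        blk = self.tlb.get(block_id)
        if blk is None:                              # slow path: shared level
            blk = self.shared.get_block(block_id)
            if len(self.tlb) >= self.capacity:
                self.tlb.popitem(last=False)
            self.tlb[block_id] = blk
        else:
            self.tlb.move_to_end(block_id)
        return bytes(blk[off:off + n])

shared = SharedBlockCache(capacity_blocks=64, backing={})
local = ThreadLocalCache(shared)
print(local.read(5 * BLOCK_SIZE + 100, 16))   # 16 zero bytes on first touch
```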

{"title":"TriCache: A User-Transparent Block Cache Enabling High-Performance Out-of-Core Processing with In-Memory Programs","authors":"Guanyu Feng, Huanqi Cao, Xiaowei Zhu, Bowen Yu, Yuanwei Wang, Zixuan Ma, Shengqi Chen, Wenguang Chen","doi":"https://dl.acm.org/doi/10.1145/3583139","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3583139","url":null,"abstract":"<p>Out-of-core systems rely on high-performance cache sub-systems to reduce the number of I/O operations. Although the page cache in modern operating systems enables transparent access to memory and storage devices, it suffers from efficiency and scalability issues on cache misses, forcing out-of-core systems to design and implement their own cache components, which is a non-trivial task.</p><p>This study proposes TriCache, a cache mechanism that enables in-memory programs to efficiently process out-of-core datasets without requiring any code rewrite. It provides a virtual memory interface on top of the conventional block interface to simultaneously achieve user transparency and sufficient out-of-core performance. A multi-level block cache design is proposed to address the challenge of per-access address translations required by a memory interface. It can exploit spatial and temporal localities in memory or storage accesses to render storage-to-memory address translation and page-level concurrency control adequately efficient for the virtual memory interface.</p><p>Our evaluation shows that in-memory systems operating on top of TriCache can outperform Linux OS page cache by more than one order of magnitude, and can deliver performance comparable to or even better than that of corresponding counterparts designed specifically for out-of-core scenarios.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138526287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Universal SMR-aware Cache Framework with Deep Optimization for DM-SMR and HM-SMR Disks
IF 1.7 | CAS Tier 3 (Computer Science) | Q3 Computer Science | Pub Date: 2023-03-21 | DOI: 10.1145/3588442
Diansen Sun, Ruixiong Tan, Yunpeng Chai
To satisfy the enormous storage capacities required for big data, data centers have been adopting high-density shingled magnetic recording (SMR) disks. However, the weak fine-grained random write performance of SMR disks caused by their inherent write amplification and unbalanced read–write performance poses a severe challenge. Many studies have proposed solid-state drive (SSD) cache systems to address this issue. However, existing cache algorithms, such as the least recently used (LRU) algorithm, which is used to optimize cache popularity, and the MOST algorithm, which is used to optimize the write amplification factor, cannot exploit the full performance of the proposed cache systems because of their inappropriate optimization objectives. This article proposes a new SMR-aware cache framework called SAC+ to improve SMR-based hybrid storage. SAC+ integrates the two dominant types of SMR drives—namely, drive-managed and host-managed SMR drives—and provides a universal framework implementation. In addition, SAC+ integrally combines the drive characteristics to optimize I/O performance. The results of evaluations conducted using real-world traces indicate that SAC+ reduces the I/O time by 36–93% compared with state-of-the-art algorithms.
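To make the objection to single-objective policies concrete, here is a hypothetical victim-selection score that weighs a block's expected hit benefit against the SMR read-modify-write cost its eviction would trigger. The cost model, time units, and numbers are invented for illustration and are not SAC+'s actual formulation:

```python
# Score each cached block by the I/O time future hits would save plus the
# SMR zone rewrite time a dirty eviction would incur; evict the lowest total.
from dataclasses import dataclass

@dataclass
class CachedBlock:
    block_id: int
    hit_rate: float         # expected hits per unit time if kept in the SSD cache
    dirty: bool             # eviction forces a write-back into an SMR zone
    zone_valid_blocks: int  # valid blocks the zone's read-modify-write drags along

def keep_score(b: CachedBlock, t_smr_read=8.0, t_smr_write=10.0, t_ssd=0.1) -> float:
    benefit = b.hit_rate * (t_smr_read - t_ssd)          # time saved by future hits
    rmw_penalty = b.zone_valid_blocks * t_smr_write if b.dirty else 0.0
    return benefit + rmw_penalty   # high score = expensive to evict, so keep it

blocks = [
    CachedBlock(1, hit_rate=0.9, dirty=True,  zone_valid_blocks=50),  # hot, costly RMW
    CachedBlock(2, hit_rate=0.1, dirty=False, zone_valid_blocks=0),   # cold, clean
    CachedBlock(3, hit_rate=0.2, dirty=True,  zone_valid_blocks=2),   # cool, cheap RMW
]
print(min(blocks, key=keep_score).block_id)  # 2: low on both objectives at once
```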
{"title":"A Universal SMR-aware Cache Framework with Deep Optimization for DM-SMR and HM-SMR Disks","authors":"Diansen Sun, Ruixiong Tan, Yunpeng Chai","doi":"10.1145/3588442","DOIUrl":"https://doi.org/10.1145/3588442","url":null,"abstract":"To satisfy the enormous storage capacities required for big data, data centers have been adopting high-density shingled magnetic recording (SMR) disks. However, the weak fine-grained random write performance of SMR disks caused by their inherent write amplification and unbalanced read–write performance poses a severe challenge. Many studies have proposed solid-state drive (SSD) cache systems to address this issue. However, existing cache algorithms, such as the least recently used (LRU) algorithm, which is used to optimize cache popularity, and the MOST algorithm, which is used to optimize the write amplification factor, cannot exploit the full performance of the proposed cache systems because of their inappropriate optimization objectives. This article proposes a new SMR-aware cache framework called SAC+ to improve SMR-based hybrid storage. SAC+ integrates the two dominant types of SMR drives—namely, drive-managed and host-managed SMR drives—and provides a universal framework implementation. In addition, SAC+ integrally combines the drive characteristics to optimize I/O performance. The results of evaluations conducted using real-world traces indicate that SAC+ reduces the I/O time by 36–93% compared with state-of-the-art algorithms.","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48920469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Introduction to the Special Section on USENIX OSDI 2022
IF 1.7 | CAS Tier 3 (Computer Science) | Q3 Computer Science | Pub Date: 2023-03-06 | DOI: 10.1145/3584363
M. Aguilera, Hakim Weatherspoon
This special section of the ACM Transactions on Storage journal highlights work published in the 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI’22). While OSDI is focused on systems research broadly, storage systems constitute a significant part of this community. Out of the 253 submissions to OSDI’22, 36 (14%) were related to storage. Out of the 49 accepted papers, 9 (18%) were related to storage. We invited two of the highest-quality storage papers in OSDI’22 to provide an extended version for this special section of ACM Transactions on Storage. One of them accepted the invitation, and the extended version was subsequently reviewed in fast-track mode by a subset of the original OSDI’22 reviewers. This article is “TriCache: A User-Transparent Block Cache Enabling High-Performance Out-of-Core Processing with In-Memory Programs” by Guanyu Feng, Huanqi Cao, Xiaowei Zhu, Bowen Yu, Yuanwei Wang, Zixuan Ma, Shengqi Chen, and Wenguang Chen. This work proposes a generic design of a high-performance user-level block cache for out-of-core processing on recent SSDs. We hope you enjoy this expanded version and find the work interesting and insightful.
{"title":"Introduction to the Special Section on USENIX OSDI 2022","authors":"M. Aguilera, Hakim Weatherspoon","doi":"10.1145/3584363","DOIUrl":"https://doi.org/10.1145/3584363","url":null,"abstract":"This special section of the ACM Transactions on Storage journal highlights work published in the 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI’22). While OSDI is focused on systems research broadly, storage systems constitute a significant part of this community. Out of the 253 submissions to OSDI’22, 36 of them (14%) were related to storage. Out of the 49 accepted papers, 9 (18%) were related to storage. We invited two of the highest quality storage papers in OSDI’22 to provide an extended version for this special section of ACM Transactions on Storage. One of them accepted the invitation and the extended version was subsequently reviewed in fast-track mode by a subset of the original OSDI’22 reviewers. This article is “TriCache: A User-Transparent Block Cache Enabling High-Performance Out-ofCore Processing with In-Memory Programs” by Guanyu Feng, Huanqi Cao, Xiaowei Zhu, Bowen Yu, Yuanwei Wang, Zixuan Ma, Shengqi Chen, and Wenguang Chen. This work proposes a generic design of a high-performance user-level block cache for out-of-core processing in recent SSDs. We hope you enjoy this expanded version and find the work interesting and insightful.","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42293164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TPFS: A High-Performance Tiered File System for Persistent Memories and Disks
IF 1.7 | CAS Tier 3 (Computer Science) | Q3 Computer Science | Pub Date: 2023-03-06 | DOI: https://dl.acm.org/doi/10.1145/3580280
Shengan Zheng, Morteza Hoseinzadeh, Steven Swanson, Linpeng Huang

Emerging fast, byte-addressable persistent memory (PM) promises substantial storage performance gains compared with traditional disks. We present TPFS, a tiered file system that combines PM and slow disks to create a storage system with near-PM performance and large capacity. TPFS steers incoming file input/output (I/O) to PM, dynamic random access memory (DRAM), or disk depending on the synchronicity, write size, and read frequency. TPFS profiles the application’s access stream online to predict the behavior of file access. In the background, TPFS estimates the “temperature” of file data and migrates the write-cold and read-hot file data from PM to disks. To fully utilize disk bandwidth, TPFS coalesces data blocks into large, sequential writes. Experimental results show that with a small amount of PM and a large solid-state drive (SSD), TPFS achieves up to 7.3× and 7.9× throughput improvement compared with EXT4 and XFS running on an SSD alone, respectively. As the amount of PM grows, TPFS’s performance improves until it matches the performance of a PM-only file system.
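A minimal sketch of the steering and migration rules the abstract describes follows; the 16 KB small-write cutoff and the temperature thresholds are assumptions for illustration, not parameters from the paper:

```python
# Steering: synchronous or small writes hit PM (durable on store), large
# asynchronous writes stage in DRAM and flush to disk in big sequential runs.
SMALL_WRITE_BYTES = 16 * 1024   # illustrative cutoff, not from the paper

def steer_write(size: int, synchronous: bool) -> str:
    if synchronous or size <= SMALL_WRITE_BYTES:
        return "PM"     # byte-addressable persistence absorbs sync I/O cheaply
    return "DRAM"       # buffered, later coalesced into sequential disk writes

def migration_target(write_freq: float, read_freq: float) -> str:
    # Per the abstract, write-cold and read-hot data is demoted from PM to
    # disk: coalesced sequential layout keeps its reads cheap, and PM space
    # is freed for write-hot data. Thresholds below are invented.
    if write_freq < 0.1 and read_freq > 1.0:
        return "disk"
    return "PM"

print(steer_write(4096, synchronous=True))                # PM
print(steer_write(1 << 20, synchronous=False))            # DRAM
print(migration_target(write_freq=0.01, read_freq=5.0))   # disk
```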

{"title":"TPFS: A High-Performance Tiered File System for Persistent Memories and Disks","authors":"Shengan Zheng, Morteza Hoseinzadeh, Steven Swanson, Linpeng Huang","doi":"https://dl.acm.org/doi/10.1145/3580280","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3580280","url":null,"abstract":"<p>Emerging fast, byte-addressable persistent memory (PM) promises substantial storage performance gains compared with traditional disks. We present TPFS, a tiered file system that combines PM and slow disks to create a storage system with near-PM performance and large capacity. TPFS steers incoming file input/output (I/O) to PM, dynamic random access memory (DRAM), or disk depending on the synchronicity, write size, and read frequency. TPFS profiles the application’s access stream online to predict the behavior of file access. In the background, TPFS estimates the “temperature” of file data and migrates the write-cold and read-hot file data from PM to disks. To fully utilize disk bandwidth, TPFS coalesces data blocks into large, sequential writes. Experimental results show that with a small amount of PM and a large solid-state drive (SSD), TPFS achieves up to 7.3× and 7.9× throughput improvement compared with EXT4 and XFS running on an SSD alone, respectively. As the amount of PM grows, TPFS’s performance improves until it matches the performance of a PM-only file system.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138526277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PSA-Cache: A Page-state-aware Cache Scheme for Boosting 3D NAND Flash Performance
IF 1.7 | CAS Tier 3 (Computer Science) | Q3 Computer Science | Pub Date: 2023-03-06 | DOI: https://dl.acm.org/doi/10.1145/3574324
Shujie Pang, Yuhui Deng, Genxiong Zhang, Yi Zhou, Yaoqin Huang, Xiao Qin

Garbage collection (GC) plays a pivotal role in the performance of 3D NAND flash memory, where copyback has been widely used to accelerate valid page migration during GC. Unfortunately, copyback is constrained by the parity symmetry issue: data read from an odd/even page must be written to an odd/even page. After migrating two odd/even consecutive pages, a free page between the two migrated pages is wasted. Such wasted pages noticeably lower the free space on flash memory and cause extra GCs, thereby degrading solid-state drive (SSD) performance. To address this problem, we propose a page-state-aware cache scheme called PSA-Cache, which prevents page waste to boost the performance of NAND flash-based SSDs. To facilitate write-back scheduling decisions, PSA-Cache regulates write-back priorities for cached pages according to the state of pages in victim blocks. With high-write-back-priority pages written back to flash chips, PSA-Cache effectively fends off page waste by breaking up odd/even consecutive pages in subsequent garbage collections. We quantitatively evaluate the performance of PSA-Cache in terms of the number of wasted pages, the number of GCs, and response time. We compare PSA-Cache with two state-of-the-art schemes, GCaR and TTflash, in addition to a baseline LRU scheme. The experimental results show that PSA-Cache outperforms the existing schemes. In particular, PSA-Cache curtails the number of wasted pages of GCaR and TTflash by 25.7% and 62.1%, respectively. PSA-Cache cuts the number of GCs by up to 78.7%, with an average of 49.6%. Furthermore, PSA-Cache slashes the average write response time by up to 85.4%, with an average of 30.05%.
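The parity-symmetry arithmetic is easy to illustrate. The sketch below (a simplification, not the paper's exact policy) counts destination pages stranded by same-parity consecutive migrations in a victim block, then ranks cached pages by how much waste their early write-back would remove:

```python
# Copyback keeps page parity (odd->odd, even->even), so two consecutive
# migrated pages of the same parity strand a free destination page between
# their copies. Writing a cached page back early removes it from the
# migration list and can break such a run.
def wasted_pages(valid_pages: list[int]) -> int:
    """Count destination pages stranded by same-parity consecutive migrations."""
    waste = 0
    for a, b in zip(valid_pages, valid_pages[1:]):
        if a % 2 == b % 2:          # consecutive migrated pages, same parity
            waste += 1
    return waste

def writeback_priority(valid_pages: list[int], cached_page: int) -> int:
    """Pages whose early write-back removes the most waste get priority."""
    remaining = [p for p in valid_pages if p != cached_page]
    return wasted_pages(valid_pages) - wasted_pages(remaining)

victim_block = [1, 2, 3, 5]          # valid page indices awaiting migration
for page in victim_block:
    print(page, writeback_priority(victim_block, page))
# Pages 3 and 5 score +1 (each ends the 3-5 odd run); page 2 scores -1,
# since removing it would merge pages 1 and 3 into a new same-parity pair.
```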

{"title":"PSA-Cache: A Page-state-aware Cache Scheme for Boosting 3D NAND Flash Performance","authors":"Shujie Pang, Yuhui Deng, Genxiong Zhang, Yi Zhou, Yaoqin Huang, Xiao Qin","doi":"https://dl.acm.org/doi/10.1145/3574324","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3574324","url":null,"abstract":"<p>Garbage collection (GC) plays a pivotal role in the performance of 3D NAND flash memory, where Copyback has been widely used to accelerate valid page migration during GC. Unfortunately, copyback is constrained by the parity symmetry issue: data read from an odd/even page must be written to an odd/even page. After migrating two odd/even consecutive pages, a free page between the two migrated pages will be wasted. Such wasted pages noticeably lower free space on flash memory and cause extra GCs, thereby degrading solid-state-disk (SSD) performance. To address this problem, we propose a page-state-aware cache scheme called <i>PSA-Cache</i>, which prevents page waste to boost the performance of NAND Flash-based SSDs. To facilitate making write-back scheduling decisions, PSA-Cache regulates write-back priorities for cached pages according to the state of pages in victim blocks. With high write-back-priority pages written back to flash chips, PSA-Cache effectively fends off page waste by breaking odd/even consecutive pages in subsequent garbage collections. We quantitatively evaluate the performance of PSA-Cache in terms of the number of wasted pages, the number of GCs, and response time. We compare PSA-Cache with two state-of-the-art schemes, GCaR and TTflash, in addition to a baseline scheme LRU. The experimental results unveil that PSA-Cache outperforms the existing schemes. In particular, PSA-Cache curtails the number of wasted pages of GCaR and TTflash by 25.7% and 62.1%, respectively. PSA-Cache immensely cuts back the number of GC counts by up to 78.7% with an average of 49.6%. Furthermore, PSA-Cache slashes the average write response time by up to 85.4% with an average of 30.05%.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138526280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FlatLSM: Write-Optimized LSM-Tree for PM-Based KV Stores
IF 1.7 | CAS Tier 3 (Computer Science) | Q3 Computer Science | Pub Date: 2023-03-06 | DOI: https://dl.acm.org/doi/10.1145/3579855
Kewen He, Yujie An, Yijing Luo, Xiaoguang Liu, Gang Wang

The Log-Structured Merge Tree (LSM-Tree) is widely used in key-value (KV) stores because of its excellent write performance. But LSM-Tree-based KV stores still suffer from the overhead of the write-ahead log and from write stalls caused by slow L0 flushes and L0-L1 compaction. New byte-addressable, persistent memory (PM) devices bring an opportunity to improve the write performance of the LSM-Tree. Previous studies on PM-based LSM-Trees have not fully exploited PM’s “dual role” as both main memory and external storage. In this article, we analyze two PM-based memtable strategies and the reasons write stalls occur in the first place. Inspired by the analysis, we propose FlatLSM, a specially designed flat LSM-Tree for non-volatile memory-based KV stores. First, we propose PMTable, with separated index and data. The PM Log utilizes a Buffer Log to store KVs smaller than 256 B. Second, to solve the write stall problem, FlatLSM merges the volatile memtables and the persistent L0 into large PMTables, which reduces the depth of the LSM-Tree and concentrates I/O bandwidth on L0-L1 compaction. To mitigate write stalls caused by flushing large PMTables to SSD, we propose a parallel flush/compaction algorithm based on KV separation. We implemented FlatLSM on RocksDB and evaluated its performance on Intel’s latest PM device, the Intel Optane DC PMM. Compared with state-of-the-art PM-based LSM-Tree KV stores, FlatLSM improves throughput by 5.2× on a random write workload and by 2.55× on YCSB-A.
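The PMTable layout the abstract describes, index separated from data and sub-256 B records packed into a Buffer Log, can be sketched as a routing rule on the put path. The names and encoding below are illustrative (keys are assumed NUL-free); persistence ordering, fences, and recovery are omitted:

```python
# Toy routing sketch: small records batch into a Buffer Log, large records
# go to the data region, and the index maps keys to (region, offset, length).
SMALL_KV_BYTES = 256

class PMTableSketch:
    def __init__(self):
        self.index = {}                  # key -> (region, offset, length)
        self.buffer_log = bytearray()    # small KVs, packed together
        self.data_region = bytearray()   # large KVs

    def put(self, key: bytes, value: bytes) -> None:
        record = key + b"\x00" + value
        if len(record) < SMALL_KV_BYTES:
            region, log = "buffer_log", self.buffer_log
        else:
            region, log = "data_region", self.data_region
        self.index[key] = (region, len(log), len(record))
        log += record                    # append-only; real code persists here

    def get(self, key: bytes) -> bytes:
        region, off, length = self.index[key]
        log = self.buffer_log if region == "buffer_log" else self.data_region
        return bytes(log[off:off + length]).split(b"\x00", 1)[1]

t = PMTableSketch()
t.put(b"k1", b"small")                   # 8 B record -> Buffer Log
t.put(b"k2", b"x" * 1024)                # 1027 B record -> data region
print(t.get(b"k1"), len(t.get(b"k2")))   # b'small' 1024
```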

{"title":"FlatLSM: Write-Optimized LSM-Tree for PM-Based KV Stores","authors":"Kewen He, Yujie An, Yijing Luo, Xiaoguang Liu, Gang Wang","doi":"https://dl.acm.org/doi/10.1145/3579855","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3579855","url":null,"abstract":"<p>The Log-Structured Merge Tree (LSM-Tree) is widely used in key-value (KV) stores because of its excwrite performance. But LSM-Tree-based KV stores still have the overhead of write-ahead log and write stall caused by slow <i>L<sub>0</sub></i> flush and <i>L<sub>0</sub></i>-<i>L<sub>1</sub></i> compaction. New byte-addressable, persistent memory (PM) devices bring an opportunity to improve the write performance of LSM-Tree. Previous studies on PM-based LSM-Tree have not fully exploited PM’s “dual role” of main memory and external storage. In this article, we analyze two strategies of memtables based on PM and the reasons write stall problems occur in the first place. Inspired by the analysis result, we propose FlatLSM, a specially designed flat LSM-Tree for non-volatile memory based KV stores. First, we propose PMTable with separated index and data. The PM Log utilizes the Buffer Log to store KVs of size less than 256B. Second, to solve the write stall problem, FlatLSM merges the volatile memtables and the persistent <i>L<sub>0</sub></i> into large PMTables, which can reduce the depth of LSM-Tree and concentrate I/O bandwidth on <i>L<sub>0</sub></i>-<i>L<sub>1</sub></i> compaction. To mitigate write stall caused by flushing large PMTables to SSD, we propose a parallel flush/compaction algorithm based on KV separation. We implemented FlatLSM based on RocksDB and evaluated its performance on Intel’s latest PM device, the Intel Optane DC PMM with the state-of-the-art PM-based LSM-Tree KV stores, FlatLSM improves the throughput 5.2× on random write workload and 2.55× on YCSB-A.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138526290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CacheSack: Theory and Experience of Google’s Admission Optimization for Datacenter Flash Caches
IF 1.7 | CAS Tier 3 (Computer Science) | Q3 Computer Science | Pub Date: 2023-03-06 | DOI: https://dl.acm.org/doi/10.1145/3582014
Tzu-Wei Yang, Seth Pollen, Mustafa Uysal, Arif Merchant, Homer Wolfmeister, Junaid Khalid

This article describes the algorithm, implementation, and deployment experience of CacheSack, the admission algorithm for Google datacenter flash caches. CacheSack minimizes the dominant costs of Google’s datacenter flash caches: disk I/O and flash footprint. CacheSack partitions cache traffic into disjoint categories, analyzes the observed cache benefit of each subset, and formulates a knapsack problem to assign the optimal admission policy to each subset. Prior to this work, Google datacenter flash cache admission policies were optimized manually, with most caches using the Lazy Adaptive Replacement Cache algorithm. Production experiments showed that CacheSack significantly outperforms the prior static admission policies, improving the total cost of ownership by 7.7% and delivering significant improvements in disk reads (9.5% reduction) and flash wearout (17.8% reduction).
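The knapsack framing can be sketched in a few lines: each traffic category has candidate admission policies with an estimated flash footprint and disk-I/O savings, and assignments are chosen under a flash budget. The greedy fractional-knapsack relaxation below is an assumption for illustration (the article develops the real formulation), and the policy names and numbers are made up:

```python
# For each category keep its densest policy (I/O saved per unit of flash),
# then admit categories in density order until the flash budget runs out.
def assign_policies(categories, flash_budget):
    best = []
    for name, policies in categories.items():   # footprints assumed positive
        policy, io_saved, footprint = max(policies, key=lambda p: p[1] / p[2])
        best.append((io_saved / footprint, name, policy, footprint))
    plan, remaining = {}, flash_budget
    for density, name, policy, footprint in sorted(
            best, key=lambda t: t[0], reverse=True):
        if footprint <= remaining:
            plan[name] = policy
            remaining -= footprint
        else:
            plan[name] = "bypass"   # no flash left; real CacheSack can split
    return plan

categories = {   # category -> [(policy, est. disk I/O saved, flash footprint)]
    "scan-heavy":   [("admit-on-miss", 10.0, 50.0), ("admit-on-2nd-miss", 8.0, 20.0)],
    "point-lookup": [("admit-on-miss", 40.0, 30.0)],
    "write-once":   [("admit-on-miss", 1.0, 25.0)],
}
print(assign_policies(categories, flash_budget=60.0))
# {'point-lookup': 'admit-on-miss', 'scan-heavy': 'admit-on-2nd-miss',
#  'write-once': 'bypass'}
```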

{"title":"CacheSack: Theory and Experience of Google’s Admission Optimization for Datacenter Flash Caches","authors":"Tzu-Wei Yang, Seth Pollen, Mustafa Uysal, Arif Merchant, Homer Wolfmeister, Junaid Khalid","doi":"https://dl.acm.org/doi/10.1145/3582014","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3582014","url":null,"abstract":"<p>This article describes the algorithm, implementation, and deployment experience of CacheSack, the admission algorithm for Google datacenter flash caches. CacheSack minimizes the dominant costs of Google’s datacenter flash caches: disk IO and flash footprint. CacheSack partitions cache traffic into disjoint categories, analyzes the observed cache benefit of each subset, and formulates a knapsack problem to assign the optimal admission policy to each subset. Prior to this work, Google datacenter flash cache admission policies were optimized manually, with most caches using the Lazy Adaptive Replacement Cache algorithm. Production experiments showed that CacheSack significantly outperforms the prior static admission policies for a 7.7% improvement of the total cost of ownership, as well as significant improvements in disk reads (9.5% reduction) and flash wearout (17.8% reduction).</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138526292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0