
ACM Transactions on Storage: Latest Publications

Introduction to the Special Section on USENIX OSDI 2022
IF 1.7 | CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-03-06 | DOI: 10.1145/3584363
M. Aguilera, Hakim Weatherspoon
This special section of the ACM Transactions on Storage journal highlights work published in the 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI’22). While OSDI is focused on systems research broadly, storage systems constitute a significant part of this community. Out of the 253 submissions to OSDI’22, 36 of them (14%) were related to storage. Out of the 49 accepted papers, 9 (18%) were related to storage. We invited two of the highest-quality storage papers in OSDI’22 to provide an extended version for this special section of ACM Transactions on Storage. One of them accepted the invitation and the extended version was subsequently reviewed in fast-track mode by a subset of the original OSDI’22 reviewers. This article is “TriCache: A User-Transparent Block Cache Enabling High-Performance Out-of-Core Processing with In-Memory Programs” by Guanyu Feng, Huanqi Cao, Xiaowei Zhu, Bowen Yu, Yuanwei Wang, Zixuan Ma, Shengqi Chen, and Wenguang Chen. This work proposes a generic design of a high-performance user-level block cache for out-of-core processing on recent SSDs. We hope you enjoy this expanded version and find the work interesting and insightful.
Citations: 0
TPFS: A High-Performance Tiered File System for Persistent Memories and Disks
IF 1.7 | CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-03-06 | DOI: https://dl.acm.org/doi/10.1145/3580280
Shengan Zheng, Morteza Hoseinzadeh, Steven Swanson, Linpeng Huang

Emerging fast, byte-addressable persistent memory (PM) promises substantial storage performance gains compared with traditional disks. We present TPFS, a tiered file system that combines PM and slow disks to create a storage system with near-PM performance and large capacity. TPFS steers incoming file input/output (I/O) to PM, dynamic random access memory (DRAM), or disk depending on the synchronicity, write size, and read frequency. TPFS profiles the application’s access stream online to predict the behavior of file access. In the background, TPFS estimates the “temperature” of file data and migrates the write-cold and read-hot file data from PM to disks. To fully utilize disk bandwidth, TPFS coalesces data blocks into large, sequential writes. Experimental results show that with a small amount of PM and a large solid-state drive (SSD), TPFS achieves up to 7.3× and 7.9× throughput improvement compared with EXT4 and XFS running on an SSD alone, respectively. As the amount of PM grows, TPFS’s performance improves until it matches the performance of a PM-only file system.
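
To make the tier-steering and migration policies concrete, here is a minimal Python sketch of the kind of decisions the abstract describes. The thresholds, the FileStats counters, and the function names are illustrative assumptions rather than TPFS's actual in-kernel implementation.

```python
from dataclasses import dataclass

# Hypothetical thresholds: the abstract does not give concrete values.
SMALL_WRITE_BYTES = 4 * 1024   # small synchronous writes favor PM
HOT_READ_FREQ = 8              # reads per profiling window considered "read-hot"
COLD_WRITE_FREQ = 1            # writes per profiling window considered "write-cold"

@dataclass
class FileStats:
    """Per-file counters an online access profiler might maintain."""
    reads_in_window: int = 0
    writes_in_window: int = 0

def steer_write(is_sync: bool, write_size: int) -> str:
    """Pick a tier for an incoming write, in the spirit of TPFS:
    small synchronous writes go to PM, asynchronous writes are staged in DRAM,
    and large writes can stream to disk as coalesced sequential I/O."""
    if is_sync and write_size <= SMALL_WRITE_BYTES:
        return "PM"
    if not is_sync:
        return "DRAM"
    return "DISK"

def should_migrate_to_disk(stats: FileStats) -> bool:
    """Follow the abstract's background policy: write-cold and read-hot
    file data is migrated from PM to disk."""
    return (stats.writes_in_window <= COLD_WRITE_FREQ
            and stats.reads_in_window >= HOT_READ_FREQ)

if __name__ == "__main__":
    print(steer_write(is_sync=True, write_size=1024))     # -> PM
    print(steer_write(is_sync=False, write_size=65536))   # -> DRAM
    print(should_migrate_to_disk(FileStats(reads_in_window=12, writes_in_window=0)))  # -> True
```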

Citations: 0
PSA-Cache: A Page-state-aware Cache Scheme for Boosting 3D NAND Flash Performance
IF 1.7 | CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-03-06 | DOI: https://dl.acm.org/doi/10.1145/3574324
Shujie Pang, Yuhui Deng, Genxiong Zhang, Yi Zhou, Yaoqin Huang, Xiao Qin

Garbage collection (GC) plays a pivotal role in the performance of 3D NAND flash memory, where copyback has been widely used to accelerate valid page migration during GC. Unfortunately, copyback is constrained by the parity symmetry issue: data read from an odd/even page must be written to an odd/even page. After migrating two odd/even consecutive pages, a free page between the two migrated pages will be wasted. Such wasted pages noticeably lower free space on flash memory and cause extra GCs, thereby degrading solid-state drive (SSD) performance. To address this problem, we propose a page-state-aware cache scheme called PSA-Cache, which prevents page waste to boost the performance of NAND flash-based SSDs. To facilitate making write-back scheduling decisions, PSA-Cache regulates write-back priorities for cached pages according to the state of pages in victim blocks. With high write-back-priority pages written back to flash chips, PSA-Cache effectively fends off page waste by breaking odd/even consecutive pages in subsequent garbage collections. We quantitatively evaluate the performance of PSA-Cache in terms of the number of wasted pages, the number of GCs, and response time. We compare PSA-Cache with two state-of-the-art schemes, GCaR and TTflash, in addition to a baseline scheme, LRU. The experimental results unveil that PSA-Cache outperforms the existing schemes. In particular, PSA-Cache curtails the number of wasted pages of GCaR and TTflash by 25.7% and 62.1%, respectively. PSA-Cache cuts back the number of GCs by up to 78.7%, with an average of 49.6%. Furthermore, PSA-Cache slashes the average write response time by up to 85.4%, with an average of 30.05%.
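
As an illustration of the write-back-priority idea, the sketch below ranks cached dirty pages by how many valid neighbors their stale copies have in a victim block, so that flushing them breaks odd/even consecutive valid pairs before copyback runs. The block representation, the mapping argument, and the scoring are simplified assumptions, not the paper's scheme.

```python
def writeback_priority(cached_lpns, victim_valid, lpn_to_offset):
    """Order cached dirty pages for write-back, favoring pages whose stale
    flash copy sits next to other valid pages inside the victim block.

    cached_lpns   : logical page numbers currently dirty in the DRAM cache
    victim_valid  : valid-page bitmap of the victim block (index = page offset)
    lpn_to_offset : hypothetical map from a cached LPN to its page offset in the
                    victim block (absent if the page does not live there)
    """
    scored = []
    for lpn in cached_lpns:
        off = lpn_to_offset.get(lpn)
        if off is None:
            scored.append((0, lpn))            # not in the victim block: normal priority
            continue
        left_valid = off > 0 and victim_valid[off - 1]
        right_valid = off + 1 < len(victim_valid) and victim_valid[off + 1]
        # More valid neighbors -> flushing this page breaks a consecutive run,
        # sparing copyback from wasting a free page due to parity symmetry.
        scored.append((1 + int(left_valid) + int(right_valid), lpn))
    return [lpn for score, lpn in sorted(scored, key=lambda t: -t[0])]

# Victim block with consecutive valid pages at offsets 2 and 3.
valid = [False, False, True, True, False, False]
print(writeback_priority(cached_lpns=[10, 11],
                         victim_valid=valid,
                         lpn_to_offset={11: 2}))   # -> [11, 10]
```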

Citations: 0
FlatLSM: Write-Optimized LSM-Tree for PM-Based KV Stores
IF 1.7 | CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-03-06 | DOI: https://dl.acm.org/doi/10.1145/3579855
Kewen He, Yujie An, Yijing Luo, Xiaoguang Liu, Gang Wang

The Log-Structured Merge Tree (LSM-Tree) is widely used in key-value (KV) stores because of its excellent write performance. But LSM-Tree-based KV stores still suffer from write-ahead logging overhead and write stalls caused by slow L0 flush and L0-L1 compaction. New byte-addressable, persistent memory (PM) devices bring an opportunity to improve the write performance of LSM-Tree. Previous studies on PM-based LSM-Trees have not fully exploited PM’s “dual role” of main memory and external storage. In this article, we analyze two strategies for PM-based memtables and the reasons write stall problems occur in the first place. Inspired by the analysis results, we propose FlatLSM, a specially designed flat LSM-Tree for non-volatile-memory-based KV stores. First, we propose PMTable, with separated index and data. The PM Log utilizes the Buffer Log to store KVs smaller than 256 B. Second, to solve the write stall problem, FlatLSM merges the volatile memtables and the persistent L0 into large PMTables, which reduces the depth of the LSM-Tree and concentrates I/O bandwidth on L0-L1 compaction. To mitigate write stalls caused by flushing large PMTables to SSD, we propose a parallel flush/compaction algorithm based on KV separation. We implemented FlatLSM based on RocksDB and evaluated its performance on Intel’s latest PM device, the Intel Optane DC PMM. Compared with state-of-the-art PM-based LSM-Tree KV stores, FlatLSM improves throughput by 5.2× on a random write workload and by 2.55× on YCSB-A.
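
The separated index/data layout and the small-KV Buffer Log can be pictured with the toy structure below. It is a DRAM-only sketch under assumed names: a real PMTable would live in persistent memory and would need cache-line flushes and fences for crash consistency, which are omitted here.

```python
SMALL_KV_BYTES = 256  # the abstract routes KVs smaller than 256 B through the Buffer Log

class PMTableSketch:
    """Toy PMTable-like structure with a separated index and data area."""

    def __init__(self):
        self.index = {}       # key -> (log name, offset): the separated index
        self.data_log = []    # append-only area for regular KV pairs
        self.buffer_log = []  # append-only area batching small KV pairs

    def put(self, key: bytes, value: bytes) -> None:
        if len(key) + len(value) < SMALL_KV_BYTES:
            self.buffer_log.append((key, value))
            self.index[key] = ("buffer", len(self.buffer_log) - 1)
        else:
            self.data_log.append((key, value))
            self.index[key] = ("data", len(self.data_log) - 1)

    def get(self, key: bytes):
        loc = self.index.get(key)
        if loc is None:
            return None
        log = self.buffer_log if loc[0] == "buffer" else self.data_log
        return log[loc[1]][1]

table = PMTableSketch()
table.put(b"k1", b"small value")
table.put(b"k2", b"x" * 1024)
print(table.get(b"k1"), len(table.get(b"k2")))  # b'small value' 1024
```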

Citations: 0
CacheSack: Theory and Experience of Google’s Admission Optimization for Datacenter Flash Caches
IF 1.7 | CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-03-06 | DOI: https://dl.acm.org/doi/10.1145/3582014
Tzu-Wei Yang, Seth Pollen, Mustafa Uysal, Arif Merchant, Homer Wolfmeister, Junaid Khalid

This article describes the algorithm, implementation, and deployment experience of CacheSack, the admission algorithm for Google datacenter flash caches. CacheSack minimizes the dominant costs of Google’s datacenter flash caches: disk IO and flash footprint. CacheSack partitions cache traffic into disjoint categories, analyzes the observed cache benefit of each subset, and formulates a knapsack problem to assign the optimal admission policy to each subset. Prior to this work, Google datacenter flash cache admission policies were optimized manually, with most caches using the Lazy Adaptive Replacement Cache algorithm. Production experiments showed that CacheSack significantly outperforms the prior static admission policies, yielding a 7.7% improvement in the total cost of ownership, as well as significant improvements in disk reads (9.5% reduction) and flash wearout (17.8% reduction).
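
The knapsack formulation can be sketched as a multiple-choice knapsack: choose one admission policy per traffic category so that modeled disk-IO savings are maximized within a flash-footprint budget. The code below is only an illustration of that structure, with made-up categories, costs, and benefits; CacheSack's actual cost model and solver are described in the article.

```python
def assign_policies(categories, flash_budget):
    """Multiple-choice knapsack sketch: one (policy, flash_cost, io_savings)
    option is chosen per category, maximizing total savings within the budget.

    categories  : list of option lists, one list per traffic category
    flash_budget: integer budget in the same coarse units as flash_cost
    """
    # dp maps used budget -> (best savings, chosen policy names)
    dp = {0: (0.0, [])}
    for options in categories:
        new_dp = {}
        for used, (savings, chosen) in dp.items():
            for name, cost, gain in options:
                budget = used + cost
                if budget > flash_budget:
                    continue
                candidate = (savings + gain, chosen + [name])
                if budget not in new_dp or candidate[0] > new_dp[budget][0]:
                    new_dp[budget] = candidate
        dp = new_dp
    return max(dp.values(), key=lambda t: t[0]) if dp else (0.0, [])

# Two hypothetical traffic categories, each with three candidate policies.
cats = [
    [("bypass", 0, 0.0), ("admit-on-2nd-miss", 3, 5.0), ("admit-always", 6, 6.5)],
    [("bypass", 0, 0.0), ("admit-on-2nd-miss", 2, 1.0), ("admit-always", 5, 1.2)],
]
print(assign_policies(cats, flash_budget=6))
# -> (6.5, ['admit-always', 'bypass']) under this made-up cost model
```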

Citations: 0
ZNSwap: un-Block your Swap
IF 1.7 | CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-03-06 | DOI: https://dl.acm.org/doi/10.1145/3582434
Shai Bergman, Niklas Cassel, Matias Bjørling, Mark Silberstein

We introduce ZNSwap, a novel swap subsystem optimized for the recent Zoned Namespace (ZNS) SSDs. ZNSwap leverages ZNS’s explicit control over data management on the drive and introduces a space-efficient host-side Garbage Collector (GC) for swap storage co-designed with the OS swap logic. ZNSwap enables cross-layer optimizations, such as direct access to the in-kernel swap usage statistics by the GC to enable fine-grain swap storage management, and correct accounting of the GC bandwidth usage in the OS resource isolation mechanisms to improve performance isolation in multi-tenant environments. We evaluate ZNSwap using standard Linux swap benchmarks and two production key-value stores. ZNSwap shows significant performance benefits over the Linux swap on traditional SSDs, such as stable throughput for different memory access patterns, and 10× lower 99th percentile latency and 5× higher throughput for the memcached key-value store under realistic usage scenarios.
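
A minimal sketch of how a host-side GC might pick a victim zone from swap-slot usage statistics is shown below. The per-zone counters and the greedy policy are assumptions for illustration; ZNSwap's actual GC runs against the kernel swap subsystem and ZNS zone state.

```python
def pick_victim_zone(zones):
    """Pick the zone with the largest share of stale swap slots: the fewer
    live pages a zone holds, the less data must be relocated before the
    zone can be reset and reused.

    zones: dict zone_id -> (slots_written, slots_still_live), where the live
    count stands in for the in-kernel swap usage statistics the GC consults.
    """
    def stale_ratio(stats):
        written, live = stats
        return (written - live) / written if written else 0.0
    return max(zones, key=lambda zone_id: stale_ratio(zones[zone_id]))

# Hypothetical statistics for three zones.
zones = {0: (1000, 950), 1: (1000, 120), 2: (600, 580)}
print(pick_victim_zone(zones))  # -> 1: most of its swap slots are already freed
```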

Citations: 0
An In-depth Comparative Analysis of Cloud Block Storage Workloads: Findings and Implications
IF 1.7 | CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-03-06 | DOI: https://dl.acm.org/doi/10.1145/3572779
Jinhong Li, Qiuping Wang, Patrick P. C. Lee, Chao Shi

Cloud block storage systems support diverse types of applications in modern cloud services. Characterizing their input/output (I/O) activities is critical for guiding better system designs and optimizations. In this article, we present an in-depth comparative analysis of production cloud block storage workloads through the block-level I/O traces of billions of I/O requests collected from two production systems, Alibaba Cloud and Tencent Cloud Block Storage. We study their characteristics of load intensities, spatial patterns, and temporal patterns. We also compare the cloud block storage workloads with the notable public block-level I/O workloads from the enterprise data centers at Microsoft Research Cambridge, and we identify the commonalities and differences of the three sources of traces. To this end, we provide 6 findings through the high-level analysis and 16 findings through the detailed analysis on load intensity, spatial patterns, and temporal patterns. We discuss the implications of our findings on load balancing, cache efficiency, and storage cluster management in cloud block storage systems.
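
As an illustration of the load-intensity characterization such a study performs, the sketch below computes per-volume IOPS, write ratio, and average request size from a block-level trace. The record schema is an assumed simplification; the Alibaba, Tencent, and MSR traces each ship in their own formats.

```python
from collections import defaultdict

def summarize_trace(records, min_duration_s=1.0):
    """Per-volume load-intensity summary of a block-level I/O trace.

    records: iterable of (timestamp_s, volume_id, offset, length_bytes, op)
             with op in {"R", "W"}  (an assumed, simplified schema)
    """
    stats = defaultdict(lambda: {"reqs": 0, "writes": 0, "bytes": 0,
                                 "t_min": float("inf"), "t_max": 0.0})
    for ts, vol, _offset, length, op in records:
        s = stats[vol]
        s["reqs"] += 1
        s["bytes"] += length
        s["writes"] += (op == "W")
        s["t_min"] = min(s["t_min"], ts)
        s["t_max"] = max(s["t_max"], ts)
    summary = {}
    for vol, s in stats.items():
        duration = max(s["t_max"] - s["t_min"], min_duration_s)
        summary[vol] = {"iops": s["reqs"] / duration,
                        "write_ratio": s["writes"] / s["reqs"],
                        "avg_req_kb": s["bytes"] / s["reqs"] / 1024}
    return summary

trace = [(0.0, "vol1", 0, 4096, "W"), (0.5, "vol1", 4096, 4096, "W"),
         (1.0, "vol1", 0, 65536, "R"), (0.2, "vol2", 0, 8192, "R")]
print(summarize_trace(trace))
```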

Citations: 0
Principled Schedulability Analysis for Distributed Storage Systems Using Thread Architecture Models
IF 1.7 | CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-03-06 | DOI: https://dl.acm.org/doi/10.1145/3574323
Suli Yang, Jing Liu, Andrea Arpaci-Dusseau, Remzi Arpaci-Dusseau

In this article, we present an approach to systematically examine the schedulability of distributed storage systems, identify their scheduling problems, and enable effective scheduling in these systems. We use Thread Architecture Models (TAMs) to describe the behavior and interactions of different threads in a system, and show both how to construct TAMs for existing systems and utilize TAMs to identify critical scheduling problems. We specify three schedulability conditions that a schedulable TAM should satisfy: completeness, local enforceability, and independence; meeting these conditions enables a system to easily support different scheduling policies. We identify five common problems that prevent a system from satisfying the schedulability conditions, and show that these problems arise in existing systems such as HBase, Cassandra, MongoDB, and Riak, making it difficult or impossible to realize various scheduling disciplines. We demonstrate how to address these schedulability problems using both direct and indirect solutions, with different trade-offs. To show how to apply our approach to enable scheduling in realistic systems, we develop Tamed-HBase and Muzzled-HBase, sets of modifications to HBase that can realize the desired scheduling disciplines, including fairness and priority scheduling, even when presented with challenging workloads.

Citations: 0
Visibility Graph-based Cache Management for DRAM Buffer Inside Solid-state Drives
IF 1.7 | CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2023-03-03 | DOI: 10.1145/3586576
Zhibing Sha, Jun Li, Fengxiang Zhang, Min Huang, Zhigang Cai, François Trahay, Jianwei Liao
Most solid-state drives (SSDs) adopt an on-board Dynamic Random Access Memory (DRAM) to buffer the write data, which can significantly reduce the number of write operations committed to the flash array of the SSD if the data exhibits locality in write operations. This article focuses on efficiently managing the small amount of DRAM cache inside SSDs. The basic idea is to employ the visibility graph technique to unify both temporal and spatial locality of reference in I/O accesses, for directing cache management in SSDs. Specifically, we propose to adaptively generate the visibility graph of cached data pages and then support batch adjustment of adjacent or nearby (hot) cached data pages by referring to the connection situations in the visibility graph. In addition, we propose to evict the buffered data pages in batches, also by referring to the connection situations, to maximize the internal flushing parallelism of SSD devices without worsening I/O congestion. The trace-driven simulation experiments show that our proposal can improve cache hits by between 0.8% and 19.8%, and the overall I/O latency by 25.6% on average, compared to state-of-the-art cache management schemes inside SSDs.
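
For readers unfamiliar with visibility graphs, the sketch below applies the standard natural-visibility construction to a short series of per-page access counts and uses node degree as a combined hotness score. It is a generic illustration only; the paper's adaptive graph generation and batch adjustment/eviction logic are not reproduced here.

```python
def visibility_graph(series):
    """Natural visibility graph of a 1-D series: nodes a and b are connected
    iff every intermediate point stays strictly below the straight line
    joining (a, series[a]) and (b, series[b])."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            if all(series[c] < yb + (ya - yb) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges

def hottest_pages(series, top_k=2):
    """Rank cached pages by their degree in the visibility graph, which blends
    how hot a page is with how it relates to its neighbors in the series."""
    degree = [0] * len(series)
    for a, b in visibility_graph(series):
        degree[a] += 1
        degree[b] += 1
    return sorted(range(len(series)), key=lambda i: -degree[i])[:top_k]

access_counts = [3, 1, 7, 2, 2, 9, 1]   # hypothetical counts for 7 cached pages
print(hottest_pages(access_counts))      # -> [2, 5]: the two best-connected pages
```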
Citations: 0