TPFS: A High-Performance Tiered File System for Persistent Memories and Disks
Pub Date: 2023-03-06 | DOI: 10.1145/3580280
Shengan Zheng, Morteza Hoseinzadeh, Steven Swanson, Linpeng Huang
Emerging fast, byte-addressable persistent memory (PM) promises substantial storage performance gains compared with traditional disks. We present TPFS, a tiered file system that combines PM and slow disks to create a storage system with near-PM performance and large capacity. TPFS steers incoming file input/output (I/O) to PM, dynamic random access memory (DRAM), or disk depending on the synchronicity, write size, and read frequency. TPFS profiles the application’s access stream online to predict the behavior of file access. In the background, TPFS estimates the “temperature” of file data and migrates the write-cold and read-hot file data from PM to disks. To fully utilize disk bandwidth, TPFS coalesces data blocks into large, sequential writes. Experimental results show that with a small amount of PM and a large solid-state drive (SSD), TPFS achieves up to 7.3× and 7.9× throughput improvement compared with EXT4 and XFS running on an SSD alone, respectively. As the amount of PM grows, TPFS’s performance improves until it matches the performance of a PM-only file system.
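As a rough illustration of the placement and migration policy this abstract outlines, the sketch below routes an incoming write to PM, DRAM, or disk and flags write-cold, read-hot data for migration off PM. The thresholds and field names are invented for this example and are not TPFS’s actual parameters or interfaces.

```python
# Illustrative sketch of a tiered placement policy in the spirit of TPFS.
# All thresholds and field names are assumptions made for this example.

from dataclasses import dataclass

SMALL_SYNC_WRITE = 64 * 1024   # assumed cutoff for "small" synchronous writes (bytes)
COLD_WRITE_AGE_S = 300         # assumed write-coldness threshold (seconds)
HOT_READ_FREQ = 10             # assumed reads/min that make data "read-hot"

@dataclass
class FileRegionStats:
    is_sync: bool          # does the application fsync this data?
    write_size: int        # size of the incoming write in bytes
    secs_since_write: float
    reads_per_min: float

def place_write(stats: FileRegionStats) -> str:
    """Steer an incoming write to PM, DRAM, or disk."""
    if stats.is_sync and stats.write_size <= SMALL_SYNC_WRITE:
        return "PM"        # small synchronous writes benefit most from PM
    if not stats.is_sync:
        return "DRAM"      # asynchronous writes can be buffered in DRAM first
    return "disk"          # large synchronous writes are coalesced to disk

def should_migrate_to_disk(stats: FileRegionStats) -> bool:
    """Write-cold and read-hot data is a candidate for migration off PM."""
    return (stats.secs_since_write > COLD_WRITE_AGE_S
            and stats.reads_per_min > HOT_READ_FREQ)

print(place_write(FileRegionStats(is_sync=True, write_size=4096,
                                  secs_since_write=0.0, reads_per_min=0.0)))  # -> PM
```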
{"title":"TPFS: A High-Performance Tiered File System for Persistent Memories and Disks","authors":"Shengan Zheng, Morteza Hoseinzadeh, Steven Swanson, Linpeng Huang","doi":"https://dl.acm.org/doi/10.1145/3580280","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3580280","url":null,"abstract":"<p>Emerging fast, byte-addressable persistent memory (PM) promises substantial storage performance gains compared with traditional disks. We present TPFS, a tiered file system that combines PM and slow disks to create a storage system with near-PM performance and large capacity. TPFS steers incoming file input/output (I/O) to PM, dynamic random access memory (DRAM), or disk depending on the synchronicity, write size, and read frequency. TPFS profiles the application’s access stream online to predict the behavior of file access. In the background, TPFS estimates the “temperature” of file data and migrates the write-cold and read-hot file data from PM to disks. To fully utilize disk bandwidth, TPFS coalesces data blocks into large, sequential writes. Experimental results show that with a small amount of PM and a large solid-state drive (SSD), TPFS achieves up to 7.3× and 7.9× throughput improvement compared with EXT4 and XFS running on an SSD alone, respectively. As the amount of PM grows, TPFS’s performance improves until it matches the performance of a PM-only file system.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":"97 4","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138526277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Introduction to the Special Section on USENIX OSDI 2022
Pub Date: 2023-03-06 | DOI: 10.1145/3584363
Marcos K. Aguilera, Hakim Weatherspoon
This special section of the ACM Transactions on Storage journal highlights work published in the 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI’22). While OSDI is focused on systems research broadly, storage systems constitute a significant part of this community. Out of the 253 submissions to OSDI’22, 36 (14%) were related to storage. Out of the 49 accepted papers, 9 (18%) were related to storage. We invited two of the highest-quality storage papers in OSDI’22 to provide extended versions for this special section of ACM Transactions on Storage. One of them accepted the invitation, and the extended version was subsequently reviewed in fast-track mode by a subset of the original OSDI’22 reviewers. This article is “TriCache: A User-Transparent Block Cache Enabling High-Performance Out-of-Core Processing with In-Memory Programs” by Guanyu Feng, Huanqi Cao, Xiaowei Zhu, Bowen Yu, Yuanwei Wang, Zixuan Ma, Shengqi Chen, and Wenguang Chen. This work proposes a generic design of a high-performance user-level block cache for out-of-core processing on recent SSDs. We hope you enjoy this expanded version and find the work interesting and insightful.
{"title":"Introduction to the Special Section on USENIX OSDI 2022","authors":"M. Aguilera, Hakim Weatherspoon","doi":"10.1145/3584363","DOIUrl":"https://doi.org/10.1145/3584363","url":null,"abstract":"This special section of the ACM Transactions on Storage journal highlights work published in the 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI’22). While OSDI is focused on systems research broadly, storage systems constitute a significant part of this community. Out of the 253 submissions to OSDI’22, 36 of them (14%) were related to storage. Out of the 49 accepted papers, 9 (18%) were related to storage. We invited two of the highest quality storage papers in OSDI’22 to provide an extended version for this special section of ACM Transactions on Storage. One of them accepted the invitation and the extended version was subsequently reviewed in fast-track mode by a subset of the original OSDI’22 reviewers. This article is “TriCache: A User-Transparent Block Cache Enabling High-Performance Out-ofCore Processing with In-Memory Programs” by Guanyu Feng, Huanqi Cao, Xiaowei Zhu, Bowen Yu, Yuanwei Wang, Zixuan Ma, Shengqi Chen, and Wenguang Chen. This work proposes a generic design of a high-performance user-level block cache for out-of-core processing in recent SSDs. We hope you enjoy this expanded version and find the work interesting and insightful.","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":"19 1","pages":"1 - 1"},"PeriodicalIF":1.7,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42293164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PSA-Cache: A Page-state-aware Cache Scheme for Boosting 3D NAND Flash Performance
Pub Date: 2023-03-06 | DOI: 10.1145/3574324
Shujie Pang, Yuhui Deng, Genxiong Zhang, Yi Zhou, Yaoqin Huang, Xiao Qin
Garbage collection (GC) plays a pivotal role in the performance of 3D NAND flash memory, where copyback has been widely used to accelerate valid page migration during GC. Unfortunately, copyback is constrained by the parity symmetry issue: data read from an odd/even page must be written to an odd/even page. After migrating two odd/even consecutive pages, the free page between the two migrated pages is wasted. Such wasted pages noticeably reduce the free space on flash memory and cause extra GCs, thereby degrading solid-state drive (SSD) performance. To address this problem, we propose a page-state-aware cache scheme called PSA-Cache, which prevents page waste to boost the performance of NAND flash-based SSDs. To facilitate write-back scheduling decisions, PSA-Cache assigns write-back priorities to cached pages according to the state of the pages in victim blocks. With high-priority pages written back to flash chips, PSA-Cache effectively fends off page waste by breaking up odd/even consecutive pages in subsequent garbage collections. We quantitatively evaluate the performance of PSA-Cache in terms of the number of wasted pages, the number of GCs, and response time. We compare PSA-Cache with two state-of-the-art schemes, GCaR and TTflash, in addition to a baseline LRU scheme. The experimental results show that PSA-Cache outperforms the existing schemes. In particular, PSA-Cache reduces the number of wasted pages relative to GCaR and TTflash by 25.7% and 62.1%, respectively. PSA-Cache cuts the number of GCs by up to 78.7%, with an average of 49.6%. Furthermore, PSA-Cache lowers the average write response time by up to 85.4%, with an average of 30.05%.
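The parity-symmetry effect the abstract describes can be made concrete with a toy model: when valid pages are migrated in order with copyback, every parity mismatch between the source page and the next free destination page strands one destination page. This is a simplified illustration of the problem, not the paper’s simulator or the PSA-Cache policy itself.

```python
# Toy model of the copyback parity-symmetry constraint: a page read from an
# odd/even offset must be copied back to an odd/even offset, so a wrong-parity
# destination slot gets skipped (wasted). Simplified illustration only.

def copyback_waste(valid_page_offsets, pages_per_block=256):
    """Count destination pages wasted when migrating valid pages in order."""
    dest = 0        # next free page offset in the destination block
    wasted = 0
    for src in valid_page_offsets:
        if dest >= pages_per_block:
            break
        if (src % 2) != (dest % 2):   # parity mismatch: skip one free page
            wasted += 1
            dest += 1
        dest += 1                     # copy the page into the matching slot
    return wasted

# Migrating two consecutive even pages (offsets 0 and 2) wastes the slot between them:
print(copyback_waste([0, 2]))   # -> 1
```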
{"title":"PSA-Cache: A Page-state-aware Cache Scheme for Boosting 3D NAND Flash Performance","authors":"Shujie Pang, Yuhui Deng, Genxiong Zhang, Yi Zhou, Yaoqin Huang, Xiao Qin","doi":"https://dl.acm.org/doi/10.1145/3574324","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3574324","url":null,"abstract":"<p>Garbage collection (GC) plays a pivotal role in the performance of 3D NAND flash memory, where Copyback has been widely used to accelerate valid page migration during GC. Unfortunately, copyback is constrained by the parity symmetry issue: data read from an odd/even page must be written to an odd/even page. After migrating two odd/even consecutive pages, a free page between the two migrated pages will be wasted. Such wasted pages noticeably lower free space on flash memory and cause extra GCs, thereby degrading solid-state-disk (SSD) performance. To address this problem, we propose a page-state-aware cache scheme called <i>PSA-Cache</i>, which prevents page waste to boost the performance of NAND Flash-based SSDs. To facilitate making write-back scheduling decisions, PSA-Cache regulates write-back priorities for cached pages according to the state of pages in victim blocks. With high write-back-priority pages written back to flash chips, PSA-Cache effectively fends off page waste by breaking odd/even consecutive pages in subsequent garbage collections. We quantitatively evaluate the performance of PSA-Cache in terms of the number of wasted pages, the number of GCs, and response time. We compare PSA-Cache with two state-of-the-art schemes, GCaR and TTflash, in addition to a baseline scheme LRU. The experimental results unveil that PSA-Cache outperforms the existing schemes. In particular, PSA-Cache curtails the number of wasted pages of GCaR and TTflash by 25.7% and 62.1%, respectively. PSA-Cache immensely cuts back the number of GC counts by up to 78.7% with an average of 49.6%. Furthermore, PSA-Cache slashes the average write response time by up to 85.4% with an average of 30.05%.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":"40 2","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138526280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FlatLSM: Write-Optimized LSM-Tree for PM-Based KV Stores
Pub Date: 2023-03-06 | DOI: 10.1145/3579855
Kewen He, Yujie An, Yijing Luo, Xiaoguang Liu, Gang Wang
The Log-Structured Merge Tree (LSM-Tree) is widely used in key-value (KV) stores because of its excellent write performance. But LSM-Tree-based KV stores still suffer from the overhead of the write-ahead log and from write stalls caused by slow L0 flushes and L0-L1 compaction. New byte-addressable, persistent memory (PM) devices bring an opportunity to improve the write performance of LSM-Tree. Previous studies on PM-based LSM-Trees have not fully exploited PM’s “dual role” as both main memory and external storage. In this article, we analyze two strategies for PM-based memtables and the reasons why write stalls occur in the first place. Inspired by the analysis, we propose FlatLSM, a flat LSM-Tree designed specifically for non-volatile-memory-based KV stores. First, we propose PMTable, which separates the index from the data; its PM Log uses a Buffer Log to store KVs smaller than 256 B. Second, to solve the write stall problem, FlatLSM merges the volatile memtables and the persistent L0 into large PMTables, which reduces the depth of the LSM-Tree and concentrates I/O bandwidth on L0-L1 compaction. To mitigate write stalls caused by flushing large PMTables to the SSD, we propose a parallel flush/compaction algorithm based on KV separation. We implemented FlatLSM on top of RocksDB and evaluated it on Intel’s latest PM device, the Intel Optane DC PMM. Compared with state-of-the-art PM-based LSM-Tree KV stores, FlatLSM improves throughput by 5.2× on a random-write workload and by 2.55× on YCSB-A.
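A minimal sketch of the index/data separation with a buffer log for small KVs (smaller than 256 B), as described above. The record format, class layout, and names are illustrative assumptions, not FlatLSM’s actual PM layout or its RocksDB integration.

```python
# Minimal sketch of separating the index from the data, with a buffer log for
# KVs smaller than 256 B. Record format and class layout are illustrative
# assumptions, not FlatLSM's actual PM layout.

SMALL_KV_LIMIT = 256  # bytes

class PMTableSketch:
    def __init__(self):
        self.index = {}                  # index: key -> (log name, offset, length)
        self.buffer_log = bytearray()    # stand-in for the Buffer Log (small KVs)
        self.data_log = bytearray()      # stand-in for the main PM data region

    def put(self, key: bytes, value: bytes) -> None:
        record = key + b"\x00" + value
        if len(record) < SMALL_KV_LIMIT:
            log, name = self.buffer_log, "buffer"   # small KVs are batched here
        else:
            log, name = self.data_log, "data"
        offset = len(log)
        log.extend(record)
        self.index[key] = (name, offset, len(record))

    def get(self, key: bytes) -> bytes:
        name, offset, length = self.index[key]
        log = self.buffer_log if name == "buffer" else self.data_log
        return bytes(log[offset:offset + length].split(b"\x00", 1)[1])

table = PMTableSketch()
table.put(b"user42", b"value-for-user42")
print(table.get(b"user42"))   # -> b'value-for-user42'
```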
{"title":"FlatLSM: Write-Optimized LSM-Tree for PM-Based KV Stores","authors":"Kewen He, Yujie An, Yijing Luo, Xiaoguang Liu, Gang Wang","doi":"https://dl.acm.org/doi/10.1145/3579855","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3579855","url":null,"abstract":"<p>The Log-Structured Merge Tree (LSM-Tree) is widely used in key-value (KV) stores because of its excwrite performance. But LSM-Tree-based KV stores still have the overhead of write-ahead log and write stall caused by slow <i>L<sub>0</sub></i> flush and <i>L<sub>0</sub></i>-<i>L<sub>1</sub></i> compaction. New byte-addressable, persistent memory (PM) devices bring an opportunity to improve the write performance of LSM-Tree. Previous studies on PM-based LSM-Tree have not fully exploited PM’s “dual role” of main memory and external storage. In this article, we analyze two strategies of memtables based on PM and the reasons write stall problems occur in the first place. Inspired by the analysis result, we propose FlatLSM, a specially designed flat LSM-Tree for non-volatile memory based KV stores. First, we propose PMTable with separated index and data. The PM Log utilizes the Buffer Log to store KVs of size less than 256B. Second, to solve the write stall problem, FlatLSM merges the volatile memtables and the persistent <i>L<sub>0</sub></i> into large PMTables, which can reduce the depth of LSM-Tree and concentrate I/O bandwidth on <i>L<sub>0</sub></i>-<i>L<sub>1</sub></i> compaction. To mitigate write stall caused by flushing large PMTables to SSD, we propose a parallel flush/compaction algorithm based on KV separation. We implemented FlatLSM based on RocksDB and evaluated its performance on Intel’s latest PM device, the Intel Optane DC PMM with the state-of-the-art PM-based LSM-Tree KV stores, FlatLSM improves the throughput 5.2× on random write workload and 2.55× on YCSB-A.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":"30 5","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138526290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CacheSack: Theory and Experience of Google’s Admission Optimization for Datacenter Flash Caches
Pub Date: 2023-03-06 | DOI: 10.1145/3582014
Tzu-Wei Yang, Seth Pollen, Mustafa Uysal, Arif Merchant, Homer Wolfmeister, Junaid Khalid
This article describes the algorithm, implementation, and deployment experience of CacheSack, the admission algorithm for Google datacenter flash caches. CacheSack minimizes the dominant costs of Google’s datacenter flash caches: disk IO and flash footprint. CacheSack partitions cache traffic into disjoint categories, analyzes the observed cache benefit of each subset, and formulates a knapsack problem to assign the optimal admission policy to each subset. Prior to this work, Google datacenter flash cache admission policies were optimized manually, with most caches using the Lazy Adaptive Replacement Cache algorithm. Production experiments showed that CacheSack significantly outperforms the prior static admission policies for a 7.7% improvement of the total cost of ownership, as well as significant improvements in disk reads (9.5% reduction) and flash wearout (17.8% reduction).
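The knapsack-style assignment the abstract describes can be illustrated on a toy instance: each traffic category is offered a few candidate admission policies, each with a flash footprint and a modeled benefit, and one policy is chosen per category to maximize total benefit within a flash budget. The policy names and numbers below are made up, and the brute-force search merely stands in for CacheSack’s actual optimization.

```python
# Toy sketch of the knapsack-style assignment described above: pick one
# admission policy per traffic category to maximize modeled cache benefit
# under a flash-footprint budget. Policy names and numbers are invented,
# and brute force stands in for CacheSack's real optimization.

from itertools import product

# (policy, flash footprint in GiB, modeled benefit in saved disk reads/s)
CANDIDATES = {
    "category_A": [("admit_none", 0, 0), ("admit_on_second_miss", 40, 900), ("admit_all", 100, 1100)],
    "category_B": [("admit_none", 0, 0), ("admit_on_second_miss", 20, 300), ("admit_all", 60, 350)],
    "category_C": [("admit_none", 0, 0), ("admit_all", 30, 500)],
}
FLASH_BUDGET_GIB = 120

def best_assignment(candidates, budget):
    names = list(candidates)
    best_benefit, best_choice = float("-inf"), None
    for combo in product(*(candidates[n] for n in names)):
        footprint = sum(c[1] for c in combo)
        benefit = sum(c[2] for c in combo)
        if footprint <= budget and benefit > best_benefit:
            best_benefit = benefit
            best_choice = {n: c[0] for n, c in zip(names, combo)}
    return best_benefit, best_choice

print(best_assignment(CANDIDATES, FLASH_BUDGET_GIB))
```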
{"title":"CacheSack: Theory and Experience of Google’s Admission Optimization for Datacenter Flash Caches","authors":"Tzu-Wei Yang, Seth Pollen, Mustafa Uysal, Arif Merchant, Homer Wolfmeister, Junaid Khalid","doi":"https://dl.acm.org/doi/10.1145/3582014","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3582014","url":null,"abstract":"<p>This article describes the algorithm, implementation, and deployment experience of CacheSack, the admission algorithm for Google datacenter flash caches. CacheSack minimizes the dominant costs of Google’s datacenter flash caches: disk IO and flash footprint. CacheSack partitions cache traffic into disjoint categories, analyzes the observed cache benefit of each subset, and formulates a knapsack problem to assign the optimal admission policy to each subset. Prior to this work, Google datacenter flash cache admission policies were optimized manually, with most caches using the Lazy Adaptive Replacement Cache algorithm. Production experiments showed that CacheSack significantly outperforms the prior static admission policies for a 7.7% improvement of the total cost of ownership, as well as significant improvements in disk reads (9.5% reduction) and flash wearout (17.8% reduction).</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":"61 3","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138526292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ZNSwap: un-Block your Swap
Pub Date: 2023-03-06 | DOI: 10.1145/3582434
Shai Bergman, Niklas Cassel, Matias Bjørling, Mark Silberstein
We introduce ZNSwap, a novel swap subsystem optimized for the recent Zoned Namespace (ZNS) SSDs. ZNSwap leverages ZNS’s explicit control over data management on the drive and introduces a space-efficient host-side Garbage Collector (GC) for swap storage co-designed with the OS swap logic. ZNSwap enables cross-layer optimizations, such as direct access to the in-kernel swap usage statistics by the GC for fine-grain swap storage management, and correct accounting of the GC bandwidth usage in the OS resource isolation mechanisms to improve performance isolation in multi-tenant environments. We evaluate ZNSwap using standard Linux swap benchmarks and two production key-value stores. ZNSwap shows significant performance benefits over the Linux swap on traditional SSDs, such as stable throughput for different memory access patterns, 10× lower 99th-percentile latency, and 5× higher throughput for the memcached key-value store under realistic usage scenarios.
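As a conceptual sketch of a host-side GC for swap on ZNS (not the in-kernel ZNSwap implementation), the snippet below picks the zone holding the least still-referenced swap data, as would be known from the OS swap metadata, relocates its live slots into the open zone, and resets it.

```python
# Conceptual sketch of a host-side GC step for swap on a ZNS SSD: choose the
# zone holding the least still-referenced swap data (known from the OS swap
# metadata), relocate its live slots into the open zone, and reset it.
# Purely illustrative; the real ZNSwap GC lives in the Linux kernel.

def pick_victim_zone(zones):
    """zones: dict zone_id -> set of live (still-mapped) swap slot ids."""
    return min(zones, key=lambda z: len(zones[z]))

def gc_one_zone(zones, open_zone_id):
    victim = pick_victim_zone(zones)
    if victim == open_zone_id:
        return None
    live_slots = zones[victim]
    zones[open_zone_id].update(live_slots)   # append live data to the open zone
    zones[victim] = set()                    # zone reset: space reclaimed
    return victim, len(live_slots)

zones = {0: {1, 2, 3}, 1: {7}, 2: set(range(100, 150))}
print(gc_one_zone(zones, open_zone_id=2))    # -> (1, 1): zone 1 held the least live data
```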
{"title":"ZNSwap: un-Block your Swap","authors":"Shai Bergman, Niklas Cassel, Matias Bjørling, Mark Silberstein","doi":"https://dl.acm.org/doi/10.1145/3582434","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3582434","url":null,"abstract":"<p>We introduce <i>ZNSwap</i> , a novel swap subsystem optimized for the recent Zoned Namespace (ZNS) SSDs. ZNSwap leverages ZNS’s explicit control over data management on the drive and introduces a space-efficient host-side Garbage Collector (GC) for swap storage co-designed with the OS swap logic. ZNSwap enables cross-layer optimizations, such as direct access to the in-kernel swap usage statistics by the GC to enable fine-grain swap storage management, and correct accounting of the GC bandwidth usage in the OS resource isolation mechanisms to improve performance isolation in multi-tenant environments. We evaluate ZNSwap using standard Linux swap benchmarks and two production key-value stores. ZNSwap shows significant performance benefits over the Linux swap on traditional SSDs, such as stable throughput for different memory access patterns, and 10× lower 99th percentile latency and 5× higher throughput for <monospace>memcached</monospace> key-value store under realistic usage scenarios.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":"6 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138526305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An In-depth Comparative Analysis of Cloud Block Storage Workloads: Findings and Implications
Pub Date: 2023-03-06 | DOI: 10.1145/3572779
Jinhong Li, Qiuping Wang, Patrick P. C. Lee, Chao Shi
Cloud block storage systems support diverse types of applications in modern cloud services. Characterizing their input/output (I/O) activities is critical for guiding better system designs and optimizations. In this article, we present an in-depth comparative analysis of production cloud block storage workloads through the block-level I/O traces of billions of I/O requests collected from two production systems, Alibaba Cloud and Tencent Cloud Block Storage. We study their characteristics in terms of load intensity, spatial patterns, and temporal patterns. We also compare the cloud block storage workloads with the notable public block-level I/O workloads from the enterprise data centers at Microsoft Research Cambridge and identify the commonalities and differences among the three sources of traces. In total, we provide 6 findings from the high-level analysis and 16 findings from the detailed analysis of load intensity, spatial patterns, and temporal patterns. We discuss the implications of our findings for load balancing, cache efficiency, and storage cluster management in cloud block storage systems.
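For readers who want to reproduce this style of analysis on their own traces, the sketch below computes a few of the basic per-volume metrics such a study relies on: load intensity (IOPS), read/write mix, and a simple sequentiality measure. The trace record format used here is an assumption, not the actual Alibaba, Tencent, or MSR Cambridge trace schema.

```python
# Sketch of the kind of per-volume metrics such a trace study computes:
# load intensity, read/write mix, and a simple sequentiality measure.
# The tuple format (timestamp_s, volume, offset, size, op) is an assumption,
# not the actual Alibaba/Tencent/MSR trace schema.

from collections import defaultdict

def characterize(trace):
    per_vol = defaultdict(lambda: {"reqs": 0, "reads": 0, "seq": 0,
                                   "last_end": None, "t_min": None, "t_max": None})
    for ts, vol, off, size, op in trace:
        s = per_vol[vol]
        s["reqs"] += 1
        s["reads"] += (op == "R")
        if s["last_end"] == off:          # request starts where the last one ended
            s["seq"] += 1
        s["last_end"] = off + size
        s["t_min"] = ts if s["t_min"] is None else min(s["t_min"], ts)
        s["t_max"] = ts if s["t_max"] is None else max(s["t_max"], ts)
    report = {}
    for vol, s in per_vol.items():
        duration = max(s["t_max"] - s["t_min"], 1e-9)
        report[vol] = {"iops": s["reqs"] / duration,
                       "read_ratio": s["reads"] / s["reqs"],
                       "seq_ratio": s["seq"] / s["reqs"]}
    return report

trace = [(0.0, "vol1", 0, 4096, "W"), (0.5, "vol1", 4096, 4096, "W"),
         (1.0, "vol1", 1 << 20, 4096, "R")]
print(characterize(trace))
```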
{"title":"An In-depth Comparative Analysis of Cloud Block Storage Workloads: Findings and Implications","authors":"Jinhong Li, Qiuping Wang, Patrick P. C. Lee, Chao Shi","doi":"https://dl.acm.org/doi/10.1145/3572779","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3572779","url":null,"abstract":"<p>Cloud block storage systems support diverse types of applications in modern cloud services. Characterizing their input/output (I/O) activities is critical for guiding better system designs and optimizations. In this article, we present an in-depth comparative analysis of production cloud block storage workloads through the block-level I/O traces of billions of I/O requests collected from two production systems, Alibaba Cloud and Tencent Cloud Block Storage. We study their characteristics of load intensities, spatial patterns, and temporal patterns. We also compare the cloud block storage workloads with the notable public block-level I/O workloads from the enterprise data centers at Microsoft Research Cambridge, and we identify the commonalities and differences of the three sources of traces. To this end, we provide 6 findings through the high-level analysis and 16 findings through the detailed analysis on load intensity, spatial patterns, and temporal patterns. We discuss the implications of our findings on load balancing, cache efficiency, and storage cluster management in cloud block storage systems.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":"53 7","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138526296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Principled Schedulability Analysis for Distributed Storage Systems Using Thread Architecture Models
Pub Date: 2023-03-06 | DOI: 10.1145/3574323
Suli Yang, Jing Liu, Andrea Arpaci-Dusseau, Remzi Arpaci-Dusseau
In this article, we present an approach to systematically examine the schedulability of distributed storage systems, identify their scheduling problems, and enable effective scheduling in these systems. We use Thread Architecture Models (TAMs) to describe the behavior and interactions of different threads in a system, and show both how to construct TAMs for existing systems and utilize TAMs to identify critical scheduling problems. We specify three schedulability conditions that a schedulable TAM should satisfy: completeness, local enforceability, and independence; meeting these conditions enables a system to easily support different scheduling policies. We identify five common problems that prevent a system from satisfying the schedulability conditions, and show that these problems arise in existing systems such as HBase, Cassandra, MongoDB, and Riak, making it difficult or impossible to realize various scheduling disciplines. We demonstrate how to address these schedulability problems using both direct and indirect solutions, with different trade-offs. To show how to apply our approach to enable scheduling in realistic systems, we develop Tamed-HBase and Muzzled-HBase, sets of modifications to HBase that can realize the desired scheduling disciplines, including fairness and priority scheduling, even when presented with challenging workloads.
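The sketch below shows one way to encode a thread architecture as data: stages (thread pools), the resources each consumes, and the request flow between them, together with a deliberately simplified check inspired by the completeness condition (every resource-consuming stage should expose a queue where scheduling decisions can be made). This is only a rough reading of the abstract; the paper’s formal definitions of completeness, local enforceability, and independence are more involved.

```python
# Minimal data-model sketch for describing a system's thread architecture.
# The "complete" check is a simplified, unofficial reading of the paper's
# completeness condition, used here purely for illustration.

from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    consumes: set            # e.g. {"cpu", "io"}
    has_sched_queue: bool    # can requests be reordered/prioritized here?
    downstream: list = field(default_factory=list)

def complete(stages):
    """Flag resource-consuming stages with no schedulable queue in front of them."""
    return [s.name for s in stages if s.consumes and not s.has_sched_queue]

rpc = Stage("rpc-handlers", {"cpu"}, has_sched_queue=True)
flush = Stage("flusher", {"io"}, has_sched_queue=False)   # a potential problem spot
rpc.downstream.append(flush)
print(complete([rpc, flush]))   # -> ['flusher']
```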
{"title":"Principled Schedulability Analysis for Distributed Storage Systems Using Thread Architecture Models","authors":"Suli Yang, Jing Liu, Andrea Arpaci-Dusseau, Remzi Arpaci-Dusseau","doi":"https://dl.acm.org/doi/10.1145/3574323","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3574323","url":null,"abstract":"<p>In this article, we present an approach to systematically examine the <i>schedulability</i> of distributed storage systems, identify their scheduling problems, and enable effective scheduling in these systems. We use <i>Thread Architecture Models (TAMs)</i> to describe the behavior and interactions of different threads in a system, and show both how to construct TAMs for existing systems and utilize TAMs to identify critical scheduling problems. We specify three schedulability conditions that a schedulable TAM should satisfy: completeness, local enforceability, and independence; meeting these conditions enables a system to easily support different scheduling policies. We identify five common problems that prevent a system from satisfying the schedulability conditions, and show that these problems arise in existing systems such as HBase, Cassandra, MongoDB, and Riak, making it difficult or impossible to realize various scheduling disciplines. We demonstrate how to address these schedulability problems using both direct and indirect solutions, with different trade-offs. To show how to apply our approach to enable scheduling in realistic systems, we develop Tamed-HBase and Muzzled-HBase, sets of modifications to HBase that can realize the desired scheduling disciplines, including fairness and priority scheduling, even when presented with challenging workloads.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":"229 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138542299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visibility Graph-based Cache Management for DRAM Buffer Inside Solid-state Drives
Pub Date: 2023-03-03 | DOI: 10.1145/3586576
Zhibing Sha, Jun Li, Fengxiang Zhang, Min Huang, Zhigang Cai, François Trahay, Jianwei Liao
Most solid-state drives (SSDs) adopt an on-board Dynamic Random Access Memory (DRAM) to buffer write data, which can significantly reduce the number of write operations committed to the flash array of the SSD if the data exhibits locality in write operations. This article focuses on efficiently managing the small amount of DRAM cache inside SSDs. The basic idea is to employ the visibility graph technique to unify the temporal and spatial locality of I/O accesses for directing cache management in SSDs. Specifically, we propose to adaptively generate the visibility graph of cached data pages and then support batch adjustment of adjacent or nearby (hot) cached data pages by referring to the connection situations in the visibility graph. In addition, we propose to evict the buffered data pages in batches, also by referring to the connection situations, to maximize the internal flushing parallelism of SSD devices without worsening I/O congestion. Trace-driven simulation experiments show that our proposal improves cache hits by between 0.8% and 19.8% and reduces the overall I/O latency by 25.6% on average, compared to state-of-the-art cache management schemes inside SSDs.
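For reference, the standard natural-visibility-graph construction over a numeric sequence looks as follows; two points are connected if the straight line between them passes above every intermediate point. How the paper maps cached pages and their access statistics onto such a sequence, and how connectivity then drives batch adjustment and eviction, goes beyond what the abstract states, so the series in the example is generic.

```python
# Standard natural-visibility-graph construction over a numeric sequence.
# The example series is generic; how cached pages map onto such a series
# is not specified in the abstract.

def visibility_graph(series):
    """Return edges (i, j) where point i can 'see' point j over the points between."""
    n = len(series)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            visible = True
            for k in range(i + 1, j):
                # Height of the sight line from (i, y_i) to (j, y_j) at x = k.
                line = series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                if series[k] >= line:
                    visible = False
                    break
            if visible:
                edges.append((i, j))
    return edges

# Example: a short sequence of per-page access counts (illustrative values).
print(visibility_graph([3, 1, 2, 5, 1, 4]))
```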
{"title":"Visibility Graph-based Cache Management for DRAM Buffer Inside Solid-state Drives","authors":"Zhibing Sha, Jun Li, Fengxiang Zhang, Min Huang, Zhigang Cai, François Trahay, Jianwei Liao","doi":"10.1145/3586576","DOIUrl":"https://doi.org/10.1145/3586576","url":null,"abstract":"Most solid-state drives (SSDs) adopt an on-board Dynamic Random Access Memory (DRAM) to buffer the write data, which can significantly reduce the amount of write operations committed to the flash array of SSD if data exhibits locality in write operations. This article focuses on efficiently managing the small amount of DRAM cache inside SSDs. The basic idea is to employ the visibility graph technique to unify both temporal and spatial locality of references of I/O accesses, for directing cache management in SSDs. Specifically, we propose to adaptively generate the visibility graph of cached data pages and then support batch adjustment of adjacent or nearby (hot) cached data pages by referring to the connection situations in the visibility graph. In addition, we propose to evict the buffered data pages in batches by also referring to the connection situations, to maximize the internal flushing parallelism of SSD devices without worsening I/O congestion. The trace-driven simulation experiments show that our proposal can yield improvements on cache hits by between 0.8% and 19.8%, and the overall I/O latency by 25.6% on average, compared to state-of-the-art cache management schemes inside SSDs.","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":" ","pages":"1 - 21"},"PeriodicalIF":1.7,"publicationDate":"2023-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43129495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}