LAC: A Workload Intensity-Aware Caching Scheme for High-Performance SSDs

IF 3.6 | Tier 2 (Computer Science) | JCR Q2 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) | IEEE Transactions on Computers | Pub Date: 2024-04-04 | DOI: 10.1109/TC.2024.3385290
Hui Sun; Haoqiang Tong; Yinliang Yue; Xiao Qin
{"title":"LAC:面向高性能固态硬盘的工作负载强度感知缓存方案","authors":"Hui Sun;Haoqiang Tong;Yinliang Yue;Xiao Qin","doi":"10.1109/TC.2024.3385290","DOIUrl":null,"url":null,"abstract":"Inside an NAND Flash-based solid-state disk (SSD), utilizing DRAM-based write-back caching is a practical approach to bolstering the SSD performance. Existing caching schemes overlook the problem of high user I/Os intensity due to the dramatic increment of I/Os accesses. The hefty I/O intensity causes access conflict of I/O requests inside an SSD: a large number of requests are blocked to impair response time. Conventional passive update caching schemes merely replace pages upon access misses in event of full cache. Tail latency occurs facing a colossal I/O intensity. Active write-back caching schemes utilize idle time among requests coupled with free internal bandwidth to flush dirty data into flash memory in advance, lowering response time. Frequent active write-back operations, however, cause access conflict of requests – a culprit that expands write amplification (WA) and degrades SSD lifetime. We address the above issues by proposing a \n<italic>work<b>L</b></i>\noad intensity-aware and \n<bold><i>A</i></b>\nctive parallel \n<bold><i>Caching</i></b>\n scheme - LAC - that is powered by collaborative-load awareness. LAC fends off user I/Os’ access conflict under high-I/O-intensity workloads. If the I/O intensity is low – intervals between consecutive I/O requests are large – and the target die is free, LAC actively and concurrently writes dirty data of adjacent addresses back to the die, cultivating clean data generated by the active write-back. Replacing clean data in priority can reduce response time and prevent flash transactions from being blocked. We devise a data protection method to write back cold data based on various criteria in the cache replacement and active write-backs. Thus, LAC reduces WA incurred by actively writing back hot data and extends SSD lifetime. We compare LAC against the six caching schemes (LRU, CFLRU, GCaR-LRU, MQSim, VS-Batch, and Co-Active) in the modern MQSim simulator. The results unveil that LAC trims response time and erase count by up to 78.5% and 47.8%, with an average of 64.4% and 16.6%, respectively.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"73 7","pages":"1738-1752"},"PeriodicalIF":3.6000,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LAC: A Workload Intensity-Aware Caching Scheme for High-Performance SSDs\",\"authors\":\"Hui Sun;Haoqiang Tong;Yinliang Yue;Xiao Qin\",\"doi\":\"10.1109/TC.2024.3385290\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Inside an NAND Flash-based solid-state disk (SSD), utilizing DRAM-based write-back caching is a practical approach to bolstering the SSD performance. Existing caching schemes overlook the problem of high user I/Os intensity due to the dramatic increment of I/Os accesses. The hefty I/O intensity causes access conflict of I/O requests inside an SSD: a large number of requests are blocked to impair response time. Conventional passive update caching schemes merely replace pages upon access misses in event of full cache. Tail latency occurs facing a colossal I/O intensity. Active write-back caching schemes utilize idle time among requests coupled with free internal bandwidth to flush dirty data into flash memory in advance, lowering response time. 
Frequent active write-back operations, however, cause access conflict of requests – a culprit that expands write amplification (WA) and degrades SSD lifetime. We address the above issues by proposing a \\n<italic>work<b>L</b></i>\\noad intensity-aware and \\n<bold><i>A</i></b>\\nctive parallel \\n<bold><i>Caching</i></b>\\n scheme - LAC - that is powered by collaborative-load awareness. LAC fends off user I/Os’ access conflict under high-I/O-intensity workloads. If the I/O intensity is low – intervals between consecutive I/O requests are large – and the target die is free, LAC actively and concurrently writes dirty data of adjacent addresses back to the die, cultivating clean data generated by the active write-back. Replacing clean data in priority can reduce response time and prevent flash transactions from being blocked. We devise a data protection method to write back cold data based on various criteria in the cache replacement and active write-backs. Thus, LAC reduces WA incurred by actively writing back hot data and extends SSD lifetime. We compare LAC against the six caching schemes (LRU, CFLRU, GCaR-LRU, MQSim, VS-Batch, and Co-Active) in the modern MQSim simulator. The results unveil that LAC trims response time and erase count by up to 78.5% and 47.8%, with an average of 64.4% and 16.6%, respectively.\",\"PeriodicalId\":13087,\"journal\":{\"name\":\"IEEE Transactions on Computers\",\"volume\":\"73 7\",\"pages\":\"1738-1752\"},\"PeriodicalIF\":3.6000,\"publicationDate\":\"2024-04-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Computers\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10492468/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computers","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10492468/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Inside a NAND flash-based solid-state disk (SSD), DRAM-based write-back caching is a practical approach to bolstering SSD performance. Existing caching schemes, however, overlook the problem of high user I/O intensity brought on by the dramatic growth in I/O accesses. Heavy I/O intensity causes access conflicts among I/O requests inside an SSD: a large number of requests are blocked, which impairs response time. Conventional passive-update caching schemes merely replace pages upon access misses when the cache is full, so tail latency emerges under colossal I/O intensity. Active write-back caching schemes exploit the idle time between requests, together with free internal bandwidth, to flush dirty data to flash memory in advance, lowering response time; frequent active write-backs, however, cause access conflicts among requests, which inflates write amplification (WA) and degrades SSD lifetime. We address these issues by proposing LAC, a workLoad intensity-aware and Active parallel Caching scheme powered by collaborative load awareness. LAC fends off access conflicts among user I/Os under high-I/O-intensity workloads. When the I/O intensity is low (the intervals between consecutive I/O requests are large) and the target die is free, LAC actively and concurrently writes dirty data of adjacent addresses back to that die, accumulating clean data produced by the active write-back. Evicting clean data first reduces response time and prevents flash transactions from being blocked. We also devise a data-protection method that writes cold data back according to different criteria during cache replacement and active write-back, so LAC reduces the WA incurred by actively writing back hot data and extends SSD lifetime. We compare LAC against six caching schemes (LRU, CFLRU, GCaR-LRU, MQSim, VS-Batch, and Co-Active) in the modern MQSim simulator. The results show that LAC trims response time and erase count by up to 78.5% and 47.8%, with averages of 64.4% and 16.6%, respectively.
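The paper itself provides no source code; the sketch below is a minimal, hypothetical Python illustration of the control flow the abstract describes: estimate I/O intensity from the gap between consecutive requests, actively flush dirty pages with adjacent logical addresses to an idle die when intensity is low, and prefer clean pages at eviction time. Every name here (CacheEntry, LACLikeCache, die_is_idle, flush_to_die) and every threshold is an illustrative placeholder, not LAC's actual interface.

```python
from collections import OrderedDict


class CacheEntry:
    """A cached logical page; a dirty page must be flushed to flash before eviction."""

    def __init__(self, lpn, dirty):
        self.lpn = lpn      # logical page number
        self.dirty = dirty


class LACLikeCache:
    """Toy write-back cache mimicking the decisions named in the abstract.

    Hypothetical simplifications: a fixed inter-arrival threshold marks "low
    intensity", die_is_idle()/flush_to_die() stand in for the SSD back end,
    and "adjacent addresses" means nearby logical page numbers.
    """

    def __init__(self, capacity, intensity_threshold_us=500.0):
        self.capacity = capacity
        self.entries = OrderedDict()   # lpn -> CacheEntry, LRU order: oldest first
        self.intensity_threshold_us = intensity_threshold_us
        self.last_arrival_us = None

    def on_request(self, lpn, is_write, now_us, die_is_idle, flush_to_die):
        # 1. Estimate workload intensity from the inter-arrival gap.
        gap = float("inf") if self.last_arrival_us is None else now_us - self.last_arrival_us
        self.last_arrival_us = now_us
        low_intensity = gap >= self.intensity_threshold_us

        # 2. Serve the request; a write marks (or allocates) a dirty page.
        if lpn in self.entries:
            self.entries.move_to_end(lpn)
            if is_write:
                self.entries[lpn].dirty = True
        else:
            self._evict_if_full(flush_to_die)
            self.entries[lpn] = CacheEntry(lpn, dirty=is_write)

        # 3. Under low intensity, actively flush adjacent dirty pages to an idle
        #    die so that later evictions can find clean (cheap) victims.
        if low_intensity:
            self._active_write_back(lpn, die_is_idle, flush_to_die)

    def _active_write_back(self, lpn, die_is_idle, flush_to_die, window=4):
        for neighbor in range(lpn - window, lpn + window + 1):
            entry = self.entries.get(neighbor)
            if entry is not None and entry.dirty and die_is_idle(neighbor):
                flush_to_die(neighbor)   # background flush while the die is free
                entry.dirty = False      # the cached copy is now clean

    def _evict_if_full(self, flush_to_die):
        if len(self.entries) < self.capacity:
            return
        # Prefer the least-recently-used clean page: evicting it costs no flash write.
        victim = next((l for l, e in self.entries.items() if not e.dirty), None)
        if victim is not None:
            del self.entries[victim]
            return
        # No clean page available: flush the LRU dirty page, then evict it.
        victim, _ = next(iter(self.entries.items()))
        flush_to_die(victim)
        del self.entries[victim]
```

In the actual scheme, the intensity criterion, the adjacency window, and the cold-data protection rules are governed by LAC's collaborative load awareness; the sketch only mirrors the high-level structure summarized in the abstract.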
Source journal
IEEE Transactions on Computers (Engineering & Technology — Engineering: Electrical & Electronic)
CiteScore: 6.60 | Self-citation rate: 5.40% | Annual articles: 199 | Review time: 6.0 months
Journal description: The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field. It publishes papers on research in areas of current interest to the readers. These areas include, but are not limited to, the following: a) computer organizations and architectures; b) operating systems, software systems, and communication protocols; c) real-time systems and embedded systems; d) digital devices, computer components, and interconnection networks; e) specification, design, prototyping, and testing methods and tools; f) performance, fault tolerance, reliability, security, and testability; g) case studies and experimental and theoretical evaluations; and h) new and important applications and trends.