{"title":"Balancing I/O and wear-out distribution inside SSDs with optimized cache management","authors":"Jiaxu Wu, Jiaojiao Wu, Aobo Yang, Fan Yang, Zhigang Cai, Jianwei Liao","doi":"10.1016/j.sysarc.2025.103392","DOIUrl":null,"url":null,"abstract":"<div><div>NAND flash memory-based solid-state drives (SSDs) have been adopted as storage infrastructure in a wide range of computing systems. In order to service an I/O request, the logical page address (<em>LPA</em>) of the request should be mapped to a physical page address (<em>PPA</em>), termed page-level address mapping in SSDs. As a fundamental mapping scheme, static mapping needs a small-scale mapping table and ensures good read parallelism, but it may bring about uneven I/O and wear-out distribution across SSD parallel units (<em>e.g.</em> flash planes), thus resulting in low write efficiency. To mitigate the negative effects of static mapping, this paper proposes a novel cache management scheme to not only guarantee I/O responsiveness but also balance I/O and wear-out distribution. Specifically, we first introduce directly flushing a portion of data pages onto the flash array while they are cold and the target parallel units have endured a small number of erase operations. After that, we present a method for selecting victim data pages from the data cache, by referring to the factors of pending I/O requests and the wear-out level on the flash memory. Through a series of simulation experiments on selected block I/O traces of real-world applications, we show that our approach achieves an average I/O latency reduction of <span>16.1</span>% compared to <em>Baseline</em>, <span>13.6</span>% over <em>GCaR</em>, <span>12.4</span>% over <em>LCR</em>, and <span>6.6</span>% over <em>ARB</em> while simultaneously balancing I/O and wear-out distribution. 
These results demonstrate its superiority over existing state-of-the-art schemes.</div></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"162 ","pages":"Article 103392"},"PeriodicalIF":3.7000,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Systems Architecture","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1383762125000645","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Abstract
NAND flash memory-based solid-state drives (SSDs) have been adopted as storage infrastructure in a wide range of computing systems. To service an I/O request, the logical page address (LPA) of the request must be mapped to a physical page address (PPA), a process termed page-level address mapping in SSDs. As a fundamental mapping scheme, static mapping requires only a small mapping table and ensures good read parallelism, but it may cause uneven I/O and wear-out distribution across SSD parallel units (e.g., flash planes), resulting in low write efficiency. To mitigate the negative effects of static mapping, this paper proposes a novel cache management scheme that not only guarantees I/O responsiveness but also balances I/O and wear-out distribution. Specifically, we first introduce directly flushing a portion of data pages onto the flash array while they are cold and their target parallel units have endured only a small number of erase operations. We then present a method for selecting victim data pages from the data cache that considers both the pending I/O requests and the wear-out level of the flash memory. Through a series of simulation experiments on selected block I/O traces of real-world applications, we show that our approach reduces average I/O latency by 16.1% compared to Baseline, 13.6% over GCaR, 12.4% over LCR, and 6.6% over ARB, while simultaneously balancing I/O and wear-out distribution. These results demonstrate its superiority over existing state-of-the-art schemes.
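As a rough illustration of the victim-selection idea the abstract describes, the following Python sketch scores dirty cached pages by combining the pending I/O load and the erase count of the parallel unit (plane) each page statically maps to, then evicts the lowest-scoring page. This is a minimal sketch under assumed interfaces; the names (CachedPage, select_victim, the alpha weight) are illustrative and do not come from the paper.

```python
from dataclasses import dataclass

@dataclass
class CachedPage:
    lpa: int      # logical page address
    plane: int    # parallel unit this LPA statically maps to
    dirty: bool   # only dirty pages need a flush on eviction

def select_victim(pages, pending_ios, erase_counts, alpha=0.5):
    """Pick the dirty cached page whose flush target is both lightly
    loaded (few pending I/Os) and lightly worn (few erases).

    pending_ios:  dict plane -> number of queued I/O requests
    erase_counts: dict plane -> erase operations endured so far
    alpha:        illustrative weight balancing load vs. wear
    """
    def score(p):
        load = pending_ios.get(p.plane, 0)
        wear = erase_counts.get(p.plane, 0)
        return alpha * load + (1 - alpha) * wear

    candidates = [p for p in pages if p.dirty]
    return min(candidates, key=score) if candidates else None

# Example: plane 0 is busy and worn, plane 1 is idle and fresh,
# so the page mapped to plane 1 is the cheaper, wear-friendlier victim.
cache = [CachedPage(lpa=0, plane=0, dirty=True),
         CachedPage(lpa=1, plane=1, dirty=True)]
victim = select_victim(cache, pending_ios={0: 5, 1: 1},
                       erase_counts={0: 10, 1: 2})
```

The weighted sum is only one plausible way to combine the two factors the abstract names; the paper's actual policy may rank or threshold them differently.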
About the Journal
The Journal of Systems Architecture: Embedded Software Design (JSA) is a journal covering all design and architectural aspects related to embedded systems and software. It ranges from the microarchitecture level via the system software level up to the application-specific architecture level. Aspects such as real-time systems, operating systems, FPGA programming, programming languages, communications (limited to analysis and the software stack), mobile systems, parallel and distributed architectures as well as additional subjects in the computer and system architecture area will fall within the scope of this journal. Technology will not be a main focus, but its use and relevance to particular designs will be. Case studies are welcome but must contribute more than just a design for a particular piece of software.
Design automation of such systems, including methodologies, techniques, and tools for their design, as well as novel designs of software components, falls within the scope of this journal. Novel applications that use embedded systems are also central to this journal. While hardware is not a part of this journal, hardware/software co-design methods that consider the interplay between software and hardware components, with an emphasis on software, are also relevant here.