Revisiting widely held SSD expectations and rethinking system-level implications

Myoungsoo Jung, M. Kandemir
{"title":"Revisiting widely held SSD expectations and rethinking system-level implications","authors":"Myoungsoo Jung, M. Kandemir","doi":"10.1145/2465529.2465548","DOIUrl":null,"url":null,"abstract":"Storage applications leveraging Solid State Disk (SSD) technology are being widely deployed in diverse computing systems. These applications accelerate system performance by exploiting several SSD-specific characteristics. However, modern SSDs have undergone a dramatic technology and architecture shift in the past few years, which makes widely held assumptions and expectations regarding them highly questionable. The main goal of this paper is to question popular assumptions and expectations regarding SSDs through an extensive experimental analysis using 6 state-of-the-art SSDs from different vendors. Our analysis leads to several conclusions which are either not reported in prior SSD literature, or contradict to current conceptions. For example, we found that SSDs are not biased toward read-intensive workloads in terms of performance and reliability. Specifically, random read performance of SSDs is worse than sequential and random write performance by 40% and 39% on average, and more importantly, the performance of sequential reads gets significantly worse over time. Further, we found that reads can shorten SSD lifetime more than writes, which is very unfortunate, given the fact that many existing systems/platforms already employ SSDs as read caches or in applications that are highly read intensive. We also performed a comprehensive study to understand the worst-case performance characteristics of our SSDs, and investigated the viability of recently proposed enhancements that are geared towards alleviating the worst-case performance challenges, such as TRIM commands and background-tasks. Lastly, we uncover the overheads of these enhancements and their limits, and discuss system-level implications.","PeriodicalId":306456,"journal":{"name":"Measurement and Modeling of Computer Systems","volume":"508 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"119","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Measurement and Modeling of Computer Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2465529.2465548","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 119

Abstract

Storage applications leveraging Solid State Disk (SSD) technology are being widely deployed in diverse computing systems. These applications accelerate system performance by exploiting several SSD-specific characteristics. However, modern SSDs have undergone a dramatic technology and architecture shift in the past few years, which makes widely held assumptions and expectations regarding them highly questionable. The main goal of this paper is to question popular assumptions and expectations regarding SSDs through an extensive experimental analysis of 6 state-of-the-art SSDs from different vendors. Our analysis leads to several conclusions that are either not reported in prior SSD literature or contradict current conceptions. For example, we found that SSDs are not biased toward read-intensive workloads in terms of performance and reliability. Specifically, the random read performance of SSDs is worse than their sequential and random write performance by 40% and 39% on average, and, more importantly, sequential read performance degrades significantly over time. Further, we found that reads can shorten SSD lifetime more than writes, which is unfortunate given that many existing systems and platforms already employ SSDs as read caches or in highly read-intensive applications. We also performed a comprehensive study of the worst-case performance characteristics of our SSDs and investigated the viability of recently proposed enhancements geared toward alleviating worst-case performance challenges, such as TRIM commands and background tasks. Lastly, we uncover the overheads and limits of these enhancements and discuss their system-level implications.
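The abstract names two host-visible mechanisms, TRIM commands and background tasks, as candidate mitigations for worst-case SSD performance. As a point of reference only (not taken from the paper), the sketch below shows one common way a Linux host issues a TRIM/discard to a block device, via the BLKDISCARD ioctl. The device path, offset, and length are hypothetical placeholders, and discarding a range destroys the data stored there.

```c
/* Minimal sketch (not from the paper): issuing a TRIM (discard) to an SSD
 * block device on Linux via the BLKDISCARD ioctl. The device path, offset,
 * and length are illustrative placeholders; the discarded range loses its data. */
#include <fcntl.h>
#include <linux/fs.h>      /* BLKDISCARD */
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    const char *dev = "/dev/sdX";          /* placeholder device path */
    int fd = open(dev, O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* BLKDISCARD takes a {start, length} pair in bytes, typically aligned
     * to the device's discard granularity. */
    uint64_t range[2] = { 0, 1ULL << 20 }; /* discard the first 1 MiB */
    if (ioctl(fd, BLKDISCARD, &range) < 0)
        perror("ioctl(BLKDISCARD)");

    close(fd);
    return 0;
}
```

In practice the same effect is usually obtained through the fstrim utility on a mounted filesystem or the discard mount option; the paper's measurements concern how the drive handles such commands internally, not this host-side call.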