
Latest publications from INFLOW '13

Phase change memory in enterprise storage systems: silver bullet or snake oil?
Pub Date : 2013-11-03 DOI: 10.1145/2527792.2527794
Hyojun Kim, S. Seshadri, Clem Dickey, Lawrence Chiu
Storage devices based on Phase Change Memory (PCM) are beginning to attract considerable attention in both industry and academia. But whether the technology in its current state can be a commercially and technically viable alternative to entrenched technologies such as flash-based SSDs remains an open question. To address this question, it is important to consider PCM SSD devices not just from a device standpoint, but also from a holistic perspective. This paper presents the results of our performance measurement study of a recent all-PCM SSD prototype. The average latency for a 4 KB random read is 6.7 μs, about 16x faster than a comparable eMLC flash SSD. The distribution of I/O response times is also much narrower than that of the flash SSD for both reads and writes. Based on real-world workload traces, we model a hypothetical storage device consisting of flash, HDD, and PCM to identify the combinations of device types that offer the best performance within cost constraints. Our results show that, even at current price points, PCM storage devices show promise as a new component in multi-tiered enterprise storage systems.
Citations: 8
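The tiering study summarized above searches for flash/HDD/PCM combinations that give the best performance under a cost constraint. A minimal sketch of that kind of search is below; the per-GB prices, latencies, working-set model, and the `best_config` helper are illustrative assumptions, not numbers or code from the paper.

```python
from itertools import product

# Hypothetical per-GB prices and average 4 KB read latencies (not from the paper).
DEVICES = {
    "PCM":   {"usd_per_gb": 4.00, "latency_us": 6.7},
    "flash": {"usd_per_gb": 0.80, "latency_us": 110.0},
    "HDD":   {"usd_per_gb": 0.05, "latency_us": 5000.0},
}

def avg_latency(capacities_gb, working_set_gb):
    """Toy model: the working set fills the fastest tiers first; latency is the
    capacity-weighted average over the portion of the working set each tier holds."""
    remaining, total = working_set_gb, 0.0
    for name in sorted(DEVICES, key=lambda n: DEVICES[n]["latency_us"]):
        served = min(remaining, capacities_gb.get(name, 0))
        total += served * DEVICES[name]["latency_us"]
        remaining -= served
    if remaining > 0:                       # working set does not fit; charge HDD latency
        total += remaining * DEVICES["HDD"]["latency_us"]
    return total / working_set_gb

def best_config(budget_usd, working_set_gb, step_gb=64, max_gb=1024):
    """Exhaustively search tier sizes (in step_gb increments) under a cost budget."""
    sizes = range(0, max_gb + 1, step_gb)
    best = None
    for pcm, flash, hdd in product(sizes, repeat=3):
        caps = {"PCM": pcm, "flash": flash, "HDD": hdd}
        cost = sum(gb * DEVICES[n]["usd_per_gb"] for n, gb in caps.items())
        if cost > budget_usd:
            continue
        lat = avg_latency(caps, working_set_gb)
        if best is None or lat < best[0]:
            best = (lat, caps, cost)
    return best

if __name__ == "__main__":
    print(best_config(budget_usd=1000, working_set_gb=512))
```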
MetaData persistence using storage class memory: experiences with flash-backed DRAM
Pub Date : 2013-11-03 DOI: 10.1145/2527792.2527800
Jithin Jose, M. Banikazemi, W. Belluomini, Chet Murthy, D. Panda
Storage Class Memory (SCM) blends the best properties of main memory and hard disk drives. It offers non-volatility and byte addressability, and promises short access times at low cost per bit. Earlier research in this field explored designs exploiting SCM features and relied on simulations or theoretical models for evaluation. In this work, we explore the design challenges of achieving non-volatility using real SCM hardware that is available today: flash-backed DRAM. We present a performance analysis of flash-backed DRAM and describe the system issues involved in achieving true non-volatility within a memory hierarchy that was designed on the assumption that data is volatile. We present software abstractions that allow applications to be redesigned easily using SCM features, without having to worry about these system issues. Furthermore, we present case studies using two applications with different characteristics: an SSD-based caching layer used in enterprise storage (Flash Cache) and an in-memory database (SolidDB), and redesign them using our software abstractions. Our performance evaluations reveal that an SCM-aware Flash Cache design can enable persistence with less than 2% degradation in performance. Similarly, redesigning the SolidDB persistence layer using SCM improved performance by a factor of two. To the best of our knowledge, this is the first work that evaluates SCM performance and demonstrates application redesign using real SCM hardware.
Citations: 5
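The paper's software abstractions hide system-level persistence issues behind a small interface. A minimal sketch of such an abstraction, using an ordinary memory-mapped file as a stand-in for flash-backed DRAM, is shown below; the `PersistentCounter` class and its API are hypothetical, not the authors' design.

```python
import mmap, os, struct

class PersistentCounter:
    """Toy persistent metadata cell: an 8-byte counter kept in a memory-mapped
    region. A real SCM-backed design would map flash-backed DRAM instead of a
    file and rely on cache-line flushes rather than msync."""
    def __init__(self, path):
        new = not os.path.exists(path)
        fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
        os.ftruncate(fd, 8)
        self.buf = mmap.mmap(fd, 8)
        if new:
            self.buf[:8] = struct.pack("<Q", 0)
            self.buf.flush()

    @property
    def value(self):
        return struct.unpack("<Q", self.buf[:8])[0]

    def increment(self):
        self.buf[:8] = struct.pack("<Q", self.value + 1)
        self.buf.flush()          # persistence point (msync under the hood)

c = PersistentCounter("/tmp/meta.bin")
c.increment()
print(c.value)                    # survives process restarts
```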
Bankshot: caching slow storage in fast non-volatile memory
Pub Date : 2013-11-03 DOI: 10.1145/2527792.2527793
Meenakshi Sundaram Bhaskaran, Jian Xu, S. Swanson
Emerging non-volatile storage technologies (e.g., Phase Change Memory, STT-RAM) allow access to persistent data at latencies an order of magnitude lower than SSDs. The density and price gap between NVMs and denser storage makes NVM economically most suitable as a cache for larger, more conventional storage (i.e., NAND flash-based SSDs and disks). Existing storage caching architectures (even those that use fast flash-based SSDs) introduce significant software overhead that can obscure the performance benefits of faster memories. We propose Bankshot, a caching architecture that allows cache hits to bypass the OS (and the associated software overheads) entirely, while relying on the OS for heavyweight operations such as servicing misses and performing write-backs. We evaluate several design decisions in Bankshot, including different cache management policies and different levels of hardware and software support for tracking dirty data and maintaining metadata. We find that with hardware support Bankshot can offer up to a 5× speedup over conventional caching systems.
Citations: 27
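The division of labor described above (hits served entirely in user space, misses and write-backs delegated to the OS) can be illustrated with a toy cache model. The `UserSpaceCache` class, its eviction policy, and the callbacks standing in for the OS path are assumptions for illustration only.

```python
class UserSpaceCache:
    """Toy model of a Bankshot-style split: lookups and hits stay in the
    application's address space; only misses and dirty write-backs cross
    into the (simulated) OS path."""
    def __init__(self, capacity, os_read, os_write):
        self.capacity = capacity
        self.data = {}                        # block -> value, stands in for mapped NVM
        self.dirty = set()
        self.os_read, self.os_write = os_read, os_write

    def read(self, block):
        if block in self.data:                # hit: no OS involvement
            return self.data[block]
        value = self.os_read(block)           # miss: heavyweight OS path
        self._install(block, value)
        return value

    def write(self, block, value):
        self._install(block, value)
        self.dirty.add(block)                 # written back lazily via the OS path

    def _install(self, block, value):
        if block not in self.data and len(self.data) >= self.capacity:
            victim = next(iter(self.data))    # arbitrary eviction for the sketch
            if victim in self.dirty:
                self.os_write(victim, self.data[victim])
                self.dirty.discard(victim)
            del self.data[victim]
        self.data[block] = value

backing = {}
cache = UserSpaceCache(capacity=2,
                       os_read=lambda b: backing.get(b, 0),
                       os_write=backing.__setitem__)
cache.write(1, "x")
print(cache.read(1))                          # hit, served without the OS path
```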
High performance & low latency in solid-state drives through redundancy
Pub Date : 2013-11-03 DOI: 10.1145/2527792.2527798
Dimitris Skourtis, D. Achlioptas, C. Maltzahn, S. Brandt
Solid-state drives are becoming increasingly popular in enterprise storage systems, playing the role of large caches and of permanent storage. Although SSDs provide faster random access than hard drives, their latency under mixed read/write workloads is highly variable and can even exceed that of hard drives (e.g., 100 ms for a single read). Many systems with mixed workloads have low-latency requirements or require predictable performance and guarantees. In such cases, the performance variance of SSDs becomes a problem for both predictability and raw performance. In this paper, we propose a design based on redundancy, which provides high performance and low latency for reads under read/write workloads by physically separating reads from writes. More specifically, reads achieve read-only performance while writes perform at least as well as before. We evaluate our design using micro-benchmarks and real traces, illustrating the performance benefits of read/write separation in solid-state drives.
Citations: 2
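A rough sketch of read/write separation through redundancy: two mirrored replicas swap roles so that one serves only reads while the other absorbs writes, which are replayed before the roles flip. The window mechanism and the `MirroredPair` API below are illustrative assumptions, not the paper's design.

```python
class MirroredPair:
    """Toy model of read/write separation over two replicas. Reads always go to
    the replica that is currently not accepting writes, so they never queue
    behind (or trigger garbage collection caused by) writes."""
    def __init__(self):
        self.replicas = [{}, {}]
        self.writer = 0          # index of the replica currently taking writes
        self.log = []            # writes to replay on the other replica at the flip

    def write(self, key, value):
        self.replicas[self.writer][key] = value
        self.log.append((key, value))

    def read(self, key):
        return self.replicas[1 - self.writer].get(key)   # read-only replica

    def flip(self):
        """End of a window: bring the read replica up to date, then swap roles."""
        other = 1 - self.writer
        for key, value in self.log:
            self.replicas[other][key] = value
        self.log.clear()
        self.writer = other

pair = MirroredPair()
pair.write("a", 1)
pair.flip()
print(pair.read("a"))            # -> 1, visible once the window has flipped
```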
Improving performance and lifetime of the SSD RAID-based host cache through a log-structured approach
Pub Date : 2013-11-03 DOI: 10.1145/2527792.2527795
Y. Oh, Jongmoo Choi, Donghee Lee, S. Noh
This paper proposes a cost-effective and reliable SSD host cache solution that we call SRC (SSD RAID Cache). Cost-effectiveness comes from using multiple low-cost SSDs, and reliability is enhanced through RAID-based data redundancy. RAID, however, is managed in a log-structured manner across the SSDs, effectively eliminating the detrimental read-modify-write operations found in conventional RAID-5. Within the proposed framework, we also propose to eliminate parity blocks for stripes composed entirely of clean blocks, since the original data resides in primary storage. We further propose the use of destaging, instead of garbage collection, to make space when the SSD cache is full. We show that the proposed techniques have significant implications for the performance of the cache and the lifetime of the SSDs that comprise it. Finally, we study various ways in which stripes can be formed based on data and parity block allocation policies. Our experimental results using a range of realistic I/O workloads show that the SRC scheme performs on average 59% better than a conventional SSD cache scheme supporting RAID-5. In terms of lifetime, our results show that SRC reduces the erase count of the SSD drives by an average of 47% compared to the RAID-5 scheme.
Citations: 15
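One of SRC's ideas, omitting parity for stripes that consist only of clean blocks (whose data still resides in primary storage), can be shown with a small stripe-formation routine. The stripe width, XOR parity, and block representation below are assumptions for illustration.

```python
from functools import reduce

STRIPE_WIDTH = 4   # data blocks per stripe (assumed)

def xor_parity(blocks):
    """XOR all payloads column-wise, as in RAID-5 parity."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def form_stripe(blocks):
    """blocks: list of (payload: bytes, dirty: bool) of length STRIPE_WIDTH.
    Clean blocks already exist in primary storage, so a stripe with no dirty
    block needs no parity; losing a cached copy is recoverable from below."""
    assert len(blocks) == STRIPE_WIDTH
    payloads = [p for p, _ in blocks]
    if any(dirty for _, dirty in blocks):
        return payloads + [xor_parity(payloads)]   # RAID-5-style protected stripe
    return payloads                                # parity elided for clean data

clean = [(bytes([i] * 8), False) for i in range(STRIPE_WIDTH)]
dirty = [(bytes([i] * 8), i == 0) for i in range(STRIPE_WIDTH)]
print(len(form_stripe(clean)), len(form_stripe(dirty)))   # -> 4 5
```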
Log-structured cache: trading hit-rate for storage performance (and winning) in mobile devices
Pub Date : 2013-11-03 DOI: 10.1145/2527792.2527797
Abutalib Aghayev, Peter Desnoyers
Browser caches are typically designed to maximize hit rate: if all other factors are held equal, the highest hit rate yields the highest performance. However, if the performance of the underlying cache storage (i.e., the file system) varies with the workload, then these other factors may in fact not be equal when comparing different cache strategies. Mobile systems such as smartphones are typically equipped with low-speed flash storage and suffer severe degradation in file system performance under sufficiently random write workloads. A cache implementation that performs random writes will thus spend more time reading and writing its cache, possibly resulting in lower overall system performance than a lower-hit-rate implementation that achieves higher storage performance. We present a log-structured browser cache that generates almost purely sequential writes and in which cleaning is performed efficiently by cache eviction. We developed an implementation of this cache for the Chromium browser on Android; using captured user browsing traces, we test the log-structured cache and compare its performance to the existing Chromium implementation. We achieve a ten-fold performance improvement in basic cache operations (as measured on a Nexus 7 tablet), while in the worst case increasing the miss rate by less than 3% (from 65% to 68%). For network bandwidths of 1 Mb/s or higher, the increased cache performance more than makes up for the decrease in hit rate; the effect is more pronounced when examining 95th-percentile delays.
Citations: 5
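The log-structured layout can be sketched as a single append-only file plus an in-memory index, with eviction simply dropping the oldest index entries so that cleaning coincides with eviction and all writes remain sequential. The file layout and the `LogCache` API below are assumptions, not Chromium's actual disk-cache format.

```python
import os

class LogCache:
    """Toy append-only cache: entries are appended sequentially to one log file;
    an in-memory dict maps keys to (offset, length). Eviction drops the oldest
    entries from the index, so no random rewrites are ever issued."""
    def __init__(self, path, max_bytes):
        self.f = open(path, "ab+")
        self.index = {}                       # key -> (offset, length), insertion-ordered
        self.max_bytes = max_bytes

    def put(self, key, value: bytes):
        offset = self.f.seek(0, os.SEEK_END)
        self.f.write(value)                   # purely sequential write
        self.index.pop(key, None)
        self.index[key] = (offset, len(value))
        self._evict_if_needed()

    def get(self, key):
        if key not in self.index:
            return None
        offset, length = self.index[key]
        self.f.seek(offset)
        return self.f.read(length)

    def _evict_if_needed(self):
        """Drop the oldest entries; a real implementation would also rotate or
        truncate the log segment once its live data drops low enough."""
        while sum(length for _, length in self.index.values()) > self.max_bytes:
            self.index.pop(next(iter(self.index)))

cache = LogCache("/tmp/browser.log", max_bytes=1 << 20)
cache.put("https://example.com/logo.png", b"\x89PNG...")
print(cache.get("https://example.com/logo.png"))
```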
NVM heaps for accelerating browser-based applications
Pub Date : 2013-11-03 DOI: 10.1145/2527792.2527796
Sudarsun Kannan, Ada Gavrilovska, K. Schwan, Sanjay Kumar
The growth of browser-based computation is raising the need for efficient local storage for browser-based applications. A standard approach to controlling how such applications access and manipulate underlying platform resources is to run in-browser applications in a sandbox environment. Sandboxing works by static code analysis and system call interception, and as a result, the performance of browser applications that make frequent I/O calls can be severely impacted. To address this, we explore the utility of next-generation non-volatile memories (NVM) in client platforms. By using NVM as virtual memory, and by integrating NVM support for browser applications with byte-addressable I/O interfaces, our approach shows up to a 3.5x reduction in sandboxing cost and around a 3x reduction in serialization overheads for browser-based applications, along with improved application performance.
Citations: 1
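A sketch of the interface shift the paper argues for: instead of funneling every object through sandboxed file I/O system calls, the application updates byte-addressable structures placed in a mapped region. Mapping an ordinary file here stands in for real NVM hardware, and the `NVMHeapRegion` API is hypothetical, not the authors' heap interface.

```python
import mmap, os, struct

REGION_BYTES = 4096

class NVMHeapRegion:
    """Toy byte-addressable persistent region: fixed-size slots updated in place,
    avoiding per-access system calls and whole-object (de)serialization."""
    SLOT = struct.Struct("<Q")                # one 8-byte integer per slot

    def __init__(self, path):
        fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
        os.ftruncate(fd, REGION_BYTES)
        self.buf = mmap.mmap(fd, REGION_BYTES)

    def store(self, slot, value):
        off = slot * self.SLOT.size
        self.buf[off:off + self.SLOT.size] = self.SLOT.pack(value)   # in-place update

    def load(self, slot):
        return self.SLOT.unpack_from(self.buf, slot * self.SLOT.size)[0]

heap = NVMHeapRegion("/tmp/nvm_region.bin")
heap.store(0, 42)
print(heap.load(0))                           # -> 42
```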
Exploring storage class memory with key value stores
Pub Date : 2013-11-03 DOI: 10.1145/2527792.2527799
Katelin Bailey, Peter Hornyack, L. Ceze, S. Gribble, H. Levy
In the near future, new storage-class memory (SCM) technologies, such as phase-change memory and memristors, will radically change the nature of long-term storage. These devices will be cheap, non-volatile, byte-addressable, and close to DRAM in density and speed. While SCM offers enormous opportunities, profiting from them will require new storage systems specifically designed for SCM's properties. This paper presents Echo, a persistent key-value storage system designed to leverage the advantages and address the challenges of SCM. The goals of Echo include high performance for both small and large data objects, recoverability after failure, and scalability on multicore systems. Echo achieves its goals through a two-level memory design targeted at memory systems containing both DRAM and SCM, exploitation of SCM's byte addressability for fine-grained transactions in non-volatile memory, and the use of snapshot isolation for concurrency, consistency, and versioning. Our evaluation demonstrates that Echo's SCM-centric design achieves the durability guarantees of the best disk-based stores with performance characteristics approaching those of the best in-memory key-value stores.
Citations: 49
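The versioning and snapshot-isolation core of a store like Echo can be sketched with per-key version chains and a global timestamp: a reader sees the newest version no newer than its snapshot. DRAM/SCM placement, durability, and write-conflict handling are omitted; the `SnapshotKV` names below are illustrative assumptions, not Echo's implementation.

```python
import itertools

class SnapshotKV:
    """Toy multi-version KV store: every write appends a (commit_ts, value)
    version; a snapshot read at ts returns the newest version with commit_ts <= ts."""
    def __init__(self):
        self.versions = {}                  # key -> list of (ts, value), oldest first
        self.clock = itertools.count(1)

    def put(self, key, value):
        ts = next(self.clock)
        self.versions.setdefault(key, []).append((ts, value))
        return ts

    def snapshot(self):
        return next(self.clock)             # begin timestamp for a reader

    def get(self, key, snapshot_ts):
        for ts, value in reversed(self.versions.get(key, [])):
            if ts <= snapshot_ts:
                return value
        return None

kv = SnapshotKV()
kv.put("k", "v1")
snap = kv.snapshot()
kv.put("k", "v2")
print(kv.get("k", snap))                    # -> "v1": later writes stay invisible to the snapshot
```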