
2012 IEEE 28th Symposium on Mass Storage Systems and Technologies (MSST): Latest Publications

Jitter-free co-processing on a prototype exascale storage stack
Pub Date : 2012-04-16 DOI: 10.1109/MSST.2012.6232382
John Bent, S. Faibish, J. Ahrens, G. Grider, J. Patchett, P. Tzelnic, J. Woodring
In the petascale era, the storage stack used by the extreme scale high performance computing community is fairly homogeneous across sites. On the compute edge of the stack, file system clients or IO forwarding services direct IO over an interconnect network to a relatively small set of IO nodes. These nodes forward the requests over a secondary storage network to a spindle-based parallel file system. Unfortunately, this architecture will become unviable in the exascale era. As the density growth of disks continues to outpace increases in their rotational speeds, disks are becoming increasingly cost-effective for capacity but decreasingly so for bandwidth. Fortunately, new storage media such as solid state devices are filling this gap; although not cost-effective for capacity, they are so for performance. This suggests that the storage stack at exascale will incorporate solid state storage between the compute nodes and the parallel file systems. There are three natural places into which to position this new storage layer: within the compute nodes, the IO nodes, or the parallel file system. In this paper, we argue that the IO nodes are the appropriate location for HPC workloads and show results from a prototype system that we have built accordingly. Running a pipeline of computational simulation and visualization, we show that our prototype system reduces total time to completion by up to 30%.
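The core idea above is that a solid state tier on the IO nodes absorbs checkpoint bursts quickly and drains them to the disk-based parallel file system in the background, so the compute nodes never wait on spinning disks and analysis co-processing can read the buffered data in transit. The following is a minimal sketch of that data path, not the authors' prototype; the class name, drain cost, and threading model are illustrative assumptions.

```python
# Minimal sketch (not the authors' prototype): checkpoints are absorbed by an
# SSD tier on the IO nodes and drained to the parallel file system in the
# background, so the simulation never blocks on slow disk bandwidth.
import queue
import threading
import time

class IONodeBurstBuffer:
    """Hypothetical IO-node tier: fast absorb, asynchronous drain."""

    def __init__(self, drain_seconds_per_obj=0.05):
        self._pending = queue.Queue()
        self._drain_cost = drain_seconds_per_obj  # stands in for slow disk bandwidth
        self._drainer = threading.Thread(target=self._drain_loop, daemon=True)
        self._drainer.start()

    def absorb(self, name, data):
        # Fast path: land the burst on solid state and return immediately.
        self._pending.put((name, data))

    def _drain_loop(self):
        # Slow path: trickle objects to the parallel file system off the critical path.
        while True:
            name, data = self._pending.get()
            time.sleep(self._drain_cost)  # simulated parallel file system write
            print(f"drained {name} ({len(data)} bytes) to parallel FS")
            self._pending.task_done()

    def flush(self):
        self._pending.join()

if __name__ == "__main__":
    bb = IONodeBurstBuffer()
    start = time.time()
    for step in range(10):
        bb.absorb(f"checkpoint_{step}", b"x" * 4096)  # compute node's view: instant
    print(f"simulation blocked for {time.time() - start:.4f}s on IO")
    bb.flush()  # draining finishes asynchronously
```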
Citations: 52
Exploiting superpages in a nonvolatile memory file system
Pub Date : 2012-04-16 DOI: 10.1109/MSST.2012.6232384
Sheng Qiu, A. L. Narasimha Reddy
Emerging nonvolatile memory technologies (sometimes referred to as Storage Class Memory (SCM)) are poised to close the enormous performance gap between persistent storage and main memory. SCM devices can be attached directly to the memory bus and accessed like normal DRAM. It then becomes possible to exploit memory management hardware resources to improve file system performance. However, in this case, SCM may share critical system resources, such as the TLB and page table, with DRAM, which can potentially impact SCM's performance. In this paper, we propose to solve this problem by employing superpages to reduce the pressure on memory management resources such as the TLB. As a result, file system performance is further improved. We also analyze the space utilization efficiency of superpages. We improve the space efficiency of the file system by allocating normal pages (4 KB) for small files while allocating superpages (2 MB on x86) for large files. We show that it is possible to achieve better performance without loss of space utilization efficiency of nonvolatile memory.
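A minimal sketch of the allocation policy described in the abstract follows; the threshold, helper names, and fragmentation accounting are assumptions for illustration, not code from the paper. The point is that one 2 MB superpage covers the same data as 512 normal 4 KB pages with a single TLB entry, so large files gain, while small files stay on 4 KB pages to limit internal fragmentation.

```python
# Sketch of a size-based page allocation policy (thresholds and names assumed):
# small files get normal 4 KB pages, large files get 2 MB superpages, cutting
# the number of TLB entries needed per byte of mapped file data.
PAGE_SIZE = 4 * 1024              # normal page
SUPERPAGE_SIZE = 2 * 1024 * 1024  # x86 superpage

def plan_allocation(file_size, superpage_threshold=SUPERPAGE_SIZE):
    """Return (page_size, page_count, wasted_bytes) for one file."""
    page = SUPERPAGE_SIZE if file_size >= superpage_threshold else PAGE_SIZE
    count = -(-file_size // page)  # ceiling division
    return page, count, page * count - file_size

if __name__ == "__main__":
    for size in (1_500, 300_000, 6 * 1024 * 1024):
        page, count, waste = plan_allocation(size)
        print(f"{size:>10} B -> {count} x {page} B pages, "
              f"{waste} B internal fragmentation")
```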
Citations: 5
BloomStore: Bloom-Filter based memory-efficient key-value store for indexing of data deduplication on flash
Pub Date : 2012-04-16 DOI: 10.1109/MSST.2012.6232390
Guanlin Lu, Youngjin Nam, D. Du
Due to its better scalability, the Key-Value (KV) store has superseded traditional relational databases for many applications, such as data deduplication, on-line multi-player gaming, and Internet services like Amazon and Facebook. The KV store efficiently supports two operations (key lookup and KV pair insertion) through an index structure that maps keys to their associated values. The KV store is also commonly used to implement the chunk index in data deduplication, where a chunk ID (a SHA1 value computed from the chunk's content) is a key and its associated chunk metadata (e.g., physical storage location, stream ID) is the value. For a deduplication system, the number of chunks is typically too large for the KV store to reside solely in RAM. Thus, the KV store maintains a large (hash-table based) index structure in RAM to index all KV pairs stored on secondary storage. Hence, the available RAM space limits the maximum number of KV pairs that can be stored. Moving the index data structure from RAM to flash can possibly overcome this space limitation. In this paper, we propose an efficient KV store on flash with a Bloom-Filter based index structure called BloomStore. The unique features of BloomStore include (1) no index structure needs to be stored in RAM, so a small RAM space can support a large number of KV pairs, and (2) both the index structure and KV pairs are stored compactly on flash memory to improve performance. Compared with state-of-the-art KV store designs, BloomStore achieves significantly better key lookup performance and roughly the same insertion performance while using several times less RAM, based on our experiments with deduplication workloads.
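The following is a minimal in-memory sketch of the BloomStore idea as summarized above, not the paper's implementation: KV pairs are flushed from a small write buffer into immutable segments (standing in for flash pages), each segment carries its own Bloom filter, and a lookup probes the filters newest-first so that most segments are never read. Segment capacity, filter sizing, and hash choices here are illustrative assumptions.

```python
# Sketch of a Bloom-filter-per-segment KV store (parameters are assumptions).
import hashlib

class BloomFilter:
    def __init__(self, bits=1024, hashes=4):
        self.bits, self.hashes, self.array = bits, hashes, bytearray(bits // 8)

    def _positions(self, key):
        for i in range(self.hashes):
            digest = hashlib.sha1(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.bits

    def add(self, key):
        for p in self._positions(key):
            self.array[p // 8] |= 1 << (p % 8)

    def maybe_contains(self, key):
        return all(self.array[p // 8] & (1 << (p % 8)) for p in self._positions(key))

class BloomStoreSketch:
    def __init__(self, segment_capacity=4):
        self.segment_capacity = segment_capacity
        self.active = {}     # small write buffer (would live in RAM)
        self.segments = []   # list of (BloomFilter, dict) -- stands in for flash

    def put(self, key, value):
        self.active[key] = value
        if len(self.active) >= self.segment_capacity:
            bf = BloomFilter()
            for k in self.active:
                bf.add(k)
            self.segments.append((bf, self.active))  # "flush" the buffer to flash
            self.active = {}

    def get(self, key):
        if key in self.active:
            return self.active[key]
        # Newest segment first; the Bloom filter lets us skip most flash reads.
        for bf, segment in reversed(self.segments):
            if bf.maybe_contains(key) and key in segment:
                return segment[key]
        return None

if __name__ == "__main__":
    store = BloomStoreSketch()
    for i in range(10):
        store.put(f"chunk-{i}", f"metadata-{i}")
    print(store.get("chunk-3"), store.get("chunk-99"))
```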
Citations: 66
Enhancing shared RAID performance through online profiling
Pub Date : 2012-04-16 DOI: 10.1109/MSST.2012.6232383
Ji-guang Wan, Jibin Wang, Yan Liu, Qing Yang, Jianzong Wang, C. Xie
Enterprise storage systems are generally shared by multiple servers in a SAN environment. Our experiments, as well as industry reports, have shown that disk arrays perform poorly when multiple servers share one RAID, due to resource contention and frequent disk head movements. We have studied the IO performance characteristics of several shared storage settings used in practical business operations. To avoid this IO contention, we propose a new dynamic data relocation technique for shared RAID storage, referred to as DROP: Dynamic data Relocation to Optimize Performance. DROP allocates and manages a group of cache data areas and relocates/drops the hot portion of the data into a predefined sub-array, a physical partition on top of the entire shared array. By analyzing profiling data so that each cache area is owned by one server, we can determine the optimal data relocation and partitioning of disks in the RAID to maximize large sequential block accesses on individual disks while maximizing parallel accesses across disks in the array. As a result, DROP minimizes disk head movements in the array at run time, giving rise to high IO performance. A DROP prototype has been implemented as a software module in the storage target controller. Extensive experiments have been carried out using real-world IO workloads to evaluate the performance of the DROP implementation. Experimental results show that DROP improves shared IO performance greatly. The performance improvements in terms of average IO response time range from 20% to a factor of 2.5, at no additional hardware cost.
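As a rough illustration of the profiling-driven relocation step, the sketch below counts block accesses per server from an access trace and nominates each server's hottest blocks for relocation into that server's dedicated sub-array; the counters, trace format, and selection policy are assumptions about the general approach, not the DROP implementation.

```python
# Sketch of online profiling for relocation decisions (format and policy assumed):
# per-server access counters identify hot blocks, which are then relocated into
# that server's cache area so streams from different servers stop interleaving
# on the same spindles.
from collections import Counter, defaultdict

def plan_relocation(trace, blocks_per_server=2):
    """trace: iterable of (server_id, block_id). Returns {server: [hot blocks]}."""
    per_server = defaultdict(Counter)
    for server, block in trace:
        per_server[server][block] += 1
    return {s: [b for b, _ in c.most_common(blocks_per_server)]
            for s, c in per_server.items()}

if __name__ == "__main__":
    trace = [("srv1", 10), ("srv1", 10), ("srv1", 11), ("srv2", 77),
             ("srv2", 77), ("srv2", 77), ("srv2", 80), ("srv1", 10)]
    for server, hot in plan_relocation(trace).items():
        print(f"{server}: relocate blocks {hot} into its cache area")
```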
Citations: 2
Storage challenges at Los Alamos National Lab
Pub Date : 2012-04-16 DOI: 10.1109/MSST.2012.6232376
John Bent, G. Grider, B. Kettering, A. Manzanares, Meghan McClelland, Aaron Torres, Alfred Torrez
There yet exist no truly parallel file systems. Those that make the claim fall short when it comes to providing adequate concurrent write performance at large scale. This limitation causes large usability headaches in HPC. Users need two major capabilities missing from current parallel file systems. One, they need low latency interactivity. Two, they need high bandwidth for large parallel IO; this capability must be resistant to IO patterns and should not require tuning. There are no existing parallel file systems which provide these features. Frighteningly, exascale renders these features even less attainable from currently available parallel file systems. Fortunately, there is a path forward.
Citations: 44
Adaptive pipeline for deduplication
Pub Date : 2012-04-01 DOI: 10.1109/MSST.2012.6232377
Jingwei Ma, Bin Zhao, G. Wang, X. Liu
Deduplication has become one of the hottest topics in the field of data storage. Quite a few methods for reducing the disk I/O caused by deduplication have been proposed, and some methods have also been studied to accelerate the computational sub-tasks in deduplication. However, the order of the computational sub-tasks can affect overall deduplication throughput significantly, because the sub-tasks exhibit quite different workloads and concurrency in different orders and with different data sets. This paper proposes an adaptive pipelining model for the computational sub-tasks in deduplication that takes both the data type and the hardware platform into account. Taking as parameters the compression ratio and the duplicate ratio of the data stream, and the compression speed and the fingerprinting speed on different processing units, it determines the optimal order of the pipeline stages (computational sub-tasks) and assigns each stage to the processing unit that processes it fastest. That is, "adaptive" refers to both data-adaptive and hardware-adaptive behavior. Experimental results show that the adaptive pipeline improves deduplication throughput by up to 50% compared with a plain fixed pipeline, which implies that it is suitable for simultaneous deduplication of various data types on modern heterogeneous multi-core systems.
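The ordering decision can be sketched as a small search over stage permutations: estimate how much data reaches each stage (the duplicate ratio and compression ratio shrink the stream as it flows), charge each stage to its fastest processing unit, and keep the order whose bottleneck stage is fastest. The stage names, speeds, and ratios below are made-up illustrative numbers, not measurements from the paper.

```python
# Sketch of choosing a pipeline stage order from data- and hardware-dependent
# parameters (all numbers are illustrative assumptions).
from itertools import permutations

# speeds in MB/s per processing unit; ratio = fraction of data surviving the stage
STAGES = {
    "fingerprint+dedup": {"speeds": {"cpu": 400, "gpu": 900}, "ratio": 0.5},
    "compress":          {"speeds": {"cpu": 250, "gpu": 600}, "ratio": 0.4},
}

def bottleneck(order, input_mb=100.0):
    """Time of the slowest stage for one batch, given this stage order."""
    volume, worst = input_mb, 0.0
    for name in order:
        stage = STAGES[name]
        speed = max(stage["speeds"].values())  # assign the stage to its fastest unit
        worst = max(worst, volume / speed)     # pipeline is limited by its slowest stage
        volume *= stage["ratio"]               # later stages see reduced data
    return worst

if __name__ == "__main__":
    best = min(permutations(STAGES), key=bottleneck)
    print("best stage order:", " -> ".join(best),
          f"(bottleneck {bottleneck(best):.3f}s per 100 MB)")
```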
Citations: 6