
Latest Publications: ACM Transactions on Storage

Extending and Programming the NVMe I/O Determinism Interface for Flash Arrays
IF 1.7 · CAS Tier 3, Computer Science · Q3 Computer Science · Pub Date: 2023-01-11 · DOI: https://dl.acm.org/doi/10.1145/3568427
Huaicheng Li, Martin L. Putra, Ronald Shi, Fadhil I. Kurnia, Xing Lin, Jaeyoung Do, Achmad Imam Kistijantoro, Gregory R. Ganger, Haryadi S. Gunawi

Predictable latency on flash storage is a long-pursued goal, yet unpredictability persists due to the unavoidable disturbance from many well-known SSD internal activities. To combat this issue, the recent NVMe I/O Determinism (IOD) interface advocates host-level control over SSD internal management tasks. Although promising, challenges remain in exploiting it for truly predictable performance.

We present IODA, an I/O deterministic flash array design built on top of small but powerful extensions to the IOD interface for easy deployment. IODA exploits data redundancy in the context of IOD for a strong latency predictability contract. In IODA, SSDs are expected to fail an I/O quickly and on purpose, allowing predictable I/Os through proactive data reconstruction. In the case of concurrent internal operations, IODA introduces busy remaining time exposure and a predictable-latency-window formulation to guarantee predictable data reconstructions. Overall, IODA adds only five new fields to the NVMe interface and a small modification to the flash firmware, while keeping most of the complexity in the host OS. Our evaluation shows that IODA improves the 95th–99.99th percentile latencies by up to 75×. IODA is also the closest to the ideal, no-disturbance case when compared against seven state-of-the-art preemption, suspension, GC coordination, partitioning, tiny-tail flash controller, prediction, and proactive approaches.
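
To make the fail-fast-and-reconstruct idea concrete, here is a minimal host-side sketch in Python, assuming a RAID-5-style single-parity stripe. The SSD class, its busy flag, and predictable_read are hypothetical names for illustration, not the paper's API or implementation:

```python
from functools import reduce

class SSD:
    """Hypothetical drive model: fast-fails reads while internally busy."""
    def __init__(self, busy=False):
        self.busy = busy      # True while e.g. garbage collection runs
        self.store = {}       # stripe offset -> chunk bytes

    def read(self, addr):
        if self.busy:
            return None       # fail fast instead of queueing behind GC
        return self.store[addr]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def predictable_read(array, target, addr):
    """Read chunk addr from drive target; reconstruct on a fast-fail."""
    data = array[target].read(addr)
    if data is not None:
        return data
    # Proactive reconstruction: XOR the same stripe offset on all peers.
    peers = [d.read(addr) for i, d in enumerate(array) if i != target]
    assert all(p is not None for p in peers), "at most one busy drive"
    return reduce(xor, peers)

array = [SSD(), SSD(busy=True), SSD()]
array[0].store[0] = b"\x0f"                # data chunk d0
array[1].store[0] = b"\x33"                # data chunk d1 (drive is busy)
array[2].store[0] = xor(b"\x0f", b"\x33")  # parity p = d0 ^ d1
print(predictable_read(array, 1, 0))       # d1 reconstructed, not delayed
```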

Citations: 0
End-to-end I/O Monitoring on Leading Supercomputers
IF 1.7 · CAS Tier 3, Computer Science · Q3 Computer Science · Pub Date: 2023-01-11 · DOI: https://dl.acm.org/doi/10.1145/3568425
Bin Yang, Wei Xue, Tianyu Zhang, Shichao Liu, Xiaosong Ma, Xiyang Wang, Weiguo Liu

This paper offers a solution to the complexities of I/O performance monitoring on production systems. We present Beacon, an end-to-end I/O resource monitoring and diagnosis system for the 40960-node Sunway TaihuLight supercomputer, currently the fourth-ranked supercomputer in the world. Beacon simultaneously collects and correlates I/O tracing/profiling data from all the compute nodes, forwarding nodes, storage nodes, and metadata servers. With mechanisms such as aggressive online and offline trace compression and distributed caching/storage, it delivers scalable, low-overhead, and sustainable I/O diagnosis under production use. Drawing on Beacon's deployment on TaihuLight for more than three years, we demonstrate its effectiveness with real-world use cases for I/O performance issue identification and diagnosis. It has already helped center administrators identify obscure design or configuration flaws, system anomalies, I/O performance interference, and resource under- or over-provisioning problems. Several of the exposed problems have already been fixed, and others are currently being addressed. Encouraged by Beacon's success in I/O monitoring, we extend it to monitor interconnection networks, another contention point on supercomputers. In addition, we demonstrate Beacon's generality by extending it to other supercomputers. Both the Beacon code and part of the collected monitoring data have been released.
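
As a toy illustration of the kind of online trace compression a monitor like Beacon can apply, the sketch below delta-encodes timestamps before general-purpose compression; the record schema and encoding are assumptions for illustration, not Beacon's actual trace format:

```python
import json
import zlib

def compress_trace(records):
    """records: dicts like {"t": usec, "op": "read", "size": bytes}."""
    deltas, prev_t = [], None
    for r in records:
        d = dict(r)
        if prev_t is not None:
            d["t"] = r["t"] - prev_t   # delta-encode timestamps
        prev_t = r["t"]
        deltas.append(d)
    return zlib.compress(json.dumps(deltas).encode())

trace = [{"t": 1000 + i, "op": "read", "size": 4096} for i in range(10000)]
print(len(compress_trace(trace)))  # repetitive traces shrink dramatically
```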

Citations: 0
Reliability Evaluation of Erasure-coded Storage Systems with Latent Errors
IF 1.7 · CAS Tier 3, Computer Science · Q3 Computer Science · Pub Date: 2023-01-11 · DOI: https://dl.acm.org/doi/10.1145/3568313
Ilias Iliadis

Large-scale storage systems employ erasure-coding redundancy schemes to protect against device failures. The adverse effect of latent sector errors on the Mean Time to Data Loss (MTTDL) and the Expected Annual Fraction of Data Loss (EAFDL) reliability metrics is evaluated. A theoretical model capturing the effect of latent errors and device failures is developed, and closed-form expressions for the metrics of interest are derived. The MTTDL and EAFDL of erasure-coded systems are obtained analytically for (i) the entire range of bit error rates; (ii) the symmetric, clustered, and declustered data placement schemes; and (iii) arbitrary device failure and rebuild time distributions under network rebuild bandwidth constraints. The range of error rates that deteriorate system reliability is derived analytically. For realistic values of sector error rates, the results obtained demonstrate that MTTDL degrades, whereas, for moderate erasure codes, EAFDL remains practically unaffected. It is demonstrated that, in the range of typical sector error rates and for very powerful erasure codes, EAFDL degrades as well. It is also shown that the declustered data placement scheme offers superior reliability.
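
For orientation, the classical closed-form MTTDL approximation for a single redundancy group, under exponential failure and repair times and without the latent errors this article models, is often written as below. This is the standard textbook formula, not the closed-form expressions derived in the article:

```latex
% Classical MTTDL approximation for one group of n devices tolerating
% r concurrent failures (no latent sector errors):
\mathrm{MTTDL} \;\approx\; \frac{\mathrm{MTTF}^{\,r+1}}{n\,(n-1)\cdots(n-r)\;\mathrm{MTTR}^{\,r}}
```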

Citations: 0
ctFS: Replacing File Indexing with Hardware Memory Translation through Contiguous File Allocation for Persistent Memory
IF 1.7 · CAS Tier 3, Computer Science · Q3 Computer Science · Pub Date: 2022-12-16 · DOI: https://dl.acm.org/doi/10.1145/3565026
Ruibin Li, Xiang Ren, Xu Zhao, Siwei He, Michael Stumm, Ding Yuan

Persistent byte-addressable memory (PM) is poised to become prevalent in future computer systems. PMs are significantly faster than disk storage, and accesses to PMs are governed by the Memory Management Unit (MMU) just as accesses to volatile RAM are. These unique characteristics shift the bottleneck from I/O to operations such as block address lookup—for example, in write workloads, up to 45% of the overhead in ext4-DAX comes from building and searching extent trees to translate file offsets to addresses on persistent memory.

We propose a novel contiguous file system, ctFS, that eliminates most of the overhead associated with indexing structures such as extent trees in the file system. ctFS represents each file as a contiguous region of virtual memory, hence a lookup from the file offset to the address is simply an offset operation, which can be efficiently performed by the hardware MMU at a fraction of the cost of software-maintained indexes. Evaluating ctFS on real-world workloads such as LevelDB shows it outperforms ext4-DAX and SplitFS by 3.6× and 1.8×, respectively.
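
The contrast can be sketched in a few lines of Python: an extent-tree file needs a software search per access, while a contiguous file needs only an addition. The extent list and FILE_BASE below are made-up values, and real ctFS performs the addition via the hardware MMU rather than in software:

```python
import bisect

# ext4-style: per-file extent list (file_off, length, phys_addr),
# searched in software on every access.
extents = [(0, 100, 5000), (100, 50, 9000), (150, 200, 12000)]
starts = [e[0] for e in extents]

def extent_lookup(off):
    i = bisect.bisect_right(starts, off) - 1
    file_off, length, phys = extents[i]
    assert off < file_off + length
    return phys + (off - file_off)     # software tree/list walk

# ctFS-style: the whole file is one contiguous virtual region, so a
# lookup is plain addition; the MMU translates the resulting address.
FILE_BASE = 0x7F0000000000             # made-up base address

def contiguous_lookup(off):
    return FILE_BASE + off

print(extent_lookup(120), hex(contiguous_lookup(120)))
```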

Citations: 0
Improving the Endurance of Next Generation SSD's using WOM-v Codes
IF 1.7 · CAS Tier 3, Computer Science · Q3 Computer Science · Pub Date: 2022-12-16 · DOI: https://dl.acm.org/doi/10.1145/3565027
Shehbaz Jaffer, Kaveh Mahdaviani, Bianca Schroeder

High-density Solid State Drives, such as QLC drives, offer increased storage capacity but an order of magnitude fewer Program and Erase (P/E) cycles, limiting their endurance and hence usability. We present the design and implementation of non-binary, Voltage-Based Write-Once-Memory (WOM-v) codes to improve the lifetime of QLC drives. First, we develop a FEMU-based simulator test-bed to evaluate the gains of WOM-v codes on real-world workloads. Second, we propose and implement two optimizations, an efficient garbage collection mechanism and an encoding optimization, to drastically improve WOM-v code endurance without compromising performance. Third, we propose analytical approaches to estimate the endurance gains under WOM-v codes. We analyze the Greedy garbage collection technique with uniform page access distribution and the Least Recently Written (LRW) garbage collection technique with skewed page access distribution in the context of WOM-v codes. We find that although both approaches overestimate the number of required erase operations, the model based on greedy garbage collection with uniform page access distribution provides tighter bounds. A careful evaluation, including microbenchmarks and trace-driven evaluation, demonstrates that WOM-v codes can reduce erase cycles for QLC drives by 4.4×–11.1× on real-world workloads with minimal performance overheads, resulting in improved QLC SSD lifetime.
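
To show the write-once-memory idea that WOM-v generalizes to multi-level (voltage) cells, here is the classic binary Rivest–Shamir WOM(2,3) code in Python: two successive 2-bit writes into 3 cells that may only flip 0 to 1. This is the textbook binary code, not the paper's non-binary construction:

```python
# First- and second-write encodings of 2 bits into 3 write-once cells.
GEN1 = {"00": "000", "10": "100", "01": "010", "11": "001"}
GEN2 = {"00": "111", "10": "011", "01": "101", "11": "110"}

def decode(cells):
    table = GEN1 if cells.count("1") <= 1 else GEN2
    return next(v for v, c in table.items() if c == cells)

def rewrite(cells, value):
    """Second write: only 0 -> 1 transitions, i.e., no erase needed."""
    if decode(cells) == value:
        return cells                   # value unchanged, nothing to write
    new = GEN2[value]
    assert all(not (o == "1" and n == "0") for o, n in zip(cells, new))
    return new

cells = GEN1["10"]             # first write stores 10 -> cells "100"
cells = rewrite(cells, "01")   # second write, no erase -> cells "101"
print(cells, decode(cells))    # 101 01
```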

高密度固态驱动器,如QLC驱动器,提供了更大的存储容量,但大大降低了程序和擦除(P/E)周期,限制了它们的耐用性和可用性。我们提出了非二进制,基于电压的写一次存储器(WOM-v)代码的设计和实现,以提高QLC驱动器的使用寿命。首先,我们开发了一个基于FEMU的模拟器测试平台,以评估WOM-v代码在现实世界工作负载上的增益。其次,我们提出并实现了两项优化,一种有效的垃圾收集机制和一种编码优化,以在不影响性能的情况下大幅提高WOM-v的代码耐久性。第三,我们提出了分析方法来估计WOM-v代码下的续航增益。在WOM-v代码环境下,分析了具有均匀页访问分布的贪婪垃圾收集技术和具有倾斜页访问分布的最近最少写入(LRW)垃圾收集技术。我们发现,尽管这两种方法都高估了所需擦除操作的数量,但基于统一页面访问分布的贪婪垃圾收集模型提供了更严格的边界。仔细的评估,包括微基准测试和跟踪驱动评估,表明WOM-v代码可以减少QLC驱动器的擦除周期4.4×-11.1×,以最小的性能开销,从而提高QLC SSD的使用寿命。
{"title":"Improving the Endurance of Next Generation SSD’s using WOM-v Codes","authors":"Shehbaz Jaffer, Kaveh Mahdaviani, Bianca Schroeder","doi":"https://dl.acm.org/doi/10.1145/3565027","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3565027","url":null,"abstract":"<p>High density Solid State Drives, such as QLC drives, offer increased storage capacity, but a magnitude lower Program and Erase (P/E) cycles, limiting their endurance and hence usability. We present the design and implementation of non-binary, Voltage-Based Write-Once-Memory (WOM-v) Codes to improve the lifetime of QLC drives. First, we develop a FEMU based simulator test-bed to evaluate the gains of WOM-v codes on real world workloads. Second, we propose and implement two optimizations, an efficient garbage collection mechanism and an encoding optimization to drastically improve WOM-v code endurance without compromising performance. Third, we propose analytical approaches to obtain estimates of the endurance gains under WOM-v codes. We analyze the Greedy garbage collection technique with uniform page access distribution and the Least Recently Written (LRW) garbage collection technique with skewed page access distribution in the context of WOM-v codes. We find that although both approaches overestimate the number of required erase operations, the model based on greedy garbage collection with uniform page access distribution provides tighter bounds. A careful evaluation, including microbenchmarks and trace-driven evaluation, demonstrates that WOM-v codes can reduce Erase cycles for QLC drives by 4.4×–11.1× for real world workloads with minimal performance overheads resulting in improved QLC SSD lifetime.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2022-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138526271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Principled Schedulability Analysis for Distributed Storage Systems Using Thread Architecture Models
IF 1.7 · CAS Tier 3, Computer Science · Q3 Computer Science · Pub Date: 2022-12-12 · DOI: 10.1145/3574323
Suli Yang, Jing Liu, A. Arpaci-Dusseau, Remzi H. Arpaci-Dusseau
In this article, we present an approach to systematically examine the schedulability of distributed storage systems, identify their scheduling problems, and enable effective scheduling in these systems. We use Thread Architecture Models (TAMs) to describe the behavior and interactions of different threads in a system, and show both how to construct TAMs for existing systems and utilize TAMs to identify critical scheduling problems. We specify three schedulability conditions that a schedulable TAM should satisfy: completeness, local enforceability, and independence; meeting these conditions enables a system to easily support different scheduling policies. We identify five common problems that prevent a system from satisfying the schedulability conditions, and show that these problems arise in existing systems such as HBase, Cassandra, MongoDB, and Riak, making it difficult or impossible to realize various scheduling disciplines. We demonstrate how to address these schedulability problems using both direct and indirect solutions, with different trade-offs. To show how to apply our approach to enable scheduling in realistic systems, we develop Tamed-HBase and Muzzled-HBase, sets of modifications to HBase that can realize the desired scheduling disciplines, including fairness and priority scheduling, even when presented with challenging workloads.
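
The sketch below is a heavily simplified toy of the kind of structural check a TAM enables; the stage encoding, field names, and the completeness-style predicate are assumptions for illustration, not the paper's formal definitions of its three conditions:

```python
# Hypothetical stage records: does the stage consume a contended
# resource, and can requests be differentiated/scheduled there?
stages = [
    {"name": "rpc-handler",  "resource": "cpu",  "schedulable": True},
    {"name": "log-appender", "resource": "disk", "schedulable": False},
    {"name": "flusher",      "resource": "disk", "schedulable": True},
]

def incomplete_stages(stages):
    """A stage that uses a resource but cannot schedule is a gap."""
    return [s["name"] for s in stages
            if s["resource"] is not None and not s["schedulable"]]

print(incomplete_stages(stages))  # ['log-appender'] -> scheduling gap
```
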
Citations: 9
EMPRESS: Accelerating Scientific Discovery through Descriptive Metadata Management
IF 1.7 · CAS Tier 3, Computer Science · Q3 Computer Science · Pub Date: 2022-12-12 · DOI: https://dl.acm.org/doi/10.1145/3523698
Margaret Lawson, William Gropp, Jay Lofstead

High-performance computing scientists are producing unprecedented volumes of data that take a long time to load for analysis. However, many analyses only require loading the data containing particular features of interest, and scientists have many approaches for identifying these features. Therefore, if scientists store information (descriptive metadata) about these identified features, then for subsequent analyses they can use this information to read in only the data containing these features. This can greatly reduce the amount of data that scientists have to read in, thereby accelerating analysis. Despite the potential benefits of descriptive metadata management, no prior work has created a descriptive metadata system that can help scientists working with a wide range of applications and analyses to restrict their reads to data containing features of interest. In this article, we present EMPRESS, the first such solution. EMPRESS offers all of the features needed to help accelerate discovery: It can accelerate analysis by up to 300×, supports a wide range of applications and analyses, is high-performing, is highly scalable, and requires minimal storage space. In addition, EMPRESS offers features required for a production-oriented system: scalable metadata consistency techniques, flexible system configurations, fault tolerance as a service, and portability.
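
A minimal sketch of the read-reduction idea behind descriptive metadata, assuming a hypothetical bounding-box schema (EMPRESS's actual data model and API differ): record a box per detected feature, then read only the data chunks that overlap a query box.

```python
# (feature_id, chunk_id, bounding box (x0, x1, y0, y1)) per feature.
features = [
    ("vortex-1", 17, (0, 10, 0, 10)),
    ("vortex-2", 42, (90, 99, 50, 60)),
]

def chunks_for_query(features, box):
    """Return only the chunks whose feature boxes overlap the query."""
    qx0, qx1, qy0, qy1 = box
    hit = {c for _, c, (x0, x1, y0, y1) in features
           if x0 <= qx1 and qx0 <= x1 and y0 <= qy1 and qy0 <= y1}
    return sorted(hit)

print(chunks_for_query(features, (85, 95, 55, 58)))  # [42]: one chunk read
```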

Citations: 0
PSA-Cache: A Page-state-aware Cache Scheme for Boosting 3D NAND Flash Performance
IF 1.7 · CAS Tier 3, Computer Science · Q3 Computer Science · Pub Date: 2022-12-06 · DOI: 10.1145/3574324
Shujie Pang, Yuhui Deng, Genxiong Zhang, Yi Zhou, Yaoqin Huang, Xiao Qin
Garbage collection (GC) plays a pivotal role in the performance of 3D NAND flash memory, where copyback has been widely used to accelerate valid page migration during GC. Unfortunately, copyback is constrained by the parity symmetry issue: data read from an odd/even page must be written to an odd/even page. After migrating two odd/even consecutive pages, a free page between the two migrated pages is wasted. Such wasted pages noticeably lower the free space on flash memory and cause extra GCs, thereby degrading solid-state drive (SSD) performance. To address this problem, we propose a page-state-aware cache scheme called PSA-Cache, which prevents page waste to boost the performance of NAND flash-based SSDs. To facilitate write-back scheduling decisions, PSA-Cache regulates write-back priorities for cached pages according to the state of pages in victim blocks. With high write-back-priority pages written back to flash chips, PSA-Cache effectively fends off page waste by breaking up odd/even consecutive pages in subsequent garbage collections. We quantitatively evaluate the performance of PSA-Cache in terms of the number of wasted pages, the number of GCs, and response time. We compare PSA-Cache with two state-of-the-art schemes, GCaR and TTflash, in addition to a baseline LRU scheme. The experimental results show that PSA-Cache outperforms the existing schemes. In particular, PSA-Cache curtails the number of wasted pages of GCaR and TTflash by 25.7% and 62.1%, respectively. PSA-Cache cuts the number of GCs by up to 78.7%, with an average of 49.6%. Furthermore, PSA-Cache slashes the average write response time by up to 85.4%, with an average of 30.05%.
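
A toy model of the parity-symmetry waste described above, with flash details abstracted away and hypothetical function names: migrating several valid pages of the same parity forces a free page of the opposite parity to be skipped between consecutive copyback writes.

```python
def copyback_migrate(valid_pages):
    """Migrate valid page indices via copyback; count skipped pages."""
    next_free, wasted = 0, 0
    for src in valid_pages:
        if next_free % 2 != src % 2:   # parity mismatch: skip one page
            next_free += 1
            wasted += 1
        next_free += 1                 # copyback write lands here
    return wasted

print(copyback_migrate([1, 3, 5]))  # three odd pages waste pages 0, 2, 4
```
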
Citations: 1
Improving Storage Systems Using Machine Learning
IF 1.7 · CAS Tier 3, Computer Science · Q3 Computer Science · Pub Date: 2022-11-21 · DOI: 10.1145/3568429
I. Akgun, A. S. Aydin, Andrew Burford, Michael McNeill, Michael Arkhangelskiy, E. Zadok
Operating systems include many heuristic algorithms designed to improve overall storage performance and throughput. Because such heuristics cannot work well for all conditions and workloads, system designers resorted to exposing numerous tunable parameters to users—thus burdening users with continually optimizing their own storage systems and applications. Storage systems are usually responsible for most latency in I/O-heavy applications, so even a small latency improvement can be significant. Machine learning (ML) techniques promise to learn patterns, generalize from them, and enable optimal solutions that adapt to changing workloads. We propose that ML solutions become a first-class component in OSs and replace manual heuristics to optimize storage systems dynamically. In this article, we describe our proposed ML architecture, called KML. We developed a prototype KML architecture and applied it to two case studies: optimizing readahead and NFS read-size values. Our experiments show that KML consumes less than 4 KB of dynamic kernel memory, has a CPU overhead smaller than 0.2%, and yet can learn patterns and improve I/O throughput by as much as 2.3× and 15× for two case studies—even for complex, never-seen-before, concurrently running mixed workloads on different storage devices.
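
As a stand-in for the idea behind ML-driven readahead tuning, the sketch below classifies the recent access pattern with a simple sequentiality ratio and picks a readahead size; KML uses a trained kernel-resident model instead, and the thresholds and sizes here are illustrative assumptions.

```python
from collections import deque

window = deque(maxlen=32)   # recent request offsets, in blocks

def observe(offset):
    window.append(offset)

def readahead_blocks():
    if len(window) < 2:
        return 8                            # conservative default
    offs = list(window)
    gaps = [b - a for a, b in zip(offs, offs[1:])]
    seq_ratio = sum(1 for g in gaps if g == 1) / len(gaps)
    return 256 if seq_ratio > 0.8 else 8    # aggressive only if sequential

for off in range(100, 140):                 # a sequential scan
    observe(off)
print(readahead_blocks())                   # 256
```
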
Citations: 2
InDe: An Inline Data Deduplication Approach via Adaptive Detection of Valid Container Utilization
IF 1.7 · CAS Tier 3, Computer Science · Q3 Computer Science · Pub Date: 2022-11-19 · DOI: 10.1145/3568426
Lifang Lin, Yuhui Deng, Yi Zhou, Yifeng Zhu
Inline deduplication removes redundant data in real time as data is being sent to the storage system. However, it causes data fragmentation: logically consecutive chunks are physically scattered across various containers after deduplication. Many rewrite algorithms aim to alleviate the resulting performance degradation by rewriting fragmented duplicate chunks as unique chunks into new containers. Unfortunately, these algorithms decide whether a chunk is fragmented based on a simple pre-set fixed value, ignoring the variance of data characteristics between segments. Accordingly, they often fail to select an appropriate set of old containers for rewrite, leaving a substantial number of invalid chunks in the containers retrieved when backups are restored. To address this issue, we propose an inline deduplication approach for storage systems, called InDe, which uses a greedy algorithm to detect valid container utilization and dynamically adjusts the number of old container references in each segment. InDe fully leverages the distribution of duplicate chunks to improve restore performance while maintaining high backup performance. We define an effectiveness metric, valid container referenced counts (VCRC), to identify appropriate containers for rewrite. We design a rewrite algorithm, F-greedy, that detects valid container utilization and rewrites low-VCRC containers. According to the VCRC distribution of containers, F-greedy dynamically adjusts the number of old container references so that each segment shares duplicate chunks only with high-utilization containers, thereby improving restore speed. To take full advantage of these features, we further propose another rewrite algorithm, F-greedy+, based on adaptive interval detection of valid container utilization. F-greedy+ estimates the valid utilization of old containers more accurately by detecting trends of VCRC change in two directions and selecting referenced containers in the global scope. We quantitatively evaluate InDe using three real-world backup workloads. The experimental results show that, compared with two state-of-the-art algorithms (Capping and SMR), our scheme improves restore speed by 1.3×–2.4× while achieving almost the same backup performance.
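
A sketch of how a VCRC-style count could drive rewrite decisions, with hypothetical bookkeeping: count how many duplicate chunks in the current segment each old container serves, and rewrite the duplicates whose container falls below a threshold. The actual F-greedy/F-greedy+ algorithms and their adaptive thresholding differ.

```python
from collections import Counter

def plan_rewrites(segment_chunks, chunk_to_container, vcrc_threshold):
    """Return duplicate chunks to rewrite instead of reference.

    VCRC of a container = how many chunks in this segment it serves;
    containers referenced below the threshold are rewritten away.
    """
    vcrc = Counter(chunk_to_container[c] for c in segment_chunks)
    return {c for c in segment_chunks
            if vcrc[chunk_to_container[c]] < vcrc_threshold}

chunk_to_container = {"a": 1, "b": 1, "c": 2, "d": 3, "e": 3, "f": 3}
dupes = ["a", "b", "c", "d", "e", "f"]
print(sorted(plan_rewrites(dupes, chunk_to_container, 2)))  # ['c']
```
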
Citations: 4