
Latest Publications: ACM Transactions on Storage

ReadGuard: Integrated SSD Management for Priority-Aware Read Performance Differentiation
IF 2.1 · CAS Tier 3 (Computer Science) · Q3 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) · Pub Date: 2024-07-25 · DOI: 10.1145/3676884
Myoungjun Chun, Myungsuk Kim, Dusol Lee, Jisung Park, Jihong Kim
When multiple apps with different I/O priorities share a high-performance SSD, it is important to differentiate the I/O QoS level based on the I/O priority of each app. In this paper, we study how a modern flash-based SSD should be designed to support priority-aware read performance differentiation. From an in-depth evaluation study using 3D TLC SSDs, we observed that existing FTLs have several weaknesses that need to be improved for better read performance differentiation. In order to overcome the existing FTL weaknesses, we propose ReadGuard, a novel priority-aware SSD management technique that enables an FTL to manage its blocks in a fully read-latency-aware fashion. ReadGuard leverages a new read-latency-centric block quality marker that can accurately distinguish the read latency of a block and ensures that higher-quality blocks are used for higher-priority apps. ReadGuard extends an existing suspend/resume technique to handle collisions among reads. Our experimental results show that a ReadGuard-enabled SSD is effective in supporting differentiated read performance in modern 3D flash SSDs.
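To make the block-quality idea concrete, here is a minimal sketch of a read-latency-aware block allocator in the spirit of the abstract: free blocks are graded by a measured per-block read latency, and higher-priority apps receive the fastest blocks. The two-level priority policy, the latency figures, and all names are illustrative assumptions, not ReadGuard's actual FTL logic.

```python
import heapq

# Sketch only: blocks are ranked by measured read latency (the "quality
# marker"), and allocation steers high-priority writes to fast blocks.

class PriorityBlockAllocator:
    def __init__(self):
        self._free = []  # min-heap of (read_latency_us, block_id)

    def add_free_block(self, block_id, read_latency_us):
        # Quality marker: lower measured read latency = higher quality.
        heapq.heappush(self._free, (read_latency_us, block_id))

    def allocate(self, high_priority: bool) -> int:
        if not self._free:
            raise RuntimeError("no free blocks")
        if high_priority:
            return heapq.heappop(self._free)[1]   # fastest block
        worst = max(self._free)                   # slowest block (O(n) scan,
        self._free.remove(worst)                  # kept simple for clarity)
        heapq.heapify(self._free)
        return worst[1]

alloc = PriorityBlockAllocator()
for bid, lat in [(0, 80), (1, 120), (2, 60), (3, 200)]:
    alloc.add_free_block(bid, lat)
print(alloc.allocate(high_priority=True))    # -> 2 (lowest latency)
print(alloc.allocate(high_priority=False))   # -> 3 (highest latency)
```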
{"title":"ReadGuard: Integrated SSD Management for Priority-Aware Read Performance Differentiation","authors":"Myoungjun Chun, Myungsuk Kim, Dusol Lee, Jisung Park, Jihong Kim","doi":"10.1145/3676884","DOIUrl":"https://doi.org/10.1145/3676884","url":null,"abstract":"\u0000 When multiple apps with different I/O priorities share a high-performance SSD, it is important to differentiate the I/O QoS level based on the I/O priority of each app. In this paper, we study how a modern flash-based SSD should be designed to support priority-aware read performance differentiation. From an in-depth evaluation study using 3D TLC SSDs, we observed that existing FTLs have several weaknesses that need to be improved for better read performance differentiation. In order to overcome the existing FTL weaknesses, we propose\u0000 ReadGuard\u0000 , a novel priority-aware SSD management technique that enables an FTL to manage its blocks in a fully read-latency-aware fashion.\u0000 ReadGuard\u0000 leverages a new read-latency-centric block quality marker that can accurately distinguish the read latency of a block and ensures that higher-quality blocks are used for higher-priority apps.\u0000 ReadGuard\u0000 extends an existing suspend/resume technique to handle collisions among reads. Our experimental results show that a\u0000 ReadGuard\u0000 -enabled SSD is effective in supporting differentiated read performance in modern 3D flash SSDs.\u0000","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141806258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From SSDs Back to HDDs: Optimizing VDO to Support Inline Deduplication and Compression for HDDs as Primary Storage Media
IF 2.1 · CAS Tier 3 (Computer Science) · Q3 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) · Pub Date: 2024-07-23 · DOI: 10.1145/3678250
Patrick Raaf, André Brinkmann, E. Borba, Hossen Asadi, Sai Narasimhamurthy, John Bent, Mohamad El-Batal, Reza Salkhordeh
Deduplication and compression are powerful techniques to reduce the ratio between the quantity of logical data stored and the physical amount of consumed storage. Deduplication can impose significant performance overheads, as duplicate detection for large systems induces random accesses to the backend storage. These random accesses have led to the concern that deduplication for primary storage and HDDs are not compatible. Most inline data reduction solutions are therefore optimized for SSDs and discourage their use for HDDs, even for sequential workloads. In this work, we show that these concerns are valid if and only if the lessons learned from deduplication research are not applied. We have therefore investigated data reduction solutions for primary storage based on the RedHat Virtual Disk Optimizer (VDO) and show that directly applying them can decrease sequential write performance for HDDs by 36 times. We then show that slight modifications to VDO, plus the integration of a very small SSD area, significantly improve performance, even beyond the performance achieved without data reduction enabled, making HDDs more cost-efficient than SSDs for a wide range of mostly sequential Cloud workloads. Additionally, these VDO optimizations do not require maintaining different code bases for HDDs and SSDs; we therefore provide the first data reduction solution applicable to both storage media.
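The premise can be illustrated with a toy hybrid write path: the random-access fingerprint index lives on a small fast region (SSD), while unique data is appended sequentially to the large slow region (HDD), so duplicate detection never issues random HDD I/O. This is a sketch of the idea only, not VDO's code, and all names are hypothetical.

```python
import hashlib

# Sketch: split an inline-dedup write path across media so the HDD only
# ever sees sequential appends, while index lookups hit the SSD region.

class HybridDedupStore:
    def __init__(self):
        self.ssd_index = {}   # fingerprint -> hdd offset (small, random access)
        self.hdd_log = []     # sequential append-only data region

    def write(self, chunk: bytes) -> int:
        fp = hashlib.sha256(chunk).digest()
        off = self.ssd_index.get(fp)          # random lookup hits SSD only
        if off is None:
            off = len(self.hdd_log)
            self.hdd_log.append(chunk)        # HDD sees sequential writes
            self.ssd_index[fp] = off
        return off

store = HybridDedupStore()
a = store.write(b"block-A")
b = store.write(b"block-A")   # duplicate: no HDD write, index hit on SSD
assert a == b and len(store.hdd_log) == 1
```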
{"title":"From SSDs Back to HDDs: Optimizing VDO to Support Inline Deduplication and Compression for HDDs as Primary Storage Media","authors":"Patrick Raaf, André Brinkmann, E. Borba, Hossen Asadi, Sai Narasimhamurthy, John Bent, Mohamad El-Batal, Reza Salkhordeh","doi":"10.1145/3678250","DOIUrl":"https://doi.org/10.1145/3678250","url":null,"abstract":"Deduplication and compression are powerful techniques to reduce the ratio between the quantity of logical data stored and the physical amount of consumed storage. Deduplication can impose significant performance overheads, as duplicate detection for large systems induces random accesses to the backend storage. These random accesses have led to the concern that deduplication for primary storage and HDDs are not compatible. Most inline data reduction solutions are therefore optimized for SSDs and discourage their use for HDDs, even for sequential workloads.\u0000 \u0000 In this work, we show that these concerns are valid if and only if the lessons learned from deduplication research are not applied. We have therefore investigated data reduction solutions for primary storage based on the RedHat\u0000 Virtual Disk Optimizer\u0000 (VDO) and show that directly applying them can decrease sequential write performance for HDDs by 36-times. We then show that slight modifications to VDO plus the integration of a very small SSD area significantly improve performance even beyond the performance without data reduction enabled, making HDDs more cost-efficient for a wide range of mostly sequential Cloud workloads than SSDs. Additionally, these VDO optimizations do not require to maintain different code bases for HDDs and SSDs and we therefore provide the first data reduction solution applicable to both storage media.\u0000","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141810465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Extremely-Compressed SSDs with I/O Behavior Prediction
IF 2.1 · CAS Tier 3 (Computer Science) · Q3 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE) · Pub Date: 2024-07-16 · DOI: 10.1145/3677044
Xiangyu Yao, Qiao Li, Kaihuan Lin, Xinbiao Gan, Jie Zhang, Congming Gao, Zhirong Shen, Quanqing Xu, Chuanhui Yang, Jason Xue
As the data volume continues to grow exponentially, there is an increasing demand for large storage system capacity. Data compression techniques effectively reduce the volume of written data, enhancing space efficiency. As a result, many modern SSDs have already incorporated data compression capabilities. However, data compression introduces additional processing overhead in critical I/O paths, potentially affecting system performance. Currently, most compression solutions in flash-based storage systems employ fixed compression algorithms for all incoming data without leveraging differences among various data access patterns. This leads to sub-optimal compression efficiency. This paper proposes a data-type-aware Flash Translation Layer (DAFTL) scheme to maximize space efficiency without compromising system performance. First, we propose an I/O behavior prediction method to forecast future accesses to specific data. Then, DAFTL matches data types with distinct I/O behaviors to compression algorithms of varying intensities, achieving an optimal balance between performance and space efficiency. Specifically, it employs higher-intensity compression algorithms for less frequently accessed data to maximize space efficiency. For frequently accessed data, it utilizes lower-intensity but faster compression algorithms to maintain system performance. Finally, an improved compact compression method is proposed to effectively eliminate page fragmentation and further enhance space efficiency. Extensive evaluations using a variety of real-world workloads, as well as workloads with real data collected on our platforms, demonstrate that DAFTL achieves greater data reduction than other approaches. Compared to state-of-the-art compression schemes, DAFTL reduces the total number of pages written to the SSD by an average of 8%, 21.3%, and 25.6% for data with high, medium, and low compressibility, respectively. For workloads with real data, DAFTL achieves an average reduction of 10.4% in the total number of pages written to the SSD. Furthermore, DAFTL exhibits comparable or even improved read and write performance compared to other solutions.
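As an illustration of the hotness-to-intensity policy described above, the following sketch picks a zlib compression level from a predicted access frequency: hot data gets a fast, low-intensity level and cold data a slow, high-intensity one. The predictor, the thresholds, and the use of zlib are placeholder assumptions rather than DAFTL's in-device algorithms.

```python
import zlib

# Sketch: map predicted access frequency to a compression intensity.

def choose_level(predicted_accesses_per_day: float) -> int:
    if predicted_accesses_per_day > 100:   # hot: favor speed
        return 1
    if predicted_accesses_per_day > 1:     # warm: balanced
        return 6
    return 9                               # cold: favor space

def compress_chunk(data: bytes, predicted_accesses_per_day: float) -> bytes:
    return zlib.compress(data, choose_level(predicted_accesses_per_day))

chunk = b"example page payload " * 100
hot = compress_chunk(chunk, 500.0)   # fast path, possibly larger output
cold = compress_chunk(chunk, 0.1)    # slow path, smallest output
print(len(hot), len(cold))
```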
{"title":"Extremely-Compressed SSDs with I/O Behavior Prediction","authors":"Xiangyu Yao, Qiao Li, Kaihuan Lin, Xinbiao Gan, Jie Zhang, Congming Gao, Zhirong Shen, Quanqing Xu, Chuanhui Yang, Jason Xue","doi":"10.1145/3677044","DOIUrl":"https://doi.org/10.1145/3677044","url":null,"abstract":"As the data volume continues to grow exponentially, there is an increasing demand for large storage system capacity. Data compression techniques effectively reduce the volume of written data, enhancing space efficiency. As a result, many modern SSDs have already incorporated data compression capabilities. However, data compression introduces additional processing overhead in critical I/O paths, potentially affecting system performance. Currently, most compression solutions in flash-based storage systems employ fixed compression algorithms for all incoming data without leveraging differences among various data access patterns. This leads to sub-optimal compression efficiency.\u0000 This paper proposes a data-type-aware Flash Translation Layer (DAFTL) scheme to maximize space efficiency without compromising system performance. First, we propose an I/O behavior prediction method to forecast future access on specific data. Then, DAFTL matches data types with distinct I/O behaviors to compression algorithms of varying intensities, achieving an optimal balance between performance and space efficiency. Specifically, it employs higher-intensity compression algorithms for less frequently accessed data to maximize space efficiency. For frequently accessed data, it utilizes lower-intensity but faster compression algorithms to maintain system performance. Finally, an improved compact compression method is proposed to effectively eliminate page fragmentation and further enhance space efficiency. Extensive evaluations using a variety of real-world workloads, as well as the workloads with real data we collected on our platforms, demonstrate that DAFTL achieves more data reductions than other approaches. When compared to the state-of-the-art compression schemes, DAFTL reduces the total number of pages written to the SSD by an average of 8%, 21.3%, and 25.6% for data with high, medium, and low compressibility, respectively. In the case of workloads with real data, DAFTL achieves an average reduction of 10.4% in the total number of pages written to SSD. Furthermore, DAFTL exhibits comparable or even improved read and write performance compared to other solutions.","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141642190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Introduction to the Special Section on USENIX OSDI 2023
IF 1.7 · CAS Tier 3 (Computer Science) · Q3 (Computer Science) · Pub Date: 2024-06-06 · DOI: 10.1145/3654801
Roxana Geambasu, Ed Nightingale
This special section includes “LVMT: An Efficient Authenticated Storage for Blockchain” by Chenxing Li et al.
{"title":"Introduction to the Special Section on USENIX OSDI 2023","authors":"Roxana Geambasu, Ed Nightingale","doi":"10.1145/3654801","DOIUrl":"https://doi.org/10.1145/3654801","url":null,"abstract":"An Efficient Authenticated Storage for Blockchain” by","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141379782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LVMT: An Efficient Authenticated Storage for Blockchain
IF 1.7 · CAS Tier 3 (Computer Science) · Q3 (Computer Science) · Pub Date: 2024-05-16 · DOI: 10.1145/3664818
Chenxing Li, Sidi Mohamed Beillahi, Guang Yang, Ming Wu, Wei Xu, Fan Long

Authenticated storage access is the performance bottleneck of a blockchain, because each access can be amplified to potentially O(log n) disk I/O operations in the standard Merkle Patricia Trie (MPT) storage structure. In this paper, we propose the multi-Layer Versioned Multipoint Trie (LVMT), a novel high-performance blockchain storage system with significantly reduced I/O amplification. LVMT uses the authenticated multipoint evaluation tree (AMT) vector commitment protocol to update commitment proofs in constant time. LVMT adopts a multi-layer design to support unlimited key-value pairs and stores version numbers instead of value hashes to avoid costly elliptic curve multiplication operations. In our experiment, LVMT outperforms the MPT in real Ethereum traces, delivering read and write operations six times faster. It also boosts blockchain system execution throughput by up to 2.7 times.
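A heavily simplified sketch of the version-number idea follows: each write bumps a small per-slot version instead of re-hashing the value up a Merkle path, so the commitment can be adjusted with a constant amount of work per write. The additive "commitment" below is a toy stand-in for the paper's AMT vector commitment (which operates over elliptic-curve group elements), and every name is illustrative.

```python
# Sketch: constant-time writes via version bumps instead of hash paths.

class VersionedStore:
    def __init__(self, slots=16):
        self.versions = [0] * slots   # per-slot version numbers
        self.values = {}              # key -> latest value (off-commitment)
        self.commitment = 0           # toy additive commitment over versions

    def put(self, key: str, value: bytes):
        slot = hash(key) % len(self.versions)
        self.versions[slot] += 1          # O(1) version bump
        self.commitment += 1 << slot      # O(1) commitment delta (toy)
        self.values[key] = value

store = VersionedStore()
store.put("acct:alice", b"100")
store.put("acct:alice", b"90")   # second write: constant-time update again
print(store.versions, hex(store.commitment))
```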

{"title":"LVMT: An Efficient Authenticated Storage for Blockchain","authors":"Chenxing Li, Sidi Mohamed Beillahi, Guang Yang, Ming Wu, Wei Xu, Fan Long","doi":"10.1145/3664818","DOIUrl":"https://doi.org/10.1145/3664818","url":null,"abstract":"<p>Authenticated storage access is the performance bottleneck of a blockchain, because each access can be amplified to potentially <i>O</i>(log <i>n</i>) disk I/O operations in the standard Merkle Patricia Trie (MPT) storage structure. In this paper, we propose a multi-Layer Versioned Multipoint Trie (LVMT), a novel high-performance blockchain storage with significantly reduced I/O amplifications. LVMT uses the authenticated multipoint evaluation tree (AMT) vector commitment protocol to update commitment proofs in constant time. LVMT adopts a multi-layer design to support unlimited key-value pairs and stores version numbers instead of value hashes to avoid costly elliptic curve multiplication operations. In our experiment, LVMT outperforms the MPT in real Ethereum traces, delivering read and write operations six times faster. It also boosts blockchain system execution throughput by up to 2.7 times.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141059453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Design of Fast Delta Encoding for Delta Compression Based Storage Systems
IF 1.7 · CAS Tier 3 (Computer Science) · Q3 (Computer Science) · Pub Date: 2024-05-14 · DOI: 10.1145/3664817
Haoliang Tan, Wen Xia, Xiangyu Zou, Cai Deng, Qing Liao, Zhaoquan Gu

Delta encoding is a data reduction technique capable of calculating the differences (i.e., the delta) among very similar files and chunks. It is widely used in applications such as synchronization and replication, backup/archival storage, and cache compression. However, delta encoding is computationally costly due to its time-consuming word-matching operations for delta calculation. Existing delta encoding approaches either run at a slow encoding speed, such as Xdelta and Zdelta, or at a low compression ratio, such as Ddelta and Edelta. In this paper, we propose Gdelta, a fast delta encoding approach with a high compression ratio. The key idea behind Gdelta is the combined use of five techniques: (1) employing an improved Gear-based rolling hash to replace the Adler32 hash for fast scanning of overlapping words of similar chunks, (2) adopting quick array-based indexing for word-matching, (3) applying a sampling indexing scheme to reduce the cost of building traditional full indexes over base chunks' words, (4) skipping unmatched words to accelerate delta encoding through non-redundant areas, and (5) last but not least, after word-matching, batch-compressing the remainder to further improve the compression ratio. Our evaluation results driven by seven real-world datasets suggest that Gdelta achieves encoding/decoding speedups of 3.5X ∼ 25X over the classic Xdelta and Zdelta approaches while increasing the compression ratio by about 10% ∼ 240%.
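As an illustration of technique (1), here is a minimal Gear-style rolling hash: one shift, one add, and one table lookup per byte, which is what makes scanning for candidate word matches cheap compared to Adler32. The table seeding, mask width, and sampling condition are assumptions for the sketch, not Gdelta's exact parameters.

```python
import random

# Sketch of a Gear-based rolling hash: cheap per-byte update.
random.seed(42)
GEAR = [random.getrandbits(64) for _ in range(256)]
MASK = (1 << 64) - 1

def gear_hashes(data: bytes):
    """Yield a rolling hash value at every byte position."""
    h = 0
    for b in data:
        h = ((h << 1) + GEAR[b]) & MASK   # one shift + one add per byte
        yield h

base = b"the quick brown fox jumps over the lazy dog"
for pos, h in enumerate(gear_hashes(base)):
    if h % 16 == 0:            # sample a subset of positions for indexing
        print(pos, hex(h))
```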

{"title":"The Design of Fast Delta Encoding for Delta Compression Based Storage Systems","authors":"Haoliang Tan, Wen Xia, Xiangyu Zou, Cai Deng, Qing Liao, Zhaoquan Gu","doi":"10.1145/3664817","DOIUrl":"https://doi.org/10.1145/3664817","url":null,"abstract":"<p>Delta encoding is a data reduction technique capable of calculating the differences (i.e., delta) among very similar files and chunks. It is widely used for various applications, such as synchronization replication, backup/archival storage, cache compression, etc. However, delta encoding is computationally costly due to its time-consuming word-matching operations for delta calculation. Existing delta encoding approaches either run at a slow encoding speed, such as Xdelta and Zdelta, or at a low compression ratio, such as Ddelta and Edelta. In this paper, we propose Gdelta, a fast delta encoding approach with a high compression ratio. The key idea behind Gdelta is the combined use of five techniques: (1) employing an improved Gear-based rolling hash to replace Adler32 hash for fast scanning overlapping words of similar chunks, (2) adopting a quick array-based indexing for word-matching, (3) applying a sampling indexing scheme to reduce the cost of traditional building full indexes for base chunks’ words, (4) skipping unmatched words to accelerate delta encoding through non-redundant areas, and (5) last but not least, after word-matching, further batch compressing the remainder to improve the compression ratio. Our evaluation results driven by seven real-world datasets suggest that Gdelta achieves encoding/decoding speedups of 3.5X ∼ 25X over the classic Xdelta and Zdelta approaches while increasing the compression ratio by about 10% ∼ 240%.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140928573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Memory-Disaggregated Radix Tree
IF 1.7 · CAS Tier 3 (Computer Science) · Q3 (Computer Science) · Pub Date: 2024-05-08 · DOI: 10.1145/3664289
Xuchuan Luo, Pengfei Zuo, Jiacheng Shen, Jiazhen Gu, Xin Wang, Michael Lyu, Yangfan Zhou

Disaggregated memory (DM) is an increasingly prevalent architecture with high resource utilization. It separates computing and memory resources into two pools and interconnects them with fast networks. Existing range indexes on DM are based on B+ trees, which suffer from large inherent read and write amplifications. The read and write amplifications rapidly saturate the network bandwidth, resulting in low request throughput and high access latency of B+ trees on DM.

In this paper, we propose that the radix tree is more suitable for DM than the B+ tree due to smaller read and write amplifications. However, constructing a radix tree on DM is challenging due to the costly lock-based concurrency control, the bounded memory-side IOPS, and the complicated computing-side cache validation. To address these challenges, we design SMART, the first high-performance radix tree for disaggregated memory. Specifically, we leverage 1) a hybrid concurrency control scheme including lock-free internal nodes and fine-grained lock-based leaf nodes to reduce lock overhead, 2) a computing-side read-delegation and write-combining technique to break through the IOPS upper bound by reducing redundant I/Os, and 3) a simple yet effective reverse check mechanism for computing-side cache validation. Experimental results show that SMART achieves 6.1× higher throughput under typical write-intensive workloads and 2.8× higher throughput under read-only workloads in YCSB benchmarks, compared with state-of-the-art B+ trees on DM.
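The read-delegation idea can be sketched in a few lines: when many local threads want the same remote object, one leader issues the (expensive) remote read and the rest wait for its result, cutting redundant IOPS. The remote_read stand-in, the per-key bookkeeping, and all names are assumptions for illustration, not SMART's RDMA-level implementation.

```python
import threading
import time

# Sketch: delegate concurrent reads of the same key to a single leader.

class ReadDelegator:
    def __init__(self, remote_read):
        self._remote_read = remote_read
        self._lock = threading.Lock()
        self._inflight = {}   # key -> (event, result box)

    def read(self, key):
        with self._lock:
            entry = self._inflight.get(key)
            if entry is None:                      # become the leader
                entry = (threading.Event(), {})
                self._inflight[key] = entry
                leader = True
            else:
                leader = False
        event, box = entry
        if leader:
            box["value"] = self._remote_read(key)  # single remote I/O
            with self._lock:
                del self._inflight[key]
            event.set()
        else:
            event.wait()                           # reuse leader's result
        return box["value"]

calls = []
def remote_read(key):
    time.sleep(0.05)          # model one slow remote access
    calls.append(key)
    return f"val({key})"

d = ReadDelegator(remote_read)
threads = [threading.Thread(target=d.read, args=("k1",)) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print("remote reads issued:", len(calls))  # typically 1 instead of 8
```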

{"title":"A Memory-Disaggregated Radix Tree","authors":"Xuchuan Luo, Pengfei Zuo, Jiacheng Shen, Jiazhen Gu, Xin Wang, Michael Lyu, Yangfan Zhou","doi":"10.1145/3664289","DOIUrl":"https://doi.org/10.1145/3664289","url":null,"abstract":"<p>Disaggregated memory (DM) is an increasingly prevalent architecture with high resource utilization. It separates computing and memory resources into two pools and interconnects them with fast networks. Existing range indexes on DM are based on B+ trees, which suffer from large inherent read and write amplifications. The read and write amplifications rapidly saturate the network bandwidth, resulting in low request throughput and high access latency of B+ trees on DM. </p><p>In this paper, we propose that the radix tree is more suitable for DM than the B+ tree due to smaller read and write amplifications. However, constructing a radix tree on DM is challenging due to the costly lock-based concurrency control, the bounded memory-side IOPS, and the complicated computing-side cache validation. To address these challenges, we design <b>SMART</b>, the first radix tree for disaggregated memory with high performance. Specifically, we leverage 1) a <i>hybrid concurrency control</i> scheme including lock-free internal nodes and fine-grained lock-based leaf nodes to reduce lock overhead, 2) a computing-side <i>read-delegation and write-combining</i> technique to break through the IOPS upper bound by reducing redundant I/Os, and 3) a simple yet effective <i>reverse check</i> mechanism for computing-side cache validation. Experimental results show that SMART achieves 6.1 × higher throughput under typical write-intensive workloads and 2.8 × higher throughput under read-only workloads in YCSB benchmarks, compared with state-of-the-art B+ trees on DM.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140928577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fastmove: A Comprehensive Study of On-Chip DMA and its Demonstration for Accelerating Data Movement in NVM-based Storage Systems
IF 1.7 · CAS Tier 3 (Computer Science) · Q3 (Computer Science) · Pub Date: 2024-05-06 · DOI: 10.1145/3656477
Jiahao Li, Jingbo Su, Luofan Chen, Cheng Li, Kai Zhang, Liang Yang, Sam Noh, Yinlong Xu

Data-intensive applications executing on NVM-based storage systems experience serious bottlenecks when moving data between DRAM and NVM. We advocate for the use of the long-existing but recently neglected on-chip DMA to expedite data movement, with three contributions. First, we explore new latency-oriented optimization directions, driven by a comprehensive DMA study, to design a high-performance DMA module, which significantly lowers the I/O size threshold at which benefits are observed. Second, we propose a new data movement engine, Fastmove, that coordinates the use of the DMA along with the CPU through DDIO-aware strategies, judicious scheduling, and load splitting, such that the DMA's limitations are compensated for and the overall gains are maximized. Finally, with a general kernel-based design, simple APIs, and DAX file system integration, Fastmove allows applications to transparently exploit the DMA and its new features without code changes. We run three data-intensive applications, MySQL, GraphWalker, and Filebench, atop NOVA, ext4-DAX, and XFS-DAX, with standard benchmarks like TPC-C and popular graph algorithms like PageRank. Across single- and multi-socket settings, compared to conventional CPU-only NVM accesses, Fastmove delivers 1.13-2.16× peak-throughput speedups for TPC-C on MySQL, reduces the average latency by 17.7-60.8%, and saves 37.1-68.9% of the CPU usage spent in data movement. It also shortens the execution time of graph algorithms with GraphWalker by 39.7-53.4%, and delivers 1.01-1.48× throughput speedups for Filebench.
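A toy sketch of the load-splitting policy the abstract describes: small copies stay on the CPU (DMA setup cost is not worth it), while large copies are split so the CPU handles a slice in parallel with the DMA engine. The threshold value, the split ratio, and the cpu_copy/dma_copy stand-ins are assumptions, not Fastmove's kernel code.

```python
# Sketch: route a copy to CPU, DMA, or both, based on its size.

DMA_THRESHOLD = 16 * 1024   # bytes; illustrative, tuned per platform

def cpu_copy(dst, src):
    dst[:len(src)] = src

def dma_copy(dst, src):
    # Stand-in for submitting a descriptor to an on-chip DMA engine.
    dst[:len(src)] = src

def fast_move(dst: bytearray, src: bytes):
    if len(src) < DMA_THRESHOLD:
        cpu_copy(dst, src)                      # small: CPU only
    else:
        split = len(src) // 4                   # CPU takes a slice too,
        cpu_copy(dst, src[:split])              # overlapping with the DMA
        dma_copy(memoryview(dst)[split:], src[split:])

src = bytes(range(256)) * 256                   # 64 KiB payload
dst = bytearray(len(src))
fast_move(dst, src)
assert bytes(dst) == src
```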

{"title":"Fastmove: A Comprehensive Study of On-Chip DMA and its Demonstration for Accelerating Data Movement in NVM-based Storage Systems","authors":"Jiahao Li, Jingbo Su, Luofan Chen, Cheng Li, Kai Zhang, Liang Yang, Sam Noh, Yinlong Xu","doi":"10.1145/3656477","DOIUrl":"https://doi.org/10.1145/3656477","url":null,"abstract":"<p>Data-intensive applications executing on NVM-based storage systems experience serious bottlenecks when moving data between DRAM and NVM. We advocate for the use of the long-existing but recently neglected on-chip DMA to expedite data movement with three contributions. First, we explore new latency-oriented optimization directions, driven by a comprehensive DMA study, to design a high-performance DMA module, which significantly lowers the I/O size threshold to observe benefits. Second, we propose a new data movement engine, <monospace>Fastmove</monospace>, that coordinates the use of the DMA along with the CPU with DDIO-aware strategies, judicious scheduling and load splitting such that the DMA’s limitations are compensated, and the overall gains are maximized. Finally, with a general kernel-based design, simple APIs, and DAX file system integration, <monospace>Fastmove</monospace> allows applications to transparently exploit the DMA and its new features without code change. We run three data-intensive applications MySQL, GraphWalker, and Filebench atop <monospace>NOVA</monospace>, <monospace>ext4-DAX</monospace>, and <monospace>XFS-DAX</monospace>, with standard benchmarks like TPC-C, and popular graph algorithms like PageRank. Across single- and multi-socket settings, compared to the conventional CPU-only NVM accesses, <monospace>Fastmove</monospace> introduces to TPC-C with MySQL 1.13-2.16 × speedups of peak throughput, reduces the average latency by 17.7-60.8%, and saves 37.1-68.9% CPU usage spent in data movement. It also shortens the execution time of graph algorithms with GraphWalker by 39.7-53.4%, and introduces 1.01-1.48 × throughput speedups for Filebench.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140881717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FSDedup: Feature-Aware and Selective Deduplication for Improving Performance of Encrypted Non-Volatile Main Memory
IF 1.7 · CAS Tier 3 (Computer Science) · Q3 (Computer Science) · Pub Date: 2024-05-01 · DOI: 10.1145/3662736
Chunfeng Du, Zihang Lin, Suzhen Wu, Yifei Chen, Jiapeng Wu, Shengzhe Wang, Weichun Wang, Qingfeng Wu, Bo Mao

The endurance, performance, and energy efficiency of encrypted Non-Volatile Main Memory (NVMM) can be enhanced by minimizing written data through inline deduplication. However, existing approaches applying inline deduplication to encrypted NVMM suffer from substantial performance degradation due to the high computing, memory-footprint, and index-lookup overhead of generating, storing, and querying the cryptographic hash (fingerprint). In our preliminary work, ESD [14], we proposed an Error Correcting Code (ECC) assisted selective deduplication scheme, utilizing the ECC information as a fingerprint to identify similar data effectively and then leveraging the selective deduplication technique to eliminate a large amount of redundant data with high reference counts. In this paper, we propose FSDedup. Compared with ESD, FSDedup leverages a prefetch cache to reduce the read overhead during similarity comparison and utilizes a cache refresh mechanism to identify and eliminate further redundant data. Extensive experimental evaluations demonstrate that FSDedup enhances the performance of the NVMM system beyond ESD. Experimental results show that FSDedup can improve both write and read speed by up to 1.8×, enhance Instructions Per Cycle (IPC) by up to 1.5×, and reduce energy consumption by up to 2.0×, compared to ESD.
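A simplified sketch of the selection policy described above: a cheap, already-available code (standing in for the ECC bits) serves as a similarity fingerprint, and only data whose fingerprint recurs often (high expected reference count) is verified and deduplicated, so cold data skips the expensive path entirely. The crc32 stand-in for ECC and the threshold are assumptions for illustration.

```python
import zlib
from collections import defaultdict

# Sketch: selective dedup gated on a cheap fingerprint's popularity.

REF_THRESHOLD = 3   # only dedup fingerprints seen this many times

fp_counts = defaultdict(int)
dedup_index = {}    # fingerprint -> canonical data (verified match)

def write_line(data: bytes) -> str:
    fp = zlib.crc32(data)                 # stand-in for ECC-derived bits
    fp_counts[fp] += 1
    if fp_counts[fp] >= REF_THRESHOLD:
        canon = dedup_index.get(fp)
        if canon == data:                 # byte-compare rules out collisions
            return "dedup-hit"
        dedup_index[fp] = data            # popular data enters the index
        return "indexed"
    return "written"                      # cold data skips dedup entirely

for _ in range(5):
    print(write_line(b"memory line A" * 4))
# -> written, written, indexed, dedup-hit, dedup-hit
```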

{"title":"FSDedup: Feature-Aware and Selective Deduplication for Improving Performance of Encrypted Non-Volatile Main Memory","authors":"Chunfeng Du, Zihang Lin, Suzhen Wu, Yifei Chen, Jiapeng Wu, Shengzhe Wang, Weichun Wang, Qingfeng Wu, Bo Mao","doi":"10.1145/3662736","DOIUrl":"https://doi.org/10.1145/3662736","url":null,"abstract":"<p>Enhancing the endurance, performance, and energy efficiency of encrypted Non-Volatile Main Memory (NVMM) can be achieved by minimizing written data through inline deduplication. However, existing approaches applying inline deduplication to encrypted NVMM suffer from substantial performance degradation due to high computing, memory footprint, and index-lookup overhead to generate, store, and query the cryptographic hash (fingerprint). In the preliminary ESD [14], we proposed the Error Correcting Code (ECC) assisted selective deduplication scheme, utilizing the ECC information as a fingerprint to identify similar data effectively and then leveraging the selective deduplication technique to eliminate a large amount of redundant data with high reference counts. In this paper, we proposed FSDedup. Compared with ESD, FSDedup could leverage the prefetch cache to reduce the read overhead during similarity comparison and utilize the cache refresh mechanism to identify further and eliminate more redundant data. Extensive experimental evaluations demonstrate that FSDedup can enhance the performance of the NVMM system further than the ESD. Experimental results show that FSDedup can improve both write and read speed by up to 1.8 ×, enhance Instructions Per Cycle (IPC) by up to 1.5 ×, and reduce energy consumption by up to 2.0 ×, compared to ESD.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140840473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Design and Implementation of Deduplication on F2FS
IF 1.7 · CAS Tier 3 (Computer Science) · Q3 (Computer Science) · Pub Date: 2024-04-29 · DOI: 10.1145/3662735
Tiangmeng Zhang, Renhui Chen, Zijing Li, Congming Gao, Chengke Wang, Jiwu Shu

Data deduplication technology has gained popularity in modern file systems due to its ability to eliminate redundant writes and improve storage space efficiency. In recent years, the flash-friendly file system (F2FS) has been widely adopted in flash-memory-based storage devices, including smartphones, high-speed servers, and Internet of Things devices. In this paper, we propose F2DFS (deduplication-based F2FS), which makes three main design contributions. First, F2DFS integrates inline and offline hybrid deduplication. Inline deduplication eliminates redundant writes and enhances flash device endurance, while offline deduplication mitigates the negative I/O performance impact and saves more storage space. Second, F2DFS follows the file system coupling design principle, effectively leveraging the potential and benefits of both deduplication and native F2FS. With the aid of this principle, F2DFS also achieves high-performance and space-efficient incremental deduplication. Third, F2DFS adopts virtual indexing to mitigate deduplication-induced many-to-one mapping updates during segment cleaning. We conducted comprehensive experimental comparisons between F2DFS, native F2FS, and other state-of-the-art deduplication schemes, using both synthetic and real-world workloads. For inline deduplication, F2DFS outperforms SmartDedup, Dmdedup, and ZFS in terms of both I/O bandwidth performance and deduplication rates. For offline deduplication, compared to SmartDedup, XFS, and BtrFS, F2DFS shows higher execution efficiency, lower resource usage, and greater storage space savings. Moreover, F2DFS demonstrates more efficient segment cleaning than native F2FS.
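A minimal sketch of the inline/offline hybrid described above: writes are deduplicated inline only when the fingerprint is already cached in memory (cheap), and a background offline pass later folds the duplicates that the cache missed. The structures and names below are illustrative, not F2FS internals.

```python
import hashlib

# Sketch: inline dedup via an in-memory cache, plus an offline sweep.

inline_cache = {}   # fingerprint -> block number (small, may evict)
disk = {}           # block number -> data
next_blk = 0

def inline_write(data: bytes) -> int:
    global next_blk
    fp = hashlib.sha256(data).digest()
    if fp in inline_cache:               # inline hit: no flash write
        return inline_cache[fp]
    blk = next_blk
    next_blk += 1
    disk[blk] = data
    inline_cache[fp] = blk
    return blk

def offline_dedup():
    """Background pass: fold duplicate blocks the inline cache missed."""
    seen, remap = {}, {}
    for blk in sorted(disk):
        fp = hashlib.sha256(disk[blk]).digest()
        if fp in seen:
            remap[blk] = seen[fp]        # point block at canonical copy
            del disk[blk]
        else:
            seen[fp] = blk
    return remap

a = inline_write(b"data-X")
b = inline_write(b"data-X")       # inline hit: same block, no new write
inline_cache.clear()              # simulate fingerprint-cache eviction
c = inline_write(b"data-X")       # missed inline, lands in a new block
print(a == b, offline_dedup())    # True {1: 0}: offline folds block 1 into 0
```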

{"title":"Design and Implementation of Deduplication on F2FS","authors":"Tiangmeng Zhang, Renhui Chen, Zijing Li, Congming Gao, Chengke Wang, Jiwu Shu","doi":"10.1145/3662735","DOIUrl":"https://doi.org/10.1145/3662735","url":null,"abstract":"<p>Data deduplication technology has gained popularity in modern file systems due to its ability to eliminate redundant writes and improve storage space efficiency. In recent years, the flash-friendly file system (F2FS) has been widely adopted in flash memory based storage devices, including smartphones, fast-speed servers and Internet of Things. In this paper, we propose F2DFS (deduplication-based F2FS), which introduces three main design contributions. First, F2DFS integrates inline and offline hybrid deduplication. Inline deduplication eliminates redundant writes and enhances flash device endurance, while offline deduplication mitigates the negative I/O performance impact and saves more storage space. Second, F2DFS follows the file system coupling design principle, effectively leveraging the potentials and benefits of both deduplication and native F2FS. Also, with the aid of this principle, F2DFS achieves high-performance and space-efficient incremental deduplication. Third, F2DFS adopts virtual indexing to mitigate deduplication-induced many-to-one mapping updates during the segment cleaning. We conducted comprehensive experimental comparisons between F2DFS, native F2FS, and other state-of-the-art deduplication schemes, using both synthetic and real-world workloads. For inline deduplication, F2DFS outperforms SmartDedup, Dmdedup, and ZFS, in terms of both I/O bandwidth performance and deduplication rates. And for offline deduplication, compared to SmartDedup, XFS and BtrFS, F2DFS shows higher execution efficiency, lower resource usage and greater storage space savings. Moreover, F2DFS demonstrates more efficient segment cleanings than native F2FS.</p>","PeriodicalId":49113,"journal":{"name":"ACM Transactions on Storage","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140812433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0