
Proceedings of the 9th ACM International on Systems and Storage Conference: Latest Publications

Using Storage Class Memory Efficiently for an In-memory Database
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2933273
Yonatan Gottesman, J. Nider, Ronen I. Kat, Y. Weinsberg, M. Factor
Storage class memory (SCM) is an emerging class of memory devices that are both byte addressable and persistent. Many different technologies, at different stages of maturity, can be considered SCM; examples include NVDIMM-N, PCM, STT-RAM, Racetrack, and FeRAM. Currently, applications rely on storage technologies such as Flash memory and hard disks as the physical media for persistency. SCM behaves differently and has significantly different characteristics than these existing technologies, which means a fundamental change is needed in the way we program data persistency in applications to fully realize the potential of this new class of device.

Previous work such as [1] focuses on designing a filesystem optimized to run over SCM. Other projects, such as Mnemosyne [4], provide a general-purpose API for applications to use SCM. Mnemosyne, however, does not provide a transaction mechanism flexible enough to allow different SCM updates spanning different parts of the code to be considered one transaction.

Our work focuses on the minimal changes needed to retrofit an existing key-value store to take advantage of SCM technology. We demonstrate these changes on Redis (REmote DIctionary Server) [3], a popular key-value store, and show how Redis can be modified to take advantage of these new abilities by allowing the application to manage its own storage in a unique way. Our approach uses two types of memory technology (DRAM and SCM) for different purposes in a single application. To optimize the system's data capacity, we keep a minimal dataset in persistent memory, while keeping metadata (such as indexing), which can be rebuilt upon failure, in DRAM.

Persistency in Redis is currently provided by logging all transactions to an append-only log file (AOF); these transactions can then be replayed to recover from a failure. Transactions are not made persistent until the AOF file is flushed to disk, which is very slow. Flushing after every transaction hurts performance, while flushing only periodically risks losing data. By using SCM instead of a disk, we can effectively flush every transaction without impacting performance. To let Redis store data objects on the SCM, we must ensure consistency of persistent data even after an unexpected shutdown. To this end, a modified version of dlmalloc [2] is used for all allocations on the SCM, and mfence instructions are used to prevent unexpected reordering. We model the SCM using a memory-mapped file backed by a ramdisk, and compare our changes against Redis using an AOF backed by a ramdisk. Although we do not account for the latency overheads of accessing the SCM, this comparison gives a good upper bound on the performance benefits of using SCM. Our results demonstrate an average latency reduction of 43% and an average throughput increase of 75%.
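The write-then-fence discipline the abstract alludes to can be sketched as follows. This is a hedged illustration, not the authors' implementation: SCM is modeled as a memory-mapped file (as in the paper), `flush()` stands in for the mfence/cache-flush barrier, and the record layout is invented for the example.

```python
import mmap
import os
import struct

# Hypothetical path; the paper backs the file on a ramdisk.
PATH = "/tmp/scm.img"
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)
scm = mmap.mmap(fd, SIZE)

def persist_put(offset: int, payload: bytes) -> None:
    """Write the value first, then its length field, with a flush between
    the two writes so the length (acting as a commit record) never becomes
    durable before the data it describes -- the ordering role that mfence
    and cache flushes play on real SCM hardware."""
    scm[offset + 4 : offset + 4 + len(payload)] = payload
    scm.flush()                       # order: data before commit record
    scm[offset : offset + 4] = struct.pack("<I", len(payload))
    scm.flush()                       # make the commit record durable

def recover_get(offset: int) -> bytes:
    """After a restart, a record is valid only if its length field persisted."""
    (n,) = struct.unpack("<I", scm[offset : offset + 4])
    return bytes(scm[offset + 4 : offset + 4 + n])

persist_put(0, b"hello-scm")
print(recover_get(0))  # b'hello-scm'
```

Without the intermediate flush, the CPU or memory controller could make the length field durable before the payload, so a crash between the two stores would leave a commit record pointing at garbage.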
Citations: 13
Software-Defined Emulation Infrastructure for High Speed Storage
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2933277
Krishna T. Malladi, M. Awasthi, Hongzhong Zheng
As a new I/O communication protocol, NVMe lacks tools for evaluating storage solutions built on the standard. In this paper, we provide the design and analysis of a comprehensive, fully customizable emulation infrastructure that builds on the NVMe protocol. It provides a number of knobs that allow system architects to quickly evaluate the performance implications of a wide variety of storage solutions while natively executing workloads.
Citations: 2
Coded Network Switches for Improved Throughput
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2933281
Rami Cohen, Yuval Cassuto
With the increasing demand for network bandwidth, network switches face the challenge of serving growing data rates. To parallelize the process of writing and reading packets to the switch memory, multiple memory units (MUs) are deployed in parallel in the switch fabric. However, memory contention may occur if packets requested for reading happen to share one or more MUs, due to memory bandwidth limitations. Avoiding such contention in the write stage is limited because the reading schedule of packets is not known when packets arrive at the switch. Thus, efficient packet-placement and read policies are required.

For greater flexibility in the read process, coded switches introduce redundancy to the packet-write path. This is done by calculating additional coded chunks from an incoming packet and writing them, along with the original packet chunks, to MUs in the switch memory. A coding scheme takes an input of k packet chunks and encodes them into a codeword of n chunks (k ≤ n), where the redundant n - k chunks are aimed at providing improved read flexibility. Thanks to the redundancy, only a subset of the coded chunks is required to reconstruct the original (uncoded) packet, so packets may be read even when only part of their chunks is available to read without contention. One natural coding approach is to use [n, k] maximum distance separable (MDS) codes, which have the attractive property that any k chunks taken from the n code chunks can be used to recover the original k packet chunks. Although MDS codes provide the maximum flexibility, our results show that good switching performance can be obtained even with much weaker (and lower-cost) codes, such as binary cyclic codes. Previous switch-coding works [1], [2] considered a stronger (and more costly) model guaranteeing simultaneous reconstruction of worst-case packet requests.

In the coded switching paradigm we propose, the objective is to maximize the number of full packets read from the switch memory simultaneously in a read cycle. The packets to read at each read cycle are specified in a request issued by the control plane of the switch. We show that coding packets upon their write can significantly increase the number of read packets, in return for a small increase in the write load needed to store the redundancy; coding can therefore significantly increase the overall switching throughput. We identify and study two key components for high-throughput coded switches: 1) read algorithms that can recover the maximal number of packets given an arbitrary request for previously written packets, and 2) placement policies determining how coded chunks are placed in the switch MUs. Our results contribute art and insight for each of these two components and, more importantly, reveal the tight relations between them. At a high level, the choice of placement policy can improve both the performance and the computational efficiency of the read algorithm. To show the former, we derive a set of analytical tools to compute and/or bound the performance of the read algorithm under a given placement policy. For the latter, we show a sharp gap between an NP-hard optimal-read problem under one policy (uniform placement) and extremely efficient optimal-read algorithms under two others (cyclic and design placement).
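The "any k of n chunks" property of MDS codes can be illustrated with the smallest nontrivial case: an [n=3, k=2] code with a single XOR parity chunk, which is MDS. This toy sketch is not from the paper; it only demonstrates the recovery property the abstract describes.

```python
# A minimal [3, 2] MDS code: two data chunks plus one XOR parity chunk.
# Any 2 of the 3 chunks suffice to rebuild both original data chunks,
# so a read can proceed even if one MU holding a chunk is contended.

def encode(c0: bytes, c1: bytes) -> list[bytes]:
    """Encode two equal-length data chunks into a 3-chunk codeword."""
    parity = bytes(a ^ b for a, b in zip(c0, c1))
    return [c0, c1, parity]

def decode(chunks: dict[int, bytes]) -> tuple[bytes, bytes]:
    """chunks maps codeword index (0, 1, 2) -> chunk; any two indices work."""
    if 0 in chunks and 1 in chunks:           # both data chunks present
        return chunks[0], chunks[1]
    if 0 in chunks and 2 in chunks:           # c1 = c0 XOR parity
        return chunks[0], bytes(a ^ b for a, b in zip(chunks[0], chunks[2]))
    if 1 in chunks and 2 in chunks:           # c0 = c1 XOR parity
        return bytes(a ^ b for a, b in zip(chunks[1], chunks[2])), chunks[1]
    raise ValueError("need at least k=2 chunks")

c0, c1, p = encode(b"pkt-half-A", b"pkt-half-B")
print(decode({1: c1, 2: p}))  # both halves recovered from chunks 1 and 2
```

For larger n - k, real systems use Reed-Solomon codes, but the read-flexibility principle is the same.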
Citations: 0
Helping Protect Software Distribution with PSWD
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2928281
Edi Shmueli, Sergey Goffman, Yoram Zahavi
The success of new technologies depends on whether proper usage models can be found to support them. In this paper we present such a model for Intel's Software Guard Extensions (SGX): leveraging the technology to provide copy protection for software. We describe the system that we architected, designed, and implemented, which transforms, in a fully automated manner, off-the-shelf applications into secured versions that run on top of the enclaves. Our system can be delivered stand-alone, but also as a layer in existing software copy-protection stacks.
Citations: 0
Flexible Download Time Analysis of Coded Storage Systems
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2933278
Q. Shuai, V. Li
Download time is a key performance metric in distributed storage systems, since it greatly impacts user experience, especially for latency-sensitive applications such as Google Search. Much recent research has shown that coding can reduce download time. To date, almost all previous studies analyze download time when a user requires all the information in a codeword. However, in practical storage systems such as the Windows Azure Storage system (WAS), only files that reach a certain size (e.g., 1 GB) become candidates for erasure coding [1]. That is, in practice, files stored in a codeword are usually very large, and users' requests may desire only part of these files. It is therefore important to analyze latency performance when users request only a subset of the erasure-coded content.
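The full-codeword case the abstract contrasts against can be stated as a simple order-statistic model. The following toy simulation is not the paper's analysis; it only illustrates why requesting fewer chunks can finish sooner: with an (n, k) MDS code, a full read completes when the fastest k of n servers respond, while a request touching fewer chunks completes at an earlier order statistic.

```python
import random

def download_time(latencies: list[float], chunks_needed: int) -> float:
    """Time to finish = latency of the chunks_needed-th fastest server,
    assuming any chunks_needed chunks suffice (MDS property)."""
    return sorted(latencies)[chunks_needed - 1]

random.seed(1)
n, k = 6, 4
# Exponential server latencies are a common (simplifying) modeling choice.
lat = [random.expovariate(1.0) for _ in range(n)]

full = download_time(lat, k)       # whole codeword: need any k of n chunks
partial = download_time(lat, 2)    # request spanning only 2 data chunks
assert partial <= full             # partial reads can never finish later
```

A systematic code complicates this picture, since a partial request may need specific data chunks rather than any subset, which is part of what makes the flexible analysis nontrivial.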
Citations: 4
Optics in Data Centers: Adapting to Diverse Modern Workloads
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2933283
S. Vargaftik, I. Keslassy, A. Orda, K. Barabash, Y. Ben-Itzhak, O. Biran, D. Lorenz
Over recent years we have witnessed massive growth in cloud usage, accelerated by new types of "born-to-the-cloud" workloads. These new workloads are increasingly multi-component and dynamic, and often present highly intensive communication patterns. Massive innovation in Data Center Network (DCN) technologies is required to support this demand, giving rise to new network topologies, new network control paradigms, and new management models. One particularly promising technology candidate for improving DCN efficiency is Optical Circuit Switching (OCS). Several hybrid solutions combining OCS with traditional Electronic Packet Switching (EPS) have been proposed [1, 2], aiming to exploit the benefits of OCS technology (e.g., high bandwidth, low latency, and low power consumption) while leveling out its shortcomings (e.g., slow reconfiguration time, integration with the IP fabric). The first comprehensive work advocating OCS for DCNs [1] considered HPC workloads with semi-static communication patterns. Follow-up works, such as Helios [2], proposed new ways of identifying heavy flows, heuristics for computing the circuit configuration, and control hooks for dispatching traffic over EPS and OCS paths. Newer works, e.g., [3], made further advances: supporting richer sets of communication patterns, employing Software Defined Networking (SDN) to steer traffic and achieve more reactive control planes in anticipation of faster OCS capabilities, and more.

We observe that in hybrid solutions the basic approach remains the same: the network is partitioned between two separate fabrics, one based on OCS and one based on EPS, so that each network flow is handled by one of the fabrics, depending on its properties. In this work, we present a new architecture where optical circuitry does not merely augment the EPS but is properly integrated with it into a coherently managed unified fabric. Our approach is based on the understanding that modern workloads impose diverse traffic demands. Specifically, we identify the abundance of few-to-many and many-to-few communication patterns with multiple dynamic hot spots, and observe that such traffic is better served by tighter integration of OCS and EPS, achieved by introducing composite paths across the OCS-EPS boundaries. As a preliminary proof of concept, we have evaluated our architecture and compared it to the previously proposed hybrid solutions, considering the known uniform and skewed demand models, as well as few-to-many and many-to-few ones. For each traffic pattern, we evaluate both whether it can be met by each of the solutions and, if so, the resulting link utilization. Our preliminary results show a significant improvement in both metrics: feasibility and link utilization.

Looking forward, we plan to expand this research and explore a new thread of opportunities for leveraging the reconfiguration capabilities of contemporary OCS, posing it as a viable DCN technology. This research was partially funded by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 619572 (the COSIGN project).
Citations: 0
Exploiting Parallelism of Distributed Nested Transactions
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2928287
Duane Niles, R. Palmieri, B. Ravindran
We present SPCN, a framework that further extends the benefits of having distributed partially rollbackable (closed-nested) transactions by exploiting their parallel activation. SPCN provides support for executing each closed-nested transaction in parallel with others belonging to the same parent transaction. Their commit sequence is equivalent to the serial commit execution, but parallelism is leveraged to improve performance by reducing the amount of serial network communication. As we show in our evaluation study using 20 nodes on Amazon EC2 and three well-known benchmarks, SPCN provides performance improvement over the original closed nesting, gaining more than 2× in throughput.
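The core idea, running sibling sub-transactions in parallel while keeping a serial-equivalent commit order, can be sketched as follows. This is a hedged illustration of the concept only, not SPCN's implementation; the function names and the use of a thread pool are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def run_nested_in_parallel(parent_log: list, subtxns) -> list:
    """Execute sibling closed-nested transactions concurrently, but merge
    their results into the parent in submission order, so the commit
    sequence is indistinguishable from serial execution."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(fn) for fn in subtxns]  # parallel execution
        for fut in futures:                            # commit strictly in order
            parent_log.append(fut.result())
    return parent_log

# Four sub-transactions that could complete in any order at runtime;
# the parent log nevertheless reflects the original program order.
log = run_nested_in_parallel([], [lambda i=i: f"txn-{i}" for i in range(4)])
print(log)  # ['txn-0', 'txn-1', 'txn-2', 'txn-3']
```

The sketch omits what makes the real problem hard (detecting conflicts between siblings and partially rolling back a closed-nested transaction on abort), but it shows how parallelism and a serial-equivalent commit order can coexist.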
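The execute-in-parallel, commit-in-order idea behind SPCN can be sketched as optimistic speculation with ordered commits. This is an illustrative sketch of the general technique, not SPCN's real API: every closed-nested child of a parent transaction runs concurrently against a snapshot, commits are applied in submission order, and a child whose reads were invalidated by an earlier sibling is simply re-executed, a partial rollback of just that child.

```python
# Sketch (illustrative names/API): parallel closed-nested children with a
# serial-equivalent commit order.
from concurrent.futures import ThreadPoolExecutor

class TxView:
    """Records the values a child reads and buffers its writes."""
    def __init__(self, base):
        self.base, self.reads, self.writes = base, {}, {}
    def get(self, key):
        self.reads.setdefault(key, self.base.get(key))
        return self.writes.get(key, self.base.get(key))
    def put(self, key, value):
        self.writes[key] = value

class ParentTransaction:
    def __init__(self, store):
        self.store, self.children = store, []
    def add_child(self, work):
        self.children.append(work)
    def _speculate(self, work, base):
        view = TxView(dict(base))
        work(view)
        return view.reads, view.writes
    def run(self):
        # Phase 1: execute all children in parallel on the same snapshot.
        with ThreadPoolExecutor() as pool:
            results = list(pool.map(
                lambda w: self._speculate(w, self.store), self.children))
        # Phase 2: commit serially, in submission order (serial-equivalent).
        committed = dict(self.store)
        for work, (reads, writes) in zip(self.children, results):
            if any(committed.get(k) != v for k, v in reads.items()):
                # An earlier sibling changed something this child read:
                # partially roll back by re-running only this child.
                reads, writes = self._speculate(work, committed)
            committed.update(writes)
        self.store = committed
        return self.store
```

Two children that each increment the same counter both speculate from the snapshot in parallel; the second one fails validation at commit time and is re-executed serially, so the final state matches the serial execution.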
S-RAC: SSD Friendly Caching for Data Center Workloads
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2928284
Yuanjiang Ni, Jing Jiang, D. Jiang, Xiaosong Ma, Jin Xiong, Yuangang Wang
Current data-center applications tend to process increasingly large volumes of data, and the limited capacity of the page cache reduces its caching effect. Emerging flash-based solid-state drives (SSDs) have latency and price advantages over hard disks and DRAM, respectively, so SSD-based caching is widely deployed in data centers. However, SSD caching faces two challenges. First, SSDs have limited write endurance, which requires the cache manager to reduce the amount of data written to the SSD. Second, data-center workloads exhibit diverse I/O access patterns, which requires identifying the access patterns that are friendly to SSD caching. This paper first classifies six I/O access patterns among 32 data-center workloads using a cost-benefit analysis, and derives implications for SSD cache design from that analysis. We then propose S-RAC, an SSD cache manager that uses block re-adding and ghost-cache adaptation to retain SSD-friendly blocks on the SSD. Our experimental evaluation shows that S-RAC reduces the amount written to the SSD while improving or maintaining the cache hit ratio.
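The write-reduction idea can be sketched with a toy ghost-cache admission filter. This illustrates the general technique, not S-RAC's actual algorithm: a block is written to the SSD only after it has missed twice recently, tracked by a small "ghost" list that remembers block IDs without their data, so one-shot blocks never cost an SSD write.

```python
# Toy SSD cache with ghost-list admission (illustrative, not S-RAC itself).
from collections import OrderedDict

class GhostFilteredCache:
    def __init__(self, cache_size, ghost_size):
        self.cache = OrderedDict()   # block_id -> data (simulated SSD cache)
        self.ghost = OrderedDict()   # block_id -> None (metadata only)
        self.cache_size, self.ghost_size = cache_size, ghost_size
        self.ssd_writes = 0          # how many blocks we wrote to the SSD

    def access(self, block_id, fetch):
        if block_id in self.cache:               # SSD hit
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = fetch(block_id)                   # miss: read backing store
        if block_id in self.ghost:               # second miss: worth caching
            del self.ghost[block_id]
            self.cache[block_id] = data
            self.ssd_writes += 1
            if len(self.cache) > self.cache_size:
                self.cache.popitem(last=False)   # evict the LRU block
        else:                                    # first miss: ghost only
            self.ghost[block_id] = None
            if len(self.ghost) > self.ghost_size:
                self.ghost.popitem(last=False)
        return data
```

Under this policy, a block accessed once costs zero SSD writes, while a reused block is admitted on its second miss and served from the SSD thereafter.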
Supporting data-driven I/O on GPUs using GPUfs
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2928276
Sagi Shahar, M. Silberstein
Using discrete GPUs for processing very large datasets is challenging, in particular when an algorithm exhibits unpredictable, data-driven access patterns. In this paper we investigate the utility of GPUfs, a library that provides direct access to files from GPU programs, for implementing such algorithms. We analyze the system's bottlenecks and suggest several modifications to the GPUfs design, including a new concurrent hash table for the buffer cache and a highly parallel memory allocator. We also show that implementing the workload in a warp-centric manner improves performance even further. We evaluate our changes by implementing a real image-processing application that creates collages from a dataset of 10 million images. The enhanced GPUfs design improves application performance by 5.6× on average over the original GPUfs, and outperforms both a 12-core parallel CPU implementation using the AVX instruction set and a standard CUDA-based GPU implementation, by up to 2.5× and 3× respectively, while significantly enhancing system programmability and simplifying the application's design and implementation.
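One of the modifications mentioned, a concurrent hash table for the buffer cache, can be illustrated in miniature with lock striping. This is a CPU-side Python sketch of the general idea only (GPUfs's actual table is a GPU-resident data structure): threads contend only for the stripe that owns their key, not for one global lock.

```python
# Lock-striped hash table sketch (illustrative of the concurrency idea,
# not the GPUfs implementation).
import threading

class StripedHashTable:
    def __init__(self, num_stripes=16):
        self.locks = [threading.Lock() for _ in range(num_stripes)]
        self.buckets = [{} for _ in range(num_stripes)]

    def _stripe(self, key):
        return hash(key) % len(self.buckets)

    def get_or_insert(self, key, compute):
        """Return the cached value for key, computing and caching it on a miss."""
        i = self._stripe(key)
        with self.locks[i]:          # only this stripe is locked
            if key not in self.buckets[i]:
                self.buckets[i][key] = compute(key)   # fill on miss
            return self.buckets[i][key]
```

With many stripes, lookups and insertions for keys that hash to different stripes proceed fully in parallel, which is the property a highly concurrent buffer cache needs.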
Enabling Space Elasticity in Storage Systems
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2928291
Helgi Sigurbjarnarson, Pétur Orri Ragnarsson, Juncheng Yang, Ymir Vigfusson, M. Balakrishnan
Storage systems are designed to never lose data. However, modern applications increasingly use local storage to improve performance by storing soft state such as cached, prefetched, or precomputed results. What is needed is elastic storage, where cloud providers can alter the storage footprint of applications by removing and regenerating soft state based on resource availability and access patterns. We propose a new abstraction called a motif that enables storage elasticity by allowing applications to describe how soft state can be regenerated. Carillon is a system that uses motifs to dynamically change the storage space used by applications. Carillon is implemented as a runtime and a collection of shim layers that interpose between applications and specific storage APIs; we describe shims for a filesystem (Carillon-FS) and a key-value store (Carillon-KV). We show that Carillon-FS allows us to dynamically alter the storage footprint of a VM, while Carillon-KV enables a graph database that accelerates performance based on the available storage space.
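The motif abstraction can be sketched as a store that keeps, alongside each soft-state item, a function able to regenerate it. The names and API below are illustrative assumptions, not Carillon's interface: the runtime may discard the stored bytes under space pressure and rebuild them on the next access.

```python
# Toy motif store (illustrative sketch): soft state plus a regeneration recipe.
class MotifStore:
    def __init__(self):
        self.data = {}     # key -> materialized soft state
        self.motifs = {}   # key -> regeneration function ("motif")

    def put(self, key, value, regenerate):
        self.data[key] = value
        self.motifs[key] = regenerate

    def shrink(self, keys):
        # Reclaim space: discard the bytes but keep the recipe.
        for k in keys:
            self.data.pop(k, None)

    def get(self, key):
        if key not in self.data:                 # dropped earlier: rebuild
            self.data[key] = self.motifs[key]()
        return self.data[key]
```

The key property is that `shrink` is always safe: because every item carries its motif, dropping it trades recomputation time for reclaimed space without losing data.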
Journal: Proceedings of the 9th ACM International on Systems and Storage Conference