
Proceedings of the 9th ACM International on Systems and Storage Conference: Latest Publications

Reducing Journaling Harm on Virtualized I/O Systems
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2928289
Eunji Lee, H. Bahn, Minseong Jeong, Sunghwan Kim, Jesung Yeon, S. Yoo, S. Noh, K. Shin
This paper analyzes host cache effectiveness in full virtualization, particularly as it relates to guest journaling. We observe that guests' journal accesses degrade cache performance, largely due to their write-once access pattern and frequent sync operations. To remedy this problem, we design and implement a novel caching policy, called PDC (Pollution Defensive Caching), that detects journal accesses and prevents them from entering the host cache. The proposed PDC is implemented in QEMU-KVM 2.1 on Linux 4.14 and provides a 3-32% performance improvement for various file and I/O benchmarks.
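The core admission idea can be sketched in a few lines. This is a hedged toy model: the class name `PDCache` and the sync-write heuristic are our illustration of the concept, not the paper's QEMU-KVM implementation.

```python
from collections import OrderedDict

class PDCache:
    """Toy pollution-defensive host cache: journal writes bypass the cache."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()          # block id -> data, LRU order

    def is_journal_write(self, block, sync):
        # Stand-in heuristic for the paper's detector: treat synchronous,
        # write-once traffic as journal accesses.
        return sync

    def write(self, block, data, sync=False):
        if self.is_journal_write(block, sync):
            return "bypass"                 # goes straight to the backing store
        self.cache[block] = data
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the LRU block
        return "cached"

cache = PDCache(capacity=2)
cache.write(1, "app data")
cache.write(2, "app data")
cache.write(100, "journal record", sync=True)  # bypasses the host cache
assert 1 in cache.cache and 2 in cache.cache   # app data was not evicted
```

The point of the filter is visible in the final assertion: the write-once journal record never displaces reusable application data from the host cache.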
Citations: 3
Elastic Queue: A Universal SSD Lifetime Extension Plug-in for Cache Replacement Algorithms
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2928286
Yushi Liang, Yunpeng Chai, Ning Bao, Hengyu Chen, Yao-Hong Liu
Flash-based solid-state drives (SSDs) are increasingly deployed as a second-level cache in storage systems because of the noticeable performance acceleration and their transparency to the original software. However, the frequent data updates of existing cache replacement algorithms (e.g., LRU, LIRS, and LARC) cause too many writes on SSDs, shortening device lifetime and raising costs. SSD-oriented cache schemes that issue fewer SSD writes have fixed strategies for selecting cache blocks, so one cannot freely choose a cache algorithm suited to application characteristics for higher performance. Therefore, this paper proposes Elastic Queue (EQ), a universal SSD lifetime extension plug-in that can cooperate with any cache algorithm to extend SSD lifetime. EQ reduces the data update frequency by elastically extending the eviction border of cache blocks, making SSD devices serve much longer. Experimental results based on real-world traces indicate that for the original LRU, LIRS, and LARC schemes, adding the EQ plug-in reduces their SSD write volume by 39.03 times while improving cache hit rates by 17.30% on average.
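A hedged sketch of the elastic-border idea, under assumed semantics and with an LRU base policy (the paper's EQ plug-in is more general and differs in detail): blocks pushed past the nominal cache border linger in an elastic zone, and a hit there restores the block without a fresh SSD write.

```python
from collections import OrderedDict

class ElasticLRU:
    """Toy LRU cache whose eviction border is extended by an elastic margin."""
    def __init__(self, capacity, margin):
        self.capacity, self.margin = capacity, margin
        self.blocks = OrderedDict()   # LRU order; front = coldest
        self.ssd_writes = 0

    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # hit (cache or elastic zone)
        else:
            self.blocks[block] = None
            self.ssd_writes += 1             # miss: block is written to the SSD
        while len(self.blocks) > self.capacity + self.margin:
            self.blocks.popitem(last=False)  # evict only past the elastic border

eq = ElasticLRU(capacity=2, margin=2)
for b in [1, 2, 3, 4, 1]:    # block 1 survives in the elastic zone
    eq.access(b)
assert eq.ssd_writes == 4    # re-accessing block 1 cost no extra SSD write
```

With `margin=0` the same trace costs five SSD writes, since block 1 is evicted before its re-reference; the elastic border converts that write into a hit.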
Citations: 11
Enterprise Resource Management in Mesos Clusters
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2933272
Abed Abu Dbai, David Breitgand, G. Gershinsky, A. Glikson, K. Ahmed
Enterprise data centers increasingly adopt a cloud-like architecture that enables the execution of multiple workloads on a shared pool of resources, reduces the data center footprint, and drives down costs. A number of cluster resource managers have appeared over the last few years, aimed at providing a uniform, technology-neutral resource representation and management substrate; examples include Apache YARN, Google Borg and Omega, Apache Mesos, and IBM Platform EGO. The Apache Mesos project [2] is emerging as a leading open-source resource management technology for server clusters. Mesos offers simple yet powerful and flexible APIs, a highly available and fault-tolerant architecture, scalability to large clusters, isolation between tasks using Linux containers, multi-dimensional resource scheduling, the ability to allocate shares of the cluster to roles representing users or user groups, and a clear separation of concerns between the applications (termed frameworks) and the "cluster kernel", which is Mesos itself. The Mesos resource scheduler supports a generalization of max-min fairness, the Dominant Resource Fairness (DRF) [1] scheduling discipline, which harmonizes the execution of workloads with heterogeneous resource demands by applying max-min fairness to each framework's dominant share, i.e., its largest fractional allocation of any single resource. However, the default Mesos allocation mechanism lacks a number of policy and tenancy capabilities that are important in enterprise deployments. We have investigated the integration of Mesos with the IBM EGO (enterprise grid orchestrator) technology [3], which underpins high-performance computing, analytics, and big data clusters in a variety of industry verticals, including financial services, life sciences, manufacturing, and electronics. We have designed and implemented an experimental integration prototype and tested it with SparkBench workloads.
We demonstrate how Mesos can be enriched with the new resource policy capabilities required for managing enterprise data centers, such as:
• capturing the hierarchical structure of an enterprise (organisations, departments, groups, teams, users) by defining a corresponding resource consumer tree;
• a fine-grained resource plan that defines the resource share ratio, ownership, and lending/borrowing policies for each resource consumer;
• a rich set of resource management policies that use the hierarchical resource consumer model and provide fairness and isolation to the members of the hierarchy, including the important ability to change allocations dynamically (time-based policy);
• a Web-based GUI providing a centralized console through which the whole cluster is observed and managed; in particular, the cluster-wide resource management policies are applied through this GUI.
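The DRF discipline mentioned in the abstract can be illustrated with a short progressive-filling sketch. This is our own toy code reproducing the classic worked example from the DRF paper, not the Mesos allocator:

```python
# Progressive filling: repeatedly grant one task to the framework whose
# dominant share (largest fraction of any single resource) is smallest.
def drf_allocate(total, demands, rounds):
    used = {r: 0.0 for r in total}
    alloc = {f: 0 for f in demands}              # tasks granted per framework
    for _ in range(rounds):
        def dominant_share(f):
            return max(alloc[f] * demands[f][r] / total[r] for r in total)
        f = min(demands, key=dominant_share)
        if any(used[r] + demands[f][r] > total[r] for r in total):
            break                                # cluster exhausted
        for r in total:
            used[r] += demands[f][r]
        alloc[f] += 1
    return alloc

# Classic DRF example: 9 CPUs and 18 GB; framework A needs <1 CPU, 4 GB>
# per task, framework B needs <3 CPUs, 1 GB> per task.
total = {"cpu": 9, "mem": 18}
demands = {"A": {"cpu": 1, "mem": 4}, "B": {"cpu": 3, "mem": 1}}
assert drf_allocate(total, demands, rounds=20) == {"A": 3, "B": 2}
```

The allocation equalizes dominant shares: A ends up with 12/18 of the memory and B with 6/9 of the CPUs, both 2/3.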
Citations: 3
A Robotic Mobile Hot Spot Relay (MHSR) for Disaster Areas
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2933279
Itai Dabran, Tom Palny
Rescue forces in disaster areas mostly use mobile ad-hoc networks to establish communication quickly across various physical barriers. Such networks consist of mobile terminals connected to base stations (BS) or Access Points (AP) in order to transmit essential information to the outside world. In disaster areas, rescue forces are equipped with a PAN (Personal Area Network) that combines devices such as medical, vibration, and noise sensors. In areas where communication conditions are unstable, it is essential to deploy this infrastructure as soon as possible. For example, the authors of [2] propose an implementation of autonomous P2P ad-hoc group communication that supports the need for emergency communication in earthquake disaster areas. In [3], a model for developing ad-hoc network configuration technologies, the Disaster Area Architecture, is proposed; it improves information exchange and coordination among the participants. We present a small self-propelled robot. Our robot is resistant to mechanical damage [4] and operates as a communication relay between the PAN and the outside world, overcoming communication disruptions inside ruins or tunnels. The robot is summoned via a short-range communication request (from a smartphone, for example) when there is no direct connection to a wireless Access Point (AP). Our Mobile Hot Spot Relay (MHSR), depicted in Figure 1, moves independently and can be deployed in a disaster area, where it can be mobilized upon request. While moving, it monitors the Wi-Fi signal towards the AP and stops when the signal drops below a certain (predefined) threshold.
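The stopping rule can be pictured as a simple control loop. This is an illustrative sketch under assumed names and units (RSSI in dBm, one sample per movement step), not the authors' robot firmware:

```python
def drive_until_threshold(rssi_samples, threshold_dbm=-75):
    """Return how many steps the relay moved before the AP signal dropped
    below the predefined threshold (at which point it halts in place)."""
    steps = 0
    for rssi in rssi_samples:        # one RSSI reading per movement step
        if rssi < threshold_dbm:
            return steps             # stop: the link to the AP is about to be lost
        steps += 1
    return steps                     # route exhausted with the link still good

# Signal weakening as the robot advances into a tunnel:
assert drive_until_threshold([-40, -55, -68, -74, -80, -90]) == 4
```

The relay halts after the fourth step, just before the reading of -80 dBm would break the link back to the AP.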
Citations: 0
Utilizing Optical Circuits in Hybrid Packet/Circuit Data-Center Networks
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2933284
Y. Ben-Itzhak, C. Caba, José Soler
Existing Data Center Networks (DCNs) continue to evolve to keep up with application requirements in terms of bandwidth, latency, agility, etc. According to the updated release of the Cisco Global Cloud Index [1], by 2019 more than 86% of traffic workloads will be processed by cloud DCs. Traditional DCNs, based on electrical packet switching (EPS) with hierarchical, tree-like topologies, can no longer support future cloud traffic requirements in terms of dynamicity, bandwidth, and latency. Hence, existing DCNs can be enhanced with OCS (Optical Circuit Switching), which provides high bandwidth, low latency, and low power consumption [2], giving rise to hybrid OCS-EPS topologies. In this research, we assess a virtualized, hybrid, flat DCN topology consisting of a single layer of high-radix ToR (Top of Rack) switches, interconnected with each other and through an OCS plane. The benefit of such a flat topology is twofold: 1) in terms of bandwidth, over-subscription is reduced and bisection bandwidth is increased; and 2) in terms of latency, the diameter (longest path) of the topology is reduced. Moreover, we present new algorithms and orchestration functionality to detect suitable flows (e.g., elephant flows) and offload them from the EPS to the OCS plane. Our DC architecture consists of a hybrid EPS-OCS DCN, an OpenFlow (OF) based control plane, and an orchestration layer. The orchestration layer decouples elephant-flow detection from the rerouting decision logic in the DCN: detection is done by flow tagging in the hypervisor, while rerouting is executed at the EPSs, which are connected directly to the OCS. Hence, it provides a more efficient, scalable, and easy-to-configure architecture than existing hybrid solutions. The orchestrator monitors the ToR switches via sFlow and detects high-volume traffic between two ToRs that exceeds a given bandwidth threshold. Such traffic may consist of either a few elephant flows or many mice flows. To further increase optical circuit utilization, we introduce two types of optical circuits: 1) a private circuit, as in existing solutions, is utilized only by flows that originate and end at the ToR switches connected to the circuit endpoints; 2) a shared circuit, part of our novel approach, can also be used by flows that traverse the ToR switches connected to the circuit endpoints but originate and/or end at other ToRs. Moreover, the orchestrator may dynamically decide to configure private or shared optical circuits according to various criteria, including current network utilization, the nature of the traffic flows, tenant SLAs, etc. Configuring or changing the optical circuit type requires installing a single OpenFlow rule for each ToR connected to the circuit endpoints, enabling low overhead and fast network configuration. To assess the benefit of such optical circuit configurations, we implement the proposed algorithms and test them over an emulated data- and control-plane environment. We evaluate network performance under various traffic scenarios for both private and shared optical circuits and compare them with an EPS-only baseline topology with the same total link bandwidth. Our preliminary results show a performance improvement of 5% to 10% for shared optical circuits compared with the commonly used private circuits. This research was partially funded by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 619572 (the COSIGN project).
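The orchestrator's threshold check described above can be sketched as follows; function and variable names are our assumptions (the actual system consumes sFlow samples from the ToR switches):

```python
from collections import defaultdict

def circuits_to_set_up(flow_samples, threshold_bps, window_s):
    """Aggregate observed bytes per ToR pair over a monitoring window and
    return the pairs whose rate exceeds the bandwidth threshold, i.e. the
    candidates for rerouting onto an optical circuit."""
    pair_bytes = defaultdict(int)
    for src_tor, dst_tor, nbytes in flow_samples:   # e.g. from sFlow counters
        pair_bytes[(src_tor, dst_tor)] += nbytes
    return {pair for pair, b in pair_bytes.items()
            if b * 8 / window_s > threshold_bps}    # bytes -> bits/s

samples = [("tor1", "tor2", 10**9), ("tor1", "tor2", 2 * 10**9),
           ("tor3", "tor4", 10**6)]
hot = circuits_to_set_up(samples, threshold_bps=10**9, window_s=10)
assert hot == {("tor1", "tor2")}   # only the elephant-heavy pair qualifies
```

Note the check is per ToR pair rather than per flow, matching the abstract's observation that the qualifying traffic may be a few elephant flows or many mice flows.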
{"title":"Utilizing Optical Circuits in Hybrid Packet/Circuit Data-Center Networks","authors":"Y. Ben-Itzhak, C. Caba, José Soler","doi":"10.1145/2928275.2933284","DOIUrl":"https://doi.org/10.1145/2928275.2933284","url":null,"abstract":"Existing Data Center Networks (DCNs) continue to evolve to keep up with application requirements in terms of bandwidth, latency, agility, etc. According to the updated release of the Cisco Global Cloud Index [1], by 2019, more than 86% of traffic workloads will be processed by cloud DCs. Traditional DCNs, which are based on electrical packet switching (EPS) with hierarchical, tree-like topologies can no longer support future cloud traffic requirements in terms of dynamicity, bandwidth and latency. Hence, existing DCNs can be enhanced with OCS (Optical Circuit Switching), which provides high bandwidth, low latency and low power consumption [2], giving rise to hybrid OCS-EPS topologies. In this research, we assess a virtualized, hybrid, flat DCN topology consisting of a single layer of high radix ToR (Top of Rack) switches, interconnected with each other and through an OCS plane. The benefit of such flat topology is twofold: 1) In terms of bandwidth, over-subscription is reduced, and bisection bandwidth is increased; and 2) In terms of latency, the diameter (longest path) of topology is reduced. Moreover, we present new algorithms and orchestration functionality to detect and offload suitable flows (e.g. elephant flows) from the EPS to the OCS plane. Our DC architecture consists of hybrid EPS-OCS DCN, an Openflow(OF) based control plane, and an orchestration layer. Our orchestration layer decouples the elephant flows detection from the rerouting decision logic in the DCN. Specifically, the elephant flows detection is done by flow tagging in the hypervisor, while the flow rerouting is executed at the EPSs, which are connected directly to the OCS. 
Hence, it provides a more efficient, scalable, and easy to configure architecture as compared to existing hybrid solutions. The orchestrator monitors the ToR switches by sFlow and detects high volume traffic between two ToRs, exceeding a given bandwidth threshold. Such traffic may consist of either few elephant flows or many mice flows. To further increase the optical circuit utilization, we introduce two types of optical circuits: 1) private circuit, presented in existing solutions, is utilized only by flows that originate and end at the ToR switches connected to the circuit endpoints. 2) shared circuit, is part of our novel approach. It can be used also by flows that are transmitted through ToR switches connected to the circuit endpoints, but originate and/or end at other ToRs. Moreover, the orchestrator may dynamically decide to configure private or shared optical circuits, according to various criteria including current network utilization, traffic flows nature, tenants SLAs, etc. Configuring or changing the optical circuit type requires installing a single OpenFlow rule for each ToR connected to the circuit endpoints; hence, enabling low overhead and fast network configuration. To assess the benefit of such optical circuit configurations, we implement the proposed algorithms and test them over an emula","PeriodicalId":20607,"journal":{"name":"Proceedings of the 9th ACM International on Systems and Storage Conference","volume":"52 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74182214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SSD Failures in Datacenters: What? When? and Why?
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2928278
Iyswarya Narayanan, Di Wang, Myeongjae Jeon, Bikash Sharma, Laura Caulfield, A. Sivasubramaniam, Ben Cutler, Jie Liu, Badriddine M. Khessib, Kushagra Vaid
Despite the growing popularity of Solid State Disks (SSDs) in the datacenter, little is known about their reliability characteristics in the field. What little knowledge exists is mainly vendor-supplied, and such information cannot really help one understand how SSD failures manifest and impact the operation of production systems, so that appropriate remedial measures can be taken. Beyond actual failure data and the symptoms SSDs exhibit before failing, a detailed characterization effort requires a wide set of data about the factors influencing SSD failures, from provisioning factors to operational ones. This paper presents an extensive SSD failure characterization, analyzing a wide spectrum of data from over half a million SSDs spanning multiple generations, spread across several datacenters that have hosted a wide spectrum of workloads over nearly 3 years. By studying the diverse set of design, provisioning, and operational factors behind failures, and their symptoms, our work provides the first comprehensive analysis of the what, when, and why of SSD failures in production datacenters.
Citations: 124
Proper Timed I/O: High-Accuracy Real-Time Control for Conventional Operating Systems
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2928283
Yogev Vaknin, Sivan Toledo
We propose a novel high-level abstraction for real-time control, called Proper Timed I/O (PTIO). The abstraction allows user-space programs running on a stock operating system (without real-time extensions) to perform high-resolution real-time digital I/O (setting pins high or low, responding to input transitions, etc.). PTIO programs express their real-time I/O behavior as a timed automaton that can communicate with the user-space program: simple behaviors are encoded in the timed automaton, while complex behaviors are implemented by the user-space program. We present two implementations of the PTIO abstraction, both for Linux. One utilizes a deterministic co-processor that is available on some ARM-based system-on-a-chip processors; this implementation can achieve a timing accuracy of 100 ns or better and can perform millions of finite-state transitions per second. The other implementation uses hardware timers that are available on every system-on-a-chip; it achieves a timing accuracy of 6 µs or better, but is limited to about 2000 state transitions per second. Both implementations guarantee that a PTIO never fails silently: if the mechanism misses a deadline, the user-space program is always notified. In many cases, PTIOs eliminate the need for bare-metal programming or for specialized real-time operating systems.
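One way to picture the timed-automaton encoding is as a list of (output level, dwell time) states stepped against timestamps. This is an illustrative sketch of the concept executed against virtual time, not the PTIO API or its kernel-side machinery:

```python
def run_timed_automaton(states, n_transitions):
    """states: list of (pin_level, dwell_ns) pairs, cycled in order.
    Returns the (time_ns, pin_level) events the automaton would emit."""
    events, t, i = [], 0, 0
    for _ in range(n_transitions):
        level, dwell = states[i % len(states)]
        events.append((t, level))    # the pin is driven to `level` at time t
        t += dwell                   # dwell in this state before transitioning
        i += 1
    return events

# A 1 kHz square wave with a 25% duty cycle: high for 250 us, low for 750 us.
pulse = [(1, 250_000), (0, 750_000)]
assert run_timed_automaton(pulse, 4) == [
    (0, 1), (250_000, 0), (1_000_000, 1), (1_250_000, 0)]
```

In the real abstraction the automaton runs on the co-processor or against hardware timers and drives physical pins; the user-space program only supplies the automaton and handles its notifications.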
Citations: 0
File System Usage in Android Mobile Phones
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2928280
R. Friedman, David Sainz
In this paper, we report on the analysis of data from Android mobile phones of 38 users, composed of access traces of the users' mobile file systems during 30 days. We shed new light on the file usage patterns and present the data in terms of file size distributions, file sessions, file lifetime, file access activity and read / write access patterns. We characterize different distributions and extract conclusions about usage patterns of Android file systems.
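The kinds of summaries the paper reports — file-size distributions and read/write mixes — take only a few lines of stdlib Python to compute from a trace. The record format and bucket boundaries below are hypothetical stand-ins, since the abstract does not specify the trace schema.

```python
from collections import Counter

# Hypothetical access-trace records: (path, op, size_bytes).
trace = [
    ("/sdcard/DCIM/a.jpg", "read", 2_500_000),
    ("/data/app.db", "write", 4_096),
    ("/data/app.db", "read", 4_096),
    ("/sdcard/music/s.mp3", "read", 5_000_000),
    ("/data/cache/tmp", "write", 512),
]

def size_bucket(n):
    """Coarse log-scale buckets of the kind used in file-size distributions."""
    if n < 4_096:
        return "<4KB"
    if n < 1_000_000:
        return "4KB-1MB"
    return ">=1MB"

dist = Counter(size_bucket(s) for _, _, s in trace)   # file-size distribution
rw = Counter(op for _, op, _ in trace)                # read/write mix
print(dict(dist), dict(rw))
```

A real analysis would additionally group records by file and timestamp to recover sessions and lifetimes, but the aggregation pattern is the same.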
Citations: 8
SeMiNAS: A Secure Middleware for Wide-Area Network-Attached Storage
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2928282
Ming Chen, E. Zadok, A. Vasudevan, Kelong Wang
Utility computing is gradually being realized, as exemplified by cloud computing. Outsourcing computing and storage to global-scale cloud providers benefits from high accessibility, flexibility, scalability, and cost-effectiveness. However, users are uneasy about outsourcing the storage of sensitive data due to security concerns. We address this problem by presenting SeMiNAS---an efficient middleware system that allows files to be securely outsourced to providers and shared among geo-distributed offices. SeMiNAS achieves end-to-end data integrity and confidentiality with a highly efficient authenticated-encryption scheme. SeMiNAS leverages advanced NFSv4 features, including compound procedures and data-integrity extensions, to minimize the extra network round trips caused by security metadata. SeMiNAS also caches remote files locally to reduce accesses to providers over WANs. We designed, implemented, and evaluated SeMiNAS, which demonstrates a small performance penalty of less than 26% and an occasional performance boost of up to 19% for Filebench workloads.
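SeMiNAS attaches security metadata so that a client can detect tampering by an untrusted provider. The stdlib sketch below shows only the integrity half of that idea, using an HMAC-SHA256 tag per object; the key and helper names are illustrative, and the real system uses a full authenticated-encryption scheme (which Python's standard library cannot express, having no AES).

```python
import hashlib
import hmac

KEY = b"demo-key-shared-by-trusted-offices"  # placeholder, not a real key scheme

def seal(data: bytes) -> bytes:
    """Attach an integrity tag before writing data to an untrusted provider."""
    tag = hmac.new(KEY, data, hashlib.sha256).digest()
    return tag + data

def open_sealed(blob: bytes) -> bytes:
    """Verify the tag on data read back; raise if the provider tampered."""
    tag, data = blob[:32], blob[32:]
    expected = hmac.new(KEY, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return data

blob = seal(b"payroll.csv contents")
assert open_sealed(blob) == b"payroll.csv contents"

# Tampering by the provider is detected: flip one bit of the stored blob.
corrupted = blob[:-1] + bytes([blob[-1] ^ 1])
try:
    open_sealed(corrupted)
except ValueError:
    print("tamper detected")  # prints tamper detected
```

The tag is the per-file "security metadata" whose network cost SeMiNAS amortizes by piggybacking it on NFSv4 compound procedures.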
Citations: 5
Cross-ISA Container Migration
Pub Date : 2016-06-06 DOI: 10.1145/2928275.2933275
J. Nider, Mike Rapoport
Containers are a convenient way of encapsulating and isolating applications. They incur less overhead than virtual machines and provide more flexibility and versatility to improve server utilization. Many new cloud applications are written in the microservices style to take advantage of container technologies: each component of the application can be encapsulated in a separate container, enabling features such as auto-scaling. Legacy applications can also benefit from containers, which provide more efficient development and deployment models. In modern data centers, orchestration middleware is responsible for container placement, SLA enforcement, and resource management. The orchestration software can implement various resource-management policies and take corrective actions when it detects inefficiencies in data center operation. Power efficiency is becoming one of the most important characteristics considered when designing a data center and defining policy for the orchestration middleware [4]. Different server architectures have different power-efficiency and energy-proportionality characteristics, and recent research has shown that heterogeneous systems have the potential to significantly improve energy efficiency [3, 5]. Our work focuses on the mechanism the middleware needs to implement a power-optimization policy: migrating containerized applications between servers inside a heterogeneous data center. Migrating a running container between different architectures relies on the compatibility of the application environment on the source and destination servers. A container is viewed as a set of one or more processes, each of which must be migratable. A modified compiler builds executables in a manner that allows program migration between different architectures; the source and destination servers must also share a file system and have comparable networking capabilities. We take advantage of the recently added user-space page-fault feature in the Linux kernel [2] to implement post-copy container migration in CRIU [1]. Post-copy migration significantly reduces the perceived downtime of the container and can potentially reduce network traffic as well. We propose creating a cluster of servers with different architectures (i.e., ARM, POWER, and x86) connected by a high-speed, low-latency network. This cluster will run SaaS applications in a containerized environment, built with a specialized toolchain that ensures an identical memory layout across all architectures and enables seamless migration at runtime. The majority of the challenges in cross-ISA migration are related to toolchain adaptation and ensuring the compatibility of the runtime environment across the servers in the cluster. The ability to efficiently migrate running containers between servers with different energy-proportionality characteristics yields better power savings during idle periods without compromising SLA commitments.
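The post-copy idea — resume the container on the destination first, then pull each page from the source only when it is first touched — can be modeled in a few lines. This toy stands in for what a userfaultfd handler does in CRIU; the class and field names are invented for illustration, not taken from the actual implementation.

```python
class PostCopyMemory:
    """Toy model of post-copy migration: the container resumes on the
    destination immediately, and pages are fetched from the source host
    only on first access (as a user-space page-fault handler would do)."""

    def __init__(self, source_pages):
        self.source = source_pages   # pages still resident on the source host
        self.local = {}              # pages materialized on the destination
        self.faults = 0              # remote fetches triggered by page faults

    def read(self, page_no):
        if page_no not in self.local:
            self.faults += 1         # page fault -> fetch over the network
            self.local[page_no] = self.source[page_no]
        return self.local[page_no]

mem = PostCopyMemory({0: b"code", 1: b"heap", 2: b"stack"})
mem.read(1)
mem.read(1)        # second access hits the local copy: no fault
mem.read(0)
print(mem.faults)  # prints 2
```

This is why post-copy reduces perceived downtime: execution resumes before any memory is transferred, and only the working set actually crosses the network.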
Citations: 13