
ACM Transactions on Computer Systems (TOCS) — Latest Publications

Venice
Pub Date : 2019-03-14 DOI: 10.1145/3310360
Boyan Zhao, Rui Hou, Jianbo Dong, Michael C. Huang, S. Mckee, Qianlong Zhang, Yueji Liu, Ye Li, Lixin Zhang, Dan Meng
Consolidated server racks are quickly becoming the standard infrastructure for engineering, business, medicine, and science. Such servers are still designed much as they were when organized as individual, distributed systems. Given that many fields rely substantially on big-data analytics, cost-effectiveness and performance should be improved, which can be achieved by flexibly allowing resources to be shared across nodes. Here we describe Venice, a family of data-center server architectures that includes a strong communication substrate as a first-class resource. Venice supports a diverse set of resource-joining mechanisms that enable applications to leverage non-local resources efficiently. We have constructed a hardware prototype to better understand the implications of design decisions about system support for resource sharing. We use it to measure the performance of at-scale applications and to explore performance, power, and resource-sharing transparency tradeoffs (i.e., how many programming changes are needed). We analyze these tradeoffs for sharing memory, accelerators, and NICs. We find that reducing/hiding latency is particularly important, that the chosen communication channels should match the sharing access patterns of the applications, and that performance can be improved further by exploiting inter-channel collaboration.
Citations: 2
Ryoan
Pub Date : 2018-12-16 DOI: 10.1145/3231594
T. Hunt, Zhiting Zhu, Yuanzhong Xu, Simon Peter, E. Witchel
Users of modern data-processing services such as tax preparation or genomic screening are forced to trust them with data that the users wish to keep secret. Ryoan protects secret data while it is processed by services that the data owner does not trust. Accomplishing this goal in a distributed setting is difficult, because the user has no control over the service providers or the computational platform. Confining code to prevent it from leaking secrets is notoriously difficult, but Ryoan benefits from new hardware and a request-oriented data model. Ryoan provides a distributed sandbox, leveraging hardware enclaves (e.g., Intel’s software guard extensions (SGX) [40]) to protect sandbox instances from potentially malicious computing platforms. The protected sandbox instances confine untrusted data-processing modules to prevent leakage of the user’s input data. Ryoan is designed for a request-oriented data model, where confined modules only process input once and do not persist state about the input. We present the design and prototype implementation of Ryoan and evaluate it on a series of challenging problems including email filtering, health analysis, image processing and machine translation.
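The request-oriented model described above can be illustrated with a toy sketch (our illustration, not Ryoan's implementation): an untrusted module is handed one input, produces one output, and keeps no state between requests, so the sandbox bounds what it could leak about any single user's data.

```python
# Toy sketch of request-oriented confinement: each invocation runs the
# untrusted module against a fresh, empty environment, so nothing
# observed from one user's input survives into the next request.

def run_confined(module, request):
    scratch = {}               # fresh per-request scratch space
    output = module(request, scratch)
    scratch.clear()            # nothing persists past the request
    return output

def spam_filter(email, scratch):
    # Hypothetical untrusted data-processing module.
    return "spam" if "win money" in email else "ok"

print(run_confined(spam_filter, "win money now"))  # 'spam'
print(run_confined(spam_filter, "meeting at 3"))   # 'ok'
```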
Citations: 7
Building Consistent Transactions with Inconsistent Replication
Pub Date : 2018-12-16 DOI: 10.1145/3269981
Irene Zhang, Naveen Kr. Sharma, Adriana Szekeres, A. Krishnamurthy, Dan R. K. Ports
Application programmers increasingly prefer distributed storage systems with strong consistency and distributed transactions (e.g., Google’s Spanner) for their strong guarantees and ease of use. Unfortunately, existing transactional storage systems are expensive to use—in part, because they require costly replication protocols, like Paxos, for fault tolerance. In this article, we present a new approach that makes transactional storage systems more affordable: We eliminate consistency from the replication protocol, while still providing distributed transactions with strong consistency to applications. We present the Transactional Application Protocol for Inconsistent Replication (TAPIR), the first transaction protocol to use a novel replication protocol, called inconsistent replication, that provides fault tolerance without consistency. By enforcing strong consistency only in the transaction protocol, TAPIR can commit transactions in a single round-trip and order distributed transactions without centralized coordination. We demonstrate the use of TAPIR in a transactional key-value store, TAPIR-KV. Compared to conventional systems, TAPIR-KV provides better latency and better throughput.
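The single-round-trip commit can be sketched as a fast-path quorum check (a minimal sketch of the idea, not the TAPIR implementation; replica counts follow the usual inconsistent-replication sizing, which we assume here): with n = 2f + 1 replicas, the client decides in one round when a super-quorum of prepare replies agree, and falls back to a slow path otherwise.

```python
# Sketch of an inconsistent-replication fast path: the client sends
# prepare to all replicas and commits in a single round-trip when a
# super-quorum of ceil(3f/2) + 1 replies match; otherwise it takes a
# coordinated slow path.
from collections import Counter

def fast_path_commit(replies, f):
    # replies: per-replica votes ("commit"/"abort") from the prepare round.
    superquorum = (3 * f) // 2 + 1
    vote, count = Counter(replies).most_common(1)[0]
    return vote if count >= superquorum else None  # None -> slow path

# n = 5 replicas tolerating f = 2 failures; super-quorum = 4 replies.
print(fast_path_commit(["commit"] * 4 + ["abort"], f=2))      # 'commit'
print(fast_path_commit(["commit"] * 3 + ["abort"] * 2, f=2))  # None
```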
Citations: 12
Pivot Tracing
Pub Date : 2018-12-05 DOI: 10.1145/3208104
Jonathan Mace, Ryan Roelke, Rodrigo Fonseca
Monitoring and troubleshooting distributed systems is notoriously difficult; potential problems are complex, varied, and unpredictable. The monitoring and diagnosis tools commonly used today—logs, counters, and metrics—have two important limitations: what gets recorded is defined a priori, and the information is recorded in a component- or machine-centric way, making it extremely hard to correlate events that cross these boundaries. This article presents Pivot Tracing, a monitoring framework for distributed systems that addresses both limitations by combining dynamic instrumentation with a novel relational operator: the happened-before join. Pivot Tracing gives users, at runtime, the ability to define arbitrary metrics at one point of the system, while being able to select, filter, and group by events meaningful at other parts of the system, even when crossing component or machine boundaries. We have implemented a prototype of Pivot Tracing for Java-based systems and evaluate it on a heterogeneous Hadoop cluster comprising HDFS, HBase, MapReduce, and YARN. We show that Pivot Tracing can effectively identify a diverse range of root causes such as software bugs, misconfiguration, and limping hardware. We show that Pivot Tracing is dynamic, extensible, and enables cross-tier analysis between inter-operating applications, with low execution overhead.
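The happened-before join can be pictured with a small sketch (our illustration, with hypothetical tracepoint names, not the authors' Java implementation): a tracepoint earlier in a request's causal path attaches labels to per-request "baggage" that flows with the request, and a later tracepoint groups its metric by those labels, even across component boundaries.

```python
# Toy sketch of a happened-before join: labels set at one tracepoint
# propagate with the request, so a metric recorded at a later tracepoint
# can be grouped by events that happened before it in the same request.

class Request:
    def __init__(self):
        self.baggage = {}  # labels propagated along the causal path

def client_tracepoint(req, client_id):
    # Hypothetical instrumentation point in component A: label the request.
    req.baggage["client"] = client_id

def disk_tracepoint(req, bytes_read, results):
    # Hypothetical instrumentation point in component B: group the metric
    # by a label from earlier in the request's causal history.
    key = req.baggage.get("client", "unknown")
    results[key] = results.get(key, 0) + bytes_read

# Query: SELECT client, SUM(bytes_read) GROUP BY client (joined via baggage).
results = {}
for client, nbytes in [("alice", 100), ("bob", 40), ("alice", 60)]:
    req = Request()
    client_tracepoint(req, client)          # component A
    disk_tracepoint(req, nbytes, results)   # component B, causally later

print(results)  # {'alice': 160, 'bob': 40}
```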
Citations: 19
Corrigendum to “The IX Operating System: Combining Low Latency, High Throughput and Efficiency in a Protected Dataplane”
Pub Date : 2017-12-29 DOI: 10.1145/3154292
A. Belay, G. Prekas, Mia Primorac, Ana Klimovic, Samuel Grossman, C. Kozyrakis, Edouard Bugnion
On page 21 of “The IX Operating System: Combining Low Latency, High Throughput and Efficiency in a Protected Dataplane” we describe our use of the tool mutilate to evaluate the latency and throughput of memcached. We discovered an error in our setup: we did not load the initial key-value state into memcached before the start of the experiment. Instead, memcached started in an empty state, causing some GET requests to require less computation than intended. Table 1 shows the performance differences between our original and corrected memcached results.
Citations: 42
Determining Application-Specific Peak Power and Energy Requirements for Ultra-Low-Power Processors
Pub Date : 2017-12-26 DOI: 10.1145/3148052
Hari Cherupalli, Henry Duwe, Weidong Ye, Rakesh Kumar, J. Sartori
Many emerging applications such as the Internet of Things, wearables, implantables, and sensor networks are constrained by power and energy. These applications rely on ultra-low-power processors that have rapidly become the most abundant type of processor manufactured today. In the ultra-low-power embedded systems used by these applications, peak power and energy requirements are the primary factors that determine critical system characteristics, such as size, weight, cost, and lifetime. While the power and energy requirements of these systems tend to be application specific, conventional techniques for rating peak power and energy cannot accurately bound the power and energy requirements of an application running on a processor, leading to overprovisioning that increases system size and weight. In this article, we present an automated technique that performs hardware–software coanalysis of the application and ultra-low-power processor in an embedded system to determine application-specific peak power and energy requirements. Our technique provides more accurate, tighter bounds than conventional techniques for determining peak power and energy requirements. Also, unlike conventional approaches, our technique reports guaranteed bounds on peak power and energy independent of an application’s input set. Tighter bounds on peak power and energy can be exploited to reduce system size, weight, and cost.
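The bounding idea can be sketched in miniature (hypothetical module names and numbers, not the paper's analysis): rather than rating peak power as the sum over all hardware modules, a hardware-software co-analysis keeps only modules the application can actually exercise, yielding a tighter bound that holds for any input.

```python
# Simplified sketch: the application-specific peak-power bound sums
# per-module peaks only over modules reachable by this application,
# instead of over the whole chip as a conventional rating would.

MODULE_PEAK_UW = {"cpu": 1200, "mult": 800, "adc": 600, "radio": 2000}

def peak_bound(reachable_modules):
    # Only modules the co-analysis proves reachable contribute to the
    # guaranteed peak, independent of the application's input set.
    return sum(MODULE_PEAK_UW[m] for m in reachable_modules)

conventional = peak_bound(MODULE_PEAK_UW)   # rated for every module
app_specific = peak_bound({"cpu", "adc"})   # co-analysis result
print(conventional, app_specific)  # 4600 vs 1800 -> tighter provisioning
```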
Citations: 7
The Hipster Approach for Improving Cloud System Efficiency
Pub Date : 2017-12-04 DOI: 10.1145/3144168
Rajiv Nishtala, P. Carpenter, V. Petrucci, X. Martorell
In 2013, U.S. data centers accounted for 2.2% of the country’s total electricity consumption, a figure that is projected to increase rapidly over the next decade. Many important data center workloads in cloud computing are interactive, and they demand strict levels of quality-of-service (QoS) to meet user expectations, making it challenging to optimize power consumption along with increasing performance demands. This article introduces Hipster, a technique that combines heuristics and reinforcement learning to improve resource efficiency in cloud systems. Hipster explores heterogeneous multi-cores and dynamic voltage and frequency scaling for reducing energy consumption while managing the QoS of the latency-critical workloads. To improve data center utilization and make best usage of the available resources, Hipster can dynamically assign remaining cores to batch workloads without violating the QoS constraints for the latency-critical workloads. We perform experiments using a 64-bit ARM big.LITTLE platform and show that, compared to prior work, Hipster improves the QoS guarantee for Web-Search from 80% to 96%, and for Memcached from 92% to 99%, while reducing the energy consumption by up to 18%. Hipster is also effective in learning and adapting automatically to specific requirements of new incoming workloads just enough to meet the QoS and optimize resource consumption.
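The control loop can be sketched as a table-driven policy (a toy sketch with assumed names and values, not the paper's controller): measure QoS slack for the latency-critical workload, pick a core/frequency configuration from a table that a learner would update over time, and hand leftover big cores to batch work when QoS is comfortably met.

```python
# Toy sketch of a Hipster-style mapping: bucket the QoS slack of the
# latency-critical workload and look up a core/DVFS configuration,
# defaulting to the most conservative setting for unseen states.

def pick_config(latency_ms, target_ms, table):
    slack = (target_ms - latency_ms) / target_ms
    if slack < 0:
        state = "violating"   # QoS missed: throw all resources at it
    elif slack < 0.2:
        state = "tight"
    else:
        state = "ample"
    return table.get(state, {"big_cores": 4, "freq_ghz": 2.0})

table = {
    "ample":     {"big_cores": 1, "freq_ghz": 1.2},  # 3 big cores to batch
    "tight":     {"big_cores": 2, "freq_ghz": 1.8},
    "violating": {"big_cores": 4, "freq_ghz": 2.0},
}
print(pick_config(5.0, 10.0, table))  # ample slack -> low-power config
```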
Citations: 10
Seer
Pub Date : 2017-11-14 DOI: 10.1145/3132036
Nuno Diegues, P. Romano, Stoyan Garbatov
The ubiquity of multicore processors has led programmers to write parallel and concurrent applications to take advantage of the underlying hardware and speed up their executions. In this context, Transactional Memory (TM) has emerged as a simple and effective synchronization paradigm, via the familiar abstraction of atomic transactions. After many years of intense research, major processor manufacturers (including Intel) have recently released mainstream processors with hardware support for TM (HTM). In this work, we study a relevant issue with great impact on the performance of HTM. Due to the optimistic and inherently limited nature of HTM, transactions may have to be aborted and restarted numerous times, without any progress guarantee. As a result, it is up to the software library that regulates the HTM usage to ensure progress and optimize performance. Transaction scheduling is probably one of the most well-studied and effective techniques to achieve these goals. However, these recent mainstream HTMs have some technical limitations that prevent the adoption of known scheduling techniques: unlike software implementations of TM used in the past, existing HTMs provide limited or no information on which memory regions or contending transactions caused the abort. To address this crucial issue for HTMs, we propose Seer, a software scheduler that addresses precisely this restriction of HTM by leveraging an online probabilistic inference technique that identifies the most likely conflict relations and establishes a dynamic locking scheme to serialize transactions in a fine-grained manner. The key idea of our solution is to constrain the portions of parallelism that are affecting negatively the whole system. As a result, this not only prevents performance reduction but also in fact unveils further scalability and performance for HTM. Via an extensive evaluation study, we show that Seer improves the performance of the Intel’s HTM by up to 3.6×, and by 65% on average across all concurrency degrees and benchmarks on a large processor with 28 cores.
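The inference step can be sketched as abort-feedback counting (a toy sketch with assumed structure, not Seer's implementation): since the HTM reports only *that* a transaction aborted, the scheduler counts which transaction types were running concurrently at each abort and serializes, via a shared lock, the pairs that co-occur suspiciously often.

```python
# Toy sketch of probabilistic conflict inference from abort feedback:
# every concurrently-running transaction type at an abort becomes
# slightly more suspect; pairs crossing a threshold get serialized.
from collections import Counter

def update_suspicions(suspicions, aborted, concurrent):
    for other in concurrent:
        suspicions[frozenset((aborted, other))] += 1

def locking_pairs(suspicions, threshold):
    # Pairs seen co-aborting often enough share a lock (are serialized).
    return {pair for pair, n in suspicions.items() if n >= threshold}

suspicions = Counter()
abort_log = [("T1", ["T2", "T3"]), ("T1", ["T2"]), ("T2", ["T1"])]
for aborted, concurrent in abort_log:
    update_suspicions(suspicions, aborted, concurrent)

print(locking_pairs(suspicions, threshold=3))  # {frozenset({'T1', 'T2'})}
```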
Citations: 5
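The scheduling idea described in the Seer abstract — HTM reports only that an abort happened, not which transaction caused it, so conflicts are inferred statistically and only likely-conflicting transactions are serialized — can be sketched as a toy simulation. All names, the co-occurrence counting, and the threshold policy below are illustrative assumptions, not Seer's actual inference algorithm:

```python
import threading
from collections import Counter


class SeerLikeScheduler:
    """Toy sketch: infer likely conflict pairs from abort co-occurrence
    statistics, then hand those pairs a shared lock so they serialize
    while unrelated transactions keep running fully in parallel."""

    def __init__(self, threshold=3):
        self.co_aborts = Counter()  # (txA, txB) -> co-occurrence count at abort time
        self.locks = {}             # conflict pair -> lock shared by that pair
        self.threshold = threshold

    def record_abort(self, aborted_tx, active_txs):
        # The hardware does not name the culprit, so every transaction type
        # that was concurrently active counts as a suspect.
        for other in active_txs:
            if other != aborted_tx:
                pair = tuple(sorted((aborted_tx, other)))
                self.co_aborts[pair] += 1

    def lock_for(self, tx):
        # Transactions in a suspected conflict pair share one lock,
        # constraining only the harmful portion of the parallelism.
        for (a, b), count in self.co_aborts.items():
            if count >= self.threshold and tx in (a, b):
                return self.locks.setdefault((a, b), threading.Lock())
        return None  # no suspected conflict: run unrestricted
```

With this policy, two transaction types that repeatedly abort while co-running end up behind the same lock, while a third, unrelated type gets no lock at all.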
Apache REEF
Pub Date : 2017-10-10 DOI: 10.1145/3132037
Byung-Gon Chun, Tyson Condie, Yingda Chen, Brian Cho, Andrew Chung, C. Curino, C. Douglas, Matteo Interlandi, Beomyeol Jeon, Joo Seong Jeong, Gyewon Lee, Yunseong Lee, Tony Majestro, D. Malkhi, Sergiy Matusevych, Brandon Myers, M. Mykhailova, Shravan M. Narayanamurthy, Joseph Noor, R. Ramakrishnan, Sriram Rao, R. Sears, B. Sezgin, Taegeon Um, Julia Wang, Markus Weimer, Youngseok Yang
Resource Managers like YARN and Mesos have emerged as a critical layer in the cloud computing system stack, but the developer abstractions for leasing cluster resources and instantiating application logic are very low level. This flexibility comes at a high cost in terms of developer effort, as each application must repeatedly tackle the same challenges (e.g., fault tolerance, task scheduling and coordination) and reimplement common mechanisms (e.g., caching, bulk-data transfers). This article presents REEF, a development framework that provides a control plane for scheduling and coordinating task-level (data-plane) work on cluster resources obtained from a Resource Manager. REEF provides mechanisms that facilitate resource reuse for data caching and state management abstractions that greatly ease the development of elastic data processing pipelines on cloud platforms that support a Resource Manager service. We illustrate the power of REEF by showing applications built atop: a distributed shell application, a machine-learning framework, a distributed in-memory caching system, and a port of the CORFU system. REEF is currently an Apache top-level project that has attracted contributors from several institutions and it is being used to develop several commercial offerings such as the Azure Stream Analytics service.
Citations: 3
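The control-plane role the REEF abstract describes — a driver that reacts to resource-manager events, places task-level work on granted containers, and handles failures once instead of per application — can be sketched as a minimal event-handler skeleton. The class and method names are hypothetical stand-ins, not REEF's actual API:

```python
class Driver:
    """Toy sketch of a REEF-style driver: the control plane binds pending
    data-plane tasks to evaluators (containers) granted by a Resource
    Manager, and factors out common fault-tolerance boilerplate."""

    def __init__(self, tasks):
        self.pending = list(tasks)  # tasks waiting for a container
        self.done = []              # tasks that finished successfully

    def on_evaluator_allocated(self, evaluator_id):
        # The Resource Manager granted a container: place the next task on it.
        if self.pending:
            return {"evaluator": evaluator_id, "task": self.pending.pop(0)}
        return None  # nothing to run; a real driver would release the container

    def on_task_completed(self, task):
        self.done.append(task)

    def on_task_failed(self, task):
        # Shared fault-tolerance policy: requeue the task rather than
        # forcing every application to reimplement retries.
        self.pending.append(task)
```

The point of the sketch is the shape of the framework: applications supply only the task payloads, while scheduling, placement, and retry live in the reusable control plane.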
Supercloud
Pub Date : 2017-10-04 DOI: 10.1145/3132038
Zhiming Shen, Qin Jia, Gur-Eyal Sela, Weijia Song, Hakim Weatherspoon, R. Van Renesse
Infrastructure-as-a-Service (IaaS) cloud providers hide available interfaces for virtual machine (VM) placement and migration, CPU capping, memory ballooning, page sharing, and I/O throttling, limiting the ways in which applications can optimally configure resources or respond to dynamically shifting workloads. Given these interfaces, applications could migrate VMs in response to diurnal workloads or changing prices, adjust resources in response to load changes, and so on. This article proposes a new abstraction that we call a Library Cloud and that allows users to customize the diverse available cloud resources to best serve their applications. We built a prototype of a Library Cloud that we call the Supercloud. The Supercloud encapsulates applications in a virtual cloud under users’ full control and can incorporate one or more availability zones within a cloud provider or across different providers. The Supercloud provides virtual machine, storage, and networking complete with a full set of management operations, allowing applications to optimize performance. In this article, we demonstrate various innovations enabled by the Library Cloud.
Citations: 14
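One use case the Supercloud abstract mentions — migrating VMs across availability zones or providers in response to changing prices — reduces to a small decision policy once a uniform migration interface exists. The function below is a hedged illustration of such a policy, not code from the Supercloud itself; the zone names and cost model are assumptions:

```python
def pick_zone(prices, current_zone, migration_cost):
    """Toy policy sketch: migrate a VM to the cheapest zone/provider only
    when the hourly saving outweighs a fixed migration cost -- the kind of
    decision a uniform cross-provider VM interface makes possible."""
    cheapest = min(prices, key=prices.get)
    saving = prices[current_zone] - prices[cheapest]
    return cheapest if saving > migration_cost else current_zone
```

For example, with `prices = {"aws-east": 0.10, "azure-west": 0.04}` and a migration cost of 0.03, the policy moves the VM to `azure-west`; if the gap shrinks below the cost, it stays put.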