
Latest publications: 2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems

Exploiting Spatial Locality to Improve Disk Efficiency in Virtualized Environments
Xiao Ling, Shadi Ibrahim, Hai Jin, Song Wu, Songqiao Tao
Virtualization has become a prominent tool in data centers and is extensively leveraged in cloud environments: it enables multiple virtual machines (VMs), each with its own operating system and applications, to run within a single physical server. However, virtualization introduces the challenging issue of preserving high disk utilization (i.e., reducing seek delay and rotational overhead) when allocating disk resources to VMs. Exploiting spatial locality, a key technique for improving disk utilization and performance, faces additional challenges in the virtualized cloud because of the transparency of virtualization (hypervisors have no information about the access patterns of applications running within each VM). To this end, this paper contributes a novel disk I/O scheduling framework, named Pregather, to improve disk I/O efficiency through exposure and exploitation of the special spatial locality in the virtualized environment (regional and sub-regional spatial locality correspond to the virtual disk space and applications' access patterns, respectively), thereby improving the performance of disk-intensive applications without harming the transparency of virtualization (i.e., without a priori knowledge of the applications' access patterns). The key idea behind Pregather is an intelligent model that predicts the access regularity of sub-regional spatial locality for each VM. We implement the Pregather disk scheduling framework and perform extensive experiments involving multiple simultaneous applications, both synthetic benchmarks and a MapReduce application, on Xen-based platforms. Our experiments demonstrate the accuracy of our prediction model and indicate that Pregather achieves high disk spatial locality and a significant improvement in disk throughput and application performance.
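The regional half of the idea, serving each VM's contiguous virtual-disk region in one sweep instead of bouncing the head between VMs, can be shown with a small sketch. This is not Pregather's actual algorithm (which adds a prediction model for sub-regional access regularity); `schedule_requests` is a hypothetical helper illustrating only the locality-aware ordering:

```python
from collections import defaultdict

def schedule_requests(pending):
    """Order pending I/O requests to exploit regional spatial locality.

    `pending` is a list of (vm_id, block_address) tuples. Requests are
    grouped by VM (each VM's virtual disk occupies a contiguous region
    of the physical disk) and served in ascending block order within
    each region, so the head sweeps each region once.
    """
    regions = defaultdict(list)
    for vm_id, block in pending:
        regions[vm_id].append(block)
    ordered = []
    # Visit regions in order of their lowest pending address.
    for vm_id in sorted(regions, key=lambda v: min(regions[v])):
        ordered.extend((vm_id, b) for b in sorted(regions[vm_id]))
    return ordered
```

Interleaved requests from two VMs come out grouped and sorted, e.g. `[(2, 900), (1, 10), (2, 850), (1, 40)]` becomes `[(1, 10), (1, 40), (2, 850), (2, 900)]`.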
DOI: 10.1109/MASCOTS.2013.27 (published 2013-08-14)
Citations: 12
Online Energy Budgeting for Virtualized Data Centers
M. A. Islam, Shaolei Ren, Gang Quan
Increasingly serious concerns about IT carbon footprints have been pushing data center operators to cap their (brown) energy consumption. Naturally, achieving energy capping involves deciding the energy usage over a long timescale (without foreseeing the far future); hence, we call this process "energy budgeting". The specific goal of this paper is to study energy budgeting for virtualized data centers from an algorithmic perspective: we develop a provably-efficient online algorithm, called eBud (energy Budgeting), which determines server CPU speed and resource allocation to virtual machines to minimize the data center's operational cost while satisfying a long-term energy capping constraint in an online fashion. We rigorously prove that eBud achieves a close-to-minimum cost compared to the optimal offline algorithm with future information, while bounding the potential violation of the energy budget constraint, in an almost arbitrarily random environment. We also perform a trace-based simulation study to complement the analysis. The simulation results are consistent with our theoretical analysis and show that eBud reduces the cost by more than 60% (compared to a state-of-the-art prediction-based algorithm) while incurring a zero energy budget deficit.
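Long-term capping constraints of this kind are commonly enforced online with a virtual-queue (Lyapunov-style) technique. The toy loop below, with the invented `online_budget` function and a made-up two-point operating set, sketches that general mechanic, not eBud itself: a deficit queue `q` grows when a slot overspends its budget share and biases later decisions toward low-energy operating points.

```python
def online_budget(slots, options, budget_per_slot, V=1.0):
    """Toy online energy-budgeting loop (illustrative, not eBud).

    `options` is a list of (energy, cost) operating points. Each slot
    picks the point minimising V*cost + q*energy, where the virtual
    queue q tracks how far cumulative consumption has drifted above
    the long-term budget; a large q forces frugal choices later on.
    """
    q, total_energy, total_cost = 0.0, 0.0, 0.0
    for _ in range(slots):
        energy, cost = min(options, key=lambda ec: V * ec[1] + q * ec[0])
        q = max(q + energy - budget_per_slot, 0.0)
        total_energy += energy
        total_cost += cost
    return total_energy, total_cost
```

With a cheap-but-hungry point and a frugal-but-costly point, the long-run average energy settles near the per-slot budget even though no single slot is forced to meet it.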
DOI: 10.1109/MASCOTS.2013.64 (published 2013-08-14)
Citations: 5
Channel and Receiver Contention in Optical Flow Switching Networks
Joobum Kim, Yamini Jayabal, M. Razo, M. Tacca, A. Fumagalli
An increasing number of users perform large transfers over data networks. While, for the most part, these transfers are currently performed over the IP network, a number of studies advocate the use of end-to-end optical circuits to support these resource-consuming jobs. One of the major advantages is the ability to carry a large fraction of the overall network traffic using relatively lower-cost and lower-power optical equipment, compared to IP routers. For example, in an optical flow switching network, end-to-end optical circuits can be established by reserving wavelength channels only when needed. Once the circuit is established, the large data set is seamlessly transferred across the network without requiring IP routers to be involved in the data transfer. For a circuit to be successfully established, the following conditions must be met simultaneously: a transmitter must be available at the sender, a receiver must be available at the destination, and a wavelength channel must be available across the network to connect the sender to the destination. Data transfer can start only when these conditions are simultaneously met; as a result, a request can experience a delay before its circuit is established. Network throughput and delay are affected by the availability of network channels (channel contention) and of the end-user's receiver (receiver contention). The contribution of this paper is twofold. First, channel throughput and delay are estimated analytically. Second, the analytical results are validated against simulation results. A number of experiments are conducted using the presented analytical models and simulation platform to investigate the effect of channel and receiver contention on throughput and delay.
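The three simultaneous availability conditions translate directly into an admission check. The sketch below is illustrative only; the `state` layout and `try_establish` helper are invented, not taken from the paper's model:

```python
def try_establish(state, src, dst):
    """Admit a flow only when all three resources are free at once:
    a transmitter at src, a receiver at dst, and any end-to-end
    wavelength channel. On success, reserve all three atomically."""
    if state["tx"][src] and state["rx"][dst] and state["wavelengths"] > 0:
        state["tx"][src] = False
        state["rx"][dst] = False
        state["wavelengths"] -= 1
        return True
    return False  # request must wait: channel or receiver contention
```

With a single shared wavelength, a second request is blocked even though its transmitter and receiver are both idle, which is exactly the channel-contention effect the paper quantifies.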
DOI: 10.1109/MASCOTS.2013.59 (published 2013-08-14)
Citations: 1
Performance Comparison of Routing Protocols for Cognitive Radio Networks
Li Sun, Wei Zheng, Naveen Rawat, Vikramsinh Sawant, Dimitrios Koutsonikolas
Cognitive radio networks (CRNs) have emerged as a promising solution to the ever-growing demand for additional spectrum resources and more efficient spectrum utilization. A large number of routing protocols for CRNs have been proposed recently, each based on different design goals and evaluated in different scenarios under different assumptions. However, little is known about the relative performance of all these protocols, let alone the tradeoffs among their different design goals. In this paper, we conduct the first detailed, empirical performance comparison of three representative routing protocols for CRNs under the same realistic set of assumptions. Our extensive simulation study shows that the performance of routing protocols in CRNs is affected by a number of factors in addition to primary user (PU) activity, some of which have been largely ignored by the majority of previous works. We find that different protocols perform well under different scenarios, and investigate the causes of the observed performance. Furthermore, we present a generic software architecture for the experimental evaluation of CRN routing protocols on a test bed based on the USRP2 platform, and compare the performance of two protocols on a 6-node test bed. The test bed results confirm the findings of our simulation study.
DOI: 10.1109/MASCOTS.2013.67 (published 2013-08-14)
Citations: 21
A Fix-and-Relax Model for Heterogeneous LTE-Based Networks
F. Malandrino, C. Casetti, C. Chiasserini
We envision a next-generation cellular network where base stations allow Internet connectivity through different wireless interfaces (e.g., LTE and WiFi), and licensed cellular frequencies can also be used for device-to-device communications. With this scenario in mind, we develop a model that synthetically and consistently describes the diverse communication opportunities offered by the above network system. We then propose a fix-and-relax approach that makes the model solvable in real time. As one of its possible applications, our numerical results show how the model can be effectively used to design and analyze policies for dynamic frequency allocation.
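Fix-and-relax is a general decomposition heuristic for large integer programs: variables are split into sequential blocks, each block is solved with integrality enforced while the not-yet-decided tail is relaxed, and the solved block is then fixed. The toy below applies that scheme to a two-machine load-balancing problem of my own invention (not the paper's LTE model); the relaxed tail is approximated by letting each future weight split half-and-half, which contributes nothing to the imbalance objective.

```python
from itertools import product

def fix_and_relax(weights, block=3):
    """Fix-and-relax heuristic for a toy balancing problem: assign each
    weight to machine A or B, minimising |load_A - load_B|.

    Blocks of `block` binary variables are solved exactly in sequence
    (tiny enumerations), with earlier blocks fixed and future weights
    relaxed to fractional half/half splits."""
    diff = 0.0  # load_A - load_B over the fixed prefix
    assignment = []
    for i in range(0, len(weights), block):
        chunk = weights[i:i + block]
        # Exact subproblem: enumerate +/-1 signs for this block only.
        best = min(
            product([1, -1], repeat=len(chunk)),
            key=lambda signs: abs(diff + sum(s * w for s, w in zip(signs, chunk))),
        )
        diff += sum(s * w for s, w in zip(best, chunk))
        assignment.extend("A" if s == 1 else "B" for s in best)
    return assignment, abs(diff)
```

Each subproblem enumerates only 2^block choices instead of 2^n, which is the real-time payoff of the approach; the price is that the result is a heuristic, not a guaranteed optimum.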
DOI: 10.1109/MASCOTS.2013.41 (published 2013-08-14)
Citations: 10
Performance and Energy Consumption of Lossless Compression/Decompression Utilities on Mobile Computing Platforms
A. Milenković, Armen Dzhagaryan, Martin Burtscher
Data compression and decompression utilities can be critical in increasing communication throughput, reducing communication latencies, achieving energy-efficient communication, and making effective use of available storage. This paper experimentally evaluates several such utilities at multiple compression levels on systems representative of current mobile platforms. We characterize each utility in terms of its compression ratio, compression and decompression throughput, and energy efficiency. We consider different use cases typical of modern mobile environments. We find a wide variety of energy costs associated with data compression and decompression and provide practical guidelines for selecting the most energy-efficient configurations for each use case. The best-performing configurations provide 6-fold and 4-fold improvements in energy efficiency for compressed uploads and downloads over WLAN, respectively, when compared to uncompressed data transfers.
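The ratio/throughput half of such a characterization is straightforward to reproduce in software. The sketch below profiles Python's built-in zlib at a few levels; it is only an illustrative measurement loop (the paper evaluates full command-line utilities on mobile hardware and measures energy with external instrumentation, which this cannot capture):

```python
import time
import zlib

def profile_levels(data, levels=(1, 6, 9)):
    """Measure compression ratio and rough compression throughput
    of zlib at the given levels. Wall-clock throughput is a crude
    proxy; energy efficiency additionally needs power measurements."""
    results = {}
    for level in levels:
        start = time.perf_counter()
        compressed = zlib.compress(data, level)
        elapsed = time.perf_counter() - start
        results[level] = {
            "ratio": len(data) / len(compressed),
            "mb_per_s": len(data) / max(elapsed, 1e-9) / 1e6,
        }
    return results
```

On compressible text, higher levels typically trade throughput for ratio, which is exactly the trade-off that determines the most energy-efficient configuration per use case.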
DOI: 10.1109/MASCOTS.2013.33 (published 2013-08-14)
Citations: 14
Dynamic Balanced Configuration of Multi-resources in Virtualized Clusters
Yudi Wei, Chengzhong Xu
Dynamic resource configuration is crucial to the provisioning of service level agreements (SLAs) in cloud computing. Most of today's autonomic resource configuration approaches are designed to scale a single type of resource. A few are able to partition multiple resources, but mainly to meet throughput requirements. Unlike throughput, however, response time behaves nonlinearly with respect to resources, so these approaches are hardly applicable to dynamic sharing of multiple resources for the provisioning of response time guarantees. Moreover, optimizing resource efficiency and utilization is of great significance to IaaS providers. We show theoretically and experimentally that resource optimization lies in balanced configuration of resources. In this paper, we propose a framework, BConf, for dynamic balanced configuration of multiple resources for the provisioning of response time guarantees in virtualized clusters. BConf employs an integrated model predictive control (MPC) and adaptive proportional-integral (PI) control approach (IMAP). MPC is applied to actively balance multiple resources using a novel resource metric. For performance prediction, a gray-box model is built on generic OS and hardware metrics in addition to resource actuators and performance. We find that resource penalty is an effective metric for measuring the imbalance of a configuration. Using this metric and the model, BConf tunes resources in a balanced way by minimizing the resource penalty while satisfying the response time target. Adaptive PI coordinates with MPC by narrowing the optimization space to a promising region. Within the BConf framework, resources are coordinated during contention. Experimental results with mixed TPC-W and TPC-C benchmarks show that BConf reduces resource usage by about 50% and 30% for TPC-W and TPC-C, respectively, improves stability by more than 35.6%, and has a much shorter settling time in comparison with a representative partitioning approach. The advantages of BConf in resource coordination are also demonstrated.
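The paper's exact resource-penalty metric is not reproduced in the abstract; the toy function below only captures the intuition behind any such imbalance measure: a configuration is penalised when its most-utilised resource sits far above the mean, since the bottleneck dominates response time while other resources idle. `resource_penalty` is a hypothetical stand-in, not BConf's formula.

```python
def resource_penalty(utilizations):
    """Toy imbalance measure over per-resource utilisations in [0, 1].

    Returns 0 for a perfectly balanced configuration and grows as one
    resource pulls away from the others (a bottleneck forming)."""
    mean = sum(utilizations) / len(utilizations)
    return max(utilizations) - mean
```

A balanced configuration like `[0.5, 0.5, 0.5]` scores 0, while a CPU-bottlenecked `[0.9, 0.3, 0.3]` scores 0.4; a balanced tuner would shift capacity until the penalty shrinks.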
DOI: 10.1109/MASCOTS.2013.14 (published 2013-08-14)
Citations: 4
Bayesian Service Demand Estimation Using Gibbs Sampling
Weikun Wang, G. Casale
Performance modelling of web applications involves estimating the service demands that requests place on physical resources, such as CPUs. In this paper, we propose a service demand estimation algorithm based on a Markov chain Monte Carlo (MCMC) technique, Gibbs sampling. Our methodology is widely applicable, as it requires only queue length samples at each resource, which are simple to measure. Additionally, since we use a Bayesian approach, our method can exploit prior information on the distribution of parameters, a feature not always available in existing demand estimation approaches. The main challenge of Gibbs sampling is to efficiently evaluate the conditional expression required to sample from the posterior distribution of the demands. This expression is shown to be the equilibrium solution of a multiclass closed queueing network. We define a novel approximation to efficiently obtain the normalising constant, making the cost of its evaluation acceptable for MCMC applications. Experimental evaluation based on simulation data with different model sizes demonstrates the effectiveness of Gibbs sampling for service demand estimation.
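The mechanics of Gibbs sampling, drawing each variable in turn from its full conditional given the others, can be shown on a bivariate normal, a textbook target far simpler than the paper's queueing-network posterior:

```python
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=42):
    """Minimal Gibbs sampler for a standard bivariate normal with
    correlation rho: the full conditionals are x | y ~ N(rho*y, 1-rho^2)
    and symmetrically for y | x. Alternating draws from these
    conditionals converge to samples from the joint distribution."""
    rng = random.Random(seed)
    sd = (1 - rho * rho) ** 0.5
    x = y = 0.0
    samples = []
    for i in range(burn_in + n_samples):
        x = rng.gauss(rho * y, sd)
        y = rng.gauss(rho * x, sd)
        if i >= burn_in:
            samples.append((x, y))
    return samples
```

The empirical correlation of the post-burn-in samples recovers rho, which is the same convergence property the paper relies on, only with conditionals that require a queueing-network equilibrium solution to evaluate.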
DOI: 10.1109/mascots.2013.78 (published 2013-08-14)
Citations: 27
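The normalising constant whose evaluation the authors approximate is the classical G(N) of a closed product-form queueing network. As a point of reference, here is a minimal sketch of the exact single-class computation via Buzen's convolution algorithm; the paper handles the harder multiclass case with an approximation not reproduced here, and the demand values below are illustrative:

```python
def buzen_g(demands, n_customers):
    """Normalising constants G(0..N) of a single-class closed product-form
    network with load-independent queueing stations (Buzen's convolution).

    demands[i] is the total service demand of station i."""
    g = [1.0] + [0.0] * n_customers
    for d in demands:                      # fold in one station at a time
        for n in range(1, n_customers + 1):
            g[n] += d * g[n - 1]
    return g

def throughput(demands, n_customers):
    """System throughput X(N) = G(N-1) / G(N)."""
    g = buzen_g(demands, n_customers)
    return g[n_customers - 1] / g[n_customers]

# Two stations with service demand 1.0 each and 2 customers give
# G = [1, 2, 3] and X(2) = 2/3, matching exact Mean Value Analysis.
```

The cubic-in-population cost of repeating this inside every MCMC iteration is exactly why an approximation of the normalising constant matters for Gibbs sampling at scale.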
SYREN: Synergistic Link Correlation-Aware and Network Coding-Based Dissemination in Wireless Sensor Networks
S. Alam, Salmin Sultana, Y. C. Hu, S. Fahmy
Rapid flooding is necessary for code updates and routing tree formation in wireless sensor networks. Link correlation-aware collective flooding (CF) is a recently proposed technique that provides a substrate for efficiently disseminating a single packet. Applying CF to multiple packet dissemination poses several challenges, such as reliability degradation, redundant transmissions, and increased contention among node transmissions. The varying link correlation observed in real networks makes the problem harder. In this paper, we propose a multi-packet flooding protocol, SYREN, that exploits the synergy among link correlation and network coding. In particular, SYREN exploits link correlation to eliminate the overhead of explicit control packets in networks with high correlation, and uses network coding to pipeline transmission of multiple packets via a novel, single yet scalable timer per node. SYREN reduces the number of redundant transmissions while achieving near-perfect reliability, especially in networks with low link correlation. Test bed experiments and simulations show that SYREN reduces the average number of transmissions by 30% and dissemination delay by more than 60% while achieving the same reliability as state-of-the-art protocols.
DOI: 10.1109/MASCOTS.2013.70 · Published 2013-08-14
Citations: 23
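SYREN's coding layer is not reproduced in the abstract, but the network-coding building block it relies on can be illustrated with a minimal random linear decoder over GF(2), where each coded packet is an XOR of a subset of source packets and the coefficient vector is packed into an integer bitmask. This is a generic sketch, not SYREN's implementation; all names and the example combinations are hypothetical:

```python
import random

def encode(packets, rng=random):
    """XOR a random non-empty subset of source packets (coding over GF(2))."""
    mask = rng.randrange(1, 1 << len(packets))
    payload = 0
    for i, p in enumerate(packets):
        if (mask >> i) & 1:
            payload ^= p
    return mask, payload

def decode(k, combos):
    """Recover k source packets from coded (mask, payload) pairs by
    Gaussian elimination over GF(2); returns None while rank < k."""
    basis = {}  # pivot bit -> [mask, payload], kept in reduced form
    for mask, payload in combos:
        for piv, (m, p) in basis.items():   # cancel already-known pivots
            if (mask >> piv) & 1:
                mask ^= m
                payload ^= p
        if mask == 0:
            continue                         # linearly dependent, no new info
        piv = mask.bit_length() - 1
        for row in basis.values():           # keep existing rows reduced
            if (row[0] >> piv) & 1:
                row[0] ^= mask
                row[1] ^= payload
        basis[piv] = [mask, payload]
    if len(basis) < k:
        return None
    return [basis[i][1] for i in range(k)]

# Three source packets and three independent combinations suffice:
# (p0), (p0^p1), (p0^p1^p2) decode back to [p0, p1, p2].
```

A receiver can decode as soon as it has collected k linearly independent combinations, which is what lets a node pipeline multiple packets behind a single timer instead of acknowledging each packet individually.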
LiU: Hiding Disk Access Latency for HPC Applications with a New SSD-Enabled Data Layout
Dachuan Huang, Xuechen Zhang, Wei Shi, Mai Zheng, Song Jiang, Feng Qin
Unlike in the consumer electronics and personal computing areas, in the HPC environment hard disks can hardly be replaced by SSDs. The reasons include hard disk's large capacity, very low price, and decent peak throughput. However, when latency dominates the I/O performance (e.g., when accessing random data), the hard disk's performance can be compromised. If the issue of high latency could be effectively solved, the HPC community would enjoy a large, affordable and fast storage without having to replace disks completely with expensive SSDs. In this paper, we propose an almost latency-free hard-disk dominated storage system called LiU for HPC. The key technique is leveraging a limited amount of SSD storage for its low-latency access, and changing data layout in a hybrid storage hierarchy with low-latency SSD at the top and high-latency hard disk at the bottom. If a segment of data would be randomly accessed, we lift its top part (the head) up in the hierarchy to the SSD and leave the remaining part (the body) untouched on the disk. As a result, the latency of accessing this whole segment can be removed because access latency of the body can be hidden by the access time of the head on the SSD. Combined with the effect of prefetching a large segment, LiU (Lift it Up) can effectively remove disk access latency so disk's high peak throughput can now be fully exploited for data-intensive HPC applications. We have implemented a prototype of LiU in the PVFS parallel file system and evaluated it with representative MPI-IO micro benchmarks, including MPI-IO-test, mpi-tile-io, and ior-mpi-io, and one macro-benchmark BTIO. Our experimental results show that LiU can effectively improve the I/O performance for HPC applications, with the throughput improvement ratio up to 5.8. Furthermore, LiU can bring much more benefits to sequential-I/O MPI applications when the applications are interfered with by other workloads. For example, LiU improves the I/O throughput of mpi-io-test, which is under interference, by 1.1-3.4 times, while improving the same workload without interference by 15%.
DOI: 10.1109/MASCOTS.2013.19 · Published 2013-08-14
Citations: 9
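The latency-hiding argument behind the head/body split can be made concrete with back-of-the-envelope arithmetic. The sketch below uses assumed, not measured, device parameters, and ignores LiU's prefetch sizing and placement policy: reading the head from SSD overlaps with the disk seeking to the body, so only the longer of the two overlapped phases is paid.

```python
SEEK_MS = 8.0      # assumed HDD seek + rotational delay
HDD_MBPS = 150.0   # assumed HDD sequential throughput
SSD_MBPS = 500.0   # assumed SSD read throughput

def hdd_only_ms(segment_mb):
    """Random segment read served entirely by the hard disk."""
    return SEEK_MS + segment_mb / HDD_MBPS * 1000.0

def liu_ms(segment_mb, head_mb):
    """Head served from SSD while the disk seeks to the body in parallel;
    the body then streams sequentially at the disk's peak throughput."""
    head_ms = head_mb / SSD_MBPS * 1000.0
    body_ms = (segment_mb - head_mb) / HDD_MBPS * 1000.0
    return max(head_ms, SEEK_MS) + body_ms

def min_head_mb():
    """Smallest head whose SSD read time fully covers the seek."""
    return SEEK_MS / 1000.0 * SSD_MBPS   # 4 MB under these parameters

# For a 16 MB segment with a 4 MB head: 8 + 80 = 88 ms with the split
# layout versus 8 + 106.7 = 114.7 ms from the disk alone.
```

The model also shows why the head must scale with seek time rather than segment size: once the head's SSD read time reaches the seek latency, enlarging it further buys nothing and only consumes scarce SSD capacity.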