
Latest publications from the 2016 IEEE/ACM 24th International Symposium on Quality of Service (IWQoS)

Choquet integral based QoS-to-QoE mapping for mobile VoD applications
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590441
Yanwei Liu, Jinxia Liu, Zhen Xu, S. Ci
Today, accurately predicting the quality of experience (QoE) of a networking service is an important problem for network operators seeking to optimize that service. However, because QoE has complex, multi-dimensional characteristics, QoE estimation is extremely challenging. Leveraging the strengths of quality of service (QoS) metrics in evaluating network performance, we exploit the QoS/QoE correlation to predict QoE by building a QoS-to-QoE mapping. To fully account for the inter-dependency among QoS parameters in forming QoE, a Choquet integral based fuzzy measure method is used to map QoS to QoE. Extensive experiments on mobile VoD applications verify the effectiveness and advantages of the proposed method.
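The core of the mapping is the discrete Choquet integral of normalized QoS scores with respect to a fuzzy measure, which lets correlated parameters contribute less than the sum of their individual weights. The sketch below is a minimal illustration assuming a hypothetical three-parameter measure (throughput, delay, loss); the paper's measure identification and parameter set are not reproduced.

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of per-criterion scores w.r.t. a fuzzy measure mu.

    scores: dict criterion -> normalized score in [0, 1]
    mu:     dict frozenset(criteria) -> measure in [0, 1], monotone,
            with mu(frozenset()) == 0 and mu(all criteria) == 1
    """
    ordered = sorted(scores, key=scores.get)          # ascending: x_(1) <= ... <= x_(n)
    total, prev = 0.0, 0.0
    for i, c in enumerate(ordered):
        coalition = frozenset(ordered[i:])            # criteria scoring at least x_(i)
        total += (scores[c] - prev) * mu[coalition]
        prev = scores[c]
    return total

# Hypothetical fuzzy measure over three QoS parameters; the sub-additive pair
# {delay, loss} (0.3 + 0.4 > 0.5) models redundancy between correlated metrics.
mu = {
    frozenset(): 0.0,
    frozenset({"throughput"}): 0.5, frozenset({"delay"}): 0.3, frozenset({"loss"}): 0.4,
    frozenset({"throughput", "delay"}): 0.7, frozenset({"throughput", "loss"}): 0.8,
    frozenset({"delay", "loss"}): 0.5,
    frozenset({"throughput", "delay", "loss"}): 1.0,
}
qos = {"throughput": 0.9, "delay": 0.6, "loss": 0.7}  # normalized QoS scores
print(choquet_integral(qos, mu))                      # mapped QoE score (about 0.78)
```

Because the measure is defined on coalitions rather than on individual parameters, inter-dependent QoS metrics are not simply double-counted, which is the property the abstract relies on.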
Citations: 4
Bandwidth-aware delayed repair in distributed storage systems
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590386
Jiajie Shen, Jiazhen Gu, Yangfan Zhou, Xin Wang
In data storage systems, data are typically stored on redundant storage nodes to ensure reliability. When storage nodes fail, the lost data can be restored on new nodes with the help of the redundant copies. Such a regeneration process may be aborted because storage nodes can fail while it is running, so reducing the regeneration time is a well-known challenge in improving storage reliability. Delayed repair is a typical repair scheme in real-world storage systems: it reduces the overhead of regeneration by recovering multiple node failures simultaneously. How to reduce the regeneration time under delayed repair, however, has not been well addressed. Since the available bandwidth in a storage system fluctuates and strongly affects the regeneration time, we find the key to this problem is determining when the regeneration process should start. By modeling this decision with the Lyapunov optimization framework, we propose the OMFR scheme to reduce the regeneration time. Experimental results show that OMFR reduces cumulative regeneration time by up to 78% compared with traditional delayed repair schemes.
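The decision the abstract centres on, namely when to start regeneration given fluctuating spare bandwidth, can be framed in drift-plus-penalty terms. The snippet below is only a hand-rolled sketch of that framing with assumed parameters (the trade-off knob v and the per-block bandwidth); it is not the OMFR algorithm from the paper.

```python
def should_start_repair(q_pending, bw_available, bw_per_block, v=20.0):
    """Drift-plus-penalty style start-time decision (illustrative, not OMFR).

    q_pending:    failed blocks waiting for regeneration (the Lyapunov queue)
    bw_available: spare network bandwidth in this slot (MB/s)
    bw_per_block: bandwidth one parallel regeneration consumes (MB/s)
    v:            trade-off knob; larger v waits longer for cheap, high-bandwidth slots
    Returns how many blocks to start regenerating now (0 = defer and batch).
    """
    capacity = int(bw_available // bw_per_block)      # parallel repairs that fit this slot
    if capacity == 0:
        return 0
    cost_per_block = bw_per_block / bw_available      # penalty: share of bandwidth consumed
    if q_pending >= v * cost_per_block:               # backlog pressure beats weighted cost
        return min(capacity, q_pending)
    return 0

# toy trace of (pending failures, spare bandwidth in MB/s) per time slot
for slot, (q, bw) in enumerate([(1, 20), (3, 5), (6, 80)]):
    print(slot, should_start_repair(q, bw, bw_per_block=10))   # slots start 0, 0, 6 repairs
```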
Citations: 1
Smartphone-assisted smooth live video broadcast on wearable cameras
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590439
Jiwei Li, Zhe Peng, Bin Xiao
Wearable cameras must connect to cellular-capable devices (e.g., smartphones) to provide live broadcast services to worldwide users when Wi-Fi is unavailable. However, constantly changing cellular network conditions may substantially slow down the upload of recorded videos. In this paper, we consider the scenario where wearable cameras upload live videos to remote distribution servers over cellular networks, aiming to maximize the quality of uploaded videos while meeting delay requirements. To attain this goal, we propose a dynamic video coding approach that combines dynamic adjustment of the recording resolution on wearable cameras with Lyapunov-based video preprocessing on smartphones. The proposed resolution adjustment algorithm adapts to network condition changes and reduces the overhead of video preprocessing. Owing to the properties of the Lyapunov optimization framework, the proposed video preprocessing algorithm delivers near-optimal video quality while meeting the upload delay requirements. Our evaluation shows that the approach achieves up to a 50% reduction in smartphone power consumption and up to a 60% reduction in average delay, at the cost of slightly compromised video quality.
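As a rough illustration of the resolution-adjustment idea (not the authors' algorithm), the sketch below picks the highest rung of an assumed bitrate ladder that fits a smoothed estimate of the cellular uplink; the ladder, safety margin, and EWMA smoothing are all assumptions made for the example.

```python
# (resolution, typical encoded bitrate in Mbit/s) -- an assumed ladder, not from the paper
LADDER = [("1080p", 8.0), ("720p", 5.0), ("480p", 2.5), ("360p", 1.0)]

def pick_resolution(uplink_mbps, history=None, margin=0.8, alpha=0.3):
    """Choose the highest resolution whose bitrate fits the smoothed uplink estimate.

    uplink_mbps: latest measured cellular uplink throughput
    history:     previous smoothed estimate (EWMA state), or None on the first call
    margin:      fraction of the estimate the video stream is allowed to consume
    alpha:       EWMA weight given to the newest measurement
    """
    est = uplink_mbps if history is None else alpha * uplink_mbps + (1 - alpha) * history
    for name, bitrate in LADDER:              # ladder is sorted high -> low
        if bitrate <= margin * est:
            return name, est
    return LADDER[-1][0], est                 # fall back to the lowest rung

res, est = None, None
for sample in [9.0, 6.5, 3.0, 1.2]:           # fluctuating uplink measurements
    res, est = pick_resolution(sample, history=est)
    print(sample, "->", res)                  # holds 720p, then drops to 480p as the uplink degrades
```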
Citations: 5
Reliability in future radio access networks: From linguistic to quantitative definitions
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590410
V. Suryaprakash, Ilaria Malanchini
For the first time since the advent of mobile networks, the idea of advancing their pervasiveness by co-opting them into most aspects of daily life has taken hold, and this idea is intended to be a mainstay of future networks (5G and beyond). As a result, a term one frequently encounters in the latest literature on radio access networks is reliability. It is, however, fairly evident that the term is mostly used in a colloquial, linguistic sense or, in some cases, synonymously with availability. This work is, to the best of our knowledge, the first to provide a quantitative definition of reliability that stems from its dictionary characterization and is based on quantifiable definitions of resilience, availability, and other parameters important to radio access networks. The utility of this quantitative definition is demonstrated by developing a reliability-aware scheduler that takes predictions of channel quality into account. The scheduler is also compared with the classical proportional fair scheduler in use today. This comparison not only highlights the practicality of the proposed definition, but also shows that the anticipatory reliability-aware scheduler improves reliability by about 35-50% compared with the proportional fair scheduler in common contemporary use.
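To make the comparison in the abstract concrete, here is a toy contrast between a classical proportional fair pick and an anticipatory, prediction-weighted variant. Both the scoring rule and the beta exponent are illustrative assumptions, not the authors' reliability definition or scheduler.

```python
def pf_schedule(inst_rate, avg_rate):
    """Classical proportional fair: pick the user with the best rate relative to its own average."""
    return max(inst_rate, key=lambda u: inst_rate[u] / max(avg_rate[u], 1e-9))

def anticipatory_schedule(inst_rate, avg_rate, predicted_rate, beta=0.5):
    """Prediction-weighted variant (a sketch): discount users whose channel is about to
    improve, so scarce slots go to users whose channel is about to degrade."""
    def score(u):
        pf = inst_rate[u] / max(avg_rate[u], 1e-9)
        outlook = predicted_rate[u] / max(inst_rate[u], 1e-9)   # > 1 means it will get better
        return pf / (outlook ** beta)
    return max(inst_rate, key=score)

inst = {"a": 10.0, "b": 8.0}
avg  = {"a": 6.0,  "b": 6.0}
pred = {"a": 20.0, "b": 2.0}                      # a's channel will improve, b's will collapse
print(pf_schedule(inst, avg))                      # -> 'a'
print(anticipatory_schedule(inst, avg, pred))      # -> 'b' (serve b before its channel drops)
```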
Citations: 7
Tetris: Optimizing cloud resource usage unbalance with elastic VM
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590395
Xiao Ling, Yi Yuan, Dan Wang, Jiahai Yang
Cloud systems now face an increasing number of big data applications, and allocating resources so as to accommodate as many of these applications as possible has become an important issue for cloud providers. In current cloud services, e.g., Amazon EMR, a job runs on a fixed cluster, which means a fixed amount of resources (e.g., CPU, memory) is allocated for the life cycle of the job. We observe that resources are used inefficiently in such services because of resource usage imbalance. Therefore, we propose a runtime elastic VM approach in which the cloud system can increase or decrease the number of CPUs assigned to a job at different time periods. Little needs to change in services such as Amazon EMR, yet the cloud system can accommodate many more jobs. In this paper, we first present a measurement study showing the feasibility and quantitative impact of adjusting VM configurations dynamically. We then model the task and job completion times of big data applications, which are used for elastic VM adjustment decisions, and validate the models through experiments. We present Tetris, an elastic VM strategy for cloud systems that better optimizes resource utilization to support big data applications. We further implement a Tetris prototype and comprehensively evaluate it on a real private cloud platform using a Facebook trace and a Wikipedia dataset. We observe that with Tetris, the cloud system can accommodate 31.3% more jobs.
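The elastic-VM idea, letting a job's vCPU count follow its per-phase demand so that unused capacity can admit more jobs, can be sketched as below. The headroom factor and the per-slot adjustment rule are assumptions made for illustration; Tetris's actual models of task and job completion time are not reproduced.

```python
def elastic_adjust(usage, cpu_capacity, headroom=1.2):
    """Per-slot elastic VM sizing (a sketch of the idea, not Tetris itself).

    usage: dict job_id -> measured busy vCPUs in the last slot
    Returns (new per-job allocation, vCPUs freed for admitting additional jobs).
    """
    alloc = {}
    for job, used in usage.items():
        # follow the phase's real demand, with a small headroom, at least 1 vCPU
        alloc[job] = min(max(1, round(used * headroom)), cpu_capacity)
    free = cpu_capacity - sum(alloc.values())
    return alloc, free

usage = {
    "wordcount": 2.5,   # reduce phase: CPUs mostly idle
    "pagerank": 7.6,    # compute-heavy phase
}
print(elastic_adjust(usage, cpu_capacity=32))   # ({'wordcount': 3, 'pagerank': 9}, 20) -> 20 vCPUs freed
```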
Citations: 3
FTDC: A fault-tolerant server-centric data center network
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590405
Ye Yu, Chen Qian
Server-centric data center networks enable several important features of modern data center applications, such as cloud storage and big data processing. However, network failures are ubiquitous and significantly affect network performance, including routing correctness and network bandwidth. Existing server-centric data centers do not provide specific fault-tolerance mechanisms to recover the network from failures and to keep network performance from degrading. In this work, we design FTDC, a fault-tolerant network and its routing protocols. FTDC is developed to provide high bandwidth and flexibility to data center applications and to achieve fault tolerance in a self-fixing manner. Upon failures, the servers automatically explore valid paths to deliver packets to the destination by exchanging control messages among themselves. Experimental results show that FTDC delivers high performance with very little extra overhead during network failures.
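The self-fixing behaviour described above amounts to finding an alternative server-to-server path that avoids failed elements. The sketch below uses a plain BFS over a toy topology to show the effect; FTDC's actual control-message exchange and topology-specific routing are not reproduced.

```python
from collections import deque

def find_valid_path(adj, src, dst, failed):
    """BFS path exploration that skips failed servers (a generic sketch, not FTDC's protocol)."""
    if src in failed or dst in failed:
        return None
    parent = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:                          # reconstruct the path by walking parents back
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v not in parent and v not in failed:
                parent[v] = u
                q.append(v)
    return None                               # destination unreachable with current failures

# toy server-centric topology: servers relay traffic through each other
adj = {
    "s1": ["s2", "s3"], "s2": ["s1", "s4"],
    "s3": ["s1", "s4"], "s4": ["s2", "s3"],
}
print(find_valid_path(adj, "s1", "s4", failed=set()))    # ['s1', 's2', 's4']
print(find_valid_path(adj, "s1", "s4", failed={"s2"}))   # reroute around the failure: ['s1', 's3', 's4']
```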
Citations: 1
Using recurrent neural networks toward black-box system anomaly prediction
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590435
Shaohan Huang, Carol J. Fung, Kui Wang, Polo Pei, Zhongzhi Luan, D. Qian
Component-based enterprise systems are becoming extremely complex, and their availability and usability are strongly affected by system anomalies. Anomaly prediction, which aims to prevent anomalies through pre-failure warnings, is therefore highly important for ensuring a system's stability. However, due to the complexity of such systems and the noise in monitoring data, capturing pre-failure symptoms is a challenging problem. In this paper, we present sequential and averaged recurrent neural network (RNN) models for distributed systems and component-based systems. Specifically, we use a cycle representation to capture cyclical system behaviors, which can be used to improve prediction accuracy. The anomaly data used in the experiments are collected from RUBiS, IBM System S, and the component-based system of enterprise T. The experimental results show that our proposed methods achieve high prediction accuracy with a satisfactory lead time. Our recurrent neural network model also demonstrates time efficiency when monitoring large-scale systems.
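A minimal sequence-to-one predictor of the kind the abstract describes can be sketched with PyTorch as below. The layer sizes, window length, and metric count are illustrative assumptions, and the paper's averaged model and cycle-representation features are not reproduced (cyclical features such as time-of-day encodings would simply be appended to each metric vector).

```python
import torch
import torch.nn as nn

class AnomalyPredictor(nn.Module):
    """Sequence-to-one RNN sketch: a window of monitoring metrics in, pre-failure probability out.
    (Architecture and sizes are illustrative, not the paper's exact models.)"""
    def __init__(self, n_metrics, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=n_metrics, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (batch, window, n_metrics)
        _, (h, _) = self.rnn(x)               # h: (1, batch, hidden), last hidden state
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)   # P(anomaly within the lead time)

model = AnomalyPredictor(n_metrics=8)
window = torch.randn(4, 30, 8)                # 4 samples, 30 monitoring intervals, 8 metrics each
print(model(window))                          # 4 anomaly probabilities in (0, 1)
```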
Citations: 15
LCC-Graph: A high-performance graph-processing framework with low communication costs
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590434
Yongli Cheng, F. Wang, Hong Jiang, Yu Hua, D. Feng, XiuNeng Wang
With the rapid growth of data, communication overhead has become an important concern in data center and cloud computing applications. Existing distributed graph-processing frameworks routinely suffer from high communication costs, leading to very long waits for graph-computation results. To address this problem, we propose a new computation model with low communication costs, called LCC-BSP, and use it to design and implement a high-performance distributed graph-processing framework called LCC-Graph. This framework eliminates the high communication costs of existing distributed graph-processing frameworks. Moreover, LCC-Graph minimizes the computation workload of each vertex, significantly reducing the computation time of each superstep. Evaluation of LCC-Graph on a 32-node cluster, driven by real-world graph datasets, shows that it significantly outperforms existing distributed graph-processing frameworks in terms of runtime, particularly when the system is supported by a high-bandwidth network. For example, LCC-Graph achieves an order-of-magnitude performance improvement over GPS and GraphLab.
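For readers unfamiliar with the baseline that LCC-BSP refines, the sketch below is a generic vertex-centric BSP superstep loop, here running single-source shortest paths; the communication-cost reductions that distinguish LCC-BSP and LCC-Graph are not reproduced.

```python
def bsp_supersteps(vertices, edges, init_state, init_inbox, compute, max_steps=30):
    """Generic vertex-centric BSP loop: compute each vertex, exchange messages, repeat."""
    state = {v: init_state(v) for v in vertices}
    inbox = {v: list(init_inbox(v)) for v in vertices}
    for _ in range(max_steps):
        outbox = {v: [] for v in vertices}
        active = False
        for v in vertices:
            state[v], msgs = compute(v, state[v], inbox[v], edges.get(v, []))
            for dst, m in msgs:
                outbox[dst].append(m)
                active = True
        inbox = outbox
        if not active:                        # no messages sent: all vertices halt
            break
    return state

def sssp_compute(v, dist, messages, out_edges):
    best = min([dist] + messages)
    if best < dist:                           # improved distance: propagate to neighbours
        return best, [(dst, best + w) for dst, w in out_edges]
    return dist, []

edges = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
dist = bsp_supersteps(
    "ABC", edges,
    init_state=lambda v: float("inf"),
    init_inbox=lambda v: [0] if v == "A" else [],
    compute=sssp_compute,
)
print(dist)                                   # {'A': 0, 'B': 1, 'C': 2}
```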
Citations: 7
Multi-Resource Partial-Ordered Task Scheduling in cloud computing
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590423
Chaokun Zhang, Yong Cui, Rong Zheng, E. Jinlong, Jianping Wu
In this paper, we investigate the scheduling problem with multi-resource allocation in cloud computing environments. In contrast to existing work on flow-level scheduling, which treats flows in isolation, we consider the dependencies among an application's subtasks, which impose a partial order on their execution. We formulate the problem of Multi-Resource Partial-Ordered Task Scheduling (MR-POTS) to minimize the makespan and propose a two-stage approach to solve it. In the first stage, the proposed Dominant Resource Priority (DRP) algorithm selects the collection of subtasks to receive resources, taking into account the partial-order relationship and the characteristics of the subtasks. In the second stage, the proposed Maximum Utilization Allocation (MUA) algorithm partitions multiple resources among the selected subtasks with the objective of maximizing overall utilization. Both theoretical analysis and experimental evaluation demonstrate that the proposed algorithms approximately achieve the minimum makespan with high resource utilization. Specifically, a 50% reduction in makespan can be achieved compared with existing scheduling schemes.
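A toy version of the two-stage idea (rank ready subtasks by their dominant-resource share, then pack them while every resource still fits) is sketched below. The task set, demands, and scoring rule are illustrative assumptions, not the DRP and MUA algorithms themselves.

```python
def schedule_slot(tasks, done, capacity):
    """One scheduling round: a sketch of the two-stage idea, not the exact DRP/MUA algorithms.

    tasks:    dict id -> {"deps": set of prerequisite ids, "demand": {"cpu": c, "mem": m}}
    done:     set of finished task ids
    capacity: available {"cpu": ..., "mem": ...} in this slot
    """
    # Stage 1 (priority): among ready subtasks, prefer the larger dominant-resource share.
    ready = [t for t, s in tasks.items() if t not in done and s["deps"] <= done]
    def dominant_share(t):
        d = tasks[t]["demand"]
        return max(d[r] / capacity[r] for r in capacity)
    ready.sort(key=dominant_share, reverse=True)

    # Stage 2 (allocation): pack tasks while every resource still fits, to keep utilization high.
    placed, left = [], dict(capacity)
    for t in ready:
        d = tasks[t]["demand"]
        if all(d[r] <= left[r] for r in left):
            placed.append(t)
            for r in left:
                left[r] -= d[r]
    return placed, left

tasks = {
    "extract":   {"deps": set(),         "demand": {"cpu": 2, "mem": 4}},
    "transform": {"deps": {"extract"},   "demand": {"cpu": 4, "mem": 2}},
    "index":     {"deps": set(),         "demand": {"cpu": 3, "mem": 6}},
    "load":      {"deps": {"transform"}, "demand": {"cpu": 1, "mem": 1}},
}
print(schedule_slot(tasks, done=set(), capacity={"cpu": 6, "mem": 8}))
# (['index'], {'cpu': 3, 'mem': 2}) -- 'extract' waits for the next slot
```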
Citations: 9
FDALB: Flow distribution aware load balancing for datacenter networks
Pub Date : 2016-06-20 DOI: 10.1109/IWQoS.2016.7590409
Shuo Wang, Jiao Zhang, Tao Huang, Tian Pan, Jiang Liu, Yun-jie Liu
We present FDALB, a flow-distribution-aware load balancing mechanism aimed at reducing flow collisions and achieving high scalability. Like most centralized methods, FDALB uses a centralized controller to obtain a network-wide view and congestion information. Unlike them, however, FDALB classifies flows into short flows and long flows: the paths of short flows are controlled by distributed switches, while the paths of long flows are controlled by the centralized controller. Thus, the controller handles only a small fraction of the flows, which yields high scalability. To further reduce the controller's overhead, FDALB lets end-hosts tag long flows, so switches can identify long flows simply by inspecting the tag. In addition, FDALB adaptively adjusts the tagging threshold at each end-host to keep up with flow-distribution dynamics.
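The end-host side of the mechanism, tagging a flow as long once it exceeds a byte threshold and adapting that threshold to the observed flow-size distribution, can be sketched as follows. The adaptation rule and the constants are assumptions made for illustration, not FDALB's exact scheme.

```python
class FlowTagger:
    """End-host long-flow tagging with an adaptive byte threshold (a sketch of the idea;
    FDALB's exact threshold-adaptation rule is not reproduced)."""

    def __init__(self, threshold=100 * 1024, target_long_fraction=0.1, step=0.2):
        self.threshold = threshold            # bytes after which a flow is tagged "long"
        self.target = target_long_fraction    # desired share of flows handled centrally
        self.step = step
        self.finished = []                    # sizes of recently completed flows

    def is_long(self, bytes_sent):
        return bytes_sent >= self.threshold   # switches and controller only see this tag

    def flow_finished(self, size):
        self.finished.append(size)
        if len(self.finished) >= 100:         # adapt once per 100 completed flows
            long_frac = sum(s >= self.threshold for s in self.finished) / len(self.finished)
            if long_frac > self.target:       # tagging too many flows -> raise threshold
                self.threshold *= (1 + self.step)
            elif long_frac < self.target:     # tagging too few -> lower threshold
                self.threshold *= (1 - self.step)
            self.finished.clear()

tagger = FlowTagger()
print(tagger.is_long(8 * 1024))               # False: short flow stays on switch-chosen paths
print(tagger.is_long(5 * 1024 * 1024))        # True: long flow is routed by the central controller
```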
Citations: 9