
2012 IEEE 20th International Workshop on Quality of Service: Latest Publications

Maximizing the bandwidth multiplier effect for hybrid cloud-P2P content distribution
Pub Date : 2012-06-04 DOI: 10.1109/IWQoS.2012.6245990
Zhenhua Li, Tieying Zhang, Yan Huang, Zhi-Li Zhang, Yafei Dai
Hybrid cloud-P2P content distribution (“CloudP2P”) provides a promising alternative to the conventional cloud-based or peer-to-peer (P2P)-based large-scale content distribution. It addresses the potential limitations of these two conventional approaches while inheriting their advantages. A key strength of CloudP2P lies in the so-called bandwidth multiplier effect: by appropriately allocating a small portion of cloud (server) bandwidth Si to a peer swarm i (consisting of users interested in the same content) to seed the content, the users in the peer swarm - with an aggregate download bandwidth Di - can then distribute the content among themselves; we refer to the ratio Di/Si as the bandwidth multiplier (for peer swarm i). A major problem in the design of a CloudP2P content distribution system is therefore how to allocate cloud (server) bandwidth to peer swarms so as to maximize the overall bandwidth multiplier effect of the system. In this paper, using real-world measurements, we identify the key factors that affect the bandwidth multipliers of peer swarms and thus construct a fine-grained performance model for addressing the optimal bandwidth allocation problem (OBAP). Then we develop a fast-convergent iterative algorithm to solve OBAP. Both trace-driven simulations and prototype implementation confirm the efficacy of our solution.
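As a rough illustration of the allocation problem described above (not the paper's measurement-driven model or its iterative algorithm), the sketch below assumes each swarm's aggregate download rate follows a hypothetical diminishing-returns curve D_i(S_i) = U_i(1 - e^(-k_i S_i)) and splits a fixed cloud seeding budget greedily by marginal gain; the swarm parameters and step size are made up.

```python
import math

def download_rate(U, k, S):
    """Hypothetical concave rate model: swarm demand cap U, seeding efficiency k."""
    return U * (1.0 - math.exp(-k * S))

def allocate(budget, swarms, step=0.1):
    """Greedily split a cloud seeding budget (e.g. Mbps) over swarms by marginal gain.

    swarms: list of (U_i, k_i) tuples; returns the per-swarm allocations S_i.
    """
    alloc = [0.0] * len(swarms)
    remaining = budget
    while remaining > 1e-9:
        inc = min(step, remaining)
        # give the next increment to the swarm whose download rate gains the most
        gains = [download_rate(U, k, s + inc) - download_rate(U, k, s)
                 for (U, k), s in zip(swarms, alloc)]
        best = max(range(len(swarms)), key=gains.__getitem__)
        alloc[best] += inc
        remaining -= inc
    return alloc

if __name__ == "__main__":
    swarms = [(100.0, 0.5), (40.0, 2.0), (10.0, 5.0)]   # made-up (U_i, k_i) pairs
    allocation = allocate(20.0, swarms)
    for i, ((U, k), S) in enumerate(zip(swarms, allocation)):
        D = download_rate(U, k, S)
        mult = D / S if S > 0 else 0.0
        print(f"swarm {i}: S_i = {S:.1f}, D_i = {D:.1f}, bandwidth multiplier = {mult:.1f}")
```

For concave per-swarm curves this greedy increment rule approximates the optimum; the paper instead builds a fine-grained performance model from real-world measurements and solves OBAP with a fast-convergent iterative algorithm.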
Citations: 25
AIST: Insights into queuing and loss on highly multiplexed links
Pub Date : 2012-06-04 DOI: 10.1109/IWQoS.2012.6245974
Maxim Podlesny, Sergey Gorinsky, Balaji Rengarajan
In explicit or delay-driven congestion control, a common objective is to sustain high throughput without long queues and large losses at the bottleneck link of the network path. Congestion control protocols strive to achieve this goal by transmitting smoothly in the steady state. The discovery of the appropriate steady-state transmission rates is a challenging task in itself and typically introduces additional queuing and losses. Seeking insights into the steady-state profiles of queuing and loss achievable by real protocols, this paper presents an AIST (Asynchronous arrivals with Ideally Smooth Transmission) model that abstracts away transient queuing and losses related to discovering the path capacity and redistributing it fairly among the packet flows on the bottleneck link. In AIST, the flows arrive asynchronously but transmit their packets at the same constant rate in the steady state. For the link with an overprovisioned buffer, our queuing-theoretic analysis and simulations for different smooth distributions of packet interarrival times agree that queuing under AIST with the target utilization of 1 is on the order of the square root of N, where N is the number of flows. With small buffers, our simulations of AIST show an ability to provide bounded loss rates regardless of the number of flows.
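The square-root-of-N queuing claim can be illustrated with a toy discrete-time simulation (an illustration only, not the paper's analytical model): N smooth flows each send one packet per frame of N slots at a random phase, so the offered load exactly matches the link capacity of one packet per slot, and the measured mean backlog grows roughly like the square root of N.

```python
import random

def mean_backlog(n_flows, n_trials=20, n_frames=50, seed=42):
    """Average steady-state backlog (packets) at a fully loaded 1-packet/slot link."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_trials):
        phases = [rng.randrange(n_flows) for _ in range(n_flows)]
        per_slot = [0] * n_flows                 # arrivals scheduled in each slot
        for p in phases:
            per_slot[p] += 1
        queue, total, samples = 0, 0, 0
        for frame in range(n_frames):
            for slot in range(n_flows):
                queue += per_slot[slot]
                if queue:
                    queue -= 1                   # serve one packet per slot
                if frame >= n_frames // 2:       # sample only after a warm-up period
                    total += queue
                    samples += 1
        acc += total / samples
    return acc / n_trials

if __name__ == "__main__":
    for n in (16, 64, 256, 1024):
        print(f"N = {n:5d}: mean backlog ~ {mean_backlog(n):6.1f}, sqrt(N) = {n ** 0.5:5.1f}")
```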
Citations: 0
Approaching optimal compression with fast update for large scale routing tables
Pub Date : 2012-06-04 DOI: 10.1109/IWQoS.2012.6245978
Tong Yang, Bo Yuan, Shenjiang Zhang, Ting Zhang, Ruian Duan, Yi Wang, B. Liu
With the rapid development of the Internet, the size of routing tables in backbone routers has kept growing quickly in recent years. An effective way to control the memory occupation of these ever-growing routing tables is Forwarding Information Base (FIB) compression. The existing optimal FIB compression algorithm, ORTC, suffers from high computational complexity and poor update performance because essential structure information is lost during its compression process. To address this problem, we present two suboptimal FIB compression algorithms, EAR-fast and EAR-slow, based on our proposed Election and Representative (EAR) algorithm, which is itself an optimal FIB compression algorithm. The two suboptimal algorithms preserve the structure information and support fast incremental updates while reducing computational complexity. Experiments on an 18-month real data set show that, compared with ORTC, the proposed EAR-fast algorithm requires only 9.8% of the compression time and 37.7% of the memory space, yet supports faster updates while remarkably prolonging the recompression interval. All these performance advantages come at a cost of merely a 1.5% loss in compression ratio compared with the theoretical optimum.
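The EAR algorithms themselves are not specified in the abstract; as a generic illustration of trie-based FIB aggregation, the sketch below builds a binary trie over bit-string prefixes and applies two semantics-preserving merge rules bottom-up. It is neither ORTC nor EAR, and the sample FIB is made up.

```python
"""Minimal sketch of trie-based FIB aggregation (illustrative only).

Prefixes are bit strings ("", "0", "10", ...) mapped to next hops.  Two safe
rules are applied bottom-up:
  1. drop a leaf entry whose next hop equals the next hop it would inherit
     from its nearest covering prefix (redundant more-specific route);
  2. if both halves of a prefix are fully covered by leaf entries with the
     same next hop, replace the two entries with one entry on the parent.
"""

class Node:
    __slots__ = ("children", "nexthop")
    def __init__(self):
        self.children = [None, None]
        self.nexthop = None

def build(fib):
    root = Node()
    for prefix, nh in fib.items():
        node = root
        for bit in prefix:
            i = int(bit)
            if node.children[i] is None:
                node.children[i] = Node()
            node = node.children[i]
        node.nexthop = nh
    return root

def compress(node, inherited=None):
    """Post-order aggregation pass over the trie."""
    if node is None:
        return
    here = node.nexthop if node.nexthop is not None else inherited
    for i in (0, 1):
        child = node.children[i]
        if child is not None:
            compress(child, here)
            # rule 1: child is now an empty leaf or duplicates the inherited hop
            if child.children == [None, None] and child.nexthop in (None, here):
                node.children[i] = None
    a, b = node.children
    # rule 2: both halves covered by leaves carrying an identical next hop
    if (a and b and a.children == [None, None] and b.children == [None, None]
            and a.nexthop is not None and a.nexthop == b.nexthop):
        node.nexthop, node.children = a.nexthop, [None, None]

def dump(node, prefix="", out=None):
    if out is None:
        out = {}
    if node is None:
        return out
    if node.nexthop is not None:
        out[prefix] = node.nexthop
    dump(node.children[0], prefix + "0", out)
    dump(node.children[1], prefix + "1", out)
    return out

if __name__ == "__main__":
    fib = {"0": "A", "00": "A", "01": "A", "10": "B", "11": "B"}
    root = build(fib)
    compress(root)
    print(dump(root))   # {'0': 'A', '1': 'B'}: same forwarding, 5 entries become 2
```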
Citations: 14
Scheduling with outdated CSI: Effective service capacities of optimistic vs. pessimistic policies
Pub Date : 2012-06-04 DOI: 10.1109/IWQoS.2012.6245966
J. Gross
The concept of the effective service capacity is an analytical framework for evaluating QoS-constrained queuing performance of communication systems. Recently, it has been applied to the analysis of different wireless systems such as point-to-point and multi-user systems. In contrast to previous work, we consider slot-based systems where a scheduler determines a packet size to be transmitted at the beginning of each slot. For this, the scheduler can utilize outdated channel state information. Based on a threshold error model, we derive the effective service capacity for different scheduling strategies that the scheduler might apply. We show that even slightly outdated channel state information leads to a significant loss in capacity in comparison to an ideal system with perfect channel state information available at the transmitter. This loss depends on the 'risk level' the scheduler is willing to take, which is represented by an SNR margin. We show that for any QoS target and average link state there exists an optimal SNR margin that improves the maximum sustainable rate. Typically, this SNR margin is around 3 dB, but it is sensitive to the QoS target and the average link quality. Finally, we also show that adapting to the instantaneous channel state only pays off if the correlation between the channel estimate and the channel state is relatively high (with a coefficient above 0.9).
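A small Monte Carlo sketch can illustrate the SNR-margin trade-off (an illustration under assumed Rayleigh fading with a tunable estimate-to-channel correlation, not the paper's effective-service-capacity derivation): the scheduler picks a rate from the outdated estimate backed off by the margin, and a slot is lost whenever the true SNR falls below the chosen threshold.

```python
import numpy as np

def goodput(snr_avg_db, margin_db, rho, n_slots=200_000, seed=0):
    """Delivered rate (bits/s/Hz) when rates are chosen from outdated CSI.

    Assumptions: Rayleigh fading; the outdated estimate's complex gain has
    correlation `rho` with the true gain; threshold error model for losses.
    """
    rng = np.random.default_rng(seed)
    snr_avg = 10 ** (snr_avg_db / 10)
    margin = 10 ** (margin_db / 10)
    # correlated complex Gaussian gains: h_true = rho * h_est + sqrt(1 - rho^2) * w
    h_est = (rng.standard_normal(n_slots) + 1j * rng.standard_normal(n_slots)) / np.sqrt(2)
    w = (rng.standard_normal(n_slots) + 1j * rng.standard_normal(n_slots)) / np.sqrt(2)
    h_true = rho * h_est + np.sqrt(1 - rho ** 2) * w
    snr_est = snr_avg * np.abs(h_est) ** 2
    snr_true = snr_avg * np.abs(h_true) ** 2
    rate = np.log2(1 + snr_est / margin)       # rate chosen from the backed-off estimate
    ok = snr_true >= snr_est / margin          # slot decodes only if true SNR suffices
    return float(np.mean(rate * ok))

if __name__ == "__main__":
    for margin_db in (0, 1, 2, 3, 4, 5, 6):
        print(f"{margin_db} dB margin -> goodput {goodput(10, margin_db, rho=0.95):.2f}")
```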
Citations: 11
A service quality coordination model bridging QoS and QoE
Pub Date : 2012-06-04 DOI: 10.1109/IWQoS.2012.6245977
T. Yamazaki, T. Miyoshi, Masato Eguchi, K. Yamori
Quality of Service (QoS) and Quality of Experience (QoE) are both defined to specify the degree of service quality. Although they are handled at different layers in multi-layered models, coordinating them is necessary to improve user satisfaction with telecommunication services. In this paper, after sorting out the concepts and specifications of QoS and QoE, a service quality coordination model combining the two is proposed. The model is applied to a video-sharing service, and its coordination model is derived from subjective experiments. Structural equation modeling is used to compute user satisfaction from QoS and QoE.
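As a hedged stand-in for the structural equation model (which in practice would be fitted with dedicated SEM tooling on the subjective-experiment data), the sketch below fits a two-stage linear path, QoS metrics to a perceived-quality score and that score to reported satisfaction, on synthetic data; all variable names and numbers are hypothetical.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares with an intercept column."""
    X1 = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

rng = np.random.default_rng(0)
n = 300
# synthetic QoS "measurements": startup delay (s), stall ratio, bitrate (Mbps)
qos = np.column_stack([rng.uniform(0, 5, n), rng.uniform(0, 0.2, n), rng.uniform(0.5, 8, n)])
# synthetic latent perceived quality and reported satisfaction (made-up path weights)
qoe = 4.0 - 0.3 * qos[:, 0] - 6.0 * qos[:, 1] + 0.15 * qos[:, 2] + rng.normal(0, 0.3, n)
satisfaction = 0.5 + 0.8 * qoe + rng.normal(0, 0.3, n)

path1 = ols(qos, qoe)                           # QoS -> perceived quality loadings
path2 = ols(qoe.reshape(-1, 1), satisfaction)   # perceived quality -> satisfaction
print("QoS -> QoE coefficients:        ", np.round(path1, 2))
print("QoE -> satisfaction coefficients:", np.round(path2, 2))
```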
Citations: 21
Enhancing AQM to combat wireless losses
Pub Date : 2012-06-04 DOI: 10.1109/IWQoS.2012.6245989
Chengdi Lai, Ka-Cheong Leung, V. Li
In order to maintain a small, stable backlog at the router buffer, active queue management (AQM) algorithms drop packets probabilistically at the onset of congestion, leading to backoffs by Transmission Control Protocol (TCP) flows. However, wireless losses may be misinterpreted as congestive losses and induce spurious backoffs. In this paper, we raise the basic question: can AQM maintain a stable, small backlog under wireless losses? We find that the representative AQM scheme, random early detection (RED), fails to maintain a stable backlog under time-varying wireless losses. We find that the key to resolving the problem is to robustly track the backlog to a preset reference level, and we apply the internal model principle from control theory to realize such tracking. We further devise the integral controller (IC) as an embodiment of this principle. Our simulation results show that IC is robust against time-varying wireless losses under various network scenarios.
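The integral-control idea can be sketched in a few lines: the drop (or marking) probability accumulates the error between the measured backlog and the reference level, so the backlog is driven toward the reference regardless of the loss process. The gain and sampling interval below are placeholders, not the paper's parameterization.

```python
"""Minimal sketch of an integral AQM controller that tracks the backlog to a
preset reference level.  The gain is an assumption; in a real AQM the loop
gain must be tuned against the TCP dynamics it controls."""

class IntegralAQM:
    def __init__(self, q_ref_pkts=50, gain=1e-4):
        self.q_ref = q_ref_pkts
        self.gain = gain
        self.p = 0.0                     # current drop/mark probability

    def update(self, q_measured):
        """Call once per sampling interval with the current backlog (packets)."""
        self.p += self.gain * (q_measured - self.q_ref)   # integral action
        self.p = min(1.0, max(0.0, self.p))
        return self.p

# usage: on each sampling tick, p = aqm.update(len(queue)); then drop or mark each
# arriving packet with probability p.  Only congestive pressure feeds p, so the
# backlog, not the loss rate, is what gets regulated.
aqm = IntegralAQM()
for q in (0, 20, 80, 120, 90, 60, 50):   # made-up backlog samples
    print(q, "->", round(aqm.update(q), 4))
```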
Citations: 4
Preventing TCP incast throughput collapse at the initiation, continuation, and termination
Pub Date : 2012-06-04 DOI: 10.1109/IWQoS.2012.6245995
A. Tam, Kang Xi, Yang Xu, H. Jonathan Chao
Incast applications have grown in popularity with the advancement of data center technology. The TCP incast pattern may suffer from the throughput collapse problem as a consequence of TCP retransmission timeouts when the bottleneck buffer is overwhelmed and packets are lost. This is critical to the Quality of Service of cloud computing applications. While previous literature has proposed solutions, the problem is still not completely solved. In this paper, we investigate the three root causes of the poor performance of TCP incast flows and propose three solutions, one each for the beginning, the middle, and the end of a TCP connection. The three solutions are: admission control of TCP flows so that the flow population does not exceed the network's capacity; retransmission based on timestamps to detect the loss of retransmitted packets; and reiterated FIN packets to keep the TCP connection active until the termination of a session is acknowledged. The orchestration of these solutions prevents the throughput collapse. The main idea is to ensure that all ongoing TCP incast flows can maintain self-clocking, thus eliminating the need to resort to retransmission timeouts for recovery. We evaluate these solutions and find that they work well in preventing the retransmission timeout of TCP incast flows, and hence also the throughput collapse.
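Of the three mechanisms, the admission-control step is the easiest to illustrate. The back-of-the-envelope sketch below caps the number of simultaneously transmitting senders so that their combined in-flight data fits within the path's bandwidth-delay product plus the bottleneck buffer; the sizing rule and all numbers are illustrative assumptions, not the paper's mechanism.

```python
def max_concurrent_senders(link_gbps, rtt_us, buffer_kb, window_kb):
    """How many senders can transmit a full window at once without buffer overflow."""
    bdp_kb = link_gbps * 1e9 * (rtt_us * 1e-6) / 8 / 1024   # bandwidth-delay product
    return int((bdp_kb + buffer_kb) // window_kb)

def schedule(requests, limit):
    """Admit at most `limit` senders per round; the rest wait for later rounds."""
    return [requests[i:i + limit] for i in range(0, len(requests), limit)]

if __name__ == "__main__":
    limit = max_concurrent_senders(link_gbps=1, rtt_us=100, buffer_kb=64, window_kb=32)
    print("admit at most", limit, "senders at a time")
    print(schedule([f"server-{i}" for i in range(10)], max(1, limit)))
```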
Citations: 25
CloudGPS: A scalable and ISP-friendly server selection scheme in cloud computing environments
Pub Date : 2012-06-04 DOI: 10.1109/IWQoS.2012.6245964
Cong Ding, Yang Chen, Tianyin Xu, Xiaoming Fu
In order to minimize user-perceived latency while ensuring high data availability, cloud applications need to select servers from one of multiple data centers (i.e., server clusters) in different geographical locations that can provide the desired services with low latency and low cost. This paper presents CloudGPS, a new server selection scheme for cloud computing environments that achieves high scalability and ISP-friendliness. CloudGPS proposes a configurable global performance function that allows Internet service providers (ISPs) and cloud service providers (CSPs) to balance cost, in terms of inter-domain transit traffic, against quality of service, in terms of network latency. CloudGPS bounds the overall burden to be linear in the number of end users. Moreover, compared with traditional approaches, CloudGPS significantly reduces the network distance measurement cost (i.e., from O(N) to O(1) for each end user in an application using N data centers). Furthermore, CloudGPS achieves ISP-friendliness by significantly decreasing inter-domain transit traffic.
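The configurable objective can be illustrated with a toy scoring function (an assumption in the spirit of the abstract, not the paper's actual global performance function or assignment algorithm): each user is matched to the data center minimizing a weighted mix of latency and transit cost, subject to a simple capacity limit.

```python
def select(users, datacenters, latency_ms, transit_cost, capacity, alpha=0.7):
    """Greedy assignment; alpha trades latency (QoS) against transit cost (ISP cost)."""
    remaining = dict(capacity)
    assignment = {}
    for u in users:
        ranked = sorted(
            (alpha * latency_ms[u][d] + (1 - alpha) * transit_cost[u][d], d)
            for d in datacenters if remaining[d] > 0)
        score, d = ranked[0]            # best-scoring data center with spare capacity
        assignment[u] = d
        remaining[d] -= 1
    return assignment

if __name__ == "__main__":
    users = ["u1", "u2", "u3"]
    dcs = ["dc-eu", "dc-us"]
    latency = {"u1": {"dc-eu": 20, "dc-us": 110},
               "u2": {"dc-eu": 90, "dc-us": 30},
               "u3": {"dc-eu": 40, "dc-us": 60}}
    transit = {"u1": {"dc-eu": 1, "dc-us": 5},
               "u2": {"dc-eu": 4, "dc-us": 1},
               "u3": {"dc-eu": 5, "dc-us": 2}}
    cap = {"dc-eu": 2, "dc-us": 2}
    print(select(users, dcs, latency, transit, cap))
```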
Citations: 33
Application dependency discovery using matrix factorization
Pub Date : 2012-06-04 DOI: 10.1109/IWQoS.2012.6245965
Min Ding, V. Singh, Yueping Zhang, Guofei Jiang
Driven by the large-scale growth of applications deployed in data centers and the complicated interactions between service components, automated application dependency discovery becomes essential to daily system management and operation. In this paper, we present ADD, which extracts dependency paths for each application by decomposing the application-layer connectivity graph inferred from passive network monitoring data. ADD utilizes a series of statistical techniques and combines global observation of the application traffic matrix in the data center with local observation of traffic volumes at small time scales on each server. Compared to existing approaches, ADD is especially effective in the presence of overlapping and multi-hop applications and is resilient to data loss and estimation errors.
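As an illustration of the factorization step (not the paper's exact formulation), the sketch below runs non-negative matrix factorization with standard Lee-Seung multiplicative updates on a synthetic traffic matrix whose rows are time bins and whose columns are observed service-to-service edges; edges that load onto the same component co-vary in time and are therefore candidates for a common dependency path.

```python
import numpy as np

def nmf(V, rank, n_iter=500, eps=1e-9, seed=0):
    """Non-negative matrix factorization V ~ W @ H via multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(200)
    # two synthetic applications with distinct temporal activity patterns
    app1 = 1 + np.sin(t / 10) ** 2                 # drives edges 0 and 1
    app2 = 1 + (t % 50 < 10).astype(float)         # drives edges 2 and 3
    V = np.column_stack([app1, app1, app2, app2]) + 0.05 * rng.random((200, 4))
    W, H = nmf(V, rank=2)
    # each row of the normalized loading matrix groups the edges that co-vary
    print(np.round(H / H.max(axis=1, keepdims=True), 2))
```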
Citations: 5
Session reconstruction for HTTP adaptive streaming: Laying the foundation for network-based QoE monitoring
Pub Date : 2012-06-04 DOI: 10.1109/IWQoS.2012.6245987
Rafael Huysegems, Bart De Vleeschauwer, Koen De Schepper, Chris Hawinkel, Tingyao Wu, K. Laevens, W. V. Leekwijck
HTTP Adaptive Streaming (HAS) is rapidly becoming a key video delivery technology for fixed and mobile networks. However, today there is no solution that allows network operators or CDN providers to perform network-based QoE monitoring for HAS sessions. We present a HAS QoE monitoring system based on data collected in the network, without monitoring information from the client. To retrieve the major QoE parameters such as average quality, quality variation, rebuffering events, and interactivity delay, we propose a technique called session reconstruction. We define a number of iterative steps and develop algorithms that can be used to perform HAS session reconstruction. Finally, we present the results of a working prototype for the reconstruction and monitoring of Microsoft Smooth Streaming HAS sessions that is capable of dealing with intermediate caching and user interactivity. We describe the main observations from using the platform to analyze more than a hundred HAS sessions.
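One ingredient of session reconstruction can be sketched from network-side observations alone: replaying the download-completion times of video chunks to track the client's playout buffer and flag likely rebuffering. The rules below (fixed chunk duration, playback after a fixed number of startup chunks) are simplifying assumptions for illustration, not the paper's Smooth Streaming reconstruction logic.

```python
CHUNK_SEC = 2.0   # assumed fixed media duration per chunk

def reconstruct(completion_times, startup_chunks=3):
    """Infer play start and rebuffering from sorted chunk download-completion times (s)."""
    events = []
    buffer = 0.0                        # seconds of downloaded, unplayed video
    playing = False
    clock = completion_times[0]
    for i, t in enumerate(completion_times):
        if playing:
            drained = t - clock         # playback consumed this much since last chunk
            if drained > buffer:        # buffer ran dry before this chunk arrived
                events.append(("rebuffer", round(clock + buffer, 2),
                               round(drained - buffer, 2)))
                buffer = 0.0
            else:
                buffer -= drained
        buffer += CHUNK_SEC             # newly completed chunk adds playable media
        clock = t
        if not playing and i + 1 >= startup_chunks:
            playing = True              # crude startup rule
            events.append(("play", round(t, 2), None))
    return events

if __name__ == "__main__":
    times = [0.5, 1.0, 1.6, 3.5, 5.4, 12.0, 13.4, 14.9]   # made-up chunk timeline
    for event in reconstruct(times):
        print(event)                    # expect a "play" event and one "rebuffer" event
```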
Citations: 25