Mitigating macro-cell outage in LTE-Advanced deployments
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218504
R. Sivaraj, Ioannis Broustis, N. K. Shankaranarayanan, V. Aggarwal, P. Mohapatra
LTE network service reliability is highly dependent on the wireless coverage provided by cell towers (eNBs). Therefore, the network operator's response to outage scenarios needs to be fast and efficient, in order to minimize any degradation in the Quality of Service (QoS). In this paper, we propose an outage mitigation framework for LTE-Advanced (LTE-A) wireless networks. Our framework exploits the inherent design features of LTE-A; it performs a dual optimization of the transmission power and beamforming weight parameters at each neighbor cell sector of the outage eNBs, taking into account both the channel characteristics and the residual eNB resources that remain after serving the current traffic load. Assuming statistical Channel State Information about the users at the eNBs, we show that this problem is NP-hard; we therefore relax it to a convex optimization problem and solve for the optimal points using an iterative algorithm. In contrast to previously proposed power-control studies, our framework is specifically designed to alleviate the effects of sudden LTE-A eNB outages, where a large number of mobile users need to be efficiently offloaded to nearby towers. We present the detailed analytical design of our framework and assess its efficacy via extensive NS-3 simulations on an LTE-A topology. Our simulations demonstrate that the framework provides adequate coverage and QoS across all examined outage scenarios.
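The abstract does not reproduce the paper's formulation, but the "relax to convex, solve iteratively" step can be illustrated with a toy projected-gradient power allocation. Everything below (the interference-free log-rate objective, the per-sector budget projection, and all function and parameter names) is an illustrative assumption, not the authors' actual optimization.

```python
import numpy as np

def offload_power_allocation(gains, p_max, residual, iters=300, lr=0.05, noise=1e-3):
    """Toy sketch: projected-gradient ascent on sum log(1 + g*p/noise),
    with each neighbor sector's power budget scaled by its residual
    resources after serving its own load. Interference is ignored.

    gains:    (sectors, users) mean channel gains (statistical CSI)
    p_max:    nominal per-sector power budget
    residual: (sectors,) fraction of the budget each sector can spare
    """
    S, U = gains.shape
    p = np.full((S, U), p_max / U)                 # uniform starting point
    for _ in range(iters):
        grad = gains / (noise + gains * p)         # d/dp log(1 + g*p/noise)
        p = np.clip(p + lr * grad, 0.0, None)      # ascent step, keep p >= 0
        budget = p_max * residual                  # per-sector spare power
        scale = np.minimum(1.0, budget / np.maximum(p.sum(axis=1), 1e-12))
        p *= scale[:, None]                        # project onto the budget
    return p
```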
{"title":"Mitigating macro-cell outage in LTE-Advanced deployments","authors":"R. Sivaraj, Ioannis Broustis, N. K. Shankaranarayanan, V. Aggarwal, P. Mohapatra","doi":"10.1109/INFOCOM.2015.7218504","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218504","url":null,"abstract":"LTE network service reliability is highly dependent on the wireless coverage that is provided by cell towers (eNB). Therefore, the network operator's response to outage scenarios needs to be fast and efficient, in order to minimize any degradation in the Quality of Service (QoS). In this paper, we propose an outage mitigation framework for LTE-Advanced (LTE-A) wireless networks. Our framework exploits the inherent design features of LTE-A; it performs a dual optimization of the transmission power and beamforming weight parameters at each neighbor cell sector of the outage eNBs, while taking into account both the channel characteristics and residual eNB resources, after serving its current traffic load. Assuming statistical Channel State Information about the users at the eNBs, we show that this problem is theoretically NP-hard; thus we relax it as a convex optimization problem and solve for the optimal points using an iterative algorithm. Contrary to previously-proposed power control studies, our framework is specifically designed to alleviate the effects of sudden LTE-A eNB outages, where a large number of mobile users need to be efficiently offloaded to nearby towers. We present the detailed analytical design of our framework, and we assess its efficacy via extensive NS-3 simulations on an LTE-A topology. Our simulations demonstrate that our framework provides adequate coverage and QoS across all examined outage scenarios.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129971642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Need for speed: CORA scheduler for optimizing completion-times in the cloud
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218460
Zhe Huang, Bharath Balasubramanian, Michael Wang, Tian Lan, M. Chiang, D. Tsang
There is an increasing need for cloud service performance that can be tailored to customer requirements. In the context of jobs submitted to cloud computing clusters, a crucial requirement is the specification of job completion-times. A natural way to model this specification is through client/job utility functions that depend on job completion-times. We present a method to allocate and schedule heterogeneous resources to jointly optimize the utilities of jobs in a cloud. Specifically: (i) we formulate a completion-time optimal resource allocation (CORA) problem that apportions cluster resources across jobs while enforcing max-min fairness among job utilities; (ii) starting from an integer programming formulation, we transform it through a series of steps into an equivalent linear programming problem; (iii) we implement the proposed framework as a utility-aware resource scheduler in the widely used Hadoop data processing framework; and (iv) through extensive experiments with real-world datasets, we show that our prototype achieves significant performance improvements over existing resource-allocation policies.
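The max-min fairness objective in step (i) has a standard LP encoding via an auxiliary variable. The sketch below shows that encoding for linear job utilities using scipy; the linear utility form, the single-resource capacity, and all names are assumptions for illustration rather than the CORA formulation itself.

```python
import numpy as np
from scipy.optimize import linprog

def max_min_allocation(weights, capacity):
    """Max-min fair allocation as an LP: maximize t subject to
    w_j * x_j >= t for every job j and sum(x) <= capacity.
    Variables are [x_1, ..., x_n, t]; linprog minimizes, so use -t."""
    n = len(weights)
    c = np.zeros(n + 1)
    c[-1] = -1.0                                  # minimize -t == maximize t
    A_ub = np.zeros((n + 1, n + 1))
    for j, w in enumerate(weights):
        A_ub[j, j] = -w                           # t - w_j * x_j <= 0
        A_ub[j, -1] = 1.0
    A_ub[n, :n] = 1.0                             # sum(x) <= capacity
    b_ub = np.zeros(n + 1)
    b_ub[n] = capacity
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[:n], res.x[-1]                   # allocations and min utility
```

For example, `max_min_allocation([1.0, 2.0, 4.0], 10.0)` equalizes the three weighted utilities, giving the low-weight job the largest share of the capacity.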
{"title":"Need for speed: CORA scheduler for optimizing completion-times in the cloud","authors":"Zhe Huang, Bharath Balasubramanian, Michael Wang, Tian Lan, M. Chiang, D. Tsang","doi":"10.1109/INFOCOM.2015.7218460","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218460","url":null,"abstract":"There is an increasing need for cloud service performance that can be tailored to customer requirements. In the context of jobs submitted to cloud computing clusters, a crucial requirement is the specification of job completion-times. A natural way to model this specification, is through client/job utility functions that are dependent on job completion-times. We present a method to allocate and schedule heterogeneous resources to jointly optimize the utilities of jobs in a cloud. Specifically: (i) we formulate a completion-time optimal resource allocation (CORA) problem to apportion cluster resources across the jobs that enforces max-min fairness among job utilities, and (ii) starting with an integer programming problem, we perform a series of steps to transform it into an equivalent linear programming problem, and (iii) we implement the proposed framework as a utility-aware resource scheduler in the widely used Hadoop data processing framework, and finally (iv) through extensive experiments with real-world datasets, we show that our prototype achieves significant performance improvement over existing resource-allocation policies.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128284841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance analysis for overflow loss systems of processor-sharing queues
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218518
Yin-Chi Chan, Jun Guo, E. Wong, M. Zukerman
Overflow loss systems have wide applications in telecommunications and multimedia systems. In this paper, we consider an overflow loss system consisting of a set of finite-buffer processor-sharing (PS) queues, and develop effective methods for evaluating its blocking probability. An existing approximation of the blocking probability is based on decomposing the system into independent PS queues. We provide a new approximation that instead performs the decomposition on a surrogate model of the original system, and demonstrate via extensive numerical results that the new approximation is more accurate and robust than the existing approach. We also examine the sensitivity of the blocking probability to the service time distribution, and demonstrate that an exponential distribution is a good approximation for a wide range of service time distributions.
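As a baseline against which such approximations can be checked, the blocking probability of a small overflow system with exponential service can be estimated by simulating the occupancy jump chain (with exponential service, a finite-buffer PS queue's occupancy is a birth-death chain served at the total rate mu). A minimal sketch, with all parameter names assumed:

```python
import random

def blocking_probability(lam, mu1, mu2, k1, k2, events=500_000, seed=1):
    """Monte Carlo estimate for a primary finite-buffer PS queue that
    overflows to a secondary one. By PASTA, the fraction of blocked
    (Poisson) arrivals estimates the blocking probability."""
    rng = random.Random(seed)
    n1 = n2 = blocked = arrivals = 0
    for _ in range(events):
        rates = (lam, mu1 if n1 > 0 else 0.0, mu2 if n2 > 0 else 0.0)
        u = rng.random() * sum(rates)
        if u < rates[0]:                       # arrival
            arrivals += 1
            if n1 < k1:
                n1 += 1
            elif n2 < k2:                      # overflow to the secondary
                n2 += 1
            else:
                blocked += 1                   # both buffers full: loss
        elif u < rates[0] + rates[1]:
            n1 -= 1                            # service completion, queue 1
        else:
            n2 -= 1                            # service completion, queue 2
    return blocked / max(arrivals, 1)
```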
{"title":"Performance analysis for overflow loss systems of processor-sharing queues","authors":"Yin-Chi Chan, Jun Guo, E. Wong, M. Zukerman","doi":"10.1109/INFOCOM.2015.7218518","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218518","url":null,"abstract":"Overflow loss systems have wide applications in telecommunications and multimedia systems. In this paper, we consider an overflow loss system consisting of a set of finite-buffer processor-sharing (PS) queues, and develop effective methods for evaluation of its blocking probability. For such a problem, an existing approximation of the blocking probability is based on decomposition of the system into independent PS queues. We provide a new approximation which instead performs decomposition on a surrogate model of the original system, and demonstrate via extensive numerical results that our new approximation is more accurate and robust than the existing approach. We also examine the sensitivity of the blocking probability to the service time distribution, and demonstrate that an exponential distribution is a good approximation for a wide range of service time distributions.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128497059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal scheduling of a large-scale multiclass parallel server system with ergodic cost
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218467
A. Arapostathis, A. Biswas, G. Pang
We consider the optimal scheduling problem for a large-scale parallel server system with one large pool of statistically identical servers and multiple classes of jobs under the expected long-run average (ergodic) cost criterion. Jobs of each class arrive as a Poisson process, are served FCFS within their class, and may abandon while waiting in their queue. The service and abandonment rates are both class-dependent. We assume that the system operates in the Halfin-Whitt regime, where the arrival rates and the number of servers grow appropriately so that the system becomes critically loaded while the service and abandonment rates remain fixed. The optimal solution is obtained via the ergodic diffusion control problem in the limit, which forms a new class of problems in the ergodic control literature. A new theoretical framework is provided to solve this class of ergodic control problems. The proof of convergence of the values of the multiclass parallel server system to that of the diffusion control problem relies on a new approximation method, spatial truncation, in which the Markov policies follow a fixed priority policy outside a fixed compact set.
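For reference, the Halfin-Whitt (QED) regime mentioned above is conventionally defined, in the single-class case with n servers of rate \mu, by the square-root scaling below; this is the standard textbook form, not the paper's multiclass notation.

```latex
\rho_n = \frac{\lambda_n}{n\mu}, \qquad
\sqrt{n}\,\bigl(1 - \rho_n\bigr) \longrightarrow \beta \in \mathbb{R}
\quad \text{as } n \to \infty
```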
{"title":"Optimal scheduling of a large-scale multiclass parallel server system with ergodic cost","authors":"A. Arapostathis, A. Biswas, G. Pang","doi":"10.1109/INFOCOM.2015.7218467","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218467","url":null,"abstract":"We consider the optimal scheduling problem for a large-scale parallel server system with one large pool of statistically identical servers and multiple classes of jobs under the expected long-run average (ergodic) cost criterion. Jobs of each class arrive as a Poisson process, are served in the FCFS discipline within each class and may elect to abandon while waiting in their queue. The service and abandonment rates are both class-dependent. Assume that the system is operating in the Halfin-Whitt regime, where the arrival rates and the number of servers grow appropriately so that the system gets critically loaded while the service and abandonment rates are fixed. The optimal solution is obtained via the ergodic diffusion control problem in the limit, which forms a new class of problems in the literature of ergodic controls. A new theoretical framework is provided to solve this class of ergodic control problems. The proof of the convergence of the values of the multiclass parallel server system to that of the diffusion control problem relies on a new approximation method, spatial truncation, where the Markov policies follow a fixed priority policy outside a fixed compact set.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130563153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RUSH: Routing and scheduling for hybrid data center networks
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218407
K. Han, Zhiming Hu, Jun Luo, Liu Xiang
The recent development of 60GHz technology has made hybrid Data Center Networks (hybrid DCNs) possible, i.e., augmenting wired DCNs with highly directional 60GHz wireless links to provide flexible network connectivity. Although a few recent proposals have demonstrated the feasibility of this hybrid design, how to route DCN traffic with guaranteed performance in a hybrid DCN environment remains an open problem. In this paper, we make the first attempt to tackle this challenge, and propose the RUSH framework to minimize network congestion in hybrid DCNs by jointly routing flows and scheduling the directional wireless antennas. Although the problem is shown to be NP-hard, the RUSH algorithms offer guaranteed performance bounds. Our algorithms handle both batched and sequential arrivals of flow demands, and our theoretical analysis shows that they achieve competitive ratios of O(log n), where n is the number of switches in the network. We also conduct extensive simulations using ns-3 to verify the effectiveness of RUSH. The results demonstrate that RUSH produces nearly optimal performance and significantly outperforms current practice as well as a simple greedy heuristic.
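The abstract does not describe the RUSH algorithms themselves; as a flavor of how O(log n)-competitive congestion-minimizing routing is classically obtained, here is a sketch of the exponential-cost rule (route each arriving flow along the path with the least marginal exponential congestion cost). All names are assumptions, and the antenna-scheduling half of RUSH is not modeled.

```python
import heapq
import math

def route_flow(graph, cap, load, src, dst, demand, mu=2.0):
    """Route one flow on the path minimizing the total *increase* of the
    exponential congestion potential sum_e mu**(load_e / cap_e), then
    commit its demand. graph: {u: [v, ...]}; cap/load keyed by (u, v)."""
    def marginal(e):
        return mu ** ((load[e] + demand) / cap[e]) - mu ** (load[e] / cap[e])
    dist, prev, done = {src: 0.0}, {}, set()
    pq = [(0.0, src)]
    while pq:                                      # Dijkstra on marginal costs
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v in graph.get(u, []):
            nd = d + marginal((u, v))
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst != src and dst not in prev:
        return None                                # no route found
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    path.reverse()
    for e in zip(path, path[1:]):
        load[e] += demand                          # commit the flow
    return path
```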
{"title":"RUSH: Routing and scheduling for hybrid data center networks","authors":"K. Han, Zhiming Hu, Jun Luo, Liu Xiang","doi":"10.1109/INFOCOM.2015.7218407","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218407","url":null,"abstract":"The recent development of 60GHz technology has made hybrid Data Center Networks (hybrid DCNs) possible, i.e., augmenting wired DCNs with highly directional 60GHz wireless links to provide flexible network connectivity. Although a few recent proposals have demonstrated the feasibility of this hybrid design, it still remains an open problem how to route DCN traffics with guaranteed performance under a hybrid DCN environment. In this paper, we make the first attempt to tackle this challenge, and propose the RUSH framework to minimize the network congestion in hybrid DCNs, by jointly routing flows and scheduling wireless (directional) antennas. Though the problem is shown to be NP-hard, the RUSH algorithms offer guaranteed performance bounds. Our algorithms are able to handle both batched arrivals and sequential arrivals of flow demands, and the theoretical analysis shows that they achieve competitive ratios of O(log n), where n is the number of switches in the network. We also conduct extensive simulations using ns-3 to verify the effectiveness of RUSH. The results demonstrate that RUSH produces nearly optimal performance and significantly outperforms the current practice and a simple greedy heuristics.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"7 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130942527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PeerClean: Unveiling peer-to-peer botnets through dynamic group behavior analysis
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218396
Qiben Yan, Yao Zheng, Tingting Jiang, W. Lou, Y. T. Hou
Advanced botnets adopt a peer-to-peer (P2P) infrastructure for more resilient command and control (C&C). Traditional detection techniques become less effective in identifying bots that communicate via a P2P structure. In this paper, we present PeerClean, a novel system that detects P2P botnets in real time using only high-level features extracted from C&C network flow traffic. PeerClean reliably distinguishes P2P bot-infected hosts from legitimate P2P hosts by jointly considering flow-level traffic statistics and network connection patterns. Instead of working on individual connections or hosts, PeerClean clusters hosts with similar flow traffic statistics into groups. It then extracts the collective and dynamic connection patterns of each group by leveraging a novel dynamic group behavior analysis. Compared with individual host-level connection patterns, the collective group patterns are more robust and more discriminative. Multi-class classification models are then used to identify different types of bots based on the established patterns. To increase the detection probability, we further propose to train the model on average group behavior but to exploit extreme group behavior for detection. We evaluate PeerClean on real-world flow records from a campus network. Our evaluation shows that PeerClean achieves high detection rates with few false positives.
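A minimal sketch of the grouping idea: cluster hosts by flow statistics, then attach group-level (average and extreme) connection features before classification. The feature choices, the scikit-learn components, and all names are illustrative assumptions, not PeerClean's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def group_behavior_features(flow_stats, conn_degree, k=5, seed=0):
    """Cluster hosts with similar flow statistics, then append each host's
    group-level connection pattern: the average and the extreme (max)
    connection degree observed within its cluster."""
    clusters = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(flow_stats)
    group = []
    for h in range(len(flow_stats)):
        members = conn_degree[clusters == clusters[h]]
        group.append([conn_degree[h], members.mean(), members.max()])
    return np.hstack([flow_stats, np.asarray(group)])

# Hypothetical usage: X is a (hosts, features) flow-statistic matrix,
# deg holds per-host connection degrees, y labels known bot families.
# clf = SVC().fit(group_behavior_features(X, deg), y)
```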
{"title":"PeerClean: Unveiling peer-to-peer botnets through dynamic group behavior analysis","authors":"Qiben Yan, Yao Zheng, Tingting Jiang, W. Lou, Y. T. Hou","doi":"10.1109/INFOCOM.2015.7218396","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218396","url":null,"abstract":"Advanced botnets adopt a peer-to-peer (P2P) infrastructure for more resilient command and control (C&C). Traditional detection techniques become less effective in identifying bots that communicate via a P2P structure. In this paper, we present PeerClean, a novel system that detects P2P botnets in real time using only high-level features extracted from C&C network flow traffic. PeerClean reliably distinguishes P2P bot-infected hosts from legitimate P2P hosts by jointly considering flow-level traffic statistics and network connection patterns. Instead of working on individual connections or hosts, PeerClean clusters hosts with similar flow traffic statistics into groups. It then extracts the collective and dynamic connection patterns of each group by leveraging a novel dynamic group behavior analysis. Comparing with the individual host-level connection patterns, the collective group patterns are more robust and differentiable. Multi-class classification models are then used to identify different types of bots based on the established patterns. To increase the detection probability, we further propose to train the model with average group behavior, but to explore the extreme group behavior for the detection. We evaluate PeerClean on real-world flow records from a campus network. Our evaluation shows that PeerClean is able to achieve high detection rates with few false positives.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126368975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Energy-aware video streaming on smartphones
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218493
Wenjie Hu, G. Cao
Video streaming on smartphones consumes a great deal of energy. One common solution is to download and buffer future video data for playback so that the wireless interface can be turned off most of the time, saving energy. However, this may waste energy and bandwidth if the user skips ahead or quits before the end of the video. Using a small buffer can reduce the bandwidth wastage, but may consume more energy and introduce rebuffering delay. In this paper, we analyze the power consumption during video streaming, considering user skip and early-quit scenarios. We first propose an offline method to compute the minimum power consumption, and then introduce an online solution that saves energy based on whether the user tends to watch videos for a long time or tends to skip. We have implemented the online solution on Android-based smartphones. Experimental results and trace-driven simulation results show that our method saves energy while achieving a better tradeoff between delay and bandwidth than existing methods.
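The buffer-size tradeoff described above can be made concrete with a toy energy model: larger download bursts amortize the radio tail energy but waste more data when the user quits early. The model below, including every constant and name, is an illustrative assumption rather than the paper's offline or online method.

```python
def energy_per_watched_second(chunk_s, p_quit, p_active=1.0, p_tail=0.6,
                              t_tail=10.0, rate_mbps=5.0, bitrate_mbps=1.0):
    """Toy model: expected radio energy per watched second when video is
    fetched in bursts of chunk_s seconds. Bigger bursts amortize the
    radio tail (t_tail seconds at p_tail watts) but waste the rest of
    the burst if the user quits early (probability p_quit per second)."""
    t_download = chunk_s * bitrate_mbps / rate_mbps    # radio-active time
    e_burst = p_active * t_download + p_tail * t_tail  # energy per burst
    # expected seconds of the burst actually watched (geometric survival)
    watched = sum((1.0 - p_quit) ** s for s in range(int(chunk_s)))
    return e_burst / max(watched, 1e-9)

# choose the burst size that minimizes energy per watched second
best = min(range(10, 301, 10), key=lambda c: energy_per_watched_second(c, 0.01))
```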
{"title":"Energy-aware video streaming on smartphones","authors":"Wenjie Hu, G. Cao","doi":"10.1109/INFOCOM.2015.7218493","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218493","url":null,"abstract":"Video streaming on smartphone consumes lots of energy. One common solution is to download and buffer future video data for playback so that the wireless interface can be turned off most of time and then save energy. However, this may waste energy and bandwidth if the user skips or quits before the end of the video. Using a small buffer can reduce the bandwidth wastage, but may consume more energy and introduce rebuffering delay. In this paper, we analyze the power consumption during video streaming considering user skip and early quit scenarios. We first propose an offline method to compute the minimum power consumption, and then introduce an online solution to save energy based on whether the user tends to watch video for a long time or tends to skip. We have implemented the online solution on Android based smartphones. Experimental results and trace-driven simulation results show that that our method can save energy while achieving a better tradeoff between delay and bandwidth compared to existing methods.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133724652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incentivize crowd labeling under budget constraint
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218674
Qi Zhang, Yutian Wen, Xiaohua Tian, Xiaoying Gan, Xinbing Wang
Crowdsourcing systems allocate tasks to a group of workers over the Internet, and have become an effective paradigm for human-powered problem solving such as image classification, optical character recognition and proofreading. In this paper, we focus on incentivizing crowd workers to label a set of binary tasks under a strict budget constraint. We profile the tasks' difficulty levels and the workers' quality in crowdsourcing systems, where the collected labels are aggregated with a sequential Bayesian approach. To stimulate workers to undertake crowd labeling tasks, the interaction between workers and the platform is modeled as a reverse auction. We show that maximizing the platform utility can be intractable, and we develop an incentive mechanism that determines the winning bids and payments with polynomial-time computational complexity. Moreover, we theoretically prove that our mechanism is truthful, individually rational and budget feasible. Through extensive simulations, we demonstrate that our mechanism uses the budget efficiently to achieve high platform utility with polynomial computational complexity.
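A common template for truthful, budget-feasible reverse auctions is greedy selection with a proportional-share rule, sketched below. This is offered only as a sketch of the genre; the paper's own winner-selection and payment rules, and the names used here, are not taken from the source.

```python
def budget_feasible_winners(bids, values, budget):
    """Greedy proportional-share selection (a standard budget-feasible
    reverse-auction template): scan workers in decreasing value-per-bid
    order and accept worker i while its bid stays within the share of
    the budget proportional to its value contribution."""
    order = sorted(range(len(bids)), key=lambda i: values[i] / bids[i], reverse=True)
    winners, total_value = [], 0.0
    for i in order:
        if bids[i] <= budget * values[i] / (total_value + values[i]):
            winners.append(i)
            total_value += values[i]
    return winners
```

In such schemes, truthfulness then comes from paying each winner a threshold price (the largest bid with which it would still win) rather than its own bid.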
{"title":"Incentivize crowd labeling under budget constraint","authors":"Qi Zhang, Yutian Wen, Xiaohua Tian, Xiaoying Gan, Xinbing Wang","doi":"10.1109/INFOCOM.2015.7218674","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218674","url":null,"abstract":"Crowdsourcing systems allocate tasks to a group of workers over the Internet, which have become an effective paradigm for human-powered problem solving such as image classification, optical character recognition and proofreading. In this paper, we focus on incentivizing crowd workers to label a set of binary tasks under strict budget constraint. We properly profile the tasks' difficulty levels and workers' quality in crowdsourcing systems, where the collected labels are aggregated with sequential Bayesian approach. To stimulate workers to undertake crowd labeling tasks, the interaction between workers and the platform is modeled as a reverse auction. We reveal that the platform utility maximization could be intractable, for which an incentive mechanism that determines the winning bid and payments with polynomial-time computation complexity is developed. Moreover, we theoretically prove that our mechanism is truthful, individually rational and budget feasible. Through extensive simulations, we demonstrate that our mechanism utilizes budget efficiently to achieve high platform utility with polynomial computation complexity.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130404003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed algorithms for content allocation in interconnected content distribution networks
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218624
Valentino Pacifici, G. Dán
Internet service providers increasingly deploy internal CDNs with the objective of reducing the traffic on their transit links and improving their customers' quality of experience. Once ISP-managed CDNs (nCDNs) become commonplace, ISPs would likely provide common interfaces to interconnect their nCDNs for mutual benefit, as they do with peering today. In this paper we consider the problem of using distributed algorithms to compute a content allocation for nCDNs. We show that if every ISP aims to minimize its own cost and bilateral payments are not allowed, then it may be impossible to compute a content allocation. For the case of bilateral payments we propose two distributed algorithms, the aggregate value compensation (AC) and the object value compensation (OC) algorithms, which differ in the level of parallelism they allow and in the amount of information exchanged between nCDNs. We prove that the algorithms converge, and we propose a scheme to ensure ex-post individual rationality. Simulations performed on a real AS-level network topology and on synthetic topologies show that the algorithms have a geometric rate of convergence, and scale well with the graphs' density and the nCDN capacity.
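One natural building block for such distributed algorithms is an iterative best-response step in which each nCDN re-picks its cached objects given its neighbors' current allocations. The sketch below is a toy of that step only, with no compensation payments, and with all names and the two-level cost model assumed for illustration.

```python
def best_response(cache_slots, local_demand, cost, neighbor_has):
    """One nCDN's best-response step (toy): cache the objects whose local
    demand avoids the most fetch cost, where an object held by a peer
    nCDN is cheaper to fetch (cost['peer']) than one fetched over a
    transit link (cost['transit'])."""
    saving = {o: d * (cost['peer'] if o in neighbor_has else cost['transit'])
              for o, d in local_demand.items()}
    ranked = sorted(saving, key=saving.get, reverse=True)
    return set(ranked[:cache_slots])

# Hypothetical usage: alternate best responses between two nCDNs until
# their allocations stop changing; convergence under payments is what
# the paper proves for its AC and OC algorithms.
```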
{"title":"Distributed algorithms for content allocation in interconnected content distribution networks","authors":"Valentino Pacifici, G. Dán","doi":"10.1109/INFOCOM.2015.7218624","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218624","url":null,"abstract":"Internet service providers increasingly deploy internal CDNs with the objective of reducing the traffic on their transit links and to improve their customers' quality of experience. Once ISP managed CDNs (nCDNs) become commonplace, ISPs would likely provide common interfaces to interconnect their nCDNs for mutual benefit, as they do with peering today. In this paper we consider the problem of using distributed algorithms for computing a content allocation for nCDNs. We show that if every ISP aims to minimize its cost and bilateral payments are not allowed then it may be impossible to compute a content allocation. For the case of bilateral payments we propose two distributed algorithms, the aggregate value compensation (AC) and the object value compensation (OC) algorithms, which differ in terms of the level of parallelism they allow and in terms of the amount of information exchanged between nCDNs. We prove that the algorithms converge, and we propose a scheme to ensure ex-post individual rationality. Simulations performed on a real AS-level network topology and synthetic topologies show that the algorithms have geometric rate of convergence, and scale well with the graphs' density and the nCDN capacity.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132800563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Redundancy control through traffic deduplication
Pub Date: 2015-08-24 | DOI: 10.1109/INFOCOM.2015.7218362
K. Hua, Ning Jiang, J. Kuhns, V. Sundaram, C. Zou
Statistics show that 79% of Internet traffic is video, and it is mostly “redundant”. Video-on-Demand in particular follows a 90/10 access pattern, where 90% of the users access the same 10% of all video content. As a result, redundant data are repeatedly transmitted over the Internet. In this paper, we propose a novel traffic deduplication technique to achieve more efficient network communication between video sources (video servers or proxy servers in a CDN) and clients. The proposed SMART (Small packet Merge-Able RouTers) overlay network employs an opportunistic traffic deduplication approach and allows each SMART router to dynamically merge independent streams of the same video content, forming a video streaming tree (VST). The merged streams are tunneled through the overlay together with TCP session information before eventually being de-multiplexed and delivered to the clients in a manner fully compatible with the TCP protocol. We present a theoretical analysis of the merging strategy between the video source and clients, the efficiency of a SMART router in saving traffic during a merge, and the overall performance of a SMART overlay topology between a video source and clients. Finally, we prototyped SMART in the PlanetLab environment and show that the performance evaluation results are consistent with our theoretical analysis and that significant bandwidth savings are achieved.
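The stream-merging idea can be sketched as a router-side merge table: requests for the same video segment that arrive within one merge window share a single upstream fetch and are fanned out downstream. This toy omits the TCP tunneling and VST maintenance the paper describes; the class and method names are assumptions.

```python
from collections import defaultdict

class SmartRouter:
    """Toy merge table for a SMART-like overlay node: duplicate requests
    gathered during a merge window trigger one upstream fetch whose
    payload is fanned out to every requesting downstream session."""

    def __init__(self, fetch_upstream):
        self.fetch = fetch_upstream            # callable: (video, seg) -> bytes
        self.pending = defaultdict(list)       # (video, seg) -> sessions

    def request(self, session, video, seg):
        self.pending[(video, seg)].append(session)

    def flush(self):
        """Serve every request gathered in the current merge window."""
        for (video, seg), sessions in self.pending.items():
            data = self.fetch(video, seg)      # one copy travels upstream
            for s in sessions:                 # many copies fan out locally
                s.deliver(video, seg, data)
        self.pending.clear()
```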
{"title":"Redundancy control through traffic deduplication","authors":"K. Hua, Ning Jiang, J. Kuhns, V. Sundaram, C. Zou","doi":"10.1109/INFOCOM.2015.7218362","DOIUrl":"https://doi.org/10.1109/INFOCOM.2015.7218362","url":null,"abstract":"Statistics show that 79% of the Internet traffic is video and mostly “redundant”. Video-On-Demand in particular follows a 90/10 access pattern, where 90% of the users access the same 10% of all video content. As a result, redundant data are repeatedly transmitted over the Internet. In this paper, we propose a novel traffic deduplication technique to achieve more efficient network communication between video sources (video servers or proxy servers in a CDN) and clients. The proposed SMART (Small packet Merge-Able RouTers) overlay network employs an opportunistic traffic deduplication approach and allows each SMART router to dynamically merge independent streams of the same video content, forming a video streaming tree (VST). The merged streams are tunneled through the overlay together with TCP sessions information before eventually being de-multiplexed and delivered to the clients fully compatible with the TCP protocol. We present theoretical analysis findings on the merging strategy between the video source and clients, the efficiency of the SMART router to save traffic during a merge process, and the overall performance of implementing a SMART overlay topology between a video source and clients. Finally, we prototyped SMART in the PlanetLab environment. We illustrate that performance evaluation results are consistent with our theoretical analysis and significant bandwidth saving is achieved.","PeriodicalId":342583,"journal":{"name":"2015 IEEE Conference on Computer Communications (INFOCOM)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128537307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}