Many Big Data applications in science and industry have arisen that require large amounts of streamed or event data to be analyzed with low latency. This paper presents a reactive strategy for enforcing latency guarantees in data flows running on scalable Stream Processing Engines (SPEs) while minimizing resource consumption. We introduce a model for estimating the latency of a data flow when the degrees of parallelism of its tasks are changed. We describe how to continuously measure the performance metrics the model requires, and how the model can be used to enforce latency guarantees by determining appropriate scaling actions at runtime. To do so, the strategy leverages the elasticity inherent in common cloud technology and cluster resource management systems. We have implemented our strategy as part of the Nephele SPE. To showcase the effectiveness of our approach, we provide an experimental evaluation on a large commodity cluster, using both a synthetic workload and an application performing real-time sentiment analysis on real-world social media data.
{"title":"Elastic Stream Processing with Latency Guarantees","authors":"Björn Lohrmann, P. Janacik, O. Kao","doi":"10.1109/ICDCS.2015.48","DOIUrl":"https://doi.org/10.1109/ICDCS.2015.48","url":null,"abstract":"Many Big Data applications in science and industry have arisen, that require large amounts of streamed or event data to be analyzed with low latency. This paper presents a reactive strategy to enforce latency guarantees in data flows running on scalable Stream Processing Engines (SPEs), while minimizing resource consumption. We introduce a model for estimating the latency of a data flow, when the degrees of parallelism of the tasks within are changed. We describe how to continuously measure the necessary performance metrics for the model, and how it can be used to enforce latency guarantees, by determining appropriate scaling actions at runtime. Therefore, it leverages the elasticity inherent to common cloud technology and cluster resource management systems. We have implemented our strategy as part of the Nephele SPE. To showcase the effectiveness of our approach, we provide an experimental evaluation on a large commodity cluster, using both a synthetic workload as well as an application performing real-time sentiment analysis on real-world social media data.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134094915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
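The latency model and reactive scaling loop described in the abstract can be sketched as follows. This is a minimal illustration under assumptions of ours (records split evenly across subtasks, each subtask approximated as an M/M/1 queue, stages in a chain); it is not Nephele's actual model, and all names are invented:

```python
# Sketch: estimate data-flow latency as a function of each stage's
# degree of parallelism, then scale out reactively until a latency
# guarantee holds. The M/M/1 approximation is our assumption.

def task_latency(arrival_rate, service_time, parallelism):
    """Approximate per-record latency of one task stage."""
    utilization = (arrival_rate / parallelism) * service_time
    if utilization >= 1.0:
        return float("inf")  # stage is overloaded
    return service_time / (1.0 - utilization)  # M/M/1 sojourn time

def flow_latency(arrival_rate, stages):
    """Latency of a chain of stages, each a (service_time, parallelism) pair."""
    return sum(task_latency(arrival_rate, s, p) for s, p in stages)

def min_parallelism_for_target(arrival_rate, stages, target):
    """Reactive scaling: grow the worst stage until the guarantee holds.

    `target` must exceed the sum of service times, the latency floor
    reached at infinite parallelism.
    """
    stages = [[s, p] for s, p in stages]
    assert target > sum(s for s, _ in stages)
    while flow_latency(arrival_rate, stages) > target:
        # scale out the stage currently contributing the most latency
        worst = max(range(len(stages)),
                    key=lambda i: task_latency(arrival_rate, *stages[i]))
        stages[worst][1] += 1
    return [p for _, p in stages]
```

For a 100-records/s stream through two stages with 5 ms and 2 ms service times, doubling the first stage's parallelism already brings the estimated latency under a 10 ms guarantee.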
We present a novel insurance mechanism, consisting of an insurance protocol and a transaction mechanism, to reduce new-seller ramp-up time in eBay-like reputation systems. Experiments on an eBay dataset show that our insurance mechanism reduces ramp-up time by 90%.
{"title":"A Mechanism Approach to Reduce New Seller Ramp-Up Time in eBay-Like Reputation Systems","authors":"Hong Xie, John C.S. Lui","doi":"10.1109/ICDCS.2015.101","DOIUrl":"https://doi.org/10.1109/ICDCS.2015.101","url":null,"abstract":"We present a novel insurance mechanism consisting of an insurance protocol and a transaction mechanism to reduce new seller ramp up time in eBay-like reputation mechanisms. We conduct experiments on an eBay's dataset and show that our insurance mechanism reduces ramp up time by 90%.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132522290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Several solutions have recently been proposed to securely estimate sensor positions even in the presence of malicious location information that distorts the estimate. Some of these solutions are based on Minimum Mean Square Estimation (MMSE) methods, which estimate sensor positions efficiently. Although such solutions can filter out most malicious information, an attacker who knows the position of a target sensor can still significantly alter its position information. In this paper, we introduce this new attack, called the Inside-Attack, and a technique that detects and filters out malicious location information.
{"title":"Inside Attack Filtering for Robust Sensor Localization","authors":"Jongho Won, E. Bertino","doi":"10.1145/2897845.2897926","DOIUrl":"https://doi.org/10.1145/2897845.2897926","url":null,"abstract":"Several solutions have recently been proposed to securely estimate sensor positions even when there is malicious location information which distorts the estimate. Some of those solutions are based on the Minimum Mean Square Estimation (MMSE) methods which efficiently estimate sensor positions. Although such solutions can filter out most of malicious information, if an attacker knows the position of a target sensor, the attacker can significantly alter the position information. In this paper, we introduce such a new attack, called Inside-Attack, and a technique that is able to detect and filter out malicious location information.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123947896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
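The filtering idea can be illustrated with a toy MMSE-style localizer: estimate the position by least squares over all anchor references, then discard references whose residual is large and re-estimate. This is our own sketch, not the paper's algorithm; the grid search, field size, and threshold are arbitrary choices:

```python
import math

# Toy secure localization: least-squares position estimate plus
# residual-based filtering of malicious location references.
# (Illustrative only; not the paper's detection technique.)

def estimate(anchors, dists):
    """Least-squares position estimate via a coarse 0.1-step grid search
    over a 10x10 field."""
    grid = [i / 10.0 for i in range(101)]  # 0.0 .. 10.0
    best, best_err = None, float("inf")
    for x in grid:
        for y in grid:
            err = sum((math.hypot(x - ax, y - ay) - d) ** 2
                      for (ax, ay), d in zip(anchors, dists))
            if err < best_err:
                best, best_err = (x, y), err
    return best

def filter_inside_attack(anchors, dists, threshold=1.0):
    """Drop references whose residual exceeds `threshold`, re-estimate."""
    x, y = estimate(anchors, dists)
    kept = [(a, d) for a, d in zip(anchors, dists)
            if abs(math.hypot(x - a[0], y - a[1]) - d) <= threshold]
    return estimate([a for a, _ in kept], [d for _, d in kept])
```

With four corner anchors and a sensor at (5, 5), a malicious anchor at (10, 10) reporting a distance of 2 pulls the plain estimate toward itself; the residual filter discards the inconsistent references and recovers (5, 5).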
A new network structure called BCube Connected Crossbars (BCCC) was recently proposed. Its short diameter, good expandability, and low cost make it a very promising topology for data center networks. However, it can utilize only two NIC ports per server, which matches today's technology but wastes capacity when more ports are available. As technology advances, servers with more NIC ports are emerging and will become low-cost commodities. In this paper, we propose a more general server-centric data center network structure, called Advanced BCube Connected Crossbars (ABCCC), which can utilize inexpensive commodity off-the-shelf switches and servers with any fixed number of NIC ports while providing good network properties. Like BCCC, ABCCC has good expandability: expansion requires no alteration of the existing system, only the addition of new components, so the expansion cost that BCube suffers from is significantly reduced in ABCCC. We also introduce an addressing scheme and an efficient routing algorithm for one-to-one communication in ABCCC. We make comprehensive comparisons between ABCCC and popular existing structures in terms of several critical metrics, such as diameter, network size, bisection bandwidth, and capital expenditure. We also conduct extensive simulations, which show that ABCCC achieves the best trade-off among these metrics and suits many different applications through fine-tuning of its parameters.
{"title":"ABCCC: An Advanced Cube Based Network for Data Centers","authors":"Zhenhua Li, Yuanyuan Yang","doi":"10.1109/ICDCS.2015.62","DOIUrl":"https://doi.org/10.1109/ICDCS.2015.62","url":null,"abstract":"A new network structure called BCube Connected Crossbars (BCCC) was recently proposed. Its short diameter, good expandability and low cost make it a very promising topology for data center networks. However, it can utilize only two NIC ports of each server, which is suitable for nowadays technology, even when more ports are available. Due to technology advances, servers with more NIC ports are emerging and they will become low-cost commodities some time later. In this paper, we propose a more general server-centric data center network structure, called Advanced BCube Connected Crossbars (ABCCC), which can utilize inexpensive commodity off-the-shelf switches and servers with any fixed number of NIC ports and provide good network properties. Like BCCC, ABCCC has good expandability. When doing expansion, there is no need to alter the existing system but only to add new components into it. Thus the expansion cost that BCube suffers from can be significantly reduced in ABCCC. We also introduce an addressing scheme and an efficient routing algorithm for one-to-one communication in ABCCC. We make comprehensive comparisons between ABCCC and some popular existing structures in terms of several critical metrics, such as diameter, network size, bisection bandwidth and capital expenditure. We also conduct extensive simulations to evaluate ABCCC, which show that ABCCC achieves the best trade off among all these critical metrics and it suits for many different applications by fine tuning its parameters.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"206 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121246111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A complex cloud application consists of virtual machines (VMs) running software such as web servers and load balancers, storage in the form of disk volumes, and network connections that enable communication between VMs and between VMs and disk volumes. The application is also associated with various requirements, including not only quantities such as the sizes of the VMs and disk volumes, but also quality of service (QoS) attributes such as throughput, latency, and reliability. This paper presents Ostro, an OpenStack-based scheduler that optimizes the utilization of data center resources while satisfying the requirements of the cloud applications. The novelty of the approach realized by Ostro is that it makes holistic placement decisions, in which all the requirements of an application -- described using an application topology abstraction -- are considered jointly. Specific placement algorithms for application topologies are described, including an estimate-based greedy algorithm and a time-bounded A* algorithm. These algorithms can deal with complex topologies that have heterogeneous resource requirements, while still being scalable enough to handle the placement of hundreds of VMs and volumes across several thousand host servers. The approach is evaluated using both extensive simulations and realistic experiments. The results show that Ostro significantly improves resource utilization when compared with naive approaches.
{"title":"Ostro: Scalable Placement Optimization of Complex Application Topologies in Large-Scale Data Centers","authors":"Gueyoung Jung, M. Hiltunen, Kaustubh R. Joshi, R. Panta, R. Schlichting","doi":"10.1109/ICDCS.2015.23","DOIUrl":"https://doi.org/10.1109/ICDCS.2015.23","url":null,"abstract":"A complex cloud application consists of virtual machines (VMs) running software such as web servers and load balancers, storage in the form of disk volumes, and network connections that enable communication between VMs and between VMs and disk volumes. The application is also associated with various requirements, including not only quantities such as the sizes of the VMs and disk volumes, but also quality of service (QoS) attributes such as throughput, latency, and reliability. This paper presents Ostro, an Open Stack-based scheduler that optimizes the utilization of data center resources, while satisfying the requirements of the cloud applications. The novelty of the approach realized by Ostro is that it makes holistic placement decisions, in which all the requirements of an application -- described using an application topology abstraction -- are considered jointly. Specific placement algorithms for application topologies are described including an estimate-based greedy algorithm and a time-bounded A algorithm. These algorithms can deal with complex topologies that have heterogeneous resource requirements, while still being scalable enough to handle the placement of hundreds of VMs and volumes across several thousands of host servers. The approach is evaluated using both extensive simulations and realistic experiments. These results show that Ostro significantly improves resource utilization when compared with naive approaches.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127288249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
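The estimate-based greedy idea can be illustrated with a toy best-fit placer. This is a sketch under simplifications of ours (two resource dimensions, no QoS, affinity, or volume constraints; all names invented), not Ostro's actual algorithm:

```python
# Toy greedy placement: assign each VM to the feasible host that is
# left with the least slack, packing hosts tightly to raise
# utilization. (Illustrative simplification, not Ostro itself.)

def place(vms, hosts):
    """Assign each VM (cpu, disk) to a host, best-fit by leftover slack.

    `hosts` maps host name -> [free_cpu, free_disk] and is mutated as
    capacity is consumed. Returns {vm: host} or None if placement fails.
    """
    placement = {}
    for vm, (cpu, disk) in vms.items():
        feasible = [h for h, (fc, fd) in hosts.items()
                    if fc >= cpu and fd >= disk]
        if not feasible:
            return None  # no host can accommodate this VM
        # best fit: minimize remaining slack after placing the VM
        best = min(feasible,
                   key=lambda h: (hosts[h][0] - cpu) + (hosts[h][1] - disk))
        hosts[best][0] -= cpu
        hosts[best][1] -= disk
        placement[vm] = best
    return placement
```

For example, a small web VM lands on the nearly full host and the larger database VM on the roomier one, rather than both crowding the big host.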
Distributed storage systems are the key infrastructure for hosting the user data of large-scale Online Social Networks (OSNs). The amount of inter-server communication is an important scalability indicator for these systems. Data partitioning and replication are two interrelated issues affecting the inter-server traffic caused by user-initiated read and write operations. This paper investigates the problem of minimizing the total inter-server traffic among a cluster of OSN servers through joint partitioning and replication optimization. We propose a Traffic-Optimized Partitioning and Replication (TOPR) method based on an analysis of how replica allocation affects inter-server communication. Lightweight algorithms are developed to adjust partitioning and replication dynamically according to data read and write rates. Evaluations with real Facebook and Twitter social graphs show that TOPR significantly reduces inter-server communication compared with state-of-the-art methods.
{"title":"Optimizing Inter-server Communication for Online Social Networks","authors":"Jing Tang, Xueyan Tang, Junsong Yuan","doi":"10.1109/ICDCS.2015.30","DOIUrl":"https://doi.org/10.1109/ICDCS.2015.30","url":null,"abstract":"Distributed storage systems are the key infrastructures for hosting the user data of large-scale Online Social Networks (OSNs). The amount of inter-server communication is an important scalability indicator for these systems. Data partitioning and replication are two inter-related issues affecting the inter-server traffic caused by user-initiated read and write operations. This paper investigates the problem of minimizing the total inter-server traffic among a cluster of OSN servers through joint partitioning and replication optimization. We propose a Traffic-Optimized Partitioning and Replication (TOPR) method based on an analysis of how replica allocation affects the inter-server communication. Lightweight algorithms are developed to adjust partitioning and replication dynamically according to data read and write rates. Evaluations with real Facebook and Twitter social graphs show that TOPR significantly reduces the inter-server communication compared with state-of-the-art methods.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134273431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
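The read/write trade-off that drives replica decisions of this kind can be shown in a back-of-the-envelope form. This is our simplification, not TOPR's exact cost model: a replica of user u's data on server s turns remote reads from s into local ones, but forces every write to u's data to propagate to s.

```python
# Toy replica-placement rule (our simplification of the trade-off):
# replicate u's data on s exactly when doing so lowers the
# inter-server traffic between s and u's master server.

def traffic(read_rate, write_rate, replicated):
    """Inter-server transfers per unit time between s and u's master."""
    if replicated:
        return write_rate   # every write must update the replica
    return read_rate        # every read crosses the network

def should_replicate(read_rate, write_rate):
    """Replicate exactly when the replica reduces traffic."""
    return traffic(read_rate, write_rate, True) < traffic(read_rate, write_rate, False)
```

In other words: replicate read-heavy data, keep write-heavy data unreplicated.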
As video streaming applications are deployed on the cloud, cloud providers are charged by ISPs for inter-datacenter transfers under the dominant percentile-based charging models. To minimize payment costs, existing works aim to keep the traffic on each link under the charging volume (i.e., the 95th-percentile traffic volume from the beginning of a charging period up to the current time). However, these methods cannot fully utilize each link's available bandwidth capacity and may increase the charging volumes. To further reduce bandwidth payment costs by fully utilizing link bandwidth, we propose an economical and deadline-driven video flow scheduling system called EcoFlow. Since different video flows have different transmission deadlines, EcoFlow transmits videos in order of deadline tightness and postpones the delivery of later-deadline videos to later time slots, so that the charging volume of the current time interval does not increase. Flows that are expected to miss their deadlines are divided into subflows and rerouted to underutilized links, meeting their deadlines without increasing charging volumes. Experimental results on PlanetLab and EC2 show that, compared to existing methods, EcoFlow achieves the lowest bandwidth costs for cloud providers.
{"title":"EcoFlow: An Economical and Deadline-Driven Inter-datacenter Video Flow Scheduling System","authors":"Yuhua Lin, Haiying Shen, Liuhua Chen","doi":"10.1145/2733373.2806403","DOIUrl":"https://doi.org/10.1145/2733373.2806403","url":null,"abstract":"As video streaming applications are deployed on the cloud, cloud providers are charged by ISPs for inter-data enter transfers under the dominant percentile-based charging models. In order to minimize the payment costs, existing works aim to keep the traffic on each link under the charging volume (i.e., 95th percentile traffic volume from the beginning of a charging period up to current time). However, these methods cannot fully utilize each link's available bandwidth capacity, and may increase the charging volumes. To further reduce the bandwidth payment cost by fully utilizing link bandwidth, we propose an economical and deadline-driven video flow scheduling system, called EcoFlow. Considering different video flows have different transmission deadlines, EcoFlow transmits videos in the order of their deadline tightness and postpones the deliveries of later-deadline videos to later time slots so that the charging volume at current time interval will not increase. The flows that are expected to miss their deadlines are divided into sub flows to be rerouted to other underutilized links in order to meet their deadlines without increasing charging volumes. Experimental results on Planet Lab and EC2 show that compared to existing methods, EcoFlow achieves the least bandwidth costs for cloud providers.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129691347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
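The 95th-percentile charging volume that such schedulers try not to raise is computed in the usual way: sample the link's traffic per interval (commonly every 5 minutes), sort the samples, and bill at the value below which 95% of them fall, so the top 5% of bursts are free. The index convention below is one common choice; ISPs differ in details:

```python
import math

# 95th-percentile billing: the charging volume over a period is the
# 95th percentile of the per-interval traffic samples seen so far.

def charging_volume(samples):
    """95th-percentile of per-interval traffic volumes (e.g. MB/5min)."""
    ordered = sorted(samples)
    # highest sample still within the billed 95%; the top 5% are free
    idx = math.ceil(0.95 * len(ordered)) - 1
    return ordered[idx]
```

With this convention, a scheduler can burst freely in up to 5% of intervals without raising the bill, which is exactly the slack EcoFlow-style deadline postponement exploits.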
The paradigm of cloud computing has prompted wide interest in auction-based mechanisms for cloud resource allocation. To eliminate market manipulation, a number of strategy-proof (a.k.a. truthful) cloud auction mechanisms have recently been proposed that compel bidders to bid their true valuations of the cloud resources. However, as discovered in this paper, they suffer from a new cheating pattern, named false-name bids, where a bidder can gain profit by submitting bids under multiple fictitious names (e.g., multiple e-mail addresses). Such false-name cheating is easy to carry out but hard to detect in cloud auctions. To tackle this issue, we propose FAITH, a new false-name-proof auction for virtual machine instance allocation, which our theoretical analysis proves both strategy-proof and false-name-proof. When N users compete for M different types of computing instances with multiple units, FAITH achieves a lower time complexity of O(N log N + NM) than existing cloud auction designs. We further extend FAITH to support range-based requests, as desired in practice for flexible auctions. Through extensive simulation experiments, we show that FAITH greatly improves auction efficiency, outperforming extended versions of conventional false-name-proof auctions in terms of generated revenue and social welfare by up to 220% and 140%, respectively.
{"title":"eBay in the Clouds: False-Name-Proof Auctions for Cloud Resource Allocation","authors":"Qinhui Wang, Baoliu Ye, Bin Tang, Song Guo, Sanglu Lu","doi":"10.1109/ICDCS.2015.24","DOIUrl":"https://doi.org/10.1109/ICDCS.2015.24","url":null,"abstract":"The paradigm of cloud computing has spontaneously prompted a wide interest in auction-based mechanisms for cloud resource allocation. To eliminate market manipulation, a number of strategy-proof (a.k.a. Truthful) cloud auction mechanisms have been recently proposed by enforcing bidders to bid their true valuations of the cloud resources. However, as discovered in this paper, they would suffer from a new cheating pattern, named false-name bids, where a bidder can gain profit by submitting bids under multiple fictitious names (e.g, Multiple e-mail addresses). Such false-name cheating is easy to make but hard to detect in cloud auctions. To tackle this issue, we propose FAITH, a new False-name-proof Auction for virtual machine instance allocation, that is proven both strategy-proof and false-name proof by our theoretical analysis. When N users compete for M different types of computing instances with multiple units, FAITH achieves a lower time complexity of O(N log N+NM) compared to exiting cloud auction designs. We further extend FAITH to support range-based requests as desired in practice for flexible auction. Through extensive simulation experiments, we show that FAITH highly improves auction efficiency, outperforming the extended mechanisms of conventional false-name-proof auctions in terms of generated revenue and social welfare by up to 220% and 140%, respectively.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130776587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
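The false-name cheating pattern is easy to reproduce on a toy combinatorial VCG auction — a classic illustration of the general vulnerability, not FAITH's setting, and the numbers are invented. A bidder who values the pair of items {A, B} at 10 against a rival valuing the pair at 6 would truthfully win and pay 6; by splitting into two identities that bid on A and B separately, the same bidder wins both items and pays nothing:

```python
from itertools import combinations

# Brute-force VCG for single-minded bidders: each bidder is a
# (bundle, value) pair. Toy-scale only (exponential in #bidders).

def welfare(bidders):
    """Best total value over winner sets with pairwise-disjoint bundles.
    Returns (value, winner indices)."""
    best, best_set = 0, ()
    for r in range(len(bidders) + 1):
        for combo in combinations(range(len(bidders)), r):
            items = [x for i in combo for x in bidders[i][0]]
            if len(items) != len(set(items)):
                continue  # two winners would share an item
            total = sum(bidders[i][1] for i in combo)
            if total > best:
                best, best_set = total, combo
    return best, best_set

def vcg(bidders):
    """Winner -> payment: the externality each winner imposes on the rest."""
    total, winners = welfare(bidders)
    payments = {}
    for w in winners:
        others_without_w, _ = welfare(bidders[:w] + bidders[w + 1:])
        others_with_w = total - bidders[w][1]
        payments[w] = others_without_w - others_with_w
    return payments
```

Truthfully, `vcg([({"A","B"}, 10), ({"A","B"}, 6)])` charges the winner 6; after the split, `vcg([({"A"}, 10), ({"B"}, 10), ({"A","B"}, 6)])` charges both identities 0, so the manipulation nets the bidder an extra 6 — the kind of profit a false-name-proof mechanism must rule out.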
{"title":"Soft Quorums: A High Availability Solution for Service Oriented Stream Systems","authors":"Chunyao Song, Tingjian Ge, Cindy X. Chen, Jie Wang","doi":"10.1007/978-3-319-55699-4_16","DOIUrl":"https://doi.org/10.1007/978-3-319-55699-4_16","url":null,"abstract":"","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123680281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the increasing popularity of Massively Multiplayer Online Games (MMOGs) and the fast growth of mobile gaming, cloud gaming shows great promise over the conventional MMOG model, as it frees players from hardware requirements and game installation on their local computers. However, as graphics rendering is offloaded to the cloud, the data transmission between end-users and the cloud significantly increases response latency and limits user coverage, preventing cloud gaming from achieving high user Quality of Experience (QoE). To solve this problem, previous research suggested deploying more data centers, but this comes at a prohibitive cost. We propose a lightweight system called Cloud Fog, which incorporates a "fog" of super nodes responsible for rendering game videos and streaming them to nearby players. The fog leaves the cloud responsible only for the intensive game-state computation and for sending update information to the super nodes, which significantly reduces traffic and hence latency and bandwidth consumption. Experimental results from PeerSim and PlanetLab show the effectiveness and efficiency of Cloud Fog in increasing user coverage and reducing response latency and bandwidth consumption.
{"title":"Leveraging Fog to Extend Cloud Gaming for Thin-Client MMOG with High Quality of Experience","authors":"Yuhua Lin, Haiying Shen","doi":"10.1109/ICDCS.2015.83","DOIUrl":"https://doi.org/10.1109/ICDCS.2015.83","url":null,"abstract":"With the increasing popularity of Massively Multiplayer Online Game (MMOG) and fast growth of mobile gaming, cloud gaming exhibits great promises over the conventional MMOG gaming model as it frees players from the requirement of hardware and game installation on their local computers. However, as the graphics rendering is offloaded to the cloud, the data transmission between the end-users and the cloud significantly increases the response latency and limits the user coverage, thus preventing cloud gaming to achieve high user Quality of Experience (QoE). To solve this problem, previous research suggested deploying more data centers, but it comes at a prohibitive cost. We propose a lightweight system called Cloud Fog, which incorporates \"fog\" consisting of super nodes that are responsible for rendering game videos and streaming them to their nearby players. Fog enables the cloud to be only responsible for the intensive game state computation and sending update information to super nodes, which significantly reduce the traffic hence the latency and bandwidth consumption. Experimental results from PeerSim and Planet Lab show the effectiveness and efficiency of Cloud Fog in increasing user coverage, reducing response latency and bandwidth consumption.","PeriodicalId":129182,"journal":{"name":"2015 IEEE 35th International Conference on Distributed Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130199754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}