Compiling packet forwarding rules for switch pipelined architecture
Salaheddine Hamadi, Khalil Blaiech, Petko Valtchev, O. Cherkaoui, R. State
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524421
OpenFlow is a key step in abstracting network functions by separating the control and forwarding planes. However, even with continuous innovation and evolution of the protocol, its adoption on forwarding device targets remains laborious and time-consuming. In this paper, we present a semantic-based approach to packet forwarding design that tailors flow classification to the underlying switch device. Its key idea is to streamline flow classification through rule compiling, thereby optimizing forwarding operations and improving switch resource usage. The compilation exploits rule groupings obtained through Frequent Pattern Mining and Network Calculus to optimize flow classification with respect to the switch's pipelined architecture.
Cooperative data offloading in opportunistic mobile networks
Zongqing Lu, Xiao Sun, T. L. Porta
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524494
Opportunistic mobile networks consisting of intermittently connected mobile devices have been exploited for various applications, such as computational offloading and mitigating cellular traffic load. In contrast to existing work, in this paper we focus on cooperatively offloading data among mobile devices to maximize the probability of data delivery from a mobile device to an intermittently connected remote server or data center within a given time constraint, which we refer to as the cooperative offloading problem. Unfortunately, cooperative offloading is NP-hard. To this end, we design a heuristic algorithm based on a proposed probabilistic framework, which estimates the probability of successful data delivery over the opportunistic path, considering both data size and contact duration. Since global information is not available in practice, we further propose a distributed algorithm. The proposed approaches are evaluated on both synthetic networks and real traces; simulation results show that cooperative offloading significantly improves the data delivery probability and that both the heuristic and the distributed algorithms outperform other approaches.
Accurate recovery of Internet traffic data: A tensor completion approach
Kun Xie, Lele Wang, Xin Wang, Gaogang Xie, Jigang Wen, Guangxin Zhang
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524463
Inferring the traffic volume of a whole network from partial traffic measurements is increasingly critical for various network engineering tasks, such as traffic prediction, network optimization, and anomaly detection. Previous studies indicate that matrix completion is a possible solution to this problem. However, because a two-dimensional matrix cannot sufficiently capture the spatial-temporal features of traffic data, these approaches fail when the data missing ratio is high. To fully exploit the hidden spatial-temporal structure of traffic data, this paper models the traffic data as a 3-way traffic tensor and formulates traffic data recovery as a low-rank tensor completion problem. However, the high computational complexity of conventional tensor completion algorithms prevents their practical application to traffic data recovery. To reduce the computation cost, we propose a novel Sequential Tensor Completion (STC) algorithm, which efficiently exploits the tensor decomposition of the previous traffic data to deduce the decomposition of the current data. To the best of our knowledge, we are the first to model Internet traffic data as a tensor in order to exploit its hidden structure, and the first to propose a sequential tensor completion algorithm that significantly speeds up the traffic data recovery process. We have performed extensive simulations with real traffic traces as input. The results demonstrate that our algorithm achieves significantly better performance than existing tensor and matrix completion algorithms, even when the data missing ratio is high.
Software defined networks: It's about time
Tal Mizrahi, Y. Moses
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524418
With the rise of Software Defined Networks (SDN), there is growing interest in dynamic and centralized traffic engineering, where decisions about forwarding paths are taken dynamically from a network-wide perspective. Frequent path reconfiguration can significantly improve network performance, but should be handled with care so as to minimize disruptions that may occur during network updates. In this paper we introduce Time4, an approach that uses accurate time to coordinate network updates. We characterize a set of update scenarios called flow swaps, for which Time4 is the optimal update approach, yielding less packet loss than existing update approaches. We define the lossless flow allocation problem, and formally show that in environments with frequent path allocation, scenarios that require simultaneous changes at multiple network devices are inevitable. We present the design, implementation, and evaluation of a Time4-enabled OpenFlow prototype. The prototype is publicly available as open source. Our work includes an extension to the OpenFlow protocol that has been adopted by the Open Networking Foundation (ONF) and is now included in OpenFlow 1.5. Our experimental results demonstrate the significant advantages of Time4 compared to other network update approaches.
Application-aware traffic scheduling for workload offloading in mobile clouds
Liang Tong, Wei Gao
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524520
Mobile Cloud Computing (MCC) bridges the gap between limited capabilities of mobile devices and the increasing complexity of mobile applications, by offloading the computational workloads from local devices to the cloud. Current research supports workload offloading through appropriate application partitioning and remote method execution, but generally ignores the impact of wireless network characteristics on such offloading. Wireless data transmissions incurred by remote method execution consume a large amount of additional energy during transmission intervals when the network interface stays in the high-power state, and deferring these transmissions increases the response delay of mobile applications. In this paper, we adaptively balance the tradeoff between energy efficiency and responsiveness of mobile applications by developing application-aware wireless transmission scheduling algorithms. We take both causality and run-time dynamics of application method executions into account when deferring wireless transmissions, so as to minimize the wireless energy cost and satisfy the application delay constraint with respect to the practical system contexts. Systematic evaluations show that our scheme significantly improves the energy efficiency of workload offloading over realistic smartphone applications.
DiVA: Distributed Voronoi-based acoustic source localization with wireless sensor networks
Xueshu Zheng, S. Yang, Naigao Jin, Lei Wang, Mathew L. Wymore, D. Qiao
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524541
This paper presents DiVA, a novel hybrid range-free and range-based acoustic source localization scheme that uses an ad-hoc network of microphone sensor nodes to produce an accurate estimate of the source's location in the presence of various real-world challenges. DiVA uses range-free pairwise comparisons of sound detection timestamps between local Voronoi neighbors to identify the node closest to the acoustic source, which then estimates the source's location using a constrained range-based method. Through simulation and experimental evaluations, DiVA is shown to be accurate and highly robust, making it practical for real-world applications.
Sketch-based data placement among geo-distributed datacenters for cloud storages
Boyang Yu, Jianping Pan
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524627
With the increasing demand for big data applications, a variety of problems on how to operate the supporting infrastructures more intelligently and efficiently have attracted much attention in the literature. Optimizing data placement among distributed network locations is one of the fundamental problems, aiming to facilitate data storage and access. However, traditional schemes face challenges in running time and overhead as datasets grow in scale. We therefore propose a novel data placement scheme based on sketches to overcome these challenges. We first justify the effectiveness of applying hypergraph sparsification to the data placement problem, and then present a method for constructing sparsifiers from sketches of the request traffic. In addition, the scheme supports aggregating distributed sketches when making placement decisions and captures recent traffic patterns through sliding windows. Finally, simulation results confirm that the proposed scheme places data effectively while reducing the introduced overhead in terms of algorithm running time, space, and network traffic.
Reducing dense virtual networks for fast embedding
Toru Mano, Takeru Inoue, Kimihiro Mizutani, Osamu Akashi
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524412
Virtual network embedding has been intensively studied for a decade. The time complexity of most conventional methods has been reduced to the cube of the number of links. Since customers are likely to request a dense virtual network that connects every node pair directly (|E| = O(|V|^2)) based on a traffic matrix, the time complexity is actually O(|E|^3) = O(|V|^6). If we were allowed to reduce this dense network into a sparse one before embedding, the time complexity could be decreased to O(|V|^3); the gap can be a factor of a million for |V| = 100. The network reduction, however, combines several virtual links into a broader link, which makes the embedding cost (solution quality) much worse. This paper analytically and empirically investigates the trade-off between embedding time and cost for virtual network reduction. We define two simple reduction algorithms and analyze them with several theorems. The analysis indicates that the embedding cost increases only linearly with an exponential decay of embedding time. Thorough numerical evaluation justifies the desirability of the trade-off.
Localization of LTE measurement records with missing information
Avik Ray, S. Deb, Pantelis Monogioudis
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524370
As cellular networks such as 4G LTE become more and more sophisticated, mobiles measure and send enormous amounts of measurement data (TBs per week per metropolitan area) during every call and session. These mobile measurement records are saved in data centers for further analysis and mining; however, they are not geo-tagged, because the measurement procedures are implemented in the mobile LTE stack. Geo-tagging (or localizing) the stored measurement records is a fundamental building block for network analytics and troubleshooting, since the records contain rich information on call quality, latency, throughput, signal quality, error codes, etc. In this work, our goal is to localize these mobile measurement records. Precisely, we answer the following question: what was the location of the mobile when it sent a given measurement record? We design and implement novel machine-learning-based algorithms that infer whether a mobile was outdoors and, if so, infer the latitude-longitude associated with the measurement record. The key technical challenge comes from the fact that measurement records do not contain sufficient information for triangulation or RF-fingerprinting techniques to work by themselves. Experiments performed with real data sets from an operational 4G network in a major metropolitan area show that the median accuracy of our proposed solution is around 20 m for outdoor mobiles and that the outdoor classification accuracy is more than 98%.
Synergistic policy and virtual machine consolidation in cloud data centers
Lin Cui, Richard Cziva, Fung Po Tso, D. Pezaros
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524354
In modern cloud Data Centers (DCs), correct implementation of network policies is crucial to provide secure, efficient, and high-performance services for tenants. The inefficient management of network policies is reported to account for 78% of DC downtime, challenged by dynamically changing network characteristics and by the effects of dynamic Virtual Machine (VM) consolidation. While there has been significant research on policy and VM management, the two have so far been treated as disjoint research problems. In this paper, we explore simultaneous, dynamic VM and policy consolidation, and formulate the Policy-VM Consolidation (PVC) problem, which is shown to be NP-hard. We then propose Sync, an efficient and synergistic scheme to jointly consolidate network policies and virtual machines. Extensive evaluation results and a testbed implementation of our controller show that policy and VM migration under Sync reduces flow end-to-end delay by nearly 40% and network-wide communication cost by 50% within a few seconds, while adhering strictly to the requirements of network policies.