To delay or not: Temporal vaccination games on networks
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524351
Abhijin Adiga, S. Venkatramanan, A. Vullikanti
Interventions such as vaccination or installing anti-virus software are common strategies for controlling the spread of epidemics and malware on complex networks. Typically, nodes decide independently whether to implement such an intervention, depending on the costs they incur. A node can be protected by herd immunity if enough other nodes implement the intervention, which makes determining strategic vaccination decisions a natural game-theoretic problem. There has been considerable work on vaccination and network security game models, but these models assume that vaccination decisions are made at the start of the game. In practice, however, many individuals defer their vaccination decision, and the reasons for this behavior are not well understood, especially in network models. In this paper, we study a novel repeated game formulation that considers vaccination decisions over time. We characterize Nash equilibria and the social optimum in such games, and find that, in general, a significant fraction of vaccinations may be deferred; this depends crucially on the network structure, the information available to nodes, and the vaccination delay. We show that finding Nash equilibria and the social optimum are NP-hard in general, and we develop an approximation algorithm for the social optimum whose approximation guarantee depends on the delay.
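The repeated-game flavor of this model can be illustrated with a small simulation. The sketch below runs myopic best-response vaccination rounds on a random contact graph; the payoff model, cost values, and exposure estimate are our own simplifications, not the paper's formulation.

```python
# Toy sketch (not the paper's formulation): myopic best-response vaccination
# rounds on a random contact graph. The cost values and the exposure estimate
# are illustrative assumptions.
import networkx as nx

def best_response_round(G, vaccinated, vacc_cost, inf_prob, inf_cost):
    """One round of simultaneous best responses; returns the new vaccination set."""
    new_vacc = set(vaccinated)
    for v in G.nodes:
        if v in vaccinated:
            continue
        exposed = sum(1 for u in G.neighbors(v) if u not in vaccinated)
        expected_cost_if_unvaccinated = inf_cost * (1 - (1 - inf_prob) ** exposed)
        if expected_cost_if_unvaccinated > vacc_cost:
            new_vacc.add(v)
    return new_vacc

G = nx.erdos_renyi_graph(50, 0.1, seed=1)
vacc, rounds = set(), 0
for rounds in range(1, 11):                 # repeated rounds capture deferral over time
    nxt = best_response_round(G, vacc, vacc_cost=1.0, inf_prob=0.2, inf_cost=5.0)
    if nxt == vacc:                         # no node wants to change: a fixed point
        break
    vacc = nxt
print(f"{len(vacc)}/{G.number_of_nodes()} nodes vaccinated after {rounds} rounds")
```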
{"title":"To delay or not: Temporal vaccination games on networks","authors":"Abhijin Adiga, S. Venkatramanan, A. Vullikanti","doi":"10.1109/INFOCOM.2016.7524351","DOIUrl":"https://doi.org/10.1109/INFOCOM.2016.7524351","url":null,"abstract":"Interventions such as vaccinations or installing anti-virus software are common strategies for controlling the spread of epidemics and malware on complex networks. Typically, nodes decide whether to implement such an intervention independently, depending on the costs they incur. A node can be protected by herd immunity, if enough other nodes implement such an intervention, making the problem of determining strategic decisions for vaccination a natural game-theoretical problem. There has been a lot of work on vaccination and network security game models, but all these models assume the vaccination decisions are made at the start of the game. However, in practice, a lot of individuals defer their vaccination decision, and the reasons for this behavior are not well understood, especially in network models. In this paper, we study a novel repeated game formulation, which considers vaccination decisions over time. We characterize Nash equilibria and the social optimum in such games, and find that a significant fraction of vaccinations might be deferred, in general. This depends crucially on the network structure, and the information and the vaccination delay. We show that finding Nash equilibria and the social optimum are NP-hard in general, and we develop an approximation algorithm for the social optimum whose approximation guarantee depends on the delay.","PeriodicalId":274591,"journal":{"name":"IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications","volume":"595 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116276422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application-aware traffic scheduling for workload offloading in mobile clouds
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524520
Liang Tong, Wei Gao
Mobile Cloud Computing (MCC) bridges the gap between limited capabilities of mobile devices and the increasing complexity of mobile applications, by offloading the computational workloads from local devices to the cloud. Current research supports workload offloading through appropriate application partitioning and remote method execution, but generally ignores the impact of wireless network characteristics on such offloading. Wireless data transmissions incurred by remote method execution consume a large amount of additional energy during transmission intervals when the network interface stays in the high-power state, and deferring these transmissions increases the response delay of mobile applications. In this paper, we adaptively balance the tradeoff between energy efficiency and responsiveness of mobile applications by developing application-aware wireless transmission scheduling algorithms. We take both causality and run-time dynamics of application method executions into account when deferring wireless transmissions, so as to minimize the wireless energy cost and satisfy the application delay constraint with respect to the practical system contexts. Systematic evaluations show that our scheme significantly improves the energy efficiency of workload offloading over realistic smartphone applications.
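To make the energy/delay tradeoff concrete, the following sketch batches deferrable uploads so that the radio's high-power tail interval is shared by several transmissions while each request waits at most a fixed delay budget. The energy constants and the batching rule are illustrative assumptions, not the paper's scheduler.

```python
# Illustrative sketch only (not the paper's scheduler): defer and batch uploads
# within a per-request delay budget so that the radio's high-power "tail"
# energy is amortized over several transmissions. All constants are assumptions.
TAIL_ENERGY_J = 0.60          # assumed fixed energy cost per high-power interval
SEND_ENERGY_J_PER_KB = 0.01   # assumed marginal energy per KB transmitted

def schedule(requests, delay_budget):
    """requests: list of (arrival_time_s, size_kb). Returns (flush_times, energy_j)."""
    batches, current = [], []
    for t, size in sorted(requests):
        if current and t - current[0][0] > delay_budget:
            batches.append(current)       # oldest request's budget expired: flush
            current = []
        current.append((t, size))
    if current:
        batches.append(current)
    flush_times = [batch[0][0] + delay_budget for batch in batches]
    energy = sum(TAIL_ENERGY_J + SEND_ENERGY_J_PER_KB * sum(s for _, s in batch)
                 for batch in batches)
    return flush_times, energy

# four method-execution uploads; batching needs two radio wake-ups instead of four
print(schedule([(0.0, 20), (0.3, 5), (2.5, 40), (2.8, 8)], delay_budget=1.0))
```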
{"title":"Application-aware traffic scheduling for workload offloading in mobile clouds","authors":"Liang Tong, Wei Gao","doi":"10.1109/INFOCOM.2016.7524520","DOIUrl":"https://doi.org/10.1109/INFOCOM.2016.7524520","url":null,"abstract":"Mobile Cloud Computing (MCC) bridges the gap between limited capabilities of mobile devices and the increasing complexity of mobile applications, by offloading the computational workloads from local devices to the cloud. Current research supports workload offloading through appropriate application partitioning and remote method execution, but generally ignores the impact of wireless network characteristics on such offloading. Wireless data transmissions incurred by remote method execution consume a large amount of additional energy during transmission intervals when the network interface stays in the high-power state, and deferring these transmissions increases the response delay of mobile applications. In this paper, we adaptively balance the tradeoff between energy efficiency and responsiveness of mobile applications by developing application-aware wireless transmission scheduling algorithms. We take both causality and run-time dynamics of application method executions into account when deferring wireless transmissions, so as to minimize the wireless energy cost and satisfy the application delay constraint with respect to the practical system contexts. Systematic evaluations show that our scheme significantly improves the energy efficiency of workload offloading over realistic smartphone applications.","PeriodicalId":274591,"journal":{"name":"IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116223898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CamK: A camera-based keyboard for small mobile devices
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524400
Yafeng Yin, Qun A. Li, Lei Xie, Shanhe Yi, Ed Novak, Sanglu Lu
Due to the small size of mobile devices, on-screen keyboards are inefficient for text entry. In this paper, we present CamK, a camera-based text-entry method that uses an arbitrary panel (e.g., a piece of paper) with a keyboard layout to input text into small devices. CamK captures images during the typing process and uses image processing techniques to recognize typing behavior. The core of CamK is to extract the keys, track the user's fingertips, and detect and localize keystrokes. To achieve high keystroke-localization accuracy and a low false-positive rate of keystroke detection, CamK introduces initial training and online calibration. Additionally, CamK optimizes computation-intensive modules to reduce latency. We implement CamK on a mobile device running Android. Our experimental results show that CamK achieves above 95% keystroke-localization accuracy, with only 4.8% false-positive keystrokes. Compared to on-screen keyboards, CamK achieves a 1.25X typing speedup for regular text input and 2.5X for random character input.
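A minimal skeleton of such a camera-based pipeline is sketched below (key extraction by thresholding and contours, plus keystroke localization by point-in-box tests); it only mirrors the stages named in the abstract and is not CamK's implementation. Fingertip tracking is omitted.

```python
# Skeleton only, loosely following the pipeline described above (extract keys,
# then detect/localize keystrokes); it is not CamK's code.
import cv2  # OpenCV 4.x API assumed

def extract_keys(frame):
    """Return bounding boxes of key-like dark regions on the printed layout."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]

def locate_keystroke(fingertip, keys):
    """Map an (x, y) fingertip position to the key box that contains it, if any."""
    for (x, y, w, h) in keys:
        if x <= fingertip[0] <= x + w and y <= fingertip[1] <= y + h:
            return (x, y, w, h)
    return None

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    keys = extract_keys(frame)
    print(f"detected {len(keys)} candidate keys")
cap.release()
```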
{"title":"CamK: A camera-based keyboard for small mobile devices","authors":"Yafeng Yin, Qun A. Li, Lei Xie, Shanhe Yi, Ed Novak, Sanglu Lu","doi":"10.1109/INFOCOM.2016.7524400","DOIUrl":"https://doi.org/10.1109/INFOCOM.2016.7524400","url":null,"abstract":"Due to the smaller size of mobile devices, on-screen keyboards become inefficient for text entry. In this paper, we present CamK, a camera-based text-entry method, which uses an arbitrary panel (e.g., a piece of paper) with a keyboard layout to input text into small devices. CamK captures the images during the typing process and uses the image processing technique to recognize the typing behavior. The principle of CamK is to extract the keys, track the user's fingertips, detect and localize the keystroke. To achieve high accuracy of keystroke localization and low false positive rate of keystroke detection, CamK introduces the initial training and online calibration. Additionally, CamK optimizes computation-intensive modules to reduce the time latency. We implement CamK on a mobile device running Android. Our experiment results show that CamK can achieve above 95% accuracy of keystroke localization, with only 4.8% false positive keystrokes. When compared to on-screen keyboards, CamK can achieve 1.25X typing speedup for regular text input and 2.5X for random character input.","PeriodicalId":274591,"journal":{"name":"IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116828597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An O(1)-competitive online caching algorithm for content centric networking
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524444
Ammar Gharaibeh, Abdallah Khreishah, Issa M. Khalil
Since the emergence of Content Centric Networking (CCN) as a new paradigm for content delivery in the Internet, a large body of research has targeted the evaluation or enhancement of CCN caching schemes. Motivated by the need to give Internet Service Providers incentives to perform caching, the increasing deployment of in-network cloudlets, and the low cost of storage devices, we study caching in CCN from an economic point of view, where content providers pay Internet Service Providers in exchange for caching their content items. We propose an online caching algorithm for CCN that does not require exact knowledge of content items' popularities and minimizes the total cost paid by the content providers, where the total cost is the sum of the caching costs and the retrieval costs. Our analysis shows that the proposed algorithm achieves an O(1) competitive ratio when compared to the optimal offline caching scheme that possesses exact knowledge of content items' popularities. We also show through simulations that the proposed algorithm can cut the cost incurred by widely used caching schemes such as Leave Copy Down (LCD) and Leave Copy Everywhere (LCE) by up to 65%.
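One classical way to obtain O(1) competitiveness without popularity knowledge is a rent-or-buy rule: keep paying retrieval costs for an item until they add up to the caching cost, then cache it. The sketch below illustrates that idea only; the class name and cost values are illustrative, and the paper's actual algorithm may differ.

```python
# Hedged sketch: a rent-or-buy ("ski rental") style caching rule, a standard way
# to trade caching cost against retrieval cost with constant competitiveness.
class OnlineCache:
    def __init__(self, caching_cost, retrieval_cost):
        self.caching_cost = caching_cost      # cost paid to the ISP to cache an item
        self.retrieval_cost = retrieval_cost  # cost per request served from the origin
        self.spent = {}                       # accumulated retrieval cost per item
        self.cached = set()

    def request(self, item):
        if item in self.cached:
            return 0.0                        # served from the cache, no new cost
        self.spent[item] = self.spent.get(item, 0.0) + self.retrieval_cost
        if self.spent[item] >= self.caching_cost:
            self.cached.add(item)             # retrievals have "paid for" caching: cache now
            return self.retrieval_cost + self.caching_cost
        return self.retrieval_cost

cache = OnlineCache(caching_cost=5.0, retrieval_cost=1.0)
total = sum(cache.request(x) for x in "aababbbaabbbbb")
print(total, sorted(cache.cached))
```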
{"title":"An O(1)-competitive online caching algorithm for content centric networking","authors":"Ammar Gharaibeh, Abdallah Khreishah, Issa M. Khalil","doi":"10.1109/INFOCOM.2016.7524444","DOIUrl":"https://doi.org/10.1109/INFOCOM.2016.7524444","url":null,"abstract":"Since the emergence of Content Centric Networking (CCN) as a new paradigm for content delivery in the Internet, copious of research targeted the evaluation or the enhancement of CCN caching schemes. Motivated by providing the Internet Service Providers with incentives to perform caching, the increasing deployment of in-network cloudlets, and the low cost of storage devices, we study caching in CCN from an economical point of view, where the content providers pay the Internet Service Providers in exchange for caching their content items. We propose an online caching algorithm for CCN that does not require the exact knowledge of content items' popularities to minimize the total cost paid by the content providers. The total cost here is the sum of the caching costs and the retrieval costs. Our analysis shows that the proposed algorithm achieves an O(1) competitive ratio when compared to the optimal offline caching scheme that possesses the exact knowledge of content items' popularities. We also show through simulations that the proposed algorithm can cut the cost incurred by widely used caching schemes such as Leave Copy Down (LCD) and Leave Copy Everywhere (LCE) by up to 65%.","PeriodicalId":274591,"journal":{"name":"IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123356793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the synchronization bottleneck of OpenStack Swift-like cloud storage systems
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524349
T. Chekam, Ennan Zhai, Zhenhua Li, Yong Cui, K. Ren
As one of the most popular types of cloud storage services, OpenStack Swift and its follow-up systems replicate each data object across multiple storage nodes and leverage object sync protocols to achieve high availability and eventual consistency. The performance of object sync protocols relies heavily on two key parameters: r (the number of replicas of each object) and n (the number of objects hosted by each storage node). In existing tutorials and demos, the default configurations are usually r = 3 and n < 1000, and the object sync process appears to perform well. To better understand object sync protocols, we first set up a lab-scale OpenStack Swift deployment and run experiments with various configurations. We discover that in data-intensive scenarios, e.g., when r > 3 and n ≫ 1000, the object sync process is significantly delayed and produces massive network overhead. We refer to this phenomenon as the sync bottleneck problem. To explore the root cause, we review the source code of OpenStack Swift and find that its object sync protocol uses a fairly simple but network-intensive approach to check the consistency among replicas of objects. In particular, each storage node periodically multicasts the hash values of all its hosted objects to all the other replica nodes, so in a sync round the number of exchanged hash values per node is Θ(n×r). To tackle the problem, we propose a lightweight object sync protocol called LightSync. It reduces the sync overhead remarkably by using two novel building blocks: 1) Hashing of Hashes, which aggregates all the h hash values of each data partition into a single representative hash value with a Merkle tree; and 2) Circular Hash Checking, which checks the consistency of different partition replicas by sending only the aggregated hash value to the clockwise neighbor. This design provably reduces the per-node network overhead from Θ(n×r) to Θ(n/h). In addition, we have implemented LightSync as an open-source patch and applied it to OpenStack Swift, reducing sync delay by up to 28.8× and network overhead by up to 14.2×.
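The two building blocks can be sketched directly from their description: a Merkle-style aggregation of each partition's object hashes, and a consistency check against only the clockwise neighbor. The code below is a simplified illustration, not OpenStack Swift or LightSync source; the partition-to-node layout is assumed.

```python
# Simplified sketch of the two building blocks described above; not Swift code.
import hashlib

def merkle_root(hashes):
    """Hashing of Hashes: fold a partition's object hashes into one root hash."""
    level = [h.encode() for h in sorted(hashes)]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])           # duplicate the last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex() if level else hashlib.sha256(b"").hexdigest()

def circular_check(replica_nodes, partition_hashes):
    """Circular Hash Checking: each node compares its root with its clockwise neighbor's."""
    roots = {n: merkle_root(partition_hashes[n]) for n in replica_nodes}
    out_of_sync = []
    for i, n in enumerate(replica_nodes):
        neighbor = replica_nodes[(i + 1) % len(replica_nodes)]   # clockwise neighbor
        if roots[n] != roots[neighbor]:
            out_of_sync.append((n, neighbor))
    return out_of_sync

nodes = ["node-a", "node-b", "node-c"]
hashes = {"node-a": ["h1", "h2", "h3"], "node-b": ["h1", "h2", "h3"], "node-c": ["h1", "h2"]}
print(circular_check(nodes, hashes))   # node-c's partition is out of sync with node-a's
```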
{"title":"On the synchronization bottleneck of OpenStack Swift-like cloud storage systems","authors":"T. Chekam, Ennan Zhai, Zhenhua Li, Yong Cui, K. Ren","doi":"10.1109/INFOCOM.2016.7524349","DOIUrl":"https://doi.org/10.1109/INFOCOM.2016.7524349","url":null,"abstract":"As one type of the most popular cloud storage services, OpenStack Swift and its follow-up systems replicate each data object across multiple storage nodes and leverage object sync protocols to achieve high availability and eventual consistency. The performance of object sync protocols heavily relies on two key parameters: r (number of replicas for each object) and η (number of objects hosted by each storage node). In existing tutorials and demos, the configurations are usually r = 3 and n <; 1000 by default, and the object sync process seems to perform well. To deep understand object sync protocols, we first make a lab-scale OpenStack Swift deployment and run experiments with various configurations. We discover that in data-intensive scenarios, e.g., when r > 3 and n ≫ 1000, the object sync process is significantly delayed and produces massive network overhead. This phenomenon is referred to as the sync bottleneck problem. Then, to explore the root cause, we review the source code of OpenStack Swift and find that its object sync protocol utilizes a fairly simple and network-intensive approach to check the consistency among replicas of objects. In particular, each storage node is required to periodically multicast the hash values of all its hosted objects to all the other replica nodes. Thus in a sync round, the number of exchanged hash values per node is Θ(n×r). Further, to tackle the problem, we propose a lightweight object sync protocol called LightSync. It remarkably reduces the sync overhead by using two novel building blocks: 1) Hashing of Hashes, which aggregates all the h hash values of each data partition into a single but representative hash value with the Merkle tree; 2) Circular Hash Checking, which checks the consistency of different partition replicas by only sending the aggregated hash value to the clockwise neighbor. Its design provably reduces the per-node network overhead from Θ(n×r) to Θ(n/h). In addition, we have implemented LightSync as an open-source patch and adopted it to OpenStack Swift, thus reducing sync delay by up to 28.8× and network overhead by up to 14.2×.","PeriodicalId":274591,"journal":{"name":"IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123232107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmenting wide-band 802.11 transmissions via unequal packet bit protection
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524450
Yaxiong Xie, Zhenjiang Li, Mo Li, K. Jamieson
Due to frequency-selective fading, modern wideband 802.11 transmissions have unevenly distributed per-bit BERs within a packet. In this paper, we propose to protect packet bits unequally according to their BERs. By doing so, we can best match the effective transmission rate of each bit to the channel condition and improve throughput. The major design challenge lies in deriving an accurate relationship between the frequency-selective channel condition and the decoded packet-bit BERs, all the way through the complex 802.11 PHY layer. Based on our study, we find that a decoding error in a packet bit corresponds to dense errors in the underlying codeword bits, and that the BER can be faithfully approximated by the codeword bit error density. With this observation, we propose UnPKT, a scheme that protects packet bits with different MAC-layer FEC redundancies based on bit-wise BER estimation to augment wide-band 802.11 transmissions. UnPKT is software-implementable and compatible with the existing 802.11 architecture. Extensive evaluations based on Atheros 9580 NICs and GNU Radio platforms show the effectiveness of our design: UnPKT achieves a significant goodput improvement over state-of-the-art approaches.
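As a rough illustration of unequal protection, the sketch below maps per-group BER estimates to MAC-layer parity ratios, giving more redundancy to groups with higher estimated BER. The mapping rule and all numeric constants are placeholders, not UnPKT's actual redundancy selection.

```python
# Illustrative allocation sketch (not UnPKT's scheme): packet-bit groups with
# higher estimated BER receive a larger share of MAC-layer FEC parity.
import math

def allocate_fec(group_bers, base_ber=1e-6, step=0.05, max_ratio=0.5):
    """Return a parity ratio per group; the log-scaled rule is a placeholder."""
    plan = []
    for ber in group_bers:
        if ber <= base_ber:
            ratio = 0.0                                  # already reliable enough
        else:
            ratio = min(max_ratio, step * math.log10(ber / base_ber))
        plan.append(round(ratio, 2))
    return plan

# per-group BERs, e.g. estimated from the codeword bit error density per subcarrier group
bers = [1e-6, 5e-4, 3e-3, 2e-2]
print(allocate_fec(bers))   # low-BER groups get no parity, high-BER groups get more
```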
{"title":"Augmenting wide-band 802.11 transmissions via unequal packet bit protection","authors":"Yaxiong Xie, Zhenjiang Li, Mo Li, K. Jamieson","doi":"10.1109/INFOCOM.2016.7524450","DOIUrl":"https://doi.org/10.1109/INFOCOM.2016.7524450","url":null,"abstract":"Due to frequency selective fading, modern wideband 802.11 transmissions have unevenly distributed bit BERs in a packet. In this paper, we propose to unequally protect packet bits according to their BERs. By doing so, we can best match the effective transmission rate of each bit to channel condition, and improve throughput. The major design challenge lies in deriving an accurate relationship between the frequency selective channel condition and the decoded packet bit BERs, all the way through the complex 802.11 PHY layer. Based on our study, we find that the decoding error of a packet bit corresponds to dense errors in the underlying codeword bits, and the BER can be truthfully approximated by the codeword bit error density. With above observation, we propose UnPKT, scheme that protects packet bits using different MAC-layer FEC redundancies based on bit-wise BER estimation to augment wide-band 802.11 transmissions. UnPKT is software-implementable and compatible with the existing 802.11 architecture. Extensive evaluations based on Atheros 9580 NICs and GNU-Radio platforms show the effectiveness of our design. UnPKT can achieve a significant goodput improvement over state-of-the-art approaches.","PeriodicalId":274591,"journal":{"name":"IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124675030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DiVA: Distributed Voronoi-based acoustic source localization with wireless sensor networks
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524541
Xueshu Zheng, S. Yang, Naigao Jin, Lei Wang, Mathew L. Wymore, D. Qiao
This paper presents DiVA, a novel hybrid range-free and range-based acoustic source localization scheme that uses an ad-hoc network of microphone sensor nodes to produce an accurate estimate of the source's location in the presence of various real-world challenges. DiVA uses range-free pairwise comparisons of sound detection timestamps between local Voronoi neighbors to identify the node closest to the acoustic source, which then estimates the source's location using a constrained range-based method. Through simulation and experimental evaluations, DiVA is shown to be accurate and highly robust, making it practical for real-world applications.
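The range-free step can be illustrated with a toy example: each node compares its sound-detection timestamp with those of its Voronoi (Delaunay) neighbors, and a node that wins every comparison is taken as closest to the source. The constrained range-based refinement is omitted, and the geometry and timestamps below are made up.

```python
# Toy sketch of the range-free step described above; not DiVA's implementation.
import numpy as np
from scipy.spatial import Delaunay

def closest_node(positions, timestamps):
    """Identify the node whose detection time beats all of its Voronoi neighbors."""
    tri = Delaunay(positions)
    n = len(positions)
    neighbors = [set() for _ in range(n)]
    for simplex in tri.simplices:          # Delaunay edges give Voronoi neighbors
        for i in simplex:
            neighbors[i].update(j for j in simplex if j != i)
    for i in range(n):
        if all(timestamps[i] <= timestamps[j] for j in neighbors[i]):
            return i
    return int(np.argmin(timestamps))      # fallback if no clear local winner

pos = np.array([[0, 0], [4, 0], [0, 4], [4, 4], [2, 1.5]])
ts = np.array([2.1, 2.4, 2.6, 2.8, 2.0])   # node 4 hears the source first
print(closest_node(pos, ts))
```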
{"title":"DiVA: Distributed Voronoi-based acoustic source localization with wireless sensor networks","authors":"Xueshu Zheng, S. Yang, Naigao Jin, Lei Wang, Mathew L. Wymore, D. Qiao","doi":"10.1109/INFOCOM.2016.7524541","DOIUrl":"https://doi.org/10.1109/INFOCOM.2016.7524541","url":null,"abstract":"This paper presents DiVA, a novel hybrid range-free and range-based acoustic source localization scheme that uses an ad-hoc network of microphone sensor nodes to produce an accurate estimate of the source's location in the presence of various real-world challenges. DiVA uses range-free pairwise comparisons of sound detection timestamps between local Voronoi neighbors to identify the node closest to the acoustic source, which then estimates the source's location using a constrained range-based method. Through simulation and experimental evaluations, DiVA is shown to be accurate and highly robust, making it practical for real-world applications.","PeriodicalId":274591,"journal":{"name":"IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications","volume":"4 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120839635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate recovery of Internet traffic data: A tensor completion approach
Kun Xie, Lele Wang, Xin Wang, Gaogang Xie, Jigang Wen, Guangxin Zhang
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524463
Inferring the traffic volume of the whole network from partial traffic measurements is increasingly critical for various network engineering tasks, such as traffic prediction, network optimization, and anomaly detection. Previous studies indicate that matrix completion is a possible solution to this problem. However, because a two-dimensional matrix cannot sufficiently capture the spatial-temporal features of traffic data, these approaches fail when the data-missing ratio is high. To fully exploit the hidden spatial-temporal structures of the traffic data, this paper models the traffic data as a 3-way traffic tensor and formulates the traffic data recovery problem as a low-rank tensor completion problem. However, the high computational complexity of conventional tensor completion algorithms prevents their practical application to traffic data recovery. To reduce the computation cost, we propose a novel Sequential Tensor Completion algorithm (STC) which efficiently exploits the tensor decomposition result for the previous traffic data to deduce the tensor decomposition for the current data. To the best of our knowledge, we are the first to model Internet traffic data as a tensor so as to exploit its hidden structures, and to propose a sequential tensor completion algorithm that significantly speeds up the traffic data recovery process. We have done extensive simulations with real traffic traces as the input. The simulation results demonstrate that our algorithm achieves significantly better performance than existing tensor and matrix completion algorithms, even when the data-missing ratio is high.
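To show what low-rank recovery of a 3-way traffic tensor looks like in code, the sketch below fits a rank-R CP model to the observed entries by stochastic gradient descent. This is a generic CP-completion baseline, not the paper's Sequential Tensor Completion (STC) algorithm; all names and values are illustrative.

```python
# Minimal sketch of low-rank (CP) tensor completion via SGD over observed entries.
import numpy as np

def cp_complete(shape, observed, rank=3, lr=0.01, reg=0.01, epochs=200, seed=0):
    """observed: dict {(i, j, k): value}. Returns CP factor matrices (A, B, C)."""
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=0.1, size=(shape[0], rank))
    B = rng.normal(scale=0.1, size=(shape[1], rank))
    C = rng.normal(scale=0.1, size=(shape[2], rank))
    entries = list(observed.items())
    for _ in range(epochs):
        for (i, j, k), x in entries:
            err = np.sum(A[i] * B[j] * C[k]) - x          # prediction error on this entry
            gA, gB, gC = err * B[j] * C[k], err * A[i] * C[k], err * A[i] * B[j]
            A[i] -= lr * (gA + reg * A[i])
            B[j] -= lr * (gB + reg * B[j])
            C[k] -= lr * (gC + reg * C[k])
    return A, B, C

def predict(A, B, C, i, j, k):
    """Reconstruct a missing entry from the learned factors."""
    return float(np.sum(A[i] * B[j] * C[k]))

# usage idea: a traffic tensor indexed by (origin-destination pair, day, time slot),
# obs = {(i, j, k): measured_volume, ...}; A, B, C = cp_complete((144, 7, 288), obs)
```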
{"title":"Accurate recovery of Internet traffic data: A tensor completion approach","authors":"Kun Xie, Lele Wang, Xin Wang, Gaogang Xie, Jigang Wen, Guangxin Zhang","doi":"10.1109/INFOCOM.2016.7524463","DOIUrl":"https://doi.org/10.1109/INFOCOM.2016.7524463","url":null,"abstract":"The inference of traffic volume of the whole network from partial traffic measurements becomes increasingly critical for various network engineering tasks, such as traffic prediction, network optimization, and anomaly detection. Previous studies indicate that the matrix completion is a possible solution for this problem. However, as a two-dimension matrix cannot sufficiently capture the spatial-temporal features of traffic data, these approaches fail to work when the data missing ratio is high. To fully exploit hidden spatial-temporal structures of the traffic data, this paper models the traffic data as a 3-way traffic tensor and formulates the traffic data recovery problem as a low-rank tensor completion problem. However, the high computation complexity incurred by the conventional tensor completion algorithms prevents its practical application for the traffic data recovery. To reduce the computation cost, we propose a novel Sequential Tensor Completion algorithm (STC) which can efficiently exploit the tensor decomposition result for the previous traffic data to deduce the tensor decomposition for the current data. To the best of our knowledge, we are the first to apply the tensor to model Internet traffic data to well exploit their hidden structures and propose a sequential tensor completion algorithm to significantly speed up the traffic data recovery process. We have done extensive simulations with the real traffic trace as the input. The simulation results demonstrate that our algorithm can achieve significantly better performance compared with the literature tensor and matrix completion algorithms even when the data missing ratio is high.","PeriodicalId":274591,"journal":{"name":"IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130541582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal local data exchange in fiber-wireless access network: A joint network coding and device association design
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524477
Jin Wang, K. Lu, Jianping Wang, C. Qiao
For many emerging mobile broadband services and applications, the source and destination are located in the same local region. Consequently, it is very important to design access networks that facilitate efficient local data exchange. Most existing studies in the past few years focus on either the wired or the wireless domain. In this paper, we aim to exploit both. Specifically, we consider a fiber-wireless access network in which a passive optical network (PON) connects densely deployed base stations. In such a scenario, we propose a novel access scheme, NCDA, whose main idea is to utilize both network coding and device association. To understand the potential of NCDA, we first formulate a mixed-integer nonlinear program (MINLP) to minimize the weighted number of packet transmissions (WNT), which relates to both system capacity and energy consumption. We then theoretically analyze tight upper bounds on the minimal WNT in the PON, which allows us to approximate the original problem by a mixed-integer linear program (MILP). Next, we develop efficient algorithms based on linear programming relaxation to solve the optimal NCDA problem. To validate our design, we conduct extensive simulation experiments, which demonstrate the impact of important network parameters and the promising potential of the proposed scheme.
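The benefit of network coding for local exchange can be seen in a two-device toy case: instead of forwarding both native packets downstream, the head end broadcasts their XOR and each device decodes with the packet it already holds. This motivating example is ours and does not capture the NCDA optimization or the device-association decisions.

```python
# Toy illustration of why network coding can cut downstream transmissions in the
# PON for a local exchange between devices a and b; not the NCDA algorithm.
def xor_bytes(p, q):
    return bytes(x ^ y for x, y in zip(p, q))

pkt_a = b"hello-from-a"           # uplink packet from device a (destined to b)
pkt_b = b"hello-from-b"           # uplink packet from device b (destined to a)
coded = xor_bytes(pkt_a, pkt_b)   # a single coded downstream broadcast

# each device decodes the broadcast with the packet it already sent
assert xor_bytes(coded, pkt_a) == pkt_b    # device a recovers b's packet
assert xor_bytes(coded, pkt_b) == pkt_a    # device b recovers a's packet
# result: 2 uplink + 1 downstream transmissions instead of 2 uplink + 2 downstream
```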
{"title":"Optimal local data exchange in fiber-wireless access network: A joint network coding and device association design","authors":"Jin Wang, K. Lu, Jianping Wang, C. Qiao","doi":"10.1109/INFOCOM.2016.7524477","DOIUrl":"https://doi.org/10.1109/INFOCOM.2016.7524477","url":null,"abstract":"For many emerging mobile broadband services and applications, the source and destination are located in the same local region. Consequently, it is very important to design access networks to facilitate efficient local data exchange. In the past few years, most existing studies focus on either the wired or wireless domains. In this paper, we aim to exploit both the wired and wireless domains. Specifically, we consider a Fiber-Wireless access network in which a passive optical network (PON) connects densely deployed base stations. In such a scenario, we propose a novel access scheme, namely, NCDA, where the main idea is to utilize both network coding and device association. To understand the potentials of NCDA, we first formulate a mixed integer nonlinear programming (MINLP) to minimize the weighted number of packet transmissions (WNT), which is related to both the system capacity and energy consumption. We then theoretically analyze the tight upper bounds of the minimal WNT in the PON, which helps us to approximate the original problem by a mixed integer linear programming (MILP). Next, we develop efficient algorithms based on linear programming relaxation to solve the optimal NCDA problem. To validate our design, we conduct extensive simulation experiments, which demonstrate the impact of important network parameters and the promising potentials of the proposed scheme.","PeriodicalId":274591,"journal":{"name":"IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130622653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cooperative data offloading in opportunistic mobile networks
Pub Date: 2016-04-10 | DOI: 10.1109/INFOCOM.2016.7524494
Zongqing Lu, Xiao Sun, T. L. Porta
Opportunistic mobile networks consisting of intermittently connected mobile devices have been exploited for various applications, such as computational offloading and mitigating cellular traffic load. In contrast to existing work, in this paper we focus on cooperatively offloading data among mobile devices to maximize the probability that data is delivered from a mobile device to an intermittently connected remote server or data center within a given time constraint, which we refer to as the cooperative offloading problem. Unfortunately, cooperative offloading is NP-hard. To this end, we design a heuristic algorithm based on a proposed probabilistic framework, which estimates the probability of successful data delivery over an opportunistic path, considering both data size and contact duration. Because global information is unavailable, we further propose a distributed algorithm. The performance of the proposed approaches is evaluated on both synthetic networks and real traces, and simulation results show that cooperative offloading significantly improves the data delivery probability and that both the heuristic and the distributed algorithms outperform other approaches.
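A small Monte Carlo sketch can illustrate the kind of delivery-probability estimate described above, accounting for both data size and contact duration along a multi-hop opportunistic path. The exponential inter-contact and contact-duration assumptions, and all numbers used, are ours rather than the paper's framework.

```python
# Hedged Monte Carlo sketch: probability that an item of a given size traverses
# an opportunistic path before a deadline. For simplicity, a hop fails outright
# if its contact is too short to transfer the whole item.
import random

def delivery_probability(rates, durations, size, bandwidth, deadline, trials=20000):
    """rates[i]: contact rate of hop i; durations[i]: mean contact length of hop i."""
    success = 0
    for _ in range(trials):
        t, delivered = 0.0, True
        for lam, mu in zip(rates, durations):
            t += random.expovariate(lam)                # wait for the next contact
            if t > deadline:
                delivered = False
                break
            contact = random.expovariate(1.0 / mu)      # contact duration
            if contact * bandwidth < size:              # contact too short to transfer
                delivered = False
                break
        success += delivered
    return success / trials

# 3-hop path, 1 MB item, 0.5 MB/s links, 2-hour deadline (all times in seconds)
print(delivery_probability([1/600, 1/900, 1/1200], [60, 45, 30], 1.0, 0.5, 7200))
```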
{"title":"Cooperative data offloading in opportunistic mobile networks","authors":"Zongqing Lu, Xiao Sun, T. L. Porta","doi":"10.1109/INFOCOM.2016.7524494","DOIUrl":"https://doi.org/10.1109/INFOCOM.2016.7524494","url":null,"abstract":"Opportunistic mobile networks consisting of intermittently connected mobile devices have been exploited for various applications, such as computational offloading and mitigating cellular traffic load. Different from existing work, in this paper, we focus on cooperatively offloading data among mobile devices to maximally improve the probability of data delivery from a mobile device to an intermittently connected remote server or data center within a given time constraint, which is referred to as the cooperative offloading problem. Unfortunately, cooperative offloading is NP-hard. To this end, a heuristic algorithm is designed based on the proposed probabilistic framework, which provides the estimation of the probability of successful data delivery over the opportunistic path, considering both data size and contact duration. Due to the lack of global information, a distributed algorithm is further proposed. The performance of the proposed approaches is evaluated based on both synthetic networks and real traces, and simulation results show that cooperative offloading can significantly improve the data delivery probability and the performance of both heuristic algorithm and distributed algorithm outperforms other approaches.","PeriodicalId":274591,"journal":{"name":"IEEE INFOCOM 2016 - The 35th Annual IEEE International Conference on Computer Communications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129457127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}