Interference coordination in wireless networks: A flow-level perspective
2013 Proceedings IEEE INFOCOM. Pub Date: 2013-04-14. DOI: 10.1109/INFCOM.2013.6567094
Richard Combes, Z. Altman, E. Altman
In dense wireless networks, inter-cell interference severely limits the capacity and quality of service perceived by users. Previous work has shown that approaches based on frequency reuse provide substantial capacity gains. We model a wireless network with Inter-Cell Interference Coordination (ICIC) at the flow level, where users arrive and depart dynamically, in order to optimize quality of service indicators perceived by users, such as file transfer time for elastic traffic. We propose an algorithm that tunes the parameters of ICIC schemes automatically based on measurements. The convergence of the algorithm to a local optimum is proven, and a heuristic to improve its convergence speed is given. Numerical experiments show that the distance between local optima and the global optimum is very small, and that the algorithm is fast enough to track changes in traffic on the time scale of hours. The proposed algorithm can be implemented in a distributed way with very little signaling load.
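As a rough illustration of measurement-driven parameter tuning (not the paper's actual scheme), the sketch below runs a finite-difference stochastic-approximation loop on a toy, noisy transfer-time function; the objective function, parameter range, and step-size schedule are all hypothetical.

```python
import random

def measure_mean_transfer_time(theta):
    """Stand-in for a noisy flow-level measurement of mean file
    transfer time under ICIC parameter vector theta (hypothetical)."""
    # Toy convex surrogate plus measurement noise.
    return sum((t - 0.6) ** 2 for t in theta) + random.gauss(0, 0.01)

def tune_icic(theta, rounds=200, delta=0.05, step0=0.5):
    """Measurement-driven tuning loop: finite-difference stochastic
    approximation with diminishing step sizes, projected onto [0, 1]."""
    for n in range(1, rounds + 1):
        step = step0 / n  # diminishing step size, Robbins-Monro style
        for i in range(len(theta)):
            up, down = list(theta), list(theta)
            up[i] = min(1.0, theta[i] + delta)
            down[i] = max(0.0, theta[i] - delta)
            grad_i = (measure_mean_transfer_time(up)
                      - measure_mean_transfer_time(down)) / (2 * delta)
            theta[i] = min(1.0, max(0.0, theta[i] - step * grad_i))
    return theta

print(tune_icic([0.2, 0.9, 0.5]))
```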
{"title":"Interference coordination in wireless networks: A flow-level perspective","authors":"Richard Combes, Z. Altman, E. Altman","doi":"10.1109/INFCOM.2013.6567094","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6567094","url":null,"abstract":"In dense wireless networks, inter-cell interference highly limits the capacity and quality of service perceived by users. Previous work has shown that approaches based on frequency reuse provide important capacity gains. We model a wireless network with Inter-Cell Interference Coordination (ICIC) at the flow level where users arrive and depart dynamically, in order to optimize quality of service indicators perceivable by users such as file transfer time for elastic traffic. We propose an algorithm to tune the parameters of ICIC schemes automatically based on measurements. The convergence of the algorithm to a local optimum is proven, and a heuristic to improve its convergence speed is given. Numerical experiments show that the distance between local optima and the global optimum is very small, and that the algorithm is fast enough to track changes in traffic on the time scale of hours. The proposed algorithm can be implemented in a distributed way with very small signaling load.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114345964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PACE: Policy-Aware Application Cloud Embedding
2013 Proceedings IEEE INFOCOM. Pub Date: 2013-04-14. DOI: 10.1109/INFCOM.2013.6566849
Erran L. Li, Vahid Liaghat, Hongze Zhao, M. Hajiaghayi, Dan Li, G. Wilfong, Y. Yang, Chuanxiong Guo
The emergence of new capabilities such as virtualization and elastic (private or public) cloud computing infrastructures has made it possible to deploy multiple applications, on demand, on the same cloud infrastructure. A major challenge in realizing this possibility, however, is that modern applications are typically distributed, structured systems that include not only computational and storage entities, but also policy entities (e.g., load balancers, firewalls, intrusion prevention boxes). Deploying applications on a cloud infrastructure without the policy entities may introduce substantial policy violations and/or security holes. In this paper, we present PACE: the first systematic framework for Policy-Aware Application Cloud Embedding. We precisely define the policy-aware cloud application embedding problem, study its complexity, and introduce simple, efficient, online primal-dual algorithms to embed applications in cloud data centers. We conduct evaluations using data from a real, large campus network and a realistic data center topology to assess the feasibility and performance of PACE. We show that deployment in a cloud without considering in-network policies may lead to a large number of policy violations (e.g., using tree routing to enforce in-network policies may incur up to 91% policy violations). We also show, by comparison with a strong online fractional embedding algorithm, that our embedding algorithms are very efficient.
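As a loose illustration of online, policy-aware placement in the primal-dual spirit (not the PACE algorithms themselves), the sketch below assigns each application component to a server that satisfies its required policy set, using an exponential-cost rule so that lightly loaded servers are preferred; servers, capacities, and policy tags are hypothetical.

```python
import math

# Hypothetical inputs: servers with capacity and the set of policy
# middleboxes (e.g. firewall, load balancer) reachable on their path.
servers = [
    {"name": "s1", "cap": 8.0, "load": 0.0, "policies": {"fw", "lb"}},
    {"name": "s2", "cap": 8.0, "load": 0.0, "policies": {"fw"}},
    {"name": "s3", "cap": 4.0, "load": 0.0, "policies": {"fw", "lb", "ips"}},
]

def place(demand, required_policies):
    """Online placement of one application component.

    Exponential-cost rule in the spirit of online primal-dual packing:
    the marginal cost of a server grows as exp(load/cap), so lightly
    loaded servers that satisfy the policy chain are preferred."""
    feasible = [s for s in servers
                if required_policies <= s["policies"]
                and s["load"] + demand <= s["cap"]]
    if not feasible:
        return None  # reject: would violate policy or capacity
    best = min(feasible,
               key=lambda s: math.exp(s["load"] / s["cap"]) * demand / s["cap"])
    best["load"] += demand
    return best["name"]

for vm, policies in [(2.0, {"fw"}), (3.0, {"fw", "lb"}), (1.0, {"ips"})]:
    print(place(vm, policies))
```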
{"title":"PACE: Policy-Aware Application Cloud Embedding","authors":"Erran L. Li, Vahid Liaghat, Hongze Zhao, M. Hajiaghayi, Dan Li, G. Wilfong, Y. Yang, Chuanxiong Guo","doi":"10.1109/INFCOM.2013.6566849","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6566849","url":null,"abstract":"The emergence of new capabilities such as virtualization and elastic (private or public) cloud computing infrastructures has made it possible to deploy multiple applications, on demand, on the same cloud infrastructure. A major challenge to achieve this possibility, however, is that modern applications are typically distributed, structured systems that include not only computational and storage entities, but also policy entities (e.g., load balancers, firewalls, intrusion prevention boxes). Deploying applications on a cloud infrastructure without the policy entities may introduce substantial policy violations and/or security holes. In this paper, we present PACE: the first systematic framework for Policy-Aware Application Cloud Embedding. We precisely define the policy-aware, cloud application embedding problem, study its complexity and introduce simple, efficient, online primal-dual algorithms to embed applications in cloud data centers. We conduct evaluations using data from a real, large campus network and a realistic data center topology to evaluate the feasibility and performance of PACE. We show that deployment in a cloud without considering in-network policies may lead to a large number of policy violations (e.g., using tree routing as a way to enforce in-network policies may observe up to 91% policy violations). We also show that our embedding algorithms are very efficient by comparing with a good online fractional embedding algorithm.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114633473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Verifiable private multi-party computation: Ranging and ranking
2013 Proceedings IEEE INFOCOM. Pub Date: 2013-04-14. DOI: 10.1109/INFCOM.2013.6566844
Lan Zhang, Xiangyang Li, Yunhao Liu, Taeho Jung
Existing work on distributed secure multi-party computation (e.g., set operations, dot product, ranking) focuses on the privacy protection aspects, while the verifiability of user inputs and outcomes is neglected. Most existing work assumes that the involved parties will follow the protocol honestly. In practice, a malicious adversary can easily forge his/her input values to achieve incorrect outcomes, or simply lie about the computation results to cheat other parties. In this work, we focus on the problem of verifiable privacy-preserving multi-party computation. We thoroughly analyze the attacks on existing privacy-preserving multi-party computation approaches and design a series of protocols for dot product, ranging and ranking, which are proved to be privacy-preserving and verifiable. We implement our protocols on laptops and mobile phones. The results show that our verifiable private computation protocols are efficient in both computation and communication.
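As a minimal illustration of the privacy-preserving (though not verifiable) dot-product building block, the sketch below uses a toy Paillier cryptosystem with insecure demo parameters; it is not the paper's protocol, which additionally provides verifiability, and the primes and vectors are purely illustrative.

```python
import math, random

# Toy Paillier cryptosystem (insecure demo parameters; Python 3.8+ for pow(x, -1, n)).
p, q = 293, 433                                   # real deployments need ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1) # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

def private_dot(alice_vec, bob_vec):
    """Alice sends encryptions of her vector; Bob combines them
    homomorphically and returns one ciphertext of the dot product."""
    cts = [enc(a) for a in alice_vec]             # Alice's side
    acc = 1
    for c, b in zip(cts, bob_vec):                # Bob's side: sum of a_i * b_i under encryption
        acc = (acc * pow(c, b, n2)) % n2
    return dec(acc)                               # Alice decrypts the result

print(private_dot([3, 1, 4], [2, 7, 1]))          # 3*2 + 1*7 + 4*1 = 17
```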
{"title":"Verifiable private multi-party computation: Ranging and ranking","authors":"Lan Zhang, Xiangyang Li, Yunhao Liu, Taeho Jung","doi":"10.1109/INFCOM.2013.6566844","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6566844","url":null,"abstract":"The existing work on distributed secure multi-party computation, e.g., set operations, dot product, ranking, focus on the privacy protection aspects, while the verifiability of user inputs and outcomes are neglected. Most of the existing works assume that the involved parties will follow the protocol honestly. In practice, a malicious adversary can easily forge his/her input values to achieve incorrect outcomes or simply lie about the computation results to cheat other parities. In this work, we focus on the problem of verifiable privacy preserving multiparty computation. We thoroughly analyze the attacks on existing privacy preserving multi-party computation approaches and design a series of protocols for dot product, ranging and ranking, which are proved to be privacy preserving and verifiable. We implement our protocols on laptops and mobile phones. The results show that our verifiable private computation protocols are efficient both in computation and communication.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114551030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real time market models and prosumer profiling
2013 Proceedings IEEE INFOCOM. Pub Date: 2013-04-14. DOI: 10.1109/INFCOMW.2013.6562867
U. Montanari, Alain Tcheukam Siwe
Decentralized power management systems will play a key role in reducing greenhouse gas emissions and increasing electricity production from alternative energy sources. In this paper, we focus on power market models in which prosumers interact in a distributed environment when buying or selling electric power. We follow the distributed power market model DEZENT. Our contribution is a planning phase for prosumer consumption built on the DEZENT negotiation mechanism. We propose a consumption-planning controller that aims at minimizing the electricity cost incurred by the end of the day. In the paper we discuss the assumptions on which the controller design is based.
{"title":"Real time market models and prosumer profiling","authors":"U. Montanari, Alain Tcheukam Siwe","doi":"10.1109/INFCOMW.2013.6562867","DOIUrl":"https://doi.org/10.1109/INFCOMW.2013.6562867","url":null,"abstract":"Decentralized power management systems will play a key role in reducing greenhouse gas emissions and increasing electricity production through alternative energy sources. In this paper, we focus on power market models in which prosumers interact in a distributed environment during the purchase or sale of electric power. We have chosen to follow the distributed power market model DEZENT. Our contribution is the planning phase of the consumption of prosumers based on the negotiation mechanism of DEZENT. We propose a controller for the planning of the consumption which aims at minimizing the electricity cost achieved at the end of a day. In the paper we discuss the assumptions on which the controller design is based.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116235087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leveraging load migration and basestation consolidation for green communications in virtualized Cognitive Radio Networks
2013 Proceedings IEEE INFOCOM. Pub Date: 2013-04-14. DOI: 10.1109/INFCOM.2013.6566919
Xiang Sheng, Jian Tang, Chenfei Gao, Weiyi Zhang, Chonggang Wang
With wireless resource virtualization, multiple Mobile Virtual Network Operators (MVNOs) can be supported over a shared physical wireless network, and traffic loads in a Base Station (BS) can easily be migrated to more power-efficient BSs in its neighborhood so that idle BSs can be turned off or put to sleep to save power. In this paper, we propose to leverage load migration and BS consolidation for green communications and consider a power-efficient network planning problem in virtualized Cognitive Radio Networks (CRNs), with the objective of minimizing total power consumption while meeting the traffic load demand of each MVNO. First, we present a Mixed Integer Linear Programming (MILP) formulation to provide optimal solutions. Then we present a general optimization framework to guide algorithm design, which solves two subproblems, channel assignment and load allocation, in sequence. For channel assignment, we present a (Δ + 1)-approximation algorithm (where Δ is the maximum number of BSs a BS can potentially interfere with). For load allocation, we present a polynomial-time optimal algorithm for the special case where BSs are power-proportional, as well as two effective heuristic algorithms for the general case. In addition, we present an effective heuristic algorithm that jointly solves the two subproblems. Extensive simulation results show that the proposed algorithms produce close-to-optimal solutions and, moreover, achieve over 45% power savings compared to a baseline algorithm that does not migrate loads or consolidate BSs.
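As a rough illustration of the consolidation idea (not the paper's MILP or approximation algorithms), the sketch below greedily packs aggregate traffic onto the most power-efficient base stations so that the remaining ones can be switched off; all capacities and power figures are hypothetical.

```python
# Toy BS consolidation sketch: fill base stations in order of increasing
# power cost per unit of capacity; BSs receiving no load stay off.
bss = [
    {"name": "bs1", "cap": 100.0, "fixed_w": 800.0, "per_unit_w": 2.0},
    {"name": "bs2", "cap": 60.0,  "fixed_w": 300.0, "per_unit_w": 3.0},
    {"name": "bs3", "cap": 80.0,  "fixed_w": 500.0, "per_unit_w": 2.5},
]

def consolidate(total_demand):
    """Return a load plan and its total power; unused BSs draw zero power."""
    plan, power = {}, 0.0
    order = sorted(bss, key=lambda b: b["fixed_w"] / b["cap"] + b["per_unit_w"])
    for b in order:
        if total_demand <= 0:
            break
        load = min(b["cap"], total_demand)
        plan[b["name"]] = load
        power += b["fixed_w"] + b["per_unit_w"] * load
        total_demand -= load
    return plan, power

print(consolidate(120.0))
```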
{"title":"Leveraging load migration and basestaion consolidation for green communications in virtualized Cognitive Radio Networks","authors":"Xiang Sheng, Jian Tang, Chenfei Gao, Weiyi Zhang, Chonggang Wang","doi":"10.1109/INFCOM.2013.6566919","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6566919","url":null,"abstract":"With wireless resource virtualization, multiple Mobile Virtual Network Operators (MVNOs) can be supported over a shared physical wireless network and traffic loads in a Base Station (BS) can be easily migrated to more power-efficient BSs in its neighborhood such that idle BSs can be turned off or put into sleep to save power. In this paper, we propose to leverage load migration and BS consolidation for green communications and consider a power-efficient network planning problem in virtualized Cognitive Radio Networks (CRNs) with the objective of minimizing total power consumption while meeting traffic load demand of each MVNO. First, we present a Mixed Integer Linear Programming (MILP) to provide optimal solutions. Then we present a general optimization framework to guide algorithm design, which solves two subproblems, channel assignment and load allocation, in sequence. For channel assignment, we present a (Δ1)-approximation algorithm (where Δ is the maximum number of BSs a BS can potentially interfere with). For load allocation, we present a polynomial-time optimal algorithm for a special case where BSs are power-proportional as well as two effective heuristic algorithms for the general case. In addition, we present an effective heuristic algorithm that jointly solves the two subproblems. It has been shown by extensive simulation results that the proposed algorithms produce close-to-optimal solutions, and moreover, achieve over 45% power savings compared to a baseline algorithm that does not migrate loads or consolidate BSs.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115476743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the cloud from passive measurements: The Amazon AWS case
2013 Proceedings IEEE INFOCOM. Pub Date: 2013-04-14. DOI: 10.1109/INFCOM.2013.6566769
Ignacio Bermudez, S. Traverso, M. Mellia, M. Munafò
This paper presents a characterization of Amazon Web Services (AWS), the most prominent cloud provider offering computing, storage, and content delivery platforms. Leveraging passive measurements, we explore the EC2, S3 and CloudFront AWS services to unveil their infrastructure, the pervasiveness of the content they host, and their traffic allocation policies. Measurements reveal that most of the content residing on EC2 and S3 is served by one Amazon datacenter, located in Virginia, which appears to be the worst performing one for Italian users. This causes traffic to take long and expensive paths in the network. Since AWS offers no automatic migration and load-balancing policies among different locations, content is exposed to the risk of outages. The CloudFront CDN, on the contrary, shows much better performance thanks to an effective cache selection policy that serves 98% of the traffic from the nearest available cache. CloudFront also exhibits dynamic load-balancing policies, in contrast to the static allocation of instances on EC2 and S3. The information presented in this paper will be useful for developers who entrust AWS with deploying their content, and for researchers aiming to improve cloud design.
{"title":"Exploring the cloud from passive measurements: The Amazon AWS case","authors":"Ignacio Bermudez, S. Traverso, M. Mellia, M. Munafò","doi":"10.1109/INFCOM.2013.6566769","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6566769","url":null,"abstract":"This paper presents a characterization of Amazon's Web Services (AWS), the most prominent cloud provider that offers computing, storage, and content delivery platforms. Leveraging passive measurements, we explore the EC2, S3 and CloudFront AWS services to unveil their infrastructure, the pervasiveness of content they host, and their traffic allocation policies. Measurements reveal that most of the content residing on EC2 and S3 is served by one Amazon datacenter, located in Virginia, which appears to be the worst performing one for Italian users. This causes traffic to take long and expensive paths in the network. Since no automatic migration and load-balancing policies are offered by AWS among different locations, content is exposed to the risks of outages. The CloudFront CDN, on the contrary, shows much better performance thanks to the effective cache selection policy that serves 98% of the traffic from the nearest available cache. CloudFront exhibits also dynamic load-balancing policies, in contrast to the static allocation of instances on EC2 and S3. Information presented in this paper will be useful for developers aiming at entrusting AWS to deploy their contents, and for researchers willing to improve cloud design.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124937482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Double Regression: Efficient spatially correlated path loss model for wireless network simulation
2013 Proceedings IEEE INFOCOM. Pub Date: 2013-04-14. DOI: 10.1109/INFCOM.2013.6566981
Seon-Yeong Han, N. Abu-Ghazaleh, Dongman Lee
The accuracy of wireless network packet simulation critically depends on the quality of the wireless channel models. These models directly affect fundamental network characteristics, such as link quality, transmission range, and the capture effect, as well as their dynamic variation in time and space. Path loss is the stationary component of the channel model, affected by shadowing in the environment. Existing path loss models are inaccurate, require very high measurement or computational overhead, and/or often cannot be made to represent a given environment. This paper contributes a flexible path loss model based on a novel approach: spatially coherent interpolation from available nearby channels, allowing accurate and efficient modeling of path loss. We show that the proposed model, called Double Regression (DR), generates a correlated space, allowing both the sender and the receiver to move without abrupt changes in path loss. Combining DR with a traditional temporal fading model, such as Rayleigh fading, provides an accurate and efficient channel model that we integrate with the NS-2 simulator. We use measurements to validate the accuracy of the model for a number of scenarios. We also show that there is a substantial impact on simulation behavior (e.g., up to 600% difference in throughput for simple scenarios) when path loss is modeled accurately.
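As an illustrative sketch of spatially coherent path loss (not the DR model itself), the code below combines log-distance path loss, shadowing interpolated from a few known points so that nearby positions remain correlated, and Rayleigh fast fading; all parameters and sample points are hypothetical.

```python
import math, random

PL0, N_EXP, D0 = 40.0, 3.0, 1.0   # reference loss (dB), path loss exponent, reference distance (m)
DECORR = 20.0                     # shadowing decorrelation distance (m)
known_shadowing = {(0, 0): 2.1, (50, 0): -1.4, (0, 50): 0.7}   # "measured" shadowing, dB

def shadowing_db(x, y):
    """Distance-weighted interpolation of shadowing from known points,
    with weights decaying exponentially over DECORR metres."""
    num = den = 0.0
    for (px, py), s in known_shadowing.items():
        w = math.exp(-math.hypot(x - px, y - py) / DECORR)
        num, den = num + w * s, den + w
    return num / den

def rayleigh_fading_db():
    """Rayleigh fading gain in dB built from two independent Gaussians."""
    i, q = random.gauss(0, 1), random.gauss(0, 1)
    return 20 * math.log10(math.hypot(i, q) / math.sqrt(2))

def path_loss_db(tx, rx):
    """Log-distance path loss plus spatially interpolated shadowing minus fading."""
    d = max(math.hypot(tx[0] - rx[0], tx[1] - rx[1]), D0)
    mx, my = (tx[0] + rx[0]) / 2, (tx[1] + rx[1]) / 2   # evaluate shadowing at the link midpoint
    return PL0 + 10 * N_EXP * math.log10(d / D0) + shadowing_db(mx, my) - rayleigh_fading_db()

print(path_loss_db((0, 0), (30, 10)))
```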
{"title":"Double Regression: Efficient spatially correlated path loss model for wireless network simulation","authors":"Seon-Yeong Han, N. Abu-Ghazaleh, Dongman Lee","doi":"10.1109/INFCOM.2013.6566981","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6566981","url":null,"abstract":"The accuracy of wireless network packet simulation critically depends on the quality of the wireless channel models. These models directly affect the fundamental network characteristics, such as link quality, transmission range, and capture effect, as well as their dynamic variation in time and space. Path loss is the stationary component of the channel model affected by the shadowing in the environment. Existing path loss models are inaccurate, require very high measurement or computational overhead, and/or often cannot be made to represent a given environment. The paper contributes a flexible path loss model that uses a novel approach for spatially coherent interpolation from available nearby channels to allow accurate and efficient modeling of path loss. We show that the proposed model, called Double Regression (DR), generates a correlated space, allowing both the sender and the receiver to move without abrupt change in path loss. Combining DR with a traditional temporal fading model, such as Rayleigh fading, provides an accurate and efficient channel model that we integrate with the NS-2 simulator. We use measurements to validate the accuracy of the model for a number of scenarios. We also show that there is substantial impact on simulation behavior (e.g., up to 600% difference in throughput for simple scenarios) when path loss is modeled accurately.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124941077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing data access latencies in cloud systems by intelligent virtual machine placement
2013 Proceedings IEEE INFOCOM. Pub Date: 2013-04-14. DOI: 10.1109/INFCOM.2013.6566850
M. Alicherry, T. V. Lakshman
Many cloud applications are data intensive, requiring the processing of large data sets, and the MapReduce/Hadoop architecture has become the de facto processing framework for these applications. Large data sets are stored in data nodes in the cloud, which are typically SAN or NAS devices. Cloud applications process these data sets using a large number of application virtual machines (VMs), with the total completion time being an important performance metric. Many factors affect the total completion time of the processing task, such as the load on the individual servers, the task scheduling mechanism, and communication and data access bottlenecks. One dominant factor that affects completion times for data-intensive applications is the access latency from processing nodes to data nodes. Ideally, one would like to keep all data access local to minimize access latency, but this is often not possible due to the size of the data sets, capacity constraints in processing nodes that prevent VMs from being placed in their ideal locations, and so on. When it is not possible to keep all data access local, one would like to optimize the placement of VMs so that the impact of data access latencies on completion times is minimized. We address this problem of optimized VM placement: given the location of the data sets, we need to determine the locations for placing the VMs so as to minimize data access latencies while satisfying system constraints. We present optimal algorithms for determining the VM locations satisfying various constraints, with objectives that capture natural tradeoffs between minimizing latencies and incurring bandwidth costs. We also consider the problem of incorporating inter-VM latency constraints. In this case, the associated location problem is NP-hard and admits no approximation within a factor of 2 - ϵ for any ϵ > 0. We discuss an effective heuristic for this case and evaluate by simulation the impact of the various tradeoffs in the optimization objectives.
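As a toy illustration of latency-aware placement (not the paper's optimal algorithms), the sketch below assigns each VM to the lowest-latency host with free capacity for that VM's data node; hosts, latencies, and capacities are hypothetical.

```python
# latency[host][data_node] in ms; all values are illustrative.
latency = {
    "h1": {"d1": 0.2, "d2": 1.5},
    "h2": {"d1": 1.1, "d2": 0.3},
    "h3": {"d1": 0.8, "d2": 0.8},
}
capacity = {"h1": 1, "h2": 1, "h3": 2}          # VM slots per host
vms = [("vm1", "d1"), ("vm2", "d1"), ("vm3", "d2"), ("vm4", "d2")]

def place_vms():
    """Greedy placement: each VM goes to the feasible host closest to its data."""
    placement, used = {}, {h: 0 for h in capacity}
    for vm, data_node in vms:
        feasible = [h for h in capacity if used[h] < capacity[h]]
        host = min(feasible, key=lambda h: latency[h][data_node])
        placement[vm] = host
        used[host] += 1
    return placement

print(place_vms())
```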
{"title":"Optimizing data access latencies in cloud systems by intelligent virtual machine placement","authors":"M. Alicherry, T. V. Lakshman","doi":"10.1109/INFCOM.2013.6566850","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6566850","url":null,"abstract":"Many cloud applications are data intensive requiring the processing of large data sets and the MapReduce/Hadoop architecture has become the de facto processing framework for these applications. Large data sets are stored in data nodes in the cloud which are typically SAN or NAS devices. Cloud applications process these data sets using a large number of application virtual machines (VMs), with the total completion time being an important performance metric. There are many factors that affect the total completion time of the processing task such as the load on the individual servers, the task scheduling mechanism, communication and data access bottlenecks, etc. One dominating factor that affects completion times for data intensive applications is the access latencies from processing nodes to data nodes. Ideally, one would like to keep all data access local to minimize access latency but this is often not possible due to the size of the data sets, capacity constraints in processing nodes which constrain VMs from being placed in their ideal location and so on. When it is not possible to keep all data access local, one would like to optimize the placement of VMs so that the impact of data access latencies on completion times is minimized. We address this problem of optimized VM placement - given the location of the data sets, we need to determine the locations for placing the VMs so as to minimize data access latencies while satisfying system constraints. We present optimal algorithms for determining the VM locations satisfying various constraints and with objectives that capture natural tradeoffs between minimizing latencies and incurring bandwidth costs. We also consider the problem of incorporating inter-VM latency constraints. In this case, the associated location problem is NP-hard with no effective approximation within a factor of 2 - ϵ for any ϵ > 0. We discuss an effective heuristic for this case and evaluate by simulation the impact of the various tradeoffs in the optimization objectives.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125056671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient data gathering using Compressed Sparse Functions
2013 Proceedings IEEE INFOCOM. Pub Date: 2013-04-14. DOI: 10.1109/INFCOM.2013.6566785
Liwen Xu, Xiao Qi, Yuexuan Wang, T. Moscibroda
Data gathering is one of the core algorithmic and theoretical problems in wireless sensor networks. In this paper, we propose a novel approach, Compressed Sparse Functions (CSF), to efficiently gather data through the use of highly sophisticated Compressive Sensing techniques. The idea of CSF is to gather a compressed version of a satisfying function (containing all the data) under a suitable function basis, and finally to recover the original data. We show through theoretical analysis that our scheme significantly outperforms state-of-the-art methods in terms of efficiency, while matching them in terms of accuracy. For example, in a binary tree-structured network of n nodes, our solution reduces the number of packets from the best-known O(kn log n) to O(k log² n), where k is a parameter depending on the correlation of the underlying sensor data. Finally, we provide simulations showing that our solution can save up to 80% of communication overhead in a 100-node network. Extensive simulations further show that our solution is robust, high-capacity and low-delay.
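As a generic compressive-sensing illustration (not the CSF scheme itself), the sketch below recovers a k-sparse vector from m ≪ n random measurements with Orthogonal Matching Pursuit, using NumPy; the sizes, sparsity, and measurement matrix are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 16, 3                                        # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)    # k-sparse "sensor data"
A = rng.normal(size=(m, n)) / np.sqrt(m)                   # random measurement matrix
y = A @ x                                                  # m compressed measurements

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
print("true support:     ", sorted(np.flatnonzero(x).tolist()))
print("recovered support:", sorted(np.flatnonzero(x_hat).tolist()))
```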
{"title":"Efficient data gathering using Compressed Sparse Functions","authors":"Liwen Xu, Xiao Qi, Yuexuan Wang, T. Moscibroda","doi":"10.1109/INFCOM.2013.6566785","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6566785","url":null,"abstract":"Data gathering is one of the core algorithmic and theoretic problems in wireless sensor networks. In this paper, we propose a novel approach - Compressed Sparse Functions - to efficiently gather data through the use of highly sophisticated Compressive Sensing techniques. The idea of CSF is to gather a compressed version of a satisfying function (containing all the data) under a suitable function base, and to finally recover the original data. We show through theoretical analysis that our scheme significantly outperforms state-of-the-art methods in terms of efficiency, while matching them in terms of accuracy. For example, in a binary tree-structured network of n nodes, our solution reduces the number of packets from the best-known O(kn log n) to O(k log2 n), where k is a parameter depending on the correlation of the underlying sensor data. Finally, we provide simulations showing that our solution can save up to 80% of communication overhead in a 100-node network. Extensive simulations further show that our solution is robust, high-capacity and low-delay.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123536019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multipath de-fragmentation: Achieving better spectral efficiency in elastic optical path networks
2013 Proceedings IEEE INFOCOM. Pub Date: 2013-04-14. DOI: 10.1109/INFCOM.2013.6566801
Xiaomin Chen, A. Jukan, A. Gumaste
In elastic optical networks, the spectrum contiguity and continuity constraints may cause the so-called spectrum fragmentation issue, which degrades spectrum utilization and is especially critical under dynamic traffic scenarios. In this paper, we propose a novel multipath de-fragmentation method that aggregates spectrum fragments instead of reconfiguring existing spectrum paths. We propose an optimization model based on Integer Linear Programming (ILP) as well as heuristic algorithms, and discuss the practical feasibility of the proposed method. We show that multipath routing is an effective de-fragmentation method, as it improves spectral efficiency and reduces blocking under dynamic traffic conditions. We also show that the differential delay issue does not present an obstacle to the application of multipath de-fragmentation in elastic optical networks.
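As a toy illustration of fragment aggregation over multiple paths (not the paper's ILP or heuristics), the sketch below serves a slot demand by combining the largest contiguous free-slot fragments found on a few candidate paths; the slot masks and path limit are hypothetical.

```python
# '1' = free spectrum slot, '0' = occupied; availability masks are illustrative.
paths = {
    "p1": "1110001100110000",
    "p2": "0001111000011100",
    "p3": "1100000111100000",
}

def fragments(mask):
    """Return (start, length) of each maximal run of free slots."""
    runs, start = [], None
    for i, c in enumerate(mask + "0"):
        if c == "1" and start is None:
            start = i
        elif c == "0" and start is not None:
            runs.append((start, i - start))
            start = None
    return runs

def allocate(demand_slots, max_paths=2):
    """Greedily pick the largest fragments over at most max_paths paths."""
    frags = [(length, path, start)
             for path, mask in paths.items()
             for start, length in fragments(mask)]
    frags.sort(reverse=True)
    chosen, used_paths = [], set()
    for length, path, start in frags:
        if demand_slots <= 0:
            break
        if path in used_paths or len(used_paths) < max_paths:
            take = min(length, demand_slots)
            chosen.append((path, start, take))
            used_paths.add(path)
            demand_slots -= take
    return chosen if demand_slots <= 0 else None   # None = blocked

print(allocate(7))
```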
{"title":"Multipath de-fragmentation: Achieving better spectral efficiency in elastic optical path networks","authors":"Xiaomin Chen, A. Jukan, A. Gumaste","doi":"10.1109/INFCOM.2013.6566801","DOIUrl":"https://doi.org/10.1109/INFCOM.2013.6566801","url":null,"abstract":"In elastic optical networks, the spectrum consecutive and continuous constraints may cause the so-called spectrum fragmentation issue, degrading spectrum utilization, which is especially critical under dynamic traffic scenarios. In this paper, we propose a novel multipath de-fragmentation method which aggregates spectrum fragments instead of reconfiguring existing spectrum paths. We propose an optimization model based on Integer Linear Programming (ILP) and heuristic algorithms and discuss the practical feasibility of the proposed method. We show that multipath routing is an effective de-fragmentation method, as it improves spectral efficiency and reduces blocking under dynamic traffic conditions. We also show that the differential delay issue does not present an obstacle to the application of multipath de-fragmentation in elastic optical networks.","PeriodicalId":206346,"journal":{"name":"2013 Proceedings IEEE INFOCOM","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122653994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}