Though it provides intrinsic secrecy, network coding remains vulnerable to eavesdropping attacks, by which an adversary may compromise the confidentiality of message content. Existing studies mainly deal with eavesdroppers that can intercept a limited number of packets. However, real scenarios often involve more capable adversaries, e.g., global eavesdroppers, which can defeat these techniques. In this paper, we propose P-Coding, a novel security scheme against eavesdropping attacks in network coding. With lightweight permutation encryption performed on each message and its coding vector, P-Coding can efficiently thwart global eavesdroppers in a transparent way. Moreover, P-Coding also features scalability and robustness, which enable it to be integrated into practical network-coded systems. Security analysis and simulation results demonstrate the efficacy and efficiency of the P-Coding scheme.
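At its core, this flavor of permutation encryption shuffles the symbol positions of each packet (coding vector plus payload) under a shared key. A minimal sketch, assuming byte-level symbols and a PRNG-derived permutation — both our simplifications, not the paper's exact construction:

```python
import random

def keyed_permutation(n: int, key: int) -> list:
    # Derive a pseudo-random permutation of n symbol positions from the shared key.
    rng = random.Random(key)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def p_encode(packet: bytes, key: int) -> bytes:
    # Permute the symbols of (coding vector || payload) before transmission.
    perm = keyed_permutation(len(packet), key)
    return bytes(packet[i] for i in perm)

def p_decode(cipher: bytes, key: int) -> bytes:
    # A legitimate receiver holding the same key inverts the permutation.
    perm = keyed_permutation(len(cipher), key)
    out = bytearray(len(cipher))
    for j, i in enumerate(perm):
        out[i] = cipher[j]
    return bytes(out)
```

Because the permutation acts uniformly on symbol positions, intermediate nodes can still mix permuted packets, which is in the spirit of the scheme's transparency to network coding.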
Title: "P-Coding: Secure Network Coding against Eavesdropping Attacks" (2010 Proceedings IEEE INFOCOM)
Authors: Peng Zhang, Yixin Jiang, Chuang Lin, Yanfei Fan, Xuemin Shen
Pub Date: 2010-03-14. DOI: 10.1109/INFCOM.2010.5462050
Pub Date: 2010-03-14. DOI: 10.1109/INFCOM.2010.5462065
Hamed Mohsenian Rad, V. Wong, R. Schober
Random access protocols, such as Aloha, are commonly modeled in wireless ad-hoc networks using the protocol model. However, it is well known that the protocol model is not accurate; in particular, it cannot account for aggregate interference from multiple sources. In this paper, we use the more accurate physical model, which is based on the signal-to-interference-plus-noise ratio (SINR), to study optimization-based design in wireless random access systems, where the optimization variables are the transmission probabilities of the users. We focus on throughput maximization, fair resource allocation, and network utility maximization, and show that they entail non-convex optimization problems if the physical model is adopted. We propose two schemes to solve these problems. The first design is centralized and leads to the globally optimal solution using a sum-of-squares technique. However, due to its complexity, this approach is only applicable to small-scale networks. The second design is distributed and leads to a close-to-optimal solution using the coordinate ascent method. This approach is applicable to medium- and large-scale networks. Based on various simulations, we show that it is highly preferable to use the physical model for optimization-based random access design. In this regard, even a sub-optimal design based on the physical model can achieve significantly better performance than an optimal design based on the inaccurate protocol model.
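Under the physical model, a user's slot succeeds only if its SINR clears a threshold given which other users happen to transmit. A sketch of the model (not the paper's optimization), computing one user's exact success probability in slotted Aloha with hypothetical gains and noise:

```python
import itertools

def success_prob(p, gains, noise, beta, user=0):
    # Exact per-slot success probability of `user` under the SINR model:
    # enumerate every on/off pattern of the other users, weight it by its
    # probability, and count it if the resulting SINR clears the threshold beta.
    others = [i for i in range(len(p)) if i != user]
    total = 0.0
    for pattern in itertools.product([0, 1], repeat=len(others)):
        prob = 1.0
        interference = 0.0
        for i, on in zip(others, pattern):
            prob *= p[i] if on else (1 - p[i])
            interference += gains[i] * on
        if gains[user] / (noise + interference) >= beta:
            total += prob
    return p[user] * total
```

Note how the aggregate interference term sums over all simultaneously active users — exactly what the protocol model's pairwise conflict graph cannot express.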
Title: "Optimal SINR-based Random Access" (2010 Proceedings IEEE INFOCOM)
Pub Date: 2010-03-14. DOI: 10.1109/INFCOM.2010.5462144
Anima Anandkumar, Nithin Michael, A. Tang
The problem of cooperative allocation among multiple secondary users to maximize cognitive system throughput is considered. The channel availability statistics are initially unknown to the secondary users and are learnt via sensing samples. Two distributed learning and allocation schemes are proposed, which maximize the cognitive system throughput or, equivalently, minimize the total regret in distributed learning and allocation. The first scheme assumes minimal prior information in the form of pre-allocated ranks for the secondary users, while the second scheme is fully distributed and assumes no such prior information. Both schemes have sum regret that is provably logarithmic in the number of sensing time slots. A lower bound on the regret of any learning scheme, asymptotically logarithmic in the number of slots, is also derived. Hence, our schemes achieve asymptotic order optimality in terms of regret in distributed learning and allocation.
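The logarithmic-regret behavior described here is typified by UCB1-style index policies. A single-user sketch (the paper's schemes additionally handle multiple competing users, which this toy omits):

```python
import math
import random

def ucb1_select(counts, rewards, t):
    # Pick the channel maximizing the UCB1 index:
    # empirical mean availability + exploration bonus sqrt(2 ln t / n_ch).
    for ch in range(len(counts)):
        if counts[ch] == 0:
            return ch  # sense each channel at least once first
    return max(range(len(counts)),
               key=lambda ch: rewards[ch] / counts[ch]
                              + math.sqrt(2.0 * math.log(t) / counts[ch]))

def run(availability, slots, seed=0):
    # One secondary user sensing Bernoulli channels; suboptimal channels are
    # sensed only O(log slots) times, which is where the logarithmic regret comes from.
    rng = random.Random(seed)
    counts = [0] * len(availability)
    rewards = [0.0] * len(availability)
    for t in range(1, slots + 1):
        ch = ucb1_select(counts, rewards, t)
        counts[ch] += 1
        rewards[ch] += 1.0 if rng.random() < availability[ch] else 0.0
    return counts
```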
Title: "Opportunistic Spectrum Access with Multiple Users: Learning under Competition" (2010 Proceedings IEEE INFOCOM)
Pub Date: 2010-03-14. DOI: 10.1109/INFCOM.2010.5461945
Tao Shu, M. Krunz
We study the problem of finding the least-priced path (LPP) between a source and a destination in opportunistic spectrum access (OSA) networks. This problem is motivated by economic considerations, whereby spectrum opportunities are sold/leased to secondary radios (SRs). This incurs a communication cost, e.g., for traffic relaying. As the beneficiary of these services, the end user must compensate the service-providing SRs for their spectrum cost. To give SRs an incentive (i.e., a profit) to report their true cost, the payment to an SR must typically be higher than its actual cost. From the end user's perspective, however, unnecessary overpayment should be avoided. We are therefore interested in the optimal route selection and payment determination mechanism that minimizes the price tag of the selected route while guaranteeing truthful cost reports from SRs. This setup is in contrast to the conventional truthful least-cost path (LCP) problem, where the interest is in finding the minimum-cost route. The LPP problem is investigated with and without capacity constraints at individual SRs. For both cases, our algorithmic solutions can be executed in polynomial time. The effectiveness of our algorithms in terms of price saving is verified through extensive simulations.
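For contrast, the conventional truthful LCP mechanism that the paper departs from can be sketched with VCG-style payments over node (relay) costs — the toy graph, helper names, and node-cost model below are ours, not the paper's LPP algorithm:

```python
import heapq

def cheapest_path(nbrs, cost, src, dst, banned=frozenset()):
    # Dijkstra where the path cost is the sum of the relay costs along it
    # (source and destination have zero cost); `banned` nodes are excluded.
    pq = [(0.0, src, (src,))]
    best = {}
    while pq:
        d, u, path = heapq.heappop(pq)
        if u == dst:
            return d, list(path)
        if u in banned or best.get(u, float("inf")) <= d:
            continue
        best[u] = d
        for v in nbrs.get(u, ()):
            if v not in banned:
                heapq.heappush(pq, (d + cost.get(v, 0.0), v, path + (v,)))
    return float("inf"), []

def vcg_payments(nbrs, cost, src, dst):
    # VCG: pay each winning relay i the cost of the best path avoiding i,
    # minus the winning path's cost excluding i's own report. Truth-telling
    # is then a dominant strategy, but the payment can exceed i's cost --
    # the overpayment the LPP formulation seeks to minimize.
    d_star, path = cheapest_path(nbrs, cost, src, dst)
    pay = {}
    for i in path[1:-1]:
        d_without_i, _ = cheapest_path(nbrs, cost, src, dst, banned={i})
        pay[i] = d_without_i - (d_star - cost[i])
    return path, pay
```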
Title: "Truthful Least-Priced-Path Routing in Opportunistic Spectrum Access Networks" (2010 Proceedings IEEE INFOCOM)
Pub Date: 2010-03-14. DOI: 10.1109/INFCOM.2010.5461928
Yun Wang, Kai Li, Jie Wu
Distance estimation is fundamental for many functionalities of wireless sensor networks and has been studied intensively in recent years. A critical challenge in distance estimation is handling anisotropic problems in sensor networks. Compared with isotropic networks, anisotropic networks are more intractable in that their properties vary according to the directions of measurement. Anisotropic properties result from various factors, such as geographic shapes, irregular radio patterns, node densities, and impacts from obstacles. In this paper, we study the problem of measuring irregularity of sensor networks and evaluating its impact on distance estimation. In particular, we establish a new metric to measure irregularity along a path in sensor networks, and identify turning nodes where a considered path is inflected. Furthermore, we develop an approach to construct a virtual ruler for distance estimation between any pair of sensor nodes. The construction of a virtual ruler is carried out according to distance measurements among beacon nodes. However, it does not require beacon nodes to be deployed uniformly throughout sensor networks. Compared with existing methods, our approach neither assumes global knowledge of boundary recognition nor relies on uniform distribution of beacon nodes. Therefore, this approach is robust and applicable in practical environments. Simulation results show that our approach outperforms some previous methods, such as DVDistance and PDM.
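For context, the DV-Distance-style baseline that the paper outperforms reduces to calibrating an average per-hop length from beacon pairs and scaling hop counts by it — a crude estimate that the anisotropy discussed above distorts. A sketch with our own toy data (the paper's virtual ruler is considerably more refined):

```python
import math

def per_hop_length(beacon_pos, hops):
    # Calibrate the average geographic length of one hop from pairs of
    # beacons with known positions and measured hop counts between them.
    total_dist = 0.0
    total_hops = 0.0
    for (a, b), h in hops.items():
        (xa, ya), (xb, yb) = beacon_pos[a], beacon_pos[b]
        total_dist += math.hypot(xa - xb, ya - yb)
        total_hops += h
    return total_dist / total_hops

def estimate_distance(hop_count, scale):
    # Distance estimate between two arbitrary nodes: hops x calibrated scale.
    # In anisotropic networks the true meters-per-hop varies with direction,
    # which is exactly where this simple ruler breaks down.
    return hop_count * scale
```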
Title: "Distance Estimation by Constructing The Virtual Ruler in Anisotropic Sensor Networks" (2010 Proceedings IEEE INFOCOM)
Pub Date: 2010-03-14. DOI: 10.1109/INFCOM.2010.5462057
Zhisu Zhu, A. M. So, Y. Ye
A fundamental problem in wireless ad-hoc and sensor networks is that of determining the positions of nodes. Often, such a problem is complicated by the presence of nodes whose positions cannot be uniquely determined. Most existing work uses the notion of global rigidity from rigidity theory to address the non-uniqueness issue. However, such a notion is not entirely satisfactory, as it has been shown that even if a network localization instance is known to be globally rigid, the problem of determining the node positions is still intractable in general. In this paper, we propose to use the notion of universal rigidity to bridge this disconnect. Although the notion of universal rigidity is more restrictive than that of global rigidity, it captures a large class of networks and is much more relevant to the efficient solvability of the network localization problem. Specifically, we show that both the problem of deciding whether a given network localization instance is universally rigid and the problem of determining the node positions of a universally rigid instance can be solved efficiently using semidefinite programming (SDP). Then, we give various constructions of universally rigid instances. In particular, we show that trilateration graphs are generically universally rigid, demonstrating not only the richness of the class of universally rigid instances, but also that trilateration graphs possess much stronger geometric properties than previously known. Finally, we apply our results to design a novel edge sparsification heuristic that can reduce the size of the input network while provably preserving its original localization properties. One application of this heuristic is to speed up existing convex optimization-based localization algorithms. Simulation results show that our speedup approach compares very favorably with existing ones, both in terms of accuracy and computation time.
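The trilateration step underlying trilateration graphs is elementary: with three anchors and exact distances, subtracting the circle equations pairwise yields a 2x2 linear system. A sketch of just that step (not the paper's SDP machinery), using Cramer's rule:

```python
def trilaterate(anchors, dists):
    # Locate a node in the plane from three anchors (x_i, y_i) and exact
    # distances d_i. Subtracting the equation of circle 1 from circles 2 and 3
    # cancels the quadratic terms, leaving a linear 2x2 system A [x, y]^T = b.
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2.0 * (x2 - x1), 2.0 * (y2 - y1)
    a21, a22 = 2.0 * (x3 - x1), 2.0 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # nonzero iff the anchors are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

A trilateration graph extends this greedily: each new node has edges to three previously placed nodes, so it can be positioned by exactly this computation.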
Title: "Universal Rigidity: Towards Accurate and Efficient Localization of Wireless Networks" (2010 Proceedings IEEE INFOCOM)
Pub Date: 2010-03-14. DOI: 10.1109/INFCOM.2010.5462037
Yao Hua, Qian Zhang, Z. Niu
Cooperative relay networks combined with Orthogonal Frequency Division Multiple Access (OFDMA) technology have been widely recognized as a promising candidate for future cellular infrastructure, due to the performance enhancement enabled by flexible resource allocation schemes. The majority of existing schemes aim to optimize single-cell performance gain. However, the higher frequency reuse factor and smaller cell sizes lead to a severe inter-cell interference problem. Therefore, the multi-cell allocation of subcarriers, time scheduling, and power should be jointly considered to alleviate this problem. In this paper, the joint resource allocation problem is formulated. Given the high complexity of the optimal solution, a two-stage resource allocation scheme is proposed. In the first stage, the users in each cell are selected sequentially, and joint subcarrier allocation and scheduling is conducted for the selected users without considering interference. In the second stage, optimal power control is performed by the geometric programming method. Simulation results show that the proposed interference-aware resource allocation scheme improves system capacity compared with existing schemes; edge users in particular benefit the most.
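The interference-free first stage can be illustrated by a toy per-cell assignment: give each subcarrier to the user with the best gain on it, then evaluate the resulting Shannon sum rate. The gain matrix and equal-power assumption below are our illustrations, not the paper's setup:

```python
import math

def assign_subcarriers(gains):
    # Stage one (sketch): each subcarrier k goes to the user u with the
    # largest channel gain gains[u][k], ignoring inter-cell interference,
    # as the scheme's first stage does.
    return [max(range(len(gains)), key=lambda u: gains[u][k])
            for k in range(len(gains[0]))]

def sum_rate(gains, assignment, power, noise):
    # Resulting cell capacity with equal power per subcarrier:
    # sum over subcarriers of log2(1 + P * g / N0).
    return sum(math.log2(1.0 + power * gains[u][k] / noise)
               for k, u in enumerate(assignment))
```

Stage two would then retune the per-subcarrier powers across cells (via geometric programming in the paper) with the interference actually accounted for.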
Title: "Resource Allocation in Multi-cell OFDMA-based Relay Networks" (2010 Proceedings IEEE INFOCOM)
Pub Date: 2010-03-14. DOI: 10.1109/INFCOM.2010.5461913
J. Liebeherr, A. Burchard, F. Ciucu
Traffic with self-similar and heavy-tailed characteristics has been widely reported in networks, yet only a few analytical results are available for predicting the delay performance of such networks. We address a particularly difficult type of heavy-tailed traffic, where only the first moment can be computed, and present the first non-asymptotic end-to-end delay bounds for such traffic. The derived performance bounds are non-asymptotic in that they do not assume a steady-state, large-buffer, or many-sources regime. Our analysis considers a multi-hop path of fixed-capacity links with heavy-tailed self-similar cross traffic at each node. A key contribution of the analysis is a probabilistic sample-path bound for heavy-tailed arrival and service processes, which is based on a scale-free sampling method. We explore how delays scale as a function of the length of the path, and compare them with lower bounds. A comparison with simulations illustrates pitfalls when simulating self-similar heavy-tailed traffic, providing further evidence for the need for analytical bounds.
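The simulation pitfall is visible already when generating such traffic: with a Pareto shape parameter between 1 and 2, the mean exists but the variance does not, so empirical averages converge painfully slowly. A sketch with inverse-CDF sampling (parameters illustrative; not the paper's traffic model):

```python
import random

def pareto_inv(u, alpha, xm):
    # Inverse CDF of Pareto(alpha, xm): for 1 < alpha <= 2 only the first
    # moment is finite -- the regime of traffic the paper targets.
    return xm * (1.0 - u) ** (-1.0 / alpha)

def sample_mean(alpha, xm, n, seed=1):
    # Empirical mean of n draws. For heavy tails, rare huge samples dominate,
    # so short simulations systematically underestimate delay-driving bursts.
    rng = random.Random(seed)
    return sum(pareto_inv(rng.random(), alpha, xm) for _ in range(n)) / n
```

With alpha close to 1, even millions of samples can miss the tail events that determine queueing delay — a concrete argument for analytical bounds.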
Title: "Non-asymptotic Delay Bounds for Networks with Heavy-Tailed Traffic" (2010 Proceedings IEEE INFOCOM)
Pub Date: 2010-03-14. DOI: 10.1109/INFCOM.2010.5462240
Xiaoqing Zhu, Rong Pan, Nandita Dukkipati, V. Subramanian, F. Bonomi
This paper presents a novel scheme, Layered Internet Video Engineering (LIVE), in which network nodes feed back virtual congestion levels to video senders to assist both media-aware bandwidth sharing and transient loss protection. The video senders respond to such feedback by adapting the rates of encoded H.264/SVC streams based on their respective video rate-distortion (R-D) characteristics. The same feedback is employed to calculate the amount of forward error correction (FEC) protection for combating transient losses. Simulation studies show that LIVE can minimize the total distortion of all participating video streams and hence maximize their overall quality. At steady state, video streams experience no queuing delays or packet losses. In the face of transient congestion, the network-assisted adaptive FEC effectively protects video packets from losses while keeping overhead to a minimum. Our theoretical analysis further guarantees system stability for an arbitrary number of streams with arbitrary round-trip delays below a prescribed limit. Finally, we show that LIVE streams can coexist with TCP flows within the existing explicit congestion notification (ECN) framework.
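Sizing the FEC protection can be illustrated with an ideal (k, n) erasure code: pick the smallest number of parity packets so the k-packet video block is recoverable with a target probability under an i.i.d. loss rate. The loss model and target are our assumptions — LIVE derives the loss estimate from the network's virtual congestion feedback:

```python
from math import comb

def block_ok(n, k, p):
    # P(at least k of n packets arrive) under i.i.d. loss rate p -- enough
    # for an ideal (k, n) erasure code to recover the whole block.
    return sum(comb(n, i) * (1 - p) ** i * p ** (n - i) for i in range(k, n + 1))

def fec_overhead(k, p, target=0.99, max_parity=64):
    # Smallest number of parity packets r such that a (k, k + r) code
    # recovers the k-packet block with probability >= target.
    for r in range(max_parity + 1):
        if block_ok(k + r, k, p) >= target:
            return r
    return max_parity
```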
Title: "Layered Internet Video Engineering (LIVE): Network-Assisted Bandwidth Sharing and Transient Loss Protection for Scalable Video Streaming" (2010 Proceedings IEEE INFOCOM)
Pub Date: 2010-03-14. DOI: 10.1109/INFCOM.2010.5461905
M. Dinitz
In this paper we consider the problem of maximizing wireless network capacity (a.k.a. one-shot scheduling) in both the protocol and physical models. We give the first distributed algorithms with provable guarantees in the physical model, and show how they can be generalized to more complicated metrics and settings in which the physical assumptions are slightly violated. We also give the first algorithms in the protocol model that do not assume transmitters can coordinate with their neighbors in the interference graph, so every transmitter chooses whether to broadcast based purely on local events. Our techniques draw heavily from algorithmic game theory and machine learning theory, even though our goal is a distributed algorithm. Indeed, our main results allow every transmitter to run any algorithm it wants, so long as its algorithm has a learning-theoretic property known as no-regret in a game-theoretic setting.
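The no-regret property invoked here is satisfied by, for example, the multiplicative-weights (Hedge) family of algorithms. A minimal single-player sketch (learning rate and payoff sequence are illustrative; the paper's setting has every transmitter running such an algorithm concurrently):

```python
import math

def hedge(payoff_rows, eta=0.5):
    # Multiplicative-weights (Hedge), a standard no-regret algorithm:
    # after each round, scale each action's weight by exp(eta * payoff)
    # and play actions with probability proportional to their weights.
    n = len(payoff_rows[0])
    w = [1.0] * n
    for payoffs in payoff_rows:
        w = [wi * math.exp(eta * g) for wi, g in zip(w, payoffs)]
    s = sum(w)
    return [x / s for x in w]  # final action distribution
```

Average regret against the best fixed action vanishes at rate O(sqrt(log n / T)), which is the learning-theoretic guarantee the main results rely on.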
Title: "Distributed Algorithms for Approximating Wireless Network Capacity" (2010 Proceedings IEEE INFOCOM)