Title: Rate and delay for coded caching with carrier aggregation
Authors: N. Karamchandani, S. Diggavi, G. Caire, S. Shamai
Pub Date: 2016-07-10 | DOI: 10.1109/ISIT.2016.7541794
Abstract: Motivated by the ability of modern terminals to receive simultaneously from multiple networks (e.g., WLAN and cellular), we extend the single shared-link network with caching at the user nodes to the case of r parallel, partially shared links, where users in different classes receive from the server simultaneously and in parallel through different sets of links. For this setting, we give an order-optimal rate and (maximal) delay region characterization for the case of r = 2 links with two classes of users, one receiving only from link 1 and the other from both links 1 and 2. We also extend these results to r = 3 with three classes of users, receiving from link 1, from links 1 and 2, and from links 1 and 3, respectively.
{"title":"Rate and delay for coded caching with carrier aggregation","authors":"N. Karamchandani, S. Diggavi, G. Caire, S. Shamai","doi":"10.1109/ISIT.2016.7541794","DOIUrl":"https://doi.org/10.1109/ISIT.2016.7541794","url":null,"abstract":"Motivated by the ability of modern terminals to receive simultaneously from multiple networks (e.g., WLAN and Cellular), we extend the single shared link network with caching at the user nodes to the case of r parallel partially shared links, where users in different classes receive from the server simultaneously and in parallel through different set of links. For this setting, we give an order-optimal rate and (maximal) delay region characterization for the case of r = 2 links with two classes of users, one receiving only from link 1 and the other from both links 1 and 2. We also extend these results to r = 3 with three classes of users, receiving from link 1, from links 1 and 2, and from links 1 and 3, respectively.","PeriodicalId":198767,"journal":{"name":"2016 IEEE International Symposium on Information Theory (ISIT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128700023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: On the relationship between edge removal and strong converses
Authors: O. Kosut, J. Kliewer
Pub Date: 2016-07-10 | DOI: 10.1109/ISIT.2016.7541605
Abstract: This paper explores the relationship between two ideas in network information theory: edge removal and strong converses. Edge removal properties state that if an edge of small capacity is removed from a network, the capacity region does not change too much. Strong converses state that, for rates outside the capacity region, the probability of error converges to 1. Various notions of edge removal and strong converse are defined, depending on how edge capacity and residual error probability scale with blocklength, and relations between them are proved. In particular, each class of strong converse implies a specific class of edge removal. The opposite direction is proved for deterministic networks, and some discussion is given for the noisy case.
{"title":"On the relationship between edge removal and strong converses","authors":"O. Kosut, J. Kliewer","doi":"10.1109/ISIT.2016.7541605","DOIUrl":"https://doi.org/10.1109/ISIT.2016.7541605","url":null,"abstract":"This paper explores the relationship between two ideas in network information theory: edge removal and strong converses. Edge removal properties state that if an edge of small capacity is removed from a network, the capacity region does not change too much. Strong converses state that, for rates outside the capacity region, the probability of error converges to 1. Various notions of edge removal and strong converse are defined, depending on how edge capacity and residual error probability scale with blocklength, and relations between them are proved. In particular, each class of strong converse implies a specific class of edge removal. The opposite direction is proved for deterministic networks, and some discussion is given for the noisy case.","PeriodicalId":198767,"journal":{"name":"2016 IEEE International Symposium on Information Theory (ISIT)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129633240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Optimization of time-switching energy harvesting receivers over multiple transmission blocks
Authors: Zhengwei Ni, M. Motani
Pub Date: 2016-07-10 | DOI: 10.1109/ISIT.2016.7541799
Abstract: Compared with energy-harvesting transmitters, the performance of energy-harvesting receivers has not been fully investigated. At a transmitter, energy is consumed mainly for transmission, whereas at a receiver it is consumed mainly for information decoding; hence, the analysis and optimization of energy-harvesting transmitters and receivers are inherently different. This paper considers the optimization of a communication system using an energy-harvesting receiver. We assume that the receiver antenna operates over a relatively wide range of frequencies, so the receiver can harvest energy both from the in-band signal sent by the transmitter and from other, possibly out-of-band, sources. The receiver adopts a time-switching architecture, i.e., in each block it first harvests energy and then decodes information. We assume that the energy consumed for decoding is a non-decreasing convex function of the normalized code rate and dominates the energy used for other processing tasks. In this setting, we formulate a non-convex optimization problem to maximize the amount of information decoded over multiple blocks, and we solve it by converting it into an equivalent convex problem. We also provide numerical examples to validate the accuracy of our analysis and to compare our scheme with two suboptimal schemes that require less overhead.
{"title":"Optimization of time-switching energy harvesting receivers over multiple transmission blocks","authors":"Zhengwei Ni, M. Motani","doi":"10.1109/ISIT.2016.7541799","DOIUrl":"https://doi.org/10.1109/ISIT.2016.7541799","url":null,"abstract":"Compared with energy-harvesting transmitters, the performance of energy-harvesting receivers has not been fully investigated. The main consumption of energy at transmitters is for transmission, while that at receivers is for information decoding. Hence, the analysis and optimization of energy-harvesting transmitters and receivers are inherently different. This paper considers optimization of a communication system using an energy-harvesting receiver. We assume that the receiver antenna operates over a relatively wide range of frequencies; hence the receiver can harvest energy from both the in-band signal sent by the transmitter and other possibly out-of-band sources. The receiver adopts a time-switching architecture, i.e., in each block, the receiver first harvests energy then decodes information. We assume the energy consumption for decoding is a non-decreasing convex function of the normalized code rate and dominates the energy used for other processing tasks. In this context, we formulate a non-convex optimization problem to maximize the amount of information decoded over multiple blocks. We solve this non-convex problem by converting it into an equivalent convex problem. We also provide numerical examples to validate the accuracy of our analysis and compare our scheme with two suboptimal schemes requiring less overhead.","PeriodicalId":198767,"journal":{"name":"2016 IEEE International Symposium on Information Theory (ISIT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130524202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Two-way spinal codes
Authors: Weiqiang Yang, Ying Li, Xiaopu Yu, Yue Sun
Pub Date: 2016-07-10 | DOI: 10.1109/ISIT.2016.7541633
Abstract: In this paper, we propose a rateless two-way spinal code. The proposed code has two encoding processes: a forward encoding process and a backward encoding process. Unlike the original spinal code, in which each message segment is related only to the coded symbols corresponding to itself and the later message segments, in the proposed code the information of each message segment is conveyed by the coded symbols corresponding to all the message segments. Based on this two-way coding strategy, we propose an iterative decoding algorithm. Different transmission schemes, including symmetric and asymmetric transmission, are also discussed. Our analysis illustrates that asymmetric transmission can be treated as a tradeoff between performance and decoding complexity. Simulation results show that the proposed code outperforms not only the original spinal code but also strong channel codes such as polar codes and Raptor codes.
{"title":"Two-way spinal codes","authors":"Weiqiang Yang, Ying Li, Xiaopu Yu, Yue Sun","doi":"10.1109/ISIT.2016.7541633","DOIUrl":"https://doi.org/10.1109/ISIT.2016.7541633","url":null,"abstract":"In this paper, we propose a rateless two-way spinal code. There exist two encoding processes in the proposed code, i.e., the forward encoding process and the backward encoding process. Rather than the original spinal code, where each message segment only has relationship with the coded symbols corresponding to itself and the later message segments, the information of each message segment of the proposed code is conveyed by the coded symbols corresponding to all the message segments. Based on this two-way coding strategy, we propose an iterative decoding algorithm. Different transmission schemes, including the symmetric transmission and the asymmetric transmission, are also discussed in this paper. Our analysis illustrates that the asymmetric transmission can be treated as a tradeoff between the performance and the decoding complexity. Simulation results show that the proposed code outperforms not only the original spinal code but also some strong channel codes, such as polar codes and raptor codes.","PeriodicalId":198767,"journal":{"name":"2016 IEEE International Symposium on Information Theory (ISIT)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114207085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: MMSE estimation in a sensor network in the presence of an adversary
Authors: Craig Wilson, V. Veeravalli
Pub Date: 2016-07-10 | DOI: 10.1109/ISIT.2016.7541745
Abstract: Estimation in a two-node sensor network is considered, with one sensor of high quality but potentially affected by an adversary and one sensor of low quality but immune to the actions of the adversary. The observations of the sensors are combined at a fusion center to produce an estimate that minimizes the mean square error (MSE) while taking into account the actions of the adversary. An approach based on hypothesis testing is introduced to decide whether the high-quality sensor should be used. The false alarm probability of the hypothesis test introduces a natural trade-off between the MSE performance when the adversary takes no action and the performance when the adversary acts. Finally, a method is developed to select the false alarm probability robustly, to ensure good performance regardless of the adversary's action.
{"title":"MMSE estimation in a sensor network in the presence of an adversary","authors":"Craig Wilson, V. Veeravalli","doi":"10.1109/ISIT.2016.7541745","DOIUrl":"https://doi.org/10.1109/ISIT.2016.7541745","url":null,"abstract":"Estimation in a two node sensor network is considered, with one sensor of high quality but potentially affected by an adversary and one sensor of low quality but immune to the actions of the adversary. The observations of the sensors are combined at a fusion center to produce a minimum mean square error (MSE) estimate taking into account the actions of the adversary. An approach based on hypothesis testing is introduced to decide whether the high quality sensor should be used. The false alarm probability of the hypothesis test introduces a natural trade-off between the MSE performance when the adversary takes no action and when the adversary acts. Finally, a method is developed to select the false alarm probability robustly to ensure good performance regardless of the adversary's action.","PeriodicalId":198767,"journal":{"name":"2016 IEEE International Symposium on Information Theory (ISIT)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121535240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: On (partial) unit memory codes based on Reed-Solomon codes for streaming
Authors: M. Kuijper, M. Bossert
Pub Date: 2016-07-10 | DOI: 10.1109/ISIT.2016.7541433
Abstract: For streaming codes, an erasure channel is assumed, and the decoding delay is one of the main parameters to be considered. In this paper, the erasure-correcting capability of unit memory convolutional codes based on disjoint RS codes is analyzed. We take a sliding-window decoder approach, in which only the most current information is decoded before the window is slid one time step further. We show that, even when the decoding delay is restricted to a small value, these codes still achieve excellent erasure-correction performance. This makes them useful for streaming applications where low latency is required.
{"title":"On (partial) unit memory codes based on Reed-Solomon codes for streaming","authors":"M. Kuijper, M. Bossert","doi":"10.1109/ISIT.2016.7541433","DOIUrl":"https://doi.org/10.1109/ISIT.2016.7541433","url":null,"abstract":"For streaming codes an erasure channel is assumed and the decoding delay is one of the main parameters to be considered. In this paper the erasure correcting capability of unit memory convolutional codes based on disjoint RS codes is analyzed. We take a sliding window decoder approach, where only the most current information is decoded before sliding the window one time-step further. We show that when we restrict the decoding delay to a small value, these codes still achieve an excellent erasure correction performance. This makes these codes useful for streaming applications where low latency is required.","PeriodicalId":198767,"journal":{"name":"2016 IEEE International Symposium on Information Theory (ISIT)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121566929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Low complexity algorithm approaching the ML decoding of binary LDPC codes
Authors: I. Bocharova, B. Kudryashov, Vitaly Skachek, Yauhen Yakimenka
Pub Date: 2016-07-10 | DOI: 10.1109/ISIT.2016.7541790
Abstract: A novel method for decoding low-density parity-check codes over the AWGN channel is presented. In the proposed method, a standard belief-propagation decoder is applied first; then a certain number of positions are erased using a combination of a reliability criterion and a set of masks, and a list erasure decoder is applied to the resulting word. The performance of the proposed method is analyzed mathematically and demonstrated by simulations.
{"title":"Low complexity algorithm approaching the ML decoding of binary LDPC codes","authors":"I. Bocharova, B. Kudryashov, Vitaly Skachek, Yauhen Yakimenka","doi":"10.1109/ISIT.2016.7541790","DOIUrl":"https://doi.org/10.1109/ISIT.2016.7541790","url":null,"abstract":"A novel method for decoding of low-density parity-check codes on the AWGN channel is presented. In the proposed method, first, a standard belief-propagation decoder is applied, then a certain number of positions is erased using a combination of a reliability criterion and a set of masks. A list erasure decoder is then applied to the resulting word. The performance of the proposed method is analyzed mathematically and demonstrated by simulations.","PeriodicalId":198767,"journal":{"name":"2016 IEEE International Symposium on Information Theory (ISIT)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114758407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Converse bounds for interference channels via coupling and proof of Costa's conjecture
Authors: Yury Polyanskiy, Yihong Wu
Pub Date: 2016-07-10 | DOI: 10.1109/ISIT.2016.7541691
Abstract: It is shown that under suitable regularity conditions, differential entropy is O(√n)-Lipschitz as a function of probability distributions on ℝ^n with respect to the quadratic Wasserstein distance. Under similar conditions, (discrete) Shannon entropy is shown to be O(n)-Lipschitz in distributions over the product space with respect to Ornstein's d̅-distance (the Wasserstein distance corresponding to the Hamming distance). These results, together with Talagrand's and Marton's transportation-information inequalities, allow one to replace the unknown multi-user interference with its i.i.d. approximations. As an application, a new outer bound for the two-user Gaussian interference channel is proved, which, in particular, settles the “missing corner point” problem of Costa (1985).
Title: Sequence reconstruction over the deletion channel
Authors: Ryan Gabrys, Eitan Yaakobi
Pub Date: 2016-07-10 | DOI: 10.1109/ISIT.2016.7541568
Abstract: The sequence-reconstruction problem, first proposed by Levenshtein, models a setup in which a sequence from some set is transmitted over several independent channels and the decoder receives the outputs from every channel. The main problem of interest is to determine the minimum number of channels required to reconstruct the transmitted sequence. In the combinatorial context, the problem is equivalent to finding the maximum intersection between two balls of radius t whose centers are at distance at least d. This setup has previously been studied for several error metrics, such as the Hamming metric, the Kendall-tau metric, and the Johnson metric. In this paper, we extend the study initiated by Levenshtein to reconstructing sequences over the deletion channel. While he solved the case where the transmitted word can be arbitrary, we study the setup where the transmitted word belongs to a single-deletion-correcting code and there are t deletions in every channel. Under this paradigm, we study the minimum number of different channel outputs required to construct a successful decoder.
{"title":"Sequence reconstruction over the deletion channel","authors":"Ryan Gabrys, Eitan Yaakobi","doi":"10.1109/ISIT.2016.7541568","DOIUrl":"https://doi.org/10.1109/ISIT.2016.7541568","url":null,"abstract":"The sequence-reconstruction problem, first proposed by Levenshtein, models a setup in which a sequence from some set is transmitted over several independent channels, and the decoder receives the outputs from every channel. The main problem of interest is to determine the minimum number of channels required to reconstruct the transmitted sequence. In the combinatorial context, the problem is equivalent to finding the maximum intersection between two balls of radius t where the distance between their centers is at least d. The setup of this problem was studied before for several error metrics such as the Hamming metric, the Kendall-tau metric, and the Johnson metric. In this paper, we extend the study initiated by Levenshtein for reconstructing sequences over the deletion channel. While he solved the case where the transmitted word can be arbitrary, we study the setup where the transmitted word belongs to a single-deletion-correcting code and there are t deletions in every channel. Under this paradigm, we study the minimum number of different channel outputs in order to construct a successful decoder.","PeriodicalId":198767,"journal":{"name":"2016 IEEE International Symposium on Information Theory (ISIT)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128132183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Erasure broadcast networks with receiver caching
Authors: S. S. Bidokhti, M. Wigger, R. Timo
Pub Date: 2016-07-10 | DOI: 10.1109/ISIT.2016.7541613
Abstract: We study the capacity of a broadcast packet-erasure network with receiver caching. The receivers in the network are divided into two groups: a group of strong receivers with small packet erasure probabilities and a group of weak receivers with large packet erasure probabilities. The weak receivers are provided with local cache memories as compensation for their poor channels. Achievable (lower) and converse (upper) bounds on the optimal capacity-memory tradeoff are derived. The lower bounds are proved using new joint cache-channel coding schemes that significantly outperform naive separate cache-channel coding schemes. For the case of two receivers, the capacity-memory tradeoff is completely characterized for a range of useful cache memory sizes.
{"title":"Erasure broadcast networks with receiver caching","authors":"S. S. Bidokhti, M. Wigger, R. Timo","doi":"10.1109/ISIT.2016.7541613","DOIUrl":"https://doi.org/10.1109/ISIT.2016.7541613","url":null,"abstract":"We study the capacity of a broadcast packet-erasure network with receiver caching. The receivers in the network are divided into two groups: A group of strong receivers with small packet erasure probabilities, and a group of weak receivers with large packet erasure probabilities. The weak receivers are provided with local cache memories as compensation for their poor channels. Achievable (lower) and converse (upper) bounds for the optimal capacity-memory tradeoff are derived. The lower bounds are proved using new joint cache-channel coding schemes that significantly outperform naive separate cache-channel coding schemes. For the case of two receivers, the capacity-memory tradeoff is completely characterized for a range of useful cache memory sizes.","PeriodicalId":198767,"journal":{"name":"2016 IEEE International Symposium on Information Theory (ISIT)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121932789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}