Toward Optimality in Both Repair and Update via Generic MDS Code Transformation
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174250
Hanxu Hou, P. Lee, Y. Han
An (n, k) maximum distance separable (MDS) code encodes kα data symbols into nα symbols stored across n nodes of α symbols each, such that the kα data symbols can be reconstructed from any k of the n nodes. An MDS code achieves optimal repair access if the lost symbols of any single node can be repaired by accessing $\frac{\alpha}{d-k+1}$ symbols from each of d other surviving nodes, where k + 1 ≤ d ≤ n - 1. In this paper, we propose a generic transformation for any MDS code to achieve optimal repair access for a single-node repair among d - k + 1 nodes, while the transformed MDS codes maintain the same update bandwidth (i.e., the total number of symbols transferred to update the symbols of the affected nodes when some data symbols are updated) as the underlying MDS codes. By recursively applying our transformation to existing MDS codes with minimum update bandwidth, we obtain multi-layer transformed MDS codes that achieve both optimal repair access for any single-node repair among all n nodes and minimum update bandwidth.
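To make the repair-access saving concrete, here is a small back-of-envelope sketch comparing naive repair (download kα symbols and rebuild the whole codeword) against the optimal repair access of d · α/(d - k + 1) symbols. The parameters n, k, d and the sub-packetization α below are illustrative choices, not values from the paper.

```python
def naive_repair_symbols(k: int, alpha: int) -> int:
    # Reconstruct all data from k nodes, then re-encode the lost node.
    return k * alpha

def optimal_repair_symbols(k: int, d: int, alpha: int) -> int:
    # Optimal repair access: alpha / (d - k + 1) symbols from each of d helpers.
    assert k + 1 <= d and alpha % (d - k + 1) == 0
    return d * (alpha // (d - k + 1))

n, k, d = 14, 10, 13        # hypothetical parameters with k + 1 <= d <= n - 1
alpha = (d - k + 1) ** 3    # any multiple of d - k + 1 works for this arithmetic
naive = naive_repair_symbols(k, alpha)
optimal = optimal_repair_symbols(k, d, alpha)
print(f"naive: {naive} symbols, optimal-access: {optimal} symbols "
      f"({naive / optimal:.1f}x less download)")
```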
{"title":"Toward Optimality in Both Repair and Update via Generic MDS Code Transformation","authors":"Hanxu Hou, P. Lee, Y. Han","doi":"10.1109/ISIT44484.2020.9174250","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174250","url":null,"abstract":"An (n, k) maximum distance separable (MDS) code encodes kα data symbols into nα symbols that are stored in n nodes with α symbols each, such that the kα data symbols can be reconstructed from any k out of n nodes. MDS codes achieve optimal repair access if we can repair the lost symbols of any single node by accessing $frac{alpha }{{d - k + 1}}$ symbols from each of d other surviving nodes, where k + 1 ≤ d ≤ n - 1. In this paper, we propose a generic transformation for any MDS code to achieve optimal repair access for a single-node repair among d - k + 1 nodes, while the transformed MDS codes maintain the same update bandwidth (i.e., the total amount of symbols transferred for updating the symbols of affected nodes when some data symbols are updated) as that of the underlying MDS codes. By recursively applying our transformation for existing MDS codes with the minimum update bandwidth, we can obtain multi-layer transformed MDS codes that achieve both optimal repair access for any single-node repair among all n nodes and minimum update bandwidth.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121032893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Syndrome Compression for Optimal Redundancy Codes
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174009
Jin Sima, Ryan Gabrys, Jehoshua Bruck
We introduce a general technique, which we call syndrome compression, for designing low-redundancy error-correcting codes. The technique boosts the redundancy efficiency of hash/labeling-based codes by further compressing the labeling. We apply syndrome compression to different types of adversarial deletion channels and present code constructions that correct up to a constant number of errors. Our constructions achieve redundancy of twice the Gilbert-Varshamov bound, improving upon the state of the art for these channels. The encoding/decoding complexity of our constructions is of order equal to the size of the corresponding deletion balls, i.e., polynomial in the code length.
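For a rough sense of scale: the t-deletion ball of a length-n string has size at most C(n, t) (choose which t positions were deleted), so a Gilbert-Varshamov-style counting argument calls for roughly log2 C(n, t) ≈ t·log2(n) bits of redundancy, and the constructions above target about twice that. The sketch below only illustrates this order of magnitude; it is not the paper's exact bound.

```python
# Back-of-envelope redundancy arithmetic for t-deletion-correcting codes.
import math

def ball_size_upper_bound(n: int, t: int) -> int:
    # Trivial upper bound on the t-deletion ball: choose the t deleted positions.
    return math.comb(n, t)

def gv_style_redundancy_bits(n: int, t: int) -> float:
    # A GV-style counting argument needs about log2(ball size) redundant bits.
    return math.log2(ball_size_upper_bound(n, t))

n, t = 1024, 2
print(f"|ball| <= {ball_size_upper_bound(n, t)}")
print(f"GV-style redundancy  ~ {gv_style_redundancy_bits(n, t):.1f} bits")
print(f"paper-style target   ~ {2 * gv_style_redundancy_bits(n, t):.1f} bits (2x GV)")
```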
{"title":"Syndrome Compression for Optimal Redundancy Codes","authors":"Jin Sima, Ryan Gabrys, Jehoshua Bruck","doi":"10.1109/ISIT44484.2020.9174009","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174009","url":null,"abstract":"We introduce a general technique that we call syndrome compression, for designing low-redundancy error correcting codes. The technique allows us to boost the redundancy efficiency of hash/labeling-based codes by further compressing the labeling. We apply syndrome compression to different types of adversarial deletion channels and present code constructions that correct up to a constant number of errors. Our code constructions achieve the redundancy of twice the Gilbert-Varshamov bound, which improve upon the state of art for these channels. The encoding/decoding complexity of our constructions is of order equal to the size of the corresponding deletion balls, namely, it is polynomial in the code length.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123795700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Universal Low Complexity Compression Algorithm for Sparse Marked Graphs
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174300
Payam Delgosha, V. Anantharam
Many modern applications involve accessing and processing graphical data, i.e., data naturally indexed by graphs; examples include internet graphs, social networks, and genomic and proteomic data. The typically large size of such data motivates efficient methods for its compression and decompression. Current compression methods are usually tailored to specific models or do not provide theoretical guarantees. In this paper, we introduce a low-complexity lossless compression algorithm for sparse marked graphs, i.e., graphical data indexed by sparse graphs, that universally achieves the optimal compression rate in a precisely defined sense. To define universality, we employ the framework of local weak convergence, which provides a notion of stochastic processes for graphs. Moreover, we investigate the performance of our algorithm through experiments on both synthetic and real-world data.
{"title":"A Universal Low Complexity Compression Algorithm for Sparse Marked Graphs","authors":"Payam Delgosha, V. Anantharam","doi":"10.1109/ISIT44484.2020.9174300","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174300","url":null,"abstract":"Many modern applications involve accessing and processing graphical data, i.e. data that is naturally indexed by graphs. Examples come from internet graphs, social networks, genomics and proteomics, and other sources. The typically large size of such data motivates seeking efficient ways for its compression and decompression. The current compression methods are usually tailored to specific models, or do not provide theoretical guarantees. In this paper, we introduce a low–complexity lossless compression algorithm for sparse marked graphs, i.e. graphical data indexed by sparse graphs, which is capable of universally achieving the optimal compression rate in a precisely defined sense. In order to define universality, we employ the framework of local weak convergence, which allows one to make sense of a notion of stochastic processes for graphs. Moreover, we investigate the performance of our algorithm through some experimental results on both synthetic and real–world data.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114272039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Polarization in Attraction-Repulsion Models
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174010
Elisabetta Cornacchia, Neta Singer, E. Abbe
This paper introduces a model for opinion dynamics in which, at each time step, randomly selected agents see their opinions, modeled as scalars in [0,1], evolve according to a local interaction function. In the classical Bounded Confidence Model, agents' opinions attract when they are close enough. The proposed model extends this by adding a repulsion component, which models opinions being pushed further apart when they are dissimilar enough. With this repulsion component added, and under a repulsion-attraction cleavage assumption, it is shown that a new stable configuration, polarization, emerges beyond the classical consensus configuration. More specifically, total consensus and total polarization are shown to be the only two possible limiting configurations. The paper further analyzes the infinite-population regime in dimension 1 and higher, with a phase transition phenomenon conjectured and supported heuristically.
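The dynamics are straightforward to simulate. The sketch below is a minimal agent-based version with a hypothetical piecewise interaction, attraction below one threshold and repulsion above another; the thresholds, step size mu, and update rule are illustrative assumptions, not the authors' exact interaction function.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, attract_r=0.2, repel_r=0.5, mu=0.1):
    """Pick a random pair; attract if close, repel if far, clamp to [0, 1]."""
    i, j = rng.choice(len(x), size=2, replace=False)
    gap = x[j] - x[i]
    if abs(gap) < attract_r:        # close opinions move toward each other
        x[i] += mu * gap
        x[j] -= mu * gap
    elif abs(gap) > repel_r:        # dissimilar opinions push further apart
        x[i] -= mu * gap
        x[j] += mu * gap
    np.clip(x, 0.0, 1.0, out=x)     # opinions stay in [0, 1]

x = rng.random(200)                 # 200 agents with opinions in [0, 1]
for _ in range(200_000):
    step(x)
# Typical outcomes: opinions either merge (consensus) or split toward the
# endpoints 0 and 1 (polarization), matching the two limiting configurations.
print(np.round(np.sort(x)[::40], 2))
```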
{"title":"Polarization in Attraction-Repulsion Models","authors":"Elisabetta Cornacchia, Neta Singer, E. Abbe","doi":"10.1109/ISIT44484.2020.9174010","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174010","url":null,"abstract":"This paper introduces a model for opinion dynamics, where at each time step, randomly selected agents see their opinions — modeled as scalars in [0,1] — evolve depending on a local interaction function. In the classical Bounded Confidence Model, agents opinions get attracted when they are close enough. The proposed model extends this by adding a repulsion component, which models the effect of opinions getting further pushed away when dissimilar enough. With this repulsion component added, and under a repulsion-attraction cleavage assumption, it is shown that a new stable configuration emerges beyond the classical consensus configuration, namely the polarization configuration. More specifically, it is shown that total consensus and total polarization are the only two possible limiting configurations. The paper further provides an analysis of the infinite population regime in dimension 1 and higher, with a phase transition phenomenon conjectured and backed heuristically.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125095610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantized Corrupted Sensing with Random Dithering
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174328
Zhongxing Sun, Wei Cui, Yulong Liu
Quantized corrupted sensing concerns the problem of estimating structured signals from their quantized corrupted samples. In a typical case, the measurements y = Φx* + v* + n are corrupted by both structured corruption v* and unstructured noise n, and we wish to reconstruct x* and v* from the quantized samples of y. Our work shows that the Generalized Lasso can be applied to recover the signal provided that uniform random dithering is added to the measurements before quantization. The theoretical results show that the influence of quantization behaves as independent unstructured noise. We also confirm our results numerically in several scenarios, including sparse vectors and low-rank matrices.
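A minimal end-to-end sketch of this pipeline is given below: dither, quantize, then solve a Generalized Lasso by plain ISTA for a sparse signal with sparse corruption. The quantizer resolution, penalty weights, and the choice of l1 penalties (one possible structure-inducing choice) are illustrative assumptions, not the paper's prescriptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 200, 400                                  # measurements, signal dimension
Phi = rng.normal(size=(m, d)) / np.sqrt(m)       # Gaussian sensing matrix

x_true = np.zeros(d)                             # sparse signal x*
x_true[rng.choice(d, 10, replace=False)] = rng.normal(size=10)
v_true = np.zeros(m)                             # sparse structured corruption v*
v_true[rng.choice(m, 5, replace=False)] = rng.normal(size=5)
y = Phi @ x_true + v_true + 0.01 * rng.normal(size=m)   # y = Phi x* + v* + n

delta = 0.5                                      # quantizer resolution
u = rng.uniform(0.0, delta, size=m)              # uniform random dither
q = delta * np.floor((y + u) / delta) + delta / 2 - u   # subtractive dithered quantizer

def soft(z, t):                                  # soft-thresholding (prox of l1)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# ISTA on 0.5 * ||q - Phi x - v||^2 + lam_x ||x||_1 + lam_v ||v||_1
step = 1.0 / np.linalg.norm(np.hstack([Phi, np.eye(m)]), 2) ** 2
lam_x = lam_v = 0.02
x, v = np.zeros(d), np.zeros(m)
for _ in range(1000):
    r = q - Phi @ x - v                          # residual under current estimates
    x = soft(x + step * (Phi.T @ r), step * lam_x)
    v = soft(v + step * r, step * lam_v)

print("x rel. error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
print("v rel. error:", np.linalg.norm(v - v_true) / np.linalg.norm(v_true))
```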
{"title":"Quantized Corrupted Sensing with Random Dithering","authors":"Zhongxing Sun, Wei Cui, Yulong Liu","doi":"10.1109/ISIT44484.2020.9174328","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174328","url":null,"abstract":"Quantized corrupted sensing concerns the problem of estimating structured signals from their quantized corrupted samples. A typical case is that when the measurements y = Φx* + v* + n are corrupted with both structured corruption v* and unstructured noise n, we wish to reconstruct x* and v* from the quantized samples of y. Our work shows that the Generalized Lasso can be applied for the recovery of signal provided that a uniform random dithering is added to the measurements before quantization. The theoretical results illustrate that the influence of quantization behaves as independent unstructured noise. We also confirm our results numerically in several scenarios such as sparse vectors and low-rank matrices.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125226587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Max-affine regression with universal parameter estimation for small-ball designs
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174116
Avishek Ghosh, A. Pananjady, Adityanand Guntuboyina, K. Ramchandran
We study the max-affine regression model, where the unknown regression function is modeled as a maximum of a fixed number of affine functions. In recent work [1], we showed that end-to-end parameter estimates are obtainable in this model with an alternating minimization (AM) algorithm, provided the covariates (or designs) are normally distributed and chosen independently of the underlying parameters. In this paper, we show that AM is significantly more robust than the setting of [1]: it converges locally under small-ball design assumptions (a much broader class that includes bounded log-concave distributions), even when the underlying parameters are chosen with knowledge of the realized covariates. Once again, the final rate obtained by the procedure is near-parametric and minimax optimal (up to a polylogarithmic factor) as a function of the dimension, sample size, and noise variance. As a by-product of our analysis, we obtain convergence guarantees for a classical algorithm for the (real) phase retrieval problem in the presence of noise, under considerably weaker assumptions on the design distribution than were previously known.
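Since the object of study is the AM algorithm itself, a compact version is easy to state: alternate between assigning each sample to its currently maximizing affine piece and refitting each piece by least squares. The sketch below uses a Gaussian design and an initialization near the true parameters, consistent with the local convergence guarantee; the dimensions and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, K = 2000, 5, 3
X = rng.normal(size=(n, d))                       # Gaussian design (a small-ball design)
Xa = np.hstack([X, np.ones((n, 1))])              # affine lift: append intercept column
Theta_true = rng.normal(size=(K, d + 1))          # each row is one affine piece (slope, bias)
y = (Xa @ Theta_true.T).max(axis=1) + 0.05 * rng.normal(size=n)

Theta = Theta_true + 0.3 * rng.normal(size=(K, d + 1))   # init near truth (local convergence)
for _ in range(50):
    assign = (Xa @ Theta.T).argmax(axis=1)        # step 1: assign each sample to its max piece
    for j in range(K):                            # step 2: least squares within each piece
        idx = assign == j
        if idx.sum() > d + 1:                     # skip pieces with too few samples
            Theta[j], *_ = np.linalg.lstsq(Xa[idx], y[idx], rcond=None)

print("mean abs fit error:", np.abs((Xa @ Theta.T).max(axis=1) - y).mean())
```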
{"title":"Max-affine regression with universal parameter estimation for small-ball designs","authors":"Avishek Ghosh, A. Pananjady, Adityanand Guntuboyina, K. Ramchandran","doi":"10.1109/ISIT44484.2020.9174116","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174116","url":null,"abstract":"We study the max-affine regression model, where the unknown regression function is modeled as a maximum of a fixed number of affine functions. In recent work [1], we showed that end-to-end parameter estimates were obtainable using this model with an alternating minimization (AM) algorithm provided the covariates (or designs) were normally distributed, and chosen independently of the underlying parameters. In this paper, we show that AM is significantly more robust than the setting of [1]: It converges locally under small-ball design assumptions (which is a much broader class, including bounded log-concave distributions), and even when the underlying parameters are chosen with knowledge of the realized covariates. Once again, the final rate obtained by the procedure is near-parametric and minimax optimal (up to a polylogarithmic factor) as a function of the dimension, sample size, and noise variance. As a by-product of our analysis, we obtain convergence guarantees on a classical algorithm for the (real) phase retrieval problem in the presence of noise under considerably weaker assumptions on the design distribution than was previously known.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125696116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Upper Bound on the Capacity-Memory Tradeoff of Interleavable Discrete Memoryless Broadcast Channels with Uncoded Prefetching
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174056
M. Salman, M. Varanasi
The K-receiver discrete memoryless (DM) broadcast channel (BC) is considered, in which each receiver is equipped with a cache memory of the same size. We obtain an upper bound on the capacity-memory tradeoff with uncoded prefetching, i.e., the highest rate of reliable communication for a given cache size. This bound holds for the interleavable DM BC, a class of channels that subsumes the K-receiver degraded DM BC and the three-receiver less-noisy DM BC. We then specialize our bound to the Gaussian BC and show that, as expected, it is tighter than the bound recently proposed in the literature for coded prefetching over a wide range of cache sizes, while the two bounds coincide for sufficiently large cache size. In the two-receiver case, our bound is tight in that it equals the exact capacity-memory tradeoff with uncoded prefetching, which implies that, in this case, coded prefetching does not enhance the capacity-memory tradeoff for sufficiently large cache size.
{"title":"An Upper Bound on the Capacity-Memory Tradeoff of Interleavable Discrete Memoryless Broadcast Channels with Uncoded Prefetching","authors":"M. Salman, M. Varanasi","doi":"10.1109/ISIT44484.2020.9174056","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174056","url":null,"abstract":"The K-receiver discrete memoryless (DM) broadcast channel (BC) is considered in which each receiver is equipped with a cache memory of the same size. We obtain an upper bound on the capacity-memory tradeoff with uncoded pre-fetching, the highest rate of reliable communication for given cache size. This bound holds for the interleavable DM BC, a class of channels that subsumes the K-receiver degraded DM BC and the three-receiver less noisy DM BC. We then specialize our bound to the Gaussian BC, and show that it is tighter than that recently proposed in the literature for coded pre-fetching for a wide range of cache sizes as would be expected, but the two bounds coincide for sufficiently large cache size. In the two-receiver case, our bound is tight in that it is the exact capacity-memory trade-off with uncoded prefetching which implies that, in this case, coded prefetching does not enhance the capacity-memory tradeoff for sufficiently large cache size.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122737890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gaussian Multiterminal Source-Coding with Markovity: An Efficiently-Computable Outer Bound
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174414
O. Bilgen, A. Wagner
We provide a method for outer-bounding the rate-distortion region of Gaussian distributed compression problems in which the source variables can be embedded in a Gauss-Markov tree. The outer bound so obtained takes the form of a convex optimization problem. Simulations demonstrate that the outer bound is close to the Berger-Tung inner bound, coinciding with it in many cases.
{"title":"Gaussian Multiterminal Source-Coding with Markovity: An Efficiently-Computable Outer Bound","authors":"O. Bilgen, A. Wagner","doi":"10.1109/ISIT44484.2020.9174414","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174414","url":null,"abstract":"We provide a method for outer bounding the rate- distortion region of Gaussian distributed compression problems in which the source variables can be embedded in a Gauss- Markov tree. The outer bound so obtained takes the form of a convex optimization problem. Simulations demonstrate that the outer bound is close to the Berger-Tung inner bound, coinciding with it in many cases.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122918183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secure Communications with Limited Common Randomness at Transmitters
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174286
Fan Li, Jinyuan Chen
In this work, we consider common-randomness-aided secure communications, where limited common randomness is available at the transmitters. Specifically, we focus on a two-user interference channel with secrecy constraints and a wiretap channel with a helper, in the presence of limited common randomness shared between the transmitters. For both settings, we characterize the optimal secure sum degrees of freedom (DoF) or secure DoF as a function of the DoF of the common randomness. The results reveal that the secure (sum) DoF increases as the DoF of the common randomness increases, bridging the gap between the extreme DoF point without common randomness and the extreme DoF point with unlimited common randomness. The proposed scheme is a two-layer coding scheme in which two sub-schemes are designed in the two layers, i.e., at two different power levels, with common randomness used in the first layer only. The role of the common randomness is to jam part of the information signal at the eavesdroppers without causing interference at the legitimate receivers. To prove the optimality of the proposed scheme, a new converse is also derived in this work.
{"title":"Secure Communications with Limited Common Randomness at Transmitters","authors":"Fan Li, Jinyuan Chen","doi":"10.1109/ISIT44484.2020.9174286","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174286","url":null,"abstract":"In this work we consider common randomness-aided secure communications, where a limited common randomness is available at the transmitters. Specifically, we focus on a two-user interference channel with secrecy constraints and a wiretap channel with a helper, in the presence of a limited common randomness shared between the transmitters. For both settings, we characterize the optimal secure sum degrees-of-freedom (DoF) or secure DoF as a function of the DoF of common randomness. The results reveal that the secure sum DoF or secure DoF increases as the DoF of common randomness increases, bridging the gap between the extreme DoF point without common randomness and the other extreme DoF point with unlimited common randomness. The proposed scheme is a two-layer coding scheme, in which two sub-schemes are designed in two layers respectively, i.e., at two different power levels, utilizing common randomness in the first layer only. The role of common randomness is to jam partial information signal at the eavesdroppers, without causing interference at the legitimate receivers. To prove the optimality of the proposed scheme, a new converse is also derived in this work.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116534163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Structure of Optimal Quantizer for Binary-Input Continuous-Output Channels with Output Constraints
Pub Date: 2020-06-01 | DOI: 10.1109/ISIT44484.2020.9174174
Thuan Nguyen, Thinh Nguyen
In this paper, we consider a channel whose input is a binary random source X ∈ {x1, x2} with probability mass function (pmf) pX = [px1, px2] and whose output is a continuous random variable Y ∈ R, the result of continuous noise characterized by the channel conditional densities py|x1 = ϕ1(y) and py|x2 = ϕ2(y). A quantizer Q is used to map Y back to a discrete set Z ∈ {z1, z2, ..., zN}. To retain the most information about X, an optimal Q is one that maximizes I(X;Z). Our goal, however, is not only to recover X but also to ensure that pZ = [pz1, pz2, ..., pzN] satisfies a certain constraint. In particular, we are interested in designing a quantizer that maximizes βI(X;Z) − C(pZ), where β is a tradeoff parameter and C(pZ) is an arbitrary cost function of pZ. Letting the posterior probability be $p_{x_1 \mid y} = r_y = \frac{p_{x_1}\phi_1(y)}{p_{x_1}\phi_1(y) + p_{x_2}\phi_2(y)}$, our result shows that the optimal quantizer separates $r_y$ into convex cells. In other words, the optimal quantizer has the form $Q^{\ast}(r_y) = z_i$ if $a_{i-1}^{\ast} \leq r_y < a_i^{\ast}$, for some optimal thresholds $a_0^{\ast} = 0 < a_1^{\ast} < a_2^{\ast} < \cdots < a_{N-1}^{\ast} < a_N^{\ast} = 1$. Based on this optimal structure, we describe fast algorithms for determining the optimal quantizers.
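The structural result reduces quantizer design to a threshold search on the posterior $r_y$. Below is a sketch for N = 2 (a single threshold a) on a binary-input Gaussian-noise channel: it computes I(X;Z) for each candidate threshold by numerical integration and maximizes βI(X;Z) − C(pZ). The Gaussian densities, β = 1, and the particular cost C(pZ) are illustrative assumptions.

```python
import numpy as np

p_x = np.array([0.5, 0.5])                     # prior on {x1, x2}
ys = np.linspace(-6.0, 6.0, 4001)              # integration grid for Y
dy = ys[1] - ys[0]
phi1 = np.exp(-0.5 * (ys + 1) ** 2) / np.sqrt(2 * np.pi)   # p(y | x1), mean -1
phi2 = np.exp(-0.5 * (ys - 1) ** 2) / np.sqrt(2 * np.pi)   # p(y | x2), mean +1
r = p_x[0] * phi1 / (p_x[0] * phi1 + p_x[1] * phi2)        # posterior r_y

def objective(a, beta=1.0):
    cell1 = r < a                              # convex cell {y : r_y < a} -> z1
    joint = np.array(                          # joint pmf of (X, Z), rows = X
        [[np.sum(p_x[0] * phi1[cell1]), np.sum(p_x[0] * phi1[~cell1])],
         [np.sum(p_x[1] * phi2[cell1]), np.sum(p_x[1] * phi2[~cell1])]]) * dy
    p_z = joint.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):   # 0 log 0 -> 0
        mi = np.nansum(joint * np.log2(joint / (joint.sum(axis=1, keepdims=True) * p_z)))
    cost = np.sum((p_z - 0.5) ** 2)            # an arbitrary example cost C(p_Z)
    return beta * mi - cost

grid = np.linspace(0.01, 0.99, 99)             # candidate thresholds on r_y
best = max(grid, key=objective)
print(f"best threshold on r_y: {best:.2f}, objective {objective(best):.4f}")
```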
{"title":"Structure of Optimal Quantizer for Binary-Input Continuous-Output Channels with Output Constraints","authors":"Thuan Nguyen, Thinh Nguyen","doi":"10.1109/ISIT44484.2020.9174174","DOIUrl":"https://doi.org/10.1109/ISIT44484.2020.9174174","url":null,"abstract":"In this paper, we consider a channel whose the input is a binary random source X ∈ {x<inf>1</inf>,x<inf>2</inf>} with the probability mass function (pmf) p<inf>X</inf> = [p<inf>x1</inf>,p<inf>x2</inf>] and the output is a continuous random variable Y ∈ R as a result of a continuous noise, characterized by the channel conditional densities p<inf>y|x1</inf> = ϕ<inf>1</inf>(y) and p<inf>y|x2</inf> = ϕ<inf>2</inf>(y). A quantizer Q is used to map Y back to a discrete set Z ∈ {z<inf>1</inf>,z<inf>2</inf>,...,z<inf>N</inf>}. To retain most amount of information about X, an optimal Q is one that maximizes I(X;Z). On the other hand, our goal is not only to recover X but also ensure that p<inf>Z</inf> = [p<inf>z1</inf>,p<inf>z2</inf>,...,p<inf>zN</inf>] satisfies a certain constraint. In particular, we are interested in designing a quantizer that maximizes βI(X;Z)−C(p<inf>Z</inf>) where β is a tradeoff parameter and C(p<inf>Z</inf>) is an arbitrary cost function of p<inf>Z</inf>. Let the posterior probability ${p_{{x_1}mid y}} = {r_y} = frac{{{p_{{x_1}}}{phi _1}(y)}}{{{p_{{x_1}}}{phi _1}(y) + {p_{{x_2}}}{phi _2}(y)}}$, our result shows that the structure of the optimal quantizer separates r<inf>y</inf> into convex cells. In other words, the optimal quantizer has the form: ${Q^{ast}}left( {{r_y}} right) = {z_i}$, if $a_{i - 1}^{ast} leq {r_y} < a_i^{ast}$ for some optimal thresholds $a_0^{ast} = 0 < a_1^{ast} < a_2^{ast} < cdots < a_{N - 1}^{ast} < a_N^{ast} = 1$. Based on this optimal structure, we describe some fast algorithms for determining the optimal quantizers.","PeriodicalId":159311,"journal":{"name":"2020 IEEE International Symposium on Information Theory (ISIT)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127926750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}