"Theoretically Optimal Low-Density Parity-Check Code Ensemble for Gallager's Decoding Algorithm A", Feng Wu and Peiwen Yu. 2010 Data Compression Conference. doi:10.1109/DCC.2010.84

For a class of low-density parity-check (LDPC) code ensembles whose right (check) node degrees follow a binomial distribution, this paper proves that the theoretically optimal LDPC code ensemble is regular for the binary-symmetric channel (BSC) under Gallager's decoding algorithm A. Our proof consists of two steps. First, assuming the right edge degrees are binomially distributed, we prove that the threshold of an ensemble with a single left edge degree is larger than that of an ensemble with multiple left edge degrees. Second, we verify that the threshold is largest when the binomial distribution of right node degrees degenerates to a single value. Interestingly, although both the left and right edge degrees of the theoretically optimal ensemble are unique, they are fractional (non-integer) values. When these fractional degrees are approximated by a two-term binomial distribution, the threshold at rate 1/2 exactly matches Bazzi's linear-programming result, which corroborates our proof from another angle.

"Exploiting Wavelet-Domain Dependencies in Compressed Sensing", Yookyung Kim, M. Nadar, and A. Bilgin. 2010 Data Compression Conference. doi:10.1109/DCC.2010.51

This paper presents a method for improving wavelet-based Compressed Sensing (CS) reconstruction algorithms by exploiting the dependencies among wavelet coefficients. During CS recovery, a simple significance measure for each wavelet coefficient is calculated as a weighted sum of the (estimated) magnitudes of the coefficient itself, its highly correlated neighbors, and its parent. This measure is incorporated into three CS recovery algorithms: Reweighted L1 minimization (RL1), Iteratively Reweighted Least Squares (IRLS), and Iterative Hard Thresholding (IHT). Experimental results on one-dimensional signals and images illustrate that the proposed method (i) improves reconstruction quality for a given number of measurements, (ii) requires fewer measurements for a desired reconstruction quality, and (iii) significantly reduces reconstruction time.

"Advantages of Shared Data Structures for Sequences of Balanced Parentheses", Simon Gog and J. Fischer. 2010 Data Compression Conference. doi:10.1109/DCC.2010.43

We propose new data structures for navigation in sequences of balanced parentheses, a standard tool for representing compressed trees. The most striking property of our approach is that it shares most of its internal data structures among all operations. This is reflected in a large reduction in space, and also in faster navigation times. We exhibit these advantages on two examples: succinct range minimum queries and compressed suffix trees. Our data structures are incorporated into a ready-to-use C++ library for succinct data structures.

"Causal Transmission of Colored Source Frames over a Packet Erasure Channel", Ying-zong Huang, Y. Kochman, and G. Wornell. 2010 Data Compression Conference. doi:10.1109/DCC.2010.19

We propose a linear predictive quantization system for causally transmitting parallel sources with temporal memory (colored frames) over an erasure channel. By optimizing within this structure, we derive an achievability result in the high-rate limit and compare it to an upper bound on performance. The proposed system subsumes the well-known PCM and DPCM systems as special cases. While DPCM typically performs well without erasures and PCM suffers less under many erasures, we show that the proposed solution outperforms both under all severities of erasure, with unbounded improvement in some cases.

"Lossless Reduced Cutset Coding of Markov Random Fields", M. Reyes and D. Neuhoff. 2010 Data Compression Conference. doi:10.1109/DCC.2010.41

This paper presents Reduced Cutset Coding, a new Arithmetic Coding (AC) based approach to lossless compression of Markov random fields (MRFs). In recent work \cite{reye:09a}, the authors presented an efficient AC-based approach to encoding acyclic MRFs and described a Local Conditioning (LC) based approach to encoding cyclic MRFs. In the present work, we introduce an algorithm for AC encoding of a cyclic MRF for which the complexity of the LC method of \cite{reye:09a}, or of the acyclic-MRF algorithm of \cite{reye:09a} combined with the Junction Tree (JT) algorithm, is too large. For encoding an MRF based on a cyclic graph $G=(V,E)$, a cutset $U \subset V$ is selected such that the subgraph $G_U$ induced by $U$, and each of the components of $G \setminus U$, are tractable for either LC or JT. Then, the cutset variables $X_U$ are AC encoded with coding distributions based on a reduced MRF defined on $G_U$, and the remaining components $X_{V \setminus U}$ of $X_V$ are optimally AC encoded conditioned on $X_U$. The increase in rate over optimal encoding of $X_V$ is the normalized divergence between the marginal distribution of $X_U$ and the reduced MRF on $G_U$ used for the AC encoding. We show that this follows a Pythagorean decomposition and, additionally, that the optimal exponential parameter for the reduced MRF on $G_U$ is the one that preserves the moments of the marginal distribution. We also show that the rate of encoding $X_U$ with this moment-matching exponential parameter is equal to the entropy of the reduced MRF with this moment-matching parameter. We illustrate the concepts of our approach by encoding a typical image from an Ising model with a cutset consisting of evenly spaced rows. The performance on this image is similar to that of JBIG.

"On the Overflow Probability of Fixed-to-Variable Length Codes with Side Information", R. Nomura and T. Matsushima. 2010 Data Compression Conference. doi:10.1109/DCC.2010.93

We consider the source coding problem with side information. In particular, we consider fixed-to-variable length (FV) codes in the case where both the encoder and the decoder observe the side information. We derive a condition for the existence of an FV code whose overflow probability is smaller than or equal to a given constant.

"A Systematic Distributed Quantizer Design Method with an Application to MIMO Broadcast Channels", Erdem Koyuncu and H. Jafarkhani. 2010 Data Compression Conference. doi:10.1109/DCC.2010.34

We introduce a systematic distributed quantizer design method, called localization, in which one synthesizes a distributed (local) quantizer from an existing centralized (global) quantizer using high-rate scalar quantization combined with entropy coding. The general localization procedure is presented, along with a practical application to a quantized beamforming problem for multiple-input multiple-output broadcast channels. For our particular application, localization not only provides high-performance distributed quantizers with very low feedback rates, but also reveals an interesting property of finite-rate feedback schemes that may be of theoretical interest: for single-user multiple-input single-output systems, one can achieve the performance of almost any quantized beamforming scheme with an arbitrarily low feedback rate, provided the transmitter power is sufficiently large.

"Depth Compression of 3D Object Represented by Layered Depth Image", Sang-Young Park and Seong-Dae Kim. 2010 Data Compression Conference. doi:10.1109/DCC.2010.50

A Layered Depth Image (LDI) is a popular representation and rendering method for 3D objects with complex geometries. In this paper, we propose a new compression algorithm for the depth information of a 3D object represented by an LDI. To this end, we introduce the concept of partial surfaces to gather highly correlated depth data irrespective of layer, and derive a depth compression algorithm based on them. Partial surfaces are approximated by a Bézier patch, and the residual information is encoded by a shape-adaptive transform. Experimental results show that the proposed method achieves better compression performance than previous methods.

"Analysis of Amplitude Quantization in ACELP Excitation Coding", W. Patchoo, T. Fischer, Changho Ahn, and Sangwon Kang. 2010 Data Compression Conference. doi:10.1109/DCC.2010.52

This paper examines the amplitude quantization at non-zero pulse positions that is implicit in algebraic codebook code-excited linear prediction (ACELP) speech coding. It is demonstrated that the quantization used in ACELP is effective in a rate-distortion sense at typical encoding rates.

"Information Flows in Video Coding", Jia Wang and Xiaolin Wu. 2010 Data Compression Conference. doi:10.1109/DCC.2010.21

We study the information-theoretic performance of common video coding methodologies at the frame level. By abstracting consecutive video frames as correlated random variables, many existing video coding techniques, including the baselines of MPEG-x and H.26x, scalable coding, and distributed video coding, can be given corresponding information-theoretic models. The achievable rate-distortion regions have been completely characterized for some of these systems, while for others they remain open. We show that the achievable rate region of sequential coding equals that of predictive coding for Markov sources. We give a theoretical analysis of the coding efficiency of B frames in the popular hybrid video coding architecture, bringing new understanding to current practice. We also find that distributed sequential video coding generally incurs a performance loss if the source is not Markov.
