Bandwidth Expansion in a Simple Gaussian Sensor Network Using Feedback
A. N. Kim, T. Ramstad (DOI: 10.1109/DCC.2010.31)

This paper studies lossy source-channel communication under a received power constraint in a simple Gaussian sensor network. A group of sensors is placed to observe a common Gaussian source. The noisy observations are transmitted over a Gaussian multiple access channel (MAC) to the sink, where the source is estimated under a quadratic distortion criterion using all received sensor observations. We propose an analogue transmission scheme that uses noiseless causal feedback from the sink to remove correlation between the observation samples, combined with time division multiple access of the MAC. Compared with the optimal single-channel-use transmission scheme, the proposed scheme offers the same performance with reduced received power and sensor network size, and it converges to the absolute performance bound at low received power levels for the same bandwidth use.
Rate-Compatible Slepian-Wolf Coding with Short Non-Binary LDPC Codes
K. Kasai, Takayuki Tsujimoto, R. Matsumoto, K. Sakaniwa (DOI: 10.1109/DCC.2010.96)

Rate-compatible asymmetric Slepian-Wolf coding with non-binary LDPC codes of moderate code length is presented. The proposed encoder and decoder use only a single mother code. With the proposed scheme, a better compression rate and a lower error rate than those of the conventional scheme are achieved with an even smaller source length.
Fixed-Lag Smoothing for Low-Delay Predictive Coding with Noise Shaping for Lossy Networks
Thomas Arildsen, Jan Østergaard, M. Murthi, S. Andersen, S. H. Jensen (DOI: 10.1109/DCC.2010.33)

We consider linear predictive coding and noise shaping for coding and transmission of auto-regressive (AR) sources over lossy networks. We generalize an existing framework to arbitrary filter orders and propose the use of fixed-lag smoothing at the decoder, in order to further reduce the impact of transmission failures. We show that fixed-lag smoothing up to a certain delay can be obtained without additional computational complexity by exploiting the state-space structure. We prove that the proposed smoothing strategy strictly improves performance under quite general conditions. Finally, we provide simulations on AR sources, and channels with correlated losses, and show that substantial improvements are possible.
Lossless Compression Based on the Sequence Memoizer
Jan Gasthaus, Frank D. Wood, Y. Teh (DOI: 10.1109/DCC.2010.36)

In this work we describe a sequence compression method based on combining a Bayesian nonparametric sequence model with entropy encoding. The model, a hierarchy of Pitman-Yor processes of unbounded depth previously proposed by Wood et al. [16] in the context of language modelling, allows modelling of long-range dependencies by allowing conditioning contexts of unbounded length. We show that incremental approximate inference can be performed in this model, thereby allowing it to be used in a text compression setting. The resulting compressor reliably outperforms several PPM variants on many types of data, but is particularly effective in compressing data that exhibits power-law properties.
Lossless Reduced Cutset Coding of Markov Random Fields
M. Reyes, D. Neuhoff (DOI: 10.1109/DCC.2010.41)

This paper presents Reduced Cutset Coding, a new Arithmetic Coding (AC) based approach to lossless compression of Markov random fields. In recent work [reye:09a], the authors presented an efficient AC-based approach to encoding acyclic MRFs and described a Local Conditioning (LC) based approach to encoding cyclic MRFs. In the present work, we introduce an algorithm for AC encoding of a cyclic MRF for which the complexity of the LC method of [reye:09a], or of the acyclic-MRF algorithm of [reye:09a] combined with the Junction Tree (JT) algorithm, is too large. For encoding an MRF based on a cyclic graph G = (V, E), a cutset U ⊂ V is selected such that the subgraph G_U induced by U, and each of the components of G \ U, are tractable to either LC or JT. Then, the cutset variables X_U are AC encoded with coding distributions based on a reduced MRF defined on G_U, and the remaining components X_{V\U} of X_V are optimally AC encoded conditioned on X_U. The increase in rate over optimal encoding of X_V is the normalized divergence between the marginal distribution of X_U and the reduced MRF on G_U used for the AC encoding. We show this follows a Pythagorean decomposition and, additionally, that the optimal exponential parameter for the reduced MRF on G_U is the one that preserves the moments of the marginal distribution. We also show that the rate of encoding X_U with this moment-matching exponential parameter is equal to the entropy of the reduced MRF with this moment-matching parameter. We illustrate the concepts of our approach by encoding a typical image from an Ising model with a cutset consisting of evenly spaced rows. The performance on this image is similar to that of JBIG.
Error Resilient Dual Frame Motion Compensation with Uneven Quality Protection
Da Liu, Debin Zhao, Siwei Ma (DOI: 10.1109/DCC.2010.56)

In this paper, an error-resilient JU-DFMC scheme is proposed for video transmission over error-prone channels. In the proposed scheme, a new error-resilient prediction structure for dual frame motion compensation (DFMC) is first presented. Then an end-to-end distortion model is applied to macroblock (MB) level mode decision. Finally, a frame-level rate-distortion cost scheme is proposed to determine how many times the header information will be transmitted in a high-quality frame (HQF). The experimental results show that the proposed method achieves better performance than previous DFMC schemes.
Fast Rate Distortion Optimized Quantization for H.264/AVC
Jiangtao Wen, Mou Xiao, Jianwen Chen, Pin Tao, Chao Wang (DOI: 10.1109/DCC.2010.58)

In this paper, a fast RDO (rate-distortion optimized) quantization algorithm for H.264/AVC is proposed. The algorithm reduces the search space of level adjustments by filtering the input quantized coefficients in a hierarchical way. Well-quantized coefficients are first filtered out; then the RD trade-off of each level adjustment to each of the remaining coefficients is examined to select good candidates together with their associated level adjustments. Finally, these candidates are combined to find the combination of level adjustments that gives the minimal rate-distortion cost. Furthermore, a fast rate estimation technique is adopted to save rate-distortion estimation time. Experimental results show that about 44% of the quantization time can be saved on average, at the cost of negligible PSNR loss, compared with the RDO quantization algorithm implemented in JM.
Tree Structure Based Analyses on Compressive Sensing for Binary Sparse Sources
Jingjing Fu, Zhouchen Lin, B. Zeng, Feng Wu (DOI: 10.1109/DCC.2010.60)

This paper proposes a new approach that analyzes compressive sensing theoretically, directly from the random sampling matrix phi rather than from a particular recovery algorithm. To simplify the analyses, both the input source and the random sampling matrix are assumed to be binary. Taking any one of the source bits, we can construct a tree by parsing the random sampling matrix, with the selected source bit as the root. In the rest of the tree, measurement nodes and source nodes are connected alternately according to phi. With the tree, we can formulate the probability that one source bit can be recovered from the random measurements. Further analyses of the tree structure reveal how the un-recovery probability depends on the number of random measurements and on the source sparsity. Conditions for successful recovery are proven on the S-M parameter plane. Finally, the results of the tree-structure-based analyses are compared with the actual recovery process.
Advantages of Shared Data Structures for Sequences of Balanced Parentheses
Simon Gog, J. Fischer (DOI: 10.1109/DCC.2010.43)

We propose new data structures for navigation in sequences of balanced parentheses, a standard tool for representing compressed trees. The most striking property of our approach is that it shares most of its internal data structures for all operations. This is reflected in a large reduction of space, and also in faster navigation times. We exhibit these advantages on two examples: succinct range minimum queries and compressed suffix trees. Our data structures are incorporated into a ready-to-use C++ library for succinct data structures.
Causal Transmission of Colored Source Frames over a Packet Erasure Channel
Ying-zong Huang, Y. Kochman, G. Wornell (DOI: 10.1109/DCC.2010.19)

We propose a linear predictive quantization system for causally transmitting parallel sources with temporal memory (colored frames) over an erasure channel. By optimizing within this structure, we derive an achievability result in the high-rate limit and compare it to an upper bound on performance. The proposed system subsumes the well-known PCM and DPCM systems as special cases. While typically DPCM performs well without erasures and PCM suffers less with many erasures, we show that the proposed solution improves performance over both under all severities of erasures, with unbounded improvement in some cases.