The problem of lossy source-channel communication under a received-power constraint in a simple Gaussian sensor network is studied in this paper. A group of sensors is placed to observe a common Gaussian source. The noisy observations are then transmitted over a Gaussian multiple-access channel (MAC) to the sink, where the source is estimated under a quadratic distortion criterion using all received sensor observations. We propose an analogue transmission scheme that uses noiseless causal feedback from the sink to remove correlation between the observation samples, combined with time-division multiple access on the MAC. Compared with the optimal transmission scheme with a single channel use, the proposed scheme offers the same performance with reduced received power and sensor-network size; for the same use of bandwidth, it converges to the absolute performance bound at low received-power levels.
{"title":"Bandwidth Expansion in a Simple Gaussian Sensor Network Using Feedback","authors":"A. N. Kim, T. Ramstad","doi":"10.1109/DCC.2010.31","DOIUrl":"https://doi.org/10.1109/DCC.2010.31","url":null,"abstract":"The problem of lossy source channel communication under a received power constraint in a simple Gaussian sensor network is studied in this paper. A group of sensors are placed to observe a common Gaussian source. The noisy observations are then transmitted over a Gaussian multiple access channel (MAC) to the sink, where the source is estimated with a quadratic distortion criterion using all received sensor observations. We propose an analogue transmission scheme that uses noiseless causal feedback from the sink to remove correlation between the observation samples, combined with time division multiple access of the MAC. The proposed scheme offers same performance with reduced received power and sensor network size, compared with optimal transmission scheme with single channel use; and converges to the absolute performance bound with low received power level for the same use of bandwidth.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122912003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K. Kasai, Takayuki Tsujimoto, R. Matsumoto, K. Sakaniwa
Rate-compatible asymmetric Slepian-Wolf coding with non-binary LDPC codes of moderate code length is presented. The proposed encoder and decoder use a single mother code. With the proposed scheme, a better compression rate and a lower error rate than those of the conventional scheme are achieved, even with a smaller source length.
{"title":"Rate-Compatible Slepian-Wolf Coding with Short Non-Binary LDPC Codes","authors":"K. Kasai, Takayuki Tsujimoto, R. Matsumoto, K. Sakaniwa","doi":"10.1109/DCC.2010.96","DOIUrl":"https://doi.org/10.1109/DCC.2010.96","url":null,"abstract":"Rate-compatible asymmetric Slepian-Wolf coding with non-binary LDPC codes of moderate code length is presented.The proposed encoder and decoder use only one single mother code.With the proposed scheme, better compressed rate and lower error rate than those ofconventional scheme are achieved with even smaller source length.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131146812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Thomas Arildsen, Jan Østergaard, M. Murthi, S. Andersen, S. H. Jensen
We consider linear predictive coding and noise shaping for coding and transmission of auto-regressive (AR) sources over lossy networks. We generalize an existing framework to arbitrary filter orders and propose the use of fixed-lag smoothing at the decoder in order to further reduce the impact of transmission failures. We show that fixed-lag smoothing up to a certain delay can be obtained without additional computational complexity by exploiting the state-space structure. We prove that the proposed smoothing strategy strictly improves performance under quite general conditions. Finally, we provide simulations on AR sources and channels with correlated losses, and show that substantial improvements are possible.
{"title":"Fixed-Lag Smoothing for Low-Delay Predictive Coding with Noise Shaping for Lossy Networks","authors":"Thomas Arildsen, Jan Østergaard, M. Murthi, S. Andersen, S. H. Jensen","doi":"10.1109/DCC.2010.33","DOIUrl":"https://doi.org/10.1109/DCC.2010.33","url":null,"abstract":"We consider linear predictive coding and noise shaping for coding and transmission of auto-regressive (AR) sources over lossy networks. We generalize an existing framework to arbitrary filter orders and propose use of fixed-lag smoothing at the decoder, in order to further reduce the impact of transmission failures. We show that fixed-lag smoothing up to a certain delay can be obtained without additional computational complexity by exploiting the state-space structure. We prove that the proposed smoothing strategy strictly improves performance under quite general conditions. Finally, we provide simulations on AR sources, and channels with correlated losses, and show that substantial improvements are possible.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131278346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
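The abstract above notes that fixed-lag smoothing can be obtained by exploiting state-space structure. A minimal illustrative sketch (not the paper's method: it uses a scalar AR(1) source observed in additive noise and a state augmented with past samples, so a standard Kalman filter on the augmented model yields the lag-delayed smoothed estimate; all parameter names are assumptions):

```python
import numpy as np

def fixed_lag_smoother(y, a, q, r, lag):
    """Fixed-lag smoothing of a scalar AR(1) source observed in white noise.

    The state stacks the last `lag`+1 samples; running a Kalman filter on
    this augmented model makes the last state component the smoothed
    estimate of x[t - lag]. Illustrative only: the paper treats arbitrary
    filter orders, noise shaping, and correlated channel losses.
    """
    n = lag + 1
    # Augmented transition: newest sample follows the AR(1) recursion,
    # older samples shift down one slot.
    F = np.zeros((n, n))
    F[0, 0] = a
    for i in range(1, n):
        F[i, i - 1] = 1.0
    Q = np.zeros((n, n)); Q[0, 0] = q      # process noise drives the newest sample only
    H = np.zeros((1, n)); H[0, 0] = 1.0    # the newest sample is observed in noise
    x = np.zeros(n)
    P = np.eye(n)
    out = []
    for yt in y:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        x = x + K[:, 0] * (yt - x[0])
        P = P - K @ H @ P
        out.append(x[-1])                  # smoothed estimate of x[t - lag]
    return np.array(out)
```

With `lag=0` this reduces to plain Kalman filtering, so the smoothing gain can be checked directly by comparing the two on the same noisy sequence.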
In this work we describe a sequence compression method based on combining a Bayesian nonparametric sequence model with entropy encoding. The model, a hierarchy of Pitman-Yor processes of unbounded depth previously proposed by Wood et al. [16] in the context of language modelling, allows modelling of long-range dependencies by allowing conditioning contexts of unbounded length. We show that incremental approximate inference can be performed in this model, thereby allowing it to be used in a text compression setting. The resulting compressor reliably outperforms several PPM variants on many types of data, but is particularly effective in compressing data that exhibits power law properties.
{"title":"Lossless Compression Based on the Sequence Memoizer","authors":"Jan Gasthaus, Frank D. Wood, Y. Teh","doi":"10.1109/DCC.2010.36","DOIUrl":"https://doi.org/10.1109/DCC.2010.36","url":null,"abstract":"In this work we describe a sequence compression method based on combining a Bayesian nonparametric sequence model with entropy encoding. The model, a hierarchy of Pitman-Yor processes of unbounded depth previously proposed by Wood et al. [16] in the context of language modelling, allows modelling of long-range dependencies by allowing conditioning contexts of unbounded length. We show that incremental approximate inference can be performed in this model, thereby allowing it to be used in a text compression setting. The resulting compressor reliably outperforms several PPM variants on many types of data, but is particularly effective in compressing data that exhibits power law properties.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134364440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
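The sequence memoizer described above predicts each symbol from an unbounded-depth hierarchy of Pitman-Yor processes. A crude stand-in for that idea, assuming a bounded context length and a single discount parameter (this is an interpolated back-off approximation, not the paper's inference scheme), shows how such a predictive model yields code lengths for an entropy coder:

```python
import math

def py_backoff_prob(counts, context, symbol, alphabet, d=0.5):
    """Pitman-Yor-style discounted probability with back-off to shorter
    contexts; the empty context backs off to a uniform distribution.
    A rough approximation of the memoizer's hierarchical predictive law."""
    if not context:
        return 1.0 / len(alphabet)
    shorter = py_backoff_prob(counts, context[1:], symbol, alphabet, d)
    ctx = counts.get(context)
    if not ctx:
        return shorter
    total = sum(ctx.values())
    types = len(ctx)
    c = ctx.get(symbol, 0)
    return max(c - d, 0.0) / total + (d * types / total) * shorter

def codelength_bits(seq, max_ctx=4, d=0.5):
    """Ideal code length (in bits) of `seq` under the sequential model:
    each symbol costs -log2 p, then the counts are updated."""
    counts = {}
    alphabet = sorted(set(seq))
    bits = 0.0
    hist = []
    for s in seq:
        ctx = tuple(hist[-max_ctx:])
        bits -= math.log2(py_backoff_prob(counts, ctx, s, alphabet, d))
        for k in range(1, len(ctx) + 1):        # update every context suffix
            sub = ctx[-k:]
            counts.setdefault(sub, {})
            counts[sub][s] = counts[sub].get(s, 0) + 1
        hist.append(s)
    return bits
```

On highly repetitive input the per-symbol cost drops well below log2 of the alphabet size, which is the effect the entropy-coding stage exploits.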
We propose a Compressed Sensing application to audio signals and analyze its audio perceptual quality with PEAQ.
{"title":"Lossy Audio Compression via Compressed Sensing","authors":"Rubem J. V. de Medeiros, E. Gurjão, J. Carvalho","doi":"10.1109/DCC.2010.88","DOIUrl":"https://doi.org/10.1109/DCC.2010.88","url":null,"abstract":"We propose a Compressed Sensing application to audio signals and analyze its audio perceptual quality with PEAQ.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129109082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intra picture coding plays an important role in video coding algorithms. In this paper, we investigate pixel spatial correlation in HD pictures and propose a macroblock-level horizontal spatial prediction (HSP) based intra coding method. Because pixels are much more strongly correlated in the horizontal direction than in the vertical direction, each macroblock is tentatively divided into left and right partitions. The right partition is encoded with a conventional intra mode, and its reconstruction is then used to predict the left partition. The strong correlation between the right and left partitions makes this prediction effective, improving the encoding efficiency of the intra macroblock. With RDO-based mode decision, about 30-60% of macroblocks benefit from HSP-based intra coding. Experimental results show that the proposed intra coding scheme improves encoding efficiency by 0.17 dB on average and has the potential for further improvement.
{"title":"Horizontal Spatial Prediction for High Dimension Intra Coding","authors":"Pin Tao, Wenting Wu, Chao Wang, Mou Xiao, Jiangtao Wen","doi":"10.1109/DCC.2010.76","DOIUrl":"https://doi.org/10.1109/DCC.2010.76","url":null,"abstract":"Intra picture coding plays an important role in video coding algorithms. In this paper, we investigate the pixel spatial correlation in HD picture and propose a macroblock level horizontal spatial prediction(HSP) based intra coding method. Because the pixels have the absolutely stronger correlation in horizontal direction than that in the vertical direction, each macroblock will try to be divided into left and right partitions. The right partition will be encoded with the conventional intra mode and then the reconstruction of it will be used to predict the left partition. The strong correlation between the right and left will contribute to the left partition significantly, and the encoding efficiency can be improved for the intra macroblock. With the decision of RDO, about 30-60% macroblocks will benefit from the HSP based intra coding. The experimental results show that the proposed intra coding scheme can improve the encoding efficiency with 0.17dB in average and has the potential to be further improved.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131587081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
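The core HSP idea above (code the right partition first, then predict the left from its reconstruction) can be sketched in a toy form. Plain uniform quantization stands in for the conventional intra coding of the paper, and the left half is predicted column-wise from the boundary column of the reconstructed right half; all of this is a hypothetical simplification:

```python
import numpy as np

def hsp_encode(block, quant_step=8.0):
    """Toy horizontal spatial prediction for one square macroblock.

    The right half is 'coded' (here: uniformly quantized) and its
    reconstruction predicts the left half from its boundary column;
    only the quantized residual of the left half is then coded."""
    h, w = block.shape
    left, right = block[:, :w // 2], block[:, w // 2:]
    right_rec = np.round(right / quant_step) * quant_step   # stand-in for intra coding
    pred = np.tile(right_rec[:, :1], (1, w // 2))           # boundary-column prediction
    resid_q = np.round((left - pred) / quant_step)
    return resid_q, right_rec

def hsp_decode(resid_q, right_rec, quant_step=8.0):
    """Reverse the toy scheme: rebuild the prediction and add the residual."""
    _, half = resid_q.shape
    pred = np.tile(right_rec[:, :1], (1, half))
    left_rec = pred + resid_q * quant_step
    return np.hstack([left_rec, right_rec])
```

Because the residual is quantized with step `quant_step`, every reconstructed pixel is within half a step of the original, matching ordinary quantizer error bounds.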
A Wyner-Ziv quantizer design method is introduced for the case where the indices at the output of the encoder are transmitted over a noisy channel. The source encoder is modelled as a scalar Lloyd quantizer followed by a binning and index assignment (BIA) mapping. A modified simulated-annealing algorithm is used to design the BIA mapping. A minimax solution for the Wyner-Ziv problem under channel mismatch is also suggested, where the channel is assumed to be binary symmetric and no channel statistics are available except the range of the bit error rate. Finally, simulation results are presented that show the effectiveness of the proposed algorithm over common alternative approaches; these results also confirm the proposed minimax solution.
{"title":"Scalar Quantizer Design for Noisy Channel with Decoder Side Information","authors":"Sepideh Shamaie, F. Lahouti","doi":"10.1109/DCC.2010.91","DOIUrl":"https://doi.org/10.1109/DCC.2010.91","url":null,"abstract":"A Wyner-Ziv Quantizer design method is introduced when the indices at the output of the encoder are transmitted over a noisy channel. The source encoder is considered as a scalar Lloyd quantizer followed by a binning and an index assignment (BIA) mapping. A modified simulated annealing based algorithm is used for BIA mapping design. A minimax solution for Wyner-Ziv problem under channel mismatch condition is also suggested when the channel is assumed to be binary symmetric channel and no information about the statistic of channel is available except the range of bit error rate. Finally the simulation results are presented which show the effectiveness of the proposed algorithm over other common alternative approaches. These results approve the proposed minimax solution too.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132909765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
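The first encoder stage above is a scalar Lloyd quantizer. A minimal sketch of the classical Lloyd iteration on training samples (the binning/index-assignment stage and the simulated-annealing design are separate steps not shown; function names are illustrative):

```python
import numpy as np

def lloyd_quantizer(samples, levels, iters=50):
    """Classical Lloyd iteration: alternate nearest-neighbor partitioning
    of the training samples and centroid updates of the codebook."""
    # initialize reproduction levels from sample quantiles
    codebook = np.quantile(samples, np.linspace(0.05, 0.95, levels))
    for _ in range(iters):
        # nearest-neighbor condition
        idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
        # centroid condition (keep the old value for empty cells)
        for k in range(levels):
            cell = samples[idx == k]
            if cell.size:
                codebook[k] = cell.mean()
    return np.sort(codebook)

def quantize(samples, codebook):
    """Map each sample to its nearest reproduction level."""
    idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
    return idx, codebook[idx]
```

In the paper's setting, the indices produced here would then pass through the BIA mapping before channel transmission.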
Jiangtao Wen, Mou Xiao, Jianwen Chen, Pin Tao, Chao Wang
In this paper, a fast RDO (rate-distortion optimization) quantization algorithm for H.264/AVC is proposed. In this algorithm, the search space of level adjustments is reduced by filtering the input quantized coefficients in a hierarchical way. The well-quantized coefficients are first filtered out, and then the RD tradeoff of each level adjustment for each of the remaining coefficients is examined to select good candidates together with their associated level adjustments. Finally, these candidates are combined to find the combination of level adjustments that minimizes the rate-distortion cost. Furthermore, a fast rate-estimation technique is adopted to reduce the rate-distortion estimation time. Experimental results show that about 44% of the quantization time on average can be saved, at the cost of negligible PSNR loss, compared with the RDO quantization algorithm implemented in JM.
{"title":"Fast Rate Distortion Optimized Quantization for H.264/AVC","authors":"Jiangtao Wen, Mou Xiao, Jianwen Chen, Pin Tao, Chao Wang","doi":"10.1109/DCC.2010.58","DOIUrl":"https://doi.org/10.1109/DCC.2010.58","url":null,"abstract":"In this paper, a fast RDO (rate-distortion optimization) quantization algorithm for H.264/AVC is proposed. In this algorithm, the searching space of level adjustments is reduced by filtering the input quantized coefficients in a hierarchical way. The well quantized coefficients is first filtered out, and then the RD tradeoff of each level adjustment to each of the rest coefficients is examined to select some good candidates with their associated level adjustments. Finally these good candidates are combined to find the best combination of level adjustments which gives the minimal rate-distortion cost. Furthermore, a fast rate estimation technique is adopted to save the rate-distortion estimation time. Experimental results show that about 44% quantization time on average can be saved at the cost of negligible PSNR loss compared with RDO quantization algorithm implemented in JM.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116014093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
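The level-adjustment idea above can be illustrated with a toy RDO quantizer. It uses a made-up additive rate model rather than the H.264/AVC entropy coder, so the search decomposes per coefficient over a small candidate set (the quantized level and one step toward zero), echoing the paper's pruning of well-quantized coefficients; all names and the rate formula are assumptions:

```python
import numpy as np

def rdo_quantize(coeffs, step, lmbda):
    """Toy rate-distortion optimized quantization of transform coefficients.

    For each coefficient, choose between the rounded level and one level
    adjustment toward zero, minimizing D + lambda * R with a hypothetical
    additive rate model (NOT the CAVLC/CABAC estimator of the paper)."""
    def rate(level):
        # crude stand-in: cost grows with level magnitude
        return 1.0 + 2.0 * np.log2(abs(level) + 1.0)
    out = []
    for c in coeffs:
        base = int(round(c / step))
        cands = {base}
        if base != 0:
            cands.add(base - int(np.sign(base)))   # one adjustment toward zero
        best = min(cands, key=lambda l: (c - l * step) ** 2 + lmbda * rate(l))
        out.append(best)
    return np.array(out)
```

Since the plain rounded level is always among the candidates, the chosen levels can never have a higher RD cost than ordinary rounding under this cost model.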
This paper proposes a new approach to theoretically analyzing compressive sensing directly from the random sampling matrix phi rather than from a particular recovery algorithm. To simplify the analysis, we assume that both the input source and the random sampling matrix are binary. Taking any source bit as the root, we construct a tree by parsing the random sampling matrix, in which measurement nodes and source nodes alternate according to phi. From this tree, we can formulate the probability that a source bit can be recovered from the random measurements. Further analysis of the tree structure reveals the relation between the non-recovery probability and the number of random measurements, and between the non-recovery probability and the source sparsity. The conditions for successful recovery are proven on the S-M parameter plane. Finally, the results of the tree-structure-based analysis are compared with the actual recovery process.
{"title":"Tree Structure Based Analyses on Compressive Sensing for Binary Sparse Sources","authors":"Jingjing Fu, Zhouchen Lin, B. Zeng, Feng Wu","doi":"10.1109/DCC.2010.60","DOIUrl":"https://doi.org/10.1109/DCC.2010.60","url":null,"abstract":"This paper proposes a new approach to theoretically analyze compressive sensing directly from the randomly sampling matrix phi instead of a certain recovery algorithm. For simplifying our analyses, we assume both input source and random sampling matrix as binary. Taking anyone of source bits, we can constitute a tree by parsing the randomly sampling matrix, where the selected source bit as the root. In the rest of tree, measurement nodes and source nodes are connected alternatively according to phi. With the tree, we can formulate the probability if one source bit can be recovered from randomly sampling measurements. The further analyses upon the tree structure reveal the relation between the un-recovery probability with random measurements and the un-recovery probability with source sparsity. The conditions of successful recovery are proven on the parameter S-M plane. Then the results of the tree structure based analyses are compared with the actual recovery process.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121374794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, an error-resilient JU-DFMC scheme is proposed for video transmission over error-prone channels. In the proposed scheme, a new error-resilient DFMC prediction structure is first presented. An end-to-end distortion model is then applied for macroblock (MB) level mode decision. Finally, a frame-level rate-distortion cost scheme is proposed to determine how many times the header information is transmitted in a high-quality frame (HQF). Experimental results show that the proposed method achieves better performance than previous DFMC schemes.
{"title":"Error Resilient Dual Frame Motion Compensation with Uneven Quality Protection","authors":"Da Liu, Debin Zhao, Siwei Ma","doi":"10.1109/DCC.2010.56","DOIUrl":"https://doi.org/10.1109/DCC.2010.56","url":null,"abstract":"In this paper, an error resilient JU-DFMC is proposed for video transmission over error-prone channels. In the proposed error resilient JU-DFMC, a new error resilient prediction structure of DFMC is firstly presented. Then an end-to-end distortion model is applied for macroblock (MB) level mode decision. Finally a frame level rate distortion cost scheme is proposed to determine how many times the header information will be transmitted in a high quality frame (HQF). The experimental results show that the proposed method can achieve better performance than the previous DFMC schemes.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124116471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}