Compressive-Projection Principal Component Analysis and the First Eigenvector
J. Fowler. doi: 10.1109/DCC.2009.44

An analysis is presented that extends existing Rayleigh-Ritz theory to the special case of highly eccentric distributions. Specifically, a bound on the angle between the first Ritz vector and the orthonormal projection of the first eigenvector is developed for the case of a random projection onto a lower-dimensional subspace. It is shown that this bound is expected to be small if the eigenvalues are widely separated, i.e., if the data distribution is highly eccentric. This analysis verifies the validity of a fundamental approximation behind compressive projection principal component analysis, a technique proposed previously to recover from random projections not only the coefficients associated with principal component analysis but also an approximation to the principal-component transform basis itself.
{"title":"Compressive-Projection Principal Component Analysis and the First Eigenvector","authors":"J. Fowler","doi":"10.1109/DCC.2009.44","DOIUrl":"https://doi.org/10.1109/DCC.2009.44","url":null,"abstract":"An analysis is presented that extends existing Rayleigh-Ritz theory to the special case of highly eccentric distributions. Specifically, a bound on the angle between the first Ritz vector and the orthonormal projection of the first eigenvector is developed for the case of a random projection onto a lower-dimensional subspace. It is shown that this bound is expected to be small if the eigenvalues are widely separated, i.e., if the data distribution is highly eccentric. This analysis verifies the validity of a fundamental approximation behind compressive projection principal component analysis,a technique proposed previously to recover from random projections not only the coefficients associated with principal component analysis but also an approximation to the principal-component transform basis itself.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"13 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132532422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Iterative Decoding of Convolutionally Encoded Multiple Descriptions
K. Yen, Chun-Feng Wu, Wen-Whei Chang. doi: 10.1109/DCC.2009.85

Transmission of convolutionally encoded multiple descriptions over noisy channels can benefit from the use of iterative source-channel decoding methods. This paper investigates the combined use of the time-dependencies and inter-description correlation introduced by the multiple-description scalar quantizer. We first modify the BCJR algorithm so that symbol a posteriori probabilities can be derived and used as extrinsic information to aid iterative decoding between the channel and source decoders. Also proposed is a recursive implementation of the source decoder that exploits the inter-description correlation to jointly decode multiple descriptions. Simulation results indicate that the proposed scheme achieves significant improvement over bit-level iterative decoding schemes.
{"title":"Iterative Decoding of Convolutionally Encoded Multiple Descriptions","authors":"K. Yen, Chun-Feng Wu, Wen-Whei Chang","doi":"10.1109/DCC.2009.85","DOIUrl":"https://doi.org/10.1109/DCC.2009.85","url":null,"abstract":"Transmission of convolutionally encoded multiple descriptions over noisy channels can bene¿t from the use of iterative source-channel decoding methods. This paper investigates the combined use of time-dependencies and inter-description correlation incurred by the multiple description scalar quantizer. We ¿rst modi¿ed the BCJR algorithm in a way that symbol a posteriori probabilities can be derived and used as extrinsic information to help iterative decoding between channel and source decoders. Also proposed is a recursive implementation for the source decoder that exploits the inter-description correlation to jointly decode multiple descriptions. Simulation results indicate that our proposed scheme can achieve signi¿cant improvement over the bit-level iterative decoding schemes.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"298 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128616236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wireless video transmission: A single layer distortion optimal approach
Negar Nejati, H. Yousefi’zadeh, H. Jafarkhani. doi: 10.1109/DCC.2009.29

In this paper, we introduce an analytical expression for the expected distortion of a single-layer encoded video bit-stream. Based on the expected-distortion model, we propose a distortion-optimal unequal error protection (UEP) technique to transmit such a bit-stream over a wireless tandem channel. The proposed method allocates the total transmission budget unequally to the different frames of a video bit-stream in order to protect the bit-stream against both bit errors caused by fading and packet erasures caused by network buffering. We compare this technique with another UEP technique as well as a one-dimensional equal-length protection technique. The evaluation results for different choices of packet sizes, available budgets, and channel conditions show that the proposed method outperforms the alternative schemes.
{"title":"Wireless video transmission: A single layer distortion optimal approach","authors":"Negar Nejati, H. Yousefi’zadeh, H. Jafarkhani","doi":"10.1109/DCC.2009.29","DOIUrl":"https://doi.org/10.1109/DCC.2009.29","url":null,"abstract":"In this paper, we introduce an analytical expression for the expected distortion of a single layer encoded video bit-stream. Based on the expected distortion model, we propose a distortion optimal unequal error protection (UEP) technique to transmit such bit-stream over a wireless tandem channel. The proposed method allocates the total transmission budget unequally to different frames of a video bit-stream in order to protect the bit-stream against both bit errors caused by fading and packet erasures caused by network buffering. We compare this technique with another UEP technique as well as a one-dimension equal length protection technique. The evaluation results for different choices of packet sizes, available budgets, and channel conditions show that the proposed method outperforms the other alternative schemes.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127507586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analog Joint Source Channel Coding Using Space-Filling Curves and MMSE Decoding
Yichuan Hu, J. Garcia-Frías, M. Lamarca. doi: 10.1109/DCC.2009.45

We investigate the performance of a discrete-time, all-analog-processing joint source-channel coding system for the transmission of i.i.d. Gaussian and Laplacian sources over AWGN channels. In the encoder, two samples of an i.i.d. source are mapped into a channel symbol using a space-filling curve. Unlike previous work in the literature, MMSE decoding is considered instead of ML decoding, and we focus on both the high and low channel-SNR regions. The main contribution of this paper is to show that the proposed system performs very close to the theoretical limits, even at low SNR, as long as the curve parameters are properly optimized.
{"title":"Analog Joint Source Channel Coding Using Space-Filling Curves and MMSE Decoding","authors":"Yichuan Hu, J. Garcia-Frías, M. Lamarca","doi":"10.1109/DCC.2009.45","DOIUrl":"https://doi.org/10.1109/DCC.2009.45","url":null,"abstract":"We investigate the performance of a discrete-time all-analog-processing joint source channel coding system for the transmission of i.i.d. Gaussian and Laplacian sources over AWGN channels. In the encoder, two samples of an i.i.d. source are mapped into a channel symbol using a space-filling curve. Different from previous work in the literature, MMSE decoding instead of ML decoding is considered, and we focus on both high and low channel SNR regions. The main contribution of this paper is to show that the proposed system presents a performance very close to the theoretical limits, even at low SNR, as long as the curve parameters are properly optimized.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114299723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tree Histogram Coding for Mobile Image Matching
David M. Chen, Sam S. Tsai, V. Chandrasekhar, Gabriel Takacs, J. Singh, B. Girod. doi: 10.1109/DCC.2009.33
For mobile image matching applications, a mobile device captures a query image, extracts descriptive features, and transmits these features wirelessly to a server. The server recognizes the query image by comparing the extracted features to its database and returns information associated with the recognition result. For slow links, query feature compression is crucial for low-latency retrieval. Previous image retrieval systems transmit compressed feature descriptors, an approach well suited to pairwise image matching. For fast retrieval from large databases, however, scalable vocabulary trees are commonly employed. In this paper, we propose a rate-efficient codec designed for tree-based retrieval. By encoding a tree histogram, our codec achieves a more than 5x rate reduction compared to sending compressed feature descriptors. By discarding the order among the features, histogram coding requires a 1.5x lower rate than sending a tree node index for every feature. A statistical analysis is performed to study how the entropy of the encoded symbols varies with tree depth and the number of features.
{"title":"Tree Histogram Coding for Mobile Image Matching","authors":"David M. Chen, Sam S. Tsai, V. Chandrasekhar, Gabriel Takacs, J. Singh, B. Girod","doi":"10.1109/DCC.2009.33","DOIUrl":"https://doi.org/10.1109/DCC.2009.33","url":null,"abstract":"For mobile image matching applications, a mobile device captures a query image, extracts descriptive features, and transmits these features wirelessly to a server. The server recognizes the query image by comparing the extracted features to its database and returns information associated with the recognition result. For slow links, query feature compression is crucial for low-latency retrieval. Previous image retrieval systems transmit compressed feature descriptors, which is well suited for pairwise image matching. For fast retrieval from large databases, however, scalable vocabulary trees are commonly employed. In this paper, we propose a rate-efficient codec designed for tree-based retrieval. By encoding a tree histogram, our codec can achieve a more than 5x rate reduction compared to sending compressed feature descriptors. By discarding the order amongst a list of features, histogram coding requires 1.5x lower rate than sending a tree node index for every feature. A statistical analysis is performed to study how the entropy of encoded symbols varies with tree depth and the number of features.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"146 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114310613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Out-of-Core Progressive Lossless Compression and Selective Decompression of Large Triangle Meshes
Zhiyan Du, Pavel Jaromersky, Yi-Jen Chiang, N. Memon. doi: 10.1109/DCC.2009.73
In this paper we propose a novel out-of-core technique for progressive lossless compression and selective decompression of 3D triangle meshes larger than main memory. Most existing compression methods, in order to optimize compression ratios, allow only sequential decompression. We develop an integrated approach that resolves the issue of so-called prefix dependency to support selective decompression and, in addition, enables I/O-efficient compression while maintaining high compression ratios. Our decompression scheme initially provides a global context of the entire mesh at a coarse resolution, and allows the user to select different regions of interest to further decompress/refine to different levels of detail, facilitating out-of-core multiresolution rendering for interactive visual inspection. We present experimental results showing that we achieve fast compression/decompression times and low memory footprints, with compression ratios comparable to current out-of-core single-resolution methods.
{"title":"Out-of-Core Progressive Lossless Compression and Selective Decompression of Large Triangle Meshes","authors":"Zhiyan Du, Pavel Jaromersky, Yi-Jen Chiang, N. Memon","doi":"10.1109/DCC.2009.73","DOIUrl":"https://doi.org/10.1109/DCC.2009.73","url":null,"abstract":"In this paper we propose a novel {em out-of-core} technique for{em progressive} lossless compression and {em selective}decompression of 3D triangle meshes larger than main memory. Most existing compression methods, in order to optimize compression ratios, only allow {em sequential} decompression. We develop an integrated approach that resolves the issue of so-called {emprefix dependency} to support {em selective} decompression, and in addition enables I/O-efficient compression, while maintaining high compression ratios. Our decompression scheme initially provides a global context of the entire mesh at a coarse resolution, and allows the user to select different {em regions of interest} to further decompress/refine to {bf different}levels of details, to facilitate out-of-core multiresolution rendering for interactive visual inspection. We present experimental results which show that we achieve fast compression/decompression times and low memory footprints, with compression ratios comparable to current out-of-core {em single resolution} methods.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124426025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Fast Partial Distortion Elimination Algorithm Using Dithering Matching Pattern
Jong-Nam Kim, Taekyung Ryu, Won-Hee Kim. doi: 10.1109/DCC.2009.82

In this paper, we propose a fast partial distortion elimination algorithm that uses a normalized dithering matching scan to obtain a uniform distribution of the partial distortion, so that only unnecessary computation is removed. The algorithm is based on a normalized dithering order for the matching scan and on continuous calibration of the threshold error, using a LOG value for each sub-block, for efficient elimination of unlikely candidate blocks. It reduces the computation of the block-matching error by about 60% compared with the conventional PDE (partial distortion elimination) algorithm, without any loss of prediction quality.
{"title":"A Fast Partial Distortion Elimination Algorithm Using Dithering Matching Pattern","authors":"Jong-Nam Kim, Taekyung Ryu, Won-Hee Kim","doi":"10.1109/DCC.2009.82","DOIUrl":"https://doi.org/10.1109/DCC.2009.82","url":null,"abstract":"In this paper, we propose a fast partial distortion algorithm using normalized dithering matching scan to get uniform distribution of partial distortion which can reduce only unnecessary computation significantly. Our algorithm is based on normalized dithering order matching scan and calibration of threshold error using LOG value for each sub-block continuously for efficient elimination of unlike candidate blocks. Our algorithm reduces about 60% of computations for block matching error compared with conventional PDE (partial distortion elimination) algorithm without any prediction quality.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"209 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123261237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast Intra Prediction in the Transform Domain
Chanyul Kim, N. O’Connor, Y. Oh. doi: 10.1109/DCC.2009.26

This paper reports new fast intra prediction algorithms based on separating the transformed coefficients of neighboring blocks. The prediction blocks are obtained from the transformed and quantized neighboring blocks that generate the minimum distortion for each DC and AC coefficient. To obtain fast coding with coding efficiency comparable to H.264/AVC, we present the Full Block Search Prediction (FBSP) and the Edge Based Distance Prediction (EBDP). These are immune to both intra prediction error and drift propagation and, in addition, do not require the low-pass filtering step known as extrapolation, or mode decisions, to obtain a prediction block. Experimental results show that the use of transform coefficients greatly enhances the efficiency of intra prediction while keeping complexity low compared to H.264/AVC.
{"title":"Fast Intra Prediction in the Transform Domain","authors":"Chanyul Kim, N. O’Connor, Y. Oh","doi":"10.1109/DCC.2009.26","DOIUrl":"https://doi.org/10.1109/DCC.2009.26","url":null,"abstract":"The paper reports a new fast intra prediction algorithms based on separating the transformed coefficients of neighboring blocks. The prediction blocks are obtained from the transformed and quantized neighboring blocks that generate minimum distortion for each DC and AC coefficients. To obtain fast coding with comparable coding efficiency compared to H.264/AVC, we present the Full Block Search Prediction (FBSP) and the Edge Based Distance Prediction (EBDP). These are immune to both the intra prediction error and the drift propagation; in addition, do not require a low pass filtering named extrapolation and mode decisions to obtain a prediction block. Experimental results show that the use of transform coefficients greatly enhances the efficiency of intra prediction whilst keeping complexity low compared to H.264/AVC.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129843621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis on Rate-Distortion Performance of Compressive Sensing for Binary Sparse Source
Feng Wu, Jingjing Fu, Zhouchen Lin, B. Zeng. doi: 10.1109/DCC.2009.24

This paper proposes to use a bipartite graph to represent compressive sensing (CS). The evolution of nodes and edges in the bipartite graph, which is equivalent to the decoding process of compressive sensing, is characterized by a set of differential equations. One of the main contributions of this paper is the derivation of a closed-form formulation of the evolution in statistics, which enables us to analyze the performance of compressive sensing more accurately. Based on this formulation, the distortion of random sampling and the rate needed to code the measurements are analyzed briefly. Finally, numerical experiments verify our formulation of the evolution, and rate-distortion curves of compressive sensing are drawn for comparison with entropy coding.
{"title":"Analysis on Rate-Distortion Performance of Compressive Sensing for Binary Sparse Source","authors":"Feng Wu, Jingjing Fu, Zhouchen Lin, B. Zeng","doi":"10.1109/DCC.2009.24","DOIUrl":"https://doi.org/10.1109/DCC.2009.24","url":null,"abstract":"This paper proposes to use a bipartite graph to represent compressive sensing (CS). The evolution of nodes and edges in the bipartite graph, which is equivalent to the decoding process of compressive sensing, is characterized by a set of differential equations. One of main contributions in this paper is that we derive the close-form formulation of the evolution in statistics, which enable us to more accurately analyze the performance of compressive sensing. Based on the formulation, the distortion of random sampling and the rate needed to code measurements are analyzed briefly. Finally, numerical experiments verify our formulation of the evolution and the rate-distortion curves of compressive sensing are drawn to be compared with entropy coding.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130220546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of K-Channel Multiple Description Quantization
Guoqiang Zhang, J. Klejsa, W. Kleijn. doi: 10.1109/DCC.2009.36

This paper studies the tight rate-distortion bound for K-channel symmetric multiple-description coding of a memoryless Gaussian source. We find that the product of a function of the individual side distortions (for single received descriptions) and the central distortion (for K received descriptions) is asymptotically independent of the redundancy among the descriptions. Using this property, we analyze the asymptotic behavior of two practical multiple-description lattice vector quantizers (MDLVQ). Our analysis includes the treatment of an MDLVQ system from a new geometric viewpoint, which yields an expression for the side distortions in terms of the normalized second moment of a sphere of higher dimensionality than the quantization space. The expression for the distortion product derived from the lower bound is then applied as a criterion to assess the performance losses of the considered MDLVQ systems.
{"title":"Analysis of K-Channel Multiple Description Quantization","authors":"Guoqiang Zhang, J. Klejsa, W. Kleijn","doi":"10.1109/DCC.2009.36","DOIUrl":"https://doi.org/10.1109/DCC.2009.36","url":null,"abstract":"This paper studies the tight rate-distortion bound for K-channel symmetric multiple-description coding for a memory less Gaussian source. We find that the product of a function of the individual side distortions (for single received descriptions) and the central distortion (for K received descriptions) is asymptotically independent of the redundancy among the descriptions. Using this property, we analyze the asymptotic behaviors of two different practical multiple-description lattice vector quantizers (MDLVQ). Our analysis includes the treatment of a MDLVQ system from a new geometric viewpoint, which results in an expression for the side distortions using the normalized second moment of a sphere of higher dimensionality than the quantization space. The expression of the distortion product derived from the lower bound is then applied as a criterion to assess the performance losses of the considered MDLVQ systems.","PeriodicalId":377880,"journal":{"name":"2009 Data Compression Conference","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128374202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}