
2009 Data Compression Conference: latest publications

Compressive-Projection Principal Component Analysis and the First Eigenvector
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.44
J. Fowler
An analysis is presented that extends existing Rayleigh-Ritz theory to the special case of highly eccentric distributions. Specifically, a bound on the angle between the first Ritz vector and the orthonormal projection of the first eigenvector is developed for the case of a random projection onto a lower-dimensional subspace. It is shown that this bound is expected to be small if the eigenvalues are widely separated, i.e., if the data distribution is highly eccentric. This analysis verifies the validity of a fundamental approximation behind compressive-projection principal component analysis, a technique proposed previously to recover from random projections not only the coefficients associated with principal component analysis but also an approximation to the principal-component transform basis itself.
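As a hedged illustration of this setting (not the paper's code or proof), the following Python sketch draws samples from a highly eccentric Gaussian, applies a random orthonormal projection, and measures the angle between the first Ritz vector of the projected covariance and the normalized projection of the first eigenvector; all dimensions and the eigenvalue decay are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 64, 16, 5000            # ambient dimension, projection dimension, samples

# Highly eccentric source: widely separated covariance eigenvalues.
eigvals = 10.0 ** -np.arange(N)
basis, _ = np.linalg.qr(rng.standard_normal((N, N)))
X = (rng.standard_normal((M, N)) * np.sqrt(eigvals)) @ basis.T   # cov = B diag(eigvals) B^T

Sigma = X.T @ X / M                        # empirical covariance
evals, evecs = np.linalg.eigh(Sigma)
w1 = evecs[:, -1]                          # first (principal) eigenvector

P, _ = np.linalg.qr(rng.standard_normal((N, K)))
P = P.T                                    # random K x N projection with orthonormal rows

ritz_vals, ritz_vecs = np.linalg.eigh(P @ Sigma @ P.T)
u1 = ritz_vecs[:, -1]                      # first Ritz vector of the projected covariance

pw1 = P @ w1                               # orthonormal projection of w1, renormalized
pw1 /= np.linalg.norm(pw1)

angle = np.degrees(np.arccos(np.clip(abs(u1 @ pw1), 0.0, 1.0)))
print(f"angle between first Ritz vector and projected first eigenvector: {angle:.3f} degrees")
```

With an eccentric spectrum the printed angle is typically small, consistent with the bound's prediction; flattening the spectrum makes it grow.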
Citations: 6
Iterative Decoding of Convolutionally Encoded Multiple Descriptions
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.85
K. Yen, Chun-Feng Wu, Wen-Whei Chang
Transmission of convolutionally encoded multiple descriptions over noisy channels can benefit from the use of iterative source-channel decoding methods. This paper investigates the combined use of time-dependencies and inter-description correlation incurred by the multiple description scalar quantizer. We first modify the BCJR algorithm so that symbol a posteriori probabilities can be derived and used as extrinsic information to aid iterative decoding between the channel and source decoders. Also proposed is a recursive implementation for the source decoder that exploits the inter-description correlation to jointly decode multiple descriptions. Simulation results indicate that our proposed scheme achieves significant improvement over bit-level iterative decoding schemes.
Citations: 2
Wireless video transmission: A single layer distortion optimal approach
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.29
Negar Nejati, H. Yousefi’zadeh, H. Jafarkhani
In this paper, we introduce an analytical expression for the expected distortion of a single layer encoded video bit-stream. Based on the expected distortion model, we propose a distortion optimal unequal error protection (UEP) technique to transmit such a bit-stream over a wireless tandem channel. The proposed method allocates the total transmission budget unequally to different frames of a video bit-stream in order to protect the bit-stream against both bit errors caused by fading and packet erasures caused by network buffering. We compare this technique with another UEP technique as well as a one-dimensional equal-length protection technique. The evaluation results for different choices of packet sizes, available budgets, and channel conditions show that the proposed method outperforms the alternative schemes.
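The following toy Python sketch illustrates the general idea of unequal error protection by greedy budget allocation; the per-frame distortion model (each extra protection unit halves a frame's residual loss risk) and all numbers are assumptions for illustration, not the paper's analytical distortion expression or optimizer.

```python
import heapq

def allocate_budget(loss_risk, importance, total_units):
    """Greedily assign protection units where they cut expected distortion most.
    loss_risk[i]: probability frame i is lost with no extra protection.
    importance[i]: distortion incurred if frame i is lost.
    Assumption: each extra protection unit halves the residual loss risk."""
    n = len(loss_risk)
    units = [0] * n
    # Max-heap (negated gains) keyed on the benefit of the next unit per frame.
    heap = [(-importance[i] * loss_risk[i] / 2.0, i) for i in range(n)]
    heapq.heapify(heap)
    for _ in range(total_units):
        _, i = heapq.heappop(heap)
        units[i] += 1
        residual = loss_risk[i] * (0.5 ** units[i])      # risk left after this unit
        heapq.heappush(heap, (-importance[i] * residual / 2.0, i))
    return units

# 6 frames; earlier frames hurt more when lost; 12 protection units to distribute.
print(allocate_budget([0.1] * 6, [32, 16, 8, 4, 2, 1], 12))
```

Because the assumed gains shrink with each added unit, this greedy allocation is optimal for the toy model and concentrates protection on the frames whose loss costs the most.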
Citations: 0
Analog Joint Source Channel Coding Using Space-Filling Curves and MMSE Decoding
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.45
Yichuan Hu, J. Garcia-Frías, M. Lamarca
We investigate the performance of a discrete-time all-analog-processing joint source channel coding system for the transmission of i.i.d. Gaussian and Laplacian sources over AWGN channels. In the encoder, two samples of an i.i.d. source are mapped into a channel symbol using a space-filling curve. Unlike previous work in the literature, MMSE decoding instead of ML decoding is considered, and we focus on both the high and low channel SNR regions. The main contribution of this paper is to show that the proposed system performs very close to the theoretical limits, even at low SNR, as long as the curve parameters are properly optimized.
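Below is a rough, hedged Python sketch of a 2:1 analog mapping in this spirit: two source samples are snapped to the nearest point of a double Archimedean spiral, the curve parameter is sent over an AWGN channel, and the receiver forms an approximate MMSE estimate by weighting curve points with the source prior and channel likelihood. The spiral spacing, parameter range, grid resolution, and the omission of transmit-power normalization are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
delta = 0.6                           # spiral arm spacing (assumed tuning knob)
T = np.linspace(-25.0, 25.0, 4001)    # discretized curve parameter

def spiral(t):
    """Double Archimedean spiral: t >= 0 traces one arm, t < 0 the mirrored arm."""
    a = np.abs(t)
    pts = (delta / np.pi) * a[..., None] * np.stack([np.cos(a), np.sin(a)], axis=-1)
    return np.where(t[..., None] >= 0, pts, -pts)

curve = spiral(T)                                     # (len(T), 2) curve points

def encode(x):
    """Map a source pair to the parameter of its nearest point on the curve."""
    return T[np.argmin(np.linalg.norm(curve - x, axis=1))]

def decode_mmse(y, noise_std):
    """Approximate MMSE decoding: weight curve points by the Gaussian source
    prior and the AWGN likelihood of y given each parameter, then average.
    (The source is approximated by its projection onto the curve; transmit
    power normalization is omitted for brevity.)"""
    prior = np.exp(-0.5 * np.sum(curve ** 2, axis=1))
    likelihood = np.exp(-0.5 * ((y - T) / noise_std) ** 2)
    w = prior * likelihood
    return (w / w.sum()) @ curve

noise_std = 0.1
x = rng.standard_normal(2)                            # i.i.d. Gaussian source pair
y = encode(x) + noise_std * rng.standard_normal()     # AWGN channel
print("source:", x, "reconstruction:", decode_mmse(y, noise_std))
```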
Citations: 24
Tree Histogram Coding for Mobile Image Matching
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.33
David M. Chen, Sam S. Tsai, V. Chandrasekhar, Gabriel Takacs, J. Singh, B. Girod
For mobile image matching applications, a mobile device captures a query image, extracts descriptive features, and transmits these features wirelessly to a server. The server recognizes the query image by comparing the extracted features to its database and returns information associated with the recognition result. For slow links, query feature compression is crucial for low-latency retrieval. Previous image retrieval systems transmit compressed feature descriptors, which is well suited for pairwise image matching. For fast retrieval from large databases, however, scalable vocabulary trees are commonly employed. In this paper, we propose a rate-efficient codec designed for tree-based retrieval. By encoding a tree histogram, our codec can achieve a more than 5x rate reduction compared to sending compressed feature descriptors. By discarding the order amongst a list of features, histogram coding requires 1.5x lower rate than sending a tree node index for every feature. A statistical analysis is performed to study how the entropy of encoded symbols varies with tree depth and the number of features.
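As a small illustration of why coding a tree histogram instead of per-feature node indices saves rate, the Python sketch below quantizes random descriptors against a stand-in vocabulary of leaf centroids, builds the visit-count histogram, and computes how many bits a per-feature index representation spends purely on the (irrelevant) feature order; the tiny random vocabulary and toy sizes are assumptions, not the paper's trained tree or codec.

```python
import numpy as np
from math import lgamma, log2

rng = np.random.default_rng(2)
branch, depth, dim = 4, 3, 8                 # toy sizes; 4^3 = 64 leaves
n_leaves = branch ** depth

# Stand-in vocabulary: one random centroid per leaf (a real system trains the
# centroids level by level with hierarchical k-means on training descriptors).
leaf_centroids = rng.standard_normal((n_leaves, dim))

def quantize(descriptors):
    """Assign every descriptor to its nearest leaf centroid."""
    dists = np.linalg.norm(descriptors[:, None, :] - leaf_centroids[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

features = rng.standard_normal((300, dim))                    # query-image descriptors
hist = np.bincount(quantize(features), minlength=n_leaves)    # the tree histogram

n = len(features)
index_bits = n * log2(n_leaves)                               # one node index per feature
# Bits a per-feature index stream spends purely on feature order:
# log2( n! / prod_k(count_k!) ).
order_bits = (lgamma(n + 1) - sum(lgamma(int(c) + 1) for c in hist)) / np.log(2)
print(f"index-per-feature rate: {index_bits:.0f} bits; "
      f"bits spent only on ordering: {order_bits:.0f}")
```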
Citations: 111
Out-of-Core Progressive Lossless Compression and Selective Decompression of Large Triangle Meshes
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.73
Zhiyan Du, Pavel Jaromersky, Yi-Jen Chiang, N. Memon
In this paper we propose a novel out-of-core technique for progressive lossless compression and selective decompression of 3D triangle meshes larger than main memory. Most existing compression methods, in order to optimize compression ratios, only allow sequential decompression. We develop an integrated approach that resolves the issue of so-called prefix dependency to support selective decompression, and in addition enables I/O-efficient compression, while maintaining high compression ratios. Our decompression scheme initially provides a global context of the entire mesh at a coarse resolution, and allows the user to select different regions of interest to further decompress/refine to different levels of detail, to facilitate out-of-core multiresolution rendering for interactive visual inspection. We present experimental results which show that we achieve fast compression/decompression times and low memory footprints, with compression ratios comparable to current out-of-core single-resolution methods.
Citations: 11
A Fast Partial Distortion Elimination Algorithm Using Dithering Matching Pattern
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.82
Jong-Nam Kim, Taekyung Ryu, Won-Hee Kim
In this paper, we propose a fast partial distortion algorithm using a normalized dithering matching scan to obtain a uniform distribution of partial distortion, which significantly reduces unnecessary computation. Our algorithm is based on a normalized dithering-order matching scan and continuous calibration of the threshold error using the LOG value for each sub-block, so that dissimilar candidate blocks are eliminated efficiently. Our algorithm reduces the computation of the block-matching error by about 60% compared with the conventional PDE (partial distortion elimination) algorithm, without any loss of prediction quality.
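The following Python sketch shows plain partial distortion elimination with a simple dithered scan order: the SAD of each candidate is accumulated pixel by pixel in a spread-out order and the candidate is discarded as soon as the partial sum exceeds the best SAD so far. The specific dithering pattern and the absence of the paper's normalized-threshold calibration are simplifications.

```python
import numpy as np

def dithered_order(block_size):
    """Visit pixels in a spread-out (dithered) order so that early partial sums
    already sample the whole block instead of just its top rows."""
    coords = [(r, c) for r in range(block_size) for c in range(block_size)]
    return sorted(coords, key=lambda rc: ((2 * rc[0] + rc[1]) % 4, rc))

def pde_search(cur_block, ref, search_range):
    """Full-search block matching with partial distortion elimination: abandon a
    candidate as soon as its partial SAD exceeds the best SAD found so far."""
    bs = cur_block.shape[0]
    order = dithered_order(bs)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            cand = ref[search_range + dy: search_range + dy + bs,
                       search_range + dx: search_range + dx + bs]
            sad = 0.0
            for r, c in order:                     # dithered matching scan
                sad += abs(float(cur_block[r, c]) - float(cand[r, c]))
                if sad >= best_sad:                # partial distortion elimination
                    break
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

rng = np.random.default_rng(3)
ref = rng.integers(0, 256, size=(48, 48)).astype(np.uint8)
cur = ref[18:34, 19:35].copy()                     # true displacement is (2, 3)
print(pde_search(cur, ref, 16))                    # expected: ((2, 3), 0.0)
```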
Citations: 0
Fast Intra Prediction in the Transform Domain
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.26
Chanyul Kim, N. O’Connor, Y. Oh
The paper reports new fast intra prediction algorithms based on separating the transformed coefficients of neighboring blocks. The prediction blocks are obtained from the transformed and quantized neighboring blocks that generate minimum distortion for each DC and AC coefficient. To obtain fast coding with coding efficiency comparable to H.264/AVC, we present the Full Block Search Prediction (FBSP) and the Edge Based Distance Prediction (EBDP). These are immune to both intra prediction error and drift propagation; in addition, they do not require the low-pass filtering known as extrapolation, or mode decisions, to obtain a prediction block. Experimental results show that the use of transform coefficients greatly enhances the efficiency of intra prediction whilst keeping complexity low compared to H.264/AVC.
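As a heavily hedged sketch of the general idea of transform-domain intra prediction (not the FBSP or EBDP algorithms themselves), the Python code below predicts each DCT coefficient of the current block from whichever already-quantized causal neighbour (top or left) is closer for that coefficient; in a real codec this per-coefficient choice would have to be reproducible at the decoder, and the block size, quantizer step, and test data are assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    mat = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    mat[0, :] /= np.sqrt(2.0)
    return mat

N, q = 8, 16                                       # block size and quantizer step (assumed)
D = dct_matrix(N)
dct2 = lambda block: D @ block @ D.T               # 2-D DCT of an N x N block
quantize = lambda coeffs: np.round(coeffs / q) * q

rng = np.random.default_rng(4)
frame = rng.integers(0, 256, size=(16, 24)).astype(float)
cur = dct2(frame[8:16, 8:16])                      # transform of the block being coded
top = quantize(dct2(frame[0:8, 8:16]))             # quantized transform of top neighbour
left = quantize(dct2(frame[8:16, 0:8]))            # quantized transform of left neighbour

# Per-coefficient predictor: whichever neighbour's coefficient is closer.
# (Encoder-side illustration only; a real codec needs a decoder-reproducible
# choice or side information.)
pred = np.where(np.abs(cur - top) <= np.abs(cur - left), top, left)
residual = quantize(cur - pred)                    # residual that would be coded
print("prediction SSE:", float(np.sum((cur - pred) ** 2)))
```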
Citations: 1
Analysis on Rate-Distortion Performance of Compressive Sensing for Binary Sparse Source
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.24
Feng Wu, Jingjing Fu, Zhouchen Lin, B. Zeng
This paper proposes to use a bipartite graph to represent compressive sensing (CS). The evolution of nodes and edges in the bipartite graph, which is equivalent to the decoding process of compressive sensing, is characterized by a set of differential equations. One of the main contributions of this paper is the derivation of a closed-form formulation of this evolution in statistics, which enables us to analyze the performance of compressive sensing more accurately. Based on the formulation, the distortion of random sampling and the rate needed to code the measurements are analyzed briefly. Finally, numerical experiments verify our formulation of the evolution, and rate-distortion curves of compressive sensing are drawn for comparison with entropy coding.
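To make the bipartite-graph view concrete, the Python sketch below builds a sparse measurement graph for a binary sparse source and runs a simple peeling decoder (zero-valued checks zero their unknown neighbours; checks with a single unknown neighbour resolve it), which is the kind of node/edge evolution the paper tracks analytically; the graph sizes, degrees, and peeling rules are illustrative assumptions rather than the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, k, d = 200, 90, 15, 4       # signal length, measurements, sparsity, measurement degree

x = np.zeros(n, dtype=int)
x[rng.choice(n, size=k, replace=False)] = 1                 # binary sparse source

checks = [rng.choice(n, size=d, replace=False) for _ in range(m)]
y = np.array([x[c].sum() for c in checks])                  # noiseless sum measurements

# Peeling decoder on the bipartite graph of variables (signal entries) and
# checks (measurements).
xhat = np.full(n, -1)                                       # -1 marks "still unknown"
resid = y.astype(int).copy()                                # measurement minus resolved part
unknown = [set(c) for c in checks]                          # unresolved neighbours per check
progress = True
while progress:
    progress = False
    for i in range(m):
        if not unknown[i]:
            continue
        if resid[i] == 0:                                   # all unknown neighbours must be 0
            resolved = [(v, 0) for v in unknown[i]]
        elif len(unknown[i]) == 1:                          # single unknown neighbour
            v = next(iter(unknown[i]))
            resolved = [(v, int(resid[i]))]
        else:
            continue
        for v, val in resolved:
            xhat[v] = val
            for j in range(m):                              # peel v out of every check
                if v in unknown[j]:
                    unknown[j].discard(v)
                    resid[j] -= val
        progress = True

done = bool((xhat >= 0).all())
print("fully resolved:", done, "; correct:", done and bool(np.array_equal(xhat, x)))
```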
Citations: 16
Analysis of K-Channel Multiple Description Quantization
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.36
Guoqiang Zhang, J. Klejsa, W. Kleijn
This paper studies the tight rate-distortion bound for K-channel symmetric multiple-description coding for a memoryless Gaussian source. We find that the product of a function of the individual side distortions (for single received descriptions) and the central distortion (for K received descriptions) is asymptotically independent of the redundancy among the descriptions. Using this property, we analyze the asymptotic behaviors of two different practical multiple-description lattice vector quantizers (MDLVQ). Our analysis includes the treatment of an MDLVQ system from a new geometric viewpoint, which results in an expression for the side distortions using the normalized second moment of a sphere of higher dimensionality than the quantization space. The expression of the distortion product derived from the lower bound is then applied as a criterion to assess the performance losses of the considered MDLVQ systems.
Citations: 7