2009 Data Compression Conference: Latest Publications
New Families and New Members of Integer Sequence Based Coding Methods
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.87
Daniel Lowell, D. Tamir
This paper presents integer sequences that have the property of being additively and/or multiplicatively complete, Zekendorf, and unique Zekendorf. In addition, a generalized Elias coding scheme is developed. Features of Zekendorf-sequence-based and generalized Elias coding compression methods, including compression rate, universality, asymptotic optimality, and coding complexity, are analyzed.
Citations: 0
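The Zekendorf (Zeckendorf) property the abstract relies on says that every positive integer has a unique representation as a sum of non-consecutive Fibonacci numbers, which is what makes Fibonacci-based universal codes decodable. A minimal greedy sketch of the decomposition (an illustration, not the paper's generalized scheme):

```python
def zeckendorf(n):
    """Greedy Zeckendorf decomposition of n >= 1: the unique sum of
    non-consecutive Fibonacci numbers, largest first."""
    fibs = [1, 2]                    # Fibonacci numbers, no duplicate 1
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f                   # greediness skips the next smaller term
    return parts

print(zeckendorf(100))  # [89, 8, 3]
```

The greedy choice always works because taking the largest Fibonacci number not exceeding n leaves a remainder smaller than the next Fibonacci number down, which enforces the non-consecutive property.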
Linear Suffix Array Construction by Almost Pure Induced-Sorting
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.42
Ge Nong, Sen Zhang, W. H. Chan
We present a linear time and space suffix array (SA) construction algorithm called the SA-IS algorithm. The SA-IS algorithm is novel because of the LMS-substrings used for the problem reduction and the pure induced sorting (specially coined for this algorithm) used to propagate the order of suffixes as well as that of LMS-substrings, which makes the algorithm rely almost purely on induced sorting at both of its crucial steps. The pure induced sorting gives the algorithm an elegant design and, in turn, a surprisingly compact implementation of fewer than 100 lines of C code. The experimental results demonstrate that this newly proposed algorithm yields noticeably better time and space efficiency than all currently published linear-time algorithms for SA construction.
Citations: 160
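For context, a suffix array lists the starting positions of a string's suffixes in lexicographic order; the contribution of SA-IS is computing it in linear time and space. A naive sketch that produces the same output:

```python
def suffix_array(s):
    """Naive suffix array: starting indices of all suffixes of s in
    lexicographic order. This is O(n^2 log n); SA-IS computes the
    same array in O(n) time and space."""
    return sorted(range(len(s)), key=lambda i: s[i:])

print(suffix_array("banana"))  # [5, 3, 1, 0, 4, 2]
```

For "banana" the sorted suffixes are "a", "ana", "anana", "banana", "na", "nana", giving the index order above.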
Model-Guided Adaptive Recovery of Compressive Sensing
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.69
Xiaolin Wu, Xiangjun Zhang, Jia Wang
For the new signal acquisition methodology of compressive sensing (CS), a challenge is to find a space in which the signal is sparse and hence recoverable faithfully. Given the nonstationarity of many natural signals such as images, the sparse space varies in the time or spatial domain. As such, CS recovery should be conducted in locally adaptive, signal-dependent spaces to counter the fact that the CS measurements are global and irrespective of signal structures. By contrast, existing CS reconstruction methods use a fixed set of bases (e.g., wavelets, DCT, and gradient spaces) for the entirety of a signal. To rectify this problem we propose a new model-based framework to facilitate the use of adaptive bases in CS recovery. In a case study we integrate a piecewise stationary autoregressive model into the recovery process for CS-coded images, and are able to increase the reconstruction quality by 2 to 7 dB over existing methods. The new CS recovery framework can readily incorporate prior knowledge to boost reconstruction quality.
Citations: 58
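The fixed-basis CS reconstruction the abstract contrasts itself with is typically an l1-minimization; the core proximal step of ISTA-style l1 decoders is soft-thresholding. A sketch of that step only (generic, not the paper's model-guided recovery):

```python
def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink x toward zero by t.
    This is the core update inside ISTA-style l1-minimization CS
    decoders (a generic sketch, not the paper's model-guided method)."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

coeffs = [3.0, -0.4, 0.0, -2.5]
print([soft_threshold(c, 0.5) for c in coeffs])  # [2.5, 0.0, 0.0, -2.0]
```

Small coefficients are zeroed and large ones shrunk, which is how l1 decoding favors sparse solutions.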
Modeling the Correlation Noise in Spatial Domain Distributed Video Coding
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.37
N. Deligiannis, A. Munteanu, T. Clerckx, P. Schelkens, J. Cornelis
Conventional models in distributed video coding (DVC) consider the correlation noise to be distributed independently of the realization of the side information. This paper introduces a novel model whose standard deviation depends spatially on the realization of the side information. The performance penalty in video coding caused by side-information-independence assumptions is theoretically quantified and experimentally confirmed. Furthermore, inspired by the spatial side-information dependency of the proposed model, a novel approach for estimating the correlation channel from partial knowledge of it is introduced. The proposed technique is incorporated into a spatial-domain unidirectional DVC system, providing state-of-the-art performance.
Citations: 8
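Correlation noise in DVC is commonly modeled as Laplacian; the conventional approach the abstract criticizes fits one global scale parameter, whereas the paper lets the standard deviation vary with the local side-information realization. A sketch of the conventional global fit (an illustration, not the paper's model):

```python
import math

def laplacian_scale(residuals):
    """ML estimate of the Laplacian scale b (std = b * sqrt(2)) from
    correlation-noise residuals r = side_information - source."""
    return sum(abs(r) for r in residuals) / len(residuals)

# Conventional DVC fits one global b; the paper instead lets the noise
# std vary spatially with the side-information realization.
b = laplacian_scale([2, -1, 0, 3, -2, 1])
print(b, round(b * math.sqrt(2), 3))  # scale 1.5, std estimate 2.121
```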
Fast Data Reduction via KDE Approximation
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.47
D. Freedman, P. Kisilev
Many of today's real-world applications need to handle and analyze continually growing amounts of data, while the cost of collecting data decreases. As a result, the main technological hurdle is that data is acquired faster than it can be processed. Data reduction methods are thus increasingly important, as they allow one to extract the most relevant and important information from giant data sets. We present one such method, based on compressing the description length of an estimate of the probability distribution of a set of points.
Citations: 4
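The title's KDE refers to kernel density estimation; data reduction in this sense replaces a large sample set with a much smaller one whose density estimate stays close to the original. A minimal Gaussian KDE sketch (generic, not the paper's approximation algorithm):

```python
import math

def gaussian_kde(samples, h):
    """Return a Gaussian kernel density estimate with bandwidth h:
    a sum of normalized Gaussian bumps, one per sample."""
    norm = 1.0 / (len(samples) * h * math.sqrt(2 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)
    return density

f = gaussian_kde([0.0, 1.0, 1.2, 3.0], h=0.5)
# density is highest near the cluster around 1.0 .. 1.2
print(f(1.1) > f(3.0) > f(10.0))  # True
```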
Lossless Image Compression by PPM-Based Prediction Coding
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.34
M. Kitakami, Kensuke Tai
Most speech and image data are compressed by lossy compression, whose decompressed data differ from the original. Here, the difference between the decompressed data and the original cannot be perceived by most people. Lossless image compression, which gives exactly the same decompressed data as the original, is necessary for medical images, artwork images, and satellite images, which are now frequently processed by computers. This paper proposes lossless image compression by prediction coding whose frequency table operation is based on PPM (Prediction by Partial Match). The prediction algorithm for the proposed method is based on that of CALIC, an existing lossless image compression method, and the difference between the predicted value and the actual one is encoded by a PPM-based compression method. In this compression method, the initial values in the frequency table and the frequency table operation method are modified to achieve an efficient compression ratio. Computer simulation shows that the compression ratio of the proposed method is better than that of CALIC by about 0.07 bit/pixel.
Citations: 2
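Prediction coding of this kind subtracts a causal prediction from each pixel and entropy-codes the residual. The paper builds on CALIC's GAP predictor; the simpler median edge detector (MED) from JPEG-LS, shown below as a stand-in, illustrates the same idea:

```python
def med_predict(left, above, above_left):
    """Median edge detector (MED) predictor from JPEG-LS, a simpler
    stand-in for CALIC's GAP predictor: pick the min/max causal
    neighbor at an edge, otherwise a planar estimate. The residual
    actual - prediction is what then gets entropy-coded."""
    if above_left >= max(left, above):
        return min(left, above)
    if above_left <= min(left, above):
        return max(left, above)
    return left + above - above_left

print(med_predict(100, 90, 80))  # 100 (edge: falls back to max neighbor)
print(med_predict(10, 20, 15))   # 15  (smooth: planar prediction)
```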
Improving Inverse Wavelet Transform by Compressive Sensing Decoding with Deconvolution
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.19
Dong Liu, Xiaoyan Sun, Feng Wu
By virtue of compressive sensing (CS), which can recover sparse signals from a few linear and non-adaptive measurements, we propose an alternative decoding method for the inverse wavelet transform when only partial coefficients are available. Classic CS decoding such as $l_1$-minimization indeed provides better reconstruction of sparse signals than the inverse wavelet transform. Since many natural images are not sparse, we propose to further improve CS decoding from the Bayesian point of view. Specifically, as the wavelet transform can be described as convolution, we present an iterative deconvolution method for CS decoding in the case of partial wavelet coefficients. Experimental results demonstrate the efficiency of our method. We conclude that these findings indicate promising applications in compression.
Citations: 2
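The "partial coefficients" setting can be illustrated with a one-level Haar transform: an exact inverse exists when all coefficients are present, and zeroing the detail band yields the degraded reconstruction that CS-style decoding tries to improve on. A sketch (Haar chosen for brevity; the paper's wavelets may differ):

```python
def haar_forward(x):
    """One-level Haar transform: pairwise averages (low band) and
    pairwise half-differences (high band)."""
    lo = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    hi = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return lo, hi

def haar_inverse(lo, hi):
    """Exact inverse. Zeroing `hi` mimics having only partial
    coefficients: the result is a blocky approximation."""
    out = []
    for a, d in zip(lo, hi):
        out += [a + d, a - d]
    return out

lo, hi = haar_forward([4, 2, 5, 7])
print(haar_inverse(lo, hi))        # [4.0, 2.0, 5.0, 7.0]
print(haar_inverse(lo, [0, 0]))    # [3.0, 3.0, 6.0, 6.0]
```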
Performing Vector Quantization Using Reduced Data Representation
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.74
Erickson Miranda, Guoqiang Shan, V. Megalooikonomou
We propose a method to improve the performance of vector quantization by using different resolutions of the dataset for each GLA iteration. We discuss the use of wavelet decomposition, principal components analysis and other data and dimensionality reduction techniques on the dataset at different stages of vector quantization. Experimental results on both real and simulated datasets show that the proposed technique outperforms ordinary vector quantization in terms of mean squared error or running time.
Citations: 0
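GLA (the generalized Lloyd algorithm) alternates nearest-codeword assignment with centroid updates; the paper's idea is to run iterations on reduced-resolution versions of the data. One scalar GLA iteration as a sketch (the paper works with vectors and wavelet/PCA-reduced representations):

```python
def gla_step(points, codebook):
    """One GLA (Lloyd) iteration: assign each point to its nearest
    codeword, then move each codeword to the centroid of its cell.
    Empty cells keep their old codeword."""
    cells = [[] for _ in codebook]
    for p in points:
        i = min(range(len(codebook)), key=lambda j: (p - codebook[j]) ** 2)
        cells[i].append(p)
    return [sum(c) / len(c) if c else codebook[i]
            for i, c in enumerate(cells)]

print(gla_step([0.0, 0.2, 0.75, 1.25], [0.0, 1.0]))  # [0.1, 1.0]
```

Each iteration cannot increase the mean squared error, which is why starting from coarse data representations and refining works as an acceleration strategy.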
Lossy to Lossless Spatially Scalable Depth Map Coding with Cellular Automata
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.41
L. Cappellari, Carlos Cruz-Reyes, G. Calvagno, J. Kari
Spatially scalable image coding algorithms are mostly based on linear filtering techniques that give a multi-resolution representation of the data. Reversible cellular automata can instead be used as simpler, non-linear filter banks that give similar performance. In this paper, we investigate the use of reversible cellular automata for lossy-to-lossless and spatially scalable coding of smooth multi-level images, such as depth maps. In a few cases, the compression performance of the proposed coding method is comparable to that of the JBIG standard, but, under most test conditions, we show better compression performance than that obtained with the JBIG or JPEG2000 standards. The results stimulate further investigation into cellular automata-based methods for multi-level image compression.
Citations: 13
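A standard way to build a reversible cellular automaton is the second-order construction: the next state is a local rule applied to the current state, XORed with the previous state, so the dynamics can be run backwards exactly. A sketch of this construction (illustrative; the paper's specific automata may differ):

```python
def second_order_step(prev, cur, rule):
    """One step of a second-order reversible CA on a ring:
    next[i] = rule(neighborhood of cur) XOR prev[i]. Feeding the
    output back in swapped order runs the dynamics backwards."""
    n = len(cur)
    nxt = [rule(cur[(i - 1) % n], cur[i], cur[(i + 1) % n]) ^ prev[i]
           for i in range(n)]
    return cur, nxt

# Any 3-cell binary rule works; rule 110 as an arbitrary example.
rule110 = lambda l, c, r: (110 >> (l << 2 | c << 1 | r)) & 1

state0, state1 = [0, 1, 0, 1], [1, 0, 0, 1]
p, c = second_order_step(state0, state1, rule110)
back_c, back_p = second_order_step(c, p, rule110)   # swap to reverse
print(back_p == state0 and back_c == state1)  # True
```

Exact invertibility is what lets such automata act as lossless (and hence lossy-to-lossless scalable) filter banks.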
Highly Accurate Distortion Estimation for JPEG2000 through PDF-Based Estimators
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.20
Francesc Aulí Llinàs, M. Marcellin, J. Serra-Sagristà
Distortion estimation techniques are often employed in bitplane coding engines to minimize the computational load, or the memory requirements, of the encoder. A common approach is to determine distortion estimators that approximate the decrease in mean squared error as data are successively coded and transmitted. Such estimators usually assume that coefficients are uniformly distributed within the quantization interval. Even though this assumption simplifies estimation, it does not exactly correspond to the nature of the signal. This work introduces new distortion estimators determined through a precise approximation of the coefficients' distribution within the quantization intervals. Experimental results obtained when our estimators are used in the post-compression rate-distortion optimization process of JPEG2000 suggest that they approximate distortion with very high accuracy.
Citations: 1
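Under the uniform in-bin assumption the abstract questions, the MSE of midpoint reconstruction in a quantization bin of width delta is delta^2/12; the paper's estimators instead track the coefficients' actual in-bin distribution. A sketch of the two quantities (illustrative, not the paper's estimators):

```python
def uniform_bin_distortion(delta):
    """MSE of midpoint reconstruction when coefficients are assumed
    uniform inside a quantization bin of width delta: delta**2 / 12."""
    return delta ** 2 / 12

def empirical_bin_distortion(coeffs, delta):
    """MSE of the same midpoint reconstruction under the observed
    in-bin positions (coefficients folded into [0, delta))."""
    mid = delta / 2
    return sum((c % delta - mid) ** 2 for c in coeffs) / len(coeffs)

# Coefficients piled near bin edges (Laplacian-like) violate the
# uniform assumption, so the true distortion exceeds delta**2/12.
print(uniform_bin_distortion(1.0))                       # 0.0833...
print(empirical_bin_distortion([0.1, 1.1, 2.1], 1.0) >
      uniform_bin_distortion(1.0))                       # True
```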