
Proceedings DCC '97. Data Compression Conference: Latest Publications

An iterative technique for universal lossy compression of individual sequences
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.581995
Daniel Manor, M. Feder
Universal lossy compression of a data sequence can be obtained by fitting to the source sequence a "simple" reconstruction sequence that can be encoded efficiently and yet be within a tolerable distortion from the given source sequence. We develop iterative algorithms to find such a reconstruction sequence, for a given source sequence, using different criteria of simplicity for the reconstruction sequence. As a result we obtain a practical universal lossy compression method. The proposed method can be applied to source sequences defined over finite or continuous alphabets. We discuss the relation between our method and quantization techniques like entropy coded vector quantization (ECVQ) and trellis coded quantization (TCQ).
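As a rough illustration of the fit-under-a-distortion-constraint idea (not the authors' algorithm), the sketch below greedily updates a reconstruction sequence to lower a toy simplicity cost, here its zeroth-order empirical entropy, while keeping every sample within a per-sample error bound. The alphabet, the cost function, and the per-sample (rather than average) distortion constraint are all simplifying assumptions made for illustration.

```python
import numpy as np
from collections import Counter

def zeroth_order_cost(seq):
    """Empirical zeroth-order entropy (bits/symbol) as a toy 'simplicity' criterion."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * np.log2(c / n) for c in counts.values())

def iterative_fit(source, alphabet, max_err, sweeps=5):
    """Coordinate-wise sweeps: at each position keep the reconstruction symbol that
    lowers the simplicity cost while staying within the per-sample error bound."""
    recon = list(source)  # start from the source itself (zero distortion)
    for _ in range(sweeps):
        changed = False
        for i, x in enumerate(source):
            best, best_cost = recon[i], zeroth_order_cost(recon)
            for a in alphabet:
                if abs(a - x) <= max_err:
                    old = recon[i]
                    recon[i] = a
                    cost = zeroth_order_cost(recon)
                    if cost < best_cost:
                        best, best_cost = a, cost
                    recon[i] = old
            if best != recon[i]:
                recon[i], changed = best, True
        if not changed:
            break
    return recon

source = [3, 4, 4, 5, 9, 10, 9, 4]
print(iterative_fit(source, alphabet=range(16), max_err=1))
```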
Citations: 1
L∞-constrained high-fidelity image compression via adaptive context modeling
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.581978
Xiaolin Wu, W. K. Choi, P. Bao
We study high-fidelity image compression with a given tight bound on the maximum error magnitude. We propose some practical adaptive context modeling techniques to correct prediction biases caused by quantizing prediction residues, a problem common to current DPCM-like predictive nearly-lossless image coders. By incorporating the proposed techniques into the nearly-lossless version of CALIC, we were able to increase its PSNR by 1 dB or more and/or reduce its bit rate by ten per cent or more. More encouragingly, at bit rates around 1.25 bpp our method obtained competitive PSNR results against the best wavelet coders, while obtaining much smaller maximum error magnitude.
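A minimal sketch of the bias-correction idea, assuming a 1-D DPCM coder, a uniform near-lossless quantizer with step 2δ+1, and a crude one-neighbour context; the actual coder works on 2-D images with CALIC-style contexts, which are not reproduced here.

```python
import numpy as np
from collections import defaultdict

def near_lossless_1d(x, delta):
    """Toy 1-D DPCM coder with a guaranteed max error of `delta` per sample and a
    per-context running bias correction fed back into the predictor."""
    step = 2 * delta + 1
    recon = np.zeros_like(x)
    bias_sum = defaultdict(int)   # accumulated signed error per context
    bias_cnt = defaultdict(int)
    prev = 0
    for i, v in enumerate(x):
        ctx = prev // 32                                   # crude context: previous level bucket
        bias = bias_sum[ctx] // bias_cnt[ctx] if bias_cnt[ctx] else 0
        pred = prev + bias                                 # bias-corrected prediction
        r = int(v) - pred
        q = int(np.sign(r)) * ((abs(r) + delta) // step)   # uniform quantizer, |error| <= delta
        recon[i] = pred + q * step
        bias_sum[ctx] += int(v) - int(recon[i])            # learn the residual bias per context
        bias_cnt[ctx] += 1
        prev = int(recon[i])
    return recon

x = np.array([100, 102, 105, 104, 180, 182, 181, 90])
y = near_lossless_1d(x, delta=2)
print(y, np.max(np.abs(x - y)))   # max error never exceeds delta
```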
Citations: 22
Adaptive vector quantization-Part I: a unifying structure
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582094
J. Fowler
Summary form only given. Although rate-distortion theory establishes optimal coding properties for vector quantization (VQ) of stationary sources, the fact that real sources are, in actuality, nonstationary has led to the proposal of adaptive-VQ (AVQ) algorithms that compensate for changing source statistics. Because of the scarcity of rate-distortion results for nonstationary sources, proposed AVQ algorithms have been mostly heuristically, rather than analytically, motivated. As a result, there has been, to date, little attempt to develop a general model of AVQ or to compare the performance associated with existing AVQ algorithms. We summarize observations resulting from detailed studies of a number of previously published AVQ algorithms. To our knowledge, the observations represent the first attempt to define and describe AVQ in a general framework. We begin by proposing a mathematical definition of AVQ. Because of the large variety of algorithms that have purported to be AVQ, it is unclear from prior literature precisely what is meant by this term. Any resulting confusion is likely due to a certain imprecise, and sometimes ambiguous, use of the word "adaptive" in VQ literature. However, common to a large part of this literature is the notion that AVQ properly refers to techniques that dynamically vary the contents of a VQ codebook as coding progresses. Our definition of AVQ captures this idea of progressive codebook updating in a general mathematical framework.
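To make the "progressive codebook updating" notion concrete, here is one hypothetical AVQ update rule: code by the nearest codeword when it lies within a distortion threshold, otherwise send the vector verbatim, insert it into the codebook, and evict the least-recently-used entry. The paper's framework covers many such rules; this specific one is only an assumption for illustration.

```python
import numpy as np

def avq_encode(vectors, codebook_size, dist_thresh):
    """Minimal adaptive-VQ sketch: code a vector by its nearest codeword when that
    codeword is close enough, otherwise transmit the vector verbatim and insert it
    into the codebook, evicting the least-recently-used entry."""
    codebook, last_used, output = [], [], []
    for t, v in enumerate(vectors):
        hit = False
        if codebook:
            d = [float(np.sum((v - c) ** 2)) for c in codebook]
            j = int(np.argmin(d))
            hit = d[j] <= dist_thresh
        if hit:
            output.append(("index", j))          # cheap: codebook hit
            last_used[j] = t
        else:
            output.append(("literal", v))        # expensive: send the vector itself
            if len(codebook) < codebook_size:
                codebook.append(v)
                last_used.append(t)
            else:
                j = int(np.argmin(last_used))    # evict the LRU codeword
                codebook[j], last_used[j] = v, t
    return output

rng = np.random.default_rng(0)
data = [rng.integers(0, 4, size=4) for _ in range(20)]
hits = sum(1 for kind, _ in avq_encode(data, codebook_size=8, dist_thresh=2.0) if kind == "index")
print(hits, "codebook hits out of", len(data))
```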
Citations: 8
Fast residue coding for lossless textual image compression
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582065
C. Constantinescu, R. Arps
Lossless textual image compression based on pattern matching classically includes a "residue" coding step that refines an initially lossy reconstructed image to its lossless original form. This step is typically accomplished by arithmetically coding the predicted value for each lossless image pixel, based on the values of previously reconstructed nearby pixels in both the lossless image and its precursor lossy image. Our contribution describes background typical prediction (TPR-B), a fast method for residue coding based on "typical prediction", which permits skipping pixels that would otherwise be arithmetically encoded; and non-symbol typical prediction (TPR-NS), an improved compression method for residue coding also based on "typical prediction". Experimental results are reported based on the residue coding method proposed in Howard's SPM algorithm (see Proc. of '96 Data Compression Conf., Snowbird, Utah, p.210-19, 1996) and the lossy images it generates when applied to eight CCITT bi-level test images. These results demonstrate that after lossy image coding, 88% of the lossless image pixels in the test set can be predicted using TPR-B and need not be residue coded at all. In terms of saved SPM arithmetic coding operations during residue coding, TPR-B achieves an average coding speed increase of 8 times. Using TPR-NS together with TPR-B increases the SPM residue coding compression ratios by an average of 11%.
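A toy sketch of the typical-prediction idea, under the assumption that a pixel is judged "typical" when its lossy value and its causal lossless neighbours all agree; the decision uses only causally available data, so a decoder could repeat it, and a cheap exception flag covers the rare mispredictions. The actual TPR-B/TPR-NS context rules differ and are not reproduced here.

```python
import numpy as np

def typical_prediction_pass(lossless, lossy):
    """Count how many bi-level pixels could be skipped by a causal 'typical' test:
    pixels whose lossy value and already-decoded lossless neighbours agree are
    predicted outright; mispredicted ones take an exception flag; the rest go to
    the ordinary context-based arithmetic coder."""
    h, w = lossless.shape
    predicted, exceptions, coded = 0, 0, 0
    for y in range(h):
        for x in range(w):
            ctx = [int(lossy[y, x])]
            if x > 0:
                ctx.append(int(lossless[y, x - 1]))
            if y > 0:
                ctx.append(int(lossless[y - 1, x]))
            if len(set(ctx)) == 1:                   # context says "typical"
                if int(lossless[y, x]) == ctx[0]:
                    predicted += 1                   # skipped entirely
                else:
                    exceptions += 1                  # cheap exception flag
            else:
                coded += 1                           # arithmetic coded as usual
    return predicted, exceptions, coded

rng = np.random.default_rng(1)
img = (rng.random((32, 32)) > 0.9).astype(np.uint8)
noisy = img.copy()
noisy[rng.random(img.shape) > 0.98] ^= 1             # crude stand-in for a lossy version
print(typical_prediction_pass(img, noisy))
```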
Citations: 13
Low bit rate color image coding with adaptive encoding of wavelet coefficients
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582118
S. Meadows, S. Mitra
We report the performance of the embedded zerotree wavelet (EZW) coder using successive-approximation quantization and adaptive arithmetic coding for effective reduction in bit rates while maintaining high visual quality of reconstructed color images. For 24-bit color images, excellent visual quality is maintained up to a bit rate reduction to approximately 0.48 bpp by EZW, yielding a compression ratio (CR) of 50:1. Further bit rate reduction to 0.375 bpp results in visible degradation by EZW, as is the case when using the adaptive vector quantizer AFLC-VQ. However, the bit rate reduction by AFLC-VQ was computed from the quantizer output and did not include any subsequent entropy coding. Therefore, entropy coding of the multi-resolution codebooks generated by adaptive vector quantization of the wavelet coefficients in the AFLC-VQ scheme should reduce the bit rate to at least 0.36 bpp (CR 67:1) at the desired quality currently obtainable at 0.48 bpp by EZW.
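The compression ratios quoted above follow directly from the ratio of the source bit depth to the coded rate; as a quick check:

$$ \mathrm{CR} = \frac{24\ \text{bpp}}{0.48\ \text{bpp}} = 50{:}1, \qquad \frac{24\ \text{bpp}}{0.375\ \text{bpp}} = 64{:}1, \qquad \frac{24\ \text{bpp}}{0.36\ \text{bpp}} \approx 67{:}1. $$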
Citations: 3
An overhead reduction technique for mega-state compression schemes
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582061
A. Bookstein, S. T. Klein, T. Raita
Many of the most effective compression methods involve complicated models. Unfortunately, as model complexity increases, so does the cost of storing the model itself. This paper examines a method to reduce the amount of storage needed to represent a Markov model with an extended alphabet, by applying a clustering scheme that brings together similar states. Experiments run on a variety of large natural language texts show that much of the overhead of storing the model can be saved at the cost of a very small loss of compression efficiency.
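One way to picture the clustering step (a hedged sketch, not the paper's actual criterion): greedily merge states whose next-symbol distributions are within a small total-variation distance of each other, so that only one shared distribution needs to be stored per cluster.

```python
import numpy as np

def cluster_states(cond_probs, threshold):
    """Greedy sketch of state clustering: a state joins the first cluster whose
    representative next-symbol distribution is within `threshold` in total
    variation; otherwise it starts a new cluster. A real scheme would also weight
    by state occupancy counts, omitted here."""
    clusters = []                      # each cluster: (representative distribution, member ids)
    for s, p in enumerate(cond_probs):
        for rep, members in clusters:
            if 0.5 * np.abs(rep - p).sum() < threshold:   # total-variation distance
                members.append(s)
                break
        else:
            clusters.append((p, [s]))
    return clusters

probs = np.array([[0.70, 0.20, 0.10],
                  [0.68, 0.22, 0.10],
                  [0.10, 0.10, 0.80],
                  [0.09, 0.12, 0.79]])
for rep, members in cluster_states(probs, threshold=0.05):
    print(members, rep)
```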
Citations: 5
Compression of generalised Gaussian sources
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582131
A. Puga, A. P. Alves
Summary form only given. This article introduces a non-linear statistical approach to interframe video coding, assuming a priori that the source is non-Gaussian. To this end, generalised Gaussian (GG) modelling and high-order statistics are used, and a new optimal coding problem is identified as a simultaneous diagonalisation of 2nd and 4th order cumulant tensors. This problem, named the high-order Karhunen-Loeve transform (HOKLT), is an independent component analysis (ICA) method. Using the available linear techniques for cumulant tensor diagonalisation, the HOKLT problem cannot, in general, be solved exactly. Considering the impossibility of solving the HOKLT problem within the linear group, a non-linear methodology named non-linear independent components analysis (NLICA) that solves the HOKLT problem was introduced. The structure of the analysis operator produced by NLICA is a linear-nonlinear-linear transformation where the first linear stage is an isoentropic ICA operator and the last linear stage is a principal components analysis (PCA) operator. The non-linear stage is diagonal and converts marginal densities to Gaussianity while conserving marginal entropies. Considering the three basic coding modes within DPCM video coders and the three colour components, there are nine different sources. Fitting these sources to the GG family, as done in this work, shows how far from Gaussianity they are and supports the effectiveness of GG modelling.
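For reference, the generalised Gaussian family usually meant in this setting has the density below; α is a scale parameter and the shape parameter β measures the departure from Gaussianity (β = 2 recovers the Gaussian, β = 1 the Laplacian). The exact parametrisation used by the authors may differ.

$$ f(x) = \frac{\beta}{2\,\alpha\,\Gamma(1/\beta)} \exp\!\left[-\left(\frac{|x-\mu|}{\alpha}\right)^{\beta}\right] $$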
Citations: 1
Perceptually lossless image compression
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582100
Peter J. Hahn, V. John Mathews
Summary form only given. This paper presents an algorithm for perceptually lossless image compression. The approach utilizes properties of the human visual system in the form of a perceptual threshold function (PTF) model. The PTF model determines the amount of distortion that can be introduced at each location of the image. Thus, constraining all quantization errors to levels below the PTF results in perceptually lossless image compression. The system employs a modified form of the embedded zerotree wavelet (EZW) coding algorithm that limits the quantization errors of the wavelet transform coefficients to levels below those specified by the model of the perceptual threshold function. Experimental results demonstrate perceptually lossless compression of monochrome images at bit rates ranging from 0.4 to 1.2 bits per pixel at a viewing distance of six times the image height and at bit rates from 0.2 to 0.5 bits per pixel at a viewing distance of twelve times the image height.
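A minimal sketch of threshold-constrained quantization: if the local perceptual threshold is t, a uniform quantizer with step 2t keeps every error magnitude at or below t. The PTF model itself is not reproduced; the thresholds below are hypothetical placeholders.

```python
import numpy as np

def threshold_constrained_quantize(coeffs, thresholds):
    """Quantize each value with a per-location uniform step chosen so the error
    magnitude never exceeds the local threshold (a stand-in for a PTF model)."""
    step = 2.0 * thresholds                        # |error| <= step / 2 = threshold
    indices = np.round(coeffs / step).astype(int)  # what the entropy coder would see
    recon = indices * step
    assert np.all(np.abs(coeffs - recon) <= thresholds + 1e-9)
    return indices, recon

coeffs = np.array([10.3, -4.7, 0.2, 33.1])
thresholds = np.array([1.0, 0.5, 0.25, 4.0])       # hypothetical per-location thresholds
print(threshold_constrained_quantize(coeffs, thresholds))
```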
Citations: 7
Effective management of compressed data with packed file systems
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582127
Y. Okada, M. Tokuyo, S. Yoshida, N. Okayasu, H. Shimoi
Summary form only given. Lossless data compression is commonly used on personal computers to increase their storage capacity. For example, we can get twice the normal capacity by using lossless data compression algorithms. However, it is necessary to locate compressed data of variable sizes in a fixed-size block with as little fragmentation as possible. This can be accomplished by compressed data management (CDM). The amount of storage capacity provided by data compression depends on the ability of CDM. If CDM does not eliminate fragmentation sufficiently, one cannot attain the storage capacity corresponding to the compression ratio. We present an efficient CDM using a new packed file system (PFS). We confirmed that the PFS achieves and maintains 95% of high space efficiency by using only 1/1000 of the table size needed for the entire storage capacity without employing garbage collection.
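A first-fit packing sketch of the underlying allocation problem (placing variable-size compressed chunks into fixed-size blocks); the PFS's actual table layout and update policy are not described in the summary, so everything below is an illustrative assumption.

```python
def pack_chunks(chunk_sizes, block_size):
    """First-fit packing of variable-size compressed chunks into fixed-size blocks.
    Returns a (block, offset) placement per chunk plus the overall space efficiency.
    A real packed file system also needs free-space reclamation on rewrites."""
    blocks = []                                   # remaining free bytes per block
    placement = []
    for size in chunk_sizes:                      # assumes each chunk fits in one block
        for b, free in enumerate(blocks):
            if free >= size:
                placement.append((b, block_size - free))
                blocks[b] -= size
                break
        else:
            blocks.append(block_size - size)
            placement.append((len(blocks) - 1, 0))
    used = sum(chunk_sizes)
    return placement, used / (len(blocks) * block_size)

sizes = [1800, 2500, 900, 3100, 700, 1200]        # compressed chunk sizes in bytes
placement, efficiency = pack_chunks(sizes, block_size=4096)
print(placement, f"{efficiency:.0%} of allocated block space holds data")
```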
Citations: 0
An embedded wavelet video coder using three-dimensional set partitioning in hierarchical trees (SPIHT)
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582048
Beong-Jo Kim, W. Pearlman
The SPIHT (set partitioning in hierarchical trees) algorithm by Said and Pearlman (see IEEE Trans. on Circuits and Systems for Video Technology, no.6, p.243-250, 1996) is known to have produced some of the best results in still image coding. It is a fully embedded wavelet coding algorithm with precise rate control and low complexity. We present an application of the SPIHT algorithm to video sequences, using three-dimensional (3D) wavelet decompositions and 3D spatio-temporal dependence trees. A full 3D-SPIHT encoder/decoder is implemented in software and is compared against MPEG-2 in parallel simulations. Although there is no motion estimation or compensation in the 3D SPIHT, it performs measurably and visually better than MPEG-2, which employs complicated motion estimation and compensation.
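To illustrate the separable 3-D decomposition such a coder builds on, the sketch below applies one level of Haar analysis along the temporal and both spatial axes of a small clip; the paper's actual filter bank and the SPIHT spatio-temporal tree structure are not reproduced here.

```python
import numpy as np

def haar_1d(a, axis):
    """One level of Haar analysis along one axis: low-pass (averages) followed by
    high-pass (differences), concatenated along that axis."""
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / 2.0
    hi = (a[0::2] - a[1::2]) / 2.0
    return np.moveaxis(np.concatenate([lo, hi], axis=0), 0, axis)

def haar_3d(video):
    """One level of separable 3-D decomposition: filter along time, then along the
    two spatial axes, producing eight spatio-temporal subbands in one array."""
    out = video.astype(float)
    for axis in range(3):
        out = haar_1d(out, axis)
    return out

clip = np.random.default_rng(2).integers(0, 256, size=(8, 16, 16))  # (frames, rows, cols)
sub = haar_3d(clip)
print(sub.shape)   # same shape; low/high halves along every axis
```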
Citations: 361