
Proceedings DCC '95 Data Compression Conference: Latest Publications

Lossless compression by simulated annealing
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515562
R. Bowen-Wright, K. Sayood
Summary form only given. Linear predictive schemes are among the simplest techniques in lossless image compression. In spite of their simplicity, they have proven to be surprisingly efficient; the current JPEG image coding standard uses linear predictive coders in its lossless mode. Predictive coding was originally used in lossy compression techniques such as differential pulse code modulation (DPCM). In these techniques the prediction error is quantized and the quantized value is transmitted to the receiver. To reduce the quantization error it was necessary to reduce the prediction error variance, so techniques for generating "optimum" predictor coefficients generally attempt to minimize some measure of that variance. In lossless compression, however, the objective is to minimize the entropy of the prediction error, so variance-minimizing techniques may not be best suited for obtaining the predictor coefficients. We have attempted to obtain the predictor coefficients for lossless image compression by minimizing the first-order entropy of the prediction error, using simulated annealing to perform the minimization. One way to improve the performance of linear predictive techniques is to first remap the pixel values so that a histogram of the remapped image contains no "holes".
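As a concrete illustration of the idea above, here is a minimal sketch in Python: a three-neighbor causal predictor (W, N, NW) whose coefficients are tuned by simulated annealing to minimize the first-order entropy of the rounded prediction residual. The neighbor set, cooling schedule, and step size are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def prediction_entropy(img, coeffs):
    """First-order entropy (bits/pixel) of the rounded prediction residual."""
    w, n, nw = coeffs
    pred = w * img[1:, :-1] + n * img[:-1, 1:] + nw * img[:-1, :-1]
    resid = np.round(img[1:, 1:] - pred).astype(int)
    _, counts = np.unique(resid, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def anneal_predictor(img, steps=2000, t0=1.0, alpha=0.995, seed=0):
    """Simulated annealing over predictor coefficients, minimizing residual entropy."""
    rng = np.random.default_rng(seed)
    coeffs = np.array([0.5, 0.5, -0.25])            # a common starting predictor
    cur = best = prediction_entropy(img, coeffs)
    best_coeffs = coeffs.copy()
    t = t0
    for _ in range(steps):
        cand = coeffs + rng.normal(scale=0.05, size=3)   # random perturbation
        e = prediction_entropy(img, cand)
        # always accept downhill moves; accept uphill with Boltzmann probability
        if e < cur or rng.random() < np.exp((cur - e) / t):
            coeffs, cur = cand, e
            if e < best:
                best, best_coeffs = e, cand.copy()
        t *= alpha                                   # geometric cooling
    return best_coeffs, best
```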
Citations: 5
A new model of perceptual threshold functions for application in image compression systems
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515527
K. S. Prashant, V. J. Mathews, Peter J. Hahn
This paper discusses the development of a perceptual threshold model for the human visual system. The perceptual threshold functions describe the levels of distortion at each location in an image that human observers cannot detect. Models of perceptual threshold functions are useful in image compression because a compression system that constrains the distortion in the coded images below the levels suggested by the perceptual threshold function performs perceptually lossless compression. Our model involves the decomposition of an input image into its Fourier components and spatially localized Gabor elementary functions. Data from psychophysical masking experiments are then used to calculate the perceptual detection threshold for each Gabor transform coefficient in the presence of sinusoidal masks. The result of one experiment, in which an image was distorted using additive noise of the magnitudes suggested by the threshold model, is also included.
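The operational criterion in this abstract, constraining coded-image distortion below the threshold function, reduces to a simple elementwise check once a threshold map is available. A minimal sketch, assuming a per-location threshold array produced elsewhere by the masking model (the Fourier/Gabor analysis itself is not reproduced here):

```python
import numpy as np

def perceptually_lossless(original, coded, thresholds):
    """True if every distortion magnitude stays below the modeled
    detection threshold at its location (perceptually lossless)."""
    distortion = np.abs(original.astype(float) - coded.astype(float))
    return bool(np.all(distortion <= thresholds))

def add_threshold_noise(image, thresholds, seed=0):
    """Distort an image with additive noise whose magnitude at each
    location equals the modeled threshold (random sign), mirroring
    the experiment described in the abstract."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=image.shape)
    return image + signs * thresholds
```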
Citations: 9
Constraining the size of the instantaneous alphabet in trellis quantizers
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515492
M. F. Larsen, R. L. Frost
A method is developed for decreasing the computational complexity of a trellis quantizer (TQ) encoder. We begin by developing a rate-distortion theory under a constraint on the average instantaneous number of quanta considered. This constraint has practical importance: in a TQ, the average instantaneous number of quanta is exactly the average number of multiplies required at the encoder. The theory shows that if the conditional probability of each quantum is restricted to a finite region of support, the instantaneous number of quanta considered can be made quite small at little or no cost in SQNR performance. Simulations of TQs confirm this prediction. This reduction in complexity makes practical the use of model-based TQs (MTQs), which had previously been considered computationally unreasonable. For speech, gains of several dB in SQNR over adaptive predictive schemes of similar computational complexity are obtained using only a first-order MTQ.
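To make the encoder-complexity argument concrete, below is a small Viterbi search for a hypothetical 4-state, 1-bit-per-sample trellis quantizer in which branch metrics are evaluated only for output levels within a window of the current sample, i.e. a restricted instantaneous alphabet. The trellis structure, output levels, and window are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Illustrative 4-state trellis: the output level depends on the current
# state and the input bit; the next state is a 2-bit shift register.
LEVELS = np.array([[-1.5, 0.5],
                   [-0.5, 1.5],
                   [-1.5, 0.5],
                   [-0.5, 1.5]])

def next_state(state, bit):
    return ((state << 1) | bit) & 0b11

def tq_encode(samples, window=2.0):
    """Viterbi search. Branch metrics are computed only for levels within
    `window` of the sample (the finite region of support); if pruning
    kills every path at some step, that step is redone unpruned."""
    n = len(samples)
    cost = np.full(4, np.inf)
    cost[0] = 0.0                                # encoding starts in state 0
    back = np.zeros((n, 4, 2), dtype=int)        # (prev_state, bit) per step/state
    for t, s in enumerate(samples):
        for prune in (True, False):
            new = np.full(4, np.inf)
            for st in range(4):
                if not np.isfinite(cost[st]):
                    continue
                for bit in (0, 1):
                    lv = LEVELS[st, bit]
                    if prune and abs(lv - s) > window:
                        continue                 # quantum outside the support
                    c = cost[st] + (s - lv) ** 2 # one multiply per quantum kept
                    ns = next_state(st, bit)
                    if c < new[ns]:
                        new[ns] = c
                        back[t, ns] = (st, bit)
            if np.isfinite(new).any():
                break
        cost = new
    st = int(np.argmin(cost))                    # trace back the best path
    bits = []
    for t in range(n - 1, -1, -1):
        st, bit = back[t, st]
        bits.append(int(bit))
    return bits[::-1]

def tq_decode(bits):
    """Replay the path from state 0 to reconstruct the quantized signal."""
    st, out = 0, []
    for b in bits:
        out.append(LEVELS[st, b])
        st = next_state(st, b)
    return np.array(out)
```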
Citations: 0
Compression of hyperspectral imagery using hybrid DPCM/DCT and entropy-constrained trellis coded quantization
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515522
G. Abousleman
A system utilizing trellis coded quantization (TCQ) is presented for the compression of hyperspectral imagery. Specifically, DPCM is used to spectrally decorrelate the hyperspectral data, while a 2-D discrete cosine transform (DCT) coding scheme is used for spatial decorrelation. Entropy-constrained codebooks are designed using a modified version of the generalized Lloyd algorithm. The coder achieves compression ratios greater than 70:1, with the average PSNR of the coded hyperspectral sequence exceeding 40.0 dB.
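A minimal sketch of the hybrid decorrelation structure: closed-loop previous-band DPCM for spectral decorrelation, then a blockwise 2-D DCT with a plain uniform quantizer standing in for the paper's entropy-constrained TCQ. The 8x8 block size, quantizer step, and scipy-based DCT are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_cube(cube, q=8.0):
    """cube: (bands, H, W) with H, W multiples of 8. Returns quantized
    DCT coefficients per band and the decoder-side reconstruction
    (the DPCM loop predicts from reconstructed, not original, bands)."""
    bands, H, W = cube.shape
    recon = np.empty((bands, H, W))
    coeffs = []
    prev = np.zeros((H, W))                      # prediction for band 0
    for b in range(bands):
        resid = cube[b].astype(float) - prev     # spectral DPCM residual
        qb = np.empty((H // 8, W // 8, 8, 8))
        rec = np.empty((H, W))
        for i in range(0, H, 8):
            for j in range(0, W, 8):
                C = dctn(resid[i:i+8, j:j+8], norm='ortho')
                Cq = np.round(C / q)             # uniform quantizer
                qb[i // 8, j // 8] = Cq
                rec[i:i+8, j:j+8] = idctn(Cq * q, norm='ortho')
        coeffs.append(qb)
        recon[b] = prev + rec
        prev = recon[b]
    return coeffs, recon
```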
Citations: 17
Unbounded length contexts for PPM
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515495
J. Cleary, W. Teahan
The prediction by partial matching (PPM) data compression scheme has set the performance standard in lossless compression of text throughout the past decade. The original algorithm was first published in 1984 by Cleary and Witten, and a series of improvements was described by Moffat (1990), culminating in a careful implementation, called PPMC, which has become the benchmark version. This still achieves results superior to virtually all other compression methods, despite many attempts to better it. PPM is a finite-context statistical modeling technique that can be viewed as blending together several fixed-order context models to predict the next character in the input sequence. Prediction probabilities for each context in the model are calculated from frequency counts which are updated adaptively, and the symbol that actually occurs is encoded relative to its predicted distribution using arithmetic coding. The paper describes a new algorithm, PPM*, which exploits contexts of unbounded length. It reliably achieves compression superior to PPMC, although our current implementation uses considerably greater computational resources (both time and space). The basic PPM compression scheme is described, showing the use of contexts of unbounded length and how it can be implemented using a tree data structure. Results are given that demonstrate an improvement of about 6% over the old method.
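The blending-with-escapes mechanism can be demonstrated without a full arithmetic coder by accumulating ideal code lengths, -log2 p, for each symbol. A minimal fixed-order sketch with PPMC-style escape weights (escape count equal to the number of distinct symbols seen in a context); update exclusions and the unbounded-length contexts of PPM* are omitted.

```python
import math
from collections import defaultdict

class SimplePPM:
    """Order-k PPM code-length estimator with PPMC-style escapes."""

    def __init__(self, order=2):
        self.order = order
        self.ctx = defaultdict(lambda: defaultdict(int))  # context -> {symbol: count}

    def cost_and_update(self, history, sym):
        bits, found = 0.0, False
        for k in range(min(self.order, len(history)), -1, -1):
            counts = self.ctx[history[len(history) - k:]]
            total = sum(counts.values())
            if total == 0:
                continue                    # empty context: escape for free
            distinct = len(counts)
            if counts.get(sym, 0) > 0:
                bits -= math.log2(counts[sym] / (total + distinct))
                found = True
                break
            bits -= math.log2(distinct / (total + distinct))  # escape cost
        if not found:
            bits += 8.0                     # order -1: uniform over 256 bytes
        for k in range(min(self.order, len(history)) + 1):    # adaptive update
            self.ctx[history[len(history) - k:]][sym] += 1
        return bits

model = SimplePPM(order=2)
data = b"abracadabra abracadabra"
total = sum(model.cost_and_update(data[:i], data[i]) for i in range(len(data)))
print(f"{total / len(data):.2f} bits/symbol")
```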
Citations: 385
Multiresolutional piecewise-linear image decompositions: quantization error propagation and design of "stable" compression schemes
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515580
O. Kiselyov, P. Fisher
Summary form only given. The paper introduces a new approach to the design of stable, tile-effect-free multiresolutional image compression schemes. It focuses on how quantization errors in the decomposition coefficients affect the quality of the decompressed picture, how the errors propagate in a multiresolutional decomposition, and how to design a compression scheme in which the effect of quantization errors is minimized, both visually and quantitatively. It also introduces and analyzes the simplest family of Laplacian pyramids (using 3-point causal filters), which yield multiresolutional piecewise-linear image decompositions. This gives reconstructed images a much better visual appearance without blockiness, as the examples show. The error propagation analysis has led to the discovery of particular Laplacian pyramids in which quantization errors do not amplify as they propagate, but quickly decay.
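A minimal 1-D sketch of such a pyramid: each detail band is the residual against a piecewise-linear expansion of the next coarser level, so reconstruction is exact when the bands are not quantized. The symmetric 3-point kernel (1/4, 1/2, 1/4) and linear interpolation are illustrative choices; the paper's filters are causal.

```python
import numpy as np

def reduce1d(x):
    """Smooth with the 3-point kernel (1/4, 1/2, 1/4), then decimate 2:1."""
    pad = np.pad(x, 1, mode='edge')
    smooth = 0.25 * pad[:-2] + 0.5 * pad[1:-1] + 0.25 * pad[2:]
    return smooth[::2]

def expand1d(c, n):
    """Piecewise-linear interpolation back to length n."""
    return np.interp(np.arange(n), np.arange(len(c)) * 2, c)

def laplacian_pyramid(x, levels=3):
    pyr = []
    for _ in range(levels):
        c = reduce1d(x)
        pyr.append(x - expand1d(c, len(x)))   # detail (residual) band
        x = c
    pyr.append(x)                             # coarsest approximation
    return pyr

def reconstruct(pyr):
    x = pyr[-1]
    for detail in reversed(pyr[:-1]):
        x = expand1d(x, len(detail)) + detail
    return x

sig = np.sin(np.linspace(0, 6, 64)) + 0.1 * np.random.default_rng(0).normal(size=64)
assert np.allclose(reconstruct(laplacian_pyramid(sig)), sig)  # exact without quantization
```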
Citations: 0
An image segmentation method based on a color space distribution model
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515549
M. Aizu, O. Nakagawa, M. Takagi
Summary form only given. The use of image segmentation methods to perform second-generation image coding has received considerable research attention because homogenized partial image data can be efficiently coded on a separate basis. For color image coding, conventional segmentation techniques are especially useful when applied to a uniform color space; e.g., Miyahara et al. (see IEICE Trans. on D-II, vol. J76-D-II, no. 5, p. 1023-1037, 1993) developed an image segmentation method for still image coding which performs clustering in a uniform color space and implements segment integration techniques. One drawback of such methodology, however, is that the shape of the distribution of color data is treated as a "black box". On the other hand, the distribution of data for an object in a scene can be described by the "dichromatic surface model", where the light reflected from a point on a dielectric nonuniform material is described by a linear combination of two components: (1) the light reflected off the material surface, and (2) the light reflected off the inside of the material body. Based on this model, we propose a heuristic model that describes the distribution shape using one or more ellipses corresponding to an object body in a uniform color space, where the start and end points of each ellipse both lie on the luminance axis. To test the method's performance, we carried out a computer simulation.
Citations: 2
Improving LZFG data compression algorithm
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515585
Jianmin Jiang
Summary form only given. This paper presents two approaches to improving the LZFG data compression algorithm. One is to introduce a self-adaptive word-based scheme, which achieves a significant improvement for English text compression. The other is to apply a simple move-to-front scheme to further reduce the redundancy within the statistics of copy nodes. Experiments show that an overall improvement is achieved by both approaches. The self-adaptive word-based scheme takes each run of consecutive English characters as one word; any other ASCII character is taken as a single word. As an example, the input message '(2+x) is represented by y' is classified into 9 words. To run the word-based scheme in a PATRICIA tree, the data structure is modified.
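The tokenization rule can be sketched directly from the example in the abstract: a run of consecutive English letters forms one word, and every other character stands alone. Whether spaces count as words is not stated; skipping whitespace, as below, reproduces the quoted count of 9 words.

```python
import re

# One word = a run of English letters, or any single non-whitespace character.
TOKEN = re.compile(r'[A-Za-z]+|\S')

def tokenize(text):
    return TOKEN.findall(text)

print(tokenize('(2+x) is represented by y'))
# ['(', '2', '+', 'x', ')', 'is', 'represented', 'by', 'y']  -> 9 words
```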
Citations: 0
Fast pattern matching for entropy bounded text
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515518
Shenfeng Chen, J. Reif
We present the first known one-dimensional and two-dimensional string matching algorithms for text with bounded entropy. Let n be the length of the text and m the length of the pattern. We show that the expected complexity of the algorithms is related to the entropy of the text under various assumptions about the distribution of the pattern. For uniformly distributed patterns, our one-dimensional matching algorithm runs in O(n log m/(pm)) expected time, where H is the entropy of the text and p = 1 - (1 - H^2)^(H/(1+H)). The worst-case running time T can also be bounded by n log m/(p(m+√V)) ≤ T ≤ n log m/(p(m-√V)), where V is the variance of the source from which the pattern is generated. Our algorithm utilizes data structures and probabilistic analysis techniques found in certain lossless data compression schemes.
Citations: 2
Generalized region based transform coding for video compression
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515588
K. Sum, R. Murch
Summary form only given. Block-based transform coding (BBTC) is among the most popular coding methods for video compression due to its simplicity of hardware implementation. At low bit rates, however, this approach cannot maintain acceptable resolution and image quality. On the other hand, region-based coding methods have been shown to improve visual quality by taking human perception into account. To combine the advantages of both coding methods, a novel technique is introduced that merges BBTC and region-based coding. Using this technique, a new class of video coding methods is generated, termed region-based transform coding (RBTC). In generalized RBTC, we represent regions containing motion as texture surrounded by contours, and the contours and textures are coded separately. The novel step is that the pixel values of each region are scanned to form a vector, which is then converted into a number of fixed-size image blocks. Conventional transform coding can then be applied to the blocks of texture directly, while the contour can be coded using traditional contour coding methods or any other bit-plane encoding method. To demonstrate this new class of video coding methods, a scheme called segmented motion transform coding (SMTC) is simulated, in which chain codes are used for contour coding. The simulations use the first 60 frames of the CIF-formatted "Miss America" and "Salesman" video sequences.
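The packing step (scan an arbitrarily shaped region into a vector, fold the vector into fixed-size blocks, then apply an ordinary block transform) can be sketched as follows. The 8x8 block size, mean-padding of the tail, and scipy DCT are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn

def region_to_blocks(image, mask, bs=8):
    """Scan the pixels of an arbitrarily shaped region (raster order)
    into a vector, pad the tail with the region mean, fold into
    bs x bs blocks, and DCT each block."""
    vec = image[mask].astype(float)           # region pixels as a vector
    pad = (-len(vec)) % (bs * bs)
    vec = np.concatenate([vec, np.full(pad, vec.mean())])
    blocks = vec.reshape(-1, bs, bs)
    return [dctn(b, norm='ortho') for b in blocks]

# Example: a circular region in a synthetic image (the contour itself
# would be coded separately, e.g. with chain codes as in SMTC).
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
coeff_blocks = region_to_blocks(img, mask)
```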
Citations: 0