
Proceedings DCC '95 Data Compression Conference: Latest Publications

The structure of DMC [dynamic Markov compression]
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515497
S. Bunton
The popular dynamic Markov compression algorithm (DMC) offers state-of-the-art compression performance and matchless conceptual simplicity. In practice, however, the cost of DMC's simplicity and performance is often outrageous memory consumption. Several known attempts at reducing DMC's unwieldy model growth have rendered DMC's compression performance uncompetitive. One reason why DMC's model growth problem has resisted solution is that the algorithm is poorly understood. DMC is the only published stochastic data model for which a characterization of its states, in terms of conditioning contexts, is unknown. Up until now, all that was certain about DMC was that a finite-context characterization exists, which was proved using a finiteness argument. This paper presents and proves the first finite-context characterization of the states of DMC's data model. Our analysis reveals that the DMC model, with or without its counterproductive portions, offers abstract structural features not found in other models. Ironically, the space-hungry DMC algorithm actually has a greater capacity for economical model representation than its counterparts have. Once identified, DMC's distinguishing features combine easily with the best features from other techniques. These combinations have the potential for achieving very competitive compression/memory tradeoffs.
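As a rough illustration of where the model growth comes from, DMC's per-bit count update and state-cloning step can be sketched as follows; the field names and cloning thresholds are illustrative choices, not Bunton's characterization:

```python
# Rough sketch of DMC's count update and state-cloning step; field names
# and the cloning thresholds are illustrative, not the paper's notation.

class State:
    def __init__(self):
        self.count = [1.0, 1.0]   # adaptive frequency counts for bits 0/1
        self.next = [None, None]  # successor state for each bit

def predict(s, bit):
    """Probability the model assigns to `bit` in state `s`."""
    return s.count[bit] / (s.count[0] + s.count[1])

def update(s, bit, min_cnt1=2.0, min_cnt2=2.0):
    """Count the observed bit, cloning the successor when it is heavily shared."""
    t = s.next[bit]
    # Clone when this transition is well used AND the target state also
    # receives substantial traffic from elsewhere.
    if s.count[bit] > min_cnt1 and (t.count[0] + t.count[1] - s.count[bit]) > min_cnt2:
        clone = State()
        ratio = s.count[bit] / (t.count[0] + t.count[1])
        clone.count = [t.count[0] * ratio, t.count[1] * ratio]
        t.count = [t.count[0] - clone.count[0], t.count[1] - clone.count[1]]
        clone.next = list(t.next)
        s.next[bit] = clone
        t = clone
    s.count[bit] += 1
    return t
```

Starting from a single state that loops to itself, repeated updates grow the state machine without bound, which is exactly the memory-consumption behavior the paper addresses.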
Citations: 2
A new model of perceptual threshold functions for application in image compression systems
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515527
K. S. Prashant, V. J. Mathews, Peter J. Hahn
This paper discusses the development of a perceptual threshold model for the human visual system. The perceptual threshold functions describe the levels of distortion present at each location in an image that human observers cannot detect. Models of perceptual threshold functions are useful in image compression because an image compression system that constrains the distortion in the coded images below the levels suggested by the perceptual threshold function performs perceptually lossless compression. Our model involves the decomposition of an input image into its Fourier components and spatially localized Gabor elementary functions. Data from psychophysical masking experiments are then used to calculate the perceptual detection threshold for each Gabor transform coefficient in the presence of sinusoidal masks. The paper also includes the result of one experiment in which an image was distorted using additive noise of the magnitudes suggested by the threshold model.
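The perceptually lossless criterion amounts to a per-location bound on distortion. A minimal sketch, taking the threshold map as an assumed input (the paper's Gabor/masking model that would produce it is not reproduced here):

```python
# Sketch of the "perceptually lossless" criterion: coding distortion is
# invisible when it stays below a per-location threshold map. The map is
# an assumed input here; the Gabor/masking model that produces it is
# not reproduced.
import numpy as np

def perceptually_lossless(original, coded, threshold_map):
    """True when |original - coded| is below the threshold everywhere."""
    err = np.abs(np.asarray(original, float) - np.asarray(coded, float))
    return bool(np.all(err < np.asarray(threshold_map, float)))
```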
Citations: 9
Quantization of wavelet coefficients for image compression
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515593
A. Mohammed, K. Sayood
Summary form only given, as follows. The use of wavelets and multiresolution analysis is becoming increasingly popular for image compression. We examine several different approaches to the quantization of wavelet coefficients. A standard approach in subband coding is to use DPCM to encode the lowest band while the higher bands are quantized using either a scalar quantizer for each band or a vector quantizer. We implement these schemes using a variety of quantizers, including PDF-optimized quantizers and recursively indexed scalar quantizers (RISQ). We then incorporate a threshold operation to prevent the removal of perceptually important information. We show that there are both subjective and objective improvements in performance when we use the RISQ and the perceptual thresholds. The objective performance measure shows a consistent two to three dB improvement over a wide range of rates. Finally, we use a recursively indexed vector quantizer (RIVQ) to encode the wavelet coefficients. The RIVQ can operate at relatively high rates and is therefore particularly suited for quantizing the coefficients in the lowest band.
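The recursive indexing idea can be sketched as a small uniform quantizer whose extreme indices mean "saturate and keep going", so a large value becomes a short run of extreme indices plus one inner index. The step size and level count below are illustrative choices, not the paper's:

```python
# Sketch of a recursively indexed scalar quantizer (RISQ): a small uniform
# quantizer whose extreme indices mean "saturate and keep going", so large
# values become runs of extreme indices plus one inner index.
# Step size and level count are illustrative choices.

def risq_encode(x, step=1.0, levels=5):
    """Return the index sequence representing one input sample."""
    half = (levels - 1) // 2        # inner indices run from -half to +half
    xmax = half * step              # saturation value
    indices = []
    while x > xmax:
        indices.append(half)        # emit +extreme, carry the remainder
        x -= xmax
    while x < -xmax:
        indices.append(-half)
        x += xmax
    indices.append(int(round(x / step)))
    return indices

def risq_decode(indices, step=1.0):
    """Reconstruction is simply the sum of the dequantized indices."""
    return sum(i * step for i in indices)
```

This keeps the index alphabet small regardless of the input's dynamic range, which is what makes the scheme attractive for entropy coding the quantizer output.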
Citations: 1
Unbounded length contexts for PPM
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515495
J. Cleary, W. Teahan
The prediction by partial matching (PPM) data compression scheme has set the performance standard in lossless compression of text throughout the past decade. The original algorithm was first published in 1984 by Cleary and Witten, and a series of improvements was described by Moffat (1990), culminating in a careful implementation, called PPMC, which has become the benchmark version. This still achieves results superior to virtually all other compression methods, despite many attempts to better it. PPM is a finite-context statistical modeling technique that can be viewed as blending together several fixed-order context models to predict the next character in the input sequence. Prediction probabilities for each context in the model are calculated from frequency counts which are updated adaptively, and the symbol that actually occurs is encoded relative to its predicted distribution using arithmetic coding. The paper describes a new algorithm, PPM*, which exploits contexts of unbounded length. It reliably achieves compression superior to PPMC, although our current implementation uses considerably greater computational resources (both time and space). The basic PPM compression scheme is described, showing the use of contexts of unbounded length, and how it can be implemented using a tree data structure. Some results are given that demonstrate an improvement of about 6% over the old method.
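The blending of fixed-order contexts via escapes can be sketched minimally as below, using method C (escape weight equal to the number of distinct symbols seen). This illustrates the fallback idea only; PPM*'s unbounded contexts and the exclusion mechanism are not modelled:

```python
# Minimal sketch of PPM-style blended prediction with escapes, using
# method C (escape weight = number of distinct symbols seen). Bounded
# order, no exclusions: an illustration of the idea, not PPM*.
from collections import defaultdict

class ContextModel:
    def __init__(self, order=2):
        self.order = order
        self.counts = defaultdict(lambda: defaultdict(int))  # context -> symbol counts

    def prob(self, history, symbol, alphabet_size=256):
        """Fall back from the longest context to shorter ones via escapes."""
        p_escape = 1.0
        for k in range(min(self.order, len(history)), -1, -1):
            ctx = tuple(history[len(history) - k:])
            c = self.counts[ctx]
            total = sum(c.values())
            if total == 0:
                continue
            distinct = len(c)
            if symbol in c:
                return p_escape * c[symbol] / (total + distinct)
            p_escape *= distinct / (total + distinct)
        return p_escape / alphabet_size  # order -1: uniform over the alphabet

    def update(self, history, symbol):
        for k in range(min(self.order, len(history)) + 1):
            self.counts[tuple(history[len(history) - k:])][symbol] += 1
```

An arithmetic coder would then charge each symbol roughly -log2 of this blended probability.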
Citations: 385
Fast pattern matching for entropy bounded text
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515518
Shenfeng Chen, J. Reif
We present the first known one-dimensional and two-dimensional string matching algorithms for text with bounded entropy. Let n be the length of the text and m be the length of the pattern. We show that the expected complexity of the algorithms is related to the entropy of the text under various assumptions about the distribution of the pattern. For the case of uniformly distributed patterns, our one-dimensional matching algorithm runs in O(n log m/(pm)) expected time, where H is the entropy of the text and p = 1-(1-H^2)^(H/(1+H)). The worst case running time T can also be bounded by n log m/(p(m+√V)) ≤ T ≤ n log m/(p(m-√V)), where V is the variance of the source from which the pattern is generated. Our algorithm utilizes data structures and probabilistic analysis techniques that are found in certain lossless data compression schemes.
Citations: 2
Alternative methods for codebook design in vector quantization
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515595
V. Delport
A vector quantizer maps a multidimensional vector space into a finite subset of reproduction vectors called a codebook. For codebook optimization, the well-known LBG algorithm or a simulated annealing technique is commonly used. Two alternative methods, the fuzzy c-means (FCM) algorithm and a genetic algorithm (GA), are proposed. To illustrate algorithm performance, a DCT-VQ scheme has been chosen. The fixed partition scheme based on the mean energy per coefficient is shown for the test image "Lena".
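For reference, the baseline the alternatives are compared against, the LBG (generalized Lloyd) iteration, can be sketched as below; random initialization and a fixed iteration count stand in for the usual splitting initialization and distortion-based stopping rule:

```python
# Sketch of the LBG (generalized Lloyd) iteration for codebook design;
# random initialization and a fixed iteration count stand in for the
# usual splitting initialization and distortion-based stopping rule.
import numpy as np

def lbg(training, codebook_size, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    X = np.asarray(training, dtype=float)
    # initialize the codebook with randomly chosen training vectors
    code = X[rng.choice(len(X), codebook_size, replace=False)].copy()
    for _ in range(iters):
        # partition: assign each training vector to its nearest codeword
        d = ((X[:, None, :] - code[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        # centroid update: move each codeword to the mean of its cell
        for j in range(codebook_size):
            members = X[assign == j]
            if len(members):
                code[j] = members.mean(axis=0)
    return code, assign
```

FCM replaces the hard nearest-neighbour partition with fuzzy memberships, and a GA replaces the local centroid iteration with population-based search; both aim to escape the local optima this iteration can get stuck in.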
Citations: 1
Multiresolutional piecewise-linear image decompositions: quantization error propagation and design of "stable" compression schemes
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515580
O. Kiselyov, P. Fisher
Summary form only given. The paper introduces a new approach to the design of stable, tile-effect-free multiresolutional image compression schemes. It focuses on how quantization errors in the decomposition coefficients affect the quality of the decompressed picture, how the errors propagate in a multiresolutional decomposition, and how to design a compression scheme in which the effect of quantization errors is minimized (visually and quantitatively). It also introduces and analyzes the simplest family of Laplacian pyramids (using 3-point causal filters), which yield multiresolutional piecewise-linear image decompositions. This gives reconstructed images a much better visual appearance, without blockiness, as the examples show. The error propagation analysis has led to the discovery of particular Laplacian pyramids in which quantization errors do not amplify as they propagate, but quickly decay.
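The pyramid structure under analysis can be sketched in one dimension: a low band obtained by filtering and downsampling, plus a Laplacian (residual) band that makes reconstruction exact. This is a generic two-band sketch with a pair-averaging filter, not the paper's specific 3-point piecewise-linear filters:

```python
# Generic two-band 1-D Laplacian pyramid sketch with a pair-averaging
# filter; the paper's 3-point piecewise-linear filters are not
# reproduced here.
import numpy as np

def analyze(x):
    """Split an even-length signal into a half-rate low band and a residual."""
    x = np.asarray(x, dtype=float)
    low = x.reshape(-1, 2).mean(axis=1)   # downsample: average adjacent pairs
    detail = x - np.repeat(low, 2)        # Laplacian band: prediction residual
    return low, detail

def synthesize(low, detail):
    """Exact inverse of analyze() when the detail band is not quantized."""
    return np.repeat(low, 2) + detail
```

The paper's question is precisely what happens when `detail` (and the recursively decomposed `low`) are quantized: whether the reconstruction error stays bounded by the quantization error or grows across levels.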
Citations: 0
An image segmentation method based on a color space distribution model
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515549
M. Aizu, O. Nakagawa, M. Takagi
Summary form only given. The use of image segmentation methods to perform second generation image coding has received considerable research attention because homogenized partial image data can be efficiently coded on a separate basis. Regarding color image coding, conventional segmentation techniques are especially useful when applied to a uniform color space; e.g., Miyahara et al. (see IEICE Trans., vol. J76-D-II, no. 5, pp. 1023-1037, 1993) developed an image segmentation method for still image coding which performs clustering in a uniform color space and implements segment integration techniques. One drawback of such methodology, however, is that the shape of the distribution of color data is treated as a "black box". On the other hand, the distribution of data for an object in a scene can be described by the "dichromatic surface model", in which the light reflected from a point on a dielectric nonuniform material is described by a linear combination of two components: (1) the light reflected off the material surface, and (2) the light reflected from the inside of the material body. Based on this model, we propose a heuristic model that describes the distribution shape using one or more ellipses corresponding to an object body in uniform color space, where the start and end points of each ellipse both lie on the luminance axis. To test the method's performance, we carried out a computer simulation.
Citations: 2
Improving LZFG data compression algorithm
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515585
Jianmin Jiang
Summary form only given. This paper presents two approaches to improving the LZFG data compression algorithm. One is to introduce a self-adaptive word-based scheme that achieves a significant improvement for English text compression. The other is to apply a simple move-to-front scheme to further reduce the redundancy within the statistics of the copy nodes. The experiments show that an overall improvement is achieved with both approaches. The self-adaptive word-based scheme takes all consecutive English characters as one word; any other ASCII character is taken as a single word. As an example, the input message '(2+x) is represented by y' can be classified into 9 words. To run the word-based scheme in a PATRICIA tree, the data structure is modified.
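The second approach relies on the classic move-to-front transform, which can be sketched briefly; the copy-node bookkeeping inside LZFG itself is not modelled here:

```python
# Sketch of a move-to-front transform over a symbol alphabet: recently
# used symbols get small indices, which skews the index statistics and
# leaves less redundancy for the final entropy coder. The copy-node
# bookkeeping inside LZFG itself is not modelled here.

def mtf_encode(symbols, alphabet):
    table = list(alphabet)
    out = []
    for s in symbols:
        i = table.index(s)
        out.append(i)
        table.insert(0, table.pop(i))  # move the symbol to the front
    return out
```

Because the output indices cluster near zero for locally repetitive input, the subsequent entropy coder can assign them shorter codes.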
Citations: 0
Generalized region based transform coding for video compression
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515588
K. Sum, R. Murch
Summary form only given. Block based transform coding (BBTC) is among the most popular coding methods for video compression due to the simplicity of its hardware implementation. At low bit rates, however, this approach cannot maintain acceptable resolution and image quality. On the other hand, region based coding methods have been shown to improve visual quality by taking human perception into account. In order to combine the advantages of both coding methods, a novel technique is introduced that merges BBTC and region based coding. Using this technique, a new class of video coding methods is generated, termed region based transform coding (RBTC). In the generalized RBTC, we represent regions containing motion in terms of texture surrounded by contours. Contours and textures are then coded separately. The novel technique is that the pixel values of the regions are scanned to form a vector, which is then converted into a number of fixed-size image blocks. Using this technique, conventional transform coding can be applied to the blocks of texture directly. Contours can be coded using traditional contour coding methods or any other bit-plane encoding methods. To prove the idea of this new class of video coding methods, a scheme called segmented motion transform coding (SMTC) is simulated. In SMTC, chain codes are used for contour coding. The simulations are performed using the first 60 frames of both the CIF formatted "Miss America" and "Salesman" video sequences.
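The contour coding step uses chain codes, which can be sketched with the standard 8-direction Freeman scheme; the direction numbering below (counter-clockwise from east) is one common convention and not necessarily the one used in SMTC:

```python
# Sketch of 8-direction Freeman chain coding for contours; the direction
# numbering (counter-clockwise from east) is one common convention and
# not necessarily the one used in SMTC.

DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_encode(points):
    """Encode a pixel contour as direction codes between adjacent points."""
    return [DIRS.index((x1 - x0, y1 - y0))
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

def chain_decode(start, codes):
    """Rebuild the contour from its start point and direction codes."""
    pts = [start]
    for c in codes:
        dx, dy = DIRS[c]
        pts.append((pts[-1][0] + dx, pts[-1][1] + dy))
    return pts
```

Each contour step costs only 3 bits this way, which is what makes separate contour/texture coding competitive at low bit rates.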
{"title":"Generalized region based transform coding for video compression","authors":"K. Sum, R. Murch","doi":"10.1109/DCC.1995.515588","DOIUrl":"https://doi.org/10.1109/DCC.1995.515588","url":null,"abstract":"Summary form only given. Block based transform coding (BBTC) is among the most popular coding method for video compression due to its simplicity of hardware implementation. At low bit rate transmission however this approach cannot maintain acceptable resolution and image quality. On the other hand, region based coding methods have been shown to have the capability to improve the visual quality by the acknowledgment of human perception. In order to take the advantages from both of the coding methods, a novel technique is introduced to combine BBTC and region based coding. Using this technique, a new class of video coding methods are generated and termed region based transform coding (RBTC). In the generalized RBTC, we represent regions containing motion in terms of texture surrounded by contours. Contours and textures are then coded separately. The novel technique is that the pixel values of the regions are scanned to form a vector. Then the vector is further converted to a number of fixed size image blocks. Using this technique, conventional transform coding can be applied on the blocks of texture directly. Contour can be coded using traditional contour coding methods or any other bit plane encoding methods. To prove the idea of this new class of video coding methods, a scheme called segmented motion transform coding (SMTC) is simulated. In SMTC, chain codes are used for contour coding. 
The simulations are performed using the first 60 frames of both of the CIF formatted \"Miss America\" and \"Salesman\" video sequences.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125124798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
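SMTC's chain-code contour coding can be illustrated with a minimal sketch. The abstract only states that chain codes are used; the 8-directional Freeman convention below (code 0 = east, counting counter-clockwise) is an assumption for illustration.

```python
def chain_code(contour):
    # Encode a pixel contour, given as a list of (x, y) points where
    # consecutive points are 8-connected neighbors, as Freeman chain
    # codes: direction (dx, dy) -> code, counter-clockwise from east.
    dirs = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
            (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}
    codes = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        codes.append(dirs[(x1 - x0, y1 - y0)])
    return codes

# A tiny closed square contour traced through its four corner pixels.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(chain_code(square))   # [0, 2, 4, 6]
```

Since each code needs only 3 bits, a contour of n boundary steps costs roughly 3n bits plus a starting coordinate, which is the kind of compact shape description RBTC pairs with block-transformed texture.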