
Proceedings DCC '97. Data Compression Conference: Latest Publications

Compression of silhouette-like images based on WFA
Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.582089
K. Culík, V. Valenta, J. Kari
Summary form only given. The authors present the design of a lossy fractal compression method for silhouette-like bi-level images that has an excellent quality-to-compression-rate ratio. Their approach is based on weighted finite automata (WFA). They reduce the problem of encoding a silhouette-like bi-level image to the encoding of two one-variable functions describing the boundaries of the black and white regions of the given image. One advantage is that the automata encoding different bitplanes can share states.
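To illustrate the WFA mechanism the abstract refers to, here is a minimal sketch of how a weighted finite automaton assigns a value to a dyadic subinterval of [0, 1): the bits of the interval's address select transition matrices whose product, bracketed by initial and final vectors, gives the function's average value there. The two-state automaton and all weights below are toy placeholders, not the automaton the paper's encoder would infer.

```python
import numpy as np

# Toy 2-state WFA; matrices and vectors are illustrative placeholders.
A = {
    0: np.array([[1.0, 0.0], [0.5, 0.5]]),  # transition weights for address bit 0
    1: np.array([[0.5, 0.5], [0.0, 1.0]]),  # transition weights for address bit 1
}
initial = np.array([1.0, 0.0])  # initial weight vector over states
final = np.array([0.5, 1.0])    # final vector: each state's average value

def wfa_value(bits):
    """Average value of the represented function on the dyadic subinterval
    of [0, 1) addressed by `bits` (e.g., [0, 1] addresses [0.25, 0.5))."""
    v = initial
    for b in bits:
        v = v @ A[b]          # multiply in the matrix chosen by this bit
    return float(v @ final)

print(wfa_value([0, 1]))  # value on [0.25, 0.5)
```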
Citations: 15
On maximal parsings of strings
Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.582052
H. Helfgott, M. Cohn
Given a sequence, we consider the maximum number of distinct phrases in any parsing; this definition of complexity is invariant under string reversal. We show that the Lempel-Ziv (1976, 1978) parsings can vary under reversal by a factor on the order of the log of the sequence length. We give two interpretations of maximal parsing, show that they are not equivalent and that one lacks a plausible monotonicity property.
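As a concrete point of comparison, the following minimal sketch counts the phrases in an LZ78-style incremental parsing of a string and of its reversal; this is only one of the parsings the paper analyzes, and the test string is arbitrary.

```python
def lz78_phrase_count(s):
    """Number of phrases in the LZ78 incremental parsing of s."""
    dictionary = {""}
    phrase = ""
    count = 0
    for ch in s:
        phrase += ch
        if phrase not in dictionary:  # phrase is new: emit it, start over
            dictionary.add(phrase)
            count += 1
            phrase = ""
    if phrase:                        # trailing (possibly repeated) phrase
        count += 1
    return count

s = "abababababbbab"
# The two counts may differ: the parsing is not invariant under reversal.
print(lz78_phrase_count(s), lz78_phrase_count(s[::-1]))
```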
Citations: 1
A lexicographic framework for MPEG rate control
Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.581982
Dzung T. Hoang, Elliot L. Linzer, J. Vitter
We consider the problem of allocating bits among pictures in an MPEG video coder to equalize the visual quality of the coded pictures, while meeting buffer and channel constraints imposed by the MPEG video buffering verifier. We address this problem within a framework that consists of three components: (1) a bit production model for the input pictures, (2) a set of bit-rate constraints imposed by the video buffering verifier, and (3) a novel lexicographic criterion for optimality. Under this framework, we derive simple necessary and sufficient conditions for optimality that lead to efficient algorithms.
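A minimal sketch of the lexicographic flavor of optimality: two candidate bit allocations are compared through their per-picture distortion profiles, sorted from worst to best, so the allocation with the better worst case wins, ties being broken by the next-worst picture. The profiles and this exact comparison rule are illustrative assumptions, not the paper's formulation.

```python
def lex_better(d1, d2):
    """True if distortion profile d1 is lexicographically better than d2:
    compare per-picture distortions sorted from worst to best."""
    return sorted(d1, reverse=True) < sorted(d2, reverse=True)

# Equal total distortion, but the second profile has a worse worst picture,
# so the equalized profile is lexicographically preferred:
print(lex_better([3.0, 3.0, 3.0], [5.0, 2.0, 2.0]))  # True
```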
Citations: 3
POCS based error concealment for packet video
Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.582151
G.-S. Yu, M. Marcellin, M.M.-K. Liu
Summary form only given. This paper proposes a new error concealment algorithm for packet video that effectively eliminates error propagation effects. Most standard video codecs use motion compensation to remove temporal redundancy. With such motion-compensated interframe processing, any packet loss may cause serious error propagation over more than 10 consecutive frames, leading to perceptually annoying artifacts. Thus, proper error concealment algorithms are needed to reduce this effect. The proposed algorithm adopts a one-pixel block-overlap coding structure to solve the error propagation problem. If no packet loss occurs, the decoded pixel intensities on the overlap areas should be consistent (up to small differences caused by quantization error). When a packet loss occurs, the corresponding reconstructed frame and any frames referring to it are all damaged, causing inconsistent pixel intensities on the overlap areas of the damaged frames. The proposed method poses packet loss recovery as a parameter estimation problem: lost transform coefficients are estimated by projection onto convex sets (POCS), in a manner that maximizes the consistency of pixel intensities in the overlap areas of the reconstructed frames. Experimental results (using a modified version of CCITT H.261) show good error concealment even when the damaged frame loses all of its DCT coefficients.
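The following toy sketch shows the POCS iteration pattern the method relies on: repeatedly project the current estimate onto each convex constraint set until it is consistent with all of them. The two sets here, a box and a mean constraint, merely stand in for the paper's overlap-consistency and coefficient constraints.

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box {x : lo <= x <= hi} (e.g., coefficient bounds)."""
    return np.clip(x, lo, hi)

def project_mean(x, target):
    """Projection onto the affine set {x : mean(x) = target} (a consistency set)."""
    return x + (target - x.mean())

x = np.array([0.0, 4.0, 9.0])      # initial estimate
for _ in range(50):                 # alternate projections until consistent
    x = project_box(x, lo=1.0, hi=6.0)
    x = project_mean(x, target=3.0)
print(x)  # lies (approximately) in the intersection of both sets
```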
Citations: 0
Models of English text
Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.581953
W. Teahan, J. Cleary
The problem of constructing models of English text is considered. A number of applications of such models, including cryptology, spelling correction, and speech recognition, are reviewed. The best current models for English text have been the result of research into compression. Not only is compression an important application of such models, but the amount of compression achieved provides a measure of how well the models perform. Three main classes of models are considered: character-based models, word-based models, and models that use auxiliary information in the form of parts of speech. These models are compared in terms of their memory usage and compression.
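Since compression serves as the figure of merit here, a minimal sketch of how a character-based model can be scored follows: the cross-entropy, in bits per character, of an order-2 context model with add-one smoothing. The toy corpus and the smoothing rule are assumptions for illustration, not the paper's experimental setup.

```python
import collections
import math

def bits_per_char(train, test, order=2, alphabet_size=256):
    """Cross-entropy (bits/char) of an order-`order` character model,
    trained on `train` with add-one smoothing, evaluated on `test`."""
    counts = collections.Counter()      # (context, char) occurrence counts
    ctx_counts = collections.Counter()  # context occurrence counts
    for i in range(order, len(train)):
        ctx = train[i - order:i]
        counts[(ctx, train[i])] += 1
        ctx_counts[ctx] += 1
    total = 0.0
    for i in range(order, len(test)):
        ctx, ch = test[i - order:i], test[i]
        p = (counts[(ctx, ch)] + 1) / (ctx_counts[ctx] + alphabet_size)
        total += -math.log2(p)
    return total / (len(test) - order)

print(bits_per_char("the cat sat on the mat " * 20, "the cat sat on the mat"))
```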
Citations: 26
Parametric warping for motion estimation
Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.582124
Aria Nosratinia
Summary form only given. In warping (also known as mesh-based) motion estimation, motion vectors at individual pixels are computed through an interpolation of a subsampled set of motion vectors. A method for calculating optimal warping coefficients was introduced previously. That algorithm finds the interpolation coefficients at each individual pixel location (within a block) such that the mean squared luminance error is minimized. It has been observed that the optimal coefficients vary widely over time and across different sequences. This observation motivates optimizing the warping coefficients locally in time. However, doing so requires the encoder to transmit the coefficients to the decoder. Assuming a 16×16 block and four floating-point coefficients per pixel, this would require a considerable overhead in bitrate. Especially in low-bitrate regimes, such overhead is likely to be unacceptable. This paper proposes a parametric class of functions to represent the warping interpolation kernels; more specifically, we propose to use a two-parameter family of functions.
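For orientation, here is a minimal sketch of warping motion estimation with a fixed bilinear kernel: per-pixel motion vectors are interpolated from vectors at the four block corners. The paper's point is to replace such fixed per-pixel weights with a two-parameter kernel family so that only the parameters need transmitting; the corner vectors below are made-up values, and the bilinear kernel is only a stand-in for that family.

```python
import numpy as np

B = 16  # block size, matching the 16x16 blocks discussed above
corners = {  # motion vectors (dy, dx) at the four block corners; toy values
    (0, 0): np.array([1.0, 0.0]), (0, B): np.array([0.0, 2.0]),
    (B, 0): np.array([2.0, 1.0]), (B, B): np.array([1.0, 1.0]),
}

def pixel_motion(y, x):
    """Bilinearly interpolated motion vector at pixel (y, x) in the block."""
    wy, wx = y / B, x / B
    return ((1 - wy) * (1 - wx) * corners[(0, 0)]
            + (1 - wy) * wx * corners[(0, B)]
            + wy * (1 - wx) * corners[(B, 0)]
            + wy * wx * corners[(B, B)])

print(pixel_motion(4, 12))
```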
Citations: 1
A fast block-sorting algorithm for lossless data compression
Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.582137
Dianne M Schindler
Summary form only given. Introduces a new transformation for block-sorting data compression methods. The transformation is similar to the one presented by Burrows and Wheeler, but avoids the drawbacks of uncertain runtime and low performance on large blocks. The cost is a small compression loss and a slower inverse transformation. In addition, it is well suited to hardware implementation. Typical applications include real-time data recording, fast communication lines, on-the-fly compression, and any other task requiring high throughput. The difference between this transformation and the original block-sorting transformation is that the original sorts on unlimited context, whereas this one sorts on limited context (typically a few bytes) and uses the position in the input block to determine the sort order when contexts are equal.
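A minimal sketch of the limited-context idea: rotations of the block are ordered by only their first k bytes, with the original position breaking ties, which bounds the sort cost even on highly repetitive data. Emitting the last column mirrors the Burrows-Wheeler transform; the index needed for inversion is omitted here, so this is illustrative rather than a complete codec.

```python
def limited_context_transform(data, k=2):
    """Sort rotations of `data` by their first k bytes only, breaking ties
    by original position, and return the last column of the sorted rotations."""
    n = len(data)
    order = sorted(
        range(n),
        key=lambda i: (bytes(data[(i + j) % n] for j in range(k)), i),
    )
    return bytes(data[(i - 1) % n] for i in order)

print(limited_context_transform(b"abracadabra", k=2))
```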
Citations: 95
A remapping technique based on permutations for lossless compression of multispectral images
Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.582067
Z. Arnavut
Multispectral images, such as Thematic Mapper (TM) images, have high spectral correlation among some bands. These bands also have different dynamic ranges. Hence, when linear predictive techniques are employed to exploit the spectral and spatial correlation among the bands of a TM image, the variance of the prediction errors becomes greater. Markas and Reif (1993) used histogram equalization (modification) techniques for lossy compression of multispectral images. In general, histogram equalization techniques are not reversible. However, by defining a monotonically increasing transformation, so that two adjacent gray values will not map to the same gray value in the transformed image, and selecting a target image with a wider probability density function than the source image, one can define a reversible mapping. We introduce a distinct reversible remapping scheme that utilizes sorting permutations. This technique differs from histogram equalization: it is a reversible transformation. We show that, by utilizing the introduced remapping technique and employing linear predictive techniques on a pair of bands, one can achieve better lossless compression than the results reported previously.
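To make the reversibility argument concrete, here is a minimal sketch of a monotonically increasing remapping in which every distinct source gray value receives a distinct target value, so the map inverts exactly. The uniform target spacing over a wider range is an illustrative choice, not the paper's permutation-derived mapping.

```python
import numpy as np

def build_remap(band, out_max=65535):
    """Monotone remap of the band's distinct gray values onto a wider range.
    Distinct inputs get distinct outputs, so the map is exactly invertible."""
    levels = np.unique(band)  # distinct gray values, in increasing order
    targets = np.linspace(0, out_max, len(levels)).round().astype(np.int64)
    forward = dict(zip(levels.tolist(), targets.tolist()))
    inverse = dict(zip(targets.tolist(), levels.tolist()))
    return forward, inverse

band = np.array([12, 12, 40, 200, 40, 12])
fwd, inv = build_remap(band)
remapped = np.vectorize(fwd.get)(band)
restored = np.vectorize(inv.get)(remapped)
assert (restored == band).all()  # the mapping is reversible
```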
Citations: 3
Adaptive vector quantization. II. Classification and comparison of algorithms
Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.582095
J. Fowler
Summary form only given. For pt. I see ibid., p. 436, 1997. We review prominent examples of adaptive vector quantization (AVQ) algorithms from the prior literature and develop a classification of these algorithms. Well-known theorems from rate-distortion theory suggest two approaches to the nonadaptive vector quantization (VQ) of a stationary, ergodic random process. These two nonadaptive VQ approaches have, in turn, inspired two general types of AVQ algorithms for the coding of nonstationary sources. Constrained-distortion AVQ algorithms limit the distortion to some maximum value and then attempt to minimize the rate subject to this distortion constraint. Constrained-rate AVQ algorithms do the opposite, limiting the rate to at most some maximum value and attempting to produce a coding with the smallest distortion. A third category, rate-distortion-based algorithms, minimizes a rate-distortion cost function. We discuss each of the three categories of AVQ algorithms in detail and mention notable algorithms in each category. Afterwards, we summarize the discussion with an algorithm taxonomy. Finally, we present experimental results for several prominent AVQ algorithms on an artificial nonstationary random process. Our results suggest, first, that rate-distortion-based algorithms are capable of coding performance superior to that of the other algorithms, particularly for low-rate coding, and, second, that complex batch coding algorithms are not as competitive as simpler, online algorithms.
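As a small illustration of the rate-distortion-based selection rule in the third category, this sketch picks the codeword minimizing the Lagrangian cost J = D + λR, trading distortion against the bits needed to signal the index. The codebook, per-index rates, and λ are toy values, not drawn from any of the surveyed algorithms.

```python
import numpy as np

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])  # toy codewords
rates = np.array([1.0, 2.0, 2.0])  # bits to signal each index
lam = 0.5                          # Lagrangian trade-off parameter

def encode(x):
    """Index of the codeword minimizing distortion + lam * rate."""
    dist = ((codebook - x) ** 2).sum(axis=1)  # squared-error distortion
    return int(np.argmin(dist + lam * rates))

print(encode(np.array([0.9, 1.2])))  # picks index 1 despite its higher rate
```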
Citations: 0
Compression of functions defined on surfaces of 3D objects
Pub Date: 1997-03-25 DOI: 10.1109/DCC.1997.582051
K. Kolarov, W. Lynch
We present a technique to compress scalar functions defined on 2-manifolds. Our approach combines discrete wavelet transforms with zerotree compression, building on ideas from three previous developments: the lifting scheme, spherical wavelets, and embedded zerotree coding methods. Applications lie in the efficient storage and rapid transmission of complex data sets. Typical data sets are Earth topography, satellite images, and surface parametrizations. Our contribution is the novel combination and application of these techniques to general 2-manifolds.
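The lifting scheme this approach builds on can be shown in one dimension: a predict step turns the odd samples into detail coefficients, an update step corrects the even samples to preserve the running average, and both steps invert exactly. This circular, linear-predict instance is an illustrative special case, not the spherical construction on 2-manifolds itself.

```python
import numpy as np

def lift_forward(x):
    """One lifting step: predict odds from even neighbours, then update evens."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - (even + np.roll(even, -1)) / 2      # predict step
    coarse = even + (detail + np.roll(detail, 1)) / 4  # update step
    return coarse, detail

def lift_inverse(coarse, detail):
    """Undo the update, then the predict; exact by construction."""
    even = coarse - (detail + np.roll(detail, 1)) / 4
    odd = detail + (even + np.roll(even, -1)) / 2
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.arange(8)
c, d = lift_forward(x)
assert np.allclose(lift_inverse(c, d), x)  # perfect reconstruction
```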
Citations: 18