
Latest publications from Proceedings DCC '95 Data Compression Conference

Matching pursuit video coding at very low bit rates
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515531
Ralph Neff, A. Zakhor
Matching pursuits refers to a greedy algorithm which matches structures in a signal to a large dictionary of functions. In this paper, we present a matching-pursuit based video coding system which codes motion residual images using a large dictionary of Gabor functions. One feature of our system is that bits are assigned progressively to the highest-energy areas in the motion residual image. The large dictionary size is another advantage, since it allows structures in the motion residual to be represented using few significant coefficients. Experimental results compare the performance of the matching-pursuit system to a hybrid-DCT system at various bit rates between 6 and 128 kbit/s. Additional experiments show how the matching pursuit system performs if the Gabor dictionary is replaced by an 8×8 DCT dictionary.
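The greedy selection loop can be sketched in a few lines. This is a toy illustration with a small orthonormal dictionary, not the paper's Gabor dictionary or coder:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter):
    """Greedy matching pursuit: repeatedly pick the dictionary atom with
    the largest inner product against the current residual, record its
    coefficient, and subtract that atom's contribution."""
    residual = signal.astype(float).copy()
    picks = []
    for _ in range(n_iter):
        corr = dictionary @ residual          # correlation with every atom
        k = int(np.argmax(np.abs(corr)))      # best-matching atom
        picks.append((k, corr[k]))
        residual = residual - corr[k] * dictionary[k]
    return picks, residual

# Toy dictionary: rows are orthonormal atoms, so a signal that is sparse
# in the dictionary is recovered in a few iterations.
rng = np.random.default_rng(0)
D = np.linalg.qr(rng.normal(size=(8, 8)))[0].T
signal = 3.0 * D[1] - 2.0 * D[5]
picks, residual = matching_pursuit(signal, D, n_iter=8)
```

With a large, redundant (non-orthogonal) dictionary such as Gabor functions the same loop applies, but convergence is only asymptotic and atom selection dominates the cost.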
Cited by: 37
A comparison of the Z, E₈ and Leech lattices for image subband quantization
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515521
Zheng Gao, Feng Chen, B. Belzer, J. Villasenor
Lattice vector quantization schemes offer high coding efficiency without the burden associated with generating and searching a codebook. The distortion associated with a given lattice is often expressed in terms of the G number, which is a measure of the mean square error per dimension generated by quantization of a uniform source. Subband image coefficients, however, are best modeled by a generalized Gaussian, leading to distortion characteristics that are quite different from those encountered for uniform, Laplacian, or Gaussian sources. We present here the distortion associated with Z, E₈, and Leech lattice quantization for coding of generalized Gaussian sources, and show that for low bit rates the Z lattice offers both the best performance and the lowest implementational complexity.
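As a quick illustration of the G number for the simplest of these lattices: nearest-neighbor quantization to Z is componentwise rounding, and a Monte Carlo estimate over a uniform source recovers the known per-dimension MSE of 1/12 (a numerical sketch only, not the paper's generalized-Gaussian analysis):

```python
import numpy as np

# Quantizing to the Z lattice is componentwise rounding; for a uniform
# source the per-dimension mean square error (the G number) is 1/12.
rng = np.random.default_rng(1)
x = rng.uniform(-0.5, 0.5, size=(200_000, 4))  # uniform over a Z Voronoi cell
err = x - np.round(x)                           # nearest Z lattice point
G_z = float(np.mean(err ** 2))                  # per-dimension MSE, ~1/12
```

E₈ and Leech quantization need fast nearest-point search in those lattices, which is where the extra implementational complexity the abstract mentions comes from.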
Cited by: 10
Parallel algorithms for the static dictionary compression
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515506
H. Nagumo, Mi Lu, K. Watson
Studies parallel algorithms for two static dictionary compression strategies. One is the optimal dictionary compression with dictionaries that have the prefix property, for which our algorithm requires O(L+log n) time and O(n) processors, where L is the maximum allowable length of the dictionary entries, while previous results run in O(L+log n) time using O(n²) processors, or in O(L+log² n) time using O(n) processors. The other algorithm is the longest-fragment-first (LFF) dictionary compression, for which our algorithm requires O(L+log n) time and O(nL) processors, while the previous result has O(L log n) time performance on O(n/log n) processors. We also show that the sequential LFF dictionary compression can be computed online with a lookahead of length O(L²).
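For reference, the sequential analogue of optimal dictionary parsing is a short dynamic program over the text; this sketch only shows that baseline, not the paper's parallel algorithm:

```python
def optimal_parse(text, dictionary):
    """Minimum-phrase parse of `text` using a static dictionary
    (sequential dynamic programming over text positions)."""
    n = len(text)
    INF = float("inf")
    best = [INF] * (n + 1)   # best[i] = fewest phrases covering text[:i]
    back = [None] * (n + 1)  # back[i] = (previous position, phrase used)
    best[0] = 0
    for i in range(n):
        if best[i] == INF:
            continue
        for w in dictionary:
            if text.startswith(w, i) and best[i] + 1 < best[i + len(w)]:
                best[i + len(w)] = best[i] + 1
                back[i + len(w)] = (i, w)
    phrases, i = [], n       # walk the back-pointers to recover the parse
    while i > 0:
        i, w = back[i]
        phrases.append(w)
    return phrases[::-1]
```

The prefix property (every prefix of a dictionary entry is itself an entry) is what the parallel algorithm exploits; the DP above does not need it but runs in O(n·|dictionary|) time.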
Cited by: 20
The effect of non-greedy parsing in Ziv-Lempel compression methods
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515520
R. Horspool
Most practical compression methods in the LZ77 and LZ78 families parse their input using a greedy heuristic. However, the popular gzip compression program demonstrates that modest but significant gains in compression performance are possible if non-greedy parsing is used. Practical implementations for using non-greedy parsing in LZ77 and LZ78 compression are explored and some experimental measurements are presented.
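A minimal sketch of the idea, using a toy static dictionary rather than the adaptive LZ77/LZ78 dictionaries the paper studies: greedy parsing takes the longest match at each step, while a one-step lookahead can produce fewer phrases:

```python
def greedy_parse(text, dictionary):
    """Greedy: always take the longest dictionary match at the cursor."""
    i, phrases = 0, []
    while i < len(text):
        m = max((w for w in dictionary if text.startswith(w, i)), key=len)
        phrases.append(m)
        i += len(m)
    return phrases

def lookahead_parse(text, dictionary):
    """Non-greedy (one-step lookahead): among matches at the cursor, pick
    the one maximizing its length plus the longest match that follows."""
    i, phrases = 0, []
    while i < len(text):
        cands = [w for w in dictionary if text.startswith(w, i)]
        def score(w, i=i):
            j = i + len(w)
            nxt = max((len(v) for v in dictionary if text.startswith(v, j)),
                      default=0)
            return len(w) + nxt
        m = max(cands, key=score)
        phrases.append(m)
        i += len(m)
    return phrases

# Greedy takes "ab" and then needs two more phrases; lookahead takes "a"
# so that the long phrase "bcd" becomes available.
D = ["a", "ab", "b", "bcd", "c", "d"]
```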
Cited by: 44
Vector quantization and clustering: a pyramid approach
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515592
D. Tamir, Chi-Yeon Park, Wook-Sung Yoo
A multi-resolution K-means clustering method is presented. Starting with a low-resolution sample of the input data, the K-means algorithm is applied to a sequence of monotonically increasing-resolution samples of the given data. The cluster centers obtained from a low-resolution stage are used as initial cluster centers for the next, higher-resolution stage. The idea behind this method is that a good estimate of the initial location of the cluster centers can be obtained through K-means clustering of a sample of the input data. K-means clustering of the entire data, with the initial cluster centers estimated by clustering a sample of the input data, reduces the convergence time of the algorithm.
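The coarse-to-fine initialization can be sketched as follows; the Lloyd iterations are standard, and the doubling sample schedule is an illustrative assumption:

```python
import numpy as np

def kmeans(X, centers, n_iter=20):
    """Plain Lloyd iterations from the given starting centers."""
    centers = centers.copy()
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(len(centers)):
            members = X[labels == j]
            if len(members):                 # keep old center if cluster empties
                centers[j] = members.mean(axis=0)
    return centers, labels

def pyramid_kmeans(X, k, levels=3, rng=None):
    """Multi-resolution K-means: cluster a coarse random sample first, then
    reuse its centers to initialize K-means on successively larger samples,
    finishing with a pass over the full data."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(X)
    centers = X[rng.choice(n, size=k, replace=False)].astype(float)
    for level in range(levels, 0, -1):
        m = min(n, max(k, n >> level))       # sample size doubles per stage
        sample = X[rng.choice(n, size=m, replace=False)]
        centers, _ = kmeans(sample, centers)
    return kmeans(X, centers)                # final full-resolution pass

# Two well-separated blobs: the coarse stages already place the centers
# near the blob means, so the final full pass converges quickly.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.5, size=(200, 2)),
               rng.normal(10.0, 0.5, size=(200, 2))])
centers, labels = pyramid_kmeans(X, k=2, rng=rng)
centers = centers[np.argsort(centers[:, 0])]   # stable order for inspection
```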
Cited by: 8
Lossless region coding schemes
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515591
M. Turner
Summary form only given. Describing regions as separate entities within an image has been applied within specific fields of image compression for many years. This study hopes to show that the technique, when applied with care, is practical for virtually all image types. Three different schemes for segmenting and coding an image have been considered: array covering; region numbering; and edge following.
Cited by: 0
Real-time VLSI compression for high-speed wireless local area networks
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515541
Bongjin Jung, W. Burleson
Summary form only presented; substantially as follows. Presents a new compact, power-efficient, and scalable VLSI array for the first Lempel-Ziv (LZ) algorithm to be used in high-speed wireless data communication systems. Lossless data compression can be used to inexpensively halve the amount of data to be transmitted, thus improving the effective bandwidth of the communication channel and, in turn, the overall network performance. For wireless networks, the data rate and latency requirements are appropriate for a dedicated VLSI implementation of LZ compression. The nature of wireless networks requires that any additional VLSI hardware also be small, low-power and inexpensive. The architecture uses a novel custom systolic array and a simple dictionary FIFO which is implemented using conventional SRAM. The architecture consists of M simple processing elements, where M is the maximum length of the string to be replaced with a codeword, which for practical LAN applications can range from 16 to 32. The systolic cell has been optimized to remove any superfluous state information or logic, thus making it completely dedicated to the task of LZ compression. A prototype chip has been implemented using 2 μm CMOS technology. Using M=32, and assuming a 2:1 compression ratio, the system can process approximately 90 Mbps with a 100 MHz clock rate.
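In software terms, the comparison work done by the M processing elements corresponds to a bounded longest-match search over a history window; this is a sketch of that matching step only, not of the VLSI architecture:

```python
def lz_window_match(window, lookahead, max_len=32):
    """Longest match of the lookahead inside a bounded history window.
    max_len plays the role of M, the maximum replaceable string length."""
    best_pos, best_len = 0, 0
    for pos in range(len(window)):
        length = 0
        while (length < max_len and length < len(lookahead)
               and pos + length < len(window)
               and window[pos + length] == lookahead[length]):
            length += 1
        if length > best_len:
            best_pos, best_len = pos, length
    return best_pos, best_len
```

The systolic array evaluates all window positions concurrently, one per cell, which is what turns this O(window × M) loop into a constant-rate pipeline.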
Cited by: 8
A new approach to scalable video coding
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515528
W. Chung, F. Kossentini, Mark J. T. Smith
This paper introduces a new framework for video coding that facilitates operation over a wide range of transmission rates. The new method is a subband coding approach that employs motion compensation, and uses prediction-frame and intra-frame coding within the framework. It is unique in that it allows lossy coding of the motion vectors through its use of multistage residual vector quantization (RVQ). Furthermore, it selects the motion vector with the best rate-distortion tradeoff among a number of possible motion vector candidates, and provides a rate-distortion-based mechanism for alternating between intra-frame and inter-frame coding. The framework provides an easy way to control the system complexity and performance, and inherently supports multiresolution transmission.
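A minimal sketch of multistage residual VQ with hypothetical two-stage codebooks (the paper applies RVQ to motion vectors; the codebook shapes here are purely illustrative):

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Multistage residual VQ: each stage quantizes the residual left by
    the previous stage against its own small codebook."""
    residual = np.asarray(x, dtype=float).copy()
    indices = []
    for cb in codebooks:
        k = int(np.linalg.norm(cb - residual, axis=1).argmin())
        indices.append(k)
        residual = residual - cb[k]
    return indices, residual

def rvq_decode(indices, codebooks):
    """Reconstruction is the sum of the selected codewords."""
    return sum(cb[k] for cb, k in zip(codebooks, indices))

# Hypothetical stages: a coarse codebook followed by a fine correction.
stage1 = np.array([[0.0, 0.0], [8.0, 8.0]])
stage2 = np.array([[0.0, 0.0], [1.0, 0.0], [-1.0, 0.0],
                   [0.0, 1.0], [0.0, -1.0]])
idx, rem = rvq_encode([7.0, 8.0], [stage1, stage2])
```

Truncating the index stream after fewer stages yields a coarser reconstruction, which is one way such a structure supports scalable (multiresolution) transmission.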
Cited by: 20
Convergence of fractal encoded images
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515514
J. Kominek
Fractal image compression, despite its great potential, suffers from some flaws that may prevent its adoption from becoming more widespread. One such problem is the difficulty of guaranteeing convergence, let alone a specific error tolerance. To help surmount this problem, we introduce the concepts of compound, cycle, and partial contractivity, which are indispensable for understanding convergence of fractal images. Most important, they connect the behavior of individual pixels to the image as a whole, and relate such behavior to the component affine transforms.
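The role of contractivity can be illustrated with per-pixel affine maps: when every scaling factor has magnitude below one, iterated decoding converges to a unique fixed point regardless of the starting image (a toy sketch, not the paper's compound/cycle analysis):

```python
import numpy as np

def decode(a, b, x0, n_iter=200):
    """Iterate the per-pixel affine map x -> a*x + b from start image x0.
    If every |a| < 1, the iteration converges to the unique fixed point
    b / (1 - a) whatever x0 is."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = a * x + b
    return x

a = np.array([0.5, 0.9, -0.3])   # contractive per-pixel scalings, |a| < 1
b = np.array([1.0, 0.2, 2.0])
from_zeros = decode(a, b, np.zeros(3))
from_ones = decode(a, b, 100.0 * np.ones(3))
fixed_point = b / (1 - a)        # both starts converge here
```

If any |a| were ≥ 1 the corresponding pixel would diverge or cycle, which is exactly the failure mode that makes convergence guarantees difficult.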
Cited by: 23
Quantization distortion in block transform-compressed data
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515537
A. Boden
Summary form only given, as follows. The JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. Block transform compression schemes exhibit sharp discontinuities at data block boundaries: this phenomenon is a visible manifestation of the compression quantization distortion. For example, in compression algorithms such as JPEG these blocking effects manifest themselves visually as discontinuities between adjacent 8×8 pixel image blocks. In general the distortion characteristics of block transform-based compression techniques are understandable in terms of the properties of the transform basis functions and the transform coefficient quantization error. In particular, the blocking effects exhibited by JPEG are explained by two simple observations demonstrated in this work: a disproportionate fraction of the total quantization error accumulates on block edge pixels; and the quantization errors among pixels within a compression block are highly correlated, while the quantization errors between pixels in separate blocks are uncorrelated. A generic model of block transform compression quantization noise is introduced, applied to synthesized and real one and two dimensional data using the DCT as the transform basis, and results of the model are shown to predict distortion patterns observed in data compressed with JPEG.
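The quantization step being modeled can be sketched for a single block: transform, uniformly quantize the coefficients, inverse transform. Because the DCT is orthonormal, the pixel-domain squared error equals the coefficient-domain quantization error (an illustrative sketch with an assumed step size, not the paper's noise model):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis, one basis function per row."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def quantize_block(block, step):
    """Transform the block, uniformly quantize the coefficients,
    and inverse transform -- the block-transform coding step."""
    C = dct_matrix(block.shape[0])
    coef = C @ block @ C.T                 # 2-D DCT of the block
    qcoef = step * np.round(coef / step)   # uniform quantizer
    return C.T @ qcoef @ C                 # reconstructed block

rng = np.random.default_rng(3)
block = rng.normal(size=(8, 8))
rec = quantize_block(block, step=0.5)

C = dct_matrix(8)
err_pix = np.sum((rec - block) ** 2)              # pixel-domain error energy
err_coef = np.sum((C @ (rec - block) @ C.T) ** 2) # coefficient-domain error energy
```

How that fixed error budget distributes spatially across the block, concentrating on edge pixels, is the phenomenon the abstract's model explains.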
Cited by: 2