
Proceedings DCC '95 Data Compression Conference: Latest Publications

Efficient handling of large sets of tuples with sharing trees
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515538
D. Zampuniéris, B. Le Charlier
Summary form only given; substantially as follows. Computing with sets of tuples (n-ary relations) is often required in programming, while being a major cause of performance degradation as the size of sets increases. The authors present a new data structure dedicated to the manipulation of large sets of tuples, dubbed a sharing tree. The main idea to reduce memory consumption is to share some sub-tuples of the set represented by a sharing tree. Various conditions are given. The authors have developed algorithms for common set operations: membership, insertion, equality, union, intersection, ... that have theoretical complexities proportional to the sizes of the sharing trees given as arguments, which are usually much smaller than the sizes of the represented sets.
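The abstract does not spell out the data structure, but the core idea of sharing common sub-tuples can be illustrated with a small sketch. The code below is a hypothetical illustration (a hash-consed trie over fixed-length tuples, not the authors' sharing-tree algorithm): nodes are interned, so identical suffix sub-structures are stored only once, and membership walks one shared node per tuple position.

```python
class SharingSet:
    """Sketch: a set of fixed-length tuples stored as a hash-consed trie so that
    equal suffix sub-structures are shared (stored once).  Illustrative only."""

    LEAF = -1   # sentinel child id marking the end of a tuple

    def __init__(self):
        self._intern = {}              # frozenset of (element, child_id) -> node id
        self._nodes = []               # node id -> {element: child_id}
        self.root = self._make({})     # empty set

    def _make(self, children):
        key = frozenset(children.items())
        if key not in self._intern:            # intern: reuse structurally equal nodes
            self._intern[key] = len(self._nodes)
            self._nodes.append(dict(children))
        return self._intern[key]

    def _insert(self, node, tup):
        children = dict(self._nodes[node])
        if len(tup) == 1:
            children[tup[0]] = self.LEAF
        else:
            child = children.get(tup[0], self._make({}))
            children[tup[0]] = self._insert(child, tup[1:])
        return self._make(children)            # re-interned path: sharing is preserved

    def add(self, tup):
        self.root = self._insert(self.root, tup)

    def __contains__(self, tup):
        node = self.root
        for i, x in enumerate(tup):
            node = self._nodes[node].get(x)
            if node is None:
                return False
            if node == self.LEAF:
                return i == len(tup) - 1
        return False

    def node_count(self):
        return len(self._nodes)                # stays small when sub-tuples are shared


s = SharingSet()
for t in [(1, 2, 3), (1, 2, 4), (5, 2, 3), (5, 2, 4)]:
    s.add(t)
print((5, 2, 3) in s, (5, 2, 5) in s, s.node_count())   # True False <node count>
```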
Citations: 19
Context selection and quantization for lossless image coding
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515563
Xiaolin Wu
Summary form only given. After context quantization, an entropy coder using L·2^K conditional probabilities (where L is the number of quantized levels and K is the number of bits) remains impractical. Instead, only the expectations are approximated by sample means with respect to the different quantized contexts. Computing the sample means involves only cumulating the error terms in the quantized context C(d,t) and keeping a count of the occurrences of C(d,t). Thus, the time and space complexities of the described context-based modeling of the prediction errors are O(L·2^K). Based on the quantized context C(d,t), the encoder makes a DPCM prediction I, adds to I the most likely prediction error, and thereby arrives at an adaptive, context-based, nonlinear prediction. The error e is then entropy coded. The coding of e is done with L conditional probabilities. Results of the proposed context-based lossless image compression technique are included.
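As a rough illustration of the error-feedback step described above (a sketch under assumed details such as the DPCM predictor and the context quantizer, not the paper's exact scheme), the coder can keep, per quantized context, a running sum of prediction errors and an occurrence count, and add the sample mean of past errors in that context to the base prediction.

```python
import numpy as np

def encode_residuals(img, num_contexts=64):
    """Hypothetical sketch of context-based bias cancellation for lossless coding.
    A simple DPCM predictor (average of left and upper neighbours) is corrected by
    the sample mean of past errors observed in the same quantized context."""
    img = img.astype(np.int32)
    err_sum = np.zeros(num_contexts)          # cumulated error terms per context C(d,t)
    err_cnt = np.zeros(num_contexts)          # occurrence counts per context
    residuals = np.zeros_like(img)
    h, w = img.shape
    for y in range(1, h):
        for x in range(1, w):
            a, b, c = int(img[y, x - 1]), int(img[y - 1, x]), int(img[y - 1, x - 1])
            base = (a + b) // 2               # base DPCM prediction I
            # quantized context: local gradient activity bucketed into num_contexts bins
            d = abs(a - c) + abs(b - c)
            ctx = min(d, num_contexts - 1)
            bias = err_sum[ctx] / err_cnt[ctx] if err_cnt[ctx] else 0.0
            pred = int(round(base + bias))    # adaptive, context-based, nonlinear prediction
            e = int(img[y, x]) - pred
            residuals[y, x] = e               # e would then be entropy coded
            err_sum[ctx] += e
            err_cnt[ctx] += 1
    return residuals

rng = np.random.default_rng(0)
demo = rng.integers(0, 256, size=(32, 32))
print(encode_residuals(demo).std())
```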
Citations: 6
New relationships in operator-based backward motion compensation
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515529
Aria Nosratinia, M. Orchard
The transmission and storage of digital video at reduced bit rates requires a source coding scheme, which generally contains motion-compensated prediction as an essential part. The class of motion estimation algorithms known as backward methods has the advantage of dense motion-field sampling, and in coding applications the decoder needs no motion information from the coder. In this paper, we first present an overview of operator-based motion compensators with interpolative and non-interpolative kernels. We then proceed with two new results. The first offers a new perspective on the classical pel-recursive methods, one that exposes the weaknesses of traditional approaches and offers an explanation for the improved performance of operator-based algorithms. The second result introduces a minimum-norm intra-frame operator and establishes an equivalence relationship between this and the original (least-squares) operator. This equivalence induces interesting duality properties that, in addition to offering insights into operator-based motion estimators, can be used to relax either the maximum needed computational power or the frame buffer length.
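For context on the "classical pel-recursive methods" the paper revisits, a minimal Netravali-Robbins-style displacement update is sketched below. This is a generic textbook illustration under assumed parameters, not the operator-based method proposed in the paper; being a backward method, it uses only previously decoded frames, so no motion bits need be transmitted.

```python
import numpy as np

def pel_recursive_update(cur, prev, x, y, d_init=(0.0, 0.0), eps=0.1, iters=20):
    """Sketch of a classical pel-recursive step: steepest descent on the displaced
    frame difference (DFD).  Bilinear interpolation is omitted for brevity."""
    dx, dy = d_init
    for _ in range(iters):
        # displaced position in the previous frame (rounded for simplicity)
        px = int(np.clip(round(x - dx), 1, prev.shape[1] - 2))
        py = int(np.clip(round(y - dy), 1, prev.shape[0] - 2))
        dfd = float(cur[y, x]) - float(prev[py, px])          # displaced frame difference
        # spatial gradient of the previous frame at the displaced position
        gx = (float(prev[py, px + 1]) - float(prev[py, px - 1])) / 2.0
        gy = (float(prev[py + 1, px]) - float(prev[py - 1, px])) / 2.0
        # descent update of the displacement estimate
        dx -= eps * dfd * gx
        dy -= eps * dfd * gy
    return dx, dy

prev = np.tile(np.arange(64, dtype=float), (64, 1))
cur = np.roll(prev, 2, axis=1)      # the scene moved 2 pixels to the right
# dx grows toward the true shift of 2 (exact convergence needs sub-pixel interpolation)
print(pel_recursive_update(cur, prev, x=32, y=32))
```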
Citations: 7
Adaptive bidirectional time-recursive interpolation for deinterlacing
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515556
J. Kovacevic, R. Safranek, E. Yeh
Summary form only given. There exists a need for a good deinterlacing (scan format conversion) system, since, for example, currently available cameras are interlaced and the US HDTV Grand Alliance has put forward a proposal containing both interlaced and progressive scanning formats. On the other hand, over the next few years, local broadcasting stations will find themselves receiving video material that could be HDTV quality and progressively scanned, while their news and commercials are still NTSC produced (interlaced scanning). We have developed a new algorithm for deinterlacing based on the algorithm of Nguyen and Dubois (see Proc. Int. Workshop on HDTV, November 1992). It interpolates the missing pixels using a weighted combination of spatial and temporal methods. The algorithm is self-adaptive, since it weights the various processing blocks based on the error they introduce. Experiments were run on both "real-world" and computer-generated video sequences. The results were compared to the "original" obtained as an output of the ray-tracer, as well as to the reference algorithm provided by the AT&T HDTV group.
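A minimal sketch of the general idea (with assumed details, not the authors' time-recursive algorithm): each missing line gets a spatial estimate from the lines above and below in the current field and a temporal estimate from the neighbouring opposite-parity fields, and the two are blended with a weight derived from a local motion measure.

```python
import numpy as np

def deinterlace(cur_field, prev_field, next_field):
    """Hypothetical adaptive spatio-temporal deinterlacing sketch.  cur_field holds
    the even lines of the output frame; prev_field/next_field hold the odd lines of
    the neighbouring fields.  Missing odd lines blend a spatial estimate (line
    average) with a temporal estimate (field average), weighted by local motion."""
    h2, width = cur_field.shape                # number of even lines
    out = np.zeros((2 * h2, width))
    out[0::2] = cur_field
    for i in range(h2 - 1):                    # interpolate odd output line 2*i+1
        spatial = (cur_field[i].astype(float) + cur_field[i + 1].astype(float)) / 2.0
        temporal = (prev_field[i].astype(float) + next_field[i].astype(float)) / 2.0
        # motion measure: how much the neighbouring fields disagree at this line
        motion = np.abs(prev_field[i].astype(float) - next_field[i].astype(float))
        w_t = 1.0 / (1.0 + motion)             # trust the temporal estimate when static
        out[2 * i + 1] = w_t * temporal + (1.0 - w_t) * spatial
    out[-1] = out[-2]                          # last missing line: repeat for simplicity
    return out

f = np.arange(8 * 6).reshape(8, 6) % 255
print(deinterlace(f[0::2], f[1::2], f[1::2]).shape)   # (8, 6)
```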
Citations: 0
On the performance of affine index assignments for redundancy free source-channel coding
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515543
A. Méhes, K. Zeger
Summary form only given. Many popular redundancy free codes are linear or affine, including the natural binary code (NBC), the folded binary code (FBC), the Gray code (GC), and the two's complement code (TCC). A theorem is given that characterizes the channel distortion of a uniform 2^n-level scalar quantizer with stepsize Δ that uses an affine index assignment with generator matrix G to transmit across a binary symmetric channel with crossover probability q. Using this theorem we compare the NBC and the FBC for any source distribution.
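To make the setting concrete, the sketch below estimates by Monte-Carlo the channel distortion of a uniform scalar quantizer over a binary symmetric channel for two common index assignments, the NBC and the Gray code (FBC and TCC definitions are omitted). It is an illustrative simulation with assumed parameters (unit stepsize, equally likely levels), not the paper's analytical result.

```python
import random

def nbc(i):          # natural binary code: the index itself, written in binary
    return i

def gray(i):         # Gray code mapping of the index
    return i ^ (i >> 1)

def channel_distortion(assign, n_bits, q, trials=100_000, seed=1):
    """Monte-Carlo mean-squared channel distortion (in stepsize units) for a uniform
    2^n-level quantizer whose index bits cross a BSC with crossover probability q."""
    n = 1 << n_bits
    inv = {assign(i): i for i in range(n)}        # decoder inverts the index assignment
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        i = rng.randrange(n)                      # transmitted level index (uniform source)
        noisy = assign(i)
        for b in range(n_bits):                   # flip each bit with probability q
            if rng.random() < q:
                noisy ^= 1 << b
        j = inv[noisy]                            # received level index
        total += (i - j) ** 2
    return total / trials

for name, f in [("NBC", nbc), ("Gray", gray)]:
    print(name, channel_distortion(f, n_bits=4, q=0.01))
```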
Citations: 2
Self-organized dynamic Huffman coding without frequency counts
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515583
Y. Okada, N. Satoh, K. Murashita, S. Yoshida
Summary form only given. Dynamic Huffman coding uses a binary code tree data structure built from the relative frequency counts of the symbols being coded. The authors' aim is to obtain a simple and practical statistical algorithm that improves the processing speed while maintaining a high compression ratio. The proposed algorithm uses a self-organizing rule (the transpose heuristic) to reconstruct the code tree: it renews the code tree by only switching the ordered positions of the corresponding symbols. This method is called self-organized dynamic Huffman coding. To achieve a higher compression ratio they employ context modelling.
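The sketch below illustrates only the transpose self-organizing rule on an ordered symbol list (each accessed symbol swaps one position toward the front), not the authors' code-tree reconstruction; in a complete coder, low ranks would be mapped to short codewords via a prefix code.

```python
def transpose_update(order, pos, symbol):
    """Transpose heuristic: move an accessed symbol one position toward the front.
    `order` is the symbol list, `pos` maps symbol -> current index."""
    i = pos[symbol]
    if i > 0:
        prev = order[i - 1]
        order[i - 1], order[i] = order[i], order[i - 1]
        pos[symbol], pos[prev] = i - 1, i

def encode_ranks(data, alphabet):
    """Hypothetical sketch: emit the current rank of each symbol and then apply the
    transpose rule.  No frequency counts are maintained; frequently used symbols
    drift toward the front, where a real coder would assign shorter codewords."""
    order = list(alphabet)
    pos = {s: i for i, s in enumerate(order)}
    ranks = []
    for s in data:
        ranks.append(pos[s])          # rank before the update is what gets coded
        transpose_update(order, pos, s)
    return ranks

print(encode_ranks("abracadabra", "abcdr"))   # frequent 'a' quickly settles at rank 0
```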
Citations: 0
Queuing models of synchronous compressors
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515555
M. S. Moellenhoff, M.W. Maier
Summary form only given. In synchronous compression, a lossless data compressor attempts to equalize the rates of two synchronous communication channels. Synchronous compression is of broad applicability in improving the efficiency of internetwork links over public digital networks. The most notable features of the synchronous compression application are the mixed traffic it must tolerate and the rate buffering role played by the compression processor. The resulting system can be modeled in the time domain by queuing methods. The performance of a compression algorithm in this application is governed by the interplay of its ultimate compression ratio, its computational efficiency, and the distribution function of its instantaneous consumption rate of the source. The queuing model for synchronous compression represents the compressor as the server fed by a single queue. We describe the basic model, develop the required basic queuing theory, look at service time statistics, and compare to simulation. We develop the queuing model for synchronous compression and relate it to theoretical and empirical properties of queuing systems and Lempel-Ziv compression algorithm performance. We illustrate that synchronous compression simulations are in agreement with the predictions of queuing theory. In addition, we observe various interesting properties of match length distributions and their impact on compression in the time-domain.
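As a concrete instance of this kind of time-domain model (a textbook sketch with assumed numbers, not the paper's model): if the compressor is treated as a single server whose per-block service time depends on the achieved compression ratio, the Pollaczek-Khinchine formula gives the mean queueing delay from the first two moments of the service time.

```python
def mg1_mean_wait(arrival_rate, service_times):
    """Pollaczek-Khinchine mean waiting time for an M/G/1 queue:
    W = lambda * E[S^2] / (2 * (1 - rho)), with rho = lambda * E[S]."""
    n = len(service_times)
    es = sum(service_times) / n                        # E[S]
    es2 = sum(s * s for s in service_times) / n        # E[S^2]
    rho = arrival_rate * es                            # server utilisation
    if rho >= 1.0:
        raise ValueError("queue is unstable: utilisation >= 1")
    return arrival_rate * es2 / (2.0 * (1.0 - rho))

# Assumed example: blocks arrive at 40 per second; a well-compressed block drains
# quickly over the output link, a poorly compressed one slowly, so service times vary.
samples = [0.005] * 80 + [0.030] * 20                  # seconds per block (hypothetical mix)
print(mg1_mean_wait(40.0, samples))                    # mean wait in seconds
```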
Citations: 0
Space-efficient construction of optimal prefix codes
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515509
Alistair Moffat, A. Turpin, J. Katajainen
The authors show that the use of the lazy list processing technique from the world of functional languages allows, under certain conditions, the package-merge algorithm to be executed in much less space than the O(nL) worst-case bound indicates. For example, the revised implementation generates a 32-bit length-limited code for the TREC distribution within 15 Mb of memory. It is also shown how a second observation (that in large-alphabet situations there are often many symbols with the same frequency) can be exploited to further reduce the space required, for both unlimited and length-limited coding. This second improvement allows calculation of an optimal length-limited code for the TREC word distribution in under 8 Mb of memory, and calculation of an unrestricted Huffman code in under 1 Mb of memory.
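For reference, a basic (non-lazy) package-merge for length-limited codes is sketched below. The space optimisations described in the paper (lazy lists, exploiting runs of equal frequencies) are exactly what this naive version lacks, since it materialises every list in memory; it is offered only as a baseline illustration of the algorithm being optimised.

```python
def package_merge(freqs, L):
    """Basic package-merge: optimal codeword lengths subject to max length L.
    Naive version kept for illustration; every per-level list is built in full."""
    n = len(freqs)
    assert n >= 2 and (1 << L) >= n, "need 2^L >= n symbols"
    prev = []                                          # packages carried up from the level below
    for level in range(L, 0, -1):
        items = [(w, [i]) for i, w in enumerate(freqs)]   # one item per symbol at each level
        merged = sorted(items + prev)
        if level == 1:
            break
        # pair adjacent items into packages for the next level up
        prev = [(merged[j][0] + merged[j + 1][0], merged[j][1] + merged[j + 1][1])
                for j in range(0, len(merged) - 1, 2)]
    lengths = [0] * n
    for _, syms in merged[:2 * (n - 1)]:               # cheapest 2(n-1) items at the top level
        for s in syms:
            lengths[s] += 1                            # code length = selected items per symbol
    return lengths

lens = package_merge([1, 1, 2, 4, 8, 16], L=4)
print(lens, sum(2.0 ** -l for l in lens))              # lengths capped at 4; Kraft sum <= 1
```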
Citations: 20
Asymmetric lossless image compression
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515567
N. Memon, K. Sayood
Summary form only given. Lossless image compression is often required in situations where compression is done once and decompression is performed many times. Since compression is performed only once, the time taken for compression is not a critical factor when selecting a compression scheme. What is more critical is the amount of time and memory needed for decompression, as well as the compression ratio obtained. Compression schemes that satisfy these constraints are called asymmetric techniques. While there exist many asymmetric techniques for the lossy compression of image data, most techniques reported for lossless compression of image data have been symmetric. We present a new lossless compression technique that is well suited for asymmetric applications. It gives superior performance compared to standard lossless compression techniques by exploiting 'global' correlations. By 'global' correlations we mean similar patterns of pixels that re-occur within the image, not necessarily in close proximity. The developed technique can also potentially be adapted for use in symmetric applications that require high compression ratios. We develop algorithms for codebook design using LBG-like clustering of image blocks. For a preliminary investigation, codebooks of various sizes were constructed using different block sizes and using the 8 JPEG predictors as the set of prediction schemes.
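A sketch of the block-wise predictor-selection idea (hypothetical details, not the paper's codebook-design algorithm): each image block is assigned the predictor, out of a small set of prediction schemes, that minimises the residual energy for that block; the decoder only needs the chosen index and the residuals, which keeps decompression cheap.

```python
import numpy as np

# The seven non-trivial JPEG lossless predictors, written in terms of the
# left (A), upper (B) and upper-left (C) neighbours of each pixel.
JPEG_PREDICTORS = [
    lambda A, B, C: A,
    lambda A, B, C: B,
    lambda A, B, C: C,
    lambda A, B, C: A + B - C,
    lambda A, B, C: A + (B - C) // 2,
    lambda A, B, C: B + (A - C) // 2,
    lambda A, B, C: (A + B) // 2,
]

def best_predictor_per_block(img, block=8):
    """Hypothetical sketch: pick, per block, the JPEG predictor with the smallest
    sum of absolute residuals.  Borders are handled crudely by wrap-around here;
    a real coder would treat the first row and column separately."""
    img = img.astype(np.int32)
    A = np.roll(img, 1, axis=1)                            # left neighbour
    B = np.roll(img, 1, axis=0)                            # upper neighbour
    C = np.roll(np.roll(img, 1, axis=0), 1, axis=1)        # upper-left neighbour
    h, w = img.shape
    choices = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            sl = (slice(by * block, (by + 1) * block),
                  slice(bx * block, (bx + 1) * block))
            costs = [np.abs(img[sl] - p(A[sl], B[sl], C[sl])).sum()
                     for p in JPEG_PREDICTORS]
            choices[by, bx] = int(np.argmin(costs))        # index sent to the decoder
    return choices

rng = np.random.default_rng(0)
demo = rng.integers(0, 256, size=(32, 32))
print(best_predictor_per_block(demo))
```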
Citations: 4
Constrained-storage vector quantization with a universal codebook
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515494
Sangeeta Ramakrishnan, Kenneth Rose, A. Gersho
Many compression applications consist of compressing multiple sources with significantly different distributions. In the context of vector quantization (VQ) these sources are typically quantized using separate codebooks. Since memory is limited in most applications, a convenient way to gracefully trade between performance and storage is needed. Earlier work addressed this problem by clustering the multiple sources into a small number of source groups, where each group shares a codebook. As a natural generalization, we propose the design of a size-limited universal codebook consisting of the union of overlapping source codebooks. This framework allows each source codebook to consist of any desired subset of the universal codevectors and provides greater design flexibility which improves the storage-constrained performance. Further advantages of the proposed approach include the fact that no two sources need be encoded at the same rate, and the close relation to universal, adaptive, and classified quantization. Necessary conditions for optimality of the universal codebook and the extracted source codebooks are derived. An iterative descent algorithm is introduced to impose these conditions on the resulting quantizer. Possible applications of the proposed technique are enumerated and its effectiveness is illustrated for coding of images using finite-state vector quantization.
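A minimal sketch of the shared-codebook idea (illustrative assumptions throughout; the greedy subset re-selection below is a simple stand-in, not the paper's iterative descent with its necessary optimality conditions): each source's codebook is a subset of one universal codebook, so total storage is bounded by the universal codebook size while subsets may overlap.

```python
import numpy as np

def encode_with_subset(vectors, universal_codebook, subset_idx):
    """Quantise `vectors` using only the universal codevectors listed in subset_idx
    (the codebook 'extracted' for this source); returns indices into the universal codebook."""
    sub = universal_codebook[subset_idx]                                  # (k, dim)
    d = ((vectors[:, None, :] - sub[None, :, :]) ** 2).sum(axis=2)        # squared distances
    return np.asarray(subset_idx)[d.argmin(axis=1)]

def reselect_subset(vectors, universal_codebook, k):
    """Greedy subset re-selection: keep the k universal codevectors used most often
    when this source is quantised against the full universal codebook."""
    d = ((vectors[:, None, :] - universal_codebook[None, :, :]) ** 2).sum(axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(universal_codebook))
    return np.argsort(-counts)[:k]

# Hypothetical demo: two sources with different statistics share one universal codebook.
rng = np.random.default_rng(0)
universal = rng.normal(size=(32, 2))                    # universal codebook, 32 codevectors
src_a = rng.normal(loc=+2.0, size=(500, 2))
src_b = rng.normal(loc=-2.0, size=(500, 2))
sub_a = reselect_subset(src_a, universal, k=8)          # per-source subsets (may overlap)
sub_b = reselect_subset(src_b, universal, k=8)
print(sorted(sub_a), sorted(sub_b))
print(encode_with_subset(src_a[:5], universal, sub_a))
```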
Citations: 0