
Proceedings DCC '95 Data Compression Conference: Latest Publications

Byte-aligned bitmap compression
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515586
G. Antoshenkov
Summary form only given. Bitmap compression reduces storage space and transmission time for unstructured bit sequences such as those in inverted files, spatial objects, etc. On the down side, compressed bitmaps lose their functional properties. For example, checking a given bit position, set intersection, union, and difference can be performed only after full decoding, causing a many-fold degradation in operational speed. The proposed byte-aligned bitmap compression method (BBC) aims to support fast set operations on the compressed bitmap formats and, at the same time, to retain a competitive compression rate. To achieve this objective, BBC abandons the traditional approach of encoding run-lengths (distances between two ones separated by zeros). Instead, BBC deals only with byte-aligned, byte-size bitmap portions that are easy to fetch, store, AND, OR, and convert. The bitmap bytes are classified as gaps, containing only zeros or only ones, and maps, containing a mixture of both. We also introduced a simple extension mechanism for existing methods to accommodate a dual-gap (zeros and ones) run-length encoding. With this extension, encoding of long "one" sequences becomes as efficient as, or better than, arithmetic coding.
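To make the gap/map distinction concrete, the sketch below is a toy Python illustration of the byte classification only; the function name and the run grouping are this example's own, and BBC's actual header and length encoding is not shown.

```python
def classify_bytes(bitmap: bytes):
    """
    Toy illustration of the gap/map distinction: a byte is a "gap" if it is
    all zeros (0x00) or all ones (0xFF), and a "map" otherwise. Consecutive
    gap bytes of the same kind are grouped so that a run length can be coded
    instead of the bytes themselves.
    """
    runs = []          # ("gap", fill_byte, run_length) or ("map", byte_value)
    i = 0
    while i < len(bitmap):
        b = bitmap[i]
        if b in (0x00, 0xFF):
            j = i
            while j < len(bitmap) and bitmap[j] == b:
                j += 1
            runs.append(("gap", b, j - i))
            i = j
        else:
            runs.append(("map", b))
            i += 1
    return runs

# Two all-zero gap bytes, one map byte, three all-one gap bytes.
print(classify_bytes(bytes([0x00, 0x00, 0x5A, 0xFF, 0xFF, 0xFF])))
```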
Citations: 210
Optimal representation of motion fields for video compression
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515530
J. V. Gísladóttir, M. Orchard
A new video coding scheme in which an image sequence is fully represented through its motion field is introduced. The motivation behind the new coding scheme is that motion fields are generally more efficient representations of image sequences. We describe the new coding scheme, and present a new generalized and optimized representation through the motion field. An important aspect of the new coding approach is that we are free to choose parameters in the representation of the motion field. Our goal is to choose those parameters so that the motion field can be coded most efficiently. We describe our definition of the motion field, and illustrate how the parameters of the motion model can be chosen. We also present the results of applying those parameters to the coding procedure.
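The abstract does not spell out the motion model, so the sketch below only illustrates the underlying idea of rebuilding a frame purely from the previous frame plus a dense motion field; the integer per-pixel displacements and the function name are assumptions of this example, not the paper's representation.

```python
import numpy as np

def motion_compensate(prev_frame, flow):
    """
    Rebuild a frame from the previous frame and a dense motion field.
    flow[..., 0] and flow[..., 1] hold per-pixel integer row/column
    displacements; out-of-frame references are clamped to the border.
    """
    h, w = prev_frame.shape
    rows, cols = np.indices((h, w))
    src_r = np.clip(rows - flow[..., 0], 0, h - 1)
    src_c = np.clip(cols - flow[..., 1], 0, w - 1)
    return prev_frame[src_r, src_c]

# Trivial motion field: a uniform shift of one pixel to the right.
frame = np.arange(16).reshape(4, 4)
flow = np.zeros((4, 4, 2), dtype=int)
flow[..., 1] = 1
print(motion_compensate(frame, flow))
```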
Citations: 0
Classified conditional entropy coding of LSP parameters
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515545
Junchen Du, S.P. Kim
Summary form only given. A new LSP speech parameter compression scheme is proposed which uses conditional probability information through classification. For efficient compression of speech LSP parameter vectors it is essential that higher-order correlations are exploited. The use of conditional probability information has been hindered by the high complexity of the information. For example, an LSP vector has a 34-bit representation in 4.8 kbps CELP coding (the FS1016 standard). It is impractical to use the first-order probability information directly, since 2^34 ≈ 1.7×10^10 probability tables would be required and training of such information would be practically impossible. In order to reduce the complexity, we reduce the input alphabet size by classifying the LSP vectors according to their phonetic relevance. In other words, speech LSP parameters are classified into groups representing various loosely defined phonemes. The number of phoneme groups used was 32, considering the ambiguity of similar phonemes and background noise. Conditional probability tables are then constructed for each class by training. To further reduce the complexity, split-VQ is employed. The classification is achieved through vector quantization with a mean squared distortion measure in the LSP domain.
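A minimal sketch of the classification step follows, assuming a nearest-codeword (mean squared error) class assignment and a placeholder `quantize_fn` that stands in for the split-VQ index actually transmitted; the table sizes and smoothing are illustrative, not the paper's.

```python
import numpy as np

def classify_and_count(lsp_vectors, class_codebook, quantize_fn, n_symbols):
    """
    Assign each LSP vector to a class by nearest-codeword search (mean squared
    error) and accumulate a separate, Laplace-smoothed symbol-count table per
    class. quantize_fn stands in for whatever (split-)VQ index the coder would
    actually transmit.
    """
    n_classes = class_codebook.shape[0]
    counts = np.ones((n_classes, n_symbols))          # smoothed counts
    for v in lsp_vectors:
        d2 = ((class_codebook - v) ** 2).sum(axis=1)  # MSE distortion to each class centroid
        cls = int(d2.argmin())
        counts[cls, quantize_fn(v)] += 1
    # per-class conditional probabilities P(symbol | class) for the entropy coder
    return counts / counts.sum(axis=1, keepdims=True)

# Demo: 3 classes, 4 transmitted symbols, a dummy scalar quantizer on the first coefficient.
rng = np.random.default_rng(0)
vectors, codebook = rng.random((100, 10)), rng.random((3, 10))
tables = classify_and_count(vectors, codebook, lambda v: int(v[0] * 4) % 4, n_symbols=4)
print(tables.shape)   # (3, 4); each row sums to 1
```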
Citations: 2
Adaptive error correcting codes based on cooperative play of the game of "twenty questions with a liar"
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515574
E. Lawler, S. Sarkissian
Summary form only given. The existence of a noiseless, delayless feedback channel permits the transmitter to detect transmission errors at the time they occur. Such a feedback channel does not increase channel capacity, but it does permit the use of adaptive codes with significantly enhanced error correction capabilities. It is well known that codes of this type can be based on cooperative play of the game of "twenty questions with a liar". However, it is perhaps not so well appreciated that it is practicable to implement codes of this type on general purpose computers. We describe a simple and fast implementation in which the transmitter makes reference to a fully worked out game tree. In the worst case, storage requirements grow exponentially with block length. For many cases of interest, storage requirements are not excessive. For example, a 4-error correcting code with 8 information bits and 13 check bits requires only 103.2 kilobytes of storage. By contrast, an 8-error correcting code with 6 information bits and 25 check bits requires 21.2 megabytes of storage. However, no nonadaptive code is capable of correcting as many as 8 errors when 6 information bits are encoded in a block of length 31.
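The sketch below is a brute-force toy decoder for the underlying liar-game setting, not the authors' precomputed game-tree implementation; the question subsets and the one-lie example are invented for illustration.

```python
def liar_game_survivors(n_candidates, max_lies, questions, answers):
    """
    Brute-force decoder for the liar-game setting: each question is a subset of
    candidates, each answer claims whether the secret lies in that subset, and
    at most max_lies answers may be false. A candidate is charged one lie for
    every answer that contradicts it; the true secret never exceeds max_lies
    charges, so it always survives.
    """
    lies = {c: 0 for c in range(n_candidates)}
    for subset, ans in zip(questions, answers):
        for c in lies:
            if (c in subset) != ans:
                lies[c] += 1
    return [c for c, k in lies.items() if k <= max_lies]

# Secret is 5 and the second answer is a lie. With only three questions and one
# allowed lie, several candidates remain ([5, 6, 7]); the adaptive scheme keeps
# asking well-chosen questions until a single candidate is left.
qs = [{0, 1, 2, 3}, {4, 5}, {5, 7}]
ans = [False, False, True]          # truthful answers would be [False, True, True]
print(liar_game_survivors(8, max_lies=1, questions=qs, answers=ans))
```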
Citations: 0
Coding with partially hidden Markov models
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515499
Søren Forchhammer, J. Rissanen
Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general two-part coding scheme based on PHMM, for a given model order but unknown parameters, is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. A proof of convergence of this reestimation is given. The PHMM structure and the conditions of the convergence proof allow the PHMM to be applied to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt to nonstationarities in the images.
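As a rough illustration of conditioning emissions on past observations, here is a scaled forward pass for a toy PHMM in which the emission probability depends on both the hidden state and the previous symbol; the model sizes, the dummy first-symbol context, and the demo parameters are assumptions, and the reestimation procedure itself is not shown.

```python
import numpy as np

def phmm_forward_loglik(obs, init, trans, emit):
    """
    Scaled forward pass for a toy PHMM in which the emission probability of x_t
    depends on the hidden state and on the previous symbol x_{t-1}:
    emit[s, prev, cur] = P(x_t = cur | state = s, x_{t-1} = prev).
    The first symbol is conditioned on a dummy previous symbol 0.
    """
    alpha = init * emit[:, 0, obs[0]]
    scale = alpha.sum()
    alpha, loglik = alpha / scale, np.log(scale)
    for t in range(1, len(obs)):
        alpha = (alpha @ trans) * emit[:, obs[t - 1], obs[t]]
        scale = alpha.sum()
        alpha, loglik = alpha / scale, loglik + np.log(scale)
    return loglik   # -loglik / ln 2 is the ideal arithmetic-coding cost in bits

# Two hidden states, binary alphabet.
init = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1], [0.1, 0.9]])
emit = np.array([[[0.8, 0.2], [0.6, 0.4]],   # state 0: rows indexed by previous symbol
                 [[0.3, 0.7], [0.1, 0.9]]])  # state 1
obs = [0, 0, 1, 1, 1, 0]
print(-phmm_forward_loglik(obs, init, trans, emit) / np.log(2), "bits")
```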
Citations: 8
An efficient variable length coding scheme for an IID source
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515508
K. Cheung, A. Kiely
In this article we examine a scheme that uses two alternating Huffman codes to encode a discrete independent and identically distributed source with a dominant symbol. One Huffman code encodes the lengths of runs of the dominant symbol, the other encodes the remaining symbols. We call this combined strategy alternating runlength Huffman (ARH) coding. This is a popular scheme, used for example in the efficient pyramid image coder (EPIC) subband coding algorithm. Since the runlengths of the dominant symbol are geometrically distributed, they can be encoded using the Huffman codes identified by Golomb (1966) and later generalized by Gallager and Van Voorhis (1975). This runlength encoding allows the most likely symbol to be encoded using less than one bit per sample, providing a simple method for overcoming a drawback of prefix codes: that the redundancy approaches one as the largest symbol probability P approaches one. For ARH coding, the redundancy approaches zero as P approaches one. Comparing the average code rate of ARH with direct Huffman coding we find that: 1. If P < 1/3, ARH is less efficient than Huffman coding. 2. If 1/3 ≤ P < 2/5, ARH is at most as efficient as Huffman coding, depending on the source distribution. 3. If 2/5 ≤ P ≤ 0.618, ARH and Huffman coding are equally efficient. 4. If P > 0.618, ARH is more efficient than Huffman coding. We give examples of applying ARH coding to some specific sources.
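A minimal sketch of the alternating decomposition and of a Golomb/Rice code for the geometrically distributed runlengths follows; the power-of-two parameter restriction and the function names are simplifications for this example, and the second Huffman code for the non-dominant symbols is omitted.

```python
def golomb_encode(n, m):
    """Golomb/Rice code for run length n; m is restricted to a power of two
    here so the remainder is a fixed-width field."""
    q, r = divmod(n, m)
    k = m.bit_length() - 1                   # log2(m)
    remainder = format(r, f"0{k}b") if k else ""
    return "1" * q + "0" + remainder         # unary quotient, then remainder bits

def arh_split(symbols, dominant):
    """Split the stream into (run of dominant symbol, following other symbol)
    pairs: the alternation that ARH encodes with two separate codes."""
    pairs, run = [], 0
    for s in symbols:
        if s == dominant:
            run += 1
        else:
            pairs.append((run, s))
            run = 0
    if run:
        pairs.append((run, None))            # trailing run with no following symbol
    return pairs

print(arh_split([0, 0, 0, 2, 0, 1, 0, 0], dominant=0))   # [(3, 2), (1, 1), (2, None)]
print(golomb_encode(3, 2))                                # '101': quotient 1, remainder 1
```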
Citations: 5
Efficient error free chain coding of binary documents
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515502
Robert R V Estes, Ralph Algazi
Finite context models improve the performance of chain-based encoders to the point that they become attractive alternative models for binary image compression. The resulting code is within 4% of JBIG at 200 dpi and is 9% more efficient at 400 dpi.
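As background for what a chain-based encoder produces, the sketch below converts an 8-connected contour into Freeman direction symbols; a finite context model would then condition each direction's probability on the preceding symbols before entropy coding. The coordinate convention and the demo contour are assumptions of this example, not the paper's encoder.

```python
# Freeman chain code in (row, col) coordinates, rows increasing downward.
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def freeman_chain_code(contour):
    """Convert an 8-connected contour (list of adjacent points) into the
    sequence of direction symbols a chain-based encoder would entropy-code."""
    return [DIRS[(r1 - r0, c1 - c0)]
            for (r0, c0), (r1, c1) in zip(contour, contour[1:])]

# A unit square traced clockwise from (0, 0).
square = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(freeman_chain_code(square))   # [0, 6, 4, 2]
```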
Citations: 29
A parallel vector quantization algorithm for SIMD multiprocessor systems
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515589
H.J. Lee, J.C. Liu, A. Chan, C. Chui
Summary form only given, as follows. This article proposes a parallel vector quantization (VQ) algorithm for an exhaustive search of codebooks on a single-instruction-multiple-data (SIMD) multiprocessor. The proposed parallel VQ algorithm can be integrated with the parallel wavelet-transform techniques for fast image compression. This algorithm has been implemented on the MasPar parallel computer to achieve favorable performance gains. Our results show that VQ can be efficiently parallelized on commercial SIMD machines to meet the real-time performance requirements of numerous applications. Note that although processors in the MP-1 machine are based on relatively old VLSI technology, the drastic speedup gained by parallelization of the computations is marked. Since our algorithm is applicable to any image size, it can be readily used on larger, faster SIMD multiprocessor systems for real-time processing of very large images.
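A data-parallel flavour of the exhaustive codebook search can be sketched with a vectorized distance computation; this NumPy version only mimics the SIMD idea and is not the MasPar implementation, and the block and codebook sizes are arbitrary.

```python
import numpy as np

def vq_encode(blocks, codebook):
    """
    Exhaustive-search vector quantization: for each input block, return the
    index of the nearest codeword under squared Euclidean distance. The whole
    distance matrix is computed in one vectorized (data-parallel) expression.
    """
    # blocks: (N, d), codebook: (K, d) -> distances: (N, K)
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

rng = np.random.default_rng(1)
blocks = rng.random((1000, 16))      # e.g. 4x4 image blocks, flattened
codebook = rng.random((256, 16))     # 256 codewords
print(vq_encode(blocks, codebook)[:8])
```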
Citations: 8
Compression of data traffic in packet-based LANs
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515540
K. Pawlikowski, T. Bell, H. Emberson, P. Ashton
Abstract only given; substantially as follows. Data compression is very commonly used to improve the performance of telecommunications systems. In local-area networks, compression technology can help reduce transmission bottlenecks if the network transmits data more slowly than the computers can generate it. Network designers need to have accurate models of the network traffic in order to plan network capacity. When designing networks, one must take into account not only the amount of traffic, but also the nature of the traffic. The authors are particularly concerned with communication systems that are packet-based, and with how compression changes the statistical properties of packet sizes. They also discuss how adaptive compression can be used with connectionless protocols, which pose serious synchronization difficulties. They have collected large quantities of data from a live network, and have simulated the effect of compressing the data using several different techniques. Relevant statistics of the simulated system have been calculated, allowing data compression to be characterized as a stochastic transformation of teletraffic. Compression can improve throughput in packet-based networks by decreasing the size and number of packets. When applied to individual packets, not only is the mean packet size reduced, but the variance also decreases. Compression has the potential to give significant performance improvements.
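A small experiment in the spirit of the study can be sketched by compressing each packet independently and comparing size statistics; zlib is used here only as a stand-in compressor, and the sample payloads are invented.

```python
import zlib
from statistics import mean, pvariance

def packet_size_stats(packets):
    """Compress each packet independently and return (mean, variance) of the
    packet sizes before and after compression."""
    raw = [len(p) for p in packets]
    comp = [len(zlib.compress(p)) for p in packets]
    return (mean(raw), pvariance(raw)), (mean(comp), pvariance(comp))

# Highly compressible payloads of varying size: both the mean and the variance
# of the packet sizes drop sharply after per-packet compression.
pkts = [b"A" * n for n in (64, 512, 1024, 1500)]
print(packet_size_stats(pkts))
```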
Citations: 2
Quadtree based JBIG compression
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515500
B. Fowler, R. Arps, A. Gamal, D. Yang
A JBIG-compliant, quadtree-based, lossless image compression algorithm is described. In terms of the number of arithmetic coding operations required to code an image, this algorithm is significantly faster than previous JBIG algorithm variations. Based on this criterion, our algorithm achieves an average speed increase of more than 9 times with only a 5% decrease in compression when tested on the eight CCITT bi-level test images and compared against the basic non-progressive JBIG algorithm. The fastest JBIG variation that we know of, using "PRES" resolution reduction and progressive buildup, achieved an average speed increase of less than 6 times with a 7% decrease in compression, under the same conditions.
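The sketch below shows one quadtree-style resolution-reduction step for a bi-level image using a simple OR rule; JBIG's actual reduction filters (such as "PRES") are more elaborate, so this is only an illustration of how the progressive pyramid is built.

```python
import numpy as np

def reduce_resolution(bitmap):
    """
    One quadtree-style resolution-reduction step for a bi-level image: each
    2x2 block of the full-resolution layer collapses to a single pixel, here
    with an OR rule (black if any child is black).
    """
    h, w = bitmap.shape
    assert h % 2 == 0 and w % 2 == 0, "this sketch assumes even dimensions"
    blocks = bitmap.reshape(h // 2, 2, w // 2, 2)
    return blocks.any(axis=(1, 3)).astype(bitmap.dtype)

# Repeated application builds the progressive pyramid whose lower-resolution
# layers are coded first.
img = np.random.randint(0, 2, size=(8, 8))
print(reduce_resolution(img).shape)   # (4, 4)
```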
Citations: 4