Summary form only given. Bitmap compression reduces storage space and transmission time for unstructured bit sequences such as those found in inverted files, spatial objects, etc. On the down side, compressed bitmaps lose their functional properties: checking a given bit position, set intersection, union, and difference can be performed only after full decoding, causing a manifold degradation in operational speed. The proposed byte-aligned bitmap compression method (BBC) aims to support fast set operations directly on the compressed bitmap format and, at the same time, to retain a competitive compression rate. To achieve this objective, BBC abandons the traditional approach of encoding run lengths (distances between two ones separated by zeros). Instead, BBC deals only with byte-aligned, byte-sized bitmap portions that are easy to fetch, store, AND, OR, and convert. The bitmap bytes are classified as gaps, containing only zeros or only ones, and maps, containing a mixture of both. We also introduce a simple extension mechanism for existing methods to accommodate dual-gap (zeros and ones) run-length encoding. With this extension, encoding of long "one" sequences becomes as efficient as, or better than, arithmetic encoding.
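As an illustration of the byte-level classification described above, the following sketch splits a raw bitmap into runs of gap bytes and literal map bytes. It is a minimal reconstruction of the idea, assuming a simple tuple-based token stream rather than the published BBC header layout; set operations such as AND and OR can then proceed run by run over such token streams without expanding them back into raw bytes.

```python
def bbc_classify(bitmap: bytes):
    """Split a bitmap into gap runs (bytes that are all zeros or all ones) and
    literal map bytes. A simplified sketch of the BBC classification, not the
    exact encoding of the published method."""
    tokens = []
    i = 0
    while i < len(bitmap):
        b = bitmap[i]
        if b in (0x00, 0xFF):                 # gap byte: run-length encode it
            j = i
            while j < len(bitmap) and bitmap[j] == b:
                j += 1
            tokens.append(("gap", b == 0xFF, j - i))   # (kind, ones?, run length)
            i = j
        else:                                 # map byte: keep it verbatim
            tokens.append(("map", b))
            i += 1
    return tokens

# Example: two gap bytes of zeros, one map byte, three gap bytes of ones.
print(bbc_classify(bytes([0x00, 0x00, 0x5A, 0xFF, 0xFF, 0xFF])))
# [('gap', False, 2), ('map', 90), ('gap', True, 3)]
```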
{"title":"Byte-aligned bitmap compression","authors":"G. Antoshenkov","doi":"10.1109/DCC.1995.515586","DOIUrl":"https://doi.org/10.1109/DCC.1995.515586","url":null,"abstract":"Summary form only given. Bitmap compression reduces storage space and transmission time for unstructured bit sequences like in inverted files, spatial objects, etc. On the down side, the compressed bitmaps loose their functional properties. For example, checking a given bit position, set intersection, union, and difference can be performed only after full decoding, thus causing a many-folded operational speed degradation. The proposed byte-aligned bitmap compression method (BBC) aims to support fast set operations on the compressed bitmap formats and, at the same time, to retain a competitive compression rate. To achieve this objective, BBC abandons the traditional approach of encoding run-lengths (distances between two ones separated by zeros). Instead, BBC deals only with byte aligned byte-size bitmap portions that are easy to fetch, store, AND, OR, and convert. The bitmap bytes are classified as gaps containing only zeros or only ones and maps containing a mixture of both. We also introduced a simple extension mechanism for existing methods to accommodate a dual-gap (zeros and ones) run-length encoding. With this extension, encoding of long \"one\" sequences becomes as efficient and better than arithmetic encoding.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117143086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new video coding scheme in which an image sequence is fully represented through its motion field is introduced. The motivation behind the new coding scheme is that motion fields are generally more efficient representations of image sequences. We describe the new coding scheme, and present a new generalized and optimized representation through the motion field. An important aspect of the new coding approach is that we are free to choose parameters in the representation of the motion field. Our goal is to choose those parameters so that the motion field can be coded most efficiently. We describe our definition of the motion field, and illustrate how the parameters of the motion model can be chosen. We also present the results of applying those parameters to the coding procedure.
{"title":"Optimal representation of motion fields for video compression","authors":"J. V. Gísladóttir, M. Orchard","doi":"10.1109/DCC.1995.515530","DOIUrl":"https://doi.org/10.1109/DCC.1995.515530","url":null,"abstract":"A new video coding scheme in which an image sequence is fully represented through its motion field is introduced. The motivation behind the new coding scheme is that motion fields are generally more efficient representations of image sequences. We describe the new coding scheme, and present a new generalized and optimized representation through the motion field. An important aspect of the new coding approach is that we are free to choose parameters in the representation of the motion field. Our goal is to choose those parameters so that the motion field can be coded most efficiently. We describe our definition of the motion field, and illustrate how the parameters of the motion model can be chosen. We also present the results of applying those parameters to the coding procedure.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"453 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117155316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. A new LSP speech parameter compression scheme is proposed which uses conditional probability information through classification. For efficient compression of speech LSP parameter vectors it is essential that higher-order correlations are exploited. The use of conditional probability information has been hindered by its high complexity. For example, an LSP vector has a 34-bit representation at 4.8 kbps CELP coding (FS1016 standard). It is impractical to use the first-order probability information directly, since 2^34 ≈ 1.7 × 10^10 probability tables would be required and training such information would be practically impossible. In order to reduce the complexity, we reduce the input alphabet size by classifying the LSP vectors according to their phonetic relevance. In other words, speech LSP parameters are classified into groups representing loosely defined phonemes. The number of phoneme groups used was 32, considering the ambiguity of similar phonemes and background noise. Conditional probability tables are then constructed for each class by training. To further reduce the complexity, split-VQ is employed. The classification is achieved through vector quantization with a mean-squared distortion measure in the LSP domain.
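A minimal sketch of the classification and table-selection steps, assuming the 32 class centroids are available as a NumPy array; the layout of the class-conditional probability tables and the use of an ideal code length are assumptions made for illustration, not the paper's exact coder.

```python
import numpy as np

def classify_lsp(lsp, class_centroids):
    """Assign an LSP vector to one of the phoneme-like classes by nearest-centroid
    search, i.e. vector quantization under a mean-squared distortion measure."""
    d2 = ((class_centroids - lsp) ** 2).sum(axis=1)
    return int(d2.argmin())

def ideal_code_length(symbol, cls, cond_tables):
    """Ideal code length in bits for a quantizer symbol when coded with the
    probability table trained for class `cls` (hypothetical table layout)."""
    return float(-np.log2(cond_tables[cls][symbol]))
```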
{"title":"Classified conditional entropy coding of LSP parameters","authors":"Junchen Du, S.P. Kim","doi":"10.1109/DCC.1995.515545","DOIUrl":"https://doi.org/10.1109/DCC.1995.515545","url":null,"abstract":"Summary form only given. A new LSP speech parameter compression scheme is proposed which uses conditional probability information through classification. For efficient compression of speech LSP parameter vectors it is essential that higher order correlations are exploited. The use of conditional probability information has been hindered by high complexity of the information. For example, a LSP vector has 34 bit representation at 4.8 K bps CELP coding (FS1016 standard). It is impractical to use the first order probability information directly since 2/sup 34//spl ap/1.7/spl times/10/sup 10/ number of probability tables would be required and training of such information would be practically impossible. In order to reduce the complexity, we reduce the input alphabet size by classifying the LSP vectors according to their phonetic relevance. In other words, speech LSP parameters are classified into groups representing loosely defined various phonemes. The number of phoneme groups used was 32 considering the ambiguity of similar phonemes and background noises. Then conditional probability tables are constructed for each class by training. In order to further reduce the complexity, split-VQ has been employed. The classification is achieved through vector quantization with a mean squared distortion measure in the LSP domain.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114516216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. The existence of a noiseless, delayless feedback channel permits the transmitter to detect transmission errors at the time they occur. Such a feedback channel does not increase channel capacity, but it does permit the use of adaptive codes with significantly enhanced error correction capabilities. It is well known that codes of this type can be based on cooperative play of the game of "twenty questions with a liar". However, it is perhaps not so well appreciated that it is practicable to implement codes of this type on general purpose computers. We describe a simple and fast implementation in which the transmitter makes reference to a fully worked out game tree. In the worst case, storage requirements grow exponentially with block length. For many cases of interest, storage requirements are not excessive. For example, a 4-error correcting code with 8 information bits and 13 check bits requires only 103.2 kilobytes of storage. By contrast, an 8-error correcting code with 6 information bits and 25 check bits requires 21.2 megabytes of storage. However, no nonadaptive code is capable of correcting as many as 8 errors when 6 information bits are encoded in a block of length 31.
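The fully worked-out game tree can be phrased as a memoized search over "liar game" states, where a state records how many candidate messages have already been charged with 0, 1, ..., e lies. The sketch below only decides whether a winning adaptive strategy exists and is a toy under those assumptions: it is not the authors' table-driven implementation, and it becomes far too slow at the block lengths quoted above.

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def winnable(state, q):
    """state[i] = number of candidate messages already charged with i lies
    (at most e = len(state) - 1 lies in total). Return True if the questioner
    can always isolate a single candidate using q more yes/no questions."""
    if sum(state) <= 1:
        return True                      # one candidate (or none) left: done
    if q == 0:
        return False                     # several candidates, no questions left
    e = len(state) - 1
    # Try every split: ys[i] of the state[i] candidates go into the "yes" half.
    for ys in product(*(range(a + 1) for a in state)):
        yes = tuple(ys[i] + (state[i - 1] - ys[i - 1] if i else 0) for i in range(e + 1))
        no = tuple((state[i] - ys[i]) + (ys[i - 1] if i else 0) for i in range(e + 1))
        if winnable(yes, q - 1) and winnable(no, q - 1):
            return True
    return False

# Two messages, at most one lie, three questions: winnable (3-fold repetition works).
print(winnable((2, 0), 3))   # True
```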
{"title":"Adaptive error correcting codes based on cooperative play of the game of \"twenty questions with a liar\"","authors":"E. Lawler, S. Sarkissian","doi":"10.1109/DCC.1995.515574","DOIUrl":"https://doi.org/10.1109/DCC.1995.515574","url":null,"abstract":"Summary form only given. The existence of a noiseless, delayless feedback channel permits the transmitter to detect transmission errors at the time they occur. Such a feedback channel does not increase channel capacity, but it does permit the use of adaptive codes with significantly enhanced error correction capabilities. It is well known that codes of this type can be based on cooperative play of the game of \"twenty questions with a liar\". However, it is perhaps not so well appreciated that it is practicable to implement codes of this type on general purpose computers. We describe a simple and fast implementation in which the transmitter makes reference to a fully worked out game tree. In the worst case, storage requirements grow exponentially with block length. For many cases of interest, storage requirements are not excessive. For example, a 4-error correcting code with 8 information bits and 13 check bits requires only 103.2 kilobytes of storage. By contrast, an 8-error correcting code with 6 information bits and 25 check bits requires 21.2 megabytes of storage. However, no nonadaptive code is capable of correcting as many as 8 errors when 6 information bits are encoded in a block of length 31.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115722315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Partially hidden Markov models (PHMM) are introduced. They are a variation of hidden Markov models (HMM) combining the power of explicit conditioning on past observations with the power of using hidden states. (P)HMMs may be combined with arithmetic coding for lossless data compression. A general two-part coding scheme based on the PHMM, for a given model order but unknown parameters, is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. A proof of convergence of this reestimation is given. The PHMM structure and the conditions of the convergence proof allow the PHMM to be applied to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt to nonstationarities in the images.
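To make the link to arithmetic coding concrete, the sketch below computes the sequential predictive probabilities P(x_t | x_1, ..., x_{t-1}) with the standard forward recursion of an ordinary HMM; these conditional probabilities are exactly what an arithmetic coder consumes. It deliberately omits the PHMM's explicit conditioning on past observations and the redefined backward variable, so it is background for the coding step rather than the authors' model.

```python
import numpy as np

def predictive_probs(x, pi, A, B):
    """Return P(x_t | x_1..x_{t-1}) for every t under an ordinary HMM with
    initial distribution pi (S,), transitions A (S, S) and emissions B (S, K)."""
    probs = []
    alpha = pi * B[:, x[0]]                  # unnormalised forward variable at t = 0
    probs.append(alpha.sum())                # P(x_0)
    for t in range(1, len(x)):
        predicted = alpha @ A                # state occupancy before seeing x_t
        joint = predicted * B[:, x[t]]
        probs.append(joint.sum() / alpha.sum())
        alpha = joint
    return np.array(probs)

# Ideal code length of the whole sequence in bits: -np.log2(probs).sum()
```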
{"title":"Coding with partially hidden Markov models","authors":"Søren Forchhammer, J. Rissanen","doi":"10.1109/DCC.1995.515499","DOIUrl":"https://doi.org/10.1109/DCC.1995.515499","url":null,"abstract":"Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given. The PHMM structure and the conditions of the convergence proof allows for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme is given. The results indicate that the PHMM can adapt to instationarities in the images.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114794252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this article we examine a scheme that uses two alternating Huffman codes to encode a discrete independent and identically distributed source with a dominant symbol. One Huffman code encodes the lengths of runs of the dominant symbol, the other encodes the remaining symbols. We call this combined strategy alternating runlength Huffman (ARH) coding. This is a popular scheme, used for example in the efficient pyramid image coder (EPIC) subband coding algorithm. Since the run lengths of the dominant symbol are geometrically distributed, they can be encoded using the Huffman codes identified by Golomb (1966) and later generalized by Gallager and Van Voorhis (1975). This run-length encoding allows the most likely symbol to be encoded using less than one bit per sample, providing a simple method for overcoming a drawback of prefix codes: that the redundancy approaches one as the largest symbol probability P approaches one. For ARH coding, the redundancy approaches zero as P approaches one. Comparing the average code rate of ARH with direct Huffman coding we find that: 1. if P < 1/3, ARH is less efficient than Huffman coding; 2. if 1/3 ≤ P < 2/5, ARH is less efficient than or as efficient as Huffman coding, depending on the source distribution; 3. if 2/5 ≤ P ≤ 0.618, ARH and Huffman coding are equally efficient; 4. if P > 0.618, ARH is more efficient than Huffman coding. We give examples of applying ARH coding to some specific sources.
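The run-length half of the scheme can be illustrated with a Golomb code for the geometrically distributed runs of the dominant symbol. The sketch below pairs a generic Golomb encoder with a toy ARH-style driver; using a hand-supplied prefix code for the non-dominant symbols stands in for the second Huffman code and is an assumption made to keep the example short.

```python
def golomb_encode(n, m):
    """Golomb codeword (bit string) for a run length n >= 0 with parameter m >= 1:
    a unary-coded quotient followed by a truncated-binary remainder."""
    q, r = divmod(n, m)
    bits = "1" * q + "0"                 # unary quotient, terminated by '0'
    k = (m - 1).bit_length()             # ceil(log2 m); 0 when m == 1
    if k == 0:
        return bits                      # m == 1 degenerates to a pure unary code
    cutoff = (1 << k) - m                # remainders that receive the short form
    if r < cutoff:
        bits += format(r, f"0{k - 1}b") if k > 1 else ""
    else:
        bits += format(r + cutoff, f"0{k}b")
    return bits

def arh_encode(symbols, dominant, m, other_code):
    """Toy ARH-style encoder: runs of the dominant symbol are Golomb coded, and
    each interrupting symbol is emitted with a codeword from `other_code`
    (a prefix code standing in for the second Huffman code)."""
    out, run = [], 0
    for s in symbols:
        if s == dominant:
            run += 1
        else:
            out.append(golomb_encode(run, m))
            out.append(other_code[s])
            run = 0
    out.append(golomb_encode(run, m))    # final (possibly empty) run
    return "".join(out)

# Example with an assumed dominant symbol 'a' and a two-symbol tail code.
print(arh_encode("aaabaaaac", "a", m=2, other_code={"b": "0", "c": "1"}))
```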
{"title":"An efficient variable length coding scheme for an IID source","authors":"K. Cheung, A. Kiely","doi":"10.1109/DCC.1995.515508","DOIUrl":"https://doi.org/10.1109/DCC.1995.515508","url":null,"abstract":"In this article we examine a scheme that uses two alternating Huffman codes to encode a discrete independent and identically distributed source with a dominant symbol. One Huffman code encodes the length of runs of the dominant symbol, the other encodes the remaining symbols. We call this combined strategy alternating runlength Huffman (ARH) coding. This is a popular scheme, used for example in the efficient pyramid image coder (EPIC) subband coding algorithm. Since the runlengths of the dominant symbol are geometrically distributed, they can be encoded using the Huffman codes identified by Golomb (1966) and later generalized by Gallager and Van Voorhis (1975). This runlength encoding allows the most likely symbol to be encoded using less than one bit per sample, providing a simple method for overcoming a drawback of prefix codes-that the redundancy approaches one as the largest symbol probability P approaches one. For ARH coding, the redundancy approaches zero as P approaches one. Comparing the average code rate of ARH with direct Huffman coding we find that: 1. If P<1/3, ARH is less efficient than Huffman coding. 2. If 1/3/spl les/P<2/5, ARH is less than or equally efficient as Huffman coding, depending on the source distribution. 3. If 2/5/spl les/P/spl les/0.618, ARH and Huffman coding are equally efficient. 4. If P>0.618, ARH is more efficient than Huffman coding. We give examples of applying ARH coding to some specific sources.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114840373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Finite context models improve the performance of chain-based encoders to the point that they become attractive alternative models for binary image compression. The resulting code is within 4% of JBIG at 200 dpi and is 9% more efficient at 400 dpi.
{"title":"Efficient error free chain coding of binary documents","authors":"Robert R V Estes, Ralph Algazi","doi":"10.1109/DCC.1995.515502","DOIUrl":"https://doi.org/10.1109/DCC.1995.515502","url":null,"abstract":"Finite context models improve the performance of chain based encoders to the point that they become attractive, alternative models for binary image compression. The resulting code is within 4% of JBIG at 200 dpi and is 9% more efficient at 400 dpi.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127545467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given, as follows. This article proposes a parallel vector quantization (VQ) algorithm for an exhaustive search of codebooks on a single-instruction-multiple-data (SIMD) multiprocessor. The proposed parallel VQ algorithm can be integrated with parallel wavelet-transform techniques for fast image compression. The algorithm has been implemented on the MasPar parallel computer and achieves favorable performance gains. Our results show that VQ can be efficiently parallelized on commercial SIMD machines to meet the real-time performance requirements of numerous applications. Note that although the processors in the MP-1 machine are based on relatively old VLSI technology, the speedup gained by parallelizing the computations is substantial. Since our algorithm is applicable to any image size, it can readily be used on larger, faster SIMD multiprocessor systems for real-time processing of very large images.
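The core of the exhaustive (full-search) codebook step can be written in a data-parallel style with NumPy broadcasting, which stands in here for the SIMD array operations on the MasPar; this is a sketch of the technique, not the authors' MP-1 implementation.

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Full-search VQ: for each input block, return the index of the nearest
    codeword under squared Euclidean distortion. All block-codeword distances
    are computed at once, mimicking SIMD data parallelism.
    blocks: (N, d) array, codebook: (K, d) array."""
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)  # (N, K)
    return d2.argmin(axis=1)
```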
{"title":"A parallel vector quantization algorithm for SIMD multiprocessor systems","authors":"H.J. Lee, J.C. Liu, A. Chan, C. Chui","doi":"10.1109/DCC.1995.515589","DOIUrl":"https://doi.org/10.1109/DCC.1995.515589","url":null,"abstract":"Summary form only given , as follows. This article proposes a parallel vector quantization (VQ) algorithm for an exhaustive search of codebooks on a single-instruction-multiple-data (SIMD) multiprocessor. The proposed parallel VQ algorithm can be integrated with the parallel wavelet-transform techniques for fast image compression. This algorithm has been implemented on the MasPar parallel computer to achieve favorable performance gains. Our results show that VQ can be efficiently parallelized on commercial SIMD machines to meet the real-time performance requirements of numerous applications. Note that although processors in the MP-1 machine are based on relatively old VLSI technology, the drastic speedup gained by parallelization of the computations is marked. Since our algorithm is applicable to any image size, it can be readily used on larger, faster SIMD multiprocessor systems for real-time processing of very large images.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122086267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract only given; substantially as follows. Data compression is commonly used to improve the performance of telecommunications systems. In local-area networks, compression technology can help reduce transmission bottlenecks if the network transmits data more slowly than the computers can generate it. Network designers need accurate models of network traffic in order to plan network capacity. When designing networks, one must take into account not only the amount of traffic, but also its nature. The authors are particularly concerned with packet-based communication systems, and with how compression changes the statistical properties of packet sizes. They also discuss how adaptive compression can be used with connectionless protocols, which pose serious synchronization difficulties. They have collected large quantities of data from a live network, and have simulated the effect of compressing the data using several different techniques. Relevant statistics of the simulated system have been calculated, allowing data compression to be characterized as a stochastic transformation of teletraffic. Compression can improve throughput in packet-based networks by decreasing the size and number of packets. When applied to individual packets, not only is the mean packet size reduced, but the variance also decreases. Compression has the potential to give significant performance improvements.
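A minimal version of the simulation idea: compress each packet payload independently and compare the mean and variance of packet sizes before and after. zlib is used here merely as a placeholder compressor, and the random payloads are purely illustrative stand-ins for the live traces studied by the authors.

```python
import random
import statistics
import zlib

def packet_size_stats(payloads, level=6):
    """Per-packet compression as a transformation of the packet-size distribution:
    returns (mean, variance) of packet sizes before and after compressing each
    packet independently with zlib (a placeholder choice of compressor)."""
    before = [len(p) for p in payloads]
    after = [len(zlib.compress(p, level)) for p in payloads]
    summarize = lambda xs: (statistics.mean(xs), statistics.pvariance(xs))
    return summarize(before), summarize(after)

# Illustrative stand-in traffic only -- not the traces analysed in the paper.
random.seed(0)
packets = [bytes(random.choice(b"abcd") for _ in range(random.randint(64, 1500)))
           for _ in range(200)]
print(packet_size_stats(packets))
```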
{"title":"Compression of data traffic in packet-based LANs","authors":"K. Pawlikowski, T. Bell, H. Emberson, P. Ashton","doi":"10.1109/DCC.1995.515540","DOIUrl":"https://doi.org/10.1109/DCC.1995.515540","url":null,"abstract":"Abstract only given; substantially as follows. Data compression is very commonly used to improve the performance of telecommunications systems. In local-area networks compression technology can help reduce transmission bottlenecks if the network transmits data slower than the computers can generate it. Network designers need to have accurate models of the network traffic in order to plan network capacity. When designing networks, one must take into account not only the amount of traffic, but also the nature of the traffic. The authors are particularly concerned with communication systems that are packet-based, and with how compression changes the statistical properties of packet sizes. They also discuss how adaptive compression can be used with connectionless protocols, which pose serious synchronization difficulties. They have collected large quantities of data from a live network, and have simulated the effect of compressing the data using several different techniques. Relevant statistics of the simulated system have been calculated, allowing to characterize data compression as a stochastic transformation of teletraffic. Compression can improve throughput in packet-based networks by decreasing the size and number of packets. When applied to individual packets, not only is the mean packet size reduced, but the variance also decreases. Compression has the potential to give significant performance improvements.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123559139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A JBIG-compliant, quadtree-based, lossless image compression algorithm is described. In terms of the number of arithmetic coding operations required to code an image, this algorithm is significantly faster than previous JBIG algorithm variations. Based on this criterion, our algorithm achieves an average speed increase of more than 9 times with only a 5% decrease in compression when tested on the eight CCITT bi-level test images and compared against the basic non-progressive JBIG algorithm. The fastest JBIG variation that we know of, using "PRES" resolution reduction and progressive buildup, achieved an average speed increase of less than 6 times with a 7% decrease in compression, under the same conditions.
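The quadtree pyramid can be sketched as repeated 2x2 resolution reduction of the bi-level image; the OR rule below is a simplification chosen for brevity (JBIG's standard and "PRES" reduction rules are more elaborate), but it shows the coarse-to-fine structure that a progressive coder transmits lowest resolution first.

```python
import numpy as np

def quadtree_reduce(img):
    """One level of resolution reduction for a bi-level image: each output pixel
    is the OR of a 2x2 block of input pixels (a simplified reduction rule)."""
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2]          # drop an odd edge row/column
    blocks = img.reshape(h // 2, 2, w // 2, 2)
    return blocks.any(axis=(1, 3)).astype(np.uint8)

# Building the pyramid: apply repeatedly down to a single pixel, then code the
# layers from coarsest to finest.
```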
{"title":"Quadtree based JBIG compression","authors":"B. Fowler, R. Arps, A. Gamal, D. Yang","doi":"10.1109/DCC.1995.515500","DOIUrl":"https://doi.org/10.1109/DCC.1995.515500","url":null,"abstract":"A JBIG compliant, quadtree based, lossless image compression algorithm is described. In terms of the number of arithmetic coding operations required to code an image, this algorithm is significantly faster than previous JBIG algorithm variations. Based on this criterion, our algorithm achieves an average speed increase of more than 9 times with only a 5% decrease in compression when tested on the eight CCITT bi-level test images and compared against the basic non-progressive JBIG algorithm. The fastest JBIG variation that we know of, using \"PRES\" resolution reduction and progressive buildup, achieved an average speed increase of less than 6 times with a 7% decrease in compression, under the same conditions.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131334571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}