"A fractional chip wavelet zero tree codec (WZT) for video compression", K. Kolarov, W. Lynch, Bill Arrighi, Bob Hoover. Proceedings DCC'99 Data Compression Conference, 1999. doi: 10.1109/DCC.1999.785692
[Summary form only given]. We introduce a motion wavelet transform zero tree (WZT) codec which achieves good compression ratios and can be implemented in a single ASIC of modest size. The codec employs a group of pictures (GOP) of two interlaced video frames, edge filters for the boundaries, intermediate field image compression, and a block compression structure. Specific features of the implementation for a small single chip are: 1) The transform filters are short and use dyadic rational coefficients with small numerators, so they can be implemented with adds and shifts. We propose a Mallat pyramid resulting from five filter applications in the horizontal direction and three in the vertical direction, with modified edge filters near block and image boundaries so that actual image values are used. 2) Motion image compression is used in place of motion compensation: transform compression is applied in the temporal direction to a GOP of four fields. A two-level temporal Mallat pyramid is used as a tensor product with the spatial pyramid; the linear edge filters are used at the fine level and the modified Haar filters at the coarse level, resulting in four temporal subbands. 3) Processing can be decoupled into blocks of 8 scan lines of 32 pixels each, which reduces the RAM requirement to the point that the RAM can be placed in the ASIC itself. 4) Quantization denominators are powers of two, enabling implementation by shifts. 5) Zero-tree coding yields a progressive encoding which is easily rate controlled. 6) The codec itself imposes a very low delay of less than 3.5 ms within a field and 67 ms for a GOP. The overall conclusion is that it is reasonable to expect that this method can be implemented, including memory, in a few mm² of silicon.
"Lossless color image compression using chromatic correlation", Wen Jiang, L. Bruton. Proceedings DCC'99 Data Compression Conference, 1999. doi: 10.1109/DCC.1999.785690
[Summary form only given]. Typically, the lossless compression of color images is achieved by separately compressing the three RGB monochromatic image components. The proposed method exploits the fact that high spatial correlations exist not only within each monochromatic frame but also between similar spatial locations in adjacent monochromatic frames. Based on the observation that the prediction errors produced by the JPEG predictor in each RGB monochromatic frame have very similar structures, we propose two new chromatic predictors, the chromatic differential predictor (CDP) and the classified CDP (CCDP), to capture the spectral dependencies between the monochromatic frames. In addition to the prediction schemes, we consider context modeling schemes that take into account the prediction errors at spatially and/or spectrally adjacent pixels in order to encode the prediction errors efficiently. To demonstrate the advantage of the proposed lossless color image compression scheme, five images of different types were selected from the KODAK image set; all are 24 bpp RGB color images with resolution 768×512. The experimental results demonstrate a significant improvement in compression performance. The method's fast implementation and high compression ratio make it a promising candidate for real-time color video compression.
"Software synthesis of variable-length code decoder using a mixture of programmed logic and table lookups", Gene Cheung, S. McCanne, C. Papadimitriou. Proceedings DCC'99 Data Compression Conference, 1999. doi: 10.1109/DCC.1999.755661
Implementation of variable-length code (VLC) decoders can involve a tradeoff between the number of decoding steps and memory usage. In this paper, we propose a novel scheme for optimizing this tradeoff using a machine model abstracted from general-purpose processors with hierarchical memories. We formulate the VLC decoding problem as an optimization problem in which the objective is to minimize the average decoding time. After showing that the problem is NP-complete, we present a Lagrangian algorithm that finds an approximate solution with bounded error. An implementation is synthesized automatically by a code generator. To demonstrate the efficacy of our approach, we conducted decoding experiments with codebooks for a pruned tree-structured vector quantizer and for H.263 motion vectors; the results show a performance gain for the proposed algorithm over both single-table-lookup and pure-logic implementations.
"SICLIC: a simple inter-color lossless image coder", R. Barequet, M. Feder. Proceedings DCC'99 Data Compression Conference, 1999. doi: 10.1109/DCC.1999.755700
Many applications require high-quality color images. To save storage space and transmission time while preserving high quality, these images are losslessly compressed. Most image compression algorithms treat the color image, usually in RGB format, as a set of independent gray-scale images. SICLIC is a novel inter-color coding algorithm based on a LOCO-like algorithm. It combines the simplicity of Golomb-Rice coding with the potential of context models in both intra-color and inter-color encoding. It also supports intra-color and inter-color alphabet extension in order to reduce the redundancy of the code. SICLIC attains compression ratios superior to those obtained with most state-of-the-art compression algorithms and achieves compression ratios very close to those of inter-band CALIC, with much lower complexity. With arithmetic coding, SICLIC attains better compression than inter-band CALIC.
"Reversible variable length codes (RVLC) for robust coding of 3D topological mesh data", Z. Yan, Sunil Kumar, Jiankun Li, C.-C. Jay Kuo. Proceedings DCC'99 Data Compression Conference, 1999. doi: 10.1109/DCC.1999.785717
Summary form only given. In order to limit error propagation, we divide the topological data of the entire mesh into several segments, each identified by its synchronization word and header. Because an arithmetic coder is used, the data of a whole segment often become useless in the presence of even a single bit error. Furthermore, several adjacent segments may be corrupted simultaneously at high bit error rates (BER). As a result, a lot of data would have to be retransmitted in the presence of errors, and the retransmitted data may in turn be corrupted under high-BER conditions, resulting in a considerable loss of coding efficiency and increased delay. We propose the use of reversible variable length codes (RVLC) to solve this problem. RVLC not only prevent error propagation within a segment but also, thanks to their two-way decoding capability, efficiently locate the distorted portion of the bitstream. This allows a large portion of the data in a corrupted segment to be recovered, so the amount of retransmitted data can be reduced drastically. RVLC can be matched to sources with different probability distributions by adjusting their suffix length, and they have been found suitable for image and video coding; however, the application of RVLC to robust 3D mesh coding has not yet been studied. This research presents our study of the suitability of RVLC for the topological data. Experiments have been carried out to demonstrate the efficiency of the proposed robust 3D graphics coding algorithm. To design an efficient predefined code table, a large set of 300 MPEG-4 selected 3D models was used in our experiments. The use of predefined code tables results in significantly reduced computational complexity.
"2D-pattern matching image and video compression", Marc Alzina, W. Szpankowski, A. Grama. Proceedings DCC'99 Data Compression Conference, 1999. doi: 10.1109/DCC.1999.755692
We propose a lossy data compression scheme based on an approximate two-dimensional pattern matching (2D-PMC) extension of the Lempel-Ziv lossless scheme. We apply the scheme to image and video compression and report on our theoretical and experimental results. Theoretically, we show that the so-called fixed database model leads to suboptimal compression. Furthermore, the compression ratio of this model is as low as the generalized entropy that we define. We use this model for our video compression scheme and present experimental results. For image compression we use a growing database model. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of novel techniques and data structures, such as k-d trees, generalized run-length coding, adaptive arithmetic coding, and a variable and adaptive maximum distortion level, to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.4 Mbit/s for video compression.
"Quadtree classification and TCQ image coding", B. A. Banister, T. Fischer. Proceedings DCC'99 Data Compression Conference, 1999. doi: 10.1109/DCC.1999.755664
The SPIHT algorithm is shown to implicitly use quadtree-based classification. The rate-distortion encoding performance of the classes is described, and quantization improvements are presented. A new encoding algorithm combines a general SPIHT data structure with the granular gain of multi-dimensional quantization to achieve improved PSNR-versus-rate performance.
"Context quantization with Fisher discriminant for adaptive embedded wavelet image coding", Xiaolin Wu. Proceedings DCC'99 Data Compression Conference, 1999. doi: 10.1109/DCC.1999.755659
Recent progress in context modeling and adaptive entropy coding of wavelet coefficients has probably been the most important catalyst for the rapidly maturing area of wavelet image compression technology. In this paper we identify statistical context modeling of wavelet coefficients as the determining factor of rate-distortion performance of wavelet codecs. We propose a new context quantization algorithm for minimum conditional entropy. The algorithm is a dynamic programming process guided by Fisher's linear discriminant. It facilitates high-order context modeling and adaptive entropy coding of embedded wavelet bit streams, and leads to superb compression performance in both lossy and lossless cases.
"Low complexity high-order context modeling of embedded wavelet bit streams", Xiaolin Wu. Proceedings DCC'99 Data Compression Conference, 1999. doi: 10.1109/DCC.1999.755660
In the past three or so years, particularly during the JPEG 2000 standardization process that was launched last year, statistical context modeling of embedded wavelet bit streams has received a lot of attention from the image compression community. High-order context modeling has been proven to be indispensable for high rate-distortion performance of wavelet image coders. However, if care is not taken in algorithm design and implementation, the formation of high-order modeling contexts can be both CPU and memory greedy, creating a computation bottleneck for wavelet coding systems. In this paper we focus on the operational aspect of high-order statistical context modeling, and introduce some fast algorithm techniques that can drastically reduce both the time and space complexities of high-order context modeling in the wavelet domain.
"Move-to-front and permutation based inversion coding", Z. Arnavut. Proceedings DCC'99 Data Compression Conference, 1999. doi: 10.1109/DCC.1999.785672
[Summary form only given]. Introduced by Bentley et al. (1986), move-to-front (MTF) coding is an adaptive, self-organizing list (permutation) technique. Motivated by the MTF coder's use of small permutations, which are restricted to the data source's alphabet size, we investigate compression of data files using canonical sorting permutations followed by permutation-based inversion coding (PBIC) over the set {0, ..., n-1}, where n is the size of the data source. The technique introduced yields a better compression gain than the MTF coder and improves the compression gain of block-sorting techniques.