Summary form only given, as follows. Fractal-based data compression has attracted a great deal of interest since Barnsley's introduction of iterated function systems (IFS), a scheme for compactly representing intricate image structures. This paper discusses the incremental development of a block-oriented fractal coding technique for still images based on the work of Jacquin (1990). A brief overview of Jacquin's method is provided, and several of its features are discussed. In particular, the high order of computational complexity associated with the technique is addressed. This paper proposes that a neural network paradigm known as frequency-sensitive competitive learning (FSCL) be employed to assist the encoder in locating fractal self-similarity within a source image. A careful derivation of the proper neural network size for optimal time performance is provided. Such an optimally chosen network reduces the time complexity of Jacquin's original encoding algorithm from O(n^4) to O(n^3). In addition, an efficient distance measure for comparing two image segments independently of mean pixel brightness and variance is developed. This measure, not provided by Jacquin, is essential for determining the fractal block transformations. An implementation of fractal block coding employing FSCL is presented, and coding results are compared with those of other popular image compression techniques. The paper also presents the structure of the associated software simulator.
{"title":"Reduced-search fractal block coding of images","authors":"W. Kinsner, L. Wall","doi":"10.1109/DCC.1995.515571","DOIUrl":"https://doi.org/10.1109/DCC.1995.515571","url":null,"abstract":"Summary form only given, as follows. Fractal based data compression has attracted a great deal of interest since Barnsley's introduction of iterated functions systems (IFS), a scheme for compactly representing intricate image structures. This paper discusses the incremental development of a block-oriented fractal coding technique for still images based on the work of Jacquin (1990). A brief overview of Jacquin's method is provided, and several of its features are discussed. In particular, the high order of computational complexity associated with the technique is addressed. This paper proposes that a neural network paradigm known as frequency sensitive competitive learning (FSCL) be employed to assist the encoder in locating fractal self-similarity within a source image. A judicious development of the proper neural network size for optimal time performance is provided. Such an optimally-chosen network has the effect of reducing the time complexity of Jacquin's original encoding algorithm from O(n/sup 4/) to O(n/sup 3/). In addition, an efficient distance measure for comparing two image segments independent of mean pixel brightness and variance is developed. This measure, not provided by Jacquin, is essential for determining the fractal block transformations. An implementation of fractal block coding employing FSCL is presented and coding results are compared with other popular image compression techniques. The paper also present the structure of the associated software simulator.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127663991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compression with Reversible Embedded Wavelets (CREW) is a unified lossless and lossy continuous-tone still image compression system. It is wavelet-based, using a "reversible" approximation of one of the best wavelet filters. Reversible wavelets are linear filters with nonlinear rounding which implement exact-reconstruction systems with minimal-precision integer arithmetic. Wavelet coefficients are encoded in a bit-significance embedded order, allowing lossy compression by simply truncating the compressed data. For coding of coefficients, CREW uses a method similar to J. Shapiro's (1993) zerotree, and a completely novel method called Horizon. Horizon coding is a context-based coding that takes advantage of the spatial and spectral information available in the wavelet domain. CREW provides state-of-the-art lossless compression of medical images (greater than 8 bits deep), and lossy and lossless compression of 8-bit-deep images with a single system. CREW has reasonable software and hardware implementations.
{"title":"CREW: Compression with Reversible Embedded Wavelets","authors":"A. Zandi, James D. Allen, E. L. Schwartz, M. Boliek","doi":"10.1109/DCC.1995.515511","DOIUrl":"https://doi.org/10.1109/DCC.1995.515511","url":null,"abstract":"Compression with Reversible Embedded Wavelets (CREW) is a unified lossless and lossy continuous tone still image compression system. It is wavelet based using a \"reversible\" approximation of one of the best wavelet filters. Reversible wavelets are linear filters with non linear rounding which implement exact reconstruction systems with minimal precision integer arithmetic. Wavelet coefficients are encoded in a bit significance embedded order, allowing lossy compression by simply truncating the compressed data. For coding of coefficients, CREW uses a method similar to J. Shapiro's (1993) zero tree, and a completely novel method called Horizon. Horizon coding is a context based coding that takes advantage of the spatial and spectral information available in the wavelet domain. CREW provides state of the art lossless compression of medical images (greater than 8 bits deep), and lossy and lossless compression of 8 bit deep images with a single system. CREW has reasonable software and hardware implementations.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"198 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116443883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents techniques for the design of generic block-transform-based vector quantizer encoders implemented by table lookups. In these table-lookup encoders, input vectors are used directly as addresses into code tables to choose the codewords. There is no need to perform the forward or reverse transforms; they are implemented in the tables. In order to preserve manageable table sizes for large-dimension VQs, we use hierarchical structures to quantize the vector successively in stages. Since both the encoder and decoder are implemented by table lookups, no arithmetic computations are required in the final system implementation. The algorithms are a novel combination of any generic block transform (DCT, Haar, WHT) and hierarchical vector quantization. They use perceptual weighting and subjective distortion measures in the design of the VQs. They are unique in that both the encoder and the decoder are implemented with only table lookups and are amenable to efficient software and hardware solutions.
{"title":"Hierarchical vector quantization of perceptually weighted block transforms","authors":"N. Chaddha, M. Vishwanath, P. Chou","doi":"10.1109/DCC.1995.515490","DOIUrl":"https://doi.org/10.1109/DCC.1995.515490","url":null,"abstract":"This paper presents techniques for the design of generic block transform based vector quantizer encoders implemented by table lookups. In these table lookup encoders, input vectors to the encoders are used directly as addresses in code tables to choose the codewords. There is no need to perform the forward or reverse transforms. They are implemented in the tables. In order to preserve manageable table sizes for large dimension VQ's, we use hierarchical structures to quantize the vector successively in stages. Since both the encoder and decoder are implemented by table lookups, there are no arithmetic computations required in the final system implementation. The algorithms are a novel combination of any generic block transform (DCT, Haar, WHT) and hierarchical vector quantization. They use perceptual weighting and subjective distortion measures in the design of VQ's. They are unique in that both the encoder and the decoder are implemented with only table lookups and are amenable to efficient software and hardware solutions.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126296588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Several improvements to the Bugajski-Russo N-gram algorithm are proposed. When applied to English text, they yield an algorithm of comparable complexity and a rate approximately 10 to 30% lower than that of the commonly used COMPRESS algorithm.
{"title":"An improved hierarchical lossless text compression algorithm","authors":"Chia-Yuan Teng, D. Neuhoff","doi":"10.1109/DCC.1995.515519","DOIUrl":"https://doi.org/10.1109/DCC.1995.515519","url":null,"abstract":"Several improvements to the Bugajski-Russo N-gram algorithm are proposed. When applied to English text these result in an algorithm with comparable complexity and approximately 10 to 30% less rate than the commonly used COMPRESS algorithm.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126449830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given, as follows. This work is concerned with the parallel implementation of a vector quantizer system on the MasPar MP-2, a single-instruction, multiple-data (SIMD) massively parallel computer. A vector quantizer (VQ) consists of two mappings: an encoder and a decoder. The encoder assigns to each input vector the index of the codevector that is closest to it. The decoder uses this index to reconstruct the signal. In our work, we used the Euclidean distortion measure to find the codevector closest to each input vector. The work described in this paper used a MasPar MP-2216 located at the Goddard Space Flight Center, Greenbelt, Maryland. This system has 16,384 processor elements (PEs) arranged in a rectangular array of 128 x 128 nodes. The parallel VQ algorithm is based on pipelining. The codevectors are distributed equally among the PEs in the first row of the PE array and then duplicated on the remaining processor rows, so traversing any row of the PE array amounts to traversing the entire codebook. After the PEs are populated with the codevectors, the input vectors are presented to the first column of PEs, one vector per PE at a time. The first set of data vectors is then compared with the group of codevectors in the first column. Each input vector is associated with a data packet containing the vector itself, the minimum distortion between the vector and the codevectors it has encountered so far, and the index of the codevector that produced that minimum. After the entries of the packet are updated, it is shifted one column to the right in the PE array, and the next set of input vectors takes its place in the first column. The process is repeated until all the input vectors are exhausted. The indices for the first set of data vectors are obtained after an appropriate number of shifts; the remaining indices are obtained in subsequent shifts. Results of extensive performance evaluations are presented in the full-length paper. These results suggest that our algorithm makes very efficient use of the parallel capabilities of the MasPar system. The existence of efficient algorithms such as the one presented in this paper should increase the usefulness and applicability of vector quantizers in Earth and space science applications.
{"title":"A massively parallel algorithm for vector quantization","authors":"K. S. Prashant, V. J. Mathews","doi":"10.1109/DCC.1995.515604","DOIUrl":"https://doi.org/10.1109/DCC.1995.515604","url":null,"abstract":"Summary form only given, as follows. This work is concerned with the parallel implementation of a vector quantizer system on Maspar MP-2, a single-instruction, multiple-data (SIMD) massively parallel computer. A vector quantizer (VQ) consists of two mappings: an encoder and a decoder. The encoder assigns to each input vector the index of the codevector that is closest to it. The decoder uses this index to reconstruct the signal. In our work, we used the Euclidean distortion measure to find the codevector closest to each input vector. The work described in this paper used a Maspar MP-2216 located at the Goddard Space Flight Center, Greenbelt, Maryland. This system has 16,384 processor elements (PEs) arranged in a rectangular array of 128 x 128 nodes. The parallel VQ algorithm is based on pipelining. The codevectors are distributed equally among the PEs in the first row of the PE array. These codevectors are then duplicated on the remaining processor rows. Traversing along any row of the PE array amounts to traversing through the entire codebook. After populating the PEs with the codevectors, the input vectors are presented to the first column of PEs. Each PE receives one vector at a time. The first set of data vectors are now compared with the group of codevectors in the first column. A data packet containing the the input vector, the minimum value of the distortion between the input vector and the code vectors it has encountered so far, and the index corresponding to the codevector that accounted for the current minimum value of the distortion is associated with each input vector. After updating the entries of the data packet, it is shifted one column to the right in the PE array. The next set of input vectors takes its place in the first column. The above process is repeated till all the input vectors are exhausted. The indices for the first set of data vectors are obtained after an appropriate number of shifts. The remaining indices are obtained in subsequent shifts. Results of extensive performance evaluations are presented in the full-length paper. These results suggest that our algorithm makes very efficient use of the parallel capabilities of the Maspar system. The existence of efficient algorithms such as the one presented in this paper should increase the usefulness and applicability of vector quantizers in Earth and Space science applications.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"365 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126704123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Discrete Cosine Transform (DCT) is widely used in lossy image and video compression schemes such as JPEG and MPEG. In this paper we describe RD-OPT, an efficient algorithm for constructing DCT quantization tables with optimal rate-distortion tradeoffs for a given image. The algorithm uses DCT coefficient distribution statistics in a novel way and uses a dynamic programming strategy to produce optimal quantization tables over a wide range of rates and distortions. It can be used to compress images at any desired signal-to-noise ratio or compressed size.
{"title":"RD-OPT: an efficient algorithm for optimizing DCT quantization tables","authors":"Viresh Ratnakar, M. Livny","doi":"10.1109/DCC.1995.515523","DOIUrl":"https://doi.org/10.1109/DCC.1995.515523","url":null,"abstract":"The Discrete Cosine Transform (DCT) is widely used in lossy image and video compression schemes such as JPEG and MPEG. In this paper we describe RD-OPT, an efficient algorithm for constructing DCT quantization tables with optimal rate-distortion tradeoffs for a given image. The algorithm uses DCT coefficient distribution statistics in a novel way and uses a dynamic programming strategy to produce optimal quantization tables over a wide range of rates and distortions. It can be used to compress images at any desired signal-to-noise ratio or compressed size.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"30 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132761160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper describes modelling of the coefficient domain in wavelet subbands of wideband audio signals for low-bit-rate, high-quality compression. The purpose is to develop models of the perception of wideband audio signals in the wavelet domain. The coefficients in the wavelet subbands are quantized using a scheme that adapts to the subband signal: the quantization step size for a particular subband is set inversely proportional to the subband energy, and then, within a subband, the energy-determined step size is modified in inverse proportion to the amplitude probability density of the coefficient. The amplitude probability density of the coefficients in each subband is modelled using learned vector/scalar quantization employing frequency-sensitive competitive learning. The source data consist of single-channel, 16-bit linear samples at 44.1 kHz taken from CDs of major classical and pop music. Preliminary results show a bit rate of 150 kbps, rather than 705.6 kbps, with no perceptual loss in quality. The wavelet transform represents multifractal signals, such as wideband audio, better than other standard transforms, such as the Fourier transform.
{"title":"Adaptive wavelet subband coding for music compression","authors":"K. Ferens, W. Kinsner","doi":"10.1109/DCC.1995.515570","DOIUrl":"https://doi.org/10.1109/DCC.1995.515570","url":null,"abstract":"This paper describes modelling of the coefficient domain in wavelet subbands of wideband audio signals for low-bit rate and high-quality compression. The purpose is to develop models of the perception of wideband audio signals in the wavelet domain. The coefficients in the wavelet subbands are quantized using a scheme that adapts to the subband signal by setting the quantization step size for a particular subband to a size that is inversely proportional to the subband energy, and then, within a subband, by modifying the energy determined step size as inversely proportional to the amplitude probability density of the coefficient. The amplitude probability density of the coefficients in each subband is modelled using learned vector/scalar quantization employing frequency sensitive competitive learning. The source data consists of 1-channel, 16-bit linear data sampled at 44.1 kHz from a CD containing major classical and pop music. Preliminary results show a bit-rate of 150 kbps, rather than 705.6 kbps, with no perceptual loss in quality. The wavelet transform provides better results for representing multifractal signals, such as wide band audio, than do other standard transforms, such as the Fourier transform.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123534064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A vector quantizer maps a multidimensional vector space into a finite subset of reproduction vectors called a codebook. For codebook optimization, the well-known LBG algorithm or a simulated annealing technique is commonly used. Two alternative methods, fuzzy c-means (FCM) clustering and a genetic algorithm (GA), are proposed. To illustrate the performance of the algorithms, a DCT-VQ has been chosen. The fixed partition scheme based on the mean energy per coefficient is shown for the test image "Lena".
{"title":"Alternative methods for codebook design in vector quantization","authors":"V. Delport","doi":"10.1109/DCC.1995.515595","DOIUrl":"https://doi.org/10.1109/DCC.1995.515595","url":null,"abstract":"A vector quantizer maps a multidimensional vector space into a finite subset of reproduction vectors called a codebook. For codebook optimization the well known LBG algorithm or a simulated annealing technique are commonly used. Two alternative methods the fuzzy-c-mean (FCM) and a genetic algorithm (GA) are proposed. In order to illustrate the algorithm performance a DCT-VQ has been chosen. The fixed partition scheme based on the mean energy per coefficient is shown for the test image \"Lena\".","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128673304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given, as follows. The use of wavelets and multiresolution analysis is becoming increasingly popular for image compression. We examine several different approaches to the quantization of wavelet coefficients. A standard approach in subband coding is to use DPCM to encode the lowest band while the higher bands are quantized using a scalar quantizer for each band or a vector quantizer. We implement these schemes using a variety of quantizers, including PDF-optimized quantizers and recursively indexed scalar quantizers (RISQ). We then incorporate a threshold operation to prevent the removal of perceptually important information. We show that there are both subjective and objective improvements in performance when we use the RISQ and the perceptual thresholds. The objective performance measure shows a consistent two to three dB improvement over a wide range of rates. Finally, we use a recursively indexed vector quantizer (RIVQ) to encode the wavelet coefficients. The RIVQ can operate at relatively high rates and is therefore particularly suited for quantizing the coefficients in the lowest band.
{"title":"Quantization of wavelet coefficients for image compression","authors":"A. Mohammed, K. Sayood","doi":"10.1109/DCC.1995.515593","DOIUrl":"https://doi.org/10.1109/DCC.1995.515593","url":null,"abstract":"Summary form only given, as follows. The use of wavelets and multiresolution analysis is becoming increasingly popular for image compression. We examine several different approaches to the quantization of wavelet coefficients. A standard approach in subband coding is to use DPCM to encode the lowest band while the higher bands are quantized using a scalar quantizer for each band or a vector quantizer. We implement these schemes using a variety of quantizer including PDF optimized quantizers and recursively indexed scalar quantizers (RISQ). We then incorporate a threshold operation to prevent the removal of perceptually important information. We show that there is a both subjective and objective improvements in performance when we use the RISQ and the perceptual thresholds. The objective performance measure shows a consistent two to three dB improvement over a wide range of rates. Finally we use a recursively indexed vector quantizer (RIVQ) to encode the wavelet coefficients. The RIVQ can operate at relatively high rates and is therefore particularly suited for quantizing the coefficients in the lowest band.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128746363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The popular dynamic Markov compression algorithm (DMC) offers state-of-the-art compression performance and matchless conceptual simplicity. In practice, however, the cost of DMC's simplicity and performance is often outrageous memory consumption. Several known attempts at reducing DMC's unwieldy model growth have rendered DMC's compression performance uncompetitive. One reason why DMC's model growth problem has resisted solution is that the algorithm is poorly understood. DMC is the only published stochastic data model for which a characterization of its states, in terms of conditioning contexts, is unknown. Up until now, all that was certain about DMC was that a finite-context characterization exists, which had been proved using a finiteness argument. This paper presents and proves the first finite-context characterization of the states of DMC's data model. Our analysis reveals that the DMC model, with or without its counterproductive portions, offers abstract structural features not found in other models. Ironically, the space-hungry DMC algorithm actually has a greater capacity for economical model representation than its counterparts have. Once identified, DMC's distinguishing features combine easily with the best features from other techniques. These combinations have the potential for achieving very competitive compression/memory tradeoffs.
{"title":"The structure of DMC [dynamic Markov compression]","authors":"S. Bunton","doi":"10.1109/DCC.1995.515497","DOIUrl":"https://doi.org/10.1109/DCC.1995.515497","url":null,"abstract":"The popular dynamic Markov compression algorithm (DMC) offers state-of-the-art compression performance and matchless conceptual simplicity. In practice, however, the cost of DMC's simplicity and performance is often outrageous memory consumption. Several known attempts at reducing DMC's unwieldy model growth have rendered DMC's compression performance uncompetitive. One reason why DMC's model growth problem has resisted solution is that the algorithm is poorly understood. DMC is the only published stochastic data model for which a characterization of its states, in terms of conditioning contexts, is unknown. Up until now, all that was certain about DMC was that a finite-context characterization exists, which was proved in using a finiteness argument. This paper presents and proves the first finite-context characterization of the states of DMC's data model Our analysis reveals that the DMC model, with or without its counterproductive portions, offers abstract structural features not found in other models. Ironically, the space-hungry DMC algorithm actually has a greater capacity for economical model representation than its counterparts have. Once identified, DMC's distinguishing features combine easily with the best features from other techniques. These combinations have the potential for achieving very competitive compression/memory tradeoffs.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126003448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}