Arithmetic coding revisited — Alistair Moffat, Radford M. Neal, I. Witten (DOI: 10.1109/DCC.1995.515510)
During its long gestation in the 1970s and early 1980s, arithmetic coding was widely regarded more as an academic curiosity than a practical coding technique. One factor that helped it gain the popularity it enjoys today was the publication in 1987 of source code for a multi-symbol arithmetic coder in Communications of the ACM. Now (1995), our understanding of arithmetic coding has further matured, and it is timely to review the components of that implementation and summarise the improvements that we and other authors have developed since then. We also describe a novel method for performing the underlying calculation needed for arithmetic coding. Accompanying the paper is a "Mark II" implementation that incorporates the improvements we suggest. The areas examined include: changes to the coding procedure that reduce the number of multiplications and divisions and permit them to be performed to low precision; the increased range of probability approximations and alphabet sizes that can be supported using limited-precision calculation; data structures that support arithmetic coding over large alphabets; the interface between the modelling and coding subsystems; and the use of enhanced models to allow high-performance compression. For each of these areas, we consider how the new implementation differs from the CACM package.
New algorithms for optimal binary vector quantizer design — Xiaolin Wu, Yonggang Fang (DOI: 10.1109/DCC.1995.515503)
New algorithms are proposed for designing optimal binary vector quantizers. These algorithms aim to avoid the tendency of the generalized Lloyd method to become trapped in poor local minima. To improve the subjective quality of vector-quantized binary images, a constrained optimal binary VQ framework is proposed. Within this framework, the optimal VQ design can be carried out via an interesting use of linear codes.
Vector quantisation for wavelet based image compression — P. Fenwick, S. Woolford (DOI: 10.1109/DCC.1995.515575)
Summary form only given. The present work arose from a need to transmit architectural line drawings over relatively slow communication links, such as telephone circuits. The images are mostly large line drawings, but with some shading. The application required good compression, incremental transmission, and excellent reproduction of sharp lines and fine detail such as text. The final system uses an initial wavelet transform stage (actually a wave-packet transform), an adaptive vector quantiser stage, and a final post-compression stage. This paper emphasises the vector quantiser. Incremental transmission makes it desirable to use only actual data vectors in the database. The standard Linde-Buzo-Gray (LBG) algorithm was slow, taking 30-60 minutes for a training set; it tended to use 'near-zero' vectors instead of 'true-zero' vectors, introducing undesirable texture into the reconstructed image; and its quality could not be guaranteed, with some images producing artifacts even at low compression rates. The final vector quantiser uses new techniques with LRU maintenance of the database, updating for 'exact matches' to an existing vector and for 'near matches', using a combination of mean-square error and magnitude error. A conventional counting LRU mechanism is used, with different aging parameters for the two types of LRU update. The new vector quantiser requires about 10 seconds per image (compared with 30-60 minutes for LBG) and essentially eliminates the undesirable compression artifacts.
Self-quantized wavelet subtrees: a wavelet-based theory for fractal image compression — G. Davis (DOI: 10.1109/DCC.1995.515513)
We describe an adaptive wavelet-based compression scheme for images. We decompose an image into a set of quantized wavelet coefficients and quantized wavelet subtrees. The vector codebook used for quantizing the subtrees is drawn from the image itself: subtrees are quantized to contracted isometries of coarser-scale subtrees. This codebook drawn from the contracted image is effective for quantizing locally smooth regions and locally straight edges. We prove that this self-quantization enables us to recover the fine-scale wavelet coefficients of an image given its coarse-scale coefficients. We show that this self-quantization algorithm is equivalent to a fractal image compression scheme when the wavelet basis is the Haar basis. The wavelet framework places fractal compression schemes in the context of existing wavelet subtree coding schemes. We obtain a simple convergence proof which strengthens existing fractal compression results considerably, derive an improved means of estimating the error incurred in decoding fractal compressed images, and describe a new reconstruction algorithm which requires O(N) operations for an N-pixel image.
VQ-based model design algorithms for text compression — S.P. Kim, X. Ginesta (DOI: 10.1109/DCC.1995.515544)
Summary form only given. We propose a new approach for text compression where fast decoding is more desirable than fast encoding; an example of such a requirement is an information retrieval system. For efficient compression, high-order conditional probability information of the text is analyzed and modeled by utilizing the vector quantization concept. Generally, vector quantization (VQ) has been used for lossy compression, where the input symbol is not exactly recovered at the decoder, so it does not seem applicable to lossless text compression problems. However, VQ can be applied to high-order conditional probability information so that the complexity of this information is reduced. We represent the conditional probability information of a source in a tree structure where each node in the first level of the tree is associated with its first-order conditional probability and each second-level node with a second-order conditional probability. For good text compression performance, fourth- or higher-order conditional probability information is needed, and it is essential that the model be simplified enough to be trained on a training set of reasonable size. We reduce the number of conditional probability tables and also discuss a semi-adaptive operating mode of the model in which the tree is derived through training but the actual probability information at each node is obtained adaptively from the input data. The performance of the proposed algorithm is comparable to or exceeds that of other methods such as prediction by partial matching (PPM), but requires a smaller memory size.
An efficient data compression hardware based on cellular automata — S. Bhattacharjee, J. Bhattacharya, P. Chaudhuri (DOI: 10.1109/DCC.1995.515582)
Summary form only given. This paper reports a parallel scheme for text data compression. The scheme utilizes the simple, regular, modular and cascadable structure of cellular automata (CA), whose local interconnection structure ideally suits VLSI technology. The state transition behaviour of a particular class of non-group CA, referred to as TPSA (two-predecessor single-attractor) CA, has been studied extensively, and the results are utilized to develop a parallel scheme for data compression. The state transition diagram of a TPSA CA generates a unique inverted binary tree. This inverted binary tree is a labeled tree whose leaves and internal nodes have a unique pattern generated by the CA in successive cycles, and this unique structure can be viewed as a dictionary for text compression. In effect, the storage and retrieval of the dictionary in conventional data compression techniques is replaced by the autonomous-mode operation of the CA, which generates the dictionary dynamically, with appropriate mapping of dictionary data to CA states wherever necessary.
Wavelet subband coding of computer simulation output using the A++ array class library — J. Bradley, C. Brislawn, D. Quinlan, H.D. Zhang, V. Nuri (DOI: 10.1109/DCC.1995.515564)
Summary form only given. This work focuses on developing discrete wavelet transform/scalar quantization data compression software that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of high-performance computing architectures available, the high cost of learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing. The approach is to use the A++/P++ array class library, a C++ software library originally designed for adaptive-mesh PDE algorithms. Using a C++ class library has the advantage of allowing the scientific algorithm to be written in a high-level, platform-independent syntax; the machine-dependent optimization is hidden in low-level definitions of the library objects. Thus, the high-level code can be ported between architectures with no rewriting of source code once the machine-dependent layers have been compiled. In particular, while "A++" refers to a serial library, the same source code can be linked against "P++" libraries, which contain platform-dependent parallelized code. The paper compares the overhead incurred in using A++ library operations with that of a serial implementation written in C when compressing the output of a global ocean circulation model running at the Los Alamos Advanced Computing Lab.
Multiplication-free subband coding of color images — A. Docef, F. Kossentini, W. Chung, Mark J. T. Smith (DOI: 10.1109/DCC.1995.515525)
This paper describes a very computationally efficient design algorithm for color image coding at low bit rates. The proposed algorithm is based on uniform tree-structured subband decomposition, multistage scalar quantization of the image subbands, and high-order entropy coding. The main advantage of the algorithm is that no multiplications are required in either analysis/synthesis or encoding/decoding. This can lead to a simple hardware implementation of the subband coder while maintaining a high level of performance.
An investigation of effective compression ratios for the proposed synchronous data compression protocol — R. R. Little (DOI: 10.1109/DCC.1995.515597)
The Telecommunications Industry Association (TIA) Technical Committee TR-30 ad hoc Committee on Compression of Synchronous Data for DSUs has submitted three documents to TR30.1 as contributions specifying a standard data compression protocol. The proposed standard uses the Point-to-Point Protocol developed by the Internet Engineering Task Force (IETF), with certain extensions. Following a period for comment, the ad hoc committee planned to submit the draft standard document to TR30.1 for ballot at the January 30, 1995, meeting, with balloting expected to be completed in May.
Subband coding methods for seismic data compression — A. Kiely, F. Pollara (DOI: 10.1109/DCC.1995.515557)
Summary form only given. A typical seismic analysis scenario involves collection of data by an array of seismometers, transmission over a channel offering limited data rate, and storage of data for analysis. Seismic data analysis is performed for monitoring earthquakes and for planetary exploration, as in the planned study of seismic events on Mars. Seismic data compression systems are required to cope with the transmission of vast amounts of data over constrained channels and must be able to accurately reproduce occasional high-energy seismic events. We propose a compression algorithm that includes three stages: a decorrelation stage based on subband coding, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient block-adaptive arithmetic coding method. Adaptivity to the non-stationary behavior of the waveform is achieved by partitioning the data into blocks which are encoded separately. The compression ratio of the proposed scheme can be set to meet prescribed fidelity requirements, i.e., the waveform can be reproduced with sufficient fidelity for accurate interpretation and analysis. The distortions incurred by this compression scheme are currently being evaluated by several seismologists. Encoding is done with high efficiency due to the low overhead required to specify the parameters of the arithmetic encoder. Rate-distortion performance results on seismic waveforms are presented for various filter banks and numbers of levels of decomposition.