"Reduced-search fractal block coding of images" — W. Kinsner, L. Wall (doi:10.1109/DCC.1995.515571)
Summary form only given, as follows. Fractal-based data compression has attracted a great deal of interest since Barnsley's introduction of iterated function systems (IFS), a scheme for compactly representing intricate image structures. This paper discusses the incremental development of a block-oriented fractal coding technique for still images based on the work of Jacquin (1990). A brief overview of Jacquin's method is provided and several of its features are discussed; in particular, the high computational complexity of the technique is addressed. The paper proposes that a neural-network paradigm known as frequency-sensitive competitive learning (FSCL) be employed to assist the encoder in locating fractal self-similarity within a source image, and develops the network size that yields optimal time performance. Such an optimally sized network reduces the time complexity of Jacquin's original encoding algorithm from O(n^4) to O(n^3). In addition, an efficient distance measure for comparing two image segments independently of mean pixel brightness and variance is developed. This measure, not provided by Jacquin, is essential for determining the fractal block transformations. An implementation of fractal block coding employing FSCL is presented, coding results are compared with other popular image compression techniques, and the structure of the associated software simulator is described.
{"title":"Reduced-search fractal block coding of images","authors":"W. Kinsner, L. Wall","doi":"10.1109/DCC.1995.515571","DOIUrl":"https://doi.org/10.1109/DCC.1995.515571","url":null,"abstract":"Summary form only given, as follows. Fractal based data compression has attracted a great deal of interest since Barnsley's introduction of iterated functions systems (IFS), a scheme for compactly representing intricate image structures. This paper discusses the incremental development of a block-oriented fractal coding technique for still images based on the work of Jacquin (1990). A brief overview of Jacquin's method is provided, and several of its features are discussed. In particular, the high order of computational complexity associated with the technique is addressed. This paper proposes that a neural network paradigm known as frequency sensitive competitive learning (FSCL) be employed to assist the encoder in locating fractal self-similarity within a source image. A judicious development of the proper neural network size for optimal time performance is provided. Such an optimally-chosen network has the effect of reducing the time complexity of Jacquin's original encoding algorithm from O(n/sup 4/) to O(n/sup 3/). In addition, an efficient distance measure for comparing two image segments independent of mean pixel brightness and variance is developed. This measure, not provided by Jacquin, is essential for determining the fractal block transformations. An implementation of fractal block coding employing FSCL is presented and coding results are compared with other popular image compression techniques. The paper also present the structure of the associated software simulator.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127663991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"CREW: Compression with Reversible Embedded Wavelets" — A. Zandi, James D. Allen, E. L. Schwartz, M. Boliek (doi:10.1109/DCC.1995.515511)
Compression with Reversible Embedded Wavelets (CREW) is a unified lossless and lossy continuous-tone still image compression system. It is wavelet-based, using a "reversible" approximation of one of the best wavelet filters. Reversible wavelets are linear filters with nonlinear rounding which implement exact-reconstruction systems in minimal-precision integer arithmetic. Wavelet coefficients are encoded in bit-significance embedded order, allowing lossy compression by simply truncating the compressed data. For coding of coefficients, CREW uses a method similar to J. Shapiro's (1993) zerotree, and a completely novel method called Horizon. Horizon coding is a context-based coding that takes advantage of the spatial and spectral information available in the wavelet domain. CREW provides state-of-the-art lossless compression of medical images (greater than 8 bits deep), and lossy and lossless compression of 8-bit-deep images with a single system. CREW has reasonable software and hardware implementations.
{"title":"CREW: Compression with Reversible Embedded Wavelets","authors":"A. Zandi, James D. Allen, E. L. Schwartz, M. Boliek","doi":"10.1109/DCC.1995.515511","DOIUrl":"https://doi.org/10.1109/DCC.1995.515511","url":null,"abstract":"Compression with Reversible Embedded Wavelets (CREW) is a unified lossless and lossy continuous tone still image compression system. It is wavelet based using a \"reversible\" approximation of one of the best wavelet filters. Reversible wavelets are linear filters with non linear rounding which implement exact reconstruction systems with minimal precision integer arithmetic. Wavelet coefficients are encoded in a bit significance embedded order, allowing lossy compression by simply truncating the compressed data. For coding of coefficients, CREW uses a method similar to J. Shapiro's (1993) zero tree, and a completely novel method called Horizon. Horizon coding is a context based coding that takes advantage of the spatial and spectral information available in the wavelet domain. CREW provides state of the art lossless compression of medical images (greater than 8 bits deep), and lossy and lossless compression of 8 bit deep images with a single system. CREW has reasonable software and hardware implementations.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"198 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116443883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Hierarchical vector quantization of perceptually weighted block transforms" — N. Chaddha, M. Vishwanath, P. Chou (doi:10.1109/DCC.1995.515490)
This paper presents techniques for the design of generic block-transform-based vector quantizer encoders implemented by table lookups. In these table-lookup encoders, input vectors are used directly as addresses into code tables to choose the codewords; there is no need to perform the forward or inverse transforms, since they are folded into the tables. To keep table sizes manageable for large-dimension VQs, hierarchical structures quantize the vector successively in stages. Since both the encoder and decoder are implemented by table lookups, no arithmetic computations are required in the final system implementation. The algorithms are a novel combination of any generic block transform (DCT, Haar, WHT) with hierarchical vector quantization, and they use perceptual weighting and subjective distortion measures in the design of the VQs. They are unique in that both the encoder and the decoder are implemented with only table lookups, and they are amenable to efficient software and hardware solutions.
{"title":"Hierarchical vector quantization of perceptually weighted block transforms","authors":"N. Chaddha, M. Vishwanath, P. Chou","doi":"10.1109/DCC.1995.515490","DOIUrl":"https://doi.org/10.1109/DCC.1995.515490","url":null,"abstract":"This paper presents techniques for the design of generic block transform based vector quantizer encoders implemented by table lookups. In these table lookup encoders, input vectors to the encoders are used directly as addresses in code tables to choose the codewords. There is no need to perform the forward or reverse transforms. They are implemented in the tables. In order to preserve manageable table sizes for large dimension VQ's, we use hierarchical structures to quantize the vector successively in stages. Since both the encoder and decoder are implemented by table lookups, there are no arithmetic computations required in the final system implementation. The algorithms are a novel combination of any generic block transform (DCT, Haar, WHT) and hierarchical vector quantization. They use perceptual weighting and subjective distortion measures in the design of VQ's. They are unique in that both the encoder and the decoder are implemented with only table lookups and are amenable to efficient software and hardware solutions.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126296588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"An improved hierarchical lossless text compression algorithm" — Chia-Yuan Teng, D. Neuhoff (doi:10.1109/DCC.1995.515519)
Several improvements to the Bugajski-Russo N-gram algorithm are proposed. When applied to English text, these result in an algorithm with comparable complexity and approximately 10 to 30% lower rate than the commonly used COMPRESS algorithm.
{"title":"An improved hierarchical lossless text compression algorithm","authors":"Chia-Yuan Teng, D. Neuhoff","doi":"10.1109/DCC.1995.515519","DOIUrl":"https://doi.org/10.1109/DCC.1995.515519","url":null,"abstract":"Several improvements to the Bugajski-Russo N-gram algorithm are proposed. When applied to English text these result in an algorithm with comparable complexity and approximately 10 to 30% less rate than the commonly used COMPRESS algorithm.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126449830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"A massively parallel algorithm for vector quantization" — K. S. Prashant, V. J. Mathews (doi:10.1109/DCC.1995.515604)
Summary form only given, as follows. This work is concerned with the parallel implementation of a vector quantizer system on the MasPar MP-2, a single-instruction, multiple-data (SIMD) massively parallel computer. A vector quantizer (VQ) consists of two mappings: an encoder and a decoder. The encoder assigns to each input vector the index of the codevector closest to it; the decoder uses this index to reconstruct the signal. In this work, the Euclidean distortion measure was used to find the codevector closest to each input vector. The experiments used a MasPar MP-2216 located at the Goddard Space Flight Center, Greenbelt, Maryland; this system has 16,384 processor elements (PEs) arranged in a rectangular array of 128 x 128 nodes. The parallel VQ algorithm is based on pipelining. The codevectors are distributed equally among the PEs in the first row of the PE array and then duplicated on the remaining rows, so traversing any row of the array amounts to traversing the entire codebook. After the PEs are populated with codevectors, the input vectors are presented to the first column of PEs, each PE receiving one vector at a time. Each input vector is associated with a data packet containing the vector itself, the minimum distortion between the vector and the codevectors it has encountered so far, and the index of the codevector that achieved that minimum. After its entries are updated, the packet is shifted one column to the right in the PE array, and the next set of input vectors takes its place in the first column. This process is repeated until all input vectors are exhausted; the indices for the first set of vectors are obtained after an appropriate number of shifts, and the remaining indices follow in subsequent shifts. Results of extensive performance evaluations are presented in the full-length paper. These results suggest that the algorithm makes very efficient use of the parallel capabilities of the MasPar system, and the existence of such efficient algorithms should increase the usefulness and applicability of vector quantizers in Earth and space science applications.
{"title":"A massively parallel algorithm for vector quantization","authors":"K. S. Prashant, V. J. Mathews","doi":"10.1109/DCC.1995.515604","DOIUrl":"https://doi.org/10.1109/DCC.1995.515604","url":null,"abstract":"Summary form only given, as follows. This work is concerned with the parallel implementation of a vector quantizer system on Maspar MP-2, a single-instruction, multiple-data (SIMD) massively parallel computer. A vector quantizer (VQ) consists of two mappings: an encoder and a decoder. The encoder assigns to each input vector the index of the codevector that is closest to it. The decoder uses this index to reconstruct the signal. In our work, we used the Euclidean distortion measure to find the codevector closest to each input vector. The work described in this paper used a Maspar MP-2216 located at the Goddard Space Flight Center, Greenbelt, Maryland. This system has 16,384 processor elements (PEs) arranged in a rectangular array of 128 x 128 nodes. The parallel VQ algorithm is based on pipelining. The codevectors are distributed equally among the PEs in the first row of the PE array. These codevectors are then duplicated on the remaining processor rows. Traversing along any row of the PE array amounts to traversing through the entire codebook. After populating the PEs with the codevectors, the input vectors are presented to the first column of PEs. Each PE receives one vector at a time. The first set of data vectors are now compared with the group of codevectors in the first column. A data packet containing the the input vector, the minimum value of the distortion between the input vector and the code vectors it has encountered so far, and the index corresponding to the codevector that accounted for the current minimum value of the distortion is associated with each input vector. After updating the entries of the data packet, it is shifted one column to the right in the PE array. The next set of input vectors takes its place in the first column. The above process is repeated till all the input vectors are exhausted. The indices for the first set of data vectors are obtained after an appropriate number of shifts. The remaining indices are obtained in subsequent shifts. Results of extensive performance evaluations are presented in the full-length paper. These results suggest that our algorithm makes very efficient use of the parallel capabilities of the Maspar system. The existence of efficient algorithms such as the one presented in this paper should increase the usefulness and applicability of vector quantizers in Earth and Space science applications.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"365 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126704123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"RD-OPT: an efficient algorithm for optimizing DCT quantization tables" — Viresh Ratnakar, M. Livny (doi:10.1109/DCC.1995.515523)
The Discrete Cosine Transform (DCT) is widely used in lossy image and video compression schemes such as JPEG and MPEG. In this paper we describe RD-OPT, an efficient algorithm for constructing DCT quantization tables with optimal rate-distortion tradeoffs for a given image. The algorithm uses DCT coefficient distribution statistics in a novel way and uses a dynamic programming strategy to produce optimal quantization tables over a wide range of rates and distortions. It can be used to compress images at any desired signal-to-noise ratio or compressed size.
{"title":"RD-OPT: an efficient algorithm for optimizing DCT quantization tables","authors":"Viresh Ratnakar, M. Livny","doi":"10.1109/DCC.1995.515523","DOIUrl":"https://doi.org/10.1109/DCC.1995.515523","url":null,"abstract":"The Discrete Cosine Transform (DCT) is widely used in lossy image and video compression schemes such as JPEG and MPEG. In this paper we describe RD-OPT, an efficient algorithm for constructing DCT quantization tables with optimal rate-distortion tradeoffs for a given image. The algorithm uses DCT coefficient distribution statistics in a novel way and uses a dynamic programming strategy to produce optimal quantization tables over a wide range of rates and distortions. It can be used to compress images at any desired signal-to-noise ratio or compressed size.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"30 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132761160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Adaptive wavelet subband coding for music compression" — K. Ferens, W. Kinsner (doi:10.1109/DCC.1995.515570)
This paper describes modelling of the coefficient domain in wavelet subbands of wideband audio signals for low-bit-rate, high-quality compression. The purpose is to develop models of the perception of wideband audio signals in the wavelet domain. The coefficients in the wavelet subbands are quantized with a scheme that adapts to the subband signal: the quantization step size for a particular subband is set inversely proportional to the subband energy, and then, within a subband, the energy-determined step size is modified inversely proportional to the amplitude probability density of the coefficient. The amplitude probability density of the coefficients in each subband is modelled using learned vector/scalar quantization employing frequency-sensitive competitive learning. The source data consists of single-channel, 16-bit linear data sampled at 44.1 kHz from a CD containing major classical and pop music. Preliminary results show a bit rate of 150 kbps, rather than 705.6 kbps, with no perceptual loss in quality. The wavelet transform represents multifractal signals, such as wideband audio, better than other standard transforms such as the Fourier transform.
{"title":"Adaptive wavelet subband coding for music compression","authors":"K. Ferens, W. Kinsner","doi":"10.1109/DCC.1995.515570","DOIUrl":"https://doi.org/10.1109/DCC.1995.515570","url":null,"abstract":"This paper describes modelling of the coefficient domain in wavelet subbands of wideband audio signals for low-bit rate and high-quality compression. The purpose is to develop models of the perception of wideband audio signals in the wavelet domain. The coefficients in the wavelet subbands are quantized using a scheme that adapts to the subband signal by setting the quantization step size for a particular subband to a size that is inversely proportional to the subband energy, and then, within a subband, by modifying the energy determined step size as inversely proportional to the amplitude probability density of the coefficient. The amplitude probability density of the coefficients in each subband is modelled using learned vector/scalar quantization employing frequency sensitive competitive learning. The source data consists of 1-channel, 16-bit linear data sampled at 44.1 kHz from a CD containing major classical and pop music. Preliminary results show a bit-rate of 150 kbps, rather than 705.6 kbps, with no perceptual loss in quality. The wavelet transform provides better results for representing multifractal signals, such as wide band audio, than do other standard transforms, such as the Fourier transform.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123534064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Lossless compression by simulated annealing" — R. Bowen-Wright, K. Sayood (doi:10.1109/DCC.1995.515562)
Summary form only given. Linear predictive schemes are among the simplest techniques in lossless image compression, and in spite of their simplicity they have proven surprisingly efficient; the current JPEG image coding standard uses linear predictive coders in its lossless mode. Predictive coding was originally used in lossy compression techniques such as differential pulse code modulation (DPCM), in which the prediction error is quantized and the quantized value transmitted to the receiver. To reduce the quantization error it was necessary to reduce the prediction error variance, so techniques for generating "optimum" predictor coefficients generally attempt to minimize some measure of that variance. In lossless compression, however, the objective is to minimize the entropy of the prediction error, so techniques geared to minimizing its variance may not be best suited for obtaining the predictor coefficients. We have obtained the predictor coefficients for lossless image compression by minimizing the first-order entropy of the prediction error, using simulated annealing to perform the minimization. One way to improve the performance of linear predictive techniques is to first remap the pixel values such that the histogram of the remapped image contains no "holes".
{"title":"Lossless compression by simulated annealing","authors":"R. Bowen-Wright, K. Sayood","doi":"10.1109/DCC.1995.515562","DOIUrl":"https://doi.org/10.1109/DCC.1995.515562","url":null,"abstract":"Summary form only given. Linear predictive schemes are some of the simplest techniques in lossless image compression. In spite of their simplicity they have proven to be surprisingly efficient. The current JPEG image coding standard uses linear predictive coders in its lossless mode. Predictive coding was originally used in lossy compression techniques such as differential pulse code modulation (DPCM). In these techniques the prediction error is quantized, and the quantized value transmitted to the receiver. In order to reduce the quantization error it was necessary to reduce the prediction error variance. Therefore techniques for generating \"optimum\" predictor coefficients generally attempt to minimize some measure of the prediction error variance. In lossless compression the objective is to minimize the entropy of the prediction error, therefore techniques geared to minimizing the variance of the prediction error may not be best suited for obtaining the predictor coefficients. We have attempted to obtain the predictor coefficient for lossless image compression by minimizing the first order entropy of the prediction error. We have used simulated annealing to perform the minimization. One way to improve the performance of linear predictive techniques is to first remap the pixel values such that a histogram of the remapped image contains no \"holes\" in it.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124411873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Constraining the size of the instantaneous alphabet in trellis quantizers" — M. F. Larsen, R. L. Frost (doi:10.1109/DCC.1995.515492)
A method is developed for decreasing the computational complexity of a trellis quantizer (TQ) encoder. We begin by developing a rate-distortion theory under a constraint on the average instantaneous number of quanta considered. This constraint has practical importance: in a TQ, the average instantaneous number of quanta is exactly the average number of multiplies required at the encoder. The theory shows that if the conditional probability of each quantum is restricted to a finite region of support, the instantaneous number of quanta considered can be made quite small at little or no cost in SQNR performance. Simulations of TQs confirm this prediction. This reduction in complexity makes practical the use of model-based TQs (MTQs), which had previously been considered computationally unreasonable. For speech, performance gains of several dB SQNR over adaptive predictive schemes at similar computational complexity are obtained using only a first-order MTQ.
{"title":"Constraining the size of the instantaneous alphabet in trellis quantizers","authors":"M. F. Larsen, R. L. Frost","doi":"10.1109/DCC.1995.515492","DOIUrl":"https://doi.org/10.1109/DCC.1995.515492","url":null,"abstract":"A method is developed for decreasing the computational complexity of a trellis quantizer (TQ) encoder. We begin by developing a rate-distortion theory under a constraint on the average instantaneous number of quanta considered. This constraint has practical importance: in a TQ, the average instantaneous number of quanta is exactly the average number of multiplies required at the encoder. The theory shows that if the conditional probability of each quanta is restricted to a finite region of support, the instantaneous number of quanta considered can be made quite small at little or no cost in SQNR performance. Simulations of TQs confirm this prediction. This reduction in complexity makes practical the use of model-based TQs (MTQs), which had previously been considered computationally unreasonable. For speech, performance gains of several dB SQNR over adaptive predictive schemes at a similar computational complexity are obtained using only a first-order MTQ.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114499177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Compression of hyperspectral imagery using hybrid DPCM/DCT and entropy-constrained trellis coded quantization" — G. Abousleman (doi:10.1109/DCC.1995.515522)
A system is presented for compression of hyperspectral imagery which utilizes trellis coded quantization (TCQ). Specifically, DPCM is used to spectrally decorrelate the hyperspectral data, while a 2-D discrete cosine transform (DCT) coding scheme is used for spatial decorrelation. Entropy-constrained codebooks are designed using a modified version of the generalized Lloyd algorithm. This coder achieves compression ratios of greater than 70:1 with average PSNR of the coded hyperspectral sequence exceeding 40.0 dB.
{"title":"Compression of hyperspectral imagery using hybrid DPCM/DCT and entropy-constrained trellis coded quantization","authors":"G. Abousleman","doi":"10.1109/DCC.1995.515522","DOIUrl":"https://doi.org/10.1109/DCC.1995.515522","url":null,"abstract":"A system is presented for compression of hyperspectral imagery which utilizes trellis coded quantization (TCQ). Specifically, DPCM is used to spectrally decorrelate the hyperspectral data, while a 2-D discrete cosine transform (DCT) coding scheme is used for spatial decorrelation. Entropy-constrained codebooks are designed using a modified version of the generalized Lloyd algorithm. This coder achieves compression ratios of greater than 70:1 with average PSNR of the coded hyperspectral sequence exceeding 40.0 dB.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126871992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}