Algorithms for fast vector quantization. S. Arya and D. Mount. In Proceedings DCC '93: Data Compression Conference, 1993. doi:10.1109/DCC.1993.253111

This paper shows that if one is willing to relax the requirement of finding the true nearest neighbor, significant improvements in running time are possible with only a very small loss in the performance of the vector quantizer. The authors present three algorithms for nearest neighbor searching: standard and priority k-d tree search algorithms, and a neighborhood graph search algorithm in which a directed graph is constructed for the point set and edges join neighboring points.
An empirical evaluation of coding methods for multi-symbol alphabets. Alistair Moffat, Neil Sharman, I. Witten, and T. Bell. In Proceedings DCC '93: Data Compression Conference, 1993. doi:10.1109/DCC.1993.253139

The authors examine the resource requirements and compression efficiency of the coding phase, concentrating on applications with medium and large alphabets. When semi-static two-pass encoding can be used, Huffman coding is two to four times faster than arithmetic coding, and sometimes results in superior compression. When an adaptive coder is required, the difference in speed is smaller, but Gallager's implementation of dynamic Huffman coding is still faster than arithmetic coding in most situations. The compression loss from using Huffman codes is negligible in all but extreme circumstances. Where very high speed is necessary, splay coding is also worth considering, although it yields poorer compression.
On-line adaptive vector quantization with variable size codebook entries. C. Constantinescu and J. Storer. In Proceedings DCC '93: Data Compression Conference, 1993. doi:10.1109/DCC.1993.253147

A new image compression algorithm employs some of the most successful approaches to adaptive lossless compression to perform adaptive on-line (single-pass) vector quantization. The authors have tested this algorithm on a host of standard test images (e.g. gray-scale magazine images, medical images, space and scientific images, fingerprint images, and handwriting images); with no prior knowledge of the data and no training, the compression achieved for a given fidelity typically equals or exceeds that of the JPEG standard. The only information that must be specified in advance is the fidelity criterion.
Ziv-Lempel encoding with multi-bit flags. P. Fenwick. In Proceedings DCC '93: Data Compression Conference, 1993. doi:10.1109/DCC.1993.253136

LZ77 and, more recently, LZSS text compression use one-bit flags to identify whether a pointer or a literal follows. This paper investigates the use of multi-bit flags to allow a greater variety of entities in the compressed data stream. Two approaches are described. The first uses flags of 2 or 3 bits with operands constrained to be 1, 2, or 3 bytes long. The other codes entirely in units of 2 or 3 bits (instead of the more usual single bits). Both methods are shown to yield compressors of good performance.
Generalized fractal transforms: complexity issues. D. Monro. In Proceedings DCC '93: Data Compression Conference, 1993. doi:10.1109/DCC.1993.253124

The Bath Fractal Transform (BFT) defines a strategy for obtaining least-squares fractal approximations and can be implemented using functions of varying complexity. The approximation method used in ITT-coding is itself the zero-order instance of the BFT. Some of the available complexity options are explored by combining various orders of BFT approximation with various degrees and types of searching. This may be regarded either as the inclusion of searching in the BFT or as a generalization of the matching criterion of ITT-coding. The combinations are considered from the point of view of the cost-fidelity trade-offs incurred, and the implications for practical application to multimedia information retrieval systems and real-time video are discussed.
Full-frame compression of tomographic images using the discrete Fourier transform. J. Villasenor. In Proceedings DCC '93: Data Compression Conference, 1993. doi:10.1109/DCC.1993.253130

The unacceptability of block artifacts in medical image data compression has led to systems employing full-frame discrete cosine transform (DCT) compression. Although the DCT is the optimum fast transform when block coding is used, it is outperformed by the discrete Fourier transform (DFT) and the discrete Hartley transform for images obtained using positron emission tomography and magnetic resonance imaging. Such images are characterized by a roughly circular region of non-zero intensity bounded by a region R in which the image intensity is essentially zero. Clipping R to its minimum extent can reduce the number of low-intensity pixels, but the practical requirement that images be stored on a rectangular grid means that a significant region of zero intensity must remain an integral part of the image to be compressed. The DCT therefore loses its advantage over the DFT, because neither transform introduces significant artificial discontinuities.
Globally optimal bit allocation. Xiaolin Wu. In Proceedings DCC '93: Data Compression Conference, 1993. doi:10.1109/DCC.1993.253148

Given M quantizers of variable rates, scalar and/or vector, the globally optimal allocation of B bits to the M quantizers can be computed in O(MB^2) time under the integer constraint, or in O(MB·2^B) time without it. The author also considers the nested optimization problem of optimal bit allocation with respect to optimal quantizers. Various algorithmic techniques are proposed to solve this new problem in pseudo-polynomial time.
Multispectral image compression algorithms. T. Markas and J. Reif. In Proceedings DCC '93: Data Compression Conference, 1993. doi:10.1109/DCC.1993.253110

This paper presents a data compression algorithm capable of significantly reducing the amount of information contained in multispectral and hyperspectral images. The loss of information ranges from a perceptually lossless level, achieved at compression ratios of 20-30:1, to one where exploitation of the images is still possible (ratios over 100:1). A one-dimensional transform coder removes the spectral redundancy, and a two-dimensional wavelet transform removes the spatial redundancy of multispectral images. The transformed images are subsequently divided into active regions that contain significant wavelet coefficients. Each active block is then hierarchically encoded using multidimensional bitmap trees. Applying reversible histogram equalization methods to the spectral bands can significantly increase the compression/distortion performance. Landsat Thematic Mapper data are used to illustrate the performance of the proposed algorithm.
Application of AVL trees to adaptive compression of numerical data. H. Yokoo. In Proceedings DCC '93: Data Compression Conference, 1993. doi:10.1109/DCC.1993.253118

This paper discusses the compression of computer files whose statistical properties are not given in advance. A new lossless coding method for this purpose, which utilizes Adel'son-Vel'skii-Landis (AVL) trees, is effective for any word length. Its application to the lossless compression of gray-scale images shows its wider applicability to any ordered set of 18-bit or 36-bit data.
Efficient compression of wavelet coefficients for smooth and fractal-like data. K. Culík and S. Dube. In Proceedings DCC '93: Data Compression Conference, 1993. doi:10.1109/DCC.1993.253126

The authors show how to integrate wavelet-based and fractal-based approaches to data compression. If the data is self-similar or smooth, its wavelet coefficients can be stored efficiently using fractal compression techniques, resulting in high compression ratios.