"Progressive vector quantization of multispectral image data using a massively parallel SIMD machine" by M. Manohar and J. Tilton. DOI: 10.1109/DCC.1992.227463

Progressive transmission (PT) using vector quantization (VQ) is called progressive vector quantization (PVQ) and is used for efficient telebrowsing and dissemination of multispectral image data over computer networks. In principle, any compression technique can be used in PT mode. Here VQ is selected as the baseline compression technique because VQ-encoded images can be decoded by a simple table-lookup process, so users are not burdened with heavy computation when using the compressed data. Codebook generation, or the training phase, is the most critical part of VQ. Two different algorithms have been used for this purpose: the first is based on the well-known Linde-Buzo-Gray (LBG) algorithm, and the other on self-organizing feature maps (SOFM). Since both training and encoding are computationally intensive, the authors use the MasPar, a SIMD machine, for these steps. Multispectral imagery from the Advanced Very High Resolution Radiometer (AVHRR) instrument forms the testbed. The results from the two VQ techniques are compared in terms of compression ratio at a given mean squared error (MSE). The number of bytes required to transmit the image data losslessly with this progressive compression technique is usually less than the number of bytes required by the standard Unix compress utility.
"An adaptive high-speed lossy data compression" by O. Chen, Zhen Zhang, and B. Sheu. DOI: 10.1109/DCC.1992.227446

An adaptive method for lossy data compression and the associated VLSI architecture have been developed. The scheme requires neither a priori knowledge of the source statistics nor codebook training. The codebook is generated on the fly and is constantly updated to capture local textural features of the data. The algorithm is proven to reach the rate-distortion function for memoryless sources. The authors also propose a computing architecture consisting of a vector quantizer and an encoded-data generator. With this method, a high-speed VLSI processor with good local adaptivity, low complexity, and a fair compression ratio can be achieved.
"Nearly optimal vector quantization via linear programming" by Jyh-Han Lin and J. Vitter. DOI: 10.1109/DCC.1992.227479

The authors present new vector quantization algorithms. The approach is to formulate the vector quantization problem as a 0-1 integer linear program. They first solve its linear-programming relaxation and then transform the relaxed solution into a provably good solution for the vector quantization problem. These methods lead to the first known polynomial-time full-search vector quantization codebook-design algorithm and tree-pruning algorithm with provable worst-case performance guarantees. They also introduce the notion of pseudorandom pruned tree-structured vector quantizers. Initial experimental results on image compression are very encouraging.
"Vector quantizer design by constrained global optimization" by Xiaolin Wu. DOI: 10.1109/DCC.1992.227468

Central to vector quantization is the design of an optimal codebook. Constructing a globally optimal codebook has been shown to be NP-complete. However, if the partition halfplanes are restricted to be orthogonal to the principal direction of the training vectors, then the globally optimal K-partition of a set of N D-dimensional data points can be computed in O((N + KM^2)D) time by dynamic programming, where M is the intensity resolution. This constrained optimization strategy improves the performance of the vector quantizer over the classic LBG algorithm and over the popular tree-structured recursive greedy bipartition of the training data set.
"Vector run-length coding of Bi-level images" by Y. Wang and J. Wu. DOI: 10.1109/DCC.1992.227452

Run-length coding (RC) is a simple and yet quite effective technique for bi-level image coding. A problem with conventional RC, which describes an image by alternating runs of white and black pixels, is that it exploits only the redundancy within a single scan line. The modified relative address run-length coding (MRC) used in Group III facsimile transmission is more efficient because it makes use of the correlation between adjacent lines. The paper presents a vector run-length coding (VRC) technique that exploits spatial redundancy more thoroughly by representing images with block patterns and vector run-lengths. Depending on the coding method for the block patterns, several algorithms have been developed, including single run-length VRC (SVRC), double run-length VRC (DVRC), and block VRC (BVRC). Conventional RC is the special case of BVRC with a block size of 1x1. The proposed methods have been applied to the CCITT standard test documents, and the best results were obtained with BVRC. With a block size of 4x4, it yields compression gains 15.5% and 22.7% higher than MRC with k=4 when using a single run-length codebook and multiple run-length codebooks, respectively.
"Subband vector quantization of images using hexagonal filter banks" by O. Haddadin, V. J. Mathews, and T. Stockham. DOI: 10.1109/DCC.1992.227481

Results of psychophysical experiments on human vision conducted over the last three decades indicate that the eye performs a multichannel decomposition of incident images. The paper presents a subband vector quantization algorithm that employs hexagonal filter banks. The hexagonal filter bank provides an image decomposition similar to the one the eye is believed to perform. Consequently, the image coder is able to exploit properties of the human visual system and produce compressed images of high quality at low bit rates. A systematic approach is presented for optimally allocating the available bits among the subbands and for selecting the vector size in each subband.
"Variable precision representation for efficient VQ codebook storage" by Raffi Dionysian and M. Ercegovac. DOI: 10.1109/DCC.1992.227449

In vector quantization (VQ) with fast-search techniques, the available storage limits the number of codevectors that can be used. Variable precision representation (VPR) is a simple codebook compression scheme. For each vector y, VPR stores e(y), the number of leading bits that are zero in all elements, and avoids storing those leading bits. When the differences between codevectors in a binary tree-structured VQ codebook are stored, VPR saves from 24% to 44% of the storage. Storing codevector differences removes the redundancy between similar codevectors. Also, as the mean squared error of the VQ encoder is lowered, the differences become smaller on average and yield better compression. To process vectors in VPR format, the operator uses a bit-serial, element-parallel scheme to evaluate the inner product. The operator's throughput can be increased by replicating its core.