Mean shape-gain vector quantization (MSGVQ) is extended to include negative gains and square isometries. Square isometries together with a classification technique based on average block intensities enable us to enlarge the MSGVQ codebook size without any additional storage requirements while keeping the complexity of both the codebook generation and the encoding manageable. Variable rate codes are obtained with a quadtree segmentation based on a rate-distortion criterion. Experimental results show that our scheme performs favorably when compared to previous product code techniques or quadtree based VQ methods.
{"title":"Quadtree based variable rate oriented mean shape-gain vector quantization","authors":"R. Hamzaoui, Bertram Ganz, D. Saupe","doi":"10.1109/DCC.1997.582056","DOIUrl":"https://doi.org/10.1109/DCC.1997.582056","url":null,"abstract":"Mean shape-gain vector quantization (MSGVQ) is extended to include negative gains and square isometries. Square isometries together with a classification technique based on average block intensities enable us to enlarge the MSGVQ codebook size without any additional storage requirements while keeping the complexity of both the codebook generation and the encoding manageable. Variable rate codes are obtained with a quadtree segmentation based on a rate-distortion criterion. Experimental results show that our scheme performs favorably when compared to previous product code techniques or quadtree based VQ methods.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131394408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. The article reports a pipelined architecture that can support on-line compression and decompression of image data. Spatial and spectral redundancy in an image data file are detected and removed with a simple and elegant scheme that can be easily implemented in pipelined hardware. The scheme lets the user trade off image quality against compression ratio. The basic theory of byte error-correcting codes (ECC) is employed to compress a pixel row with reference to its adjacent row, and a simple scheme is developed to encode the pixel rows of both monochrome and colour images. The compression ratio and quality obtained with this technique have been compared with JPEG; the results show a comparable compression ratio with acceptable quality. The scheme is hardware-based for both colour and monochrome image compression and can match a high-speed communication link, thereby supporting on-line applications.
{"title":"A pipelined architecture algorithm for image compression","authors":"S. Bhattacharjee, S. Das, Y. Chowdhury, P. P. Chaudhuri","doi":"10.1109/DCC.1997.582080","DOIUrl":"https://doi.org/10.1109/DCC.1997.582080","url":null,"abstract":"Summary form only given. The article reports a pipelined architecture that can support on-line compression/decompression of image data. Spatial and spectral redundancy of an image data file are detected and removed with a simple and elegant scheme that can be easily implemented on a pipelined hardware. The scheme provides the user with the facility of trading off the image quality with the compression ratio. The basic theory of byte error correcting code (ECC) is employed to compress a pixel row with reference to its adjacent row. A simple scheme is developed to encode pixel rows of an image, both monochrome and colour. The compression ratio and quality obtained by this new technique has been compared with JPEG which shows comparable compression ratio with acceptable quality. The scheme is hardware based for both color and monochrome image compression, that can match a high speed communication link, thereby supporting on-line applications.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133440958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The choice of expressions for the coding probabilities in general, and the escape probability in particular, is of great importance in the family of prediction by partial matching (PPM) algorithms. We present a parameterized version of the escape probability estimator which, together with a "compactness" criterion, provides guidelines for the estimator design given a "representative" set of files. This parameterization also makes it possible to adapt the expression of the escape probability during one-pass coding. Finally, we present results for one such compression scheme, illustrating the usefulness of our approach.
{"title":"Towards understanding and improving escape probabilities in PPM","authors":"J. Åberg, Y. Shtarkov, B. Smeets","doi":"10.1109/DCC.1997.581954","DOIUrl":"https://doi.org/10.1109/DCC.1997.581954","url":null,"abstract":"The choice of expressions for the coding probabilities in general, and the escape probability in particular, is of great importance in the family of prediction by partial matching (PPM) algorithms. We present a parameterized version of the escape probability estimator which, together with a \"compactness\" criterion, provides guidelines for the estimator design given a \"representative\" set of files. This parameterization also makes it possible to adapt the expression of the escape probability during one-pass coding. Finally, we present results for one such compression scheme that illustrates the usefulness of our approach.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133843994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. Prediction of pixel values is often used in image compression: the residual image, the difference between the image and its prediction, can usually be coded with fewer bits than the original image. In linear prediction the value of each pixel is estimated from the values of surrounding pixels using a predictor P. Noncausal prediction may use all pixels surrounding the pixel to be predicted, whereas causal prediction uses only "earlier" pixels. Noncausal prediction usually predicts better than causal prediction because all surrounding pixels are considered, but reconstructing the image from the residual is then more difficult than in the causal case. This paper explores two reconstruction methods for noncausal prediction: iterative reconstruction and direct reconstruction. As an example, the effect of quantizing the residual on the reconstructed image is considered; the results show improved image quality with the noncausal predictor.
{"title":"Noncausal image prediction and reconstruction","authors":"J. Marchand, H. Rhody","doi":"10.1109/DCC.1997.582114","DOIUrl":"https://doi.org/10.1109/DCC.1997.582114","url":null,"abstract":"Summary form only given. Prediction of the value of the pixels in an image is often used in image compression. The residual image, the difference between the image and its predicted value, can usually be coded with fewer bits than the original image. In linear prediction the value of each pixel of an image is estimated from the value of surrounding pixels using a predictor P. In noncausal prediction pixels surrounding the pixel to be predicted are used. In causal prediction only \"earlier\" pixels are used. Usually noncausal prediction offers better prediction than causal prediction because all pixels surrounding the pixel to be predicted are considered. The reconstruction of the image from the residual after noncausal prediction is more difficult than when causal prediction is used. This paper explores two methods of reconstruction for noncausal prediction: iterative reconstruction and direct reconstruction. As an example, the effect of quantization of the residual on the reconstructed image is considered. It shows an improved image quality using the noncausal predictor.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124331367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper describes selective resolution (SR), an image compression method that makes efficient use of the available bandwidth while selectively preserving detail. SR simply applies perceptually lossless compression to the central part of the image while compressing the periphery more heavily. The central part thus provides higher-quality imagery for detail, while the periphery efficiently cues viewers to interesting sites. SR is especially valuable in video at reduced frame rates, where successive frames have much less of the correlation needed for effective interframe algorithms; in fact SR, which takes advantage of human viewing habits, may be viewed as an alternative to interframe compression. We have implemented SR with the motion-compensated VQ algorithm.
{"title":"Selective resolution for surveillance video compression","authors":"I. Schiller, Chun-Ksiung Chuang, S.M. King, J. Storer","doi":"10.1109/DCC.1997.582136","DOIUrl":"https://doi.org/10.1109/DCC.1997.582136","url":null,"abstract":"This paper describes selective resolution (SR), an image compression method which allows the efficient use of available bandwidth with selective preservation of details. SR simply applies perceptually lossless compression to the central part of the image while compressing the peripheral of the image with higher compression. The central part of the image allows higher quality imagery for details while the peripheral efficiently cues the viewers on interesting sites. SR is especially valuable in video with reduced frame rate because successive images would have much less correlation needed for effective interframe algorithms. In fact, SR, which takes advantage of human vision habits, may be viewed as an alternative to interframe compression. We have implemented SR with the motion compensated VQ algorithm.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122781451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We attack the problem of robust and efficient image compression for transmission over noisy channels. To achieve the dual goals of high compression efficiency and low sensitivity to channel noise we introduce a multimode coding framework. Multimode coders are quasi-fixed-length in nature, and allow optimization of the tradeoff between the compression capability of variable-length coding and the robustness to channel errors of fixed-length coding. We apply our framework to develop multimode image coding (MIC) schemes for noisy channels, based on the adaptive DCT. The robustness of the proposed MIC is further enhanced by the incorporation of a channel protection scheme suited to the constraints on complexity and delay. To demonstrate the power of the technique we develop two specific image coding algorithms optimized for the binary symmetric channel. The first, MIC1, incorporates channel-optimized quantizers and the second, MIC2, uses rate-compatible punctured convolutional codes within the multimode framework. Simulations demonstrate that the multimode coders obtain significant performance gains of up to 6 dB over conventional fixed-length coding techniques.
{"title":"Multimode image coding for noisy channels","authors":"S. Regunathan, K. Rose, S. Gadkari","doi":"10.1109/DCC.1997.581974","DOIUrl":"https://doi.org/10.1109/DCC.1997.581974","url":null,"abstract":"We attack the problem of robust and efficient image compression for transmission over noisy channels. To achieve the dual goals of high compression efficiency and low sensitivity to channel noise we introduce a multimode coding framework. Multimode coders are quasi-fixed length in nature, and allow optimization of the tradeoff between the compression capability of variable-length coding and the robustness to channel errors of fixed length coding. We apply our framework to develop multimode image coding (MIC) schemes for noisy channels, based on the adaptive DCT. The robustness of the proposed MIC is further enhanced by the incorporation of a channel protection scheme suitable for the constraints on complexity and delay. To demonstrate the power of the technique we develop two specific image coding algorithms optimized for the binary symmetric channel. The first, MIC1, incorporates channel optimized quantizers and the second, MIC2, uses rate compatible punctured convolutional codes within the multimode framework. Simulations demonstrate that the multimode coders obtain significant performance gains of up to 6 dB over conventional fixed length coding techniques.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124984348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The results of an experimental study of several modifications of the context tree weighting algorithm are described. In particular, the combination of this algorithm with the well-known PPM approach is studied. For one of the modifications considered, the average coding rate over the Calgary Corpus decreases by 0.091 bits compared with PPMD.
{"title":"Text compression by context tree weighting","authors":"J. Åberg, Y. Shtarkov","doi":"10.1109/DCC.1997.582062","DOIUrl":"https://doi.org/10.1109/DCC.1997.582062","url":null,"abstract":"The results of an experimental study of different modifications of the context tree weighting algorithm are described. In particular, the combination of this algorithm with the well-known PPM approach is studied. For one of the considered modifications the decrease of the average (for the Calgary Corpus) coding rate is 0.091 bits compared with PPMD.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128516745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an analytical framework for describing the distortion in an image communication system that includes wavelet transformation, uniform scalar quantization, run length coding, entropy coding, forward error control, and transmission over a binary symmetric channel. Simulations performed using ideal source models as well as real image subbands confirm the accuracy of the distortion description. The resulting equations can be used to choose channel code rates in an unequal error protection scheme in which subbands are protected according to their importance.
{"title":"An analytical treatment of channel-induced distortion in run length coded subbands","authors":"J. Garcia-Frías, J. Villasenor","doi":"10.1109/DCC.1997.581965","DOIUrl":"https://doi.org/10.1109/DCC.1997.581965","url":null,"abstract":"We present an analytical framework for describing the distortion in an image communication system that includes wavelet transformation, uniform scalar quantization, run length coding, entropy coding, forward error control, and transmission over a binary symmetric channel. Simulations performed using ideal source models as well as real image subbands confirm the accuracy of the distortion description. The resulting equations can be used to choose channel code rates in an unequal error protection scheme in which subbands are protected according to their importance.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121784896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The ubiquity of networking and computational capacity associated with the new communications media unveils a universe of new requirements for image representation. Among such requirements is the ability of the representation used for coding to support higher-level tasks such as content-based retrieval. We explore the relationships between probabilistic modeling and data compression to introduce a new representation, library-based coding, which, by enabling retrieval in the compressed domain, satisfies this requirement. Because it contains an embedded probabilistic description of the source, this new representation allows the construction of good inference models without compromising compression efficiency, leads to very efficient procedures for query and retrieval, and provides a framework for higher-level tasks such as the analysis and classification of video shots.
{"title":"Library-based coding: a representation for efficient video compression and retrieval","authors":"N. Vasconcelos, A. Lippman","doi":"10.1109/DCC.1997.581989","DOIUrl":"https://doi.org/10.1109/DCC.1997.581989","url":null,"abstract":"The ubiquity of networking and computational capacity associated with the new communications media unveil a universe of new requirements for image representation. Among such requirements is the ability of the representation used for coding to support higher-level tasks such as content-based retrieval. We explore the relationships between probabilistic modeling and data compression to introduce a representation-library-based coding-which, by enabling retrieval in the compressed domain, satisfies this requirement. Because it contains an embedded probabilistic description of the source, this new representation allows the construction of good inference models without compromise of compression efficiency, leads to very efficient procedures for query and retrieval, and provides a framework for higher level tasks such as the analysis and classification of video shots.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130678188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Source Specific Model for Global Earth Data (SSM-GED) is a lossless compression method for large images that captures global redundancy in the data and achieves a significant improvement over CALIC and DCXT-BT/CARP, two leading lossless compression schemes. The Global Land 1-km Advanced Very High Resolution Radiometer (AVHRR) data, which contains 662 Megabytes (MB) per band, is an example of a large data set that requires decompression of regions of the data. For this reason, SSM-GED compresses the AVHRR data as a collection of subwindows. This approach defines the statistical parameters for the model prior to compression. Unlike universal models that assume no a priori knowledge of the data, SSM-GED captures global redundancy that exists among all of the subwindows of data. The overlap in parameters among subwindows of data enables SSM-GED to improve the compression rate by increasing the number of parameters and maintaining a small model cost for each subwindow of data.
{"title":"Capturing global redundancy to improve compression of large images","authors":"B. L. Kess, S. Reichenbach","doi":"10.1109/DCC.1997.581967","DOIUrl":"https://doi.org/10.1109/DCC.1997.581967","url":null,"abstract":"A Source Specific Model for Global Earth Data (SSM-GED) is a lossless compression method for large images that captures global redundancy in the data and achieves a significant improvement over CALIC and DCXT-BT/CARP, two leading lossless compression schemes. The Global Land 1-km Advanced Very High Resolution Radiometer (AVHRR) data, which contains 662 Megabytes (MB) per band, is an example of a large data set that requires decompression of regions of the data. For this reason, SSM-GED compresses the AVHRR data as a collection of subwindows. This approach defines the statistical parameters for the model prior to compression. Unlike universal models that assume no a priori knowledge of the data, SSM-GED captures global redundancy that exists among all of the subwindows of data. The overlap in parameters among subwindows of data enables SSM-GED to improve the compression rate by increasing the number of parameters and maintaining a small model cost for each subwindow of data.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"162 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129575340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}