Summary form only given. Linear predictive schemes are among the simplest techniques in lossless image compression, and in spite of their simplicity they have proven surprisingly efficient; the current JPEG image coding standard uses linear predictive coders in its lossless mode. Predictive coding was originally used in lossy compression techniques such as differential pulse code modulation (DPCM). In these techniques the prediction error is quantized, and the quantized value is transmitted to the receiver. To reduce the quantization error it was necessary to reduce the prediction error variance, so techniques for generating "optimum" predictor coefficients generally attempt to minimize some measure of the prediction error variance. In lossless compression, however, the objective is to minimize the entropy of the prediction error; techniques geared to minimizing the variance of the prediction error may therefore not be best suited for obtaining the predictor coefficients. We have attempted to obtain the predictor coefficients for lossless image compression by minimizing the first-order entropy of the prediction error, using simulated annealing to perform the minimization. One way to improve the performance of linear predictive techniques is to first remap the pixel values so that the histogram of the remapped image contains no "holes".
{"title":"Lossless compression by simulated annealing","authors":"R. Bowen-Wright, K. Sayood","doi":"10.1109/DCC.1995.515562","DOIUrl":"https://doi.org/10.1109/DCC.1995.515562","url":null,"abstract":"Summary form only given. Linear predictive schemes are some of the simplest techniques in lossless image compression. In spite of their simplicity they have proven to be surprisingly efficient. The current JPEG image coding standard uses linear predictive coders in its lossless mode. Predictive coding was originally used in lossy compression techniques such as differential pulse code modulation (DPCM). In these techniques the prediction error is quantized, and the quantized value transmitted to the receiver. In order to reduce the quantization error it was necessary to reduce the prediction error variance. Therefore techniques for generating \"optimum\" predictor coefficients generally attempt to minimize some measure of the prediction error variance. In lossless compression the objective is to minimize the entropy of the prediction error, therefore techniques geared to minimizing the variance of the prediction error may not be best suited for obtaining the predictor coefficients. We have attempted to obtain the predictor coefficient for lossless image compression by minimizing the first order entropy of the prediction error. We have used simulated annealing to perform the minimization. One way to improve the performance of linear predictive techniques is to first remap the pixel values such that a histogram of the remapped image contains no \"holes\" in it.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124411873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper discusses the development of a perceptual threshold model for the human visual system. The perceptual threshold functions describe the levels of distortion at each location in an image that human observers cannot detect. Models of perceptual threshold functions are useful in image compression because an image compression system that constrains the distortion in the coded images below the levels suggested by the perceptual threshold function performs perceptually lossless compression. Our model decomposes an input image into its Fourier components and spatially localized Gabor elementary functions. Data from psychophysical masking experiments are then used to calculate the perceptual detection threshold for each Gabor transform coefficient in the presence of sinusoidal masks. The result of one experiment, in which an image was distorted with additive noise of magnitudes suggested by the threshold model, is also included in this paper.
{"title":"A new model of perceptual threshold functions for application in image compression systems","authors":"K. S. Prashant, V. J. Mathews, Peter J. Hahn","doi":"10.1109/DCC.1995.515527","DOIUrl":"https://doi.org/10.1109/DCC.1995.515527","url":null,"abstract":"This paper discusses the development of a perceptual threshold model for the human visual system. The perceptual threshold functions describe the levels of distortions present at each location in an image that human observers can not detect. Models of perceptual threshold functions are useful in image compression problems because an image compression system that constrains the distortion in the coded images below the levels suggested by the perceptual threshold function performs perceptually lossless compression. Our model involves the decomposition of an input image into its Fourrier components and spatially localized Gabor elementary functions. Data from psychophysical masking experiments are then used to calculate the perceptual detection threshold for each Gabor transform coefficient in the presence of sinusoidal masks. The result of one experiment involving distorting an image using additive noise of magnitudes as suggested by the threshold model is also included in this paper.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131105892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A method is developed for decreasing the computational complexity of a trellis quantizer (TQ) encoder. We begin by developing a rate-distortion theory under a constraint on the average instantaneous number of quanta considered. This constraint has practical importance: in a TQ, the average instantaneous number of quanta is exactly the average number of multiplies required at the encoder. The theory shows that if the conditional probability of each quantum is restricted to a finite region of support, the instantaneous number of quanta considered can be made quite small at little or no cost in SQNR performance. Simulations of TQs confirm this prediction. This reduction in complexity makes practical the use of model-based TQs (MTQs), which had previously been considered computationally unreasonable. For speech, performance gains of several dB SQNR over adaptive predictive schemes of similar computational complexity are obtained using only a first-order MTQ.
{"title":"Constraining the size of the instantaneous alphabet in trellis quantizers","authors":"M. F. Larsen, R. L. Frost","doi":"10.1109/DCC.1995.515492","DOIUrl":"https://doi.org/10.1109/DCC.1995.515492","url":null,"abstract":"A method is developed for decreasing the computational complexity of a trellis quantizer (TQ) encoder. We begin by developing a rate-distortion theory under a constraint on the average instantaneous number of quanta considered. This constraint has practical importance: in a TQ, the average instantaneous number of quanta is exactly the average number of multiplies required at the encoder. The theory shows that if the conditional probability of each quanta is restricted to a finite region of support, the instantaneous number of quanta considered can be made quite small at little or no cost in SQNR performance. Simulations of TQs confirm this prediction. This reduction in complexity makes practical the use of model-based TQs (MTQs), which had previously been considered computationally unreasonable. For speech, performance gains of several dB SQNR over adaptive predictive schemes at a similar computational complexity are obtained using only a first-order MTQ.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114499177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A system is presented for compression of hyperspectral imagery that utilizes trellis coded quantization (TCQ). Specifically, DPCM is used to spectrally decorrelate the hyperspectral data, while a 2-D discrete cosine transform (DCT) coding scheme is used for spatial decorrelation. Entropy-constrained codebooks are designed using a modified version of the generalized Lloyd algorithm. The coder achieves compression ratios greater than 70:1 with an average PSNR of the coded hyperspectral sequence exceeding 40.0 dB.
{"title":"Compression of hyperspectral imagery using hybrid DPCM/DCT and entropy-constrained trellis coded quantization","authors":"G. Abousleman","doi":"10.1109/DCC.1995.515522","DOIUrl":"https://doi.org/10.1109/DCC.1995.515522","url":null,"abstract":"A system is presented for compression of hyperspectral imagery which utilizes trellis coded quantization (TCQ). Specifically, DPCM is used to spectrally decorrelate the hyperspectral data, while a 2-D discrete cosine transform (DCT) coding scheme is used for spatial decorrelation. Entropy-constrained codebooks are designed using a modified version of the generalized Lloyd algorithm. This coder achieves compression ratios of greater than 70:1 with average PSNR of the coded hyperspectral sequence exceeding 40.0 dB.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126871992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The prediction by partial matching (PPM) data compression scheme has set the performance standard in lossless compression of text throughout the past decade. The original algorithm was first published in 1984 by Cleary and Witten, and a series of improvements was described by Moffat (1990), culminating in a careful implementation, called PPMC, which has become the benchmark version. This still achieves results superior to virtually all other compression methods, despite many attempts to better it. PPM is a finite-context statistical modeling technique that can be viewed as blending together several fixed-order context models to predict the next character in the input sequence. Prediction probabilities for each context in the model are calculated from frequency counts that are updated adaptively, and the symbol that actually occurs is encoded relative to its predicted distribution using arithmetic coding. The paper describes a new algorithm, PPM*, which exploits contexts of unbounded length. It reliably achieves compression superior to PPMC, although our current implementation uses considerably greater computational resources (both time and space). The basic PPM compression scheme is described, showing the use of contexts of unbounded length and how it can be implemented using a tree data structure. Some results are given that demonstrate an improvement of about 6% over the old method.
{"title":"Unbounded length contexts for PPM","authors":"J. Cleary, W. Teahan","doi":"10.1109/DCC.1995.515495","DOIUrl":"https://doi.org/10.1109/DCC.1995.515495","url":null,"abstract":"The prediction by partial matching (PPM) data compression scheme has set the performance standard in lossless compression of text throughout the past decade. The original algorithm was first published in 1984 by Cleary and Witten, and a series of improvements was described by Moffat (1990), culminating in a careful implementation, called PPMC, which has become the benchmark version. This still achieves results superior to virtually all other compression methods, despite many attempts to better it. PPM, is a finite-context statistical modeling technique that can be viewed as blending together several fixed-order context models to predict the next character in the input sequence. Prediction probabilities for each context in the model are calculated from frequency counts which are updated adaptively; and the symbol that actually occurs is encoded relative to its predicted distribution using arithmetic coding. The paper describes a new algorithm, PPM*, which exploits contexts of unbounded length. It reliably achieves compression superior to PPMC, although our current implementation uses considerably greater computational resources (both time and space). The basic PPM compression scheme is described, showing the use of contexts of unbounded length, and how it can be implemented using a tree data structure. Some results are given that demonstrate an improvement of about 6% over the old method.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129185366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. The paper introduces a new approach to the design of stable, tile-effect-free multiresolutional image compression schemes. It focuses on how quantization errors in the decomposition coefficients affect the quality of the decompressed picture, how the errors propagate in a multiresolutional decomposition, and how to design a compression scheme in which the effect of quantization errors is minimized (visually and quantitatively). It also introduces and analyzes the simplest family of Laplacian pyramids (using 3-point causal filters) which yield multiresolutional piecewise-linear image decompositions. This gives reconstructed images a much better visual appearance, without blockiness, as the examples show. The error propagation analysis has led to the discovery of particular Laplacian pyramids in which quantization errors do not amplify as they propagate, but quickly decay.
{"title":"Multiresolutional piecewise-linear image decompositions: quantization error propagation and design of \"stable\" compression schemes","authors":"O. Kiselyov, P. Fisher","doi":"10.1109/DCC.1995.515580","DOIUrl":"https://doi.org/10.1109/DCC.1995.515580","url":null,"abstract":"Summary form only given. The paper introduces a new approach to design of stable tile-effect-free multiresolutional image compression schemes. It focuses on how quantization errors in the decomposition coefficients affect the quality of the decompressed picture, how the errors propagate in a multiresolutional decomposition, and how to design a compression scheme where the effect of quantization errors is minimized (visually and quantitatively). It also introduces and analyzes the simplest family of Laplacian pyramids (using 3-point causal filters) which yield multiresolutional piecewise-linear image decompositions. This gives reconstructed images much better visual appearance without blockiness, as the examples. The error propagation analysis has lead to discovery of particular Laplacian pyramids where quantizations errors do not amplify as they propagate, but quickly decay.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129391195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. The use of image segmentation methods to perform second-generation image coding has received considerable research attention because homogeneous partial image data can be efficiently coded on a separate basis. Regarding color image coding, conventional segmentation techniques are especially useful when applied to a uniform color space; e.g., Miyahara et al. (see IEICE Trans. on D-II, vol.J76-D-II, no.5, p.1023-1037, 1993) developed an image segmentation method for still image coding which performs clustering in a uniform color space and implements segment integration techniques. One drawback of such a methodology, however, is that the shape of the distribution of color data is treated as a "black box". On the other hand, the distribution of data for an object in a scene can be described by the "dichromatic surface model", in which the light reflected from a point on an inhomogeneous dielectric material is described by a linear combination of two components: (1) the light reflected off the material surface, and (2) the light reflected off the inside of the material body. Based on this model, we propose a heuristic model that describes the distribution shape using one or more ellipses corresponding to an object body in uniform color space, where the start and end points of each ellipse both lie on the luminance axis. To test the method's performance, we carried out a computer simulation.
{"title":"An image segmentation method based on a color space distribution model","authors":"M. Aizu, O. Nakagawa, M. Takagi","doi":"10.1109/DCC.1995.515549","DOIUrl":"https://doi.org/10.1109/DCC.1995.515549","url":null,"abstract":"Summary form only given. The use of image segmentation methods to perform second generation image coding has received considerable research attention because homogenized partial image data can be efficiently coded on a separate basis. Regarding color image coding, conventional segmentation techniques are especially useful when applied to a uniform color space, e.g., Miyahara et al. ( see IEICE Trans. on D-II, vol.J76-D-II, no.5, p.1023-1037, 1993) developed an image segmentation method for still image coding which performs clustering in a uniform color space and implement segment integration techniques. One drawback of such methodology, however, is that the shape of the distribution of color data is considered as a \"black box\". On the other hand, the distribution of data for an object in a scene can be described by the \"dichromatic surface model\", where the light, which is reflected from a point on a dielectric nonuniform material, is described by a linear combination of two components, i.e., (1) the light reflected off the material surface, and (2) the light reflected off the inside of the material body. Based on this model, we propose a heuristic model for describing the distribution shape using one or more ellipses corresponding to an object body in uniform color space, where the start and end points of each ellipse are both on the luminance axis. To test the method's performance, we carried out a computer simulation.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129441934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. This paper presents two approaches to improving the LZFG data compression algorithm. One is to introduce a self-adaptive word-based scheme, which yields a significant improvement for English text compression. The other is to apply a simple move-to-front scheme to further reduce the redundancy within the statistics of copy nodes. Experiments show that an overall improvement is achieved by both approaches. The self-adaptive word-based scheme takes each run of consecutive English letters as one word; any other ASCII character is taken as a single word. As an example, the input message '(2+x) is represented by y' is classified into 9 words. To run the word-based scheme on the PATRICIA tree, the data structure is modified accordingly.
{"title":"Improving LZFG data compression algorithm","authors":"Jianmin Jiang","doi":"10.1109/DCC.1995.515585","DOIUrl":"https://doi.org/10.1109/DCC.1995.515585","url":null,"abstract":"Summary form only given. This paper presents two approaches to improve the LZFG data compression algorithm. One approach is to introduce a self-adaptive word based scheme to achieve significant improvement for English text compression. The other is to apply a simple move-to-front scheme to further reduce the redundancy within the statistics of copy nodes. The experiments show that an overall improvement is achieved from both approaches. The self-adaptive word-based scheme takes all the consecutive English characters as one word. Any other character in the ASCII codes will be taken as one single word. As an example, the input message '(2+x) is represented by y' can be classified into 9 words. To run the word-based scheme in PATRICIA tree, the data structure is modified.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130046911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present the first known one-dimensional and two-dimensional string matching algorithms for text with bounded entropy. Let n be the length of the text and m be the length of the pattern. We show that the expected complexity of the algorithms is related to the entropy of the text under various assumptions on the distribution of the pattern. For the case of uniformly distributed patterns, our one-dimensional matching algorithm runs in O(n log m/(pm)) expected time, where H is the entropy of the text and p = 1 - (1 - H^2)^{H/(1+H)}. The worst-case running time T can also be bounded by n log m/(p(m + √V)) ≤ T ≤ n log m/(p(m - √V)), where V is the variance of the source from which the pattern is generated. Our algorithm utilizes data structures and probabilistic analysis techniques that are found in certain lossless data compression schemes.
{"title":"Fast pattern matching for entropy bounded text","authors":"Shenfeng Chen, J. Reif","doi":"10.1109/DCC.1995.515518","DOIUrl":"https://doi.org/10.1109/DCC.1995.515518","url":null,"abstract":"We present the first known case of one-dimensional and two-dimensional string matching algorithms for text with bounded entropy. Let n be the length of the text and m be the length of the pattern. We show that the expected complexity of the algorithms is related to the entropy of the text for various assumptions of the distribution of the pattern. For the case of uniformly distributed patterns, our one dimensional matching algorithm works in O(nlogm/(pm)) expected running time where H is the entropy of the text and p=1-(1-H/sup 2/)/sup H/(1+H)/. The worst case running time T can also be bounded by (n log m/p(m+/spl radic/V))/spl les/T/spl les/(n log m/p(m-/spl radic/V)) if V is the variance of the source from which the pattern is generated. Our algorithm utilizes data structures and probabilistic analysis techniques that are found in certain lossless data compression schemes.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"635 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132656000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. Block based transform coding (BBTC) is among the most popular coding methods for video compression because of its simple hardware implementation. At low bit rates, however, this approach cannot maintain acceptable resolution and image quality. Region based coding methods, on the other hand, have been shown to improve visual quality by taking human perception into account. To take advantage of both coding methods, a novel technique is introduced that combines BBTC and region based coding. Using this technique, a new class of video coding methods is generated, termed region based transform coding (RBTC). In the generalized RBTC, regions containing motion are represented as texture surrounded by contours, and the contours and textures are coded separately. The novel technique is that the pixel values of each region are scanned to form a vector, which is then converted into a number of fixed-size image blocks. Conventional transform coding can therefore be applied to the blocks of texture directly. Contours can be coded using traditional contour coding methods or any other bit-plane encoding method. To demonstrate this new class of video coding methods, a scheme called segmented motion transform coding (SMTC) is simulated. In SMTC, chain codes are used for contour coding. The simulations are performed on the first 60 frames of the CIF-format "Miss America" and "Salesman" video sequences.
{"title":"Generalized region based transform coding for video compression","authors":"K. Sum, R. Murch","doi":"10.1109/DCC.1995.515588","DOIUrl":"https://doi.org/10.1109/DCC.1995.515588","url":null,"abstract":"Summary form only given. Block based transform coding (BBTC) is among the most popular coding method for video compression due to its simplicity of hardware implementation. At low bit rate transmission however this approach cannot maintain acceptable resolution and image quality. On the other hand, region based coding methods have been shown to have the capability to improve the visual quality by the acknowledgment of human perception. In order to take the advantages from both of the coding methods, a novel technique is introduced to combine BBTC and region based coding. Using this technique, a new class of video coding methods are generated and termed region based transform coding (RBTC). In the generalized RBTC, we represent regions containing motion in terms of texture surrounded by contours. Contours and textures are then coded separately. The novel technique is that the pixel values of the regions are scanned to form a vector. Then the vector is further converted to a number of fixed size image blocks. Using this technique, conventional transform coding can be applied on the blocks of texture directly. Contour can be coded using traditional contour coding methods or any other bit plane encoding methods. To prove the idea of this new class of video coding methods, a scheme called segmented motion transform coding (SMTC) is simulated. In SMTC, chain codes are used for contour coding. The simulations are performed using the first 60 frames of both of the CIF formatted \"Miss America\" and \"Salesman\" video sequences.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125124798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}