The popular dynamic Markov compression algorithm (DMC) offers state-of-the-art compression performance and matchless conceptual simplicity. In practice, however, the cost of DMC's simplicity and performance is often outrageous memory consumption. Several known attempts at reducing DMC's unwieldy model growth have rendered DMC's compression performance uncompetitive. One reason why DMC's model growth problem has resisted solution is that the algorithm is poorly understood. DMC is the only published stochastic data model for which a characterization of its states, in terms of conditioning contexts, is unknown. Until now, all that was certain about DMC was that a finite-context characterization exists, which was proved using a finiteness argument. This paper presents and proves the first finite-context characterization of the states of DMC's data model. Our analysis reveals that the DMC model, with or without its counterproductive portions, offers abstract structural features not found in other models. Ironically, the space-hungry DMC algorithm actually has a greater capacity for economical model representation than its counterparts have. Once identified, DMC's distinguishing features combine easily with the best features of other techniques. These combinations have the potential to achieve very competitive compression/memory tradeoffs.
{"title":"The structure of DMC [dynamic Markov compression]","authors":"S. Bunton","doi":"10.1109/DCC.1995.515497","DOIUrl":"https://doi.org/10.1109/DCC.1995.515497","url":null,"abstract":"The popular dynamic Markov compression algorithm (DMC) offers state-of-the-art compression performance and matchless conceptual simplicity. In practice, however, the cost of DMC's simplicity and performance is often outrageous memory consumption. Several known attempts at reducing DMC's unwieldy model growth have rendered DMC's compression performance uncompetitive. One reason why DMC's model growth problem has resisted solution is that the algorithm is poorly understood. DMC is the only published stochastic data model for which a characterization of its states, in terms of conditioning contexts, is unknown. Up until now, all that was certain about DMC was that a finite-context characterization exists, which was proved in using a finiteness argument. This paper presents and proves the first finite-context characterization of the states of DMC's data model Our analysis reveals that the DMC model, with or without its counterproductive portions, offers abstract structural features not found in other models. Ironically, the space-hungry DMC algorithm actually has a greater capacity for economical model representation than its counterparts have. Once identified, DMC's distinguishing features combine easily with the best features from other techniques. 
These combinations have the potential for achieving very competitive compression/memory tradeoffs.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126003448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper discusses the development of a perceptual threshold model for the human visual system. The perceptual threshold functions describe the levels of distortion present at each location in an image that human observers cannot detect. Models of perceptual threshold functions are useful in image compression problems because an image compression system that constrains the distortion in the coded images below the levels suggested by the perceptual threshold function performs perceptually lossless compression. Our model involves the decomposition of an input image into its Fourier components and spatially localized Gabor elementary functions. Data from psychophysical masking experiments are then used to calculate the perceptual detection threshold for each Gabor transform coefficient in the presence of sinusoidal masks. Results from one experiment, in which an image was distorted with additive noise at the magnitudes suggested by the threshold model, are also included in this paper.
{"title":"A new model of perceptual threshold functions for application in image compression systems","authors":"K. S. Prashant, V. J. Mathews, Peter J. Hahn","doi":"10.1109/DCC.1995.515527","DOIUrl":"https://doi.org/10.1109/DCC.1995.515527","url":null,"abstract":"This paper discusses the development of a perceptual threshold model for the human visual system. The perceptual threshold functions describe the levels of distortions present at each location in an image that human observers can not detect. Models of perceptual threshold functions are useful in image compression problems because an image compression system that constrains the distortion in the coded images below the levels suggested by the perceptual threshold function performs perceptually lossless compression. Our model involves the decomposition of an input image into its Fourrier components and spatially localized Gabor elementary functions. Data from psychophysical masking experiments are then used to calculate the perceptual detection threshold for each Gabor transform coefficient in the presence of sinusoidal masks. The result of one experiment involving distorting an image using additive noise of magnitudes as suggested by the threshold model is also included in this paper.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131105892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given, as follows. The use of wavelets and multiresolution analysis is becoming increasingly popular for image compression. We examine several different approaches to the quantization of wavelet coefficients. A standard approach in subband coding is to use DPCM to encode the lowest band while the higher bands are quantized using a scalar quantizer for each band or a vector quantizer. We implement these schemes using a variety of quantizers, including PDF-optimized quantizers and recursively indexed scalar quantizers (RISQ). We then incorporate a threshold operation to prevent the removal of perceptually important information. We show that there are both subjective and objective improvements in performance when we use the RISQ and the perceptual thresholds. The objective performance measure shows a consistent two to three dB improvement over a wide range of rates. Finally, we use a recursively indexed vector quantizer (RIVQ) to encode the wavelet coefficients. The RIVQ can operate at relatively high rates and is therefore particularly suited for quantizing the coefficients in the lowest band.
{"title":"Quantization of wavelet coefficients for image compression","authors":"A. Mohammed, K. Sayood","doi":"10.1109/DCC.1995.515593","DOIUrl":"https://doi.org/10.1109/DCC.1995.515593","url":null,"abstract":"Summary form only given, as follows. The use of wavelets and multiresolution analysis is becoming increasingly popular for image compression. We examine several different approaches to the quantization of wavelet coefficients. A standard approach in subband coding is to use DPCM to encode the lowest band while the higher bands are quantized using a scalar quantizer for each band or a vector quantizer. We implement these schemes using a variety of quantizer including PDF optimized quantizers and recursively indexed scalar quantizers (RISQ). We then incorporate a threshold operation to prevent the removal of perceptually important information. We show that there is a both subjective and objective improvements in performance when we use the RISQ and the perceptual thresholds. The objective performance measure shows a consistent two to three dB improvement over a wide range of rates. Finally we use a recursively indexed vector quantizer (RIVQ) to encode the wavelet coefficients. The RIVQ can operate at relatively high rates and is therefore particularly suited for quantizing the coefficients in the lowest band.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128746363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The prediction by partial matching (PPM) data compression scheme has set the performance standard in lossless compression of text throughout the past decade. The original algorithm was first published in 1984 by Cleary and Witten, and a series of improvements was described by Moffat (1990), culminating in a careful implementation, called PPMC, which has become the benchmark version. This still achieves results superior to virtually all other compression methods, despite many attempts to better it. PPM is a finite-context statistical modeling technique that can be viewed as blending together several fixed-order context models to predict the next character in the input sequence. Prediction probabilities for each context in the model are calculated from frequency counts which are updated adaptively; and the symbol that actually occurs is encoded relative to its predicted distribution using arithmetic coding. The paper describes a new algorithm, PPM*, which exploits contexts of unbounded length. It reliably achieves compression superior to PPMC, although our current implementation uses considerably greater computational resources (both time and space). The basic PPM compression scheme is described, showing the use of contexts of unbounded length, and how it can be implemented using a tree data structure. Some results are given that demonstrate an improvement of about 6% over the old method.
{"title":"Unbounded length contexts for PPM","authors":"J. Cleary, W. Teahan","doi":"10.1109/DCC.1995.515495","DOIUrl":"https://doi.org/10.1109/DCC.1995.515495","url":null,"abstract":"The prediction by partial matching (PPM) data compression scheme has set the performance standard in lossless compression of text throughout the past decade. The original algorithm was first published in 1984 by Cleary and Witten, and a series of improvements was described by Moffat (1990), culminating in a careful implementation, called PPMC, which has become the benchmark version. This still achieves results superior to virtually all other compression methods, despite many attempts to better it. PPM, is a finite-context statistical modeling technique that can be viewed as blending together several fixed-order context models to predict the next character in the input sequence. Prediction probabilities for each context in the model are calculated from frequency counts which are updated adaptively; and the symbol that actually occurs is encoded relative to its predicted distribution using arithmetic coding. The paper describes a new algorithm, PPM*, which exploits contexts of unbounded length. It reliably achieves compression superior to PPMC, although our current implementation uses considerably greater computational resources (both time and space). The basic PPM compression scheme is described, showing the use of contexts of unbounded length, and how it can be implemented using a tree data structure. 
Some results are given that demonstrate an improvement of about 6% over the old method.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129185366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
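The "blending via escape" mechanism that PPM variants share can be sketched as follows. This toy model falls back from the longest stored context to shorter ones, paying an escape probability at each level; it is a simplified illustration (roughly escape method A, bounded order), not PPMC or PPM* itself, and omits exclusions:

```python
class SimplePPM:
    """Toy PPM-style context blending for illustration only."""

    def __init__(self, max_order=2, alphabet=256):
        self.max_order = max_order
        self.alphabet = alphabet
        self.counts = {}  # context string -> {symbol: count}

    def update(self, history, sym):
        # Record sym under every context suffix up to max_order.
        for k in range(min(self.max_order, len(history)) + 1):
            ctx = history[len(history) - k:]
            table = self.counts.setdefault(ctx, {})
            table[sym] = table.get(sym, 0) + 1

    def prob(self, history, sym):
        # Try the longest context first; multiply in an escape
        # probability each time we fall back to a shorter one.
        p_rest = 1.0
        for k in range(self.max_order, -1, -1):
            if k > len(history):
                continue
            table = self.counts.get(history[len(history) - k:])
            if not table:
                continue
            total = sum(table.values()) + 1   # +1 for the escape event
            if sym in table:
                return p_rest * table[sym] / total
            p_rest *= 1.0 / total             # escape to shorter context
        return p_rest / self.alphabet         # order -1: uniform model
```

In an actual compressor the probability returned here would drive an arithmetic coder; PPM* differs in allowing the initial context length to be unbounded rather than capped at `max_order`.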
We present the first known one-dimensional and two-dimensional string matching algorithms for text with bounded entropy. Let n be the length of the text and m be the length of the pattern. We show that the expected complexity of the algorithms is related to the entropy of the text for various assumptions about the distribution of the pattern. For the case of uniformly distributed patterns, our one-dimensional matching algorithm runs in O(n log m/(pm)) expected time, where H is the entropy of the text and p = 1 - (1 - H²)^(H/(1+H)). The worst-case running time T can also be bounded by n log m/(p(m + √V)) ≤ T ≤ n log m/(p(m - √V)), where V is the variance of the source from which the pattern is generated. Our algorithm utilizes data structures and probabilistic analysis techniques that are found in certain lossless data compression schemes.
{"title":"Fast pattern matching for entropy bounded text","authors":"Shenfeng Chen, J. Reif","doi":"10.1109/DCC.1995.515518","DOIUrl":"https://doi.org/10.1109/DCC.1995.515518","url":null,"abstract":"We present the first known case of one-dimensional and two-dimensional string matching algorithms for text with bounded entropy. Let n be the length of the text and m be the length of the pattern. We show that the expected complexity of the algorithms is related to the entropy of the text for various assumptions of the distribution of the pattern. For the case of uniformly distributed patterns, our one dimensional matching algorithm works in O(nlogm/(pm)) expected running time where H is the entropy of the text and p=1-(1-H/sup 2/)/sup H/(1+H)/. The worst case running time T can also be bounded by (n log m/p(m+/spl radic/V))/spl les/T/spl les/(n log m/p(m-/spl radic/V)) if V is the variance of the source from which the pattern is generated. Our algorithm utilizes data structures and probabilistic analysis techniques that are found in certain lossless data compression schemes.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"635 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132656000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A vector quantizer maps a multidimensional vector space into a finite subset of reproduction vectors called a codebook. For codebook optimization, the well-known LBG algorithm or a simulated-annealing technique is commonly used. Two alternative methods, the fuzzy c-means (FCM) algorithm and a genetic algorithm (GA), are proposed. In order to illustrate the algorithms' performance, a DCT-VQ has been chosen. The fixed partition scheme based on the mean energy per coefficient is shown for the test image "Lena".
{"title":"Alternative methods for codebook design in vector quantization","authors":"V. Delport","doi":"10.1109/DCC.1995.515595","DOIUrl":"https://doi.org/10.1109/DCC.1995.515595","url":null,"abstract":"A vector quantizer maps a multidimensional vector space into a finite subset of reproduction vectors called a codebook. For codebook optimization the well known LBG algorithm or a simulated annealing technique are commonly used. Two alternative methods the fuzzy-c-mean (FCM) and a genetic algorithm (GA) are proposed. In order to illustrate the algorithm performance a DCT-VQ has been chosen. The fixed partition scheme based on the mean energy per coefficient is shown for the test image \"Lena\".","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128673304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. The paper introduces a new approach to the design of stable, tile-effect-free multiresolutional image compression schemes. It focuses on how quantization errors in the decomposition coefficients affect the quality of the decompressed picture, how the errors propagate in a multiresolutional decomposition, and how to design a compression scheme in which the effect of quantization errors is minimized (visually and quantitatively). It also introduces and analyzes the simplest family of Laplacian pyramids (using 3-point causal filters) which yield multiresolutional piecewise-linear image decompositions. This gives reconstructed images a much better visual appearance, without blockiness, as the examples show. The error propagation analysis has led to the discovery of particular Laplacian pyramids in which quantization errors do not amplify as they propagate, but quickly decay.
{"title":"Multiresolutional piecewise-linear image decompositions: quantization error propagation and design of \"stable\" compression schemes","authors":"O. Kiselyov, P. Fisher","doi":"10.1109/DCC.1995.515580","DOIUrl":"https://doi.org/10.1109/DCC.1995.515580","url":null,"abstract":"Summary form only given. The paper introduces a new approach to design of stable tile-effect-free multiresolutional image compression schemes. It focuses on how quantization errors in the decomposition coefficients affect the quality of the decompressed picture, how the errors propagate in a multiresolutional decomposition, and how to design a compression scheme where the effect of quantization errors is minimized (visually and quantitatively). It also introduces and analyzes the simplest family of Laplacian pyramids (using 3-point causal filters) which yield multiresolutional piecewise-linear image decompositions. This gives reconstructed images much better visual appearance without blockiness, as the examples. The error propagation analysis has lead to discovery of particular Laplacian pyramids where quantizations errors do not amplify as they propagate, but quickly decay.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129391195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. The use of image segmentation methods to perform second-generation image coding has received considerable research attention because homogenized partial image data can be efficiently coded on a separate basis. Regarding color image coding, conventional segmentation techniques are especially useful when applied to a uniform color space; e.g., Miyahara et al. (see IEICE Trans. on D-II, vol.J76-D-II, no.5, p.1023-1037, 1993) developed an image segmentation method for still image coding which performs clustering in a uniform color space and implements segment-integration techniques. One drawback of such methodology, however, is that the shape of the distribution of color data is treated as a "black box". On the other hand, the distribution of data for an object in a scene can be described by the "dichromatic surface model", where the light reflected from a point on a dielectric nonuniform material is described by a linear combination of two components: (1) the light reflected off the material surface, and (2) the light reflected off the inside of the material body. Based on this model, we propose a heuristic model for describing the distribution shape using one or more ellipses corresponding to an object body in uniform color space, where the start and end points of each ellipse are both on the luminance axis. To test the method's performance, we carried out a computer simulation.
{"title":"An image segmentation method based on a color space distribution model","authors":"M. Aizu, O. Nakagawa, M. Takagi","doi":"10.1109/DCC.1995.515549","DOIUrl":"https://doi.org/10.1109/DCC.1995.515549","url":null,"abstract":"Summary form only given. The use of image segmentation methods to perform second generation image coding has received considerable research attention because homogenized partial image data can be efficiently coded on a separate basis. Regarding color image coding, conventional segmentation techniques are especially useful when applied to a uniform color space, e.g., Miyahara et al. ( see IEICE Trans. on D-II, vol.J76-D-II, no.5, p.1023-1037, 1993) developed an image segmentation method for still image coding which performs clustering in a uniform color space and implement segment integration techniques. One drawback of such methodology, however, is that the shape of the distribution of color data is considered as a \"black box\". On the other hand, the distribution of data for an object in a scene can be described by the \"dichromatic surface model\", where the light, which is reflected from a point on a dielectric nonuniform material, is described by a linear combination of two components, i.e., (1) the light reflected off the material surface, and (2) the light reflected off the inside of the material body. Based on this model, we propose a heuristic model for describing the distribution shape using one or more ellipses corresponding to an object body in uniform color space, where the start and end points of each ellipse are both on the luminance axis. 
To test the method's performance, we carried out a computer simulation.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129441934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
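The two-component structure of the dichromatic surface model can be written as a worked example: every pixel colour of one object lies in the plane spanned by a surface (highlight) colour vector and a body colour vector, weighted per pixel. The colour vectors and weights below are illustrative values, not measurements from the paper:

```python
def dichromatic(m_s, m_b, c_s=(1.0, 1.0, 1.0), c_b=(0.7, 0.2, 0.1)):
    """Pixel colour = m_s * c_s + m_b * c_b, where c_s is the surface
    (highlight) colour, c_b the body colour, and m_s, m_b are the
    per-pixel geometric weighting factors."""
    return tuple(m_s * s + m_b * b for s, b in zip(c_s, c_b))
```

Because all pixels of one object share `c_s` and `c_b`, their colours form a planar cluster in colour space; that is the distribution shape the paper approximates with ellipses anchored on the luminance axis.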
Summary form only given. This paper presents two approaches to improving the LZFG data compression algorithm. One approach is to introduce a self-adaptive word-based scheme to achieve significant improvement for English text compression. The other is to apply a simple move-to-front scheme to further reduce the redundancy within the statistics of copy nodes. The experiments show that an overall improvement is achieved from both approaches. The self-adaptive word-based scheme treats each run of consecutive English letters as one word; any other ASCII character is taken as a single word. As an example, the input message '(2+x) is represented by y' can be classified into 9 words. To run the word-based scheme on a PATRICIA tree, the data structure is modified.
{"title":"Improving LZFG data compression algorithm","authors":"Jianmin Jiang","doi":"10.1109/DCC.1995.515585","DOIUrl":"https://doi.org/10.1109/DCC.1995.515585","url":null,"abstract":"Summary form only given. This paper presents two approaches to improve the LZFG data compression algorithm. One approach is to introduce a self-adaptive word based scheme to achieve significant improvement for English text compression. The other is to apply a simple move-to-front scheme to further reduce the redundancy within the statistics of copy nodes. The experiments show that an overall improvement is achieved from both approaches. The self-adaptive word-based scheme takes all the consecutive English characters as one word. Any other character in the ASCII codes will be taken as one single word. As an example, the input message '(2+x) is represented by y' can be classified into 9 words. To run the word-based scheme in PATRICIA tree, the data structure is modified.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130046911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. Block-based transform coding (BBTC) is among the most popular coding methods for video compression due to its simplicity of hardware implementation. At low bit rates, however, this approach cannot maintain acceptable resolution and image quality. On the other hand, region-based coding methods have been shown to have the capability to improve the visual quality by acknowledging human perception. To take advantage of both coding methods, a novel technique is introduced to combine BBTC and region-based coding. Using this technique, a new class of video coding methods is generated, termed region-based transform coding (RBTC). In the generalized RBTC, we represent regions containing motion in terms of texture surrounded by contours. Contours and textures are then coded separately. The novel technique is that the pixel values of the regions are scanned to form a vector. The vector is then further converted into a number of fixed-size image blocks. Using this technique, conventional transform coding can be applied to the blocks of texture directly. Contours can be coded using traditional contour coding methods or any other bit-plane encoding methods. To prove the idea of this new class of video coding methods, a scheme called segmented motion transform coding (SMTC) is simulated. In SMTC, chain codes are used for contour coding. The simulations are performed using the first 60 frames of both of the CIF formatted "Miss America" and "Salesman" video sequences.
{"title":"Generalized region based transform coding for video compression","authors":"K. Sum, R. Murch","doi":"10.1109/DCC.1995.515588","DOIUrl":"https://doi.org/10.1109/DCC.1995.515588","url":null,"abstract":"Summary form only given. Block based transform coding (BBTC) is among the most popular coding method for video compression due to its simplicity of hardware implementation. At low bit rate transmission however this approach cannot maintain acceptable resolution and image quality. On the other hand, region based coding methods have been shown to have the capability to improve the visual quality by the acknowledgment of human perception. In order to take the advantages from both of the coding methods, a novel technique is introduced to combine BBTC and region based coding. Using this technique, a new class of video coding methods are generated and termed region based transform coding (RBTC). In the generalized RBTC, we represent regions containing motion in terms of texture surrounded by contours. Contours and textures are then coded separately. The novel technique is that the pixel values of the regions are scanned to form a vector. Then the vector is further converted to a number of fixed size image blocks. Using this technique, conventional transform coding can be applied on the blocks of texture directly. Contour can be coded using traditional contour coding methods or any other bit plane encoding methods. To prove the idea of this new class of video coding methods, a scheme called segmented motion transform coding (SMTC) is simulated. In SMTC, chain codes are used for contour coding. 
The simulations are performed using the first 60 frames of both of the CIF formatted \"Miss America\" and \"Salesman\" video sequences.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125124798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
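The core packing step of this RBTC scheme, scanning a region's pixels into a vector and converting the vector into fixed-size blocks for a conventional transform, can be sketched as follows. The block size and padding value are illustrative assumptions:

```python
def region_to_blocks(pixels, block_size=64, pad=0):
    """pixels: pixel values scanned from an arbitrarily shaped region,
    in scan order. Returns a list of fixed-size blocks, padding the
    final partial block so a block transform (e.g. a DCT) can be
    applied to every block uniformly."""
    blocks = []
    for i in range(0, len(pixels), block_size):
        block = pixels[i:i + block_size]
        block += [pad] * (block_size - len(block))   # pad final block
        blocks.append(block)
    return blocks
```

A decoder reverses this by concatenating the blocks, truncating to the region's pixel count (known from the separately coded contour), and writing the values back along the same scan order.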