"Data compression using long common strings" — J. Bentley. Proceedings DCC'99 Data Compression Conference. https://doi.org/10.1109/DCC.1999.755678
We describe a precompression algorithm that effectively represents any long common strings that appear in a file. The algorithm interacts well with standard compression algorithms, which represent shorter strings that occur near one another in the input text. Our experiments show that some real data sets do indeed contain many long common strings. We extend the fingerprint mechanisms of our algorithm into a program that identifies long common strings in an input file. This program gives interesting insights into the structure of real data files that contain long common strings.
"Modified SPIHT encoding for SAR image data" — Z. Zeng, I. Cumming. Proceedings DCC'99 Data Compression Conference. https://doi.org/10.1109/DCC.1999.785719
Summary form only given. We developed a wavelet-based SAR image compression algorithm which combines tree-structured texture analysis, soft-thresholding speckle reduction, quadtree homogeneous decomposition, and a modified zero-tree coding scheme. First, the tree-structured wavelet transform is applied to the SAR image. The decomposition is no longer applied only to the low-scale subsignals recursively, but to the output of any filter. The decomposition criterion is the energy of the image: if the energy of a subimage is significantly smaller than that of the others, we stop the decomposition in this region, since it contains less information. The texture factors, which represent the amount of texture information, are created after this step. Second, quadtree decomposition is used to split the lowest-scale component into two sets, a homogeneous set and a target set. The homogeneous set consists of the relatively homogeneous regions. The target set consists of those non-homogeneous regions, which have been further decomposed into single-component regions. A conventional soft threshold is applied to reduce speckle noise on all the wavelet coefficients except those of the lowest scale; the feature factor is used to set the threshold. Finally, the conventional SPIHT methods are modified based on the results of the tree-structured decomposition and the quadtree decomposition. In the encoder, the amount of speckle reduction is chosen based on the requirements of the user. Different coding schemes are applied to the homogeneous set and the target set. The skewed distribution of the residuals makes arithmetic coding the best choice for lossless compression.
"Rate-distortion analysis of spike processes" — C. Weidmann, M. Vetterli. Proceedings DCC'99 Data Compression Conference. https://doi.org/10.1109/DCC.1999.755657
Recent rate-distortion analyses of image transform coders are based on a trade-off between the lossless coding of coefficient positions versus the lossy coding of the coefficient values. We propose spike processes as a tool that allows a more fundamental trade-off, namely between lossy position coding and lossy value coding. We investigate the Hamming-distortion case and give analytic results for single and multiple spikes. We then consider upper bounds for a single Gaussian spike with squared-error distortion. The obtained results show a rate-distortion behavior which switches from linear at low rates to exponential at high rates.
"A perceptual-based video coder for error resilience" — Yi-jen Chiu. Proceedings DCC'99 Data Compression Conference. https://doi.org/10.1109/DCC.1999.785678
Summary form only given. Error resilience is an important requirement when errors occur during video transmission. Video transmitted over the Internet is usually a packetized stream, so the common errors for Internet video are due to packet loss, caused by buffer overflows in routers, late arrival of packets, and bit errors in the network. This loss results in single or multiple macroblock losses in the decoding process and causes severe degradation in perceived quality as well as error propagation. We present a perceptual preprocessor, based on the insensitivity of the human visual system to mild changes in pixel intensity, that segments video into regions according to the perceptibility of picture changes. Using this segmentation, we determine which macroblocks require motion estimation and which macroblocks need to be included in the second layer. The second layer contains a coarse (more coarsely quantized) version of the most perceptually critical picture information, providing redundancy used to reconstruct lost coding blocks. This information is transmitted in a separate packet, which provides path and time diversity when packet losses are uncorrelated. This combination of methods provides a significant improvement in received quality when losses occur, without significantly degrading the video in a low-bit-rate video channel. Our proposed scheme is easily scalable in bit rate, picture quality, and computational complexity for use on different platforms. Because the data in our layered video stream is standards-compliant, the proposed schemes require no non-standard device to encode or decode the video, and they are easily integrated into current video standards such as H.261/H.263, MPEG-1/MPEG-2, and the forthcoming MPEG-4.
"Binary pseudowavelets and applications to bilevel image processing" — S. Pigeon, Yoshua Bengio. Proceedings DCC'99 Data Compression Conference. https://doi.org/10.1109/DCC.1999.755686
This paper shows the existence of binary pseudowavelets, bases over the binary domain that exhibit some of the properties of wavelets, such as multiresolution reconstruction and compact support. The binary pseudowavelets are defined on B^n (binary vectors of length n) and are operated upon with the binary operators AND and XOR. The forward transform, or analysis, is the decomposition of a binary vector into its constituent binary pseudowavelets. Binary pseudowavelets allow multiresolution, progressive reconstruction of binary vectors by using progressively more coefficients in the inverse transform. Binary pseudowavelet bases, being sparse matrices, also provide for fast transforms; moreover, pseudowavelets rely on hardware-friendly operations for efficient software and hardware implementation.
"Reduced comparison search for the exact GLA" — T. Kaukoranta, P. Fränti, O. Nevalainen. Proceedings DCC'99 Data Compression Conference. https://doi.org/10.1109/DCC.1999.755651
This paper introduces a new method for reducing the number of distance calculations in the generalized Lloyd algorithm (GLA), a widely used method for constructing a codebook in vector quantization. The reduced comparison search detects the activity of the code vectors and exploits it in the classification of the training vectors. For training vectors whose current code vector has not been modified, we calculate distances only to the active code vectors. A large proportion of the distance calculations can be omitted without sacrificing the optimality of the partition. The new method is included in several fast GLA variants, reducing their running times by over 50% on average.
"Two space-economical algorithms for calculating minimum redundancy prefix codes" — R. Milidiú, A. Pessoa, E. Laber. Proceedings DCC'99 Data Compression Conference. https://doi.org/10.1109/DCC.1999.755676
The minimum redundancy prefix code problem is to determine, for a given list $W=[w_1,\ldots,w_n]$ of $n$ positive symbol weights, a list $L=[l_1,\ldots,l_n]$ of $n$ corresponding integer codeword lengths such that $\sum_{i=1}^{n} 2^{-l_i} \le 1$ and $\sum_{i=1}^{n} w_i l_i$ is minimized. Let us consider the case where $W$ is already sorted. In this case, the output list $L$ can be represented by a list $M=[m_1,\ldots,m_H]$, where $m_l$, for $l=1,\ldots,H$, denotes the multiplicity of the codeword length $l$ in $L$, and $H$ is the length of the longest codeword. Fortunately, $H$ is proved to be $O(\min\{\log(1/p_1), n\})$, where $p_1$ is the smallest symbol probability, given by $w_1 / \sum_{i=1}^{n} w_i$. We present the F-LazyHuff and the E-LazyHuff algorithms. F-LazyHuff runs in $O(n)$ time but requires $O(\min\{H^2, n\})$ additional space. On the other hand, E-LazyHuff runs in $O(n \log(n/H))$ time, requiring only $O(H)$ additional space. Finally, since our two algorithms have the advantage of not writing to the input buffer during the code calculation, we discuss some applications where this feature is very useful.
"The effect of flexible parsing for dynamic dictionary-based data compression" — Yossi Matias, N. Rajpoot, S. C. Sahinalp. Proceedings DCC'99 Data Compression Conference. https://doi.org/10.1109/DCC.1999.755673
We report on the performance evaluation of greedy parsing with a single-step lookahead, denoted as flexible parsing. We also introduce a new fingerprint-based data structure which enables efficient linear-time implementation.
"On taking advantage of similarities between parameters in lossless sequential coding" — J. Åberg. Proceedings DCC'99 Data Compression Conference. https://doi.org/10.1109/DCC.1999.785670
Summary form only given. In sequential lossless data compression algorithms the data stream is often transformed into short subsequences that are modeled as memoryless. It is then desirable to use any information that each sequence might provide about the behaviour of other sequences that can be expected to have similar properties. Here we examine one such situation, as follows. We want to encode, using arithmetic coding with a sequential estimator, an M-ary memoryless source with unknown parameters $\theta$, from which we have already encoded a sequence $x^n$. In addition, both the encoder and the decoder have observed a sequence $y^n$ that is generated independently by another source with unknown parameters $\tilde{\theta}$ that are known to be "similar" to $\theta$ according to a pseudodistance $\delta(\theta,\tilde{\theta})$ that is approximately equal to the relative entropy. Also known to both sides is a number $d$ such that $\delta(\theta,\tilde{\theta}) \le d$. For a stand-alone memoryless source, the worst-case average redundancy of the $(n+1)$-th encoding is lower bounded by $0.5(M-1)/n + O(1/n^2)$, and the Dirichlet estimator is close to optimal for this case. We show that this bound also holds for the case with side information as described above, meaning that we can improve, at best, the $O(1/n^2)$ term. We define a frequency-weighted estimator for this purpose. Applying the frequency-weighted estimator to the PPM algorithm (Bell et al., 1989) by weighting order-4 statistics into an order-5 model, with $d$ estimated during encoding, yields improvements that are consistent with the bounds above, which means that in practice we improve the performance by about 0.5 bits per active state of the model, a gain of approximately 20000 bits on the Calgary Corpus.
"Finite automata and regularized edge-preserving wavelet transform scheme" — Sung-Wai Hong, P. Bao. Proceedings DCC'99 Data Compression Conference. https://doi.org/10.1109/DCC.1999.785687
Summary form only given. We present an edge-preserving image compression technique based on the wavelet transform and iterative constrained least-squares regularization. This approach treats image reconstruction from lossy compression as a process of image restoration, utilizing edge information detected from the source image as a priori knowledge for the subsequent reconstruction. Image restoration refers to the problem of estimating the source image from its degraded version. The reconstruction of DWT-coded images is formulated as a regularized image recovery problem that makes use of the edge information as a priori knowledge about the source image to recover the details, as well as to reduce the ringing artifacts of the DWT-coded image. To balance the rate spent on edge information against that spent on the DWT-coded image data, a scheme based on generalized finite automata (GFA) is used. GFA is used instead of vector quantization in order to achieve adaptive encoding of the edge image.