[Summary form only given]. The best methods of text compression work by conditioning each symbol's probability on its predecessors. Prior symbols establish a context that governs the probability distribution for the next one, and the actual next symbol is encoded with respect to this distribution. However, the best predictors for words in natural language are not necessarily their immediate predecessors. Verbs may depend on nouns, pronouns on names, closing brackets on opening ones, question marks on "wh"-words. To establish a more appropriate dependency structure, the lexical attraction of a pair of words is defined as the likelihood that they will appear (in that order) within a sentence, regardless of how far apart they are. This is estimated by counting the co-occurrences of words in the sentences of a large corpus. Then, for each sentence, an undirected (planar, acyclic) graph is found that maximizes the lexical attraction between linked items, effectively reorganizing the text in the form of a low-entropy model. We encode a series of linked sentences and transmit them in the same manner as order-1 word-level PPM. To prime the lexical attraction linker, the whole document is processed once to acquire the co-occurrence counts, and again to re-link the sentences. Pairs that occur twice or fewer times are excluded from the statistics, which significantly reduces the size of the model. The encoding stage uses an adaptive PPM-style method. Encouraging results have been obtained with this method.
{"title":"Lexical attraction for text compression","authors":"Joscha Bach, I. Witten","doi":"10.1109/DCC.1999.785673","DOIUrl":"https://doi.org/10.1109/DCC.1999.785673","url":null,"abstract":"[Summary form only given]. The best methods of text compression work by conditioning each symbol's probability on its predecessors. Prior symbols establish a context that governs the probability distribution for the next one, and the actual. The next symbol is encoded with respect to this distribution. However, the best predictors for words in natural language are not necessarily their immediate predecessors. Verbs may depend on nouns, pronouns on names, closing brackets on opening ones, question marks on \"wh\"-words. To establish a more appropriate dependency structure, the lexical attraction of a pair of words is defined as the likelihood that they will appear (in that order) within a sentence, regardless of how far apart they are. This is estimated by counting the co-occurrences of words in the sentences of a large corpus. Then, for each sentence, an undirected (planar, acydic) graph is found that maximizes the lexical attraction between linked items, effectively reorganizing the text in the form of a low-entropy model. We encode a series of linked sentences and transmit them in the same manner as order-1 word-level PPM. To prime the lexical attraction linker, the whole document is processed to acquire the co-occurrence counts, and again to re-link the sentences. Pairs that occur twice or less are excluded from the statistics, which significantly reduces the size of the model. The encoding stage utilizes an adaptive PPM-style method. Encouraging results have been obtained using this method.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130531902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. We present an efficient algorithm for compressing the data necessary to represent an arbitrary cutting plane extracted from a three-dimensional curvilinear data set. The cutting plane technique is an important visualization method for time-varying 3D simulation results since the data sets are often so large. An efficient compression algorithm for these cutting planes is especially important when the simulation running on a remote server is being tracked or the data set is stored on a remote server. Various aspects of the visualization process are considered in the algorithm design, such as the inherent data reduction in going from 3D to 2D when generating a cutting plane, the numerical accuracy required in the cutting plane, and the potential to decimate the triangle mesh. After separating each floating point number into mantissa and exponent, a block sorting algorithm and an entropy coding algorithm are used to perform lossless compression.
{"title":"Compression of arbitrary cutting planes","authors":"Yanlin Guan, R. Moorhead","doi":"10.1109/DCC.1999.785685","DOIUrl":"https://doi.org/10.1109/DCC.1999.785685","url":null,"abstract":"Summary form only given. We present an efficient algorithm for compressing the data necessary to represent an arbitrary cutting plane extracted from a three-dimensional curvilinear data set. The cutting plane technique is an important visualization method for time-varying 3D simulation results since the data sets are often so large. An efficient compression algorithm for these cutting planes is especially important when the simulation running on a remote server is being tracked or the data set is stored on a remote server. Various aspects of the visualization process are considered in the algorithm design, such as the inherent data reduction in going from 3D to 2D when generating a cutting plane, the numerical accuracy required in the cutting plane, and the potential to decimate the triangle mesh. After separating each floating point number into mantissa and exponent, a block sorting algorithm and an entropy coding algorithm are used to perform lossless compression.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127255025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast and efficient image compression can be achieved with the progressive wavelet coder (PWC) introduced here. Unlike many previous wavelet coders, PWC does not rely on zerotrees or other ordering schemes based on parent-child wavelet relationships. PWC has a very simple structure, based on two key concepts: (1) data-independent reordering and blocking, and (2) low-complexity independent encoding of each block via adaptive Rice coding of bit planes. In that way, PWC allows for progressive image encoding that is scalable both in resolution and bit rate, with a fully embedded bitstream. PWC achieves a rate/distortion performance that is comparable to that of the state-of-the-art SPIHT (set partitioning in hierarchical trees) coder, but with a better performance/complexity ratio.
{"title":"Fast progressive wavelet coding","authors":"Henrique S. Malvar","doi":"10.1109/DCC.1999.755683","DOIUrl":"https://doi.org/10.1109/DCC.1999.755683","url":null,"abstract":"Fast and efficient image compression can be achieved with the progressive wavelet coder (PWC) introduced. Unlike many previous wavelet coders, PWC does not rely on zerotrees or other ordering schemes based on parent-child wavelet relationships. PWC has a very simple structure, based on two key concepts: (1) data-independent reordering and blocking, and (2) low-complexity independent encoding of each block via adaptive Rice coding of bit planes. In that way, PWC allows for progressive image encoding that is scalable both in resolution and bit rate, with a fully embedded bitstream. PWC achieves a rate/distortion performance that is comparable to that of the state-of-the-art SPIHT (set partitioning in hierarchical trees) coder, but with a better performance/complexity ratio.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125492467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. A novel architecture with parallel memories suitable for hybrid video coding is presented. It efficiently relieves the memory bandwidth bottleneck in the motion estimation, DCT, and IDCT involved in the real-time, low-bit-rate ITU-T H.263 video compression standard. There are four parallel processing elements and eight parallel memory blocks in the system. The address space is divided into three areas. Coordinate areas 0 and 1 can be accessed simultaneously in the row or column formats needed in motion estimation, DCT, and IDCT. Alternatively, area 2 can be accessed for more complex formats, needed, for example, in zigzag scanning and interpolation. We describe the memory space as a 2D coordinate system with horizontal and vertical coordinates (i,j). The coordinate values are restricted to positive values, and (0,0) is fixed to the uppermost left corner of the coordinate area. The module assignment function S(i,j) describes the memory block in which the value at coordinate point (i,j) is stored; memory addresses are given by the address function a(i,j). Coordinate area 0 uses memory blocks 0...3, area 1 uses blocks 4...7, and area 2 uses blocks 0...7. The constants a_0max and a_1max are the maximum addresses of coordinate areas 0 and 1, respectively, and the width of a coordinate area is given by L_i. The processing power increases linearly with the number of parallel processing elements, and using more parallel memory blocks enables the use of more access formats.
{"title":"Parallel memories in video encoding","authors":"Jarno K. Tanskanen, J. Niittylahti","doi":"10.1109/DCC.1999.785709","DOIUrl":"https://doi.org/10.1109/DCC.1999.785709","url":null,"abstract":"Summary form only given. A novel architecture with parallel memories suitable for hybrid video coding is presented. It efficiently relieves the memory bandwidth bottleneck in motion estimation, DCT, and IDCT involved in the real-time low-bit rate ITU-T H.263 video compression standard. There are four parallel processing elements and eight parallel memory blocks in the system. The address space is divided into three areas. Coordinate areas 0 and 1 can be accessed simultaneously for row or column formats, needed in the motion estimation, DCT, and IDCT. Alternatively, the area 2 can be accessed for a more complex formats. Such formats are needed, for example, in zigzag scanning and interpolation. The module assignment function S(i,j), expresses how data is stored in the memory modules. We can describe the memory space as a 2D coordinate system with horizontal and vertical coordinates (i,j). The coordinate values are restricted to positive values, and (0,0) is fixed to the uppermost left corner of the coordinate area. The function S(i,j) simply describes the memory block, where the value of coordinate point (i,j) is stored. Memory addresses are described by the address function a(i,j). The coordinate area 0 deals with the memory blocks 0...3, the area 1 with the blocks 4...7 and the area 2 with the blocks 0...7. The constants a/sub 0max/ and a/sub 1max/ are the maximum addresses of the coordinate areas 0 and 1, respectively. The width of the coordinate area is given by L/sub i/. The processing power increases linearly with the number of parallel processing elements. Using more parallel memory blocks enables use of more access formats.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126485778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. We study the cascading of LZ variants with Huffman coding for multilingual documents. Two models are proposed: a static model and an adaptive (dynamic) model. The static model makes use of the dictionary generated by the LZW algorithm in Chinese dictionary-based Huffman compression to achieve better performance. The dynamic model is an extension of the static cascading model: during the insertion of phrases into the dictionary, the frequency count of the phrases is updated so that a dynamic Huffman tree with variable-length output tokens is obtained. We propose a new method to capture the "LZW dictionary" by picking up the dictionary entries during decompression. The general idea is to add delimiters during the decompression process so that the decompressed files are segmented into phrases that reflect how the LZW compressor uses its dictionary phrases to encode the source. The adaptive cascading model can be thought of as an extension of Chinese LZW compression. Since the size of the header is an important performance bottleneck in the static cascading model, we propose the adaptive cascading model to address this issue. The LZW compressor now outputs not a fixed-length token but a variable-length Huffman code from the Huffman tree. It is expected that such a compressor can achieve very good compression performance. In our adaptive cascading model we choose LZW instead of LZSS because the LZW algorithm preserves more information than the LZSS algorithm does; this characteristic is found to be very useful in helping Chinese compressors attain better performance.
{"title":"Design consideration for multi-lingual cascading text compressors","authors":"Chi-Hung Chi, IV YanZhang","doi":"10.1109/DCC.1999.785677","DOIUrl":"https://doi.org/10.1109/DCC.1999.785677","url":null,"abstract":"Summary form only given. We study the cascading of LZ variants to Huffman coding for multilingual documents. Two models are proposed: the static model and the adaptive (dynamic) model. The static model makes use of the dictionary generated by the LZW algorithm in Chinese dictionary-based Huffman compression to achieve better performance. The dynamic model is an extension of the static cascading model. During the insertion of phrases into the dictionary the frequency count of the phrases is updated so that a dynamic Huffman tree with variable length output tokens is obtained. We propose a new method to capture the \"LZW dictionary\" \"by picking up the dictionary entries during decompression. The general idea is the adding of delimiters during the decompression process so that the decompressed files are segmented into phrases that reflect how the LZW compressor makes use of its dictionary phrases to encode the source. The idea of the adaptive cascading model can be thought as an extension of the Chinese LZW compression. Since the size of the header is one important performance bottleneck in the static cascading model we propose the adaptive cascading model to address this issue. The LZW compressor is now outputting not a fixed length token, but a variable length Huffman code from the Huffman tree. It is expected that such a compressor can achieve very good compression performance. In our adaptive cascading model we choose LZW instead of LZSS because the LZW algorithm preserves more information than the LZSS algorithm does. This characteristic is found to be very useful in helping Chinese compressors to attain better performance.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125538785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Given two strings, a pattern P and a text T of lengths |P|=M and |T|=N, the string matching problem is to find all occurrences of pattern P in text T. The fully compressed string matching problem is the string matching problem with the input strings P and T given in compressed forms p and t respectively, where |p|=m and |t|=n. We present the first almost-optimal string matching algorithms for LZW-compressed strings, running in: (1) O((n+m)log(n+m)) time on a single-processor machine; and (2) Õ(n+m) work on an (n+m)-processor PRAM. The techniques can also be used in the design of efficient algorithms for a wide range of typical string problems in the compressed LZW setting, including computing the period of a word, finding repetitions, finding symmetries, counting subwords, and multi-pattern matching.
{"title":"Almost-optimal fully LZW-compressed pattern matching","authors":"L. Gąsieniec, W. Rytter","doi":"10.1109/DCC.1999.755681","DOIUrl":"https://doi.org/10.1109/DCC.1999.755681","url":null,"abstract":"Given two strings: pattern P and text T of lengths |P|=M and |T|=N, a string matching problem is to find all occurrences of pattern P in text T. A fully compressed string matching problem is the string matching problem with input strings P and T given in compressed forms p and t respectively, where |p|=m and |t|=n. We present first, almost-optimal, string matching algorithms for LZW-compressed strings running in: (1) O((n+m)log(n+m)) time on a single processor machine; and (2) O/sup /spl tilde//(n+m) work on a (n+m)-processor PRAM. The techniques used can be used in design of efficient algorithms for a wide range of the most typical string problems, in the compressed LZW setting, including: computing a period of a word, finding repetitions, symmetries, counting subwords, and multi-pattern matching.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127752242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
[Summary form only given]. For data communication purposes, the initial parsing required by the static Huffman algorithm is a major disadvantage, because the data must be transmitted on-line: as soon as a symbol arrives at the transmitter, it must be encoded and sent to the receiver. In these situations, adaptive Huffman codes have been widely used. This method determines the mapping from the symbol alphabet to codewords based on a running estimate of the symbol weights, and the code changes adaptively so as to remain optimal for the current estimates. Two methods have been presented in the literature for implementing dynamic Huffman coding: the FGK algorithm (Knuth, 1985) and the Λ algorithm (Vitter, 1987). Vitter proved that the total number of bits D_t transmitted by the FGK algorithm for a message with t symbols is bounded below by S_t − n + 1, where S_t is the number of bits required by the static Huffman method and n is the alphabet size, and bounded above by 2S_t + t − 4n + 2. Furthermore, he conjectured that D_t is bounded above by S_t + O(t). We present an amortized analysis that proves this conjecture by showing that D_t ≤ S_t + 2t − 2k − ⌊log min(k+1, n)⌋, where k is the number of distinct symbols in the message. We also present an example where D_t = S_t + 2t − 2k − 3⌊(t−k)/k⌋ − ⌊log(k+1)⌋, showing that the proposed bound is asymptotically tight. These results explain the good performance of FGK observed by several authors in practical experiments.
{"title":"Bounding the compression loss of the FGK algorithm","authors":"R. Milidiú, E. Laber, A. Pessoa","doi":"10.1109/DCC.1999.785696","DOIUrl":"https://doi.org/10.1109/DCC.1999.785696","url":null,"abstract":"[Summary form only given]. For data communication purposes, the initial parsing required by the static Huffman algorithm represents a big disadvantage. This is because the data must be transmitted on-line. As soon as the symbol arrives at the transmitter, it must be encoded and transmitted to the receiver. In these situations, adaptive Huffman codes have been largely used. This method determines the mapping from symbol alphabet to codewords based upon a running estimate of the alphabet symbol weights. The code is adaptive, just changing to remain optimal for the current estimates. Two methods have been presented in the literature for implementing dynamic Huffman coding. The first one was the FGK algorithm (Knuth, 1985) and the second was the /spl Lambda/ algorithm (Vitter, 1987). Vitter proved that the total number of bits D/sub t/ transmitted by the FGK algorithm for a message with t symbols is bounded below by S/sub t/-n+1, where S/sub t/ is the number of bits required by the static Huffman method and bounded above by 2S/sub t/+t-4n+2. Furthermore, he conjectured that D/sub t/ is bounded above by S/sub t/+O(t). We present an amortized analysis to prove this conjecture by showing that D/sub t//spl les/S/sub t/+2t-2k-[log min(k+1,n)], where k is the number of distinct symbols in the message. We also present an example where D/sub t/=S/sub t/+2t-2k-3[(t-k)/k]-[log(k+1)], showing that the proposed bound is asymptotically tight. These results explain the good performance of FGK observed by some authors through practical experiments.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"505 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127048613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Traditional wavelet packet (WP) optimization techniques neglect information about the structure of the lossy part of the compression scheme. Such information, however, can help guide the optimization procedure toward efficient WP structures. We propose a wavelet packet algorithm with a constrained rate-distortion optimization which makes it suited to subsequent tree-structured coding such as the set partitioning in hierarchical trees (SPIHT) algorithm. The (octave-band) wavelet transform lends itself to simple and coherent tree-shaped spatial relations which can then be used to define zerotrees. Yet input images have different frequency distributions, and an adaptive transform such as WP is bound to be more efficient on an image-by-image basis. With WP algorithms, the coefficients in the WP domain can be rearranged to produce what resembles (or simulates) the normal wavelet transform structure; this stage is usually performed to simplify the coding stage. However, an unconstrained optimization can result in a transformed image with complicated or incoherent tree-shaped spatial relations. This work aims to show that the efficiency of embedded coders such as SPIHT and Shapiro's zerotree coder depends strongly on WP structures with coherent spatial tree relationships.
{"title":"Constrained wavelet packets for tree-structured video coding algorithms","authors":"H. Khalil, A. Jacquin, C. Podilchuk","doi":"10.1109/DCC.1999.755685","DOIUrl":"https://doi.org/10.1109/DCC.1999.755685","url":null,"abstract":"Traditional wavelet packet (WP) optimization techniques neglect information about the structure of the lossy part of the compression scheme. Such information, however, can help guide the optimization procedure so as to result in efficient WP structures. We propose a wavelet packet algorithm with a constrained rate-distortion optimization which makes it suited to subsequent tree-structured coding such as with the set partitioning in hierarchical trees (SPMT) algorithm. The (octave-band) wavelet transform lends itself to simple and coherent tree-shaped spatial relations which can then be used to define zero-trees. Yet, input images have different frequency distributions and an adaptive transform such as WP is bound to be more efficient on an image-by-image basis. With WP algorithms, the coefficients in the WP domain can be rearranged to produce what resembles (or simulates) the normal wavelet transform structure. This stage is usually performed to simplify the coding stage. However, an unconstrained optimization can result in a transformed image with complicated or incoherent tree-shaped spatial relations. This work aims to show that the efficiency of embedded coders such as SPMT and Shapiro's Zerotrees strongly depends on WP structures with coherent spatial tree relationships.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114926786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When macroblocks are lost in an MPEG decoder, the decoder can try to conceal the error by estimating or interpolating the missing area. Many different methods for this type of concealment have been proposed, operating in the spatial, frequency, or temporal domains, or some hybrid combination of them. We show how the use of a decision tree that can adaptively choose among several different error concealment methods can outperform each single method. We also propose two promising new methods for temporal error concealment.
{"title":"Decision trees for error concealment in video decoding","authors":"Song Cen, P. Cosman, F. Azadegan","doi":"10.1109/DCC.1999.755688","DOIUrl":"https://doi.org/10.1109/DCC.1999.755688","url":null,"abstract":"When macroblocks are lost in an MPEG decoder, the decoder can try to conceal the error by estimating or interpolating the missing area. Many different methods for this type of concealment have been proposed, operating in the spatial, frequency, or temporal domains, or some hybrid combination of them. We show how the use of a decision tree that can adaptively choose among several different error concealment methods can outperform each single method. We also propose two promising new methods for temporal error concealment.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132939762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. Arithmetic coding is a well-known technique for lossless coding or data compression. We have developed two new multiplication-free methods. Our first new method is to round the interval width register A to x bits instead of truncating it. Rounding is equivalent to truncating A to its x most significant bits if the (x+1)th most significant bit of A is a 0 and adding 1 to the truncated representation if the (x+1)th most significant bit is a 1. The rounding applied in our new method increases the complexity (compared to truncation), since, in about half of the cases, 1 has to be added to the truncated representation. As an alternative, we therefore developed a second new method, which we call "partial rounding". By partial rounding we mean that 1 is only added to the truncated representation of A when the (x+1)th most significant bit is a 1 and the xth most significant bit is a 0. In the implementation this means that the xth bit of the approximation of A equals the logical OR of the xth and (x+1)th most significant bits of the original A. The partial rounding of this second new method results in the same approximation as the "full rounding" of the first method in about 75% of the cases, but its complexity is as low as that of truncation (since the complexity of the OR is negligible). Applying the various multiplication-free methods in the arithmetic coder has demonstrated that our new rounding-based method outperforms the previously published multiplication-free methods. The "partial rounding" method outperforms the previously published truncation-based methods.
{"title":"New methods for multiplication-free arithmetic coding","authors":"R. van der Vleuten","doi":"10.1109/DCC.1999.785712","DOIUrl":"https://doi.org/10.1109/DCC.1999.785712","url":null,"abstract":"Summary form only given. Arithmetic coding is a well-known technique for lossless coding or data compression. We have developed two new multiplication-free methods. Our first new method is to round A to x bits instead of truncating it. Rounding is equivalent to truncating A to its x most significant bits if the (x+1)th most significant bit of A is a 0 and adding 1 to the truncated representation if the (x+1)th most significant bit is a 1. The rounding that is applied in our new method increases the complexity (compared to truncation), since, in about half of the cases, 1 has to be added to the truncated representation. As an alternative, we therefore developed a second new method, which we call \"partial rounding\". By partial rounding we mean that 1 is only added to the truncated representation of A in the case when the (x+1)th most significant bit is a 1 and the xth most significant bit is a 0. In the implementation this means that the xth bit of the approximation of A equals the logical OR of the xth and (x+l)th most significant bits of the original A. The partial rounding of this second new method results in the same approximation as the \"full rounding\" of the first method in about 75% of the cases, but its complexity is as low as that of truncation (since the complexity of the OR is negligible). Applying the various multiplication-free methods in the arithmetic coder has demonstrated that our new rounding-based method outperforms the previously published multiplication-free methods. The \"partial rounding\" method outperforms the previously published truncation-based methods.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115593788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}