Summary form only given; substantially as follows. Computing with sets of tuples (n-ary relations) is often required in programming and is a major cause of performance degradation as the size of the sets increases. The authors present a new data structure dedicated to the manipulation of large sets of tuples, dubbed a sharing tree. The main idea for reducing memory consumption is to share some sub-tuples of the set represented by a sharing tree; various conditions on this sharing are given. The authors have developed algorithms for common set operations (membership, insertion, equality, union, intersection, ...) whose theoretical complexities are proportional to the sizes of the sharing trees given as arguments, which are usually much smaller than the sizes of the represented sets.
{"title":"Efficient handling of large sets of tuples with sharing trees","authors":"D. Zampuniéris, B. Le Charlier","doi":"10.1109/DCC.1995.515538","DOIUrl":"https://doi.org/10.1109/DCC.1995.515538","url":null,"abstract":"Summary form only given; substantially as follows. Computing with sets of tuples (n-ary relations) is often required in programming, while being a major cause of performance degradation as the size of sets increases. The authors present a new data structure dedicated to the manipulation of large sets of tuples, dubbed a sharing tree. The main idea to reduce memory consumption is to share some sub-tuples of the set represented by a sharing tree. Various conditions are given. The authors have developed algorithms for common set operations: membership, insertion, equality, union, intersection, ... that have theoretical complexities proportional to the sizes of the sharing trees given as arguments, which are usually much smaller than the sizes of the represented sets.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131721941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. After the context quantization, an entropy coder using L·2^K conditional probabilities (where L is the number of quantized levels and K is the number of bits) remains impractical. Instead, only the expectations are approximated by sample means with respect to the different quantized contexts. Computing the sample means involves only accumulating the error terms within the quantized context C(d,t) and keeping a count of the occurrences of C(d,t). Thus, the time and space complexities of the described context-based modeling of the prediction errors are O(L·2^K). Based on the quantized context C(d,t), the encoder makes a DPCM prediction I, adds to I the most likely prediction error, and thus arrives at an adaptive, context-based, nonlinear prediction. The error e is then entropy coded; the coding of e is done with L conditional probabilities. The results of the proposed context-based, lossless image compression technique are included.
{"title":"Context selection and quantization for lossless image coding","authors":"Xiaolin Wu","doi":"10.1109/DCC.1995.515563","DOIUrl":"https://doi.org/10.1109/DCC.1995.515563","url":null,"abstract":"Summary form only given. After the context quantization, an entropy coder using L2/sup K/ (L is the quantized levels and K is the number of bits) conditional probabilities remains impractical. Instead, only the expectations are approximated by the sample means with respect to different quantized contexts. Computing the sample means involves only cumulating the error terms in the quantized context C(d,t) and keeping a count on the occurrences of C(d,t). Thus, the time and space complexities of the described context based modeling of the prediction errors are O(L2/sup K/). Based on the quantized context C(d,t), the encoder makes a DPCM prediction I, adds to I the most likely prediction error and then arrives at an adaptive, context-based, nonlinear prediction. The error e is then entropy coded. The coding of e is done with L conditional probabilities. The results of the proposed context-based, lossless image compression technique are included.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132014631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The transmission and storage of digital video at reduced bit rates requires a source coding scheme, which generally contains motion-compensated prediction as an essential part. The class of motion estimation algorithms known as backward methods has the advantage of dense motion field sampling, and in coding applications the decoder needs no motion information from the coder. In this paper, we first present an overview of operator-based motion compensators with interpolative and non-interpolative kernels. We then proceed with two new results. The first offers a new perspective on the classical pel-recursive methods, one that exposes the weaknesses of traditional approaches and offers an explanation for the improved performance of operator-based algorithms. The second result introduces a minimum-norm intra-frame operator and establishes an equivalence relationship between this and the original (least squares) operator. This equivalence induces interesting duality properties that, in addition to offering insights into operator-based motion estimators, can be used to relax either the maximum needed computational power or the frame buffer length.
{"title":"New relationships in operator-based backward motion compensation","authors":"Aria Nosratinia, M. Orchard","doi":"10.1109/DCC.1995.515529","DOIUrl":"https://doi.org/10.1109/DCC.1995.515529","url":null,"abstract":"The transmission and storage of digital video at reduced bit rates requires a source coding scheme, which generally contains motion compensated prediction as an essential part. The class of motion estimation algorithms known as backward methods have the advantage of dense motion field sampling, and in coding applications the decoder needs no motion information from the coder. In this paper, we first present an overview of operator based motion compensators with interpolative and non-interpolative kernels. We then proceed with two new results. The first offers a new perspective on the classical pel-recursive methods; one that exposes the weaknesses of traditional approaches and offers an explanation for the improved performance of operator-based algorithms. The second result introduces a minimum norm intra-frame operator and establishes an equivalence relationship between this and the original (least squares) operator. This equivalence induces interesting duality properties that, in addition to offering insights into operator-based motion estimators, can be used to relax either the maximum needed computational power or the frame buffer length.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"504 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133011322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. There is a need for a good deinterlacing (scan-format conversion) system since, for example, currently available cameras are interlaced and the US HDTV Grand Alliance has put forward a proposal containing both interlaced and progressive scanning formats. On the other hand, over the next few years, local broadcasting stations will find themselves in the position of receiving video material that could be HDTV quality and progressively scanned, while their news and commercials are still NTSC produced (interlaced scanning). We have developed a new algorithm for deinterlacing based on the algorithm of Nguyen and Dubois (see Proc. Int. Workshop on HDTV, November 1992). It interpolates the missing pixels using a weighted combination of spatial and temporal methods. The algorithm is self-adaptive, since it weights the various processing blocks based on the error they introduce. Experiments were run on both "real-world" and computer-generated video sequences. The results were compared to the "original" obtained as an output of the ray-tracer, as well as to the reference algorithm provided by the AT&T HDTV group.
{"title":"Adaptive bidirectional time-recursive interpolation for deinterlacing","authors":"J. Kovacevic, R. Safranek, E. Yeh","doi":"10.1109/DCC.1995.515556","DOIUrl":"https://doi.org/10.1109/DCC.1995.515556","url":null,"abstract":"Summary form only given. There exists a need for finding a good deinterlacing (scan format conversion) system, since, for example, current available cameras are interlaced and the US HDTV Grand Alliance has put forward a proposal containing both interlaced and progressive scanning formats. On the other hand, over the next few years, the local broadcasting stations will find themselves in the position of receiving video material that could be HDTV quality, progressively scanned, while their news/commercials are still NTSC produced (interlaced scanning). We have developed a new algorithm for deinterlacing based on the algorithm of Nguyen and Dubois (see Proc. Int. Workshop on HDTV, November 1992). It interpolates the missing pixels using a weighted combination of spatial and temporal methods. The algorithm is self-adaptive, since it weights various processing blocks based on the error they introduce. Experiments were run on both \"real-world\" and computer generated video sequences. The results were compared to the \"original\" obtained as an output of the ray-tracer, as well as to the reference algorithm provided by the AT&T HDTV group.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114920519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. Many popular redundancy-free codes are linear or affine, including the natural binary code (NBC), the folded binary code (FBC), the Gray code (GC), and the two's complement code (TCC). A theorem is given for the channel distortion of a uniform 2^n-level scalar quantizer with step size Δ that uses an affine index assignment with generator matrix G to transmit across a binary symmetric channel with crossover probability q. Using this theorem, we compare the NBC and the FBC for any source distribution.
{"title":"On the performance of affine index assignments for redundancy free source-channel coding","authors":"A. Méhes, K. Zeger","doi":"10.1109/DCC.1995.515543","DOIUrl":"https://doi.org/10.1109/DCC.1995.515543","url":null,"abstract":"Summary form only given. Many popular redundancy free codes are linear or affine, including the natural binary code (NBC), the folded binary code (FBC), the Gray code (GC), and the two's complement code (TCC). A theorem which considers the channel distortion of a uniform 2/sup n/ level scalar quantizer with stepsize /spl Delta/, which uses an affine index assignment with generator matrix G to transmit across a binary symmetric channel with crossover probability q, is given. Using this theorem we compare the NBC and the FBC for any source distribution.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114754840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. Dynamic Huffman coding uses a binary code-tree data structure that reflects the relative frequency counts of the symbols being coded. The authors' aim is to obtain a simple and practical statistical algorithm that improves processing speed while maintaining a high compression ratio. The proposed algorithm uses a self-organizing rule (the transpose heuristic) to reconstruct the code tree: it renews the code tree by only switching the ordered positions of the corresponding symbols. This method is called self-organized dynamic Huffman coding. To achieve a higher compression ratio, the authors employ context modelling.
{"title":"Self-organized dynamic Huffman coding without frequency counts","authors":"Y. Okada, N. Satoh, K. Murashita, S. Yoshida","doi":"10.1109/DCC.1995.515583","DOIUrl":"https://doi.org/10.1109/DCC.1995.515583","url":null,"abstract":"Summary form only given. Dynamic Huffman coding uses a binary code tree data structure to encode the relative frequency counts of the symbols being coded. The authors aim is to obtain a simple and practical statistical algorithm in order to improve the processing speed while maintaining a high compression ratio. The algorithm proposed uses a self-organizing rule (transpose heuristic) to reconstruct the code tree. It renews the code tree by only switching the ordered positions of corresponding symbols. This method is called self organized dynamic Huffman coding. To achieve a higher compression ratio they employ context modelling.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132265889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. In synchronous compression, a lossless data compressor attempts to equalize the rates of two synchronous communication channels. Synchronous compression is of broad applicability in improving the efficiency of internetwork links over public digital networks. The most notable features of the synchronous compression application are the mixed traffic it must tolerate and the rate-buffering role played by the compression processor. The resulting system can be modeled in the time domain by queuing methods. The performance of a compression algorithm in this application is governed by the interplay of its ultimate compression ratio, its computational efficiency, and the distribution function of its instantaneous consumption rate of the source. The queuing model for synchronous compression represents the compressor as a server fed by a single queue. We describe the basic model, develop the required queuing theory, examine service-time statistics, and compare with simulation, relating the model to theoretical and empirical properties of queuing systems and of Lempel-Ziv compression algorithm performance. We show that synchronous compression simulations agree with the predictions of queuing theory, and we observe various interesting properties of match-length distributions and their impact on compression in the time domain.
{"title":"Queuing models of synchronous compressors","authors":"M. S. Moellenhoff, M.W. Maier","doi":"10.1109/DCC.1995.515555","DOIUrl":"https://doi.org/10.1109/DCC.1995.515555","url":null,"abstract":"Summary form only given. In synchronous compression, a lossless data compressor attempts to equalize the rates of two synchronous communication channels. Synchronous compression is of broad applicability in improving the efficiency of internetwork links over public digital networks. The most notable features of the synchronous compression application are the mixed traffic it must tolerate and the rate buffering role played by the compression processor. The resulting system can be modeled in the time domain by queuing methods. The performance of a compression algorithm in this application is governed by the interplay of its ultimate compression ratio, its computational efficiency, and the distribution function of its instantaneous consumption rate of the source. The queuing model for synchronous compression represents the compressor as the server fed by a single queue. We describe the basic model, develop the required basic queuing theory, look at service time statistics, and compare to simulation. We develop the queuing model for synchronous compression and relate it to theoretical and empirical properties of queuing systems and Lempel-Ziv compression algorithm performance. We illustrate that synchronous compression simulations are in agreement with the predictions of queuing theory. In addition, we observe various interesting properties of match length distributions and their impact on compression in the time-domain.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"603 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134139224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shows that the use of the lazy list processing technique from the world of functional languages allows, under certain conditions, the package-merge algorithm to be executed in much less space than is indicated by the O(nL) space worst-case bound. For example, the revised implementation generates a 32-bit length-limited code for the TREC word distribution within 15 Mb of memory. It is also shown how a second observation, that in large-alphabet situations there are often many symbols with the same frequency, can be exploited to further reduce the space required, for both unlimited and length-limited coding. This second improvement allows calculation of an optimal length-limited code for the TREC word distribution in under 8 Mb of memory, and calculation of an unrestricted Huffman code in under 1 Mb of memory.
{"title":"Space-efficient construction of optimal prefix codes","authors":"Alistair Moffat, A. Turpin, J. Katajainen","doi":"10.1109/DCC.1995.515509","DOIUrl":"https://doi.org/10.1109/DCC.1995.515509","url":null,"abstract":"Shows that the use of the lazy list processing technique from the world of functional languages allows, under certain conditions, the package-merge algorithm to be executed in much less space than is indicated by the O(nL) space worst-case bound. For example, the revised implementation generates a 32-bit limited code for the TREC distribution within 15 Mb of memory. It is also shown how a second observation-that in large-alphabet situations it is often the case that there are many symbols with the same frequency-can be exploited to further reduce the space required, for both unlimited and length-limited coding. This second improvement allows calculation of an optimal length-limited code for the TREC word distribution in under 8 Mb of memory; and calculation of an unrestricted Huffman code in under 1 Mb of memory.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"304 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132873584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. Lossless image compression is often required in situations where compression is done once and decompression is performed many times. Since compression is performed only once, the time taken for compression is not a critical factor when selecting a compression scheme; what is more critical is the amount of time and memory needed for decompression, and the compression ratio obtained. Compression schemes that satisfy these constraints are called asymmetric techniques. While there exist many asymmetric techniques for the lossy compression of image data, most techniques reported for lossless compression of image data have been symmetric. We present a new lossless compression technique that is well suited for asymmetric applications. It gives superior performance compared to standard lossless compression techniques by exploiting 'global' correlations, by which we mean similar patterns of pixels that re-occur within the image, not necessarily in close proximity. The developed technique can also potentially be adapted for use in symmetric applications that require high compression ratios. We develop algorithms for codebook design using LBG-like clustering of image blocks. For a preliminary investigation, codebooks of various sizes were constructed using different block sizes and using the 8 JPEG predictors as the set of prediction schemes.
{"title":"Asymmetric lossless image compression","authors":"N. Memon, K. Sayood","doi":"10.1109/DCC.1995.515567","DOIUrl":"https://doi.org/10.1109/DCC.1995.515567","url":null,"abstract":"Summary form only given. Lossless image compression is often required in situations where compression is done once and decompression is to be performed a multiple number of times. Since compression is to be performed only once, time taken for compression is not a critical factor while selecting an appropriate compression scheme. What is more critical is the amount of time and memory needed for decompression and also the compression ratio obtained. Compression schemes that satisfy the above constraints are called asymmetric techniques. While there exist many asymmetric techniques for the lossy compression of image data, most techniques reported for lossless compression of image data have been symmetric. We present a new lossless compression technique that is well suited for asymmetric applications. It gives superior performance compared to standard lossless compression techniques by exploiting 'global' correlations. By 'global' correlations we mean similar patterns of pixels that re-occur within the image, not necessarily at close proximity. The developed technique can also potentially be adapted for use in symmetric applications that require high compression ratios. We develop algorithms for codebook design using LBG like clustering of image blocks. For the sake of a preliminary investigation, codebooks of various sizes were constructed using different block sizes and using the 8 JPEG predictors as the set of prediction schemes.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130541500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many compression applications consist of compressing multiple sources with significantly different distributions. In the context of vector quantization (VQ) these sources are typically quantized using separate codebooks. Since memory is limited in most applications, a convenient way to gracefully trade between performance and storage is needed. Earlier work addressed this problem by clustering the multiple sources into a small number of source groups, where each group shares a codebook. As a natural generalization, we propose the design of a size-limited universal codebook consisting of the union of overlapping source codebooks. This framework allows each source codebook to consist of any desired subset of the universal codevectors and provides greater design flexibility which improves the storage-constrained performance. Further advantages of the proposed approach include the fact that no two sources need be encoded at the same rate, and the close relation to universal, adaptive, and classified quantization. Necessary conditions for optimality of the universal codebook and the extracted source codebooks are derived. An iterative descent algorithm is introduced to impose these conditions on the resulting quantizer. Possible applications of the proposed technique are enumerated and its effectiveness is illustrated for coding of images using finite-state vector quantization.
{"title":"Constrained-storage vector quantization with a universal codebook","authors":"Sangeeta Ramakrishnan, Kenneth Rose, A. Gersho","doi":"10.1109/DCC.1995.515494","DOIUrl":"https://doi.org/10.1109/DCC.1995.515494","url":null,"abstract":"Many compression applications consist of compressing multiple sources with significantly different distributions. In the context of vector quantization (VQ) these sources are typically quantized using separate codebooks. Since memory is limited in most applications, a convenient way to gracefully trade between performance and storage is needed. Earlier work addressed this problem by clustering the multiple sources into a small number of source groups, where each group shares a codebook. As a natural generalization, we propose the design of a size-limited universal codebook consisting of the union of overlapping source codebooks. This framework allows each source codebook to consist of any desired subset of the universal codevectors and provides greater design flexibility which improves the storage-constrained performance. Further advantages of the proposed approach include the fact that no two sources need be encoded at the same rate, and the close relation to universal, adaptive, and classified quantization. Necessary conditions for optimality of the universal codebook and the extracted source codebooks are derived. An iterative descent algorithm is introduced to impose these conditions on the resulting quantizer. Possible applications of the proposed technique are enumerated and its effectiveness is illustrated for coding of images using finite-state vector quantization.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125104329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}