The authors describe an approach to still-image coding based on human visual perception and report results for monochrome and color images. The approach is based on differential quantization, in which a smooth approximation is subtracted from the image prior to quantization. They consider two such approximations. The first is an approximation by splines obtained from a sparse, fixed subsampled array of the image. The second segments the image into piecewise constant regions on the basis of the local activity of the image. Both approximations yield remainder, or residual, images in which large errors are localized in high-activity portions of the image. Because of visual masking, the remainder image can be coarsely quantized without visible impairment of the reconstructed image. The coarsely quantized remainder is then encoded in an error-free manner. In such a perceptually based encoding method, the mean square error depends on the activity of the image.
{"title":"Perceptually based coding of monochrome and color still images","authors":"T. Reed, V. Algazi, G. Ford, I. Hussain","doi":"10.1109/DCC.1992.227467","DOIUrl":"https://doi.org/10.1109/DCC.1992.227467","url":null,"abstract":"The authors describe a human visual perception approach and report some results for this problem. The approach is based on the differential quantization of images, in which smooth approximations are subtracted from the image prior to quantization. They consider two such approximations. The first one is an approximation by splines obtained from a sparse and fixed subsampled array of the image. The second one segments the image into piecewise constant regions on the basis of the local activity of the image. Both these approximations result in remainders of residual images where large errors are localized in portions of the image of high activity. Because of visual masking the remainder image can now be coarsely quantized without visual impairment to the reconstructed image. The coarsely quantized remainder is now encoded in an error free manner. In such a perceptually based encoding method the mean square error is now dependent on the activity of the image.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128201238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Describes a system called Emily, designed to implement the minimal-length encoding principle for induction, and a series of experiments carried out, with some success, using that system. Emily is based on the principle that the formulation of concepts (i.e., theories or explanations) over a set of data can be achieved by minimally encoding that data. Thus, a learning problem can be solved by minimising the description of its data.
{"title":"Experiments using minimal-length encoding to solve machine learning problems","authors":"A. Gammerman, T. Bellotti","doi":"10.1109/DCC.1992.227445","DOIUrl":"https://doi.org/10.1109/DCC.1992.227445","url":null,"abstract":"Describes a system called Emily which was designed to implement the minimal-length encoding principle for induction, and a series of experiments that was carried out with some success by that system. Emily is based on the principle that the formulation of concepts (i.e., theories or explanations) over a set of data can be achieved by the process of minimally encoding that data. Thus, a learning problem can be solved by minimising its descriptions.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131260180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Witten, Bell and Nevill (see ibid., p.23, 1991) have described compression models for use in full-text retrieval systems. The authors discuss other coding methods for use with the same models, and give results showing that their scheme yields virtually identical compression while decoding more than forty times faster. One of the main features of their implementation is the complete absence of arithmetic coding; this, in part, is the reason for the high speed. The implementation is also particularly suited to slow devices such as CD-ROM, in that answering a query requires one disk access for each term in the query and one disk access for each answer. All words and numbers are indexed, and there are no stop words. They have built two compressed databases.
{"title":"Coding for compression in full-text retrieval systems","authors":"Alistair Moffat, J. Zobel","doi":"10.1109/DCC.1992.227474","DOIUrl":"https://doi.org/10.1109/DCC.1992.227474","url":null,"abstract":"Witten, Bell and Nevill (see ibid., p.23, 1991) have described compression models for use in full-text retrieval systems. The authors discuss other coding methods for use with the same models, and give results that show their scheme yielding virtually identical compression, and decoding more than forty times faster. One of the main features of their implementation is the complete absence of arithmetic coding; this, in part, is the reason for the high speed. The implementation is also particularly suited to slow devices such as CD-ROM, in that the answering of a query requires one disk access for each term in the query and one disk access for each answer. All words and numbers are indexed, and there are no stop words. They have built two compressed databases.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"197 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114537396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digitized images are known to be extremely space consuming. However, regularities in the images can often be exploited to reduce the necessary storage. Thus, many systems store images in compressed form. The authors propose that compression be used as a time-saving tool, in addition to its traditional role of saving space. They introduce a new pattern matching paradigm, compressed matching. A text array T and a pattern array P are given in compressed forms c(T) and c(P). They seek all appearances of P in T without decompressing T, achieving a search time that is sublinear in the size |T| of the uncompressed text. They show that for two-dimensional run-length compression there is an O(|c(T)| log|P| + |P|), i.e., almost optimal, algorithm. The algorithm uses a novel multidimensional pattern matching technique, two-dimensional periodicity analysis.
{"title":"Efficient two-dimensional compressed matching","authors":"A. Amir, Gary Benson","doi":"10.1109/DCC.1992.227453","DOIUrl":"https://doi.org/10.1109/DCC.1992.227453","url":null,"abstract":"Digitized images are known to be extremely space consuming. However, regularities in the images can often be exploited to reduce the necessary storage area. Thus, many systems store images in a compressed form. The authors propose that compression be used as a time saving tool, in addition to its traditional role of space saving. They introduce a new pattern matching paradigm, compressed matching. A text array T and pattern array P are given in compressed forms c(T) and c(P). They seek all appearances of P in T, without decompressing T. This achieves a search time that is sublinear in the size of the uncompressed text mod T mod . They show that for the two-dimensional run-length compression there is a O( mod c(T) mod log mod P mod + mod P mod ), or almost optimal algorithm. The algorithm uses a novel multidimensional pattern matching technique, two-dimensional periodicity analysis.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134484172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Presents a technique for building an index into a Huffman-coded file that permits efficient random access to the encoded data. The technique provides the ability to find the starting position of the jth symbol of the uncompressed file in an n-bit compressed file in O(log n) bit-examinations of the compressed file plus its index. Furthermore, the size of the index is o(n) bits. In other words, the ratio of the space occupied by the index to the space occupied by the data approaches zero as the length of the data file increases without bound.<>
{"title":"Random access in Huffman-coded files","authors":"G. Jacobson","doi":"10.1109/DCC.1992.227444","DOIUrl":"https://doi.org/10.1109/DCC.1992.227444","url":null,"abstract":"Presents a technique for building an index into a Huffman-coded file that permits efficient random access to the encoded data. The technique provides the ability to find the starting position of the jth symbol of the uncompressed file in an n-bit compressed file in O(log n) bit-examinations of the compressed file plus its index. Furthermore, the size of the index is o(n) bits. In other words, the ratio of the space occupied by the index to the space occupied by the data approaches zero as the length of the data file increases without bound.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"60 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134138691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The authors analyze the approximate performance of tree search and provide tight upper bounds on the amount of error resulting from tree search for a single input vector. These bounds are not encouraging, but fortunately the performance of tree-structured VQ in practice does not seem to be as bad. From the analysis, they derive a simple heuristic to improve the approximation of tree search. The strategy is to identify, for each code vector, some of its closest neighboring code vectors as determined by the partition. After a code vector is found for an input vector by tree search, the closest neighboring code vectors are then searched for the best match. Unfortunately, the average number of neighboring code vectors of a given code vector can be as large as the total number of code vectors. Thus, the performance improvement of the strategy depends on the number of code vectors that are searched. Experimental results show that searching a number of neighbors logarithmic in the size of the codebook provides significant performance gain while preserving the asymptotic search time complexity.
{"title":"Improving search for tree-structured vector quantization","authors":"Jianhua Lin, J. Storer","doi":"10.1109/DCC.1992.227447","DOIUrl":"https://doi.org/10.1109/DCC.1992.227447","url":null,"abstract":"The authors analyze the approximate performance of tree search and provide tight upper bounds on the amount of error resulting from tree search and for a single input vector. These bounds are not encouraging but fortunately, the performance of tree-structured VQ in practice does not seem to be as bad. From the analysis, they derive a simple heuristic to improve the approximation of tree search. The strategy is to identify for each code vector some of its closest neighboring code vectors determined by the partition. After a code vector is found for an input vector by tree search, the closest neighboring code vectors are then searched for the best match. Unfortunately, the average number of neighboring code vectors of a given code vector can be as many as the total number of code vectors. Thus, the performance improvement of the strategy depends on the number of code vectors that are searched. Experimental results show that a number logarithmic in the size of the codebook provides significant performance gain while preserving the asymptotic search time complexity.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129953191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A sequence of medical images is converted to a 4-dimensional volume of bits using Gray code. Then an interframe sequential and intraframe interlacing prediction scheme is used for reversible image sequence compression. Higher compression ratios than current intraframe compression methods are achieved due to interframe decorrelation.
{"title":"Lossless interframe compression of medical images","authors":"Xiaolin Wu, Yonggang Fang","doi":"10.1109/DCC.1992.227456","DOIUrl":"https://doi.org/10.1109/DCC.1992.227456","url":null,"abstract":"A sequence of medical images is converted to a 4-dimensional volume of bits using gray code. Then an interframe sequential and intraframe interlacing prediction scheme is used for reversible image sequence compression. Higher compression ratios than the current intraframe compression methods are achieved due to interframe decorrelation.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114233928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Text compression algorithms are normally defined in terms of a source alphabet Σ of 8-bit ASCII codes. The authors consider choosing Σ to be an alphabet whose symbols are the words of English or, in general, alternating maximal strings of alphanumeric characters and nonalphanumeric characters. The compression algorithm can then take advantage of longer-range correlations between words and thus achieve better compression. The large size of Σ leads to some implementation problems, but these are overcome to construct word-based LZW, word-based adaptive Huffman, and word-based context modelling compression algorithms.
{"title":"Constructing word-based text compression algorithms","authors":"R. Horspool, G. Cormack","doi":"10.1109/DCC.1992.227475","DOIUrl":"https://doi.org/10.1109/DCC.1992.227475","url":null,"abstract":"Text compression algorithms are normally defined in terms of a source alphabet Sigma of 8-bit ASCII codes. The authors consider choosing Sigma to be an alphabet whose symbols are the words of English or, in general, alternate maximal strings of alphanumeric characters and nonalphanumeric characters. The compression algorithm would be able to take advantage of longer-range correlations between words and thus achieve better compression. The large size of Sigma leads to some implementation problems, but these are overcome to construct word-based LZW, word-based adaptive Huffman, and word-based context modelling compression algorithms.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125225629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The paper describes image compression research at New Mexico State University. It explores the possibility of combining multi-resolution and harmonic analysis in signal decomposition. This recognizes that most images contain both local and global characteristics, and that an ideal compression system should contend with both types of features. A hybrid is proposed and discussed.
{"title":"Possible harmonic-wavelet hybrids in image compression","authors":"Michael Rollins, F. Carden","doi":"10.1109/DCC.1992.227462","DOIUrl":"https://doi.org/10.1109/DCC.1992.227462","url":null,"abstract":"The paper describes some image compression research at New Mexico State University. It explores the possibility of combining multi-resolution and harmonic analysis in signal decomposition. This is in recognition of the fact that both local and global characteristics are found in most images and that the ideal compression system should be able to contend with both types of features. A hybrid is proposed and discussed.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130885172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The authors present systolic array implementations of transpose coding, which uses an alternative self-organizing list strategy but otherwise is similar to move-to-front coding. They present implementations for fixed-length word lists which provide improved system bandwidth by accelerating transpose coding.
{"title":"Transpose coding on the systolic array","authors":"L. M. Stauffer, D. Hirschberg","doi":"10.1109/DCC.1992.227465","DOIUrl":"https://doi.org/10.1109/DCC.1992.227465","url":null,"abstract":"The authors present systolic array implementations of transpose coding, which uses an alternative self-organizing list strategy but otherwise is similar to move-to-front coding. They present implementations for fixed-length word lists which provide improved system bandwidth by accelerating transpose coding.<<ETX>>","PeriodicalId":170269,"journal":{"name":"Data Compression Conference, 1992.","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122896015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}