Divergence and the construction of variable-to-variable-length lossless codes by source-word extensions
G. H. Freeman. DOI: 10.1109/DCC.1993.253142

Such codes are described using dual leaf-linked trees: one specifying the parsing of the source symbols into source words, and the other specifying the formation of code words from code symbols. Compression exceeds entropy by the informational divergence between source words and code words, divided by the expected source-word length. The asymptotic optimality of Tunstall or Huffman codes derives from bounding the divergence while the expected source-word length is made arbitrarily large. A heuristic extension scheme is asymptotically optimal and also acts to reduce the divergence by retaining those source words that are well matched to their corresponding code words.
Filtering random noise via data compression
B. Natarajan. DOI: 10.1109/DCC.1993.253144

A general technique is suggested for removing random noise from signals using data compression in conjunction with the principle of Occam's Razor. Not only are classical spectral filters realisable as particular instances of the technique, but more powerful nonlinear filters also fall within its scope.
Adaptive channel optimization of vector quantized data
A. Hung, T. Meng. DOI: 10.1109/DCC.1993.253121

This paper describes two methods to improve the robustness of transmitting vector quantized data over wireless communication channels: adaptively changing the quantizer codebook at the decoder, and optimizing codebook design for bursty errors.
Multialphabet arithmetic coding at 16 MBytes/sec
H. Printz, P. Stubley. DOI: 10.1109/DCC.1993.253137

The design and performance of a nonadaptive hardware system for data compression by arithmetic coding are presented. The alphabet of the data source is the full 256-symbol ASCII character set, plus a non-ASCII end-of-file symbol. The key ideas are the non-arithmetic representation of the current interval width, which yields improved coding efficiency in the interval width update, and the design of a circuit for the code point update, which operates at a high speed independent of the length of the code point register. On a reconfigurable coprocessor, constructed from commercially available field-programmable gate arrays and static RAM, the implementation compresses its input stream at better than 16 MBytes/sec.
Fast and efficient lossless image compression
P. Howard, J. Vitter. DOI: 10.1109/DCC.1993.253114

A new method, FELICS, gives compression comparable to the JPEG lossless mode at about five times the speed. FELICS is based on a novel use of two neighboring pixels for both prediction and error modeling. For coding, the authors use single bits, adjusted binary codes, and Golomb or Rice codes. For the latter they present and analyze a provably good method for estimating the single coding parameter.
Design and analysis of fast text compression based on quasi-arithmetic coding
P. Howard, J. Vitter. DOI: 10.1109/DCC.1993.253140

A detailed algorithm for fast text compression, related to the PPM method, simplifies the modeling phase by eliminating the escape mechanism, and speeds up coding by using a combination of quasi-arithmetic coding and Rice coding. The authors provide details of the use of quasi-arithmetic code tables, and analyze their compression performance. The Fast PPM method is shown experimentally to be almost twice as fast as the PPMC method while giving comparable compression.
Tree compacting transformations
Gary Promhouse. DOI: 10.1109/DCC.1993.253117

This paper concerns one aspect of the construction of compact representations R(T) for trees T which are instances of an abstract tree data type. The ADT supports operations, on certain trees, built around a top-down traversal primitive, and provides the interface between the second and third stages of a general semantic compression system. The mechanisms used ensure that the time taken to perform any such operation using R(T) is linear in the time taken to perform it directly on T. They fall into two categories: invertible transformations on T that produce an equivalent tree with fewer elements (data or child specifications) than the input form, and the exploitation of statistical variations in the occurrence of data elements to reduce the average space required to represent the remaining components.
Optimum DCT quantization
D. Monro, B. Sherlock. DOI: 10.1109/DCC.1993.253131

The paper offers a solution to the problem of determining good quantization tables for use with the discrete cosine transform. Using the methods proposed, the designer of a system can choose a selection of test images and a coefficient weighting scenario, from which a quantization table can be produced, optimized for the choices made. The method is based on simulated annealing, which searches the space of quantization tables to minimize some chosen measure.
A mean-removed variation of weighted universal vector quantization for image coding
Barry D. Andrews, P. Chou, M. Effros, R. Gray. DOI: 10.1109/DCC.1993.253119

Weighted universal vector quantization uses traditional codeword design techniques to design locally optimal multi-codebook systems. Application of this technique to a sequence of medical images produces a 10.3 dB improvement over standard full-search vector quantization followed by entropy coding, at the cost of increased complexity. In the proposed variation, each codebook in the system is given a mean or 'prediction' value, which is subtracted from all supervectors that map to the given codebook. The chosen codebook's codewords are then used to encode the resulting residuals. Applying the mean-removed system to the medical data set achieves up to a 0.5 dB improvement at no rate expense.
Coding theory and regularization
J. Connor, L. Atlas. DOI: 10.1109/DCC.1993.253134

This paper uses two principles, the robust encoding of residuals and the efficient coding of parameters, to obtain a new learning rule for neural networks. In particular, it examines how different coding techniques give rise to different learning rules. The storage space requirements of parameters and residuals are considered. A 'group regularizer' is derived from encoding the parameters as a whole group rather than individually.