Context coding of parse trees

J. Tarhio
{"title":"解析树的上下文编码","authors":"J. Tarhio","doi":"10.1109/DCC.1995.515552","DOIUrl":null,"url":null,"abstract":"Summary form only given. General-purpose text compression works normally at the lexical level assuming that symbols to be encoded are independent or they depend on preceding symbols within a fixed distance. Traditionally such syntactical models have been focused on compression of source programs, but also other areas are feasible. The compression of a parse tree is an important and challenging part of syntactical modeling. A parse tree can be represented by a left parse which is a sequence of productions applied in preorder. A left parse can be encoded efficiently with arithmetic coding using counts of production alternatives of each nonterminal. We introduce a more refined method which reduces the size of a compressed tree. A blending scheme, PPM (prediction by partial matching) produces very good compression on text files. In PPM, adaptive models of several context lengths are maintained and they are blended during processing. The k preceding symbols of the symbol to be encoded form the context of order k. We apply the PPM technique to a left parse so that we use contexts of nodes instead of contexts consisting of preceding symbols in the sequence. We tested our approach with parse trees of Pascal programs. Our method gave on the average 20 percent better compression than the standard method based on counts of production alternatives of nonterminals. In our model, an item of the context is a pair (production, branch). The form of the item seems to be crucial. We tested three other variations for an item: production, nonterminal, and (nonterminal, branch), but all these three approaches produced clearly worse results.","PeriodicalId":107017,"journal":{"name":"Proceedings DCC '95 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1995-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Context coding of parse trees\",\"authors\":\"J. Tarhio\",\"doi\":\"10.1109/DCC.1995.515552\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Summary form only given. General-purpose text compression works normally at the lexical level assuming that symbols to be encoded are independent or they depend on preceding symbols within a fixed distance. Traditionally such syntactical models have been focused on compression of source programs, but also other areas are feasible. The compression of a parse tree is an important and challenging part of syntactical modeling. A parse tree can be represented by a left parse which is a sequence of productions applied in preorder. A left parse can be encoded efficiently with arithmetic coding using counts of production alternatives of each nonterminal. We introduce a more refined method which reduces the size of a compressed tree. A blending scheme, PPM (prediction by partial matching) produces very good compression on text files. In PPM, adaptive models of several context lengths are maintained and they are blended during processing. The k preceding symbols of the symbol to be encoded form the context of order k. We apply the PPM technique to a left parse so that we use contexts of nodes instead of contexts consisting of preceding symbols in the sequence. We tested our approach with parse trees of Pascal programs. 
Our method gave on the average 20 percent better compression than the standard method based on counts of production alternatives of nonterminals. In our model, an item of the context is a pair (production, branch). The form of the item seems to be crucial. We tested three other variations for an item: production, nonterminal, and (nonterminal, branch), but all these three approaches produced clearly worse results.\",\"PeriodicalId\":107017,\"journal\":{\"name\":\"Proceedings DCC '95 Data Compression Conference\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1995-03-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings DCC '95 Data Compression Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DCC.1995.515552\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings DCC '95 Data Compression Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DCC.1995.515552","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10

Abstract

Summary form only given. General-purpose text compression normally works at the lexical level, assuming that the symbols to be encoded are independent or depend only on preceding symbols within a fixed distance. Traditionally, such syntactical models have focused on the compression of source programs, but other application areas are feasible as well. The compression of a parse tree is an important and challenging part of syntactical modeling. A parse tree can be represented by a left parse, which is the sequence of productions applied in preorder. A left parse can be encoded efficiently with arithmetic coding using counts of the production alternatives of each nonterminal. We introduce a more refined method which reduces the size of a compressed tree. A blending scheme, PPM (prediction by partial matching), produces very good compression of text files. In PPM, adaptive models of several context lengths are maintained and blended during processing. The k preceding symbols of the symbol to be encoded form the context of order k. We apply the PPM technique to a left parse so that the contexts are formed from nodes of the tree rather than from the preceding symbols in the sequence. We tested our approach with parse trees of Pascal programs. Our method gave on average 20 percent better compression than the standard method based on counts of the production alternatives of nonterminals. In our model, an item of the context is a pair (production, branch). The form of the item seems to be crucial. We tested three other variations for an item: production, nonterminal, and (nonterminal, branch), but all three produced clearly worse results.
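
To make the modelling described above concrete, the following is a minimal sketch in Python, not the paper's implementation: the Node structure, the toy grammar, and the helper names are hypothetical. It shows how a parse tree can be linearized into a left parse (productions in preorder) and how each node can be paired with a context of (production, branch) items taken from its ancestors; the resulting frequency counts are the kind of statistics an adaptive arithmetic coder would be driven by. Blending counts across context orders, as in PPM, and the arithmetic coder itself are omitted.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Node:
    production: str                        # label of the production applied at this node
    children: list = field(default_factory=list)

def left_parse(root):
    """Linearize the tree into its left parse: productions in preorder."""
    out = []
    def walk(node):
        out.append(node.production)
        for child in node.children:
            walk(child)
    walk(root)
    return out

def node_contexts(root, order=2):
    """Pair every node's production with its context: up to `order`
    ancestor items, each item being (ancestor production, branch taken)."""
    pairs = []
    def walk(node, path):
        pairs.append((tuple(path[-order:]), node.production))
        for branch, child in enumerate(node.children):
            walk(child, path + [(node.production, branch)])
    walk(root, [])
    return pairs

if __name__ == "__main__":
    # Tiny artificial grammar:  S -> A B,  A -> a,  B -> b
    tree = Node("S->A B", [Node("A->a"), Node("B->b")])
    print(left_parse(tree))                # ['S->A B', 'A->a', 'B->b']
    freq = Counter(node_contexts(tree))    # (context, production) frequencies
    print(freq)
```

Under this sketch, swapping the (production, branch) item for production, nonterminal, or (nonterminal, branch) only changes what is recorded in the context path, which is how the variations compared in the abstract could be reproduced.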