
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096): Latest Publications

Move-to-front and permutation based inversion coding
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785672
Z. Arnavut
[Summary form only given]. Introduced by Bentley et al. (1986), move-to-front (MTF) coding is an adaptive, self-organizing list (permutation) technique. Motivated by the MTF coder's use of small permutations restricted to the data source's alphabet size, we investigate compressing data files with canonical sorting permutations followed by permutation-based inversion coding (PBIC) over the set {0, ..., n-1}, where n is the size of the data source. The proposed technique yields better compression gain than the MTF coder and improves the compression gain of block-sorting techniques.
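The MTF baseline that the paper improves on is easy to state concretely. A minimal sketch over the byte alphabet (plain MTF only; the paper's canonical sorting permutations and PBIC are not shown):

```python
def mtf_encode(data: bytes) -> list[int]:
    """Move-to-front: emit each symbol's current list index, then move
    that symbol to the front of the self-organizing list."""
    table = list(range(256))            # list over the byte alphabet
    ranks = []
    for b in data:
        i = table.index(b)              # rank of the symbol right now
        ranks.append(i)
        table.insert(0, table.pop(i))   # move-to-front update
    return ranks

def mtf_decode(ranks: list[int]) -> bytes:
    table = list(range(256))
    out = bytearray()
    for i in ranks:
        b = table.pop(i)
        out.append(b)
        table.insert(0, b)
    return bytes(out)
```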
Citations: 0
Low complexity high-order context modeling of embedded wavelet bit streams
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755660
Xiaolin Wu
In the past three or so years, particularly during the JPEG 2000 standardization process launched last year, statistical context modeling of embedded wavelet bit streams has received much attention from the image compression community. High-order context modeling has proven indispensable for the high rate-distortion performance of wavelet image coders. However, without care in algorithm design and implementation, forming high-order modeling contexts can be greedy in both CPU time and memory, creating a computational bottleneck for wavelet coding systems. In this paper we focus on the operational aspects of high-order statistical context modeling and introduce fast algorithmic techniques that drastically reduce both the time and space complexity of high-order context modeling in the wavelet domain.
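The abstract does not spell out how the contexts are formed; as a generic illustration of high-order context formation in the wavelet domain (the neighbourhood choice here is an assumption, not the paper's design), a context index can be built from the significance flags of neighbouring coefficients:

```python
import numpy as np

def context_index(sig: np.ndarray, y: int, x: int) -> int:
    """Build an 8-bit modeling context from the significance flags of
    the eight spatial neighbours of coefficient (y, x); out-of-range
    neighbours count as insignificant."""
    h, w = sig.shape
    bits, k = 0, 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and sig[ny, nx]:
                bits |= 1 << k
            k += 1
    return bits                         # one of 2**8 = 256 contexts
```

Each context would maintain its own adaptive symbol counts for the entropy coder; the paper's concern is making such modeling cheap in time and space, e.g. via table lookups and incremental updates.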
Citations: 10
Three-dimensional wavelet coding of video with global motion compensation
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755690
Albert Wang, Zixiang Xiong, P. Chou, S. Mehrotra
Three-dimensional (2D+T) wavelet coding of video using SPIHT has been shown to outperform standard predictive video coders on complex high-motion sequences, and is competitive with them on simple low-motion sequences. However, on a number of typical moderate-motion sequences characterized by largely rigid motion, 3D SPIHT performs several dB worse than motion-compensated predictive coders, because it does not take advantage of the real physical motion underlying the scene. We introduce global motion compensation for 3D subband video coders, and find gains of 0.5 to 2 dB on sequences with dominant background motion. Our approach is a hybrid of sprite- (or mosaic-) based video coding and subband coding.
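The abstract does not specify the motion model; a minimal sketch, assuming a single global affine model and using scipy for the warp, of how a reference frame could be aligned before the temporal transform:

```python
import numpy as np
from scipy.ndimage import affine_transform

def global_motion_compensate(ref: np.ndarray, A: np.ndarray,
                             t: np.ndarray) -> np.ndarray:
    """Warp the reference frame by one global affine model so that
    largely rigid background motion is cancelled before the temporal
    wavelet transform is applied along the aligned frames."""
    # affine_transform maps each output coordinate through (A, t) to
    # sample the input; order=1 selects bilinear interpolation.
    return affine_transform(ref, matrix=A, offset=t, order=1, mode='nearest')

# e.g. a pure translation by (dy, dx):
# comp = global_motion_compensate(prev_frame, np.eye(2), np.array([dy, dx]))
```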
Citations: 63
Edge-adaptive prediction for lossless image coding
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755698
Wee Sun Lee
We design an edge-adaptive predictor for lossless image coding. The predictor adaptively weights a four-directional predictor together with an adaptive linear predictor, based on information from neighbouring pixels. Although conceptually simple, the resulting coder performs comparably to state-of-the-art image coders when a simple context-based coder is used to encode the prediction errors.
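The paper's own four-directional weighted predictor is not detailed in the abstract; as a familiar instance of edge-adaptive prediction (not the paper's method), the median edge detector (MED) used in JPEG-LS switches between horizontal and vertical neighbours when it detects an edge:

```python
def med_predict(w: int, n: int, nw: int) -> int:
    """Median edge detector (MED) from JPEG-LS: given the west, north
    and north-west neighbours, predict across a detected edge,
    otherwise use the planar (plane-fitting) prediction."""
    if nw >= max(w, n):
        return min(w, n)    # edge: take the smaller neighbour
    if nw <= min(w, n):
        return max(w, n)    # edge the other way
    return w + n - nw       # smooth region: planar prediction
```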
Citations: 27
Universal lossless source coding with the Burrows Wheeler transform
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755667
M. Effros, Karthik Venkat Ramanan, S. R. Kulkarni, S. Verdú
We consider a theoretical evaluation of data compression algorithms based on the Burrows-Wheeler transform (BWT). The main contributions include a variety of very simple new techniques for BWT-based universal lossless source coding on finite-memory sources, and a set of new rate-of-convergence results for BWT-based source codes. The result is a theoretical validation and quantification of the earlier experimental observation that BWT-based lossless source codes perform better than Ziv-Lempel-style codes and almost as well as prediction by partial matching (PPM) algorithms.
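For reference, the transform itself can be sketched naively (a production coder would build it from a suffix array instead):

```python
def bwt(s: bytes) -> bytes:
    """Naive Burrows-Wheeler transform: append a unique sentinel, sort
    all rotations, output the last column. O(n^2 log n) as written."""
    assert 0 not in s                     # byte 0 serves as the sentinel
    t = s + b'\x00'
    rotations = sorted(t[i:] + t[:i] for i in range(len(t)))
    return bytes(r[-1] for r in rotations)

# bwt(b'banana') == b'annb\x00aa'
```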
Citations: 134
Text mining: a new frontier for lossless compression
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755669
I. Witten, Zane Bray, M. Mahoui, W. Teahan
Data mining, a burgeoning new technology, is about looking for patterns in data. Likewise, text mining is about looking for patterns in text. Text mining is possible because you do not have to understand text in order to extract useful information from it. Here are four examples. First, if only names could be identified, links could be inserted automatically to other places that mention the same name, links that are "dynamically evaluated" by calling upon a search engine to bind them at click time. Second, actions can be associated with different types of data, using either explicit programming or programming-by-demonstration techniques. A day/time specification appearing anywhere within one's E-mail could be associated with diary actions such as updating a personal organizer or creating an automatic reminder, and each mention of a day/time in the text could raise a popup menu of calendar-based actions. Third, text could be mined for data in tabular format, allowing databases to be created from formatted tables such as stock-market information on Web pages. Fourth, an agent could monitor incoming newswire stories for company names and collect documents that mention them, an automated press clipping service. This paper aims to promote text compression as a key technology for text mining.
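As a toy version of the day/time example above (the pattern is illustrative only; a real extractor would tolerate far more phrasings, dates, and locales):

```python
import re

# Toy pattern for mentions like "Friday at 12:30pm".
DAYTIME = re.compile(
    r'\b(?:Mon|Tues|Wednes|Thurs|Fri|Satur|Sun)day\b'
    r'(?:\s+at)?\s+\d{1,2}(?::\d{2})?\s*(?:am|pm)',
    re.IGNORECASE)

def find_daytimes(text: str) -> list[str]:
    """Return day/time mentions that a mail client could map to
    calendar actions (reminders, organizer updates)."""
    return [m.group(0) for m in DAYTIME.finditer(text)]

# find_daytimes("Lunch Friday at 12:30pm?") -> ['Friday at 12:30pm']
```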
Citations: 85
Context quantization with Fisher discriminant for adaptive embedded wavelet image coding
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755659
Xiaolin Wu
Recent progress in context modeling and adaptive entropy coding of wavelet coefficients has probably been the most important catalyst for the rapidly maturing field of wavelet image compression. In this paper we identify statistical context modeling of wavelet coefficients as the determining factor in the rate-distortion performance of wavelet codecs. We propose a new context quantization algorithm for minimizing conditional entropy. The algorithm is a dynamic programming process guided by Fisher's linear discriminant. It facilitates high-order context modeling and adaptive entropy coding of embedded wavelet bit streams, and leads to superb compression performance in both the lossy and lossless cases.
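A minimal sketch of the Fisher discriminant ingredient (the dynamic-programming partition that minimizes conditional entropy is not shown, and the feature vectors are assumptions):

```python
import numpy as np

def fisher_direction(X0: np.ndarray, X1: np.ndarray) -> np.ndarray:
    """Fisher's linear discriminant direction w = Sw^-1 (m1 - m0) for
    two classes of context feature vectors, e.g. contexts whose
    coefficient turned out significant (X1) vs. not (X0). Assumes the
    within-class scatter Sw is nonsingular."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    return np.linalg.solve(Sw, m1 - m0)

# Contexts are then ordered by their projection X @ w and the ordered
# line is partitioned into a few bins; the paper chooses the partition
# by dynamic programming to minimize conditional entropy.
```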
Citations: 43
Modified SPIHT encoding for SAR image data
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785719
Z. Zeng, I. Cumming
Summary form only given. We developed a wavelet-based SAR image compression algorithm which combines tree-structured texture analysis, soft-thresholding speckle reduction, quadtree homogeneous decomposition, and a modified zero-tree coding scheme. First, the tree-structured wavelet transform is applied to the SAR image. The decomposition is no longer applied recursively only to the low-scale subsignals but to the output of any filter. The criterion for decomposition is the energy of the image: if the energy of a subimage is significantly smaller than the others, we stop decomposing that region, since it contains less information. Texture factors, which represent the amount of texture information, are created after this step. Second, quadtree decomposition is used to split the lowest-scale component into two sets, a homogeneous set and a target set. The homogeneous set consists of relatively homogeneous regions; the target set consists of non-homogeneous regions that have been further decomposed into single-component regions. A conventional soft threshold is applied to reduce speckle noise on all wavelet coefficients except those of the lowest scale, with the feature factor used to set the threshold. Finally, the conventional SPIHT method is modified based on the results of the tree-structured and quadtree decompositions. In the encoder, the amount of speckle reduction is chosen according to the user's requirements, and different coding schemes are applied to the homogeneous set and the target set. The skewed distribution of the residuals makes arithmetic coding the best choice for lossless compression.
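The soft-thresholding step mentioned above follows the standard shrinkage rule; a minimal sketch (how the threshold t is derived from the feature factor is not specified in the summary):

```python
import numpy as np

def soft_threshold(coeffs: np.ndarray, t: float) -> np.ndarray:
    """Soft thresholding for speckle reduction: shrink every wavelet
    coefficient toward zero by t, zeroing those with magnitude < t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```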
Citations: 2
Lossless JBIG2 coding performance
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785710
D. Tompkins, F. Kossentini
Summary form only given. The Joint Bi-level Image Experts Group (JBIG), an international study group affiliated with ISO/IEC and ITU-T, has recently completed a committee draft of the JBIG2 standard for lossy and lossless bi-level image compression. We study design considerations for a purely lossless encoder. First, we outline the JBIG2 bitstream, focusing on the options and parameters available to an encoder. Then we present numerous lossless encoder design strategies, including lossy-to-lossless coding approaches. For each strategy, we determine the compression performance and the execution times for both encoding and decoding. The strategy that achieved the highest compression performance in our experiments used a double-dictionary approach with a residue cleanup. In this strategy, small and unique symbols were coded as a generic region residue. Only repeated symbols, or those used as a basis for soft matches, were added to a dictionary, with the remaining symbols embedded as refinements in the symbol region segment. The second dictionary was encoded as a refinement-aggregate dictionary, where dictionary symbols were encoded as refinements of symbols from the first dictionary or of previous entries in the second dictionary. With all other bitstream parameters optimized, this strategy easily achieves an additional 30% compression over simpler symbol dictionary approaches. Next, we continue the experiment with an evaluation of each of the bitstream options and configuration parameters and their impact on complexity and compression, and we demonstrate the consequences of choosing incorrect parameters. We conclude with a summary of our compression results and general recommendations for encoder designers.
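A toy sketch of the symbol-dictionary idea underlying these strategies (exact matching only; JBIG2's soft matches, refinement coding, and the paper's repeated-symbols-only rule are all omitted):

```python
def build_symbol_dictionary(symbols: list[bytes]):
    """Toy symbol-dictionary coding: each distinct bitmap enters the
    dictionary once and every occurrence is coded as an index."""
    dictionary: list[bytes] = []
    index_of: dict[bytes, int] = {}
    stream: list[int] = []              # one dictionary index per occurrence
    for bitmap in symbols:              # bitmap: a serialized bi-level glyph
        if bitmap not in index_of:
            index_of[bitmap] = len(dictionary)
            dictionary.append(bitmap)
        stream.append(index_of[bitmap])
    return dictionary, stream
```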
Citations: 3
Data compression using long common strings
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755678
J. Bentley
We describe a precompression algorithm that effectively represents any long common strings that appear in a file. The algorithm interacts well with standard compression algorithms, which represent shorter strings that occur nearby in the input text. Our experiments show that some real data sets do indeed contain many long common strings. We extend the fingerprint mechanism of our algorithm into a program that identifies long common strings in an input file. This program gives interesting insights into the structure of real data files that contain long common strings.
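A minimal sketch of the fingerprinting idea, assuming fixed-length blocks (a rolling Rabin-Karp-style fingerprint would be used in practice; Python's built-in hash stands in here):

```python
def long_common_blocks(data: bytes, b: int = 64) -> dict[int, list[int]]:
    """Fingerprint every b-th length-b block and record where each
    fingerprint occurs; positions sharing a fingerprint are candidate
    long repeats, to be verified and extended by direct comparison."""
    seen: dict[int, list[int]] = {}
    for pos in range(0, len(data) - b + 1, b):
        fp = hash(data[pos:pos + b])    # stand-in for a rolling hash
        seen.setdefault(fp, []).append(pos)
    return {fp: ps for fp, ps in seen.items() if len(ps) > 1}
```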
Citations: 72