
Proceedings. Data Compression Conference: Latest Publications

An improved method for lossless data compression
Pub Date : 2005-03-29 DOI: 10.1109/DCC.2005.14
Yuhua Bai, T. Cooklev
Summary form only given. This paper describes an improved lossless data compression scheme. The proposed scheme contains three innovations: first, an efficient algorithm is introduced to decide when and how to switch from transparent mode to compressed mode; second, a temporary buffer is introduced at the encoder; and third, an approach is put forward to decide when to discard the entire dictionary. According to the developed method, the changes are confined to the transmitter, and any V.42bis-compatible receiver can be used as a decoder. Therefore devices using V.42bis can use the proposed method after a firmware upgrade. To decide when to switch modes, we introduce two look-ahead buffers, B_C and B_T, one for each mode of operation of the encoder. Regardless of which mode the encoder is in, the output of both modes of operation is written to the corresponding look-ahead buffer. The simulation results demonstrate that the proposed method achieves higher compression ratios in most cases. Another goal of the work presented in this paper is to analyze the improvement obtained after the dictionary is reset and to determine a good time to discard the dictionary. Our results for different file types show that, consistently, the compression ratio before a dictionary reset is 0.87 and after the reset is 1.07, an increase of 22.6%. It is noted that V.44 is a newer compression standard, based on a different compression algorithm. While our results do not apply directly to V.44, they may be used after appropriate modifications.
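To make the dual look-ahead idea concrete, here is a minimal sketch assuming a simple size-comparison rule over a sliding window; the window length, switching margin, and the use of zlib as the stand-in compressor are our illustrative assumptions, not the authors' exact algorithm:

    import zlib

    WINDOW = 8      # blocks of look-ahead kept per mode (assumed)
    MARGIN = 0.95   # advantage required before switching (assumed)

    class ModeSwitcher:
        def __init__(self):
            self.mode = "transparent"
            self.b_c = []  # look-ahead buffer B_C: compressed-mode output sizes
            self.b_t = []  # look-ahead buffer B_T: transparent-mode output sizes

        def observe(self, block: bytes) -> None:
            # Regardless of the current mode, both modes' outputs are buffered.
            self.b_t.append(len(block))
            self.b_c.append(len(zlib.compress(block)))
            self.b_t = self.b_t[-WINDOW:]
            self.b_c = self.b_c[-WINDOW:]

        def decide(self) -> str:
            # Switch only when the other mode has a clear, sustained advantage.
            c, t = sum(self.b_c), sum(self.b_t)
            if self.mode == "transparent" and c < MARGIN * t:
                self.mode = "compressed"
            elif self.mode == "compressed" and t < MARGIN * c:
                self.mode = "transparent"
            return self.mode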
Citations: 1
Video coding for a time varying tandem channel with feedback
Pub Date : 2005-03-29 DOI: 10.1109/DCC.2005.95
Yushi Shen, P. Cosman, L. Milstein
Summary form only given. A robust scheme for the efficient transmission of packet video over a tandem wireless Internet channel is extended to a time varying scenario with a feedback channel. This channel is assumed to have bit errors (due to noise and fading on the wireless portion of the channel) and packet erasures (due to congestion on the wired portion). Simulation results showed that refined estimation can dramatically improve the performance for varying channel conditions, and that combined feedback of both channel conditions and ACK/NACK information can further improve system performance compared with the feedback of just one type of information.
Citations: 0
JPEG2000 compliant lossless coding of floating point data
Pub Date : 2005-03-29 DOI: 10.1109/DCC.2005.49
B. Usevitch
Summary form only given. Many scientific applications require that image data be stored in floating point format due to the large dynamic range of the data. These applications pose a problem if the data needs to be compressed, since modern image compression standards, such as JPEG2000, are only defined to operate on fixed point or integer data. This paper proposes straightforward extensions to the JPEG2000 image compression standard which allow for the efficient coding of floating point data. The extensions are based on the idea of representing floating point values as "extended integers", and they maintain desirable properties of JPEG2000, such as scalable embedded bit streams and rate-distortion optimality. Like JPEG2000, the proposed methods can be applied to both lossy and lossless compression. However, the discussion in this paper focuses on, and the test results are limited to, the lossless case. Test results show that one of the proposed lossless methods improves upon the compression ratio of standard methods such as gzip by an average of 16%.
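One plausible reading of the "extended integer" idea is the standard order-preserving bijection between IEEE-754 float bit patterns and unsigned integers, sketched below; the paper's actual JPEG2000 extension is more involved, so treat this purely as background:

    import struct

    def float32_to_ordered_uint32(x: float) -> int:
        # Reinterpret the float's bits, then flip all bits of negatives and only
        # the sign bit of non-negatives, so integer order matches float order.
        (u,) = struct.unpack("<I", struct.pack("<f", x))
        return (u ^ 0xFFFFFFFF) if (u & 0x80000000) else (u | 0x80000000)

    def ordered_uint32_to_float32(v: int) -> float:
        u = (v ^ 0x80000000) if (v & 0x80000000) else (v ^ 0xFFFFFFFF)
        (x,) = struct.unpack("<f", struct.pack("<I", u))
        return x

    vals = [-2.5, -0.0, 0.0, 1.0, 3.5]
    assert sorted(vals) == sorted(vals, key=float32_to_ordered_uint32)
    assert all(ordered_uint32_to_float32(float32_to_ordered_uint32(v)) == v
               for v in vals)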
Citations: 9
A compression-boosting transform for 2D data
Pub Date : 2005-03-29 DOI: 10.1109/DCC.2005.2
Qiaofeng Yang, S. Lonardi
In this paper, we present an invertible transform for 2D data whose objective is to reorder the matrix so as to improve its (lossless) compression at later stages. Given a binary matrix, the transform first searches for the largest uniform submatrix, that is, a submatrix composed solely of the same symbol (either 0 or 1) and induced by a subset of rows and columns (which are not necessarily contiguous). Then, the rows and the columns are reordered such that the uniform submatrix is moved to the upper-left corner of the matrix. The transform is recursively applied on the rest of the matrix. The recursion stops when the partition produces a matrix smaller than a predetermined threshold. The inverse transform (decompression) is fast and can be implemented in time linear in the size of the matrix. The effects of the transform on the compressibility of 2D data are studied empirically by comparing the performance of gzip and bzip2 before and after the application of the transform on several inputs. The preliminary results show that the transform boosts compression.
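As a toy illustration of the reordering step (the hard part, finding a large uniform submatrix, is what the paper's search procedure addresses), the following sketch permutes rows and columns so that a given uniform submatrix lands in the upper-left corner; the helper name and example matrix are ours:

    def reorder(matrix, rows, cols):
        # rows, cols: index sets of an already-found uniform submatrix.
        row_order = sorted(rows) + [i for i in range(len(matrix)) if i not in rows]
        col_order = sorted(cols) + [j for j in range(len(matrix[0])) if j not in cols]
        return [[matrix[i][j] for j in col_order] for i in row_order]

    M = [[1, 0, 1, 0],
         [0, 0, 0, 1],
         [1, 0, 1, 0]]
    # Rows {0, 2} x columns {0, 2} form an all-ones uniform submatrix.
    print(reorder(M, {0, 2}, {0, 2}))
    # -> [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 1]]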
Citations: 3
BWT based universal lossless source controlled channel decoding with low density parity check codes
Pub Date : 2005-03-29 DOI: 10.1109/DCC.2005.24
Li Wang, G. Shamir
Summary form only given. In many channel decoding applications, redundancy is left in the channel coded data. A new method for utilizing this redundancy in channel decoding is proposed. The method is based on the Burrows-Wheeler transform (BWT) and on universal compression techniques for piecewise stationary memoryless sources (PSMS), and is applied to regular low-density parity-check (LDPC) codes. Two settings are proposed. In the first, the BWT-PSMS loop is in the decoder, while in the second, the rearrangement of the data is performed with the BWT before channel encoding, and then the decoder is designed for extracting statistics in a PSMS. After the last iteration, the data is reassembled with the inverse BWT. Simulations show that the bit error rate performance of the new method (in either setting) is almost as good as genie-aided decoding with perfect knowledge of the statistics.
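For readers unfamiliar with the transform, a naive forward/inverse BWT sketch follows (quadratic rotation sort with an explicit end marker, ours for illustration); it shows only the rearrangement step that precedes the piecewise-stationary modeling, not the LDPC decoding itself:

    def bwt(s: str, end: str = "\0") -> str:
        # Sort all rotations of s + end and keep each rotation's last column.
        s += end
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return "".join(r[-1] for r in rotations)

    def ibwt(t: str, end: str = "\0") -> str:
        # Repeatedly prepend the transformed column and re-sort; after
        # len(t) rounds the rows are the sorted rotations of the original.
        rows = [""] * len(t)
        for _ in range(len(t)):
            rows = sorted(t[i] + rows[i] for i in range(len(t)))
        return next(r for r in rows if r.endswith(end))[:-1]

    assert ibwt(bwt("banana")) == "banana"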
Citations: 1
Real, tight frames with maximal robustness to erasures
Pub Date : 2005-03-29 DOI: 10.1109/DCC.2005.77
Markus Püschel, J. Kovacevic
Motivated by the use of frames for robust transmission over the Internet, we present a first systematic construction of real tight frames with maximum robustness to erasures. We approach the problem in steps: we first construct maximally robust frames by using polynomial transforms. We then add tightness as an additional property with the help of orthogonal polynomials. Finally, we impose the last requirement of equal norm and construct, to our best knowledge, the first real, tight, equal-norm frames maximally robust to erasures.
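For reference, here are the standard frame-theory definitions being combined (our summary and notation, not quoted from the paper). A family {f_1, ..., f_n} in R^d is a tight frame with frame constant A if

    \sum_{i=1}^{n} \langle x, f_i \rangle^2 = A \,\lVert x \rVert^2 \quad \text{for all } x \in \mathbb{R}^d,

and it is maximally robust to erasures if every d of the vectors f_i are linearly independent: then any n − d erasures still leave a spanning set, so perfect reconstruction remains possible.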
Citations: 90
Parallelization of VQ codebook generation by two algorithms: parallel LBG and aggressive PNN [image compression applications]
Pub Date : 2005-03-29 DOI: 10.1109/DCC.2005.69
A. Wakatani
Summary form only given. We evaluate two parallel algorithms for the codebook generation of VQ compression: parallel LBG and aggressive PNN. Parallel LBG is based on the LBG algorithm with the K-means method. Its cost mainly consists of: a) a computation part; b) a communication part; and c) an update part. Aggressive PNN is a parallelized version of the PNN (pairwise nearest neighbor) algorithm, whose cost mainly consists of: a) a computation part; b) a communication part; and c) a merge part. We measured the speedups and elapsed times of both algorithms on a PC cluster system. When the quality of images compressed by both algorithms is the same, the number of training vectors required by the aggressive PNN is much smaller than that required by the parallel LBG, and the aggressive PNN is superior in terms of elapsed time.
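As context, one LBG (k-means) iteration looks like the following single-node sketch; in parallel LBG each node would run the assignment step on its share of the training vectors, and the per-cell sums and counts would be exchanged in the communication step (marked only as a comment here, since the paper's cluster setup is not reproduced):

    import numpy as np

    def lbg_iteration(training: np.ndarray, codebook: np.ndarray) -> np.ndarray:
        # a) computation: assign each training vector to its nearest codeword
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = d.argmin(axis=1)
        # b) communication: in parallel LBG, all-reduce the per-cell vector
        #    sums and counts across nodes here (e.g. with MPI).
        # c) update: move each codeword to the centroid of its cell
        updated = codebook.copy()
        for k in range(len(codebook)):
            members = training[nearest == k]
            if len(members):
                updated[k] = members.mean(axis=0)
        return updated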
Citations: 1
QLFC - a compression algorithm using the Burrows-Wheeler transform
Pub Date : 2005-03-29 DOI: 10.1109/DCC.2005.75
F. Ghido
Summary form only given. In this paper, we propose a novel approach for the second step of the Burrows-Wheeler compression algorithm, based on the idea that the probabilities of events are not continuous valued, but are rather quantized with respect to a specific class of base functions. The first pass of encoding transforms the input sequence x into the sequence x̃. The second pass models and codes x̃ using entropy coding. The entropy decoding, modeling, and context updating for decoding x̃ are the same as the ones used for encoding. We have proved that the quantized local frequency transform is optimal in the case of binary and ternary alphabet memoryless sources, showing that x and x̃ have the same entropy; for larger alphabets, we verified this by simulation.
Citations: 0
Generalizing the Kraft-McMillan inequality to restricted languages
Pub Date : 2005-03-29 DOI: 10.1109/DCC.2005.42
M. Golin, Hyeon-Suk Na
Let ℓ_1, ℓ_2, ..., ℓ_n be a (possibly infinite) sequence of nonnegative integers and Σ some D-ary alphabet. The Kraft inequality states that ℓ_1, ℓ_2, ..., ℓ_n are the lengths of the words in some prefix (free) code over Σ if and only if ∑_{i=1}^{n} D^{-ℓ_i} ≤ 1. Furthermore, the code is exhaustive if and only if equality holds. The McMillan inequality states that if ℓ_1, ℓ_2, ..., ℓ_n are the lengths of the words in some uniquely decipherable code, then the same condition holds. In this paper we examine how the Kraft-McMillan inequality conditions for the existence of a prefix or uniquely decipherable code change when the code is not only required to be prefix but all of the codewords are restricted to belong to a given specific language L. For example, L might be all words that end in a particular pattern or, if Σ is binary, might be all words in which the number of zeros equals the number of ones.
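A quick numeric check of the binary case (D = 2), with our own toy lengths: the lengths (1, 2, 3, 3) give a Kraft sum of exactly 1, so an exhaustive prefix code such as {0, 10, 110, 111} exists, while (1, 1, 2) exceeds 1 and admits no uniquely decipherable code:

    def kraft_sum(lengths, D=2):
        # Sum of D^(-l) over the proposed codeword lengths.
        return sum(D ** -l for l in lengths)

    assert kraft_sum([1, 2, 3, 3]) == 1.0   # exhaustive prefix code exists
    assert kraft_sum([1, 1, 2]) > 1         # no uniquely decipherable code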
Citations: 1
TetStreamer: compressed back-to-front transmission of Delaunay tetrahedra meshes
Pub Date : 2005-03-29 DOI: 10.1109/DCC.2005.85
Urs Bischoff, J. Rossignac
We use the abbreviations tet and tri for tetrahedron and triangle. TetStreamer encodes a Delaunay tet mesh in a back-to-front visibility order and streams it from a server to a client (volumetric visualizer). During compression, the server performs the view-dependent back-to-front sorting of the tets by identifying and deactivating one free tet at a time. A tet is free when all its back faces are on the sheet. The sheet is a tri mesh separating active and inactive tets. It is initialized with the back-facing boundary of the mesh. It is compressed using EdgeBreaker and transmitted first. It is maintained by both the server and the client and advanced towards the viewer, passing one free tet at a time. The client receives a compressed bit stream indicating where to attach free tets to the sheet. It renders each free tet and updates the sheet by either flipping a concave edge, removing a concave valence-3 vertex, or inserting a new vertex to split a tri. TetStreamer compresses the connectivity of the whole tet mesh to an average of about 1.7 bits per tet. The footprint (in-core memory required by the client) needs only to hold the evolving sheet, which is a small fraction of the storage that would be required by the entire tet mesh. Hence, TetStreamer permits us to receive, decompress, and visualize or process very large meshes on clients with a small in-core memory. Furthermore, it permits us to use volumetric visualization techniques, which require that the mesh be processed in view-dependent back-to-front order, at no extra memory, performance or transmission cost.
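The sorting loop can be pictured with the following schematic, which is our simplification: faces are opaque identifiers, and the per-tet back/front classification for the given viewpoint is assumed to be precomputed. It is a naive quadratic search, not the paper's data structure:

    def back_to_front_order(tets, boundary_back_faces, back_faces, front_faces):
        # back_faces / front_faces: dicts mapping a tet id to its sets of
        # back-facing / front-facing face ids for the current viewpoint.
        # sheet: the tri mesh separating inactive and active tets,
        # represented here as a set of face ids.
        sheet = set(boundary_back_faces)
        order, active = [], set(tets)
        while active:
            # A tet is free when all of its back faces lie on the sheet.
            free = next(t for t in active if back_faces[t] <= sheet)
            # Advance the sheet past this tet, towards the viewer.
            sheet = (sheet - back_faces[free]) | front_faces[free]
            order.append(free)
            active.remove(free)
        return order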
Citations: 14