
Latest publications from Proceedings DCC '97. Data Compression Conference

High performance arithmetic coding for small alphabets
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582149
Xiaohui Xue, Wen Gao
Summary form only given. Generally, there are two main obstacles to applying arithmetic coding. One is the relatively heavy computational burden of the coding part, since at least two multiplications are needed for each symbol. The other is that a highly efficient statistical model is hard to implement. We observe that in some important settings the number of distinct symbols in the data stream is small. We design both the coding part and the modeling part specifically for this case, obtaining a high-performance arithmetic coder for small alphabets. Our method builds on an improved arithmetic coding algorithm, which we further refine to be multiplication-free.
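The abstract leaves the multiplication-free construction unstated, so the sketch below illustrates the best-known way to achieve it, a QM-coder-style approximation for a binary alphabet: once the range is kept renormalised near 1, the sub-interval of the less probable symbol (LPS) can be taken to be its scaled probability estimate directly, turning the per-symbol multiplications into an add and a compare. The precision, the fixed `qe` estimate, and the discarded carries are all simplifying assumptions, not the authors' design.

```python
# Minimal sketch of a multiplication-free binary arithmetic-coding update
# (QM-coder-style approximation; not the coder proposed in the paper).

PRECISION = 16
ONE = 1 << PRECISION          # fixed-point value representing probability 1.0
HALF = ONE >> 1               # renormalisation threshold

def encode_bit(low, rng, bit, qe, out_bits):
    """One interval update: bit 0 = more probable symbol (MPS), 1 = LPS.

    qe is the scaled LPS probability estimate (0 < qe < HALF). Because rng
    is kept in [HALF, ONE), rng * qe / ONE is approximated by qe itself,
    so no multiplication is needed.
    """
    if bit == 0:                                  # MPS: keep the lower slice
        rng -= qe
    else:                                         # LPS: take the top slice
        low = (low + rng - qe) & (ONE - 1)        # carry discarded (sketch!)
        rng = qe
    while rng < HALF:                             # renormalise, emit one bit
        out_bits.append(low >> (PRECISION - 1))
        low = (low << 1) & (ONE - 1)
        rng <<= 1
    return low, rng

bits = []
low, rng = 0, ONE
for b in [0, 0, 1, 0, 0, 1, 0, 0]:                # toy message, P(LPS) ~ 0.2
    low, rng = encode_bit(low, rng, b, int(0.2 * ONE), bits)
print(bits)
```

A real coder must also propagate carries into already-emitted bits and adapt `qe` from symbol statistics; both are omitted here for brevity.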
Citations: 3
Content-adaptive postfiltering for very low bit rate video
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.581986
A. Jacquin, H. Okada, P. E. Crouch
We propose a postfiltering algorithm which adapts to global image quality as well as (optionally) to semantic image content extracted from the video sequence. This approach contrasts with traditional postfiltering techniques, which attempt to remove coding artifacts based on local signal characteristics only. Our postfilter is ideally suited to head-and-shoulders video coded at very low bit rates (less than 25.6 kbps), where coding artifacts are fairly strong and difficult to distinguish from fine image detail. Results compare head-and-shoulders sequences encoded at 16 kbps with an H.263-based codec against the same images postfiltered with the proposed content-adaptive postfilter. The postfilter removes most of the mosquito artifacts introduced by the low-bit-rate coder while preserving a good rendition of facial detail.
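As a rough illustration of the contrast drawn above between local and content-adaptive filtering, the sketch below implements only the generic local baseline: a Wiener-style filter that smooths hard where the decoded image is locally flat (where mosquito noise is most visible) and leaves busy regions nearly untouched. The 3x3 window and the `noise_var` estimate are assumptions; the proposed postfilter additionally uses global quality and semantic content, which this sketch does not model.

```python
import numpy as np

def adaptive_postfilter(img, noise_var=25.0):
    """Wiener-style local adaptation; img is a 2-D float array of decoded luma."""
    pad = np.pad(img, 1, mode="edge")
    # gather the 9 shifted copies that make up each pixel's 3x3 neighbourhood
    win = np.stack([pad[r:r + img.shape[0], c:c + img.shape[1]]
                    for r in range(3) for c in range(3)])
    mean, var = win.mean(axis=0), win.var(axis=0)
    # flat areas (var ~ noise_var) pull toward the local mean; detailed areas
    # (var >> noise_var) keep the original pixel almost unchanged
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-9)
    return mean + gain * (img - mean)

decoded = np.random.default_rng(0).normal(128.0, 20.0, (32, 32))
filtered = adaptive_postfilter(decoded)
print(float(decoded.var()), float(filtered.var()))  # variance drops after filtering
```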
Citations: 14
Text compression via alphabet re-representation
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582003
Philip M. Long, A. Natsev, J. Vitter
We consider re-representing the alphabet so that a representation of a character reflects its properties as a predictor of future text. This enables us to use an estimator from a restricted class to map contexts to predictions of upcoming characters. We describe an algorithm that uses this idea in conjunction with neural networks. The performance of this implementation is compared to other compression methods, such as UNIX compress, gzip, PPMC, and an alternative neural network approach.
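As a toy illustration of what a predictive re-representation might look like (this particular construction is an assumption for illustration; the paper learns the representation together with a neural network), each character below is described by the empirical distribution of the character that follows it, so characters that predict similar continuations end up with similar vectors:

```python
from collections import Counter, defaultdict

text = "abracadabra abracadabra"
alphabet = sorted(set(text))

# count, for each character, which character follows it
follow = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follow[a][b] += 1

def represent(ch):
    """Re-represent ch as its next-character distribution over the alphabet."""
    total = sum(follow[ch].values()) or 1
    return [follow[ch][c] / total for c in alphabet]

for ch in alphabet:
    print(repr(ch), [round(p, 2) for p in represent(ch)])
```

A restricted estimator (in the paper, a neural network) can then map a context's vectors to a prediction for the upcoming character.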
Citations: 10
Efficient context-based entropy coding for lossy wavelet image compression
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582047
C. Chrysafis, Antonio Ortega
We present an adaptive image coding algorithm based on novel backward-adaptive quantization/classification techniques. We use a simple uniform scalar quantizer to quantize the image subbands. Our algorithm puts each coefficient into one of several classes depending on the values of neighboring, previously quantized coefficients. These previously quantized coefficients form contexts which are used to characterize the subband data. Each context type corresponds to a different probability model, so each subband coefficient is compressed with an arithmetic coder using the model appropriate to that coefficient's neighborhood. We show how context selection can be driven by rate-distortion criteria, by choosing contexts so that the total distortion for a given bit rate is minimized. Moreover, the probability models for each context are initialized and updated in a very efficient way, so that practically no overhead information needs to be sent to the decoder. Our results are comparable to, and in some cases better than, the recent state of the art, and our algorithm is simpler than most published algorithms of comparable performance.
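A minimal sketch of the backward-adaptive idea, under assumed context rules (the paper derives its contexts from rate-distortion criteria rather than fixed thresholds): each quantized coefficient is classified by the activity of its already-coded causal neighbours, and one adaptive frequency table per context feeds the arithmetic coder.

```python
import numpy as np

THRESHOLDS = [0, 2, 8]    # assumed activity buckets, giving 4 contexts
ALPHABET = 16             # assumed range of quantizer indices, 0..15

def context_of(q, r, c):
    """Context = activity class of the causal (already decoded) neighbours."""
    left = abs(int(q[r, c - 1])) if c > 0 else 0
    up = abs(int(q[r - 1, c])) if r > 0 else 0
    act = left + up
    for k, t in enumerate(THRESHOLDS):
        if act <= t:
            return k
    return len(THRESHOLDS)

models = [{} for _ in range(len(THRESHOLDS) + 1)]  # one count table per context

q = np.array([[0, 1, 0, 3],
              [0, 0, 5, 2],
              [1, 0, 0, 0]])
bits = 0.0
for r in range(q.shape[0]):
    for c in range(q.shape[1]):
        s = int(q[r, c])
        m = models[context_of(q, r, c)]
        p = (m.get(s, 0) + 1) / (sum(m.values()) + ALPHABET)  # Laplace estimate
        bits += -np.log2(p)        # ideal arithmetic-coding cost of this symbol
        m[s] = m.get(s, 0) + 1     # backward-adaptive update; decoder mirrors it
print(f"estimated cost: {bits:.1f} bits for {q.size} coefficients")
```

Because the update uses only already-decoded data, the decoder rebuilds the same tables, which is the "no overhead" property mentioned above.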
Citations: 135
Robust image coding with perceptual-based scalability
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582133
M. G. Ramos, S. Hemami
Summary form only given. We present a multiresolution-based image coding technique that achieves high visual quality through perceptual-based scalability and robustness to transmission errors. To achieve perceptual coding, the image is first segmented at a block level (16×16) into smooth, edge, and highly-detailed regions, using the Hölder regularity property of the wavelet coefficients as well as their distributions. The activity classifications are used when coding the high-frequency wavelet coefficients. The image is compressed by first performing a 3-level hierarchical decomposition, yielding 10 subbands which are coded independently. The LL band is coded using reconstruction-optimized lapped orthogonal transforms, followed by quantization, run-length encoding, and Huffman coding. The high-frequency coefficients corresponding to the smooth regions are quantized to zero. The high-frequency coefficients corresponding to the edge regions are uniformly quantized, to maintain Hölder regularity and sharpness of the edges, while those corresponding to the highly-detailed regions are quantized with a modified uniform quantizer with a dead zone. Bits are allocated based on the scale and orientation selectivity of each high-frequency subband as well as the activity regions inside each band corresponding to the edge and highly-detailed regions of the image. The quantized high-frequency bands are then run-length encoded.
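The region-dependent quantizers described above can be sketched directly; the step sizes and dead-zone width below are illustrative assumptions, and reconstruction levels are simplified to bin edges:

```python
import numpy as np

def uniform_q(x, step):
    """Plain uniform quantizer (edge regions)."""
    return np.round(x / step) * step

def deadzone_q(x, step, dz):
    """Uniform quantizer with a widened zero bin of half-width dz
    (highly-detailed regions); small coefficients collapse to zero."""
    idx = np.maximum(np.floor((np.abs(x) - dz) / step) + 1, 0)
    return np.sign(x) * idx * step

coeffs = np.array([-7.2, -1.1, 0.4, 2.9, 10.5])
print(uniform_q(coeffs, 4.0))         # edge region
print(deadzone_q(coeffs, 4.0, 3.0))   # detailed region: |x| <= 3 maps to 0
print(np.zeros_like(coeffs))          # smooth region: HF coefficients zeroed
```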
Citations: 9
Low-cost prevention of error-propagation for data compression with dynamic dictionaries
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582007
J. Storer, J. Reif
In earlier work we presented the k-error protocol, a technique for protecting a dynamic dictionary method from error propagation resulting from any k errors on the communication channel or in the compressed file. Here we further develop this approach and provide experimental evidence that it is highly effective in practice against a noisy channel or faulty storage medium. That is, for LZ2-based methods that "blow up" after a single error, the protocol allows high error rates (far more than the k errors for which it was originally designed) to be sustained with no error propagation: the only corrupted bytes decoded are those belonging to strings represented by corrupted pointers.
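To see why such protection matters, the snippet below (a demonstration of the failure mode, not of the k-error protocol itself) decodes plain LZW output after a single one-bit error: the corrupted pointer poisons the dynamically built dictionary, so the damage spreads far beyond the flipped codeword.

```python
def lzw_encode(s):
    dic = {chr(i): i for i in range(256)}
    out, w = [], ""
    for ch in s:
        if w + ch in dic:
            w += ch
        else:
            out.append(dic[w])
            dic[w + ch] = len(dic)
            w = ch
    if w:
        out.append(dic[w])
    return out

def lzw_decode(codes):
    dic = {i: chr(i) for i in range(256)}
    w = dic[codes[0]]
    out = [w]
    for k in codes[1:]:
        entry = dic[k] if k in dic else w + w[0]   # KwKwK special case
        out.append(entry)
        dic[len(dic)] = w + entry[0]
        w = entry
    return "".join(out)

msg = "the cat sat on the mat, the cat sat on the mat"
codes = lzw_encode(msg)
assert lzw_decode(codes) == msg

bad = list(codes)
bad[5] ^= 1                    # one single-bit channel error
print(lzw_decode(bad))         # garbled well past the corrupted codeword
```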
Citations: 7
Significantly lower entropy estimates for natural DNA sequences
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.581998
D. Loewenstern, P. Yianilos
If DNA were a random string over its alphabet {A,C,G,T}, an optimal code would assign 2 bits to each nucleotide. We imagine DNA to be a highly ordered, purposeful molecule, and might therefore reasonably expect statistical models of its string representation to produce much lower entropy estimates. Surprisingly, this has not been the case for many natural DNA sequences, including portions of the human genome. We introduce a new statistical model (compression algorithm), the strongest reported to date, for naturally occurring DNA sequences. Conventional techniques code a nucleotide using only slightly fewer bits (1.90) than one obtains by relying only on the frequency statistics of individual nucleotides (1.95). Our method in some cases increases this gap by more than five-fold (1.66) and may lead to better performance in microbiological pattern recognition applications. One of our main contributions, and the principal source of these improvements, is the formal inclusion of inexact match information in the model. The existence of matches at various distances forms a panel of experts which are then combined into a single prediction. The structure of this combination is novel and its parameters are learned using expectation maximization (EM).
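A toy sketch of the experts-plus-EM structure (the experts here are placeholders; the paper's panel is built from inexact matches at various distances): each expert outputs a distribution over {A,C,G,T}, the mixture weights are fitted by EM, and the result is scored in bits per base.

```python
import math

ALPHA = "ACGT"

def expert_order0(history):              # expert 1: global base frequencies
    n = len(history)
    return {c: (history.count(c) + 1) / (n + 4) for c in ALPHA}

def expert_repeat(history, lag):         # expert 2: "same base as `lag` ago"
    if len(history) < lag:
        return {c: 0.25 for c in ALPHA}
    return {c: 0.85 if c == history[-lag] else 0.05 for c in ALPHA}

def mix(preds, w):
    return {c: sum(wi * p[c] for wi, p in zip(w, preds)) for c in ALPHA}

seq = "ACGTACGTACGAACGTACGT"
experts = [expert_order0, lambda h: expert_repeat(h, 4)]
w = [0.5, 0.5]
for _ in range(10):                      # EM on the mixture weights
    resp = [0.0, 0.0]
    for t in range(1, len(seq)):
        preds = [e(seq[:t]) for e in experts]
        post = [wi * p[seq[t]] for wi, p in zip(w, preds)]
        z = sum(post)
        for i in range(len(resp)):
            resp[i] += post[i] / z       # E-step: expert responsibilities
    w = [r / sum(resp) for r in resp]    # M-step: reweight the experts

bits = sum(-math.log2(mix([e(seq[:t]) for e in experts], w)[seq[t]])
           for t in range(1, len(seq)))
print([round(x, 3) for x in w], f"{bits / (len(seq) - 1):.2f} bits/base")
```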
Citations: 136
Efficient approximate adaptive coding
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582059
A. Turpin, Alistair Moffat
We describe a mechanism for approximate adaptive coding that makes use of deferred probability update to obtain good throughput rates with no buffering of symbols from the input message. Our proposed mechanism makes use of a novel code calculation process that allows an approximate code for a message of m symbols to be calculated in O(log m) time, improving upon previous methods. We also give analysis that bounds both the total computation time required to encode a message using the approximate code and the inefficiency of the resulting codeword set. Finally, experimental results are given that highlight the role the new method might play in a practical compression system. The current work builds upon two earlier papers. We previously described a mechanism for efficiently calculating a minimum-redundancy code for an alphabet in which there are many symbols with the same frequency of occurrence. We impose a modest amount of additional discipline upon the input frequencies, and show how the calculation of codewords can be performed in time and space logarithmic in the length of the message. The second area we have previously examined is the process of manipulating a code to actually perform compression. We examined mechanisms for encoding and decoding a prefix code that avoid any need for explicit enumeration of the source codewords. This means that we are free to change the source codewords at will during a message without incurring the additional cost of completely recalculating an n entry codebook.
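The deferred-update idea is easy to sketch; the batch size, the simple count model, and the use of ideal code lengths in place of the paper's O(log m) approximate code calculation are all assumptions made to keep the example short:

```python
import math
from collections import Counter

BATCH = 16                     # assumed deferral interval

def code_lengths(counts):
    """Stand-in for the recalculated code: ideal lengths of -log2(p) bits."""
    total = sum(counts.values())
    return {s: -math.log2(c / total) for s, c in counts.items()}

def encode_cost(msg):
    model = Counter({s: 1 for s in set(msg)})    # 1-initialised frequencies
    lengths = code_lengths(model)
    pending = Counter()
    bits, recalcs = 0.0, 1
    for i, s in enumerate(msg, 1):
        bits += lengths[s]          # code with the (possibly stale) model
        pending[s] += 1             # record the symbol for the next update
        if i % BATCH == 0:          # deferred probability update point
            model += pending
            pending.clear()
            lengths = code_lengths(model)
            recalcs += 1
    return bits, recalcs

msg = "abracadabra" * 20
bits, recalcs = encode_cost(msg)
print(f"{bits:.0f} bits, {recalcs} code recalculations for {len(msg)} symbols")
```

The compression loss relative to symbol-by-symbol update is the inefficiency the paper bounds; the saving is that the code is rebuilt once per batch instead of once per symbol.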
Citations: 4
Recursive block structured data compression
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582139
M. Tilgner, M. Ishida, T. Yamaguchi
Summary form only given. A simple algorithm for efficient lossless compression of circuit test data with fast decompression speed is presented. It can easily be converted into a VLSI implementation. The algorithm is based on recursive block structured run-length coding and compresses at ratios of about 6:1 to 1000:1, higher than most of the widely known compression techniques.
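Since only a summary is given, the block structure below is speculative: a block consisting of one repeated byte becomes a single (RUN, length, value) token, and any other block is split in half and coded recursively, which suits sparse circuit test vectors with long constant stretches.

```python
def rb_encode(data, min_block=4):
    """Recursive block RLE sketch: uniform blocks -> RUN tokens, else recurse."""
    if len(set(data)) == 1 and len(data) > 1:
        return [("RUN", len(data), data[0])]
    if len(data) <= min_block:
        return [("LIT", bytes(data))]
    mid = len(data) // 2
    return rb_encode(data[:mid], min_block) + rb_encode(data[mid:], min_block)

def rb_decode(tokens):
    out = bytearray()
    for t in tokens:
        if t[0] == "RUN":
            out.extend(bytes([t[2]]) * t[1])
        else:
            out.extend(t[1])
    return bytes(out)

data = bytes([0] * 500 + [1, 2, 3, 4] + [0] * 500)  # sparse, test-vector-like
tokens = rb_encode(data)
assert rb_decode(tokens) == data
print(f"{len(data)} bytes -> {len(tokens)} tokens")
```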
Citations: 4
Image coding using optimized significance tree quantization
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582064
G. Davis, S. Chawla
A number of recent embedded transform coders, including Shapiro's (1993) EZW scheme, Said and Pearlman's SPIHT scheme (see IEEE Trans. Circuits and Systems for Video Technology, vol.6, no.3, p.243-250, 1996), and the EZDCT scheme of Xiong et al. (see IEEE Signal Processing Letters, no.11, 1996), employ a common algorithm called significance tree quantization (STQ). Each of these coders has been selected from a large family of significance tree quantizers based on empirical work and a priori knowledge of the transform coefficient behavior. We describe an algorithm for selecting a particular form of STQ that is optimized for a given class of images. We apply our optimization procedure to the task of quantizing 8×8 DCT blocks. Our algorithm yields a fully embedded, low-complexity coder with performance from 0.7 to 2.5 dB better than baseline JPEG on standard test images.
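The shared mechanism can be illustrated with a tiny significance pass over a quadtree (the tree shape and symbols here are generic EZW-style choices, not the optimized trees the paper selects): a subtree whose coefficients all fall below the threshold costs one "Z" symbol instead of one symbol per coefficient.

```python
import numpy as np

def significance_pass(coeffs, r, c, size, T, out):
    block = coeffs[r:r + size, c:c + size]
    if np.all(np.abs(block) < T):
        out.append("Z")           # entire subtree insignificant: one symbol
        return
    if size == 1:
        out.append("S")           # a single significant coefficient
        return
    h = size // 2                 # otherwise descend into the four children
    for dr, dc in [(0, 0), (0, h), (h, 0), (h, h)]:
        significance_pass(coeffs, r + dr, c + dc, h, T, out)

coeffs = np.zeros((8, 8))
coeffs[0, 0], coeffs[1, 1] = 50, 9
symbols = []
significance_pass(coeffs, 0, 0, 8, T=8, out=symbols)
print("".join(symbols), f"({len(symbols)} symbols for {coeffs.size} coefficients)")
```

Optimizing over the family of such trees, which is what the paper does, amounts to choosing the tree shape and symbol set that minimize the coded cost for a target image class.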
Citations: 45