
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096): Latest Publications

A fractional chip wavelet zero tree codec (WZT) for video compression
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785692
K. Kolarov, W. Lynch, Bill Arrighi, Bob Hoover
[Summary form only given]. We introduce a motion wavelet transform zero tree (WZT) codec which achieves good compression ratios and can be implemented in a single ASIC of modest size. The codec employs a group of pictures (GOP) of two interlaced video frames, edge filters for the boundaries, intermediate field image compression and block compression structure. Specific features of the implementation for a small single chip are: 1) Transform filters are short and use dyadic rational coefficients with small numerators. Implementation can be accomplished with adds and shifts. We propose a Mallat pyramid resulting from five filter applications in the horizontal direction and three applications in the vertical direction. We use modified edge filters near block and image boundaries so as to utilize actual image values. 2) Motion image compression is used in place of motion compensation. We have applied transform compression in the temporal direction to a GOP of four fields. A two level temporal Mallat pyramid is used as a tensor product with the spatial pyramid. The linear edge filters are used at the fine level and the modified Haar filters at the coarse level, resulting in four temporal subbands. 3) Processing can be decoupled into the processing of blocks of 8 scan lines of 32 pixels each. This helps reduce the RAM requirements to the point that the RAM can be placed in the ASIC itself. 4) Quantization denominators are powers of two, enabling implementation by shifts. 5) Zero-tree coding yields a progressive encoding which is easily rate controlled. 6) The codec itself imposes a very low delay of less than 3.5 ms within a field and 67 ms for a GOP. The overall conclusion is that it is reasonable to expect that this method can be implemented, including memory, in a few mm² of silicon.
Cited by: 0
Lossless color image compression using chromatic correlation
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785690
Wen Jiang, L. Bruton
[Summary form only given]. Typically, the lossless compression of color images is achieved by separately compressing the three RGB monochromatic image components. The proposed method takes into account the fact that high spatial correlations exist not only within each monochromatic frame but also between similar spatial locations in adjacent monochromatic frames. Based on the observation that the prediction errors produced by the JPEG predictor in each RGB monochromatic frame present very similar structures, we propose two new chromatic predictors, called chromatic differential predictor (CDP) and classified CDP (CCDP), to capture the spectral dependencies between the monochromatic frames. In addition to prediction schemes, we consider context modeling schemes that take into account the prediction errors in spatially and/or spectrally adjacent pixels in order to efficiently encode the prediction errors. In order to demonstrate the advantage of the proposed lossless color image compression scheme, 5 different types of images are selected from the KODAK image set. All images are RGB 24 bpp color images with resolution 768×512. The experimental results demonstrate significant improvement in compression performance. Its fast implementation and high compression ratio may be a promising approach for the application of real-time color video compression.
Cited by: 5
Software synthesis of variable-length code decoder using a mixture of programmed logic and table lookups
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755661
Gene Cheung, S. McCanne, C. Papadimitriou
Implementation of variable-length code (VLC) decoders can involve a tradeoff between the number of decoding steps and memory usage. In this paper, we propose a novel scheme for optimizing this tradeoff using a machine model abstracted from general purpose processors with hierarchical memories. We formulate the VLC decode problem as an optimization problem where the objective is to minimize the average decoding time. After showing that the problem is NP-complete, we present a Lagrangian algorithm that finds an approximate solution with bounded error. An implementation is automatically synthesized by a code generator. To demonstrate the efficacy of our approach, we conducted experiments on decoding codebooks for a pruned tree-structured vector quantizer and H.263 motion vectors, which show a performance gain for our proposed algorithm over single-table-lookup and pure-logic implementations.
Cited by: 12
SICLIC: a simple inter-color lossless image coder
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755700
R. Barequet, M. Feder
Many applications require high-quality color images. To reduce storage requirements and transmission time while preserving high quality, these images are losslessly compressed. Most image compression algorithms treat the color image, usually in RGB format, as a set of independent gray-scale images. SICLIC is a novel inter-color coding algorithm based on a LOCO-like algorithm. It combines the simplicity of Golomb-Rice coding with the potential of context models in both intra-color and inter-color encoding. It also supports intra-color and inter-color alphabet extension, in order to reduce the redundancy of the code. SICLIC attains compression ratios superior to those obtained with most of the state-of-the-art compression algorithms and achieves compression ratios very close to those of inter-band CALIC, with much lower complexity. With arithmetic coding, SICLIC attains better compression than inter-band CALIC.
Cited by: 17
Reversible variable length codes (RVLC) for robust coding of 3D topological mesh data
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785717
Z. Yan, Sunil Kumar, Jiankun Li, C.-C. Jay Kuo
Summary form only given. In order to limit error propagation, we divide the topological data of the entire mesh into several segments. Each segment is identified by its synchronization word and header. Due to the use of the arithmetic coder, data of a whole segment would often become useless in the presence of even a single bit error. Furthermore, several adjacent segments may be corrupted simultaneously at high bit error rates (BER). As a result, a lot of data would be required to be retransmitted in the presence of errors. Retransmitted data may also in turn get corrupted in high BER conditions. This would result in a considerable loss of coding efficiency and increased delay. We propose the use of reversible variable length codes (RVLC) to solve this problem. RVLC not only prevents error propagation in one segment but also efficiently detects the distorted portion of the bitstream due to their capability of two-way decoding. This would allow the recovery of a large portion of data from a corrupted segment. The amount of retransmitted data can thus be drastically reduced. RVLC can be matched to various sources with different probability distributions by adjusting their suffix length, and have been found suitable for image and video coding. However, the application of RVLC to robust 3D mesh coding has not yet been studied. Our study of the suitability of RVLC for the topological data is presented in this research. Experiments have been carried out to demonstrate the efficiency of the proposed robust 3D graphic coding algorithm. To design an efficient pre-defined code table, a large set of 300 MPEG-4 selected 3D models has been used in our experiments. The use of predefined code tables would result in a significantly reduced computational complexity.
Cited by: 2
2D-pattern matching image and video compression
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755692
Marc Alzina, W. Szpankowski, A. Grama
We propose a lossy data compression scheme based on an approximate two-dimensional pattern matching (2D-PMC) extension of the Lempel-Ziv lossless scheme. We apply the scheme to image and video compression and report on our theoretical and experimental results. Theoretically, we show that the so-called fixed database model leads to suboptimal compression. Furthermore, the compression ratio of this model is as low as the generalized entropy that we define. We use this model for our video compression scheme and present experimental results. For image compression we use a growing database model. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of novel techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.4 Mbit/s for video compression.
Cited by: 21
Quadtree classification and TCQ image coding
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755664
B. A. Banister, T. Fischer
The SPIHT algorithm is shown to implicitly use quadtree-based classification. The rate-distortion encoding performance of the classes is described, and quantization improvements are presented. A new encoding algorithm combines a general SPIHT data structure with the granular gain of multi-dimensional quantization to achieve improved PSNR versus rate performance.
Cited by: 38
Context quantization with Fisher discriminant for adaptive embedded wavelet image coding
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755659
Xiaolin Wu
Recent progress in context modeling and adaptive entropy coding of wavelet coefficients has probably been the most important catalyst for the rapidly maturing area of wavelet image compression technology. In this paper we identify statistical context modeling of wavelet coefficients as the determining factor of rate-distortion performance of wavelet codecs. We propose a new context quantization algorithm for minimum conditional entropy. The algorithm is a dynamic programming process guided by Fisher's linear discriminant. It facilitates high-order context modeling and adaptive entropy coding of embedded wavelet bit streams, and leads to superb compression performance in both lossy and lossless cases.
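Fisher's linear discriminant criterion, which guides the paper's dynamic program, can be shown in a minimal one-split form: scan candidate thresholds over scalar context values and keep the split that maximizes the between-class separation relative to within-class spread. This is a hypothetical one-level sketch; the paper's algorithm quantizes many context classes for minimum conditional entropy.

```python
import statistics

def fisher_score(a, b):
    """Fisher criterion for two 1-D samples:
    (difference of means)^2 / (sum of within-class variances)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.pvariance(a), statistics.pvariance(b)
    return (ma - mb) ** 2 / (va + vb) if va + vb else float("inf")

def best_split(values):
    """Return the threshold splitting sorted `values` into two groups
    with maximum Fisher score -- a one-split sketch of context
    quantization (values below the threshold form one group)."""
    vals = sorted(values)
    best_t, best_s = None, -1.0
    for i in range(1, len(vals)):
        s = fisher_score(vals[:i], vals[i:])
        if s > best_s:
            best_t, best_s = vals[i], s
    return best_t
```

Grouping contexts this way keeps the conditional symbol distributions within each group as homogeneous as possible, which is what drives the conditional entropy down.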
Cited by: 43
Low complexity high-order context modeling of embedded wavelet bit streams
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755660
Xiaolin Wu
In the past three or so years, particularly during the JPEG 2000 standardization process that was launched last year, statistical context modeling of embedded wavelet bit streams has received a lot of attention from the image compression community. High-order context modeling has been proven to be indispensable for high rate-distortion performance of wavelet image coders. However, if care is not taken in algorithm design and implementation, the formation of high-order modeling contexts can be both CPU and memory greedy, creating a computation bottleneck for wavelet coding systems. In this paper we focus on the operational aspect of high-order statistical context modeling, and introduce some fast algorithm techniques that can drastically reduce both time and space complexities of high-order context modeling in the wavelet domain.
Cited by: 10
Move-to-front and permutation based inversion coding
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785672
Z. Arnavut
[Summary form only given]. Introduced by Bentley et al (1986), move-to-front (MTF) coding is an adaptive, self-organizing list (permutation) technique. Motivated by the MTF coder's use of small permutations restricted to the data source's alphabet size, we investigate compression of data files by using the canonical sorting permutations followed by permutation based inversion coding (PBIC) from the set of {0, ..., n-1}, where n is the size of the data source. The technique introduced yields better compression gain than the MTF coder and improves the compression gain in block sorting techniques.
Citations: 0