
Proceedings DCC '97. Data Compression Conference: Latest Publications

Fast implementation of two-level compression method using QM-coder
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582123
K. Nguyen-Phi, H. Weinrichter
We deal with bi-level image compression. Modern methods treat the bi-level image as a high-order Markov source and, by exploiting this characteristic, attain better performance. At first glance, increasing the order of the Markov model should yield a higher compression ratio, but in practice it does not: a higher-order model needs a longer time to learn (adaptively) the statistical characteristics of the source. If the source sequence, the bi-level image in this case, is not long enough, the model never stabilizes. One simple remedy is the two-level method. We consider the implementation aspects of this method. Instead of the general arithmetic coder, an obvious alternative is the QM-coder, which reduces memory use and increases execution speed. We discuss some possible heuristics to increase performance. Experimental results obtained with the ITU-T test images are given.
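The high-order context modelling this abstract builds on can be illustrated with a small sketch (a toy under stated assumptions, not the authors' implementation): a template of causal neighbours is packed into a context index, and per-context counts drive the bit probabilities that an arithmetic or QM-coder would consume. The five-pixel template and the Laplace estimator here are illustrative choices.

```python
# Illustrative high-order context model for a bi-level image (not the paper's
# exact algorithm): each pixel's probability is estimated adaptively from
# counts gathered for its causal-neighbourhood context.

def context(img, r, c, template=((0, -1), (0, -2), (-1, -1), (-1, 0), (-1, 1))):
    """Pack the template pixels (out-of-image pixels read as 0) into an integer."""
    ctx = 0
    for dr, dc in template:
        rr, cc = r + dr, c + dc
        bit = img[rr][cc] if 0 <= rr < len(img) and 0 <= cc < len(img[0]) else 0
        ctx = (ctx << 1) | bit
    return ctx

def code_cost(img):
    """Adaptive per-context Laplace estimator; returns total ideal code length in bits."""
    from math import log2
    counts = {}                              # ctx -> [count of 0s, count of 1s]
    bits = 0.0
    for r in range(len(img)):
        for c in range(len(img[0])):
            ctx = context(img, r, c)
            n0, n1 = counts.get(ctx, [1, 1])  # Laplace smoothing
            p = (n1 if img[r][c] else n0) / (n0 + n1)
            bits += -log2(p)
            pair = counts.setdefault(ctx, [1, 1])
            pair[img[r][c]] += 1              # adapt the context statistics
    return bits
```

On a highly predictable image the adaptive model quickly drives the cost well below one bit per pixel, which is the effect the two-level method tries to reach faster for short sources.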
Citations: 0
Bi-level image compression using adaptive tree model
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582122
K. Nguyen-Phi, H. Weinrichter
Summary form only given. State-of-the-art methods for bi-level image compression rely on two processes: modelling and coding. The modelling process determines the context of the coded pixel from its adjacent pixels and uses that context to predict the probability of the coded pixel being 0 or 1. The coding process then codes the pixel based on this prediction. Because the source is finite, a bigger template (more adjacent pixels) does not always lead to a better result, a phenomenon known as "context dilution". The authors present a new method, called adaptive tree modelling, for preventing context dilution. They discuss this method by considering a pruned binary tree, and have implemented it in software.
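The context-dilution remedy can be sketched in the same spirit as the pruned tree the abstract mentions (an illustrative assumption, not the authors' exact rule): a deeper context node is only trusted once it has gathered enough samples, otherwise the estimate falls back to its parent.

```python
# Sketch of an adaptive context tree with a "fall back to parent" rule to
# avoid context dilution; the min_samples threshold is a hypothetical stand-in
# for the paper's pruning criterion.

class CtxNode:
    def __init__(self):
        self.n = [1, 1]          # Laplace-smoothed bit counts [zeros, ones]
        self.child = {}          # next context bit -> CtxNode

class ContextTree:
    def __init__(self, depth, min_samples=8):
        self.root = CtxNode()
        self.depth = depth
        self.min_samples = min_samples

    def predict(self, ctx_bits):
        """P(next bit = 1), using the deepest sufficiently trained node."""
        node, best = self.root, self.root
        for b in ctx_bits[:self.depth]:
            node = node.child.get(b)
            if node is None:
                break
            if sum(node.n) - 2 >= self.min_samples:   # enough real samples seen
                best = node
        return best.n[1] / sum(best.n)

    def update(self, ctx_bits, bit):
        node = self.root
        node.n[bit] += 1
        for b in ctx_bits[:self.depth]:
            node = node.child.setdefault(b, CtxNode())
            node.n[bit] += 1
```

With little data, predictions come from the well-populated shallow contexts; deep contexts take over only once their statistics are stable.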
Citations: 2
A framework for application specific image compression
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582119
S. Moni, S. Sista
Summary form only given. Images and video are used extensively in areas such as video-conferencing, multimedia documentation, telemedicine, and high-definition television (HDTV). These diverse applications can benefit from a family of image compression algorithms designed to address their specific needs. We propose a framework for wavelet-based image compression that leads to such a family of schemes and facilitates a comparative study of their complexity, compression ratio, and other properties. The embedded zerotree wavelet (EZW) and web of wavelets (WW) methods fall into this family.
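As a minimal illustration of the wavelet decomposition such frameworks build on, here is a single-level 2-D Haar transform (the paper's actual filter bank and zerotree machinery are not reproduced; Haar is the simplest possible stand-in).

```python
# Single-level 2-D Haar wavelet transform and its inverse, applied separably:
# rows first, then columns. Perfect reconstruction holds exactly.

def haar_1d(x):
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    dif = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg + dif              # low-pass half followed by high-pass half

def ihaar_1d(y):
    h = len(y) // 2
    out = []
    for a, d in zip(y[:h], y[h:]):
        out += [a + d, a - d]
    return out

def haar_2d(img):
    rows = [haar_1d(r) for r in img]
    cols = [haar_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def ihaar_2d(coef):
    cols = [ihaar_1d(list(c)) for c in zip(*coef)]   # undo column transform
    rows = [list(r) for r in zip(*cols)]
    return [ihaar_1d(r) for r in rows]               # then undo row transform
```

Repeating the transform on the low-pass quadrant yields the multi-resolution pyramid that zerotree coders like EZW exploit.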
Citations: 3
Out-of-loop motion compensation for reduced complexity video encoding
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582084
C.D. Creusere
Summary form only given. In order to reduce the complexity of a video encoder, we introduce a new approach to global motion compensation, related to the conventional hybrid DPCM-transform method, in which the motion compensation is performed outside the feedback loop. Within this framework many specific implementations are possible, some of which are studied. Our method continually tracks and updates the image in the feedback loop in the same way as a conventional hybrid system. Using both residual energy and reconstruction error as metrics, we show that the new motion compensation scheme compares very favorably with the conventional one.
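Global motion compensation estimates one displacement vector for the whole frame rather than one per block. A toy exhaustive-search version is sketched below (the in-loop vs. out-of-loop placement the paper studies is an architectural choice not shown here).

```python
# Toy global motion estimation: find the single (dr, dc) that minimises the
# sum of absolute differences (SAD) between the current frame and a shifted
# copy of the previous frame. Exhaustive search over a small window.

def sad(a, b):
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def shift(frame, dr, dc, fill=0):
    """Shift a frame by (dr, dc), filling uncovered pixels with `fill`."""
    h, w = len(frame), len(frame[0])
    return [[frame[r - dr][c - dc] if 0 <= r - dr < h and 0 <= c - dc < w else fill
             for c in range(w)] for r in range(h)]

def global_motion(prev, cur, search=2):
    """Return (dr, dc) minimising SAD between cur and the shifted prev frame."""
    best = min(((sad(cur, shift(prev, dr, dc)), (dr, dc))
                for dr in range(-search, search + 1)
                for dc in range(-search, search + 1)), key=lambda t: t[0])
    return best[1]
```

Because only one vector is estimated and applied, this step is cheap enough to move outside the encoder's feedback loop, which is the complexity saving the abstract targets.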
Citations: 0
A fast three dimensional discrete cosine transform
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582085
R. K. Chan, M. Lee
Summary form only given. A three dimensional fast discrete cosine transform (3-D FCT) algorithm is proposed for 3-D data points. Unlike other methods for 3-D DCT, the proposed algorithm treats 3-D data points directly as volume data. The algorithm involves a 3-D decomposition and rearrangement process where a data volume is recursively halved for each dimension until unit data cubes are formed. The data points are further rearranged to avoid redundant computations in the transformation. The 3-D algorithm has been shown to be computationally efficient, and can thus be used in applications requiring a real-time symmetric codec, such as software based video conferencing systems.
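A 3-D DCT is separable: applying an orthonormal 1-D DCT-II along each of the three axes in turn gives the full transform, and applying the DCT-III (inverse) the same way recovers the data exactly. The sketch below shows this separable baseline, not the paper's fast decomposition-and-rearrangement algorithm.

```python
# Separable 3-D DCT on a nested-list volume: transform along axis 2, axis 1,
# then axis 0 with an orthonormal 1-D DCT-II; the inverse uses DCT-III.
from math import cos, pi, sqrt

def dct1(x):
    N = len(x)
    return [sqrt((1 if k == 0 else 2) / N) *
            sum(x[n] * cos(pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
            for k in range(N)]

def idct1(X):
    N = len(X)
    return [sum(sqrt((1 if k == 0 else 2) / N) * X[k] *
                cos(pi * (2 * n + 1) * k / (2 * N)) for k in range(N))
            for n in range(N)]

def swap12(c):   # swap axes 1 and 2 (transpose each plane)
    return [[list(r) for r in zip(*plane)] for plane in c]

def swap02(c):   # swap axes 0 and 2
    n0, n1 = len(c), len(c[0])
    return [[[c[i][j][k] for i in range(n0)] for j in range(n1)]
            for k in range(len(c[0][0]))]

def dct3d(cube, f=dct1):
    c = [[f(row) for row in plane] for plane in cube]     # along axis 2
    c = swap12(c)
    c = [[f(row) for row in plane] for plane in c]        # along axis 1
    c = swap12(c)
    c = swap02(c)
    c = [[f(row) for row in plane] for plane in c]        # along axis 0
    return swap02(c)

def idct3d(coef):
    return dct3d(coef, f=idct1)
```

The fast algorithm in the paper reduces the arithmetic of exactly this separable computation by recursively halving the volume and pruning redundant butterflies.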
Citations: 1
Adaptive vector quantization using generalized threshold replenishment
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582055
J. Fowler, S. Ahalt
In this paper, we describe a new adaptive vector quantization (AVQ) algorithm designed for the coding of nonstationary sources. This new algorithm, generalized threshold replenishment (GTR), differs from prior AVQ algorithms in that it features an explicit, online consideration of both rate and distortion. Rate-distortion cost criteria are used in both the determination of nearest-neighbor codewords and the decision to update the codebook. Results presented indicate that, for the coding of an image sequence, (1) most AVQ algorithms achieve distortion much lower than that of nonadaptive VQ for the same rate (about 1.5 bits/pixel), and (2) the GTR algorithm achieves rate-distortion performance substantially superior to that of the prior AVQ algorithms for low-rate coding, being the only algorithm to achieve a rate below 1.0 bits/pixel.
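The rate-distortion criterion the abstract describes can be sketched as choosing the codeword that minimises J = distortion + lambda * rate, with the index rate estimated from adaptive index counts. This is a hedged simplification: the rate model and the codebook-update decision of actual GTR are not reproduced.

```python
# Sketch of rate-distortion-driven codeword selection (simplified assumption,
# not the paper's exact GTR procedure): pick the codebook index minimising
# J = squared-error distortion + lam * ideal code length of the index.
from math import log2

def encode_vector(x, codebook, counts, lam=1.0):
    """Return (best index, its J cost); adapts the index counts afterwards."""
    total = sum(counts)
    best_i, best_j = None, float("inf")
    for i, cw in enumerate(codebook):
        dist = sum((a - b) ** 2 for a, b in zip(x, cw))
        rate = -log2(counts[i] / total)        # ideal length of index i in bits
        j = dist + lam * rate
        if j < best_j:
            best_i, best_j = i, j
    counts[best_i] += 1                        # backward-adapt index probabilities
    return best_i, best_j
```

With lambda large, cheap (frequent) indices win even at higher distortion; with lambda small, the choice degenerates to plain nearest-neighbour VQ.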
Citations: 15
An executable taxonomy of on-line modeling algorithms
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.581959
S. Bunton
This paper gives an overview of our decomposition of a group of existing and novel on-line modeling algorithms into component parts, which can be implemented as a cross product of predominantly independent sets. The result is all of the following: a test bed for executing controlled experiments with algorithm components, a framework that unifies existing techniques and defines novel techniques, and a taxonomy for describing on-line modeling algorithms precisely and completely, in a way that enables meaningful comparison.
Citations: 6
An optimal-joint-coordinate block matching algorithm for motion-compensated coding
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582111
C.C. Lin, D. Pease, R. Raje
Summary form only given. Block matching motion estimation/compensation has emerged as an efficient technique for removing temporal redundancies in video signals. Based on a distortion function, this technique searches for the best match between a block of pixels in the current frame and a number of nearby blocks in the previous frame. Most published block matching algorithms reduce the search area surrounding the optimum match at each search step. This paper describes a novel methodology that advances the search towards the joint coordinate of the two optimum matches at each step. A new block matching algorithm, the optimal joint coordinate (OJC) search method, is built on this methodology to avoid redundant searches. The algorithm is supported by extensive simulation results. The distortion function in the search region is a convex function with elliptical contours.
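The baseline that OJC-style fast searches improve on is exhaustive full-search block matching, sketched below: every candidate displacement inside the search window is scored by SAD and the minimum is kept.

```python
# Full-search block matching: the brute-force reference that fast search
# strategies (including the OJC method described above) aim to approximate
# at a fraction of the cost.

def block_sad(cur, ref, r, c, dr, dc, bs):
    """SAD between the bs x bs block of cur at (r, c) and ref at (r+dr, c+dc)."""
    s = 0
    for i in range(bs):
        for j in range(bs):
            s += abs(cur[r + i][c + j] - ref[r + dr + i][c + dc + j])
    return s

def full_search(cur, ref, r, c, bs=4, w=2):
    """Best (dr, dc) for the block at (r, c), search window +/- w pixels."""
    h, wd = len(ref), len(ref[0])
    best, best_v = float("inf"), (0, 0)
    for dr in range(-w, w + 1):
        for dc in range(-w, w + 1):
            if 0 <= r + dr and r + dr + bs <= h and 0 <= c + dc and c + dc + bs <= wd:
                s = block_sad(cur, ref, r, c, dr, dc, bs)
                if s < best:
                    best, best_v = s, (dr, dc)
    return best_v
```

Full search costs O(w^2) SAD evaluations per block; fast methods exploit the convex, elliptically-contoured distortion surface the abstract mentions to evaluate far fewer candidates.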
Citations: 1
Region-based video coding with embedded zero-trees
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582110
J. Liang, I. Moccagatta, K. Oehler
Summary form only given. In this paper, we describe a region-based video coding algorithm that is currently under investigation for inclusion in the emerging MPEG4 standard. This algorithm was incorporated in a submission that scored highly in the MPEG4 subjective tests of November 1995 (Talluri et al. 1997). Good coding efficiency is achieved by combining motion-segmented region-based coding with Shapiro's embedded zerotree wavelet (EZW) method.
Citations: 1
Image coding based on mixture modeling of wavelet coefficients and a fast estimation-quantization framework
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582045
Scott M. LePresto, K. Ramchandran, M. Orchard
We introduce a new image compression paradigm that combines compression efficiency with speed, and is based on an independent "infinite" mixture model which accurately captures the space-frequency characterization of the wavelet image representation. Specifically, we model image wavelet coefficients as being drawn from an independent generalized Gaussian distribution field, of fixed unknown shape for each subband, having zero mean and unknown slowly spatially-varying variances. Based on this model, we develop a powerful "on the fly" estimation-quantization (EQ) framework that consists of: (i) first finding the maximum-likelihood estimate of the individual spatially-varying coefficient field variances based on causal and quantized spatial neighborhood contexts; and (ii) then applying an off-line rate-distortion (R-D) optimized quantization/entropy coding strategy, implemented as a fast lookup table, that is optimally matched to the derived variance estimates. A distinctive feature of our paradigm is the dynamic switching between forward and backward adaptation modes based on the reliability of causal prediction contexts. The performance of our coder is extremely competitive with the best published results in the literature across diverse classes of images and target bitrates of interest, in both compression efficiency and processing speed. For example, our coder exceeds the objective performance of the best zerotree-based wavelet coder based on space-frequency-quantization at all bit rates for all tested images at a fraction of its complexity.
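The backward-adaptive estimation step of the EQ framework can be sketched as follows (a rough illustration under simplifying assumptions: the neighbourhood, the zero-mean ML estimator, and the step-size rule are stand-ins, and for brevity the estimate is computed from the original rather than the quantized neighbour values the paper uses to keep encoder and decoder in sync).

```python
# Sketch of causal local-variance estimation for wavelet coefficients, in the
# spirit of the EQ framework: each coefficient's variance is estimated from an
# already-coded (causal) neighbourhood, then used to scale the quantizer step.

def causal_variance(coef, r, c, eps=1e-6):
    """Zero-mean ML variance estimate from the causal 4-neighbourhood."""
    nbrs = []
    for dr, dc in ((0, -1), (-1, -1), (-1, 0), (-1, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < len(coef) and 0 <= cc < len(coef[0]):
            nbrs.append(coef[rr][cc])
    if not nbrs:
        return eps                      # no causal context: fall back to a floor
    return max(eps, sum(v * v for v in nbrs) / len(nbrs))

def quantize_band(coef, base_step=1.0):
    """Uniform quantization with the step scaled by the local sigma estimate."""
    out = []
    for r, row in enumerate(coef):
        out_row = []
        for c, v in enumerate(row):
            step = base_step * causal_variance(coef, r, c) ** 0.5
            out_row.append(round(v / step))
        out.append(out_row)
    return out
```

Because the variance estimate uses only causal data, the decoder can reproduce it and no side information about the spatially-varying variances needs to be transmitted.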
Citations: 313