
Proceedings of IEEE Data Compression Conference (DCC'94): latest publications

An investigation of wavelet-based image coding using an entropy-constrained quantization framework
Pub Date: 1994-03-29 DOI: 10.1109/DCC.1994.305942
K. Ramchandran, M. Orchard
Wavelet image decompositions generate a tree-structured set of coefficients, providing a hierarchical data structure for representing images. Several recently proposed image compression algorithms have focused on new ways of exploiting dependencies within this hierarchy of wavelet coefficients. This paper presents a new framework for understanding the efficiency of one such algorithm as a simplified approximation of a global entropy-constrained image quantizer. The principal insight offered by the new framework is that improved performance is achieved by more accurately characterizing the joint probabilities of arbitrary sets of wavelet coefficients. The specific algorithm described is designed around one conveniently structured collection of such sets. The efficiency of hierarchical wavelet coding algorithms derives from their success at identifying and exploiting dependencies between coefficients in the hierarchical structure. The second part of the paper presents an empirical study of the distribution of high-band wavelet coefficients, the band responsible for most of the performance improvements of the new algorithms.
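The parent-child coefficient hierarchy the paper builds on can be illustrated with a toy transform. A minimal sketch (not the paper's algorithm): a multi-level 1-D Haar decomposition in which each detail coefficient at a coarse level is the parent of two detail coefficients at the next finer level; `haar_levels` is a hypothetical helper name.

```python
def haar_levels(signal, levels):
    """Multi-level 1-D Haar transform (averaging/differencing variant).

    Returns (approx, details) where details[0] is the coarsest band.
    Detail coefficient details[j][k] is the parent of details[j+1][2k]
    and details[j+1][2k+1] -- the tree exploited by hierarchical coders.
    """
    approx = list(signal)
    details = []
    for _ in range(levels):
        a = [(approx[2 * i] + approx[2 * i + 1]) / 2 for i in range(len(approx) // 2)]
        d = [(approx[2 * i] - approx[2 * i + 1]) / 2 for i in range(len(approx) // 2)]
        details.insert(0, d)  # keep coarsest band first
        approx = a
    return approx, details

approx, details = haar_levels([4, 6, 10, 12, 8, 8, 0, 2], 2)
# details[0][k] is the parent of details[1][2k] and details[1][2k+1]
```

The 2-D decompositions used for images apply the same idea separably along rows and columns, yielding a quadtree of coefficients rather than a binary tree.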
Citations: 62
Explicit bit minimization for motion-compensated video coding
Pub Date: 1994-03-29 DOI: 10.1109/DCC.1994.305925
Dzung T. Hoang, Philip M. Long, J. Vitter
Compares methods for choosing motion vectors for motion-compensated video compression. The primary focus is on videophone and videoconferencing applications, where very low bit rates are necessary, where the motion is usually limited, and where the frames must be coded in the order they are generated. The authors provide evidence, using established benchmark videos of this type, that choosing motion vectors to minimize codelength subject to (implicit) constraints on quality yields substantially better rate-distortion tradeoffs than minimizing notions of prediction error. They illustrate this point using an algorithm within the p×64 standard. They show that using quadtrees to code the motion vectors in conjunction with explicit codelength minimization yields further improvement. They describe a dynamic-programming algorithm for choosing a quadtree to minimize the codelength.
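The core idea, choosing a motion vector by codelength rather than by prediction error alone, can be sketched with a toy Lagrangian search. All names (`choose_mv`, `mv_bits`) and the bit-cost model are illustrative assumptions, not the paper's p×64 implementation:

```python
def sad(block, ref):
    # Sum of absolute differences: the usual prediction-error criterion.
    return sum(abs(a - b) for a, b in zip(block, ref))

def mv_bits(mv):
    # Toy bit-cost model (an assumption): longer vectors cost more bits,
    # as they would under a variable-length motion-vector code.
    return 1 + 2 * (abs(mv[0]) + abs(mv[1]))

def choose_mv(block, candidates, lam):
    """Pick the motion vector minimizing distortion + lam * codelength.

    candidates maps each candidate vector to its reference block.
    With lam = 0 this degenerates to plain error minimization.
    """
    return min(candidates,
               key=lambda mv: sad(block, candidates[mv]) + lam * mv_bits(mv))
```

With a nonzero multiplier, a slightly worse prediction that is cheap to signal can beat a perfect match reached by an expensive vector, which is the rate-distortion tradeoff the paper measures.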
Citations: 14
Entropy-constrained tree-structured vector quantizer design by the minimum cross entropy principle
Pub Date: 1994-03-29 DOI: 10.1109/DCC.1994.305908
K. Rose, David J. Miller, A. Gersho
The authors address the variable rate tree-structured vector quantizer design problem, wherein the rate is measured by the quantizer's entropy. For this problem, tree pruning via the generalized Breiman-Friedman-Olshen-Stone (1980) algorithm obtains solutions which are optimal over the restricted solution space consisting of all pruned trees derivable from an initial tree. However, the restrictions imposed on such solutions have several implications. In addition to depending on the tree initialization, growing and pruning solutions result in tree-structured vector quantizers which use a sub-optimal encoding rule. To remedy the latter problem, they consider a "tree-constrained" version of entropy-constrained vector quantizer design. This leads to an optimal tree-structured encoding rule for the leaves. In practice, though, improvements obtained in this fashion are limited by the tree initialization, as well as by the sub-optimal encoding performed at non-leaf nodes. To address these problems, they develop a joint optimization method which is inspired by the deterministic annealing algorithm for data clustering, and which extends their previous work on tree-structured vector quantization. The method is based on the principle of minimum cross entropy, using informative priors to approximate the unstructured solution while imposing the structural constraint. As in the original deterministic annealing method, the number of distinct codevectors (and hence the tree) grows by a sequence of bifurcations in the process, which occur as solutions of a free energy minimization. Their method obtains performance gains over growing and pruning methods for variable rate quantization of Gauss-Markov and Gaussian mixture sources.
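The entropy-constrained encoding rule that underlies such designs can be sketched as a Lagrangian nearest-codeword search: squared error plus λ times the codeword's ideal codelength, -log2 of its probability. A hypothetical scalar sketch (`ecvq_encode` is an illustrative name, not the authors' tree-structured method):

```python
import math

def ecvq_encode(x, codewords, probs, lam):
    """Entropy-constrained encoding rule (scalar sketch).

    Maps x to the index minimizing squared error plus lam times the
    codeword's ideal codelength -log2(p). With lam = 0 this is plain
    nearest-neighbor encoding; larger lam favors probable (cheap) codewords.
    """
    return min(range(len(codewords)),
               key=lambda i: (x - codewords[i]) ** 2 - lam * math.log2(probs[i]))
```

The tree-structured version in the paper constrains this search to root-to-leaf paths, which is exactly where the sub-optimality they attack comes from.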
Citations: 8
Compression by induction of hierarchical grammars
Pub Date: 1994-03-29 DOI: 10.1109/DCC.1994.305932
C. Nevill-Manning, I. Witten, D. Maulsby
The paper describes a technique that constructs models of symbol sequences in the form of small, human-readable, hierarchical grammars. The grammars are both semantically plausible and compact. The technique can induce structure from a variety of different kinds of sequence, and examples are given of models derived from English text, C source code and a sequence of terminal control codes. It explains the grammatical induction technique, demonstrates its application to three very different sequences, evaluates its compression performance, and concludes by briefly discussing its use as a method for knowledge acquisition.
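The flavor of grammar induction can be shown with a toy digram-replacement loop, in the spirit of the paper's technique but not its exact (incremental) algorithm; `induce` is an illustrative name:

```python
def induce(seq):
    """Batch digram-replacement sketch of hierarchical grammar induction.

    Repeatedly replaces the most frequent adjacent symbol pair with a new
    nonterminal rule, until no pair occurs twice. Returns the compressed
    top-level sequence and the rule dictionary.
    """
    rules = {}
    next_rule = 0
    while True:
        pairs = {}
        for i in range(len(seq) - 1):
            pairs.setdefault((seq[i], seq[i + 1]), []).append(i)
        best = max(pairs, key=lambda p: len(pairs[p]), default=None)
        if best is None or len(pairs[best]) < 2:
            return seq, rules
        name = f"R{next_rule}"
        next_rule += 1
        rules[name] = list(best)
        # Left-to-right, non-overlapping replacement of the chosen digram.
        out, i = [], 0
        while i < len(seq):
            if i < len(seq) - 1 and (seq[i], seq[i + 1]) == best:
                out.append(name)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
```

For `list("abcabc")` this yields rules like R0 → a b and R1 → R0 c, with top-level sequence R1 R1: a small hierarchical grammar for the repeated substring.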
Citations: 94
Lossless image compression with lossy image using adaptive prediction and arithmetic coding
Pub Date: 1994-03-29 DOI: 10.1109/DCC.1994.305924
Seishi Takamura, M. Takagi
Lossless gray scale image compression is necessary for many purposes, such as medical imaging and image databases. Lossy images are important as well, because of their high compression ratio. The authors propose a lossless image compression scheme using a lossy image generated with the JPEG-DCT scheme. The concept is to first send a JPEG-compressed lossy image, then send residual information, and reconstruct the original image using both the lossy image and the residual information. 3D adaptive prediction and adaptive arithmetic coding are used, which fully exploit the statistical parameters of the distribution of the symbol source. The optimal number of neighbor pixels and lossy pixels used for prediction is discussed. The compression ratio is better than in previous work and quite close to the original lossless algorithm.
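The two-stage structure, lossy base plus lossless residual, can be sketched with a toy uniform quantizer standing in for JPEG-DCT; the paper's actual scheme codes the residual with 3D adaptive prediction and arithmetic coding, which this sketch omits:

```python
def lossy(pixels, step=8):
    # Stand-in for the JPEG-DCT stage (an assumption for illustration):
    # uniform quantization to multiples of `step`.
    return [step * round(p / step) for p in pixels]

def encode(pixels):
    """Send a lossy base image first, then the residual."""
    base = lossy(pixels)
    residual = [p - b for p, b in zip(pixels, base)]
    return base, residual  # residual is small and low-entropy

def decode(base, residual):
    """Reconstruct the original losslessly from base + residual."""
    return [b + r for b, r in zip(base, residual)]
```

The point of the structure is that a decoder holding only the base already has a usable picture, while the residual upgrades it to bit-exact lossless.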
Citations: 25
Differential state quantization of high order Gauss-Markov process
Pub Date: 1994-03-29 DOI: 10.1109/DCC.1994.305913
A. Bist
Analyzes a differential technique of tracking and quantizing a continuous time Gauss-Markov process using the process and its derivatives. By using fine quantization approximations the author derives expressions for the time-average smoothed error. Analytical bounds are derived on the overall smoothed error and it is confirmed that the differential scheme outperforms vector quantization of the scalar process, state component quantization, and state vector quantization. It is shown that when the overall rate R in bits per second is high, the optimal smoothed error varies as 1/R^3 for the differential scheme. This is better than the performance of DPCM and a modified vector DPCM, analyzed under the same framework. For both of these schemes the asymptotic variation of the smoothed error is 1/R^2 at rate R. For differential state quantization, the resulting optimal size of the vector quantizers is small and can be used in practice.
Citations: 4
Enhancement of block transform coded images using residual spectra adaptive postfiltering
Pub Date: 1994-03-29 DOI: 10.1109/DCC.1994.305940
I. Linares, R. Mersereau, Mark J. T. Smith
Image block transform techniques usually introduce several types of spatial periodic distortion which are mostly noticeable at low bit rates. One way to reduce these artifacts to obtain an acceptable visual quality level is to postfilter the decoded images using nonlinear space-variant adaptive filters derived from the structural relationships and residual spectral information provided by the discrete-time Fourier transform (DTFT) of block transforms such as the discrete cosine transform (DCT) and the lapped orthogonal transform (LOT). A method for analyzing and filtering the DCT blocking noise and the LOT ringing noise for moderately and highly compressed images is described and several test cases are presented. A generalized Fourier analysis of the block transform distortion as seen in the frequency domain is discussed in conjunction with an outline of a separable adaptive postfiltering algorithm for decoded image enhancement.
Citations: 7
Visibility of DCT basis functions: effects of display resolution
Pub Date: 1994-03-29 DOI: 10.1109/DCC.1994.305945
A. Watson, J. Solomon, A. Ahumada
The authors have examined the variation in visibility of single DCT basis functions as a function of display visual resolution. They have shown that the existing model (Ahumada and Peterson, 1992; and Peterson et al., 1993) accommodates resolutions of 16, 32, and 64 pixels/degree, provided that one parameter, the peak sensitivity s0, is allowed to vary. Variations in this parameter are to some extent consistent with spatial summation, although sensitivity is lower at the lowest resolution than summation would predict. Practical DCT quantization matrices must take into account both the visibility of single basis functions, and the spatial pooling of artifacts from block to block. Peterson et al. (1993) showed that to a first approximation this pooling is consistent with probability summation. If one considers two images of equivalent size in degrees, but visual resolutions differing by a factor of two, then the sensitivity to individual artifacts would be lower by 4^(1/4) in the higher resolution image due to the smaller block size in degrees, but higher by 4^(1/4) in the same image due to the greater number of blocks. Thus the same matrix should be used with both. The point of the illustration is that the overall gain of the best quantization matrix must take into account both display resolution and image size.
Citations: 5
Multiplication and division free adaptive arithmetic coding techniques for bi-level images
Pub Date: 1994-03-29 DOI: 10.1109/DCC.1994.305934
Linh Huynh
Two new approximate methods for coding the binary alphabet with negligible loss of compression efficiency are proposed. An overview is provided of arithmetic coding and bi-level image modeling; the proposed methods are then described, followed by their implementation. A theoretical discussion of the compression performance is also included, with an empirical evaluation of the proposed techniques. The focus throughout is on encoding; the decoding process is similar.
Citations: 9
Parsing algorithms for dictionary compression on the PRAM
Pub Date: 1994-03-29 DOI: 10.1109/DCC.1994.305921
D. Hirschberg, L. M. Stauffer
Parallel algorithms for lossless data compression via dictionary compression using optimal and greedy parsing strategies are described. Dictionary compression removes redundancy by replacing substrings of the input by references to strings stored in a dictionary. Given a static dictionary stored as a suffix tree, the authors present a parallel random access concurrent read, concurrent write (PRAM CREW) algorithm for optimal compression which runs in O(M+log M log n) time with O(nM^2) processors, where M is the maximum length of any dictionary entry. They also describe an O(M+log n) time and O(n) processor algorithm for greedy parsing given a static or sliding-window dictionary. For sliding-window compression, a different approach finds the greedy parsing in O(log n) time using O(nM log M/log n) processors. Their algorithms are practical in the sense that their analysis elicits small constants.
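Greedy parsing itself, which the paper parallelizes, can be sketched sequentially: at each position take the longest dictionary string matching the input. A minimal sketch with an assumed literal fallback for symbols not covered by the dictionary:

```python
def greedy_parse(text, dictionary):
    """Greedy longest-match parse of text against a static dictionary.

    dictionary is a set of strings. At each position the longest matching
    entry is emitted; a single literal character is emitted as a fallback
    (an assumption of this sketch) when nothing matches.
    """
    out, i = [], 0
    max_len = max(len(w) for w in dictionary)
    while i < len(text):
        for l in range(min(max_len, len(text) - i), 0, -1):
            if text[i:i + l] in dictionary:
                out.append(text[i:i + l])
                i += l
                break
        else:
            out.append(text[i])  # literal fallback
            i += 1
    return out
```

Greedy parsing is locally optimal only; the paper's "optimal" algorithm instead finds the parse with the fewest phrases overall, which is what makes its parallelization harder.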
Citations: 23