
Latest publications from Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)

A perceptual-based video coder for error resilience
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785678
Yi-jen Chiu
Summary form only given. Error resilience is an important requirement when errors occur during video transmission. Video transmitted over the Internet is usually a packetized stream, so the common errors for Internet video are packet losses, caused by buffer overflows in routers, late arrival of packets, and bit errors in the network. Such a loss results in single or multiple macroblock losses in the decoding process and causes severe degradation in perceived quality as well as error propagation. We present a perceptual preprocessor, based on the insensitivity of the human visual system to mild changes in pixel intensity, that segments video into regions according to the perceptibility of picture changes. Using the segmentation information, we determine which macroblocks require motion estimation and which need to be included in the second layer. The second layer contains a coarse (less finely quantized) version of the most perceptually-critical picture information, providing redundancy used to reconstruct lost coding blocks. This information is transmitted in a separate packet, which provides path and time diversity when packet losses are uncorrelated. This combination of methods yields a significant improvement in received quality when losses occur, without significantly degrading the video in a low-bit-rate channel. The proposed scheme scales easily in data bitrate, picture quality, and computational complexity for use on different platforms. Because the data in our layered video stream is standards-compliant, the proposed schemes require no extra non-standard device to encode/decode the video, and they are easily integrated into current video standards such as H.261/263, MPEG1/MPEG2 and the forthcoming MPEG4.
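
The segmentation step described above can be illustrated with a minimal sketch: classify each macroblock by whether its frame-to-frame intensity change is likely to be visible. Everything here is an assumption for illustration (the 16x16 block size, the mean-absolute-difference measure, and the `jnd` visibility threshold); the abstract does not specify the paper's actual human-visual-system model.

```python
import numpy as np

def classify_macroblocks(prev_frame, curr_frame, block=16, jnd=4.0):
    """Label each macroblock of curr_frame as perceptually 'active'
    (visible change vs. prev_frame) or 'inactive'. Active blocks would
    get motion estimation and a coarse copy in the redundant second
    layer; inactive blocks are skipped. Hypothetical sketch only.
    """
    h, w = curr_frame.shape
    rows, cols = h // block, w // block
    active = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            sl = (slice(r * block, (r + 1) * block),
                  slice(c * block, (c + 1) * block))
            # Mean absolute intensity change over the macroblock.
            mad = np.abs(curr_frame[sl].astype(float)
                         - prev_frame[sl].astype(float)).mean()
            active[r, c] = mad > jnd  # change exceeds visibility threshold
    return active
```
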
Citations: 5
Towards a calibrated corpus for compression testing
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785711
M. Titchener, P. Fenwick, M. C. Chen
Summary form only given. A mini-corpus of twelve 'calibrated' binary-data files has been produced for the systematic evaluation of compression algorithms. These are generated within the framework of a deterministic theory of string complexity. Here the T-complexity of a string $x$ (measured in taugs) is defined as $C_T(x)=\sum_i \log_2(k_i+1)$, where the positive integers $k_i$ are the T-expansion parameters of the corresponding string production process. $C_T(x)$ is observed to be the logarithmic integral of the total information content $I_x$ of $x$ (measured in nats), i.e., $C_T(x)=\mathrm{li}(I_x)$. The average entropy is $\bar{H}_x=I_x/|x|$, i.e., the total information content divided by the length of $x$; thus $C_T(x)=\mathrm{li}(\bar{H}_x\,|x|)$. Alternatively, the information rate along a string may be described by an entropy function $H_x(n)$, $0\le n\le|x|$. Assuming that $H_x(n)$ is continuously integrable along the length of $x$, then $I_x=\int_0^{|x|}H_x(n)\,dn$, and thus $C_T(x)=\mathrm{li}\!\left(\int_0^{|x|}H_x(n)\,dn\right)$. Solving for $H_x(n)$, that is, differentiating both sides and rearranging, we get $H_x(n)=\frac{dC_T(x|_n)}{dn}\,\log_e\!\left(\mathrm{li}^{-1}(C_T(x|_n))\right)$. With $x$ being in fact discrete, and the T-complexity function being computed in terms of the discrete T-augmentation steps, we may accordingly re-express the equation in terms of the T-prefix increments: $\delta n\approx\Delta_i|x|=k_i\,|p_i|$, and from the definition of $C_T(x)$, $\delta C_T(x)$ is replaced by $\Delta_i C_T(x)=\log_2(k_i+1)$. The average slope over the $i$-th T-prefix $p_i$ increment is then simply $\Delta_i C_T(x)/\Delta_i|x|=\log_2(k_i+1)/(k_i\,|p_i|)$. The entropy function is now replaced by a discrete approximation.
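
The discrete approximation at the end of the abstract is directly computable. The sketch below transcribes those equations, assuming SciPy for the logarithmic integral (via $\mathrm{li}(x)=\mathrm{Ei}(\ln x)$) and a bracketed root search for its inverse; the inputs $k_i$ and $|p_i|$ would come from a T-decomposition routine, which is not shown.

```python
import numpy as np
from scipy.special import expi
from scipy.optimize import brentq

def li(x):
    """Logarithmic integral li(x) = Ei(ln x), for x > 1."""
    return expi(np.log(x))

def t_entropy_profile(ks, prefix_lens):
    """Discrete entropy profile of a string from its T-decomposition,
    following the abstract: per increment i, Delta_i C_T = log2(k_i+1),
    Delta_i |x| = k_i * |p_i|, and H = (Delta C_T / Delta n) * ln(I_x)
    with I_x = li^{-1}(C_T). Returns (positions, entropy estimates).
    """
    c_t, n = 0.0, 0
    positions, entropies = [], []
    for k, plen in zip(ks, prefix_lens):
        c_t += np.log2(k + 1)        # running T-complexity in taugs
        n += k * plen                # running string length
        # Invert li numerically to recover total information I_x (nats).
        i_x = brentq(lambda y: li(y) - c_t, 1.0 + 1e-9, 1e12)
        slope = np.log2(k + 1) / (k * plen)
        positions.append(n)
        entropies.append(slope * np.log(i_x))
    return positions, entropies
```
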
Citations: 6
Rate-distortion analysis of spike processes
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755657
C. Weidmann, M. Vetterli
Recent rate-distortion analyses of image transform coders are based on a trade-off between the lossless coding of coefficient positions and the lossy coding of the coefficient values. We propose spike processes as a tool that allows a more fundamental trade-off, namely between lossy position coding and lossy value coding. We investigate the Hamming distortion case and give analytic results for single and multiple spikes. We then consider upper bounds for a single Gaussian spike with squared-error distortion. The results show a rate-distortion behavior that switches from linear at low rates to exponential at high rates.
Citations: 29
Binary pseudowavelets and applications to bilevel image processing
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755686
S. Pigeon, Yoshua Bengio
This paper shows the existence of binary pseudowavelets: bases on the binary domain that exhibit some of the properties of wavelets, such as multiresolution reconstruction and compact support. The binary pseudowavelets are defined on $B^n$ (binary vectors of length $n$) and are operated upon with the binary operators logical AND and exclusive OR. The forward transform, or analysis, is the decomposition of a binary vector into its constituent binary pseudowavelets. Binary pseudowavelets allow multiresolution, progressive reconstruction of binary vectors by using progressively more coefficients in the inverse transform. Binary pseudowavelet bases, being sparse matrices, also provide for fast transforms; moreover, pseudowavelets rely on hardware-friendly operations for efficient software and hardware implementation.
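
The AND/XOR algebra in this abstract is linear algebra over GF(2): synthesis is a matrix-vector product mod 2 (bitwise ANDs accumulated by XOR), and analysis is Gaussian elimination with XOR row operations. The sketch below shows progressive reconstruction with a generic invertible binary basis; the triangular basis in the usage comment is a stand-in for illustration, not the paper's pseudowavelet construction.

```python
import numpy as np

def gf2_solve(B, x):
    """Analysis: solve B c = x over GF(2) by Gauss-Jordan elimination.
    B must be an invertible n x n binary (0/1 integer) matrix."""
    n = B.shape[0]
    A = np.concatenate([B % 2, (x % 2).reshape(-1, 1)], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r, col])
        A[[col, pivot]] = A[[pivot, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]        # row elimination is XOR over GF(2)
    return A[:, -1]                   # coefficient vector c

def gf2_synth(B, coeffs, m=None):
    """Synthesis: XOR together the basis vectors with coefficient 1.
    Passing m < n uses only the first m coefficients, giving the
    coarse, progressive reconstruction mentioned in the abstract."""
    c = coeffs.copy()
    if m is not None:
        c[m:] = 0
    return (B @ c) % 2

# Usage with a stand-in basis (upper-triangular ones, invertible mod 2):
#   B = np.triu(np.ones((8, 8), dtype=int))
#   c = gf2_solve(B, x); x_coarse = gf2_synth(B, c, m=4)
```
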
Citations: 5
Progressive joint source-channel coding in feedback channels
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755663
Jin Lu, Aria Nosratinia, B. Aazhang
It is well known that Shannon's separation result does not hold under finite computation or finite delay constraints, thus joint source-channel coding is of great interest for practical reasons. For progressive source-channel coding systems, efficient codes have been proposed for feedforward channels and the important problem of rate allocation between the source and channel codes has been solved. For memoryless channels with feedback, the rate allocation problem was studied by Chande et al. (1998). In this paper, we consider the case of fading channels with feedback. Feedback routes are provided in many existing standard wireless channels, making rate allocation with feedback a problem of considerable practical importance. We address the question of rate allocation between the source and channel codes in the forward channel, in the presence of feedback information and under a distortion cost function. We show that the presence of feedback shifts the optimal rate allocation point, resulting in higher rates for error-correcting codes and smaller overall distortion. Simulations on both memoryless and fading channels show that the presence of feedback allows up to 1 dB improvement in PSNR compared to the similarly optimized feedforward scheme.
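
The forward-channel allocation question can be posed concretely as a small search: split a fixed bit budget between source bits and channel-code parity so that expected distortion is minimized. The sketch below is a generic exhaustive version under a simplified single-block channel model; the `distortion` and `fail_prob` functions are caller-supplied assumptions, and the paper's allocation algorithm (and its feedback-driven update of the split) is not reproduced here.

```python
def allocate_rate(total_bits, distortion, fail_prob, step=64):
    """Split total_bits between source coding and channel-code parity
    to minimize expected distortion.

    distortion(s) -- distortion when s source bits are decoded
    fail_prob(c)  -- probability that a code with c parity bits fails
    Returns (expected_distortion, source_bits, parity_bits).
    """
    best = None
    for parity in range(0, total_bits + 1, step):
        source = total_bits - parity
        # On decoding failure, fall back to the zero-rate reconstruction.
        exp_d = (fail_prob(parity) * distortion(0)
                 + (1 - fail_prob(parity)) * distortion(source))
        if best is None or exp_d < best[0]:
            best = (exp_d, source, parity)
    return best
```
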
Citations: 16
Reduced comparison search for the exact GLA
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755651
T. Kaukoranta, P. Fränti, O. Nevalainen
This paper introduces a new method for reducing the number of distance calculations in the generalized Lloyd algorithm (GLA), a widely used method for constructing a codebook in vector quantization. The reduced comparison search detects the activity of the code vectors and utilizes it in the classification of the training vectors. For training vectors whose current code vector has not been modified, we calculate distances only to the active code vectors. A large proportion of the distance calculations can be omitted without sacrificing the optimality of the partition. The new method is included in several fast GLA variants, reducing their running times by over 50% on average.
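
The description above translates almost directly into code. The sketch below is one reading of the reduced comparison search (not the authors' implementation): after each centroid update, mark the code vectors that moved as active; a training vector whose current code vector did not move is compared only against the active code vectors, with its current cell kept as a candidate.

```python
import numpy as np

def gla_reduced_search(X, codebook, iters=20):
    """GLA with reduced comparison search. X: (n, dim) training set,
    codebook: (k, dim) initial code vectors."""
    C = codebook.astype(float).copy()
    n, k = len(X), len(C)
    # Initial full search so that every assignment is valid.
    assign = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
    for _ in range(iters):
        old_C = C.copy()
        for j in range(k):                     # Lloyd centroid update
            members = X[assign == j]
            if len(members):
                C[j] = members.mean(axis=0)
        active = np.any(old_C != C, axis=1)    # which code vectors moved
        for i in range(n):
            if active[assign[i]]:
                cand = np.arange(k)            # own cell moved: full search
            else:
                cand = np.flatnonzero(active)  # only movers can win i over
                if len(cand) == 0:
                    continue                   # converged: nothing changed
                cand = np.append(cand, assign[i])
            d = ((X[i] - C[cand]) ** 2).sum(-1)
            assign[i] = cand[np.argmin(d)]
    return C, assign
```
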
Citations: 7
Two space-economical algorithms for calculating minimum redundancy prefix codes
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755676
R. Milidiú, A. Pessoa, E. Laber
The minimum redundancy prefix code problem is to determine, for a given list $W=[w_1,\ldots,w_n]$ of $n$ positive symbol weights, a list $L=[l_1,\ldots,l_n]$ of $n$ corresponding integer codeword lengths such that $\sum_{i=1}^{n} 2^{-l_i}\le 1$ and $\sum_{i=1}^{n} w_i l_i$ is minimized. Let us consider the case where $W$ is already sorted. In this case, the output list $L$ can be represented by a list $M=[m_1,\ldots,m_H]$, where $m_l$, for $l=1,\ldots,H$, denotes the multiplicity of the codeword length $l$ in $L$, and $H$ is the length of the greatest codeword. Fortunately, $H$ is proved to be $O(\min\{\log(1/p_1),\,n\})$, where $p_1$ is the smallest symbol probability, given by $w_1/\sum_{i=1}^{n} w_i$. We present the F-LazyHuff and the E-LazyHuff algorithms. F-LazyHuff runs in $O(n)$ time but requires $O(\min\{H^2,n\})$ additional space. On the other hand, E-LazyHuff runs in $O(n\log(n/H))$ time, requiring only $O(H)$ additional space. Finally, since our two algorithms have the advantage of not writing to the input buffer during the code calculation, we discuss some applications where this feature is very useful.
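
For concreteness, the sketch below computes a valid $L$ for sorted weights with the textbook Huffman construction and then forms the multiplicity list $M$; it only illustrates the representations named in the abstract and is emphatically not the space-economical F-LazyHuff or E-LazyHuff (the simple leaf-list bookkeeping here can even cost quadratic copying).

```python
import heapq
from collections import Counter

def huffman_lengths(weights):
    """Optimal integer codeword lengths l_i for positive weights."""
    n = len(weights)
    if n == 1:
        return [1]
    depths = [0] * n
    # Heap entries: (subtree weight, tiebreaker, leaves in subtree).
    heap = [(w, i, [i]) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    tie = n
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)
        w2, _, b = heapq.heappop(heap)
        for leaf in a + b:            # each merge deepens its leaves by 1
            depths[leaf] += 1
        heapq.heappush(heap, (w1 + w2, tie, a + b))
        tie += 1
    return depths

def multiplicity_list(lengths):
    """M = [m_1, ..., m_H], where m_l counts codewords of length l."""
    H = max(lengths)
    cnt = Counter(lengths)
    return [cnt.get(l, 0) for l in range(1, H + 1)]

# Example: huffman_lengths([1, 1, 2, 3, 5]) -> [4, 4, 3, 2, 1]
#          multiplicity_list([4, 4, 3, 2, 1]) -> [1, 1, 1, 2]
```
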
Citations: 15
The effect of flexible parsing for dynamic dictionary-based data compression
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755673
Yossi Matias, N. Rajpoot, S. C. Sahinalp
We report on the performance evaluation of greedy parsing with a single-step lookahead, denoted as flexible parsing. We also introduce a new fingerprint-based data structure which enables efficient linear-time implementation.
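
As a concrete reference point: greedy parsing always takes the longest dictionary match at the current position, while flexible parsing also tries every shorter first phrase and keeps the choice that, combined with the *next* greedy match, advances furthest. The sketch below shows that rule against a static, prefix-closed phrase dictionary; the paper's setting is a dynamic (LZW-style) dictionary with a fingerprint-based data structure for efficiency, neither of which is reproduced here.

```python
def longest_match(dictionary, text, pos):
    """Length of the longest dictionary phrase starting at text[pos:].
    Assumes the dictionary is prefix-closed (as LZ78/LZW ones are)."""
    k = 0
    while pos + k < len(text) and text[pos:pos + k + 1] in dictionary:
        k += 1
    return k

def flexible_parse(dictionary, text):
    """Greedy parsing with a single-step lookahead."""
    phrases, pos = [], 0
    while pos < len(text):
        m = max(longest_match(dictionary, text, pos), 1)  # 1 = literal fallback
        best_len = m
        best_reach = pos + m + longest_match(dictionary, text, pos + m)
        for l in range(1, m):              # try every shorter first phrase
            reach = pos + l + longest_match(dictionary, text, pos + l)
            if reach > best_reach:
                best_len, best_reach = l, reach
        phrases.append(text[pos:pos + best_len])
        pos += best_len
    return phrases

# Example: flexible_parse({"a", "ab", "b", "ba", "bab"}, "abab") -> ["ab", "ab"]
```
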
Citations: 19
On taking advantage of similarities between parameters in lossless sequential coding
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785670
J. Åberg
Summary form only given. In sequential lossless data compression algorithms, the data stream is often transformed into short subsequences that are modeled as memoryless. It is then desirable to use any information that each sequence might provide about the behaviour of other sequences that can be expected to have similar properties. Here we examine one such situation, as follows. We want to encode, using arithmetic coding with a sequential estimator, an $M$-ary memoryless source with unknown parameters $\theta$, from which we have already encoded a sequence $x^n$. In addition, both the encoder and the decoder have observed a sequence $y^n$ that is generated independently by another source with unknown parameters $\tilde{\theta}$ that are known to be "similar" to $\theta$ under a pseudodistance $\delta(\theta,\tilde{\theta})$ that is approximately equal to the relative entropy. Known to both sides is also a number $d$ such that $\delta(\theta,\tilde{\theta})\le d$. For a stand-alone memoryless source, the worst-case average redundancy of the $(n+1)$-th encoding is lower bounded by $0.5(M-1)/n+O(1/n^2)$, and the Dirichlet estimator is close to optimal for this case. We show that this bound also holds for the case with side information as described above, meaning that we can improve, at best, the $O(1/n^2)$ term. We define a frequency weighted estimator for this purpose. Applying the frequency weighted estimator to the PPM algorithm (Bell et al., 1989), by weighting order-4 statistics into an order-5 model with $d$ estimated during encoding, yields improvements consistent with the bounds above: in practice we improve the performance by about 0.5 bits per active state of the model, a gain of approximately 20000 bits on the Calgary Corpus.
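
The abstract leaves the frequency weighted estimator implicit; one plausible shape, shown below purely as an assumption, is a Dirichlet-style sequential estimator in which the side sequence's symbol counts enter with a discount weight `w` (which would shrink as the similarity bound $d$ grows). With `w = 0` this reduces to the ordinary Dirichlet estimator the abstract compares against.

```python
def weighted_estimate(counts_x, counts_y, symbol, w=0.5, alpha=0.5):
    """P(next symbol) from own counts plus down-weighted side counts.

    counts_x -- symbol counts from the target sequence x^n
    counts_y -- symbol counts from the similar side sequence y^n
    w        -- discount for side counts (assumed link to the bound d)
    alpha    -- Dirichlet prior parameter (0.5 = KT estimator)
    """
    M = len(counts_x)
    num = counts_x[symbol] + w * counts_y[symbol] + alpha
    den = sum(counts_x) + w * sum(counts_y) + alpha * M
    return num / den
```
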
Citations: 0
Finite automata and regularized edge-preserving wavelet transform scheme
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785687
Sung-Wai Hong, P. Bao
Summary form only given. We present an edge-preserving image compression technique based on the wavelet transform and iterative constrained least squares regularization. This approach treats image reconstruction after lossy compression as a process of image restoration: estimating the source image from its degraded version. It utilizes the edge information detected from the source image as a priori knowledge for the subsequent reconstruction. The reconstruction of DWT-coded images is formulated as a regularized image recovery problem that uses the edge information as the a priori knowledge about the source image to recover details and to reduce the ringing artifacts of the DWT-coded image. To balance the rate spent on edge information against that spent on DWT-coded image data, a scheme based on generalized finite automata (GFA) is used; GFA replaces vector quantization in order to achieve adaptive encoding of the edge image.
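
The regularized-recovery idea admits a compact sketch: smooth the decoded image where no edge was detected, and leave detected edges untouched. The gradient-descent formulation below (a data term plus an edge-masked Laplacian penalty) is a generic stand-in under stated assumptions; the paper's method additionally operates on the wavelet-transformed data and codes the edge map with GFA, neither of which is reproduced here.

```python
import numpy as np

def laplacian(img):
    """Discrete 2-D Laplacian via shifted copies (periodic boundary)."""
    return (np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)

def edge_preserving_restore(y, edge_mask, lam=0.1, step=0.05, iters=200):
    """Minimize ||x - y||^2 + lam * ||(1 - edge_mask) * Lap(x)||^2 by
    gradient descent: ringing is smoothed off-edge, edges are preserved.
    y: decoded image (2-D array); edge_mask: 1 on detected edges."""
    x = y.astype(float).copy()
    keep = 1.0 - edge_mask.astype(float)  # regularize only off-edge pixels
    for _ in range(iters):
        # Gradient: 2(x - y) + 2*lam * Lap^T(keep^2 * Lap(x)); the
        # Laplacian is self-adjoint under periodic boundary conditions.
        grad = 2 * (x - y) + 2 * lam * laplacian(keep * keep * laplacian(x))
        x -= step * grad
    return x
```
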
Citations: 1