
Latest Publications: Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)

Real-time VBR rate control of MPEG video based upon lexicographic bit allocation
Pub Date : 1999-10-01 DOI: 10.1109/DCC.1999.755687
Dzung T. Hoang
The MPEG-2 video standard describes a bitstream syntax and a decoder model but leaves many details of the encoding process unspecified, such as encoder bit rate control. The standard defines a hypothetical decoder model, called the video buffering verifier, that can operate in either constant-bit-rate or variable-bit-rate modes. We present a low-complexity algorithm for variable-bit-rate control suitable for low-delay, real-time applications. The algorithm is motivated by recent results in lexicographic bit allocation. The basic algorithm switches between constant-quality and constant-bit-rate modes based on changes in the fullness of the decoding buffer in the video buffering verifier. We show how the algorithm can be applied either to produce a desired quality level or to meet a global bit budget. Simulation results show that the algorithm compares favorably to the optimal lexicographic algorithm.
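The buffer-driven mode switch at the heart of the algorithm lends itself to a compact sketch. The Python fragment below is a minimal illustration, assuming made-up fullness thresholds and a generic CBR fallback supplied by the caller; it is not the paper's actual controller or parameter set.

```python
# Minimal sketch of a two-mode VBR controller: run at constant quality
# until decoder-buffer fullness approaches either bound, then fall back
# to CBR-style control.  Thresholds and the buffer model are
# illustrative assumptions, not the paper's parameters.

class VBRController:
    def __init__(self, buffer_size, target_q, hi=0.9, lo=0.1):
        self.buffer_size = buffer_size    # decoder (VBV) buffer size in bits
        self.fullness = buffer_size / 2   # assumed initial fullness
        self.target_q = target_q          # quantization scale held in CQ mode
        self.hi, self.lo = hi, lo         # fullness fractions triggering CBR mode
        self.mode = "CQ"

    def update(self, picture_bits, channel_bits):
        """Advance the decoder-buffer model by one picture interval."""
        self.fullness += channel_bits - picture_bits
        f = self.fullness / self.buffer_size
        self.mode = "CBR" if f > self.hi or f < self.lo else "CQ"

    def quant_scale(self, cbr_q):
        # Hold the quality target in CQ mode; otherwise defer to the
        # scale chosen by an ordinary CBR rate-control loop.
        return self.target_q if self.mode == "CQ" else cbr_q
```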
Citations: 6
An asymptotically optimal data compression algorithm based on an inverted index
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785708
P. Subrahmanya
Summary form only given. An alternate approach to representing a data sequence is to associate with each source letter the list of locations at which it appears in the data sequence. We present a data compression algorithm based on a generalization of this idea. The algorithm parses the data with respect to a static dictionary of phrases and associates with each phrase in the dictionary a list of locations at which the phrase appears in the parsed data. Each list of locations is then run-length encoded. This collection of run-length encoded lists constitutes the compressed representation of the data. We refer to the collection of lists as an inverted index. While in information retrieval systems the inverted index is an adjunct to the main database, used to speed up searching, we regard it here as a self-contained representation of the database itself. Further, our inverted index does not necessarily list every occurrence of a phrase in the data, only every occurrence in the parsing. This allows us to be asymptotically optimal in terms of compression, though at the cost of a loss in searching efficiency. We discuss this trade-off between compression and searching efficiency. We prove that in terms of compression, this algorithm is asymptotically optimal universally over the class of discrete memoryless sources. We also show that pattern matching can be performed efficiently in the compressed domain. Compressing and storing data in this manner may be useful in applications which require frequent searching of a large but mostly static database.
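As a toy illustration of the representation (the paper's dictionary construction and entropy coder are not reproduced here), the sketch below greedily parses a string against a static phrase dictionary and stores each phrase's occurrence positions in the parsing as gaps, which is the sequence a run-length/entropy coder would then compress.

```python
# Toy inverted-index representation: greedy longest-match parse against
# a static dictionary, then one gap-encoded occurrence list per phrase.
# Illustrative sketch only; not the paper's construction.

def parse_to_inverted_index(data, dictionary):
    index = {phrase: [] for phrase in dictionary}
    by_length = sorted(dictionary, key=len, reverse=True)  # longest match first
    pos = 0
    while pos < len(data):
        for phrase in by_length:
            if data.startswith(phrase, pos):
                index[phrase].append(pos)
                pos += len(phrase)
                break
        else:
            raise ValueError("dictionary must cover the input alphabet")
    # Store gaps between successive occurrences instead of raw positions;
    # small gaps are cheap under run-length/entropy coding.
    return {p: [locs[0]] + [b - a for a, b in zip(locs, locs[1:])]
            for p, locs in index.items() if locs}

print(parse_to_inverted_index("abababba", {"ab", "ba", "a", "b"}))
# {'ab': [0, 2, 2], 'ba': [6]}  (key order may vary)
```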
Citations: 1
Joint image compression and classification with vector quantization and a two dimensional hidden Markov model
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755650
Jia Li, R. Gray, R. Olshen
We present an algorithm to achieve good compression and classification for images using vector quantization and a two dimensional hidden Markov model. The feature vectors of image blocks are assumed to be generated by a two dimensional hidden Markov model. We first estimate the parameters of the model, then design a vector quantizer to minimize a weighted sum of compression distortion and classification risk, the latter being defined as the negative of the maximum log likelihood of states and feature vectors. The algorithm is tested on both synthetic data and real image data. The extension to joint progressive compression and classification is discussed.
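A hedged sketch of the weighted design criterion, written as the corresponding encoding rule; it assumes the 2-D HMM likelihoods have already been collapsed into a per-codeword table (all names and shapes below are illustrative).

```python
import numpy as np

# Joint cost sketch: pick the codeword minimizing squared-error
# distortion plus lambda times classification risk.  neg_log_like[j, k]
# is an assumed precomputed table holding the negative log-likelihood of
# class k when codeword j is used; the 2-D HMM itself is not shown.

def encode_block(x, codebook, neg_log_like, true_class, lam):
    dist = np.sum((codebook - x) ** 2, axis=1)  # distortion per codeword
    risk = neg_log_like[:, true_class]          # classification penalty
    return int(np.argmin(dist + lam * risk))    # weighted joint criterion
```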
Citations: 16
Modified Viterbi algorithm for predictive TCQ
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785689
T. Ji, W. Stark
Summary form only given. A hybrid trellis-tree search algorithm, the H-PTCQ, which has the same storage requirement as PTCQ, is presented. We assume 2 survivor paths are kept at each state. It is straightforward to extend the algorithm to cases where n ≥ 2. Simulation is conducted over 20-second speech samples using DPCM, PTCQ and H-PTCQ. The data sequence is truncated into blocks of 1024 samples. The optimal codebooks for a memoryless Laplacian source are used. Predictor coefficients for the 1st-order and 2nd-order predictors are {0.8456} and {1.3435, -0.5888}, respectively. Simulation results indicate that both PTCQ and H-PTCQ have about 3 dB gain over DPCM. H-PTCQ with an 8-state convolutional code has about 0.2 to 0.3 dB gain over PTCQ for the same trellis size; H-PTCQ with a 256-state convolutional code has 0.05 to 0.1 dB gain over the PTCQ counterpart. Compared with a 2M-state PTCQ, the M-state H-PTCQ has the same computational complexity and uses half of the path memory. Since the performance improvement of an 8-state PTCQ over a 4-state PTCQ is about 0.4 dB for a similar set of data, the 0.2 to 0.3 dB gain obtained by using H-PTCQ is quite remarkable. Notice that H-PTCQ enables a transmitter to adapt performance according to resource constraints without changing PTCQ receivers. It is also interesting to observe that the 0.1 dB gain of an 8-state TCQ over a 4-state TCQ plus the 0.3 dB gain of H-PTCQ is about the gain of an 8-state PTCQ over a 4-state PTCQ. The results for 256-state quantization also agree with this observation. Therefore, we conclude that most of the gain of a 2M-state over an M-state PTCQ comes from the better internal TCQ quantizer, and mostly from the better prediction obtained by keeping more paths.
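A generic sketch of the hybrid search idea, keeping M survivor paths per trellis state instead of the usual single survivor (M = 2 matches the configuration above); the state set, transition function and branch metric are caller-supplied placeholders, not the paper's PTCQ.

```python
import heapq

# M-survivor Viterbi sketch: identical to the standard algorithm except
# that each state retains the M best partial paths after every step.
# `states` is a list, `next_states(s)` lists reachable states, and
# `branch_cost(x, s, t)` is the metric -- all illustrative placeholders.

def m_survivor_viterbi(samples, states, next_states, branch_cost, M=2):
    survivors = {s: [(0.0, [])] for s in states}
    for x in samples:
        new = {s: [] for s in states}
        for s, paths in survivors.items():
            for cost, path in paths:
                for t in next_states(s):
                    new[t].append((cost + branch_cost(x, s, t), path + [t]))
        # Prune back to the M best partial paths entering each state.
        survivors = {s: heapq.nsmallest(M, paths, key=lambda p: p[0])
                     for s, paths in new.items()}
    return min((p for ps in survivors.values() for p in ps),
               key=lambda p: p[0])
```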
Citations: 0
Zerotree coding of wavelet coefficients for image data on arbitrarily shaped support
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785691
A. Kawanaka, V. Algazi
Summary form only given. A wavelet coding method is proposed for arbitrarily shaped image data, applicable to object-oriented coding of moving pictures and to the efficient representation of texture data in computer graphics. The wavelet transform of an arbitrarily shaped image is obtained by applying the symmetrical extension technique at region boundaries and keeping the locations of the wavelet coefficients. For entropy coding of the wavelet coefficients, the zerotree coding technique is modified to work with arbitrarily shaped regions by treating missing (outside of the decomposed support) coefficients as insignificant and transmitting only those zerotree symbols which are in the decomposed support. The coding performance of the proposed method on several test images, which include a person, a teapot and a necklace, is compared to a shape-adaptive DCT and an ordinary DCT method applying low-pass extrapolation to the DCT blocks containing the region boundaries. Experiments show that the proposed method has better coding efficiency than SA-DCT and the ordinary DCT method.
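A minimal sketch of the support-aware significance test described above, assuming a coefficient map, a boolean support mask, and a caller-supplied tree structure (all illustrative; the scan order and symbol alphabet of the actual coder are not shown).

```python
# Support-aware zerotree test: coefficients outside the decomposed
# support (mask false) are treated as insignificant, so they can never
# prevent a subtree from being coded as a zerotree.  Illustrative sketch.

def is_zerotree(coeffs, mask, root, children, threshold):
    stack = [root]
    while stack:
        node = stack.pop()
        if mask[node] and abs(coeffs[node]) >= threshold:
            return False              # significant in-support descendant found
        stack.extend(children(node))  # missing coefficients are skipped over
    return True
```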
Citations: 8
Quantized frame expansions as source-channel codes for erasure channels
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755682
Vivek K Goyal, J. Kovacevic, M. Vetterli
Quantized frame expansions are proposed as a method for generalized multiple description coding, where each quantized coefficient is a description. Whereas previous investigations have revealed the robustness of frame expansions to additive noise and quantization, this represents a new application of frame expansions. The performance of a system based on quantized frame expansions is compared to that of a system with a conventional block channel code. The new system performs well when the number of lost descriptions (erasures on an erasure channel) is hard to predict.
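A small numerical sketch of the scheme, assuming a random frame and a uniform scalar quantizer (both illustrative choices): expand the source vector with an overcomplete frame, quantize one coefficient per description, erase a few descriptions, and reconstruct by least squares from whatever survives.

```python
import numpy as np

# Quantized frame expansion as a multiple-description code, in miniature.
# The Gaussian frame F and the step size are illustrative assumptions.

rng = np.random.default_rng(0)
K, N, step = 4, 8, 0.1                   # source dim, descriptions, quantizer step
F = rng.standard_normal((N, K))          # overcomplete frame: N > K rows
x = rng.standard_normal(K)

y = np.round(F @ x / step) * step        # one quantized coefficient per packet
received = np.array([0, 1, 3, 4, 6, 7])  # descriptions 2 and 5 erased

# Least-squares reconstruction from the surviving rows of the frame.
x_hat, *_ = np.linalg.lstsq(F[received], y[received], rcond=None)
print(np.linalg.norm(x - x_hat))         # small residual despite two erasures
```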
Citations: 113
On entropy-constrained residual vector quantization design
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785683
Y. Gong, M. Fan, Chien-Min Huang
Summary form only given. Entropy-constrained residual vector quantization (EC-RVQ) has been shown to be a competitive compression technique. Its design procedure is an iterative process which typically consists of three steps: encoder update, decoder update, and entropy coder update. We propose a new algorithm for the EC-RVQ design. The main features of our algorithm are: (i) in the encoder update step, we propose a variation of the exhaustive search encoder that significantly speeds up encoding at no expense in rate-distortion performance; (ii) in the decoder update step, we propose a new method that simultaneously updates the codebooks of all stages; the method is to form and solve a certain least squares problem, and we show that both tasks can be done very efficiently; (iii) the Lagrangian of the rate-distortion cost decreases at every step, which guarantees the convergence of the algorithm. We have performed some preliminary numerical experiments to test the proposed algorithm. Both random sources and still images are considered. For random sources, the size of the training sequence is 2500 and the vector size is 4. For still images, the training set consists of monochrome images from the USC database and the vector size is 4×4.
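Step (ii) can be written as a single linear least-squares problem: with the encoder's stage assignments held fixed, a 0/1 matrix selects one codeword per stage for every training vector, and all stage codebooks are solved for at once. The sketch below is illustrative (variable names and sizes are assumptions, not the paper's notation).

```python
import numpy as np

# Simultaneous decoder update for a multi-stage RVQ as one least-squares
# solve.  X: (n, d) training vectors; assigns: (n, S) per-stage indices;
# sizes: number of codewords in each stage.  Illustrative sketch.

def joint_decoder_update(X, assigns, sizes):
    n, _ = X.shape
    offsets = np.cumsum([0] + list(sizes[:-1]))  # start of each stage's block
    A = np.zeros((n, sum(sizes)))
    for s, off in enumerate(offsets):
        A[np.arange(n), off + assigns[:, s]] = 1.0  # one codeword per stage
    # Rows of C are the updated codewords of all stages, stacked.
    C, *_ = np.linalg.lstsq(A, X, rcond=None)
    return C
```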
Citations: 2
Codes for data synchronization with timing
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755694
N. Kashyap, D. Neuhoff
This paper investigates the design and analysis of data synchronization codes whose decoders have the property that, in addition to reestablishing correct decoding after encoded data is lost or afflicted with errors, they produce the original time index of each decoded data symbol modulo some integer T. The motivation for such data synchronization with timing is that in many situations where data must be encoded, it is not sufficient for the decoder to present a sequence of correct data symbols. Instead, the user also needs to know the position in the original source sequence of the symbols being presented. With this goal in mind, periodic prefix-synchronized (PPS) codes are introduced and analyzed on the basis of their synchronization delay D, rate R, and timing span T. Introduced are two specific PPS designs called natural marker and cascaded codes. A principal result is that when coding binary data with rate R, the largest possible timing span attainable with PPS codes grows exponentially with delay D, with exponent D(1-R). Thus, a large timing span can be attained with little redundancy and moderate values of delay.
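In the notation of the abstract, the quoted growth result can be written, with the constant factor left unspecified, as:

```latex
% Asymptotic growth of the maximum timing span of binary PPS codes,
% as implied by the result quoted above (constant c unspecified).
T_{\max}(D, R) \approx c \cdot 2^{D(1-R)},
\qquad \log_2 T_{\max}(D, R) \approx D(1 - R).
% Illustrative reading: at rate R = 1/2, a delay of D = 40 already
% allows a timing span on the order of 2^{20} \approx 10^6 positions.
```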
Citations: 5
Complexity-distortion tradeoffs in vector matching based on probabilistic partial distance techniques
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755689
Krisda Lengwehasatit, Antonio Ortega
We consider the problem of searching for the best match for an input among a set of vectors, according to some predetermined metric. Examples of this problem include the search for the best match for an input in a VQ encoder and the search for a motion vector in motion estimation-based video coding. We propose an approach that computes a partial distance metric and uses prior probabilistic knowledge of the reliability of the estimate to decide whether to stop the distance computation. This is achieved with a simple hypothesis test, and the result, an extension of the partial distance technique of Bei and Gray (1985), provides additional computation savings at the cost of a (controllable) loss in matching performance.
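A hedged sketch of such a search, with a simple multiplicative threshold standing in for the paper's derived hypothesis test (the parameter alpha below is an illustrative substitute): alpha = 1 reproduces the exact partial-distance test of Bei and Gray, while alpha < 1 aborts candidates earlier, trading occasional mismatches for fewer multiply-adds.

```python
import numpy as np

# Partial-distance nearest-neighbour search with early termination.
# The abort rule `d > alpha * best_d` is an illustrative stand-in for
# the probabilistic test in the paper; alpha = 1.0 is the exact
# Bei-Gray partial-distance search.

def partial_distance_search(x, codebook, chunk=2, alpha=1.0):
    best_idx, best_d = -1, np.inf
    for i, c in enumerate(codebook):
        d, aborted = 0.0, False
        for start in range(0, len(x), chunk):
            seg = x[start:start + chunk] - c[start:start + chunk]
            d += float(seg @ seg)      # accumulate a few dimensions at a time
            if d > alpha * best_d:     # deemed unlikely to win: stop early
                aborted = True
                break
        if not aborted and d < best_d:
            best_idx, best_d = i, d
    return best_idx, best_d
```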
Citations: 13
Graceful degradation over packet erasure channels through forward error correction
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755658
A. Mohr, E. Riskin, R. Ladner
We present an algorithm that assigns unequal amounts of forward error correction to progressive data so as to provide graceful degradation as packet losses increase. We use the SPIHT coder to compress images in this work, but our algorithm can protect any progressive compression scheme. The algorithm can also use almost any function as a model of packet loss conditions. We find that for an exponential packet loss model with a mean of 20% and a total rate of 0.2 bpp, good image quality can be obtained, even when 40% of transmitted packets are lost.
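As an illustration of why unequal protection degrades gracefully (the allocation and the i.i.d. loss model below are assumptions for the sketch, not the paper's optimizer): if segment j of the progressive stream carries f[j] redundancy packets out of n, it is decodable whenever at least n - f[j] packets arrive, so the expected length of the decodable prefix can be evaluated directly.

```python
from math import comb

# Expected decodable prefix of a progressive stream under unequal FEC.
# Segment j survives iff at least n - f[j] of n packets arrive; f must
# be non-increasing so earlier (more important) segments tolerate more
# loss.  Values below are illustrative, not the paper's allocation.

def p_receive_at_least(k, n, p):
    """Probability of receiving at least k of n packets under i.i.d. loss p."""
    return sum(comb(n, r) * (1 - p) ** r * p ** (n - r) for r in range(k, n + 1))

def expected_prefix(f, n, p):
    # Sum of per-segment survival probabilities = expected prefix length,
    # thanks to the non-increasing protection profile.
    return sum(p_receive_at_least(n - fj, n, p) for fj in f)

print(expected_prefix(f=[4, 3, 2, 1, 0], n=8, p=0.2))
```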
Citations: 105