
Proceedings DCC 2002. Data Compression Conference: Latest Publications

Combining FEC and optimal soft-input source decoding for the reliable transmission of correlated variable-length encoded signals
Pub Date : 2002-04-02 DOI: 10.1109/DCC.2002.999946
J. Kliewer, R. Thobaben
We utilize both the implicit residual source correlation and the explicit redundancy from a forward error correction (FEC) scheme for the error protection of packetized variable-length encoded source indices. The implicit source correlation is exploited in a novel symbol-based soft-input a-posteriori probability (APP) decoder, which yields an optimal decoding process when combined with a mean-square or maximum a-posteriori probability estimate of the reconstructed source signal. When the variable-length encoded source data is additionally protected by channel codes, an iterative source-channel decoder can be obtained in the same way as for serially concatenated codes, with the outer constituent decoder replaced by the proposed APP source decoder. Simulation results show that additionally exploiting the correlations between the variable-length encoded source indices substantially improves the error-correction performance.
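The core of the scheme is the symbol-based APP decoder. Below is a minimal sketch of that idea in isolation, assuming a first-order Markov model for the residual source correlation and a memoryless channel delivering per-symbol likelihoods; the authors' packet-level trellis for variable-length codes and the iterative exchange with the channel decoder are omitted.

```python
import numpy as np

def app_decode(likelihoods, trans, init):
    """Symbol-wise a-posteriori probabilities via the forward-backward
    recursion. likelihoods: T x K array of channel values P(y_t | x_t = k);
    trans: K x K source transition matrix P(x_t = k | x_{t-1} = j);
    init: length-K prior for the first symbol."""
    T, K = likelihoods.shape
    alpha = np.zeros((T, K))            # forward pass, normalised each step
    alpha[0] = init * likelihoods[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = likelihoods[t] * (alpha[t - 1] @ trans)
        alpha[t] /= alpha[t].sum()
    beta = np.ones((T, K))              # backward pass
    for t in range(T - 2, -1, -1):
        beta[t] = trans @ (likelihoods[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta                 # per-symbol APPs: argmax gives the MAP
    return post / post.sum(1, keepdims=True)  # estimate, expectation the MS one
```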
Citations: 30
Progressive coding of palette images and digital maps
Pub Date : 2002-04-02 DOI: 10.1109/DCC.2002.999974
S. Forchhammer, J. M. Salinas
A 2D version of PPM (Prediction by Partial Matching) coding is introduced simply by combining a 2D template with the standard PPM coding scheme. A simple scheme for resolution reduction is given, and the 2D PPM scheme is extended to resolution-progressive coding by placing pixels in a lower-resolution image layer; the resolution increases by a factor of 2 in each step. The 2D PPM coding is applied to palette images and street maps. The sequential results are comparable to PWC: the PPM results are a little better for palette images with few colors (up to 4-5 bpp) and a little worse for images with more colors, while for street maps the 2D PPM is slightly better. The PPM-based resolution-progressive coding gives a better result than coding the resolution layers as individual images, and its coding efficiency is significantly better than GIF's. An example of combined content-layer/spatial progressive coding is also given.
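A minimal sketch of the two mechanical ingredients, resolution reduction by a factor of 2 and a two-layer context (assumed details: plain top-left subsampling and a four-pixel causal template; the paper's own reduction rule and template shapes may differ):

```python
import numpy as np

def pyramid(img, levels):
    """Build resolution layers, halving each dimension per step."""
    layers = [np.asarray(img)]
    for _ in range(levels):
        layers.append(layers[-1][::2, ::2])   # simple subsampling
    return layers[::-1]                       # coarsest first, for progressive coding

def context(img, ref, y, x):
    """Causal 2D context: W, N, NW, NE in the current layer plus the
    co-located, already-decoded pixel of the lower-resolution layer."""
    def pix(a, j, i):
        return a[j, i] if 0 <= j < a.shape[0] and 0 <= i < a.shape[1] else 0
    return (pix(img, y, x - 1), pix(img, y - 1, x),
            pix(img, y - 1, x - 1), pix(img, y - 1, x + 1),
            ref[y // 2, x // 2])
```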
Citations: 18
Perceptual preprocessing techniques applied to video compression: some result elements and analysis
Pub Date : 2002-04-02 DOI: 10.1109/DCC.2002.1000006
Gwenaelle Marquant
Summary form only given. Developments in video coding research deal with solutions that improve picture quality while decreasing bit rates. However, no major breakthrough in compression has emerged, and low-bit-rate, high-quality video compression is still an open issue. A compression scheme is generally decomposed into two stages: coding and decoding. To improve compression efficiency, a complementary solution consists of introducing a preprocessing stage before encoding or/and a post-processing step after decoding. For this purpose, instead of using the usual (Y, U, V) representation space to compress the video signal, where the video is encoded along separate channels (luminance Y, chrominance U, chrominance V), we propose to choose other channels by means of a color preprocessing based on perceptual and physics-based approaches. We compare an original H.26L encoder (the ITU standard for video coding), i.e. without preprocessing, against the same H.26L encoder with a preprocessing stage, to evaluate the extent to which preprocessing increases compression efficiency, in particular with perceptual solutions.
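For reference, the fixed (Y, U, V) split that the proposal departs from looks like this (BT.601 coefficients assumed; the paper's perceptual and physics-based preprocessing would replace this fixed matrix with content-adapted channels):

```python
import numpy as np

# Standard RGB -> (Y, U, V) transform (BT.601): luminance plus two
# chrominance differences, each then encoded as a separate channel.
RGB_TO_YUV = np.array([[ 0.299,  0.587,  0.114],
                       [-0.147, -0.289,  0.436],
                       [ 0.615, -0.515, -0.100]])

def rgb_to_yuv(rgb):
    """rgb: array of shape (..., 3) with components in [0, 1]."""
    return np.asarray(rgb) @ RGB_TO_YUV.T
```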
Citations: 6
A source coding approach to classification by vector quantization and the principle of minimum description length
Pub Date : 2002-04-02 DOI: 10.1109/DCC.2002.999978
Jia Li
An algorithm for supervised classification using vector quantization and entropy coding is presented. The classification rule is formed from a set of training data {(X_i, Y_i)}_{i=1}^{n}, which are independent samples from a joint distribution P_XY. Based on the principle of minimum description length (MDL), a statistical model that approximates the distribution P_XY ought to enable efficient coding of X and Y. On the other hand, we expect a system that encodes (X, Y) efficiently to provide ample information on the distribution P_XY. This information can then be used to classify X, i.e., to predict the corresponding Y based on X. To encode both X and Y, a two-stage vector quantizer is applied to X and a Huffman code is formed for Y conditioned on each quantized value of X. The optimization of the encoder is equivalent to the design of a vector quantizer with an objective function reflecting the joint penalty of quantization error and misclassification rate. This vector quantizer provides an estimate of the conditional distribution of Y given X, which in turn yields an approximation to the Bayes classification rule. This algorithm, namely discriminant vector quantization (DVQ), is compared with learning vector quantization (LVQ) and CART on a number of data sets; DVQ outperforms the other two on several of them. The relation between DVQ, density estimation, and regression is also discussed.
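A minimal sketch of the quantize-then-classify idea (a plain one-stage Lloyd quantizer with empirical cell-conditional label distributions, not the paper's two-stage quantizer or its MDL-derived objective):

```python
import numpy as np

def train(X, Y, codebook_size, iters=20, seed=0):
    """Fit a VQ codebook on X, then estimate P(Y | cell) from counts."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), codebook_size, replace=False)].astype(float)
    for _ in range(iters):                          # Lloyd iterations
        cell = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        for k in range(codebook_size):
            if (cell == k).any():
                C[k] = X[cell == k].mean(0)
    cell = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
    labels = np.unique(Y)
    cond = np.ones((codebook_size, len(labels)))    # Laplace smoothing
    for k in range(codebook_size):                  # empirical P(Y | quantized X)
        for j, y in enumerate(labels):
            cond[k, j] += ((cell == k) & (Y == y)).sum()
    return C, cond / cond.sum(1, keepdims=True), labels

def classify(x, C, cond, labels):
    """Approximate Bayes rule: most probable label of the nearest cell."""
    return labels[cond[((C - x) ** 2).sum(1).argmin()].argmax()]
```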
Citations: 6
Compressor performance, absolutely!
Pub Date : 2002-04-02 DOI: 10.1109/DCC.2002.1000017
M. Titchener
Summary form only given. Titchener (see Proc. DCC00, IEEE Society Press, p. 353-62, 2000, and IEEE-ISIT, MIT, Boston, August 1998) defined a computable grammar-based entropy measure (T-entropy) for finite strings, which Ebeling, Steuer and Titchener (see Stochastics and Dynamics, vol. 1, no. 1, 2000) and Titchener and Ebeling (see Proc. DCC01, IEEE Society Press, p. 520, 2001) demonstrated, against the known results for the logistic map, to be a practical way to compute the Shannon information content of data files. A range of binary encodings of the logistic map dynamics has been prepared from a generating bi-partition, with selected normalised entropies of 0.1-1.0 bits/symbol in steps of 0.1. This corpus of ten test files has been used to evaluate the 'absolute' performance of a series of popular compressors.
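A minimal sketch of how such a test file can be generated (assumed details: the standard logistic map x <- r*x*(1-x) with the generating bi-partition at x = 0.5; the normalised entropy is tuned through r):

```python
import numpy as np

def logistic_bits(r, n, x0=0.3, burn_in=1000):
    """Binary symbol sequence from the logistic map via its bi-partition."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    bits = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        bits[i] = 1 if x >= 0.5 else 0
    return bits

# r = 4.0 gives the fully chaotic regime (~1 bit/symbol);
# smaller r values yield sequences of lower normalised entropy.
```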
Citations: 2
Image coding with the MAP criterion
Pub Date : 2002-04-02 DOI: 10.1109/DCC.2002.999996
T. Eriksson, John B. Anderson, M. Novak
Summary form only given. BCJR-based source coding of image residuals is investigated. From a trellis representation of the residual, a joint source-channel coding system is formed, and the BCJR algorithm is then applied to find the MAP encoding. MAP and minimum squared-error encoding are compared. The novelty of this work is the use of the BCJR algorithm and the MAP criterion in the source coding procedure. The source encoding system described preserves more features than an MSE-based encoder, and blocking artifacts are reduced. Comparisons may be found in the full paper (see http://www.it.lth.se/tomas/eriksson_novak_anderson_dcc02.ps, 2001).
Citations: 2
Overhead-constrained rate-allocation for scalable video transmission over networks
Pub Date : 2002-04-02 DOI: 10.1109/DCC.2002.999998
Bo Hong, Aria Nosratinia
Summary form only given. Forward error correction (FEC) based schemes are widely used to address the packet-loss problem for Internet video. Given the total available bandwidth, finding the optimal bit allocation is very important in FEC-based video, because the FEC bit rate limits the rate available to compress the video. We want to give proper protection to the source, but also to prevent unwanted FEC rate expansion. The rate consumed by packet headers is often ignored when allocating bit rate; we show that this packetization overhead has a significant influence on system performance in many cases. Decreasing the packet size increases the rate spent on packet headers, reducing the rate available for the source and its FEC codes. On the other hand, a smaller packet size allows a larger number of packets, in which case the efficiency of the FEC codes can be shown to improve. We therefore argue that the packet size should be optimized to balance the cost of packet headers against the efficiency of the FEC codes, and we develop a probabilistic framework for solving the rate-allocation problem in the presence of packet overhead. We implement our solution on the MPEG-4 fine granularity scalability (FGS) mode and, to show the flexibility of the technique, use an unequal error protection scheme with FGS. Experimental results show that our overhead-constrained method leads to significant improvements in reconstructed video quality.
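The trade-off can be made concrete with a back-of-the-envelope model (illustrative numbers, not the paper's probabilistic framework): with a total budget B, payload size S, header size H, and an (n, k) FEC code, the source receives B * S/(S+H) * k/n bits.

```python
def source_rate(B, payload, header, k, n):
    """Bits left for compressed video after header and FEC overhead."""
    return B * payload / (payload + header) * k / n

# Larger packets waste less on headers, but fewer packets per FEC block
# make an (n, k) erasure code less effective against a given loss rate.
for payload in (250, 500, 1000, 1500):
    print(payload, round(source_rate(1_000_000, payload, 40, 3, 4)))
```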
Citations: 5
Data compression of correlated non-binary sources using punctured turbo codes
Pub Date : 2002-04-02 DOI: 10.1109/DCC.2002.999962
Ying Zhao, J. Garcia-Frías
We consider the case of two correlated non-binary sources. Data compression is achieved by transforming the sequences of non-binary symbols into sequences of bits and then using punctured turbo codes as source encoders. Each source is compressed without knowledge about the other source, and no information about the correlation between sources is required in the encoding process. Compression is achieved because of puncturing, which is adjusted to obtain the desired compression rate. The source decoder utilizes iterative schemes over the compressed binary sequences, and recovers the non-binary symbol sequences from both sources. The performance of the proposed scheme is close to the theoretical limit predicted by the Slepian-Wolf (1973) theorem.
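A minimal sketch of puncturing as the compression knob (the turbo constituent encoders and the iterative decoder are omitted): only a fraction of the parity stream is kept, and the decoder treats the removed positions as erasures.

```python
import numpy as np

def puncture(parity, period, offset=0):
    """Keep one parity bit in every `period`; the rest are discarded.
    Compression rate = kept bits / source symbol bits."""
    mask = np.zeros(len(parity), dtype=bool)
    mask[offset::period] = True
    return parity[mask], mask

def depuncture(kept, mask):
    """Re-expand to soft inputs: punctured positions get LLR 0 (erasure),
    received bits a confident LLR (sign convention: positive means bit 0)."""
    llr = np.zeros(len(mask))
    llr[mask] = np.where(kept == 1, -4.0, 4.0)
    return llr
```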
Citations: 58
Context tree compression of multi-component map images
Pub Date : 2002-04-02 DOI: 10.1109/DCC.2002.999959
P. Kopylov, P. Fränti
We consider compression of multi-component map images by context modeling and arithmetic coding. We apply an optimized multi-level context tree for modeling the individual binary layers. The context pixels can be located within a search area in the current layer, or in a reference layer that has already been compressed. The binary layers are compressed using an optimized processing sequence that makes maximal use of the inter-layer dependencies. The structure of the context tree is a static variable-depth binary tree, and the context information is stored only in the leaves of the tree. The proposed technique achieves an improvement of about 25% over a static 16-pixel context template, and 15% over a similar single-level context tree.
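A minimal sketch of such a tree (assumed structure, not the authors' optimization procedure): internal nodes test one context pixel each, so different contexts can be examined to different depths, and the coding statistics live only in the leaves.

```python
class Node:
    """Static variable-depth context tree for a binary layer."""
    def __init__(self, pixel=None):
        self.pixel = pixel    # (layer, dy, dx) to test; None marks a leaf
        self.kids = {}        # pixel value (0/1) -> child Node
        self.counts = [1, 1]  # leaf statistics fed to the arithmetic coder

    def leaf_for(self, ctx):
        """ctx maps (layer, dy, dx) positions to already-decoded values;
        the walk stops at a leaf, whose counts give p(pixel = 1)."""
        node = self
        while node.pixel is not None:
            node = node.kids[ctx[node.pixel]]
        return node

# coding one pixel: leaf = root.leaf_for(ctx)
#   p1 = leaf.counts[1] / sum(leaf.counts)    # probability for the coder
#   leaf.counts[actual_pixel] += 1            # adaptive update after coding
```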
Citations: 9
Semi-discrete matrix transforms (SDD) for image and video compression
Pub Date : 2002-04-02 DOI: 10.1109/DCC.2002.1000027
Sacha Zyto, A. Grama, W. Szpankowski
Summary form only given. A wide variety of matrix transforms have been used for compression of image and video data; transforms have also been used for motion estimation and quantization. One such transform is the singular value decomposition (SVD), which relies on low-rank approximations of the matrix for computational and storage efficiency. In this study, we describe the use of a variant of the SVD in image and video compression. This variant, first proposed by Peleg and O'Leary and called the semidiscrete decomposition (SDD), restricts the elements of the outer-product vectors to {-1, 0, 1}, so approximations of much higher rank can be kept in the same amount of storage. We demonstrate the superiority of SDD over SVD for a variety of compression schemes, while also showing that DCT-based compression remains superior to SDD-based compression. We further demonstrate that SDD facilitates fast and accurate pattern matching and motion estimation, presenting excellent opportunities for improved compression.
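A minimal sketch of the storage argument (a crude SDD-like term obtained by sign-quantizing the leading singular vectors, not the Peleg-O'Leary alternating algorithm): each term stores two {-1, 0, 1} vectors, about log2(3) bits per entry, plus one scalar, so many such terms fit in the budget of a single floating-point SVD term.

```python
import numpy as np

def svd_rank1(A):
    """Leading singular triple of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return s[0], U[:, 0], Vt[0]

def sdd_like_term(A):
    """One outer-product term d * x * y^T with x, y in {-1, 0, 1}."""
    _, u, v = svd_rank1(A)
    x = np.round(u / np.abs(u).max())        # entries land in {-1, 0, 1}
    y = np.round(v / np.abs(v).max())
    d = (x @ A @ y) / ((x @ x) * (y @ y))    # optimal scalar for fixed x, y
    return d, x, y

# subtracting d * np.outer(x, y) and repeating yields a greedy multi-term
# approximation whose residual can be driven down with many cheap terms.
```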
Citations: 10