
Latest papers from the 2010 Data Compression Conference

Optimization of Overlapped Tiling for Efficient 3D Image Retrieval
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.99
Zihong Fan, Antonio Ortega
Remote visualization of an arbitrary 2-D planar "cut" from a large volumetric dataset with random access has both gained importance and posed significant challenges over the past few years in industrial and medical applications. In this paper, a prediction model is presented that relates transmission efficiency to voxel coverage statistics for a fast random 2-D image retrieval system. This model can be used for parameter selection and also provides insights that lead us to propose a new 3D rectangular tiling scheme, which achieves an additional 10%-30% reduction in average transmission rate compared to our previously proposed technique, e.g., a nearly 30%/45% reduction in the average transmission rate at the cost of a factor of ten/fifteen in storage overhead compared to traditional cubic tiling. Furthermore, this approach leads to improved random access, with less storage and run-time memory required at the client.
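To make the tiling trade-off concrete, here is a minimal sketch (not from the paper; the function name, tile shapes, and the box/plane test are all illustrative) of counting which axis-aligned tiles an arbitrary planar cut intersects — the quantity that drives the transmission cost the prediction model estimates:

```python
import itertools

def tiles_hit_by_plane(vol_shape, tile, normal, d):
    """Return the set of tile indices whose axis-aligned box intersects
    the plane normal . x = d (a 2-D planar "cut" through the volume)."""
    hit = set()
    counts = [vol_shape[i] // tile[i] for i in range(3)]
    for idx in itertools.product(*(range(n) for n in counts)):
        lo = [idx[i] * tile[i] for i in range(3)]
        hi = [lo[i] + tile[i] for i in range(3)]
        # Signed distance of each of the 8 box corners to the plane.
        dists = [sum(n * c for n, c in zip(normal, p)) - d
                 for p in itertools.product(*zip(lo, hi))]
        if min(dists) <= 0 <= max(dists):  # plane passes through the box
            hit.add(idx)
    return hit
```

For a 16x16x16 volume and the axial cut z = 2, cubic 8x8x8 tiles and flat 8x8x4 tiles are each hit 4 times, but the flat tiles cover half as many voxels — a toy version of why non-cubic tilings can cut the average retrieval rate for favorable cut orientations.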
Citations: 5
Stationary and Trellis Encoding for IID Sources and Simulation
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.8
Mark Z. Mao, R. Gray
Necessary conditions for asymptotically optimal sliding-block or stationary codes for source coding and rate-constrained simulation are presented and applied to a design technique for trellis-encoded source coding and rate constrained simulation of memoryless sources.
Citations: 1
Rate Distortion Bounds for Binary Erasure Source Using Sparse Graph Codes
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.95
Grégory Demay, V. Rathi, L. Rasmussen
We consider lower bounds on the rate-distortion performance for the binary erasure source (BES) introduced by Martinian and Yedidia, using sparse graph codes for compression. Our approach follows that of Kudekar and Urbanke, where lower bounds on the rate-distortion performance of low-density generator matrix (LDGM) codes for the binary symmetric source (BSS) are derived. They introduced two methods for deriving lower bounds, namely the counting method and the test channel method. Based on numerical results they observed that the two methods lead to the same bound. We generalize these two methods to the BES and prove that indeed both methods lead to identical rate-distortion bounds for the BES and hence, also for the BSS.
Citations: 3
Tanner Graph Based Image Interpolation
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.40
Ruiqin Xiong, Wen Gao
This paper interprets image interpolation as a channel decoding problem and proposes a Tanner-graph-based interpolation framework, which regards each pixel in an image as a variable node and the local image structure around each pixel as a check node. The pixels available from the low-resolution image are "received", whereas the missing pixels of the high-resolution image are "erased", through an imaginary channel. Local image structures exhibited by the low-resolution image provide information on the joint distribution of pixels in a small neighborhood, and thus play the same role as parity symbols in classic channel coding scenarios. We develop an efficient solution for the sum-product (belief propagation) algorithm in this framework, based on a Gaussian auto-regressive image model. Initial experiments show up to 3 dB gain over other methods with the same image model. The proposed framework is flexible in message processing at each node and provides much room for incorporating more sophisticated image modelling techniques.
Citations: 11
Shape Recognition Using Vector Quantization
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.97
A. D. Lillo, G. Motta, J. Storer
We present a framework to recognize objects in images based on their silhouettes. In previous work we developed translation- and rotation-invariant classification algorithms for textures based on Fourier transforms in the polar space followed by dimensionality reduction. Here we present a new approach to recognizing shapes that follows a similar classification step with a "soft" retrieval algorithm, where the search of a shape database is based on the VQ centroids found by the classification step. Experiments on the MPEG-7 CE-Shape 1 database show significant gains in retrieval accuracy over previous work. An interesting aspect of this recognition algorithm is that the first classification phase appears to be a powerful tool for both texture and shape recognition.
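As an illustration of the centroid-based lookup step, here is a minimal nearest-centroid classifier (a pure-Python sketch; the labels and 2-D feature vectors are invented — the paper's actual features are polar-Fourier coefficients after dimensionality reduction, and its retrieval is "soft" rather than a single hard match):

```python
def nearest_centroid(x, centroids):
    """Return the label of the VQ centroid closest to feature vector x
    under squared Euclidean distance, as in codebook classification."""
    best_label, best_dist = None, float("inf")
    for label, c in centroids.items():
        d = sum((a - b) ** 2 for a, b in zip(x, c))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Hypothetical codebook: one centroid per shape class.
codebook = {"apple": (0.0, 0.0), "bat": (4.0, 4.0)}
```

A query feature near (1.0, 0.5) would be assigned the "apple" centroid; a real system would then rank database shapes sharing that centroid.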
Citations: 4
Arbitrary Directional Edge Encoding Schemes for the Operational Rate-Distortion Optimal Shape Coding Framework
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.10
Zhongyuan Lai, Junhuan Zhu, Zhou Ren, Wenyu Liu, Baolan Yan
We present two edge encoding schemes, an 8-sector scheme and a 16-sector scheme, for the operational rate-distortion (ORD) optimal shape coding framework. Unlike the traditional 8-direction scheme, which can only encode edges whose angles are integer multiples of π/4, our proposals can encode edges with arbitrary angles. We partition the digital coordinate plane into 8 and 16 sectors, and design corresponding differential schemes to encode the short and the long component of each vertex. Experimental results demonstrate that our two proposals reduce the number of encoding vertices considerably, and thereby reduce the bit count by 10%~20% for the basic ORD optimal algorithms and by 10%~30% for all the ORD optimal algorithms under the same distortion thresholds. Moreover, the reconstructed contours are more compact than those produced by the traditional 8-direction edge encoding scheme.
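The sector partition itself is easy to state; here is a minimal sketch (the function name and the exact sector boundaries are assumptions — the paper pairs this partition with differential encoding of each vertex's short and long components, which is omitted here):

```python
import math

def sector_index(dx, dy, n_sectors=16):
    """Quantize the direction of edge vector (dx, dy) into one of
    n_sectors equal angular sectors covering [0, 2*pi)."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(angle // (2 * math.pi / n_sectors)) % n_sectors
```

With n_sectors=8 each sector spans π/4, matching the granularity of the traditional 8-direction scheme, while the vector (dx, dy) itself remains free to point at any angle within the sector.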
Citations: 8
Efficient Algorithms for Constructing Optimal Bi-directional Context Sets
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.23
F. Fernandez, Alfredo Viola, M. Weinberger
Bi-directional context sets extend the classical context-tree modeling framework to situations in which the observations consist of two tracks or directions. In this paper, we study the problem of efficiently finding an optimal bi-directional context set for a given data sequence and loss function. This problem has applications in data compression, prediction, and denoising. The main tool in our construction is a new data structure, the compact bi-directional context graph, which generalizes compact suffix trees to two directions.
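For intuition about what a bi-directional context is, here is a naive sketch that groups each symbol by its left and right k-grams (illustrative only — the paper's contribution is selecting an *optimal* context set efficiently via the compact bi-directional context graph, which this toy collection does not implement):

```python
def bidir_contexts(seq, k):
    """Group each symbol of seq by its bi-directional context:
    the k symbols to its left and the k symbols to its right."""
    ctx = {}
    for i in range(k, len(seq) - k):
        key = (seq[i - k:i], seq[i + 1:i + 1 + k])
        ctx.setdefault(key, []).append(seq[i])
    return ctx
```

On "abab" with k=1, the symbol 'b' at position 1 is conditioned on left context "a" and right context "a" — information a one-directional context tree cannot use.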
Citations: 6
Image Compression Using the DCT and Noiselets: A New Algorithm and Its Rate Distortion Performance
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.62
Zhuoyuan Chen, Jiangtao Wen, Shiqiang Yang, Yuxing Han, J. Villasenor
We describe an image coding algorithm combining DCT and noiselet information. The algorithm first transmits DCT information sufficient to reproduce a "low-quality" version of the image at the decoder. This image is then used at both the encoder and decoder to create a mutually known list of locations of likely significant noiselet coefficients. The coefficient values themselves are then transmitted to the decoder differentially: at the encoder, the low-quality image is subtracted from the original image, and the resulting noiselet values are quantized and entropy coded. There remain significant opportunities for further work combining CS-inspired information-theoretic techniques with the rate-distortion considerations that are critical in practical image communications.
Citations: 1
Lossless Data Compression via Substring Enumeration
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.28
Danny Dubé, V. Beaudoin
We present a technique that compresses a string $w$ by enumerating all the substrings of $w$. The substrings are enumerated from the shortest to the longest and in lexicographic order. Compression is obtained from the fact that the set of the substrings of a particular length gives a lot of information about the substrings that are one bit longer. A linear-time, linear-space algorithm is presented. Experimental results show that the compression efficiency comes close to that of the best PPM variants. Other compression techniques are compared to ours.
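The enumeration order is simple to reproduce; here is a naive sketch (quadratic in time and space, whereas the paper gives a linear-time, linear-space algorithm, and the actual coder exploits the length-(k-1) sets as context when transmitting the length-k set rather than listing substrings outright):

```python
def substrings_by_length(w):
    """List the distinct substrings of w, shortest first and in
    lexicographic order within each length."""
    out = []
    for k in range(1, len(w) + 1):
        out.extend(sorted({w[i:i + k] for i in range(len(w) - k + 1)}))
    return out
```

For w = "aba" this yields ['a', 'b', 'ab', 'ba', 'aba']: knowing that "ab" and "ba" occur already constrains which length-3 substrings are possible, which is where the compression comes from.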
Citations: 19
A Symbolic Dynamical System Approach to Lossy Source Coding with Feedforward
Pub Date : 2010-01-20 DOI: 10.1109/DCC.2010.94
O. Shayevitz
It is known that, under general conditions, modeling an information source via a symbolic dynamical system evolving over the unit interval leads to a natural lossless compression scheme attaining the entropy rate of the source. We extend this notion to the lossy compression regime, assuming a feedforward link is available, by modeling a source via a two-dimensional symbolic dynamical system in which one component corresponds to the compressed signal and the other essentially corresponds to the feedforward signal. For memoryless sources and an arbitrary bounded distortion measure, we show that this approach leads to a family of simple deterministic compression schemes that attain the rate-distortion function of the source. The construction is dual to a recent optimal scheme for channel coding with feedback.
Citations: 2