Optimization of Overlapped Tiling for Efficient 3D Image Retrieval
Zihong Fan, Antonio Ortega. DOI: 10.1109/DCC.2010.99

Remote visualization of an arbitrary 2-D planar "cut" from a large volumetric dataset, with random access, has gained importance and posed significant challenges in industrial and medical applications over the past few years. In this paper, a prediction model is presented that relates transmission efficiency to voxel coverage statistics for a fast random 2-D image retrieval system. This model can be used for parameter selection, and it also provides insights that lead us to propose a new 3-D rectangular tiling scheme, which achieves an additional 10%-30% reduction in average transmission rate compared to our previously proposed technique, e.g., a nearly 30%/45% reduction in average transmission rate at the cost of a factor of ten/fifteen in storage overhead compared to traditional cubic tiling. Furthermore, this approach improves random access, requiring less storage and run-time memory at the client.
Stationary and Trellis Encoding for IID Sources and Simulation
Mark Z. Mao, R. Gray. DOI: 10.1109/DCC.2010.8

Necessary conditions for asymptotically optimal sliding-block or stationary codes for source coding and rate-constrained simulation are presented and applied to a design technique for trellis-encoded source coding and rate-constrained simulation of memoryless sources.
Rate Distortion Bounds for Binary Erasure Source Using Sparse Graph Codes
Grégory Demay, V. Rathi, L. Rasmussen. DOI: 10.1109/DCC.2010.95

We consider lower bounds on the rate-distortion performance for the binary erasure source (BES) introduced by Martinian and Yedidia, using sparse graph codes for compression. Our approach follows that of Kudekar and Urbanke, where lower bounds on the rate-distortion performance of low-density generator matrix (LDGM) codes for the binary symmetric source (BSS) are derived. They introduced two methods for deriving lower bounds, namely the counting method and the test channel method. Based on numerical results, they observed that the two methods lead to the same bound. We generalize these two methods to the BES and prove that both methods indeed lead to identical rate-distortion bounds for the BES and hence also for the BSS.
Tanner Graph Based Image Interpolation
Ruiqin Xiong, Wen Gao. DOI: 10.1109/DCC.2010.40

This paper interprets image interpolation as a channel decoding problem and proposes a Tanner-graph-based interpolation framework, which regards each pixel in an image as a variable node and the local image structure around each pixel as a check node. The pixels available from the low-resolution image are "received", whereas the missing pixels of the high-resolution image are "erased", through an imaginary channel. Local image structures exhibited by the low-resolution image provide information on the joint distribution of pixels in a small neighborhood, and thus play the same role as parity symbols in classic channel coding scenarios. We develop an efficient solution for the sum-product algorithm of belief propagation in this framework, based on a Gaussian auto-regressive image model. Initial experiments show up to 3 dB gain over other methods using the same image model. The proposed framework is flexible in the message processing at each node and leaves much room for incorporating more sophisticated image modeling techniques.
Shape Recognition Using Vector Quantization
A. D. Lillo, G. Motta, J. Storer. DOI: 10.1109/DCC.2010.97

We present a framework for recognizing objects in images based on their silhouettes. In previous work we developed translation- and rotation-invariant classification algorithms for textures, based on Fourier transforms in polar space followed by dimensionality reduction. Here we present a new approach to recognizing shapes that follows a similar classification step with a "soft" retrieval algorithm, in which the search of a shape database is based on the VQ centroids found by the classification step. Experiments on the MPEG-7 CE-Shape-1 database show significant gains in retrieval accuracy over previous work. An interesting aspect of this recognition algorithm is that the first phase of classification appears to be a powerful tool for both texture and shape recognition.
Arbitrary Directional Edge Encoding Schemes for the Operational Rate-Distortion Optimal Shape Coding Framework
Zhongyuan Lai, Junhuan Zhu, Zhou Ren, Wenyu Liu, Baolan Yan. DOI: 10.1109/DCC.2010.10

We present two edge encoding schemes, an 8-sector scheme and a 16-sector scheme, for the operational rate-distortion (ORD) optimal shape coding framework. Unlike the traditional 8-direction scheme, which can only encode edges whose angles are integer multiples of π/4, our proposals can encode edges with arbitrary angles. We partition the digital coordinate plane into 8 and 16 sectors, respectively, and design corresponding differential schemes to encode the short and long components of each vertex. Experimental results demonstrate that our two proposals substantially reduce the number of encoded vertices, and therefore reduce the bit count by 10%-20% for the basic ORD optimal algorithms and by 10%-30% for all ORD optimal algorithms under the same distortion thresholds. Moreover, the reconstructed contours are more compact than those produced by the traditional 8-direction edge encoding scheme.
Efficient Algorithms for Constructing Optimal Bi-directional Context Sets
F. Fernandez, Alfredo Viola, M. Weinberger. DOI: 10.1109/DCC.2010.23

Bi-directional context sets extend the classical context-tree modeling framework to situations in which the observations consist of two tracks or directions. In this paper, we study the problem of efficiently finding an optimal bi-directional context set for a given data sequence and loss function. This problem has applications in data compression, prediction, and denoising. The main tool in our construction is a new data structure, the compact bi-directional context graph, which generalizes compact suffix trees to two directions.
Image Compression Using the DCT and Noiselets: A New Algorithm and Its Rate Distortion Performance
Zhuoyuan Chen, Jiangtao Wen, Shiqiang Yang, Yuxing Han, J. Villasenor. DOI: 10.1109/DCC.2010.62

We describe an image coding algorithm combining DCT and noiselet information. The algorithm first transmits DCT information sufficient to reproduce a "low-quality" version of the image at the decoder. This image is then used at both the encoder and decoder to create a mutually known list of locations of likely significant noiselet coefficients. The coefficient values themselves are then transmitted to the decoder differentially: the encoder subtracts the low-quality image from the original image, obtains the noiselet values of the residual, and subjects them to quantization and entropy coding. There remain significant opportunities for further work combining CS-inspired information-theoretic techniques with the rate-distortion considerations that are critical in practical image communication.
Lossless Data Compression via Substring Enumeration
Danny Dubé, V. Beaudoin. DOI: 10.1109/DCC.2010.28

We present a technique that compresses a string $w$ by enumerating all of its substrings. The substrings are enumerated from shortest to longest, and in lexicographic order within each length. Compression is obtained from the fact that the set of substrings of a particular length gives a great deal of information about the substrings that are one bit longer. A linear-time, linear-space algorithm is presented. Experimental results show that the compression efficiency comes close to that of the best PPM variants. We also compare other compression techniques to ours.
A Symbolic Dynamical System Approach to Lossy Source Coding with Feedforward
O. Shayevitz. DOI: 10.1109/DCC.2010.94

It is known that modeling an information source via a symbolic dynamical system evolving over the unit interval leads, under general conditions, to a natural lossless compression scheme attaining the entropy rate of the source. We extend this notion to the lossy compression regime, assuming a feedforward link is available, by modeling a source via a two-dimensional symbolic dynamical system in which one component corresponds to the compressed signal and the other essentially corresponds to the feedforward signal. For memoryless sources and an arbitrary bounded distortion measure, we show that this approach leads to a family of simple deterministic compression schemes that attain the rate-distortion function of the source. The construction is dual to a recent optimal scheme for channel coding with feedback.