A Binning Design for Wyner-Ziv Video Coding
Wen Ji, Yiqiang Chen. DOI: 10.1109/DCC.2013.78
In this work, we propose a two-tier binning scheme. First, we develop Fountain coding with side information to construct the inner binning structure. Second, for the outer binning, we model the Wyner-Ziv (WZ) video coding architecture as a multiple-access channel and exploit the duality between WZ coding and channel coding. Third, we provide both the primal and dual solutions: for the primal distortion-minimization problem, we use a dynamic programming approach to find the optimal binning policy, and for the dual capacity-maximization problem, we give a near sum-capacity binning algorithm. The objective is to lower the coding rate at the same video reconstruction quality.
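The paper's inner code is Fountain-based, which is beyond a short sketch; the toy below instead illustrates the underlying binning principle with the simplest linear binning available, syndrome coding with a (7,4) Hamming code: the encoder sends only the bin (coset) index of the source word, and the decoder resolves the bin using the correlated side information. All parameters are illustrative, not the authors' construction.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 7                                    # block length
# Parity-check matrix of the (7,4) Hamming code: its 2^3 cosets are the bins.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

x = rng.integers(0, 2, n)                # source word
y = x.copy()
y[rng.integers(n)] ^= 1                  # side information: x with one bit flipped

syndrome = (H @ x) % 2                   # encoder output: 3-bit bin index, not 7 bits

# Decoder: search the signalled bin for the member closest to the side info.
bin_members = [np.array(c) for c in itertools.product([0, 1], repeat=n)
               if np.array_equal((H @ np.array(c)) % 2, syndrome)]
x_hat = min(bin_members, key=lambda c: int(np.sum(c != y)))

assert np.array_equal(x_hat, x)          # exact recovery from 3 bits + side info
```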
{"title":"A Binning Design for Wyner-Ziv Video Coding","authors":"Wen Ji, Yiqiang Chen","doi":"10.1109/DCC.2013.78","DOIUrl":"https://doi.org/10.1109/DCC.2013.78","url":null,"abstract":"In this work, we proposes a two-tier binning scheme. First, we develop a Fountain coding with side information to construct the inner binning structure. Second, for the the outer binning, we model the WZ video coding architecture as a multi-access channel and exploit the duality property between the WZ coding and channel coding techniques. Third, we provide both the primal and dual solutions. For the primal distortion minimization problem, we use dynamic programming approach to find the optimal binning policy, and for the dual capacity maximization problem, we give a near sum-capacity binning algorithm. The objective is to lower the coding rate under same video reconstruction quality.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123320365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combining Geometry Simplification and Coordinate Approximation Techniques for Better Lossy Compression of GIS Data
J. Lema, Manuel Barcon-Goas, A. Fariña, M. R. Luaces. DOI: 10.1109/DCC.2013.64
The high bandwidth requirements of GIS data are usually one of the main bottlenecks in the development of client-server GIS applications. Spatial information is nowadays generated at high resolution and therefore has high storage costs. Depending on the specific use case, the precision at which that spatial information is needed can be significantly lower, so reducing its precision (within a given margin of error) is a straightforward way to reduce transmission costs. The main technique for reducing precision in vector spatial representations is geometry simplification [1]. Additionally, data compression techniques are usually applied in the communication layer to further reduce transmission costs. In this work, we show that the compressibility properties of the data should be taken into account when applying geometry simplification techniques. We present a naive two-stage approach that first applies geometry simplification using at most 93% of the margin of error, and then applies coordinate approximation using the remaining 7%. Applying general-purpose compressors to the transformed data yields around 30-40% better compression than when only simplification is performed.
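A minimal sketch of the two-stage idea, assuming Douglas-Peucker as the simplification algorithm (the abstract does not name one) and simple grid rounding for the coordinate-approximation stage; the 93%/7% split is taken from the abstract, while the function names and the per-coordinate error model are illustrative.

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / norm

def douglas_peucker(points, eps):
    """Classic recursive polyline simplification with tolerance eps."""
    if len(points) < 3:
        return points
    d, idx = max((perpendicular_distance(p, points[0], points[-1]), i)
                 for i, p in enumerate(points[1:-1], start=1))
    if d <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], eps)
    return left[:-1] + douglas_peucker(points[idx:], eps)

def simplify_then_approximate(points, margin, split=0.93):
    """Stage 1: simplify with 93% of the budget; stage 2: round with the rest."""
    simplified = douglas_peucker(points, margin * split)
    step = margin * (1.0 - split)
    # Snapping to a step-sized grid moves each coordinate by at most step/2
    # and makes coordinates far more repetitive, which is exactly what
    # general-purpose compressors reward.
    return [(round(x / step) * step, round(y / step) * step)
            for x, y in simplified]
```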
{"title":"Combining Geometry Simplification and Coordinate Approximation Techniques for Better Lossy Compression of GIS Data","authors":"J. Lema, Manuel Barcon-Goas, A. Fariña, M. R. Luaces","doi":"10.1109/DCC.2013.64","DOIUrl":"https://doi.org/10.1109/DCC.2013.64","url":null,"abstract":"The high bandwidth requirements of GIS data is usually one of the main bottlenecks in the development of client-server GIS applications. Nowadays, spatial information is generated with high resolution and thus it has high storage costs. Depending on the specific use case, the precision at which that spatial information is needed is significantly smaller, so reducing its precision (within a given margin of error) is a straightforward approach to reducing transmission costs. The main technique to reduce precision in vectorial spatial representations is geometry simplification [1]. Additionally, data compression techniques are usually applied in the communication layer to further reduce data transmission costs. In this work, we show that the compressibility properties of the data should be taken into account when applying geometry simplification techniques. We present a naive two-stage approach that first applies geometry simplification using at most the 93% of the margin of error, and then applies coordinate approximation using the remaining 7%. Our approach leads to obtaining around 30-40% better compression with general-purpose compressors on the transformed data than when only simplification is performed.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130893350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frame-Compatible Stereo 3D Services Using H.264/AVC and HEVC
Palanivel Guruvareddiar, B. Joseph. DOI: 10.1109/DCC.2013.74
Stereoscopic 3D services are attracting more attention across various industries than ever before, and one of the major challenges is to introduce these services seamlessly while maintaining backward compatibility with existing 2D receivers. The increased amount of data for stereoscopic 3D also needs to be compressed efficiently. In this paper we compare the various options for realizing frame-compatible stereo 3D services, along with the corresponding compression-efficiency and backward-compatibility issues.
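For readers unfamiliar with the term, "frame compatible" means the two views are subsampled and packed into a single frame of legacy size, so an existing 2D decoder can process the stream unchanged. The sketch below shows the two most common packings; the resolutions are illustrative.

```python
import numpy as np

def pack_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Halve the horizontal resolution of each view and pack them side by side."""
    assert left.shape == right.shape
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)

def pack_top_bottom(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Halve the vertical resolution of each view and stack them."""
    assert left.shape == right.shape
    return np.concatenate([left[::2, :], right[::2, :]], axis=0)

left = np.zeros((1080, 1920), dtype=np.uint8)
right = np.full((1080, 1920), 255, dtype=np.uint8)
frame = pack_side_by_side(left, right)
assert frame.shape == (1080, 1920)     # legacy frame size: a 2D decoder is happy
```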
{"title":"Frame-Compatible Stereo 3D Services Using H.264/AVC and HEVC","authors":"Palanivel Guruvareddiar, B. Joseph","doi":"10.1109/DCC.2013.74","DOIUrl":"https://doi.org/10.1109/DCC.2013.74","url":null,"abstract":"Stereoscopic 3D services are attracting considerable attention across various industries than never before and one of the major challenges is to introduce these services seamlessly while maintaining the backward compatibility of the existing 2D receivers. Also the increased amount of data for stereoscopic 3D needs to be efficiently compressed. In this paper an attempt has been made to compare the various options for the realization of frame compatible stereo 3D services along with the corresponding compression efficiency and backward compatibility issues.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134473622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perceptual Intra Video Encoder for High-Quality High-Definition Content
M. Martínez-Rach, O. López, P. Piñol, Manuel P. Malumbres. DOI: 10.1109/DCC.2013.89
This paper presents a perceptually enhanced intra-mode video encoder based on the Contrast Sensitivity Function (CSF), with graceful quality degradation as the compression rate increases. The proposed encoder is highly competitive, especially for high-definition video formats in high-quality applications with constrained real-time and power-processing demands.
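A hedged sketch of how a CSF can drive an intra encoder's quantization, assuming the classic Mannos-Sakrison CSF model and an 8x8 DCT-style coefficient grid; the paper's actual encoder is not specified here, so the viewing-distance and step-size parameters are illustrative.

```python
import numpy as np

def csf(f_cpd: np.ndarray) -> np.ndarray:
    """Mannos-Sakrison contrast sensitivity vs. spatial frequency (cycles/degree)."""
    return 2.6 * (0.0192 + 0.114 * f_cpd) * np.exp(-(0.114 * f_cpd) ** 1.1)

def perceptual_quant_matrix(block=8, samples_per_degree=32.0, base_step=10.0):
    """Quantization steps that grow where the eye is less sensitive."""
    u = np.arange(block)
    fx, fy = np.meshgrid(u, u)
    f = np.hypot(fx, fy) * samples_per_degree / (2.0 * block)  # cycles/degree
    s = csf(f)
    # Practical tweak: keep full sensitivity below the CSF peak so that only
    # the high-frequency roll-off coarsens the steps.
    s = np.where(f < f.flat[np.argmax(s)], s.max(), s)
    return base_step * s.max() / s

Q = perceptual_quant_matrix()
# usage: quantized = np.round(dct_block / Q); reconstructed = quantized * Q
```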
{"title":"Perceptual Intra Video Encoder for High-Quality High-Definition Content","authors":"M. Martínez-Rach, O. López, P. Piñol, Manuel P. Malumbres","doi":"10.1109/DCC.2013.89","DOIUrl":"https://doi.org/10.1109/DCC.2013.89","url":null,"abstract":"This paper presents a perceptually enhanced intra-mode video encoder based on the Contrast Sensitivity Function (CSF) with a gracefully quality degradation as compression rate increases. The proposed encoder is highly competitive especially for high definition video formats at high video quality applications with constrained real-time and power processing demands.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"1138 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133321816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diagnostically Lossless Compression of X-Ray Angiographic Images through Background Suppression
Zhongwei Xu, Joan Bartrina-Rapesta, Victor Sanchez, J. Serra-Sagristà, Juan Munoz-Gomez. DOI: 10.1109/DCC.2013.108
X-ray angiographic (angio) images are widely used for identifying irregularities in the vascular system. Because of their high spatial resolution and the increasing number of X-ray angio images being generated, compression of these images is becoming increasingly appealing. In this paper, we introduce a diagnostically lossless compression scheme for X-ray angio images. The coding scheme relies on a novel method based on ray casting and α-shapes to distinguish the clinically relevant Region of Interest (ROI) from the background. The background is then suppressed to increase data redundancy, allowing higher coding performance. Experimental results suggest that the proposed scheme correctly identifies the ROI in X-ray angio images and achieves a reduction of more than 2 bits per pixel on average compared to compression with no background suppression. Results are reported here for 20 out of 25 images compressed using various lossless compression methods.
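A minimal sketch of the background-suppression step, with the ROI mask assumed given (the paper's ray-casting/α-shapes detector is its novel contribution and is not reproduced here): flattening the background to a constant makes it almost free for any lossless coder. The image and mask below are stand-ins.

```python
import zlib
import numpy as np

def suppress_background(image: np.ndarray, roi_mask: np.ndarray,
                        fill: int = 0) -> np.ndarray:
    """Keep ROI pixels untouched; overwrite the background with a constant."""
    out = np.full_like(image, fill)
    out[roi_mask] = image[roi_mask]
    return out

rng = np.random.default_rng(1)
img = rng.integers(0, 4096, (512, 512), dtype=np.uint16)   # stand-in 12-bit frame
mask = np.zeros((512, 512), dtype=bool)
mask[128:384, 128:384] = True                               # stand-in ROI

plain = zlib.compress(img.tobytes())
suppressed = zlib.compress(suppress_background(img, mask).tobytes())
print(len(suppressed) / len(plain))      # well below 1: the background is ~free
```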
{"title":"Diagnostically Lossless Compression of X-Ray Angiographic Images through Background Suppression","authors":"Zhongwei Xu, Joan Bartrina-Rapesta, Victor Sanchez, J. Serra-Sagristà, Juan Munoz-Gomez","doi":"10.1109/DCC.2013.108","DOIUrl":"https://doi.org/10.1109/DCC.2013.108","url":null,"abstract":"Summary form only given. X-ray angiographic (angio) images are widely used for identifying irregularities in the vascular system. Because of their high spatial resolution and the increasingly amount of X-ray angio images generated, compression of these images is becoming increasingly appealing. In this paper, we introduce a diagnostically lossless compression scheme for X-ray angio images. The coding scheme relies on a novel method based on ray casting and a-shapes for distinguishing the clinically relevant Region of Interest from the background. The background is then suppressed to increase data redundancy, allowing to achieve a higher coding performance. Experimental results suggest that the proposed scheme correctly identifies the Region of Interest in X-ray angio images and achieves more than 2 bits per pixel reduction in average as compared to the case of compression with no background suppression. Results are reported here for 20 out of 25 images compressed using various lossless compression methods.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"98 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133924144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lossless Compression of 3D Grid-Based Model Based on Octree
B. Zou, Xiao Wang, Ye Zhang, Zhilu Wu. DOI: 10.1109/DCC.2013.116
Grid-based models are used to describe 3D objects in many circumstances: they can represent the fine structure of objects by using a small grid size. However, a small grid size means that the data of a grid-based model occupies a lot of space, which makes transmission and storage difficult. This paper presents an effective compression method for such 3D data. In this method, the 8-byte floating-point coordinates of each grid cell are converted to 1-bit binary codes, and the binary data is then further coded with an octree. Experimental results show that the 3D data can be compressed efficiently. The method is lossless: the complete raw data can be recovered by decoding.
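A small sketch of the described pipeline under illustrative sizes: floating-point coordinates are quantized into a 1-bit occupancy grid, and the grid is then serialized as an octree that emits one child-occupancy byte per non-empty node. The grid resolution and point set are assumptions, not the paper's data.

```python
import numpy as np

def encode_octree(vox: np.ndarray, out: bytearray) -> None:
    """Emit one child-occupancy byte per non-empty node, top-down."""
    n = vox.shape[0]
    if n == 1 or not vox.any():
        return                     # leaf occupancy is implied by the parent byte
    h = n // 2
    children = [vox[x:x + h, y:y + h, z:z + h]
                for x in (0, h) for y in (0, h) for z in (0, h)]
    out.append(sum(int(child.any()) << i for i, child in enumerate(children)))
    for child in children:
        encode_octree(child, out)

# Stage 1: quantize float coordinates into a 64^3 occupancy grid (1 bit per cell).
rng = np.random.default_rng(2)
points = rng.random((500, 3))            # stand-in model, floats in [0, 1)^3
grid = np.zeros((64, 64, 64), dtype=bool)
idx = np.minimum((points * 64).astype(int), 63)
grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True

# Stage 2: octree-code the occupancy volume.
stream = bytearray()
encode_octree(grid, stream)
print(len(stream), "bytes for", int(grid.sum()), "occupied cells")
```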
{"title":"Lossless Compression of 3D Grid-Based Model Based on Octree","authors":"B. Zou, Xiao Wang, Ye Zhang, Zhilu Wu","doi":"10.1109/DCC.2013.116","DOIUrl":"https://doi.org/10.1109/DCC.2013.116","url":null,"abstract":"Summary form only given. Grid-based model is used to describe 3D objects in many circumstances. It can represent fine structure of objects by using small grid. However, small grid causes problem that data of grid-based model occupies much space, which leads to difficulties of transmission and storage. This paper presents an effective compression method for 3D data. In this method, 8-byte float coordinates of each grid are transferred to 1-bit binary codes. Then the binary data is coded by octree further. The experiment result shows that the 3D data can be efficiently compressed. The method is lossless. The complete raw data can be obtained by decoding.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133209350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A DCT-Based Image Coder Tailored to Product Presentation
W. Chu. DOI: 10.1109/DCC.2013.66
Common e-commerce websites rely heavily on JPEG images for product presentation. In this paper we present a new coding scheme and file format tailored to the presentation of single-color products. A JPEG image file can be transcoded into this new format, leading to a substantial reduction in file size (28% on average) with practically no quality degradation. We describe how several features typical of product-presentation images can be exploited to improve coding efficiency. Objective and subjective performance measurements demonstrate that little quality degradation is incurred by transcoding.
{"title":"A DCT-Based Image Coder Tailored to Product Presentation","authors":"W. Chu","doi":"10.1109/DCC.2013.66","DOIUrl":"https://doi.org/10.1109/DCC.2013.66","url":null,"abstract":"Summary form only given. Common e-commerce websites rely heavily on JPEG images for product presentation. In this paper we present a new coding scheme and file format that is tailored to the presentation of single-color products. A JPEG image file can be transcoded into this new format leading to substantial reduction in file size (Average of 28%) with practically no quality degradation. We describe how we can take advantage of several features found in images for product presentation to improve coding efficiency. Objective and subjective performance measurements are presented to demonstrate that little quality degradation is incurred after transcoding.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122319563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High Throughput Coding of Video Signals
T. Richter, S. Simon. DOI: 10.1109/DCC.2013.96
As the resolution of monitors and TVs continues to increase, the available bandwidth between host system and monitor becomes more and more of a bottleneck. The Video Electronics Standards Association (VESA) is currently developing standards for screen resolutions beyond 4K and, facing the problem of not having enough bandwidth available on traditional copper wires, contacted the JPEG committee to develop a low-complexity, high-throughput still-image coder for lossy transmission of video signals. This article describes two approaches to address this predicament: a simple SPIHT-based codec and a Hadamard-based embedded codec, both requiring only minimal buffering at the encoder and decoder side and avoiding any pixel-based feedback loops that would limit the operating frequency of hardware implementations. Analyzing the details of both implementations reveals an interesting connection between run-length coding, as found in the progressive mode of traditional JPEG coding, and SPIHT/EZW coding, a technique popular in wavelet-based compression.
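To see why a Hadamard stage suits high-throughput hardware, note that it needs no multipliers: the whole transform is log2(N) stages of additions and subtractions. The sketch below is a generic fast Walsh-Hadamard transform, not the codec from the paper.

```python
import numpy as np

def fwht(x: np.ndarray) -> np.ndarray:
    """Unnormalized fast Walsh-Hadamard transform; length must be a power of two."""
    y = x.astype(np.int64).copy()
    h = 1
    while h < len(y):
        for i in range(0, len(y), 2 * h):
            a, b = y[i:i + h].copy(), y[i + h:i + 2 * h].copy()
            y[i:i + h], y[i + h:i + 2 * h] = a + b, a - b  # butterflies: adds only
        h *= 2
    return y

x = np.array([64, 66, 65, 70, 71, 70, 69, 68])
X = fwht(x)
assert np.array_equal(fwht(X) // len(x), x)   # self-inverse up to a factor of N
```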
{"title":"High Throughput Coding of Video Signals","authors":"T. Richter, S. Simon","doi":"10.1109/DCC.2013.96","DOIUrl":"https://doi.org/10.1109/DCC.2013.96","url":null,"abstract":"As the resolution of monitors and TVs continue to increase, the available bandwidth between host system and monitor becomes more and more a bottleneck. The Video Electronics Standards Association (VESA) is currently developing standards for screen resolutions beyond 4K and, facing the problem of not having enough bandwidth available on traditional copper wires, contacted the JPEG to develop a low complexity, high-throughput still image coder for lossy transmission of video signals. This article describes two approaches to address this predicament, a simple SPIHT based coded and a Hadamard based embedded codec requiring only minimal buffering at encoder and decoder side, and avoiding any pixel-based feedback loops limiting the operating frequency of hardware implementations. Analyzing the details of both implementations reveals an interesting connection between run-length coding, as found in the progressive mode of traditional JPEG coding, and SPIHT/EZW coding - a technique popular in wavelet based compression techniques.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130348137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Practical Coding Scheme for Universal Source Coding with Side Information at the Decoder
Elsa Dupraz, A. Roumy, M. Kieffer. DOI: 10.1109/DCC.2013.48
This paper considers the problem of universal lossless source coding with side information available at the decoder only. The correlation channel between the source and the side information is unknown and belongs to a class parametrized by an unknown parameter vector. A complete coding scheme is proposed that works well for any distribution in the class. At the encoder, the proposed scheme encompasses the determination of the coding rate and the design of the encoding process; both contributions follow from the information-theoretic compression bounds for universal lossless source coding with side information. A novel decoder is then proposed that takes into account the available information regarding the class. The proposed scheme avoids the use of a feedback channel or the transmission of a learning sequence, both of which would increase the rate at finite length.
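A toy numeric illustration of the rate-determination step for the simplest such class, a uniform binary source whose side information is linked through a BSC with crossover probability known only to lie in an interval: a rate equal to the worst-case conditional entropy over the class is safe for every member. The interval below is arbitrary, not from the paper.

```python
import numpy as np

def h(p: np.ndarray) -> np.ndarray:
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Class of correlation channels: BSC(p) with p somewhere in [0.05, 0.11].
p_class = np.linspace(0.05, 0.11, 1000)

# For a uniform binary source, H(X|Y) = h(p), so the safe universal rate is
# the worst case over the whole class rather than the true (unknown) h(p).
rate = h(p_class).max()
print(f"universal rate: {rate:.3f} bits/symbol (= h(0.11) here)")
```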
{"title":"Practical Coding Scheme for Universal Source Coding with Side Information at the Decoder","authors":"Elsa Dupraz, A. Roumy, M. Kieffer","doi":"10.1109/DCC.2013.48","DOIUrl":"https://doi.org/10.1109/DCC.2013.48","url":null,"abstract":"This paper considers the problem of universal lossless source coding with side information at the decoder only. The correlation channel between the source and the side information is unknown and belongs to a class parametrized by some unknown parameter vector. A complete coding scheme is proposed that works well for any distribution in the class. At the encoder, the proposed scheme encompasses the determination of the coding rate and the design of the encoding process. Both contributions result from the information-theoretical compression bounds of universal lossless source coding with side information. Then a novel decoder is proposed that takes into account the available information regarding the class. The proposed scheme avoids the use of a feedback channel or the transmission of a learning sequence, which both would result in a rate increase at finite length.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"03 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129658046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantisation Invariants for Transform Parameter Estimation in Coding Chains
M. V. Scarzanella, M. Tagliasacchi, P. Dragotti. DOI: 10.1109/DCC.2013.36
We examine the case of a signal passing through a processing chain consisting of two transform coding stages, with the aim of recovering the unknown parameters of the first encoder. Through number-theoretical considerations, we identify a lattice of quantisation-invariant points whose coordinates are not affected by the double quantisation and whose parameters are closely related to the unknown transform. The conditions for this lattice to exist are then discussed, and its uniqueness properties analysed. Finally, we present an algorithmic procedure to recover the invariants from a sparse set of points, together with numerical results.
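A small scalar-case illustration of the invariance idea (the paper treats full transform chains): among the outputs of the first quantiser, exactly the points on the common lattice of both step sizes survive the second quantisation unchanged, so the surviving points reveal information about the first stage. The step sizes below are arbitrary.

```python
import numpy as np

def quantize(x: np.ndarray, step: float) -> np.ndarray:
    return step * np.round(x / step)

q1, q2 = 0.6, 0.4
stage1 = q1 * np.arange(20)      # reconstruction points of the first quantiser
stage2 = quantize(stage1, q2)    # the same points after the second quantiser
invariant = stage1[np.isclose(stage2, stage1)]
print(invariant)                 # 0.0, 1.2, 2.4, ...: the lattice of lcm(q1, q2)
```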
{"title":"Quantisation Invariants for Transform Parameter Estimation in Coding Chains","authors":"M. V. Scarzanella, M. Tagliasacchi, P. Dragotti","doi":"10.1109/DCC.2013.36","DOIUrl":"https://doi.org/10.1109/DCC.2013.36","url":null,"abstract":"We examine the case of a signal going through a processing chain consisting of two transform coding stages, with the aim of recovering the unknown parameters of the first encoder. Through number theoretical considerations, we identify a lattice of quantisation invariant points, whose coordinates are not affected by the double quantisation and whose parameters are closely related to the unknown transform. The conditions for this lattice to exist are then discussed, and its uniqueness properties analysed. Finally, an algorithmic procedure to recover the invariants from a sparse set of points is shown together with numerical results.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128477766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}