Region Based Rate-Distortion Analysis for 3D Video Coding
Qifei Wang, Xiangyang Ji, Qionghai Dai, Naiyao Zhang (DOI: 10.1109/DCC.2010.63)
In 3D video coding, providing high-quality interactive-viewpoint video to the audience requires jointly optimizing the coding efficiency of the color and depth images at a given bit rate. In this paper, a region-based distortion model is proposed to precisely estimate the error of the synthesized virtual view. Combined with the rate-distortion (R-D) models of color and depth image coding, an overall R-D model is then built for 3D video coding. Experimental results show that the proposed approach efficiently characterizes the R-D behavior of 3D video coding.
Miguel A. Martínez-Prieto, J. Adiego, P. Fuente, Javier D. Fernández
High-order word-based modeling is able to achieve competitive compression ratios by using k-order text statistics. However, this can be an impracticable problem due to the large number of relationships between words. This paper focuses on how the 1-order Edge-Guided (E-G) technique can be enhanced to support modeling and coding on high-order text statistics. An improved E-G revision, called E-G1, is firstly done. A grammar-based building is next used to identify significative high-order contexts, in a first pass, which are used to encode the text on an extended revision of the E-G codification scheme. This current approach, E-Gk, yields a competitive space/efficiency trade-off with respect to comparable approaches.
{"title":"High-Order Text Compression on Hierarchical Edge-Guided","authors":"Miguel A. Martínez-Prieto, J. Adiego, P. Fuente, Javier D. Fernández","doi":"10.1109/DCC.2010.72","DOIUrl":"https://doi.org/10.1109/DCC.2010.72","url":null,"abstract":"High-order word-based modeling is able to achieve competitive compression ratios by using k-order text statistics. However, this can be an impracticable problem due to the large number of relationships between words. This paper focuses on how the 1-order Edge-Guided (E-G) technique can be enhanced to support modeling and coding on high-order text statistics. An improved E-G revision, called E-G1, is firstly done. A grammar-based building is next used to identify significative high-order contexts, in a first pass, which are used to encode the text on an extended revision of the E-G codification scheme. This current approach, E-Gk, yields a competitive space/efficiency trade-off with respect to comparable approaches.","PeriodicalId":299459,"journal":{"name":"2010 Data Compression Conference","volume":"182 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120981752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bounding the Rate Region of Vector Gaussian Multiple Descriptions with Individual and Central Receivers
Guoqiang Zhang, W. Kleijn, Jan Østergaard (DOI: 10.1109/DCC.2010.9)
The rate region of the vector Gaussian multiple-description problem with individual and central quadratic distortion constraints is studied. There are two main contributions. First, a lower bound on the rate region is derived; the bound is obtained by lower-bounding the weighted sum rate for each supporting hyperplane of the rate region. Second, the rate region for the scalar Gaussian source is fully characterized by showing that the lower bound is tight. The optimal weighted sum rate for each supporting hyperplane is obtained by solving a single maximization problem, in contrast to existing results, which require solving a min-max optimization problem.
Reconstruction of Sparse Binary Signals Using Compressive Sensing
Jiangtao Wen, Zhuoyuan Chen, Shiqiang Yang, Yuxing Han, J. Villasenor (DOI: 10.1109/DCC.2010.61)
This paper describes an improved algorithm for reconstructing sparse binary signals using compressive sensing. The algorithm is based on the reweighted $l_q$-norm optimization algorithm of \cite{04}, with two important additions: bounding in each round of the interior-point iteration, and progressive reduction of $q$. Experimental results confirm that the algorithm performs well both in its ability to recover an input signal and in speed. We also find that both the progressive reduction and the bounding are integral to the improvement in performance.
Compressed Indexes for Approximate Library Management
W. Hon, Winson Wu, Ting Yang (DOI: 10.1109/DCC.2010.75)
This paper investigates the approximate library management problem: constructing an index for a dynamic text collection $L$ such that, for any query pattern $P$ and any integer $k$, all $k$-error matches of $P$ in $L$ can be reported efficiently. Existing work either focused on the static version of the problem or assumed $k = 0$. We observe that by combining several recent techniques, we can obtain the first compressed indexes that simultaneously support efficient pattern queries and updates.
Maximum Mutual Information Vector Quantization of Log-Likelihood Ratios for Memory Efficient HARQ Implementations
Matteo Danieli, S. Forchhammer, J. D. Andersen, Lars P. B. Christensen, S. S. Christensen (DOI: 10.1109/DCC.2010.98)
Modern mobile telecommunication systems, such as 3GPP LTE, make use of Hybrid Automatic Repeat reQuest (HARQ) for efficient and reliable communication between base stations and mobile terminals. For this purpose, marginal posterior probabilities of the received bits are stored as log-likelihood ratios (LLRs) so that information sent across different retransmissions can be combined. To mitigate the effect of ever-increasing data rates, which call for larger HARQ memory, vector quantization (VQ) is investigated as a technique for temporary compression of LLRs on the terminal. A capacity analysis leads to maximum mutual information (MMI) as the optimality criterion and, in turn, Kullback-Leibler (KL) divergence as the distortion measure. Simulations of an LTE-like system show that VQ can be implemented in a computationally simple way at low rates of 2-3 bits per LLR value without compromising system throughput.
A Similarity Measure Using Smallest Context-Free Grammars
D. Cerra, M. Datcu (DOI: 10.1109/DCC.2010.37)
This work presents a new approximation of the Kolmogorov complexity of strings based on compression with smallest context-free grammars (CFGs). If a dictionary containing a string's relevant patterns may be regarded as a model for it, a context-free grammar may represent a generative model in which every rule (and, consequently, the grammar's size) is meaningful. We therefore define a new complexity approximation that takes the size of the string's model into account, in a representation similar to the Minimum Description Length. These considerations lead to a new compression-based similarity measure; its novelty lies in the fact that complexity overestimations, caused by the limitations of real compressors, can be accounted for and reduced.
Auto Regressive Model and Weighted Least Squares Based Packet Video Error Concealment
Yongbing Zhang, Xinguang Xiang, Siwei Ma, Debin Zhao, Wen Gao (DOI: 10.1109/DCC.2010.100)
In this paper, an auto-regressive (AR) model is applied to error concealment for block-based packet video coding. Each pixel within a corrupted block is restored as a weighted summation of corresponding pixels in the previous frame, in a linear-regression manner. Two novel algorithms based on weighted least squares are proposed to derive the AR coefficients. First, a coefficient-derivation algorithm under a spatial-continuity constraint is presented, in which the sum of weighted squared errors within the available neighboring blocks is minimized; the confidence weight of each sample is inversely proportional to its distance from the corrupted block. Second, a coefficient-derivation algorithm under a temporal-continuity constraint is proposed, in which the sum of weighted squared errors around the target pixel within the previous frame is minimized; the confidence weight of each sample is proportional to its similarity in both geometric proximity and gray-level intensity. The regression results produced by the two algorithms are then merged to form the final restoration. Experimental results demonstrate that the proposed error-concealment strategy increases the peak signal-to-noise ratio (PSNR) compared with other methods.
Spatial Constant Quantization in JPEG XR is Nearly Optimal
T. Richter (DOI: 10.1109/DCC.2010.14)
The JPEG XR image compression standard, originally developed by Microsoft under the name HD Photo, offers spatially varying quantization: its codestream syntax allows selecting one out of a limited set of possible quantizers per macroblock and per frequency band. In this paper, an algorithm is presented that finds the rate-distortion-optimal set of quantizers and the optimal quantizer choice for each macroblock. Although it seems plausible that this feature could provide a large improvement for images with non-stationary statistics, e.g., compound images, it is demonstrated that the PSNR improvement is no larger than 0.3 dB for a two-step heuristic of feasible complexity; improvements of up to 0.8 dB for compound images are possible only with a much more complex optimization strategy.
A Pseudo-Random Number Generator Based on LZSS
Wei-ling Chang, Binxing Fang, Xiao-chun Yun, Shupeng Wang, Xiang-Zhan Yu (DOI: 10.1109/DCC.2010.77)
A pseudo-random number generator (PRNG), L12RC4, inspired by the LZSS compression algorithm and the RC4 stream cipher, is presented and implemented. Results from the NIST and Diehard test suites indicate that L12RC4 is a good PRNG; it appears to be sound and may be suitable for some cryptographic applications. We also find that the probability distribution of the index-value frequencies depends on the compression pass and on the INDEX_BIT_COUNT value: in one-pass mode, the larger the INDEX_BIT_COUNT value, the more uniform the distribution, and double-pass mode has better uniformity than one-pass mode.