
Latest publications — 2010 Data Compression Conference

Region Based Rate-Distortion Analysis for 3D Video Coding
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.63
Qifei Wang, Xiangyang Ji, Qionghai Dai, Naiyao Zhang
In 3D video coding, providing high-quality interactive viewpoint video to the audience requires jointly optimizing the coding efficiency of the color and depth images at a given bit-rate. In this paper, a region-based distortion model is proposed to precisely estimate the error of the synthesized virtual view. Furthermore, combined with the rate-distortion (R-D) models of color and depth image coding, an overall R-D model is built for 3D video coding. Experimental results show that the proposed approach can efficiently measure the R-D behavior of 3D video coding.
Citations: 4
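The joint optimization the abstract describes can be illustrated with a toy bit-allocation search between the color and depth streams. The exponential R-D curves and the view-synthesis weights below are illustrative assumptions, not the paper's region-based model:

```python
import numpy as np

def synthesized_view_distortion(r_color, r_depth, wc=0.7, wd=0.3):
    # Classic D(R) ~ sigma^2 * 2^(-2R) curves; constants are made up.
    d_color = 40.0 * 2.0 ** (-2.0 * r_color)
    d_depth = 25.0 * 2.0 ** (-2.0 * r_depth)   # depth error propagates into warping
    return wc * d_color + wd * d_depth

def best_split(total_rate, steps=100):
    # Brute-force search over rate splits; returns (r_color, r_depth, distortion).
    grid = np.linspace(0.0, total_rate, steps + 1)
    costs = [synthesized_view_distortion(r, total_rate - r) for r in grid]
    i = int(np.argmin(costs))
    return grid[i], total_rate - grid[i], costs[i]

rc, rd, d = best_split(4.0)
```

A real codec would replace the brute-force grid with a Lagrangian sweep, but the structure of the problem — one distortion model fed by two coupled R-D curves — is the same.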
High-Order Text Compression on Hierarchical Edge-Guided
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.72
Miguel A. Martínez-Prieto, J. Adiego, P. Fuente, Javier D. Fernández
High-order word-based modeling can achieve competitive compression ratios by using k-order text statistics. However, this can become impractical because of the large number of relationships between words. This paper focuses on how the 1-order Edge-Guided (E-G) technique can be enhanced to support modeling and coding of high-order text statistics. An improved E-G revision, called E-G1, is presented first. A grammar-based construction is then used, in a first pass, to identify significant high-order contexts, which are used to encode the text with an extended revision of the E-G codification scheme. The resulting approach, E-Gk, yields a competitive space/efficiency trade-off with respect to comparable approaches.
Citations: 0
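The k-order word statistics at the heart of this approach amount to counting which word follows each k-word context. A minimal sketch (the paper discovers such contexts via a grammar-based first pass; here they are simply enumerated):

```python
from collections import Counter, defaultdict

def build_model(words, k=2):
    # Count word occurrences conditioned on the k preceding words.
    model = defaultdict(Counter)
    for i in range(len(words) - k):
        context = tuple(words[i:i + k])
        model[context][words[i + k]] += 1
    return model

def predict_prob(model, context, word):
    counts = model[tuple(context)]
    total = sum(counts.values())
    return counts[word] / total if total else 0.0

text = "the cat sat on the mat and the cat sat on the rug".split()
m = build_model(text, k=2)
p = predict_prob(m, ["the", "cat"], "sat")   # "the cat" is always followed by "sat"
```

Skewed conditional distributions like these are what an arithmetic or E-G-style coder exploits; the difficulty the paper addresses is that the number of contexts grows rapidly with k.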
Bounding the Rate Region of Vector Gaussian Multiple Descriptions with Individual and Central Receivers
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.9
Guoqiang Zhang, W. Kleijn, Jan Østergaard
The problem of the rate region of the vector Gaussian multiple description with individual and central quadratic distortion constraints is studied. We have two main contributions. First, a lower bound on the rate region is derived. The bound is obtained by lower-bounding a weighted sum rate for each supporting hyperplane of the rate region. Second, the rate region for the scenario of the scalar Gaussian source is fully characterized by showing that the lower bound is tight. The optimal weighted sum rate for each supporting hyperplane is obtained by solving a single maximization problem. This is contrary to existing results, which require solving a min-max optimization problem.
Citations: 1
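For background, the scalar case in which the bound is shown to be tight rests on the classical quadratic-Gaussian rate-distortion function. This is a standard textbook formula, not a result of the paper: for a Gaussian source of variance $\sigma^2$ under mean-squared-error distortion,

```latex
R(D) = \frac{1}{2}\log_2\frac{\sigma^2}{D}, \qquad 0 < D \le \sigma^2 .
```

Multiple-description rate regions generalize this single-receiver trade-off to individual and central distortion constraints satisfied simultaneously.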
Reconstruction of Sparse Binary Signals Using Compressive Sensing
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.61
Jiangtao Wen, Zhuoyuan Chen, Shiqiang Yang, Yuxing Han, J. Villasenor
This paper describes an improved algorithm for reconstructing sparse binary signals using compressive sensing. The algorithm is based on the reweighted $l_q$ norm optimization algorithm of cite{04}, but adds the important operation of bounding in each round of the interior-point method iteration, together with a progressive reduction of $q$. Experimental results confirm that the algorithm performs well both in its ability to recover an input signal and in its speed. We also found that both the progressive reduction and the bounding are integral to the improvement in performance.
Citations: 0
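The reweighted $l_q$ idea can be sketched with a closed-form iteratively reweighted least-squares loop: each round solves min $x^T W^{-1} x$ subject to $Ax = y$, with weights derived from the previous iterate. The paper's interior-point bounding of each iterate to [0, 1] and the progressive reduction of $q$ are simplified away here; the weighting schedule and epsilon are illustrative assumptions:

```python
import numpy as np

def reweighted_lq(A, y, q=0.8, iters=20, eps=1e-3):
    # Start from the minimum-l2-norm solution.
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        # Small |x_i| -> small weight -> heavy penalty, driving it toward zero.
        w = (np.abs(x) + eps) ** (2.0 - q)
        AW = A * w                                    # equals A @ diag(w)
        # Closed form of min x^T diag(1/w) x  s.t.  Ax = y.
        x = w * (A.T @ np.linalg.solve(AW @ A.T, y))
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[3, 17, 29]] = 1.0                             # sparse binary signal
y = A @ x_true
x_hat = reweighted_lq(A, y)                            # every iterate satisfies A x = y
```

Each iterate satisfies the measurement constraint exactly, while the reweighting concentrates energy on a few coordinates — the mechanism the bounded, progressively-reduced-$q$ algorithm in the paper refines.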
Compressed Indexes for Approximate Library Management
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.75
W. Hon, Winson Wu, Ting Yang
This paper investigates the approximate library management problem: constructing an index for a dynamic text collection $L$ such that, for any query pattern $P$ and any integer $k$, all $k$-error matches of $P$ in $L$ can be reported efficiently. Existing work either focused on the static version of the problem or assumed $k=0$. We observe that by combining several recent techniques, we can achieve the first compressed indexes that simultaneously support efficient pattern queries and updates.
Citations: 0
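The query the index answers can be stated as a naive quadratic-time baseline: report every position where the pattern matches with at most k errors. Substitution (Hamming) errors are an assumption here — the paper's k-error model may also allow insertions and deletions:

```python
def k_error_matches(text, pattern, k):
    # Slide the pattern over the text, counting mismatches in each window.
    hits = []
    for i in range(len(text) - len(pattern) + 1):
        errors = sum(1 for a, b in zip(text[i:i + len(pattern)], pattern) if a != b)
        if errors <= k:
            hits.append(i)
    return hits

positions = k_error_matches("abracadabra", "abba", 2)   # -> [0, 7]
```

The point of a compressed index is to answer such queries in far less than this O(|text| * |pattern|) time while also supporting insertion and deletion of whole texts in the collection.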
Maximum Mutual Information Vector Quantization of Log-Likelihood Ratios for Memory Efficient HARQ Implementations
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.98
Matteo Danieli, S. Forchhammer, J. D. Andersen, Lars P. B. Christensen, S. S. Christensen
Modern mobile telecommunication systems, such as 3GPP LTE, use Hybrid Automatic Repeat reQuest (HARQ) for efficient and reliable communication between base stations and mobile terminals. To this end, marginal posterior probabilities of the received bits are stored in the form of log-likelihood ratios (LLRs) so that information sent across different retransmissions can be combined. To mitigate the effects of ever-increasing data rates, which call for larger HARQ memory, vector quantization (VQ) is investigated as a technique for temporary compression of LLRs on the terminal. A capacity analysis leads to using maximum mutual information (MMI) as the optimality criterion and, in turn, Kullback-Leibler (KL) divergence as the distortion measure. Simulations based on an LTE-like system show that VQ can be implemented in a computationally simple way at low rates of 2-3 bits per LLR value without compromising system throughput.
Citations: 21
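The KL distortion measure between two LLRs is the divergence between the Bernoulli distributions they imply, and quantization then picks the codeword minimizing that divergence. A scalar sketch — the 8-level codebook below is an illustrative placeholder, not a trained MMI codebook, and the paper quantizes vectors rather than scalars:

```python
import math

def bernoulli_from_llr(llr):
    # An LLR l implies P(bit = 1) = sigmoid(l).
    return 1.0 / (1.0 + math.exp(-llr))

def kl_llr(l_a, l_b):
    # KL divergence between the Bernoulli laws implied by two LLRs.
    p, q = bernoulli_from_llr(l_a), bernoulli_from_llr(l_b)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def quantize(llr, codebook):
    # Nearest codeword under KL distortion (not under absolute difference).
    return min(codebook, key=lambda c: kl_llr(llr, c))

codebook = [-6.0, -3.0, -1.0, -0.25, 0.25, 1.0, 3.0, 6.0]
q = quantize(2.4, codebook)
```

Note that KL distortion is asymmetric and penalizes overconfident codewords, which is why it differs from plain squared-error quantization of the LLR values.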
A Similarity Measure Using Smallest Context-Free Grammars
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.37
D. Cerra, M. Datcu
This work presents a new approximation of the Kolmogorov complexity of strings based on compression with smallest Context-Free Grammars (CFGs). If, for a given string, a dictionary containing its relevant patterns may be regarded as a model, then a Context-Free Grammar may represent a generative model, with all of its rules (and, as a consequence, its own size) being meaningful. Thus, we define a new complexity approximation that takes the size of the string model into account, in a representation similar to the Minimum Description Length. These considerations lead to a new compression-based similarity measure: its novelty lies in the fact that the impact of complexity overestimation, caused by the limitations of real compressors, can be accounted for and reduced.
Citations: 6
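The family of compression-based similarity measures this work belongs to is easiest to see in the classical normalized compression distance. Here zlib stands in for the smallest-CFG size — an off-the-shelf approximation, not the paper's grammar-based measure:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: small when compressing x and y together
    # costs little more than compressing the larger of the two alone.
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"pack my box with five dozen liquor jugs " * 20
similar = ncd(a, a)        # near 0: the second copy is almost free
different = ncd(a, b)      # noticeably larger
```

The paper's contribution is to replace the opaque compressor size with the size of an explicit grammar, so that overestimation caused by compressor limitations can be identified and corrected.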
Auto Regressive Model and Weighted Least Squares Based Packet Video Error Concealment
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.100
Yongbing Zhang, Xinguang Xiang, Siwei Ma, Debin Zhao, Wen Gao
In this paper, an auto-regressive (AR) model is applied to error concealment for block-based packet video coding. Each pixel within a corrupted block is restored as the weighted summation of corresponding pixels within the previous frame in a linear-regression manner. Two novel algorithms based on the weighted least squares method are proposed to derive the AR coefficients. First, we present a coefficient-derivation algorithm under the spatial continuity constraint, in which the summation of the weighted squared errors within the available neighboring blocks is minimized. The confidence weight of each sample is inversely proportional to the distance between the sample and the corrupted block. Second, we provide a coefficient-derivation algorithm under the temporal continuity constraint, where the summation of the weighted squared errors around the target pixel within the previous frame is minimized. The confidence weight of each sample is proportional to geometric proximity as well as gray-level similarity. The regression results generated by the two algorithms are then merged to form the final restoration. Various experimental results demonstrate that the proposed error-concealment strategy increases the peak signal-to-noise ratio (PSNR) compared to other methods.
Citations: 8
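The weighted least-squares step has a standard implementation: scale each regression row by the square root of its confidence weight and solve an ordinary least-squares problem. The synthetic data and weights below are illustrative; the paper derives its weights from spatial distance and pixel similarity:

```python
import numpy as np

def wls_ar_coeffs(X, t, weights):
    # min sum_i w_i * (t_i - X_i . c)^2  via row scaling by sqrt(w_i).
    sw = np.sqrt(weights)[:, None]
    coeffs, *_ = np.linalg.lstsq(X * sw, t * sw.ravel(), rcond=None)
    return coeffs

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))        # samples of neighboring-pixel values
true_c = np.array([0.4, 0.3, 0.2, 0.1])  # ground-truth AR coefficients
t = X @ true_c                           # target pixels (noise-free for the demo)
w = rng.uniform(0.5, 1.0, size=200)      # per-sample confidence weights
c_hat = wls_ar_coeffs(X, t, w)           # recovers true_c exactly here
```

In the concealment setting, X would hold co-located pixels from the previous frame, t the known pixels around the corrupted block, and the fitted coefficients would then synthesize the missing pixels.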
Spatial Constant Quantization in JPEG XR is Nearly Optimal
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.14
T. Richter
The JPEG XR image compression standard, originally developed by Microsoft under the name HD Photo, offers spatially variable quantization: its codestream syntax allows selecting one out of a limited set of possible quantizers per macroblock and per frequency band. In this paper, an algorithm is presented that finds the rate-distortion-optimal set of quantizers and the optimal quantizer choice for each macroblock. Even though it seems plausible that this feature could bring a large improvement for images whose statistics are non-stationary, e.g. compound images, it is demonstrated that the PSNR improvement is no larger than 0.3 dB for a two-step heuristic of feasible complexity, while improvements of up to 0.8 dB for compound images are possible with a much more complex optimization strategy.
Citations: 6
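For a fixed Lagrange multiplier, the per-macroblock quantizer choice decomposes into independent minimizations of the Lagrangian cost D + λR. A sketch of that inner step — the per-block (rate, distortion) tables are made-up numbers for illustration, and the paper additionally optimizes which small set of quantizers the codestream signals:

```python
def pick_quantizers(rd_tables, lam):
    # For each block, choose the quantizer index minimizing D + lam * R.
    choices = []
    for table in rd_tables:                 # one list of (rate, distortion) per block
        costs = [d + lam * r for r, d in table]
        choices.append(costs.index(min(costs)))
    return choices

rd_tables = [
    [(8.0, 10.0), (5.0, 14.0), (3.0, 25.0)],   # block 0: (rate, distortion) per quantizer
    [(9.0, 4.0), (6.0, 6.0), (2.0, 30.0)],     # block 1
]
best = pick_quantizers(rd_tables, lam=2.0)     # middle quantizer wins for both blocks
```

Sweeping λ traces out the operational R-D curve; the paper's finding is that the gain of this per-block freedom over one spatially constant quantizer is small for feasible heuristics.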
A Pseudo-Random Number Generator Based on LZSS
Pub Date : 2010-03-24 DOI: 10.1109/DCC.2010.77
Wei-ling Chang, Binxing Fang, Xiao-chun Yun, Shupeng Wang, Xiang-Zhan Yu
A pseudo-random number generator (PRNG), L12RC4, inspired by the LZSS compression algorithm and the RC4 stream cipher, is presented and implemented. The results of the NIST and Diehard test suites indicate that L12RC4 is a good PRNG; it appears sound and may be suitable for use in some cryptographic applications. We also found that the probability distribution of the index-value frequency is associated with the compression pass and the INDEX_BIT_COUNT value. In one-pass mode, the greater the INDEX_BIT_COUNT value, the more uniform the distribution, and the double-pass mode has better uniformity than the one-pass mode.
Citations: 1
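The simplest of the NIST checks mentioned here is the monobit frequency test: for a good PRNG the normalized sum of ±1 bits should look standard normal, giving a p-value well away from 0. A sketch, with Python's `random` module standing in for L12RC4 output:

```python
import math
import random

def monobit_statistic(bits):
    # Map bits to +/-1, sum, and normalize by sqrt(n); ~N(0, 1) for random bits.
    s = sum(1 if b else -1 for b in bits)
    return abs(s) / math.sqrt(len(bits))

random.seed(42)
bits = [random.getrandbits(1) for _ in range(100_000)]
stat = monobit_statistic(bits)
p_value = math.erfc(stat / math.sqrt(2))   # large p-value => no evidence of bias
```

The full NIST suite layers many such statistics (runs, block frequency, spectral tests) on top of this idea; a generator that skews its index-value frequencies, as the abstract discusses, would fail exactly these uniformity checks.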