
Latest publications from the 2009 Data Compression Conference

DCT Domain Message Embedding in Spread-Spectrum Steganography System
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.86
Neha Agrawal, Anubha Gupta
The spread-spectrum image steganography (SSIS) method offers high payload and robustness to additive noise in the transmission channel, but it distorts the visual quality of the image and exact data recovery may not be achieved. Steganographic techniques based on DCT-domain message hiding provide high image imperceptibility and exact data recovery in the absence of noise. In this paper, we combine the best of SSIS and DCT-domain hiding to provide both high image imperceptibility and robustness to noise. We demonstrate the proposed algorithm through experiments with additive noise and JPEG compression attacks in the transmission channel.
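As a rough illustration of the combination described above, the following sketch spreads a single message bit over the mid-frequency DCT coefficients of an 8×8 block using a pseudo-noise (PN) sequence, and recovers it by correlation. This is a minimal, non-blind toy (the coefficient mask, the strength `ALPHA`, and the PN key are hypothetical choices), not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix, so D @ D.T == I."""
    M = np.array([[np.cos(np.pi * (i + 0.5) * k / n) for i in range(n)]
                  for k in range(n)])
    M[0] /= np.sqrt(2.0)
    return M * np.sqrt(2.0 / n)

D = dct_matrix()
# Hypothetical choices: a mid-frequency coefficient mask and strength.
MASK = (np.add.outer(np.arange(8), np.arange(8)) >= 3) \
     & (np.add.outer(np.arange(8), np.arange(8)) <= 6)
ALPHA = 4.0
PN = rng.choice([-1.0, 1.0], size=int(MASK.sum()))  # shared PN key

def embed_bit(block, bit):
    """Spread one bit over the masked DCT coefficients of an 8x8 block."""
    C = D @ block.astype(float) @ D.T
    C[MASK] += (1.0 if bit else -1.0) * ALPHA * PN
    return D.T @ C @ D  # inverse orthonormal DCT

def extract_bit(stego, cover):
    """Non-blind extraction: correlate the coefficient difference with PN."""
    diff = (D @ stego @ D.T) - (D @ cover.astype(float) @ D.T)
    return bool(diff[MASK] @ PN > 0)

cover = rng.integers(0, 256, size=(8, 8))
bit_one = extract_bit(embed_bit(cover, 1), cover)
bit_zero = extract_bit(embed_bit(cover, 0), cover)
```

A real system would quantize the stego block, embed each bit across many blocks, and handle blind extraction; the correlation step is what gives the spread-spectrum approach its robustness to additive noise.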
Citations: 10
Binary Alpha-Plane Assisted Fast Motion Estimation of Video Objects in Wavelet Domain
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.32
Chuanming Song, Xiang-Hai Wang, Yanwen Guo, Fuyan Zhang
In this paper, we present a novel approach to motion estimation (ME) of arbitrarily shaped video objects in the wavelet domain. We explore the guiding role of the binary alpha-plane in assisting ME of video objects, and first devise a new block-matching scheme for the alpha-plane that exploits boundary expansion and boundary masks. To eliminate shift variance, we modify the low-band-shift (LBS) method by substituting variable-size blocks for wavelet blocks. Combining the modified LBS with a hierarchical structure, we further present a multiscale ME approach. Extensive experiments show that the proposed approach outperforms most previous methods in terms of both subjective and objective quality. Moreover, a significant reduction is achieved in computational complexity (up to 89.05%) and memory requirements.
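The wavelet-domain, alpha-plane-assisted scheme itself is involved, but the block-matching core that any ME method builds on can be sketched as a plain full search over a spatial-domain frame (a generic stand-in, not the paper's low-band-shift method):

```python
import numpy as np

def full_search_me(ref, cur, bx, by, bsize=8, srange=4):
    """Exhaustive block matching: return the motion vector minimising SAD."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(int)
    H, W = ref.shape
    best_sad, best_mv = None, (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > H or x + bsize > W:
                continue  # candidate block falls outside the frame
            sad = int(np.abs(ref[y:y + bsize, x:x + bsize].astype(int)
                             - block).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(32, 32))
cur = np.roll(ref, shift=(1, 2), axis=(0, 1))  # frame shifted by dy=1, dx=2
mv, sad = full_search_me(ref, cur, bx=8, by=8)
```

The paper's contribution is precisely to avoid running this exhaustive search naively in the wavelet domain, where shift variance would otherwise break the matching.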
Citations: 0
Complex Wavelet Modulation Subbands for Speech Compression
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.52
J. Luneau, J. Lebrun, S. H. Jensen
Low-frequency modulation of sound carries essential information for speech and music and must be preserved under compression. The complex modulation spectrum is commonly obtained by spectral analysis of only the temporal envelopes of the subbands produced by a time/frequency analysis. The amplitudes and tones of speech or music tend to vary slowly over time, so the temporal envelopes are mostly of polynomial type. Processing in this domain usually creates undesirable distortions because only the magnitudes are taken into account while the phase data is often neglected. We remedy this problem by using a complex wavelet transform as a more appropriate envelope and phase processing tool. Complex wavelets carry both magnitude and phase explicitly, with great sparsity, and preserve polynomials well. Moreover, an analytic Hilbert-like transform is possible with complex wavelets implemented as an orthogonal filter bank. Working in this alternative transform domain, coined "modulation subbands," shows very promising compression capability thanks to interesting sparsity properties, and suggests new approaches for joint spectro-temporal analytic processing of slowly frequency- and phase-varying audio signals.
Citations: 4
Probing the Randomness of Proteins by Their Subsequence Composition
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.60
A. Apostolico, F. Cunial
The quantitative underpinning of the information content of biosequences represents an elusive goal, yet also an obvious prerequisite to the quantitative modeling and study of biological function and evolution. Previous studies have consistently exposed a tenacious lack of compressibility in biosequences. This leaves open the question of what distinguishes them from random strings, the latter being clearly unpalatable to the living cell. This paper assesses the randomness of biosequences in terms of newly introduced parameters that relate to the vocabulary of their (suitably constrained) subsequences rather than their substrings. Experimental results show the potential of the method in distinguishing a protein sequence from its random reshuffling, as well as in classification and clustering tasks.
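A minimal sketch of the underlying idea — measuring the vocabulary of (window-constrained) subsequences rather than substrings — might look like this; the length and window parameters here are hypothetical, not the paper's exact constraints:

```python
from itertools import combinations

def subseq_vocab(s, k, w):
    """Count distinct length-k subsequences of s whose positions all
    fall within a window of w consecutive positions (a gap constraint)."""
    vocab = set()
    for first in range(len(s) - k + 1):
        window = range(first + 1, min(len(s), first + w))
        for rest in combinations(window, k - 1):
            vocab.add(s[first] + "".join(s[i] for i in rest))
    return len(vocab)

uniform = subseq_vocab("AAAAAAAA", 3, 5)  # one distinct subsequence
varied = subseq_vocab("ABCDEFGH", 3, 5)   # every position triple differs
```

A repetitive string such as `"AAAAAAAA"` has a single distinct subsequence per length, while a varied string of the same length has many; comparing a protein's count against those of its random reshufflings is the spirit of the test.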
Citations: 3
Suffix Tree Based VF-Coding for Compressed Pattern Matching
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.58
T. Kida
We propose an efficient variable-length-to-fixed-length code (VF code for short), called the ST-VF code, which uses a frequency-pruned suffix tree as its parse tree. VF codes, as typified by the Tunstall code, have a property that favors compressed pattern matching: since all codewords have the same length, there is no need to distinguish codeword boundaries in the compressed text.
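For reference, the classical Tunstall construction that ST-VF coding builds on can be sketched in a few lines: repeatedly expand the most probable leaf of the parse tree until the dictionary fills the fixed codeword space (this is plain Tunstall coding, not the suffix-tree-based ST-VF code itself):

```python
import heapq

def tunstall(probs, codeword_bits):
    """Build a Tunstall parse dictionary of at most 2**codeword_bits
    entries by repeatedly expanding the most probable leaf."""
    symbols = sorted(probs)
    heap = [(-p, s) for s, p in probs.items()]  # max-heap via negation
    heapq.heapify(heap)
    max_leaves = 2 ** codeword_bits
    # each expansion removes 1 leaf and adds len(symbols) leaves
    while len(heap) + len(symbols) - 1 <= max_leaves:
        negp, word = heapq.heappop(heap)
        for s in symbols:
            heapq.heappush(heap, (negp * probs[s], word + s))
    return sorted(word for _, word in heap)

dictionary = tunstall({"a": 0.7, "b": 0.3}, 2)
codes = {word: i for i, word in enumerate(dictionary)}  # fixed 2-bit indices
```

With `codeword_bits = 2` and P(a)=0.7, P(b)=0.3 this yields the dictionary {aaa, aab, ab, b}; assigning each entry a fixed-length index gives exactly the VF property the abstract mentions — codeword boundaries in the compressed text are implicit.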
Citations: 18
Compressed Kernel Perceptrons
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.75
S. Vucetic, Vladimir Coric, Zhuang Wang
Kernel machines are a popular class of machine learning algorithms that achieve state-of-the-art accuracies on many real-life classification problems. Kernel perceptrons are among the most popular online kernel machines and are known to achieve high-quality classification despite their simplicity. They are represented by a set of B prototype examples, called support vectors, and their associated weights. To obtain a classification, a new example is compared with the support vectors; both the space to store a prediction model and the time to provide a single classification scale as O(B). A problem with kernel perceptrons is that on noisy data the number of support vectors tends to grow without bound with the number of training examples. To reduce the strain on computational resources, budget kernel perceptrons have been developed that upper-bound the number of support vectors. In this work, we propose a new budget algorithm that upper-bounds the number of bits needed to store a kernel perceptron. Setting a bit-length constraint could facilitate hardware and software implementations of kernel perceptrons on resource-limited devices such as microcontrollers. The proposed compressed kernel perceptron algorithm decides on the optimal trade-off between the number of support vectors and their bit precision. The algorithm was evaluated on several benchmark data sets, and the results indicate that it can train highly accurate classifiers even when the available memory budget is below 1 Kbit. This promising result points to the possibility of implementing powerful learning algorithms even on the most resource-constrained computational devices.
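A minimal sketch of a budget kernel perceptron follows, using the simplest possible removal rule — discard the oldest support vector on overflow. The paper's algorithm instead bounds the total number of bits, trading SV count against weight precision; this toy only illustrates the budgeted online update:

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian RBF kernel on plain tuples."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

class BudgetKernelPerceptron:
    """Online kernel perceptron whose support-vector set is capped at
    `budget`; on overflow the oldest SV is discarded (simplest rule)."""
    def __init__(self, budget):
        self.budget = budget
        self.sv = []  # list of (example, label) pairs, weight +/-1

    def predict(self, x):
        score = sum(y * rbf(v, x) for v, y in self.sv)
        return 1 if score >= 0 else -1

    def fit_one(self, x, y):
        if self.predict(x) != y:      # mistake-driven update
            self.sv.append((x, y))
            if len(self.sv) > self.budget:
                self.sv.pop(0)        # enforce the budget

clf = BudgetKernelPerceptron(budget=4)
data = [((0.0, 0.0), -1), ((0.1, 0.0), -1), ((3.0, 3.0), 1), ((3.1, 2.9), 1)]
for _ in range(5):
    for x, y in data:
        clf.fit_one(x, y)
```

Both prediction time and model size here scale with the number of stored SVs, which is the O(B) cost the abstract describes and the motivation for bounding it.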
Citations: 9
The Block LZSS Compression Algorithm
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.9
Wei-ling Chang, Xiao-chun Yun, Binxing Fang, Shupeng Wang
Mainstream compression algorithms, such as LZ, Huffman, and PPM, have been extensively studied in recent years. However, rather less attention has been paid to block variants of those algorithms. The aim of this study was therefore to investigate block LZSS. We studied the relationship between the compression ratio of block LZSS and the number of bits allocated to the index and length fields, and found that the length bits have little effect on compression performance, while the index bits have a significant effect on the compression ratio. Experimental results show that to obtain better efficiency from block LZSS, a moderately sized block larger than 32 KiB may be optimal, and the optimal block size does not depend on file type. We also investigated the factors that affect the optimal block size, using the mean block standard deviation (MBS) and locality of reference to characterize the compression ratio. We found that good data locality implies a large skew in the data distribution, and the greater the skew or the MBS, the better the compression ratio.
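The block-size effect is easy to reproduce with any LZ77-family compressor standing in for LZSS (here zlib; the 1 KiB and 32 KiB sizes are illustrative): compressing independent blocks trades random access against ratio, and larger blocks see more of the data's redundancy.

```python
import zlib

def block_ratio(data, block_size):
    """Compress data in independent blocks; return compressed/original size."""
    compressed = sum(len(zlib.compress(data[i:i + block_size], 9))
                     for i in range(0, len(data), block_size))
    return compressed / len(data)

# Redundant input: larger blocks can exploit more of the repetition.
data = b"the quick brown fox jumps over the lazy dog. " * 400
small_blocks = block_ratio(data, 1 << 10)   # 1 KiB blocks
large_blocks = block_ratio(data, 1 << 15)   # 32 KiB blocks
```

The diminishing return past a moderate block size is what makes the >32 KiB sweet spot reported above plausible: per-block overhead and lost cross-block matches shrink as blocks grow, but eventually most of the exploitable redundancy is already within reach.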
Citations: 2
How Can Intra Correlation Be Exploited Better?
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.23
Feng Wu, Xiulian Peng, Jizheng Xu, Shipeng Li
This paper studies how to better exploit intra correlation in image/intra-frame coding. Unlike the previous assumption that the correlation between samples is determined only by their distance, the image is considered to exhibit a local orientation property: the correlation between samples is determined not only by their distance but also by the orientation of the link between them. Thus, a directional filtering transform (dFT) is proposed in this paper to exploit the local orientation correlation among samples; it consists of directional filtering and an optional transform. Furthermore, this paper analyzes the theoretical coding gain of the proposed directional filtering in terms of signal power spectral density (PSD). Both numerical results and actual compression results for intra-frame coding in H.264 demonstrate the advantages of the proposed dFT, with only a slight increase in decoding complexity.
Citations: 0
Overlapped Tiling for Fast Random Oblique Plane Access of 3D Object Datasets
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.83
Zihong Fan, Antonio Ortega, Cheng-hao Chien
Volume visualization with random data access poses significant challenges. While tiling techniques lead to simple implementations, they are not well suited to cases where the goal is to access arbitrarily located subdimensional datasets (e.g., displaying an arbitrary 2D planar "cut" from a 3D volume). Significant effort has been devoted to volumetric data compression, with most techniques proposing to tile volumes into cuboid subvolumes to enable random access. In this paper we show that, when subdimensional datasets are accessed, this leads to significant transmission inefficiency. As an alternative, we propose novel server-client data representation and retrieval methods that enable fast random access to oblique planes in 3D volume datasets. 3D experiments are shown, but the approach may be extended to higher-dimensional datasets. We use multiple redundant tilings of the 3D object, where each tiling has a different orientation. We discuss the 3D rectangular tiling scheme and the two main algorithmic components of such a system: (i) a search algorithm to determine which tiles should be retrieved for a given query, and (ii) a mapping algorithm to enable efficient encoding without interpolation of rotated tiles. In exchange for increased server storage, we demonstrate that significant reductions in average transmission rate can be achieved relative to conventional cubic tiling techniques, e.g., nearly a 40% reduction in average transmission rate for less than a factor-of-twenty storage overhead before compression. Note that, as shown in our earlier work on the 2D case, the storage overhead is lower after compression (in 2D, the relative increase in storage in the compressed domain was at least a factor of two lower than in the uncompressed domain).
Citations: 3
Algorithmic Cross-Complexity and Relative Complexity
Pub Date : 2009-03-16 DOI: 10.1109/DCC.2009.6
D. Cerra, M. Datcu
Information content and compression are tightly related concepts that can be addressed through classical and algorithmic information theory. Several quantities in the latter have been defined by drawing on notions of the former, such as entropy and mutual information, since the basic concepts of the two approaches share many common traits. In this work we further extend this parallelism by defining the algorithmic versions of cross-entropy and relative entropy (or Kullback-Leibler divergence), two well-known concepts in classical information theory. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x relative to y as the compression power that is lost when using such a description for x instead of its shortest representation. Since the main drawback of these concepts is their uncomputability, a suitable compression-based approximation is derived for both and applied to real data. This allows us to improve on the results obtained by similar, intuitively defined methods.
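The standard compression-based approximation in this vein is the normalized compression distance (NCD), which replaces the uncomputable Kolmogorov quantities with a real compressor's output length; the paper's cross- and relative-complexity approximations are in the same spirit, though not identical to NCD:

```python
import zlib

def C(x):
    """Compressed length: a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(x, 9))

def ncd(x, y):
    """Normalized Compression Distance (Cilibrasi & Vitanyi)."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

x = b"abcabcabc" * 50
unrelated = bytes(range(256)) * 2
self_dist = ncd(x, x)             # near 0: x describes itself well
cross_dist = ncd(x, unrelated)    # near 1: little shared structure
```

Intuitively, C(x + y) − C(y) plays the role of the cross-description cost: if y already contains the regularities of x, appending x costs the compressor almost nothing.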
Citations: 12