
2010 Data Compression Conference: Latest Publications

gFPC: A Self-Tuning Compression Algorithm
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.42
Martin Burtscher, P. Ratanaworabhan
This paper presents and evaluates gFPC, a self-tuning implementation of the FPC compression algorithm for double-precision floating-point data. gFPC uses a genetic algorithm to repeatedly reconfigure four hash-function parameters, which enables it to adapt to changes in the data during compression. Self-tuning increases the harmonic-mean compression ratio on thirteen scientific datasets from 22% to 28% with sixteen-kilobyte hash tables and from 36% to 43% with one-megabyte hash tables. Individual datasets compress up to 1.72 times better. The self-tuning overhead reduces the compression speed by a factor of four but makes decompression faster because of the higher compression ratio. On a 2.93 GHz Xeon processor, gFPC compresses at a throughput of almost one gigabit per second and decompresses at over seven gigabits per second.
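A minimal sketch of the self-tuning loop described above, assuming the input is processed in chunks. All names, parameter ranges, and the placeholder fitness are illustrative; in gFPC the fitness would be the FPC-compressed size of the chunk under a given four-parameter hash configuration.

```python
import random

# Hypothetical setup: four hash-function parameters, each an integer in
# [0, 63]; a small population evolved for a few generations per chunk.
NUM_PARAMS = 4
PARAM_BITS = 6
POP_SIZE = 8
GENERATIONS = 4

def evaluate(params, chunk):
    # Placeholder fitness: in gFPC this would be the compressed size of
    # `chunk` when FPC runs with these hash parameters (lower is better).
    return sum((p - c) % 64 for p, c in zip(params, chunk))

def mutate(params):
    # Randomly re-draw one of the four parameters.
    i = random.randrange(NUM_PARAMS)
    out = list(params)
    out[i] = random.randrange(1 << PARAM_BITS)
    return tuple(out)

def tune(chunk, seed_params):
    # Seed the population with the previous winner so tuning adapts
    # incrementally as the data changes.
    population = [seed_params] + [mutate(seed_params) for _ in range(POP_SIZE - 1)]
    for _ in range(GENERATIONS):
        scored = sorted(population, key=lambda p: evaluate(p, chunk))
        parents = scored[: POP_SIZE // 2]        # keep the fittest half
        population = parents + [mutate(random.choice(parents))
                                for _ in range(POP_SIZE - len(parents))]
    return min(population, key=lambda p: evaluate(p, chunk))

best = (0, 0, 0, 0)
for chunk in [(3, 9, 27, 50), (1, 2, 4, 8)]:     # stand-in data chunks
    best = tune(chunk, best)                     # re-tune between chunks
    print("tuned parameters:", best)
```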
Citations: 16
Inverted Index Compression for Scalable Image Matching
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.53
David M. Chen, Sam S. Tsai, V. Chandrasekhar, Gabriel Takacs, Ramakrishna Vedantham, R. Grzeszczuk, B. Girod
To perform fast image matching against large databases, a Vocabulary Tree (VT) uses an inverted index that maps from each tree node to database images which have visited that node. The inverted index can require gigabytes of memory, which significantly slows down the database server. In this paper, we design, develop, and compare techniques for inverted index compression for image-based retrieval. We show that these techniques significantly reduce memory usage, by as much as 5x, without loss in recognition accuracy. Our work includes fast decoding methods, an offline database reordering scheme that exploits the similarity between images for additional memory savings, and a generalized coding scheme for soft-binned feature descriptor histograms. We also show that reduced index memory permits memory-intensive image matching techniques that boost recognition accuracy.
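The paper's specific schemes (fast decoding, database reordering, soft-binned histogram coding) are not reproduced here, but a hedged sketch of the classic baseline for posting-list compression -- delta coding of sorted image IDs followed by variable-byte coding -- shows where the memory savings come from:

```python
# Encode a sorted posting list (image IDs that visited one VT node) as
# gaps, each gap in variable-byte form: 7 payload bits per byte, high
# bit set on continuation bytes.
def varbyte_encode(nums):
    out = bytearray()
    prev = 0
    for n in sorted(nums):
        gap = n - prev
        prev = n
        while gap >= 128:
            out.append((gap & 0x7F) | 0x80)   # continuation byte
            gap >>= 7
        out.append(gap)                        # final byte, high bit clear
    return bytes(out)

def varbyte_decode(data):
    nums, cur, shift, prev = [], 0, 0, 0
    for b in data:
        cur |= (b & 0x7F) << shift
        if b & 0x80:
            shift += 7                         # more bytes of this gap follow
        else:
            prev += cur                        # undo the delta coding
            nums.append(prev)
            cur, shift = 0, 0
    return nums

postings = [3, 17, 18, 255, 1024]
enc = varbyte_encode(postings)
assert varbyte_decode(enc) == postings
print(len(enc), "bytes for", len(postings), "postings")
```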
Citations: 53
I/O-Efficient Compressed Text Indexes: From Theory to Practice
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.45
Sheng-Yuan Chiu, W. Hon, R. Shah, J. Vitter
Pattern matching on text data has been a fundamental field of Computer Science for nearly 40 years. Databases supporting full-text indexing functionality on text data are now widely used by biologists. In the theoretical literature, the most popular internal-memory index structures are the suffix trees and the suffix arrays, and the most popular external-memory index structure is the string B-tree. However, the practical applicability of these indexes has been limited mainly because of their space consumption and I/O issues. These structures use a lot more space (almost 20 to 50 times more) than the original text data and are often disk-resident. Ferragina and Manzini (2005) and Grossi and Vitter (2005) gave the first compressed text indexes with efficient query times in the internal-memory model. Recently, Chien et al. (2008) presented a compact text index in the external memory based on the concept of the Geometric Burrows-Wheeler Transform. They also presented lower bounds which suggested that it may be hard to obtain a good index structure in the external memory. In this paper, we investigate this issue from a practical point of view. On the positive side, we show an external-memory text indexing structure (based on R-trees and KD-trees) that saves space by about an order of magnitude as compared to the standard string B-tree. While saving space, these structures also maintain an I/O efficiency comparable to that of the string B-tree. We also show various space vs. I/O efficiency trade-offs for our structures.
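As a point of reference for the internal-memory structures named above, here is a hedged toy of a suffix-array pattern query; the paper's subject, disk-friendly external-memory variants, is exactly what this sketch does not capture:

```python
def suffix_array(text):
    # O(n^2 log n) construction; fine for a toy, far too slow for real indexes.
    return sorted(range(len(text)), key=lambda i: text[i:])

def sa_range(text, sa, pattern):
    # Suffixes starting with `pattern` occupy one contiguous SA range;
    # find its lower and upper bounds by binary search.
    def bound(strict):
        lo, hi = 0, len(sa)
        while lo < hi:
            mid = (lo + hi) // 2
            prefix = text[sa[mid]:sa[mid] + len(pattern)]
            if prefix < pattern or (strict and prefix == pattern):
                lo = mid + 1
            else:
                hi = mid
        return lo
    return bound(False), bound(True)

text = "mississippi"
sa = suffix_array(text)
lo, hi = sa_range(text, sa, "ssi")
print(hi - lo, "occurrences")   # -> 2
```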
Citations: 13
When Huffman Meets Hamming: A Class of Optimal Variable-Length Error Correcting Codes
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.35
S. Savari, J. Kliewer
We introduce a family of binary prefix condition codes in which each codeword is required to have a Hamming weight which is a multiple of w for some integer w ≥ 2. Such codes have intrinsic error resilience and are a special case of codes with codewords constrained to belong to a language accepted by a deterministic finite automaton. For a given source over n symbols and parameter w, we offer an algorithm to construct a minimum-redundancy code among this class of prefix condition codes, which has a running time of O(n^{w+2}).
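A hedged illustration of the codeword constraint only: the greedy toy below exhibits a valid weight-constrained prefix code; it is not the paper's O(n^{w+2}) minimum-redundancy algorithm.

```python
from itertools import product

def weight_ok(word, w):
    # The constraint from the paper: Hamming weight divisible by w.
    return word.count("1") % w == 0

def greedy_code(n_symbols, w, max_len=10):
    # Scan binary strings by length, keep those with admissible weight
    # that do not violate the prefix condition. Valid but not optimal.
    chosen = []
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            word = "".join(bits)
            if not weight_ok(word, w):
                continue
            if any(word.startswith(c) for c in chosen):
                continue                      # would break prefix-freeness
            chosen.append(word)
            if len(chosen) == n_symbols:
                return chosen
    return chosen

print(greedy_code(4, w=2))   # -> ['0', '11', '101', '1001']
```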
Citations: 2
Optimized Analog Mappings for Distributed Source-Channel Coding
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.92
E. Akyol, K. Rose, T. Ramstad
This paper focuses on optimal analog mappings for zero-delay, distributed source-channel coding. The objective is to obtain the optimal vector transformations that map between m-dimensional source spaces and k-dimensional channel spaces, subject to a prescribed power constraint and assuming the mean square error distortion measure. Closed-form necessary conditions for optimality of encoding and decoding mappings are derived. An iterative design algorithm is proposed, which updates encoder and decoder mappings by sequentially enforcing the complementary optimality conditions at each iteration. The obtained encoding functions are shown to be a continuous relative of, and in fact subsume as a special case, the Wyner-Ziv mappings encountered in digital distributed source coding systems, by mapping multiple source intervals to the same channel interval. Example mappings and performance results are presented for Gaussian sources and channels.
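In our notation (not necessarily the paper's), the zero-delay problem described above can be written as: find an encoder g: R^m -> R^k and a decoder h that minimize mean square error over an additive-noise channel, under an average power constraint P:

```latex
\min_{g,\,h}\; \mathbb{E}\!\left[\lVert \mathbf{x} - h\!\left(g(\mathbf{x}) + \mathbf{n}\right)\rVert^{2}\right]
\quad \text{subject to} \quad
\mathbb{E}\!\left[\lVert g(\mathbf{x})\rVert^{2}\right] \le P
```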
Citations: 40
Enhanced Lossless Coding Tools of LPC Residual for ITU-T G.711.0
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.71
T. Moriya, Y. Kamamoto, N. Harada
Three elementary coding tools -- a progressive order prediction tool, a quantized order prediction tool, and an adaptive, sub-frame-based coding tool for separation parameters -- have been devised to enhance the compression performance of the prediction residual. These are intended for the lossless coding of G.711 log PCM symbols used in packet-based network applications such as VoIP. All tools are shown to be effective for reducing the average code length without any significant increase in computational complexity. As a result, all have been adopted in the mapped-domain predictive coding part of the ITU-T G.711.0 standard.
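A hedged illustration of the quantity these tools operate on, the linear-prediction residual e[n] = x[n] - round(Σ_k a_k x[n-k]); the coefficients and rounding below are illustrative, not the standard's actual predictor:

```python
def lpc_residual(x, coeffs):
    # Subtract the linear prediction from each sample; the residual is
    # what an entropy coder such as G.711.0's tools would then encode.
    res = []
    for n in range(len(x)):
        pred = sum(a * x[n - k - 1]
                   for k, a in enumerate(coeffs) if n - k - 1 >= 0)
        res.append(x[n] - int(round(pred)))
    return res

samples = [10, 12, 13, 13, 12, 10, 7, 4]
print(lpc_residual(samples, coeffs=[1.8, -0.9]))  # second-order example
```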
Citations: 3
An Integrated Algorithm for Fractional Pixel Interpolation and Motion Estimation of H.264
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.101
Jiyuan Lu, Peizhao Zhang, Hongyang Chao, P. Fisher
Fractional pixel motion compensation is an area of video compression that can provide significant gains in coding efficacy, but this improvement comes at the cost of high computational complexity. The added complexity arises from two parts: fractional pixel motion estimation (FPME) and fractional pixel interpolation (FPI). Unlike current fast algorithms, we exploit the internal link between FPME and FPI, optimizing the two jointly rather than attempting to speed them up separately. To coordinate FPME with FPI, our proposed algorithm estimates fractional motion vectors and interpolates fractional pixels in the same order, which satisfies the criteria of cost/performance efficiency. Compared with FFPS+XFPI (the FPI method in X264), the proposed algorithm reduces computation time by 60% without coding loss. Furthermore, it achieves a much higher speed and better R-D performance than other fast algorithms, e.g., CBFPS+XFPI. This integrated algorithm therefore significantly improves overall video coding speed, and its idea of jointly optimizing computational cost and R-D performance can be extended to speeding up even finer fractional motion compensation, such as 1/8 pixel, and to designing new interpolation filters for H.265.
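To make the FPI cost concrete: H.264 luma half-sample interpolation applies the six-tap filter (1, -5, 20, 20, -5, 1) with rounding and clipping at every half-pel position. The sketch below applies it along one row; it illustrates the per-pixel work the integrated algorithm amortizes and is not the authors' code:

```python
def clip255(v):
    # H.264 samples are clipped to the 8-bit range after filtering.
    return max(0, min(255, v))

def half_pel_row(row):
    # row: integer-pel luma samples. Returns one half-pel value between
    # each pair of positions where all six taps are available.
    taps = (1, -5, 20, 20, -5, 1)
    out = []
    for i in range(2, len(row) - 3):
        acc = sum(t * row[i - 2 + k] for k, t in enumerate(taps))
        out.append(clip255((acc + 16) >> 5))   # round and normalize by 32
    return out

print(half_pel_row([90, 94, 100, 108, 112, 110, 104, 96]))
```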
Citations: 1
Xampling: Analog Data Compression
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.39
M. Mishali, Yonina C. Eldar
We introduce Xampling, a design methodology for analog compressed sensing in which we sample analog bandlimited signals at rates far lower than Nyquist, without loss of information. This allows compression to be carried out together with the sampling stage. The main principles underlying this framework are the ability to capture a broad signal model, a low sampling rate, efficient analog and digital implementation, and low-rate baseband processing. In order to break through the Nyquist barrier and compress signals in the sampling process, one has to combine classic methods from sampling theory with recent developments in compressed sensing. We show that previous attempts at sub-Nyquist sampling suffer from analog implementation issues and large computational loads, and have no baseband processing capabilities. We then introduce the modulated wideband converter, which can satisfy all the Xampling desiderata. We also demonstrate a board implementation of our converter which exhibits sub-Nyquist sampling in practice.
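A deliberately crude numerical sketch of one modulated-wideband-converter channel, under assumed toy parameters (block averaging stands in for a real analog low-pass filter, and all rates and lengths are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000                                  # dense grid emulating "analog"
t = np.arange(0, 0.1, 1 / fs)
x = np.cos(2 * np.pi * 3200 * t)             # band far above the low output rate

period = 50
p = rng.choice([-1.0, 1.0], size=period)     # periodic +/-1 mixing sequence
mixer = np.tile(p, len(t) // period + 1)[: len(t)]
mixed = x * mixer                            # smears spectral content to baseband

# Crude low-pass + decimation: average each block and keep one sample.
y = mixed[: len(mixed) // period * period].reshape(-1, period).mean(axis=1)
print(f"{len(x)} dense samples -> {len(y)} low-rate samples")
```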
Citations: 38
Two-Step Coding for High Definition Video Compression
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.54
Wenfei Jiang, Wenyu Liu, Longin Jan Latecki, Hui Liang, Changqing Wang, Bin Feng
High definition (HD) video has entered people's lives, from movie theaters to HDTV. However, the compression of HD video is a challenging problem because of flicker noise caused by film grain. Flicker noise significantly limits the applicability of motion estimation (ME), a key factor in efficient block-based video compression, because it makes a close match between a current block and a reference block difficult to obtain. In block-based video coding standards, including H.264, a given block is encoded by either inter-frame or intra-frame prediction. We propose a new coding scheme called Two-Step Coding (TSC) that utilizes both for each block. TSC first reduces the resolution of each frame by replacing each block with the DC coefficient of the DCT of its original color values. Flicker noise is greatly reduced in the resulting lower-resolution frame, which we call the DC frame. The key benefit is that ME becomes very efficient on DC frames, so the DC frame can be efficiently inter-frame coded. The difference between the original frame and the DC frame is described by the AC coefficients of the DCT of the original frame. We use existing H.264 tools to combine the intra-frame and inter-frame coded parts of blocks on both the encoder and decoder sides. The key benefit of the proposed TSC over the most popular standards, H.264 in particular, lies in better utilization of inter-frame coding. Because of flicker noise, H.264 mostly employs intra-block coding on HD video, yet it is well known that inter-frame coding significantly outperforms intra coding in compression rate when temporal correlation is correctly exploited. By reducing each frame to a DC frame, TSC makes it possible to apply inter-frame coding. We provide experimental data and analysis to illustrate this fact.
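A hedged sketch of the DC-frame construction: for an orthonormal 8x8 DCT the DC coefficient is proportional to the block mean, so the reduced frame can be formed from per-block means. Border handling and scaling below are guesses, not the paper's exact definition:

```python
import numpy as np

def dc_frame(frame, b=8):
    # Collapse each b-by-b block to its mean (the DC coefficient of an
    # orthonormal b-by-b DCT is b times this value).
    h, w = frame.shape
    h, w = h - h % b, w - w % b                  # drop partial blocks for simplicity
    blocks = frame[:h, :w].reshape(h // b, b, w // b, b)
    return blocks.mean(axis=(1, 3))              # one value per block

frame = np.arange(32 * 32, dtype=np.float64).reshape(32, 32)
print(dc_frame(frame).shape)   # (4, 4): 1/64 of the original resolution
```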
Citations: 1
Modeling the Quantization Staircase Function
Pub Date: 2010-03-24 DOI: 10.1109/DCC.2010.89
S. Aslam, A. Bobick, C. Barnes
Quantization plays a central role in data compression. In speech systems, vector quantizers are used to compress speech parameters. In video systems, scalar quantizers are used to reduce variability in transform coefficients. More generally, quantizers are used to compress all forms of data. In most cases, the quantizers are based on some form of staircase function. Deriving an analytical expression for a uniform midrise quantizer is well known and straightforward. In this paper, we present an alternative method of deriving such an analytical expression, with the hope that the steps involved will be useful in understanding quantization and its various applications.
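For reference, the standard closed form for a uniform midrise quantizer with step size Δ (ignoring saturation at the outermost levels) is an expression of the kind the paper re-derives by an alternative route:

```latex
Q(x) \;=\; \Delta\left(\left\lfloor \tfrac{x}{\Delta} \right\rfloor + \tfrac{1}{2}\right)
```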
Citations: 0