
Proceedings DCC '95 Data Compression Conference: Latest Publications

Arithmetic coding revisited
Pub Date: 1995-03-28 DOI: 10.1109/DCC.1995.515510
Alistair Moffat, Radford M. Neal, I. Witten
During its long gestation in the 1970s and early 1980s, arithmetic coding was widely regarded more as an academic curiosity than a practical coding technique. One factor that helped it gain the popularity it enjoys today was the publication in 1987 of source code for a multi-symbol arithmetic coder in Communications of the ACM. Now (1995), our understanding of arithmetic coding has further matured, and it is timely to review the components of that implementation and summarise the improvements that we and other authors have developed since then. We also describe a novel method for performing the underlying calculation needed for arithmetic coding. Accompanying the paper is a "Mark II" implementation that incorporates the improvements we suggest. The areas examined include: changes to the coding procedure that reduce the number of multiplications and divisions and permit them to be done to low precision; the increased range of probability approximations and alphabet sizes that can be supported using limited-precision calculation; data structures for supporting arithmetic coding on large alphabets; the interface between the modelling and coding subsystems; and the use of enhanced models to allow high-performance compression. For each of these areas, we consider how the new implementation differs from the CACM package.
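The coding-procedure changes are easiest to appreciate against the baseline interval-narrowing loop. Below is a minimal sketch of an integer arithmetic encoder in the spirit of the 1987 CACM coder, not the Mark II implementation itself; the 16-bit code width, the two-symbol alphabet, and the frequency counts are illustrative assumptions. Note the one multiplication and one division per symbol that the paper's improvements target.

```python
# A minimal integer arithmetic encoder in the spirit of the CACM-style coder
# (an illustrative sketch, not the authors' Mark II code). Symbol statistics
# are cumulative frequency counts; narrowing the interval costs one multiply
# and one divide per symbol -- the operations the paper reduces.

CODE_BITS = 16
TOP = (1 << CODE_BITS) - 1           # 0xFFFF
HALF = 1 << (CODE_BITS - 1)
QUARTER = 1 << (CODE_BITS - 2)

def encode(symbols, cum_freq):
    """cum_freq[s]..cum_freq[s+1] is the frequency slot of symbol s."""
    low, high, pending = 0, TOP, 0
    out = []

    def emit(bit):
        nonlocal pending
        out.append(bit)
        out.extend([1 - bit] * pending)    # release deferred opposite bits
        pending = 0

    total = cum_freq[-1]
    for s in symbols:
        span = high - low + 1
        high = low + span * cum_freq[s + 1] // total - 1   # multiply/divide
        low = low + span * cum_freq[s] // total
        while True:                        # renormalize to keep precision
            if high < HALF:
                emit(0)                    # interval in lower half
            elif low >= HALF:
                emit(1)                    # interval in upper half
                low, high = low - HALF, high - HALF
            elif low >= QUARTER and high < 3 * QUARTER:
                pending += 1               # straddles the midpoint: defer
                low, high = low - QUARTER, high - QUARTER
            else:
                break
            low, high = 2 * low, 2 * high + 1
    pending += 1
    emit(0 if low < QUARTER else 1)        # disambiguating tail bits
    return out

# Encode 'aab' over alphabet {a: 0, b: 1} with counts a=3, b=1.
print(encode([0, 0, 1], cum_freq=[0, 3, 4]))
```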
Citations: 611
New algorithms for optimal binary vector quantizer design
Pub Date: 1995-03-28 DOI: 10.1109/DCC.1995.515503
Xiaolin Wu, Yonggang Fang
New algorithms are proposed for designing optimal binary vector quantizers. These algorithms aim to avoid the generalized Lloyd method's tendency to become trapped in poor local minima. To improve the subjective quality of vector-quantized binary images, a constrained optimal binary VQ framework is proposed. Within this framework, the optimal VQ design can be done via an interesting use of linear codes.
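As context for the local-minimum problem, here is a sketch of the generalized Lloyd iteration for binary vectors under Hamming distortion, where each cell's centroid is a bitwise majority vote; its outcome depends on the random initial codebook, which is what invites poor local minima. The training data and codebook size are illustrative, not the paper's.

```python
# Generalized Lloyd iteration for binary vectors under Hamming distortion
# (the baseline the paper improves on; an illustrative sketch only).
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def lloyd_binary(train, k, iters=20, seed=0):
    rng = random.Random(seed)
    codebook = rng.sample(train, k)        # random initial codewords
    for _ in range(iters):
        # Assignment step: map each vector to its nearest codeword.
        cells = [[] for _ in range(k)]
        for v in train:
            i = min(range(k), key=lambda j: hamming(v, codebook[j]))
            cells[i].append(v)
        # Centroid step: majority bit in each position of each cell.
        for i, cell in enumerate(cells):
            if cell:
                n = len(cell)
                codebook[i] = tuple(
                    1 if sum(v[d] for v in cell) * 2 > n else 0
                    for d in range(len(cell[0])))
    return codebook

def random_block(seed, n=16):
    rng = random.Random(seed)
    return tuple(rng.randrange(2) for _ in range(n))

# Quantize 200 random 4x4 binary blocks (flattened) with 8 codewords.
blocks = [random_block(i) for i in range(200)]
print(lloyd_binary(blocks, k=8)[:2])
```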
Citations: 0
Vector quantisation for wavelet based image compression
Pub Date: 1995-03-28 DOI: 10.1109/DCC.1995.515575
P. Fenwick, S. Woolford
Summary form only given. The present work arose from a need to transmit architectural line drawings over relatively slow communication links, such as telephone circuits. The images are mostly large line drawings, but with some shading. The application required good compression, incremental transmission, and excellent reproduction of sharp lines and fine detail such as text. The final system uses an initial wavelet transform stage (actually using a wave-packet transform), an adaptive vector quantiser stage, and a final post-compression stage. This paper emphasises the vector quantiser. Incremental transmission makes it desirable to use only actual data vectors in the database. The standard Linde-Buzo-Gray (LBG) algorithm was slow, taking 30-60 minutes per training set; it tended to use 'near-zero' vectors instead of 'true-zero' vectors, introducing undesirable texture into the reconstructed image; and quality could not be guaranteed, with some images producing artifacts at even low compression rates. The final vector quantiser uses new techniques with LRU maintenance of the database, updating for 'exact matches' to an existing vector and for 'near matches', using a combination of mean-square error and magnitude error. A conventional counting LRU mechanism is used, with different aging parameters for the two types of LRU update. The new vector quantiser requires about 10 seconds per image (compared with 30-60 minutes for LBG) and essentially eliminates the undesirable compression artifacts.
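The database maintenance just described might be sketched as below. This is a reconstruction under stated assumptions, not the authors' code: the near-match test uses a bare mean-square-error threshold where the paper combines mean-square and magnitude error, and the counting-LRU aging parameters are omitted.

```python
# Adaptive, LRU-maintained VQ codebook: exact matches and near matches
# refresh an entry's age; misses insert the actual data vector, evicting
# the least recently used codeword. Threshold value is an assumption.
from collections import OrderedDict

NEAR_THRESHOLD = 4.0        # assumed MSE cutoff for a "near match"

class LRUCodebook:
    def __init__(self, capacity):
        self.capacity = capacity
        self.book = OrderedDict()          # codeword (tuple) -> hit count

    @staticmethod
    def _mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    def quantize(self, vec):
        vec = tuple(vec)
        if vec in self.book:               # exact match: refresh age
            self.book.move_to_end(vec)
            self.book[vec] += 1
            return vec
        if self.book:                      # near match within threshold
            best = min(self.book, key=lambda c: self._mse(vec, c))
            if self._mse(vec, best) <= NEAR_THRESHOLD:
                self.book.move_to_end(best)
                self.book[best] += 1
                return best
        if len(self.book) >= self.capacity:
            self.book.popitem(last=False)  # evict least recently used
        self.book[vec] = 1                 # store the actual data vector
        return vec

cb = LRUCodebook(capacity=4)
print(cb.quantize([0, 0, 1, 1]))           # miss: inserted verbatim
print(cb.quantize([0, 0, 1, 2]))           # near match: reuses first entry
```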
Citations: 1
Self-quantized wavelet subtrees: a wavelet-based theory for fractal image compression
Pub Date: 1995-03-28 DOI: 10.1109/DCC.1995.515513
G. Davis
We describe an adaptive wavelet-based compression scheme for images. We decompose an image into a set of quantized wavelet coefficients and quantized wavelet subtrees. The vector codebook used for quantizing the subtrees is drawn from the image. Subtrees are quantized to contracted isometries of coarser scale subtrees. This codebook drawn from the contracted image is effective for quantizing locally smooth regions and locally straight edges. We prove that this self-quantization enables us to recover the fine scale wavelet coefficients of an image given its coarse scale coefficients. We show that this self-quantization algorithm is equivalent to a fractal image compression scheme when the wavelet basis is the Haar basis. The wavelet framework places fractal compression schemes in the context of existing wavelet subtree coding schemes. We obtain a simple convergence proof which strengthens existing fractal compression results considerably, derive an improved means of estimating the error incurred in decoding fractal compressed images, and describe a new reconstruction algorithm which requires O(N) operations for an N-pixel image.
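Because the equivalence holds in the Haar basis, a one-level 2D Haar decomposition is the natural building block; iterating it on the coarse band yields the subtree hierarchy that gets quantized. A minimal sketch:

```python
# One level of the 2D Haar transform: coarse band plus three detail bands.
# Iterating on `ll` builds the multi-scale pyramid whose subtrees the
# scheme self-quantizes. Illustrative sketch only.
import numpy as np

def haar2d_level(x):
    """Split an even-sized array into (LL, LH, HL, HH) subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0    # vertical averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0    # vertical differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0   # coarse approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0   # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar2d_level(img)
print(ll)        # 2x2 coarse band; recurse on it for deeper levels
```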
Citations: 44
VQ-based model design algorithms for text compression
Pub Date: 1995-03-28 DOI: 10.1109/DCC.1995.515544
S.P. Kim, X. Ginesta
Summary form only given. We propose a new approach for text compression where fast decoding is more desirable than fast encoding. An example of such a requirement is an information retrieval system. For efficient compression, high-order conditional probability information of the text data is analyzed and modeled using the vector quantization concept. Generally, vector quantization (VQ) has been used for lossy compression, where the input symbol is not exactly recovered at the decoder, so it does not seem applicable to lossless text compression problems. However, VQ can be applied to high-order conditional probability information so that the complexity of that information can be reduced. We represent the conditional probability information of a source in a tree structure where each node in the first level of the tree is associated with a first-order conditional probability and each second-level node with a second-order conditional probability. For good text compression performance, fourth- or higher-order conditional probability information must be used. It is essential that the model be simplified enough to be trained on a training set of reasonable size. We reduce the number of conditional probability tables and also discuss a semi-adaptive operating mode of the model, in which the tree is derived through training but the actual probability information at each node is obtained adaptively from the input data. The performance of the proposed algorithm is comparable to or exceeds that of other methods such as prediction by partial matching (PPM), while requiring less memory.
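A sketch of such a probability tree follows: level-1 nodes carry order-1 statistics and level-2 nodes order-2 statistics, gathered from training text. The vector quantization of the probability tables themselves, the paper's key step, is omitted here.

```python
# Tree of conditional probabilities: counts["a"] is the order-1 node for
# context 'a', counts["ab"] the order-2 node for context 'ab', and so on.
# Illustrative sketch; table quantization is not shown.
from collections import defaultdict

def build_context_tree(text, depth=2):
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(text)):
        for k in range(1, depth + 1):      # one node per context length
            if i >= k:
                counts[text[i - k:i]][text[i]] += 1
    return counts

tree = build_context_tree("abracadabra")
node = tree["a"]                           # order-1 node for context 'a'
total = sum(node.values())
print({s: n / total for s, n in node.items()})   # P(next | previous = 'a')
```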
Citations: 0
An efficient data compression hardware based on cellular automata
Pub Date: 1995-03-28 DOI: 10.1109/DCC.1995.515582
S. Bhattacharjee, J. Bhattacharya, P. Chaudhuri
Summary form only given. The paper reports a parallel scheme for text data compression. The scheme utilizes the simple, regular, modular and cascadable structure of cellular automata (CA) with a local interconnection structure that ideally suits VLSI technology. The state transition behaviour of a particular class of non-group CA, referred to as TPSA (two-predecessor single-attractor) CA, has been studied extensively, and the results are utilized to develop a parallel scheme for data compression. The state transition diagram of a TPSA CA generates a unique inverted binary tree. This inverted binary tree is a labeled tree whose leaves and internal nodes have a unique pattern generated by the CA in successive cycles. This unique structure can be viewed as a dictionary for text compression. In effect, the storage and retrieval of the dictionary in conventional data compression techniques is replaced by the autonomous-mode operation of the CA, which generates the dictionary dynamically with appropriate mapping of dictionary data to CA states wherever necessary.
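The two-predecessor single-attractor property is what forces the state-transition diagram into an inverted binary tree. The sketch below exhibits that structure on a simple stand-in, a 4-cell left-shift map over GF(2), since the summary does not give the paper's actual CA rules.

```python
# Build the state-transition diagram of a tiny linear machine with the
# TPSA shape: every reachable state has exactly two predecessors and all
# states funnel into the single attractor 0000. Stand-in rule, not the
# paper's CA.
N = 4

def step(state):
    return (state << 1) & ((1 << N) - 1)   # shift every cell one place left

predecessors = {}
for s in range(1 << N):                     # group states by successor
    predecessors.setdefault(step(s), []).append(s)

for s, preds in sorted(predecessors.items()):
    print(f"{s:04b} <- {[format(p, '04b') for p in preds]}")
# Each listed state has exactly two predecessors; repeated stepping sends
# all 16 states to 0000, so the diagram is an inverted binary tree.
```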
Citations: 3
Wavelet subband coding of computer simulation output using the A++ array class library
Pub Date: 1995-03-28 DOI: 10.1109/DCC.1995.515564
J. Bradley, C. Brislawn, D. Quinlan, H.D. Zhang, V. Nuri
Summary form only given. The work focuses on developing discrete wavelet transform/scalar quantization data compression software that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing. The approach is to use the A++/P++ array class library, a C++ software library originally designed for adaptive mesh PDE algorithms. Using a C++ class library has the advantage of allowing the scientific algorithm to be written in a high-level, platform-independent syntax; the machine-dependent optimization is hidden in low-level definitions of the library objects. Thus, the high-level code can be ported between different architectures with no rewriting of source code once the machine-dependent layers have been compiled. In particular, while "A++" refers to a serial library, the same source code can be linked to "P++" libraries, which contain platform-dependent parallelized code. The paper compares the overhead incurred in using A++ library operations with a serial implementation (written in C) when compressing the output of a global ocean circulation model running at the Los Alamos Advanced Computing Lab.
Citations: 2
Multiplication-free subband coding of color images
Pub Date: 1995-03-28 DOI: 10.1109/DCC.1995.515525
A. Docef, F. Kossentini, W. Chung, Mark J. T. Smith
This paper describes a very computationally efficient design algorithm for color image coding at low bit rates. The proposed algorithm is based on uniform tree-structured subband decomposition, multistage scalar quantization of the image subbands, and high order entropy coding. The main advantage of the algorithm is that no multiplications are required in both analysis/synthesis and encoding/decoding. This can lead to a simple hardware implementation of the subband coder, while maintaining a high level of performance.
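As a rough illustration of the multiplication-free idea, the sketch below implements a 2-tap Haar-like analysis/synthesis pair whose taps are ±1/2, so every operation reduces to shifts and adds; the paper's actual filter banks and multistage quantizers are not reproduced.

```python
# Two-band split with shift-and-add arithmetic only: taps of +-1/2 become
# arithmetic right shifts, so no multiplier is needed. Illustrative filter
# choice, not the paper's.

def analysis(x):
    """Split x into (lowband, highband) using shifts instead of multiplies."""
    low, high = [], []
    for i in range(0, len(x) - 1, 2):
        low.append((x[i] + x[i + 1]) >> 1)    # (x0 + x1) / 2 via shift
        high.append((x[i] - x[i + 1]) >> 1)   # (x0 - x1) / 2 via shift
    return low, high

def synthesis(low, high):
    """Inverse (exact whenever the pairwise sums are even)."""
    x = []
    for l, h in zip(low, high):
        x += [l + h, l - h]
    return x

sig = [10, 12, 8, 8, 40, 44, 6, 2]
lo, hi = analysis(sig)
print(lo, hi, synthesis(lo, hi))
```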
Citations: 5
An investigation of effective compression ratios for the proposed synchronous data compression protocol
Pub Date: 1995-03-28 DOI: 10.1109/DCC.1995.515597
R. R. Little
The Telecommunications Industry Association (TIA) Technical Committee TR-30 ad hoc Committee on Compression of Synchronous Data for DSUs has submitted three documents to TR30.1 as contributions which specify a standard data compression protocol. The proposed standard uses the Point-to-Point Protocol developed by the Internet Engineering Task Force (IETF) with certain extensions. Following a period for comment, the ad hoc committee planned to submit the draft standard document to TR30.1 for ballot at the January 30, 1995, meeting, with balloting expected to be completed in May.
Citations: 0
Subband coding methods for seismic data compression
Pub Date: 1995-03-28 DOI: 10.1109/DCC.1995.515557
A. Kiely, F. Pollara
Summary form only given. A typical seismic analysis scenario involves collection of data by an array of seismometers, transmission over a channel offering limited data rate, and storage of data for analysis. Seismic data analysis is performed for monitoring earthquakes and for planetary exploration, as in the planned study of seismic events on Mars. Seismic data compression systems are required to cope with the transmission of vast amounts of data over constrained channels and must be able to accurately reproduce occasional high-energy seismic events. We propose a compression algorithm that includes three stages: a decorrelation stage based on subband coding, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient block-adaptive arithmetic coding method. Adaptivity to the non-stationary behavior of the waveform is achieved by partitioning the data into blocks which are encoded separately. The compression ratio of the proposed scheme can be set to meet prescribed fidelity requirements, i.e. the waveform can be reproduced with sufficient fidelity for accurate interpretation and analysis. The distortions incurred by this compression scheme are currently being evaluated by several seismologists. Encoding is done with high efficiency due to the low overhead required to specify the parameters of the arithmetic encoder. Rate-distortion performance results on seismic waveforms are presented for various filter banks and numbers of levels of decomposition.
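The gain from block-adaptive entropy coding can be illustrated with estimated code lengths under a Laplace-smoothed adaptive model that restarts in every block, a simple stand-in for the block-adaptive arithmetic coder named above; block size, alphabet size, and data are assumptions.

```python
# Estimated cost (bits) of coding quantized samples with an adaptive model,
# restarted per block vs. run once globally. On non-stationary data the
# per-block restart tracks the local statistics. Illustrative sketch.
import math
from collections import Counter

def adaptive_cost(block, alphabet_size):
    """Bits to code `block`, updating a Laplace-smoothed model as we go."""
    counts, bits, seen = Counter(), 0.0, 0
    for s in block:
        p = (counts[s] + 1) / (seen + alphabet_size)   # Laplace estimator
        bits -= math.log2(p)
        counts[s] += 1
        seen += 1
    return bits

data = [0] * 60 + [3, 3, 2] * 20            # abrupt change in statistics
block_size, A = 32, 4
blocked = sum(adaptive_cost(data[i:i + block_size], A)
              for i in range(0, len(data), block_size))
print(f"{blocked:.1f} bits block-adaptive vs "
      f"{adaptive_cost(data, A):.1f} bits with one global model")
```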
Citations: 5