Proceedings DCC '95 Data Compression Conference: Latest Publications

Reduced-search fractal block coding of images
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515571
W. Kinsner, L. Wall
Summary form only given, as follows. Fractal-based data compression has attracted a great deal of interest since Barnsley's introduction of iterated function systems (IFS), a scheme for compactly representing intricate image structures. This paper discusses the incremental development of a block-oriented fractal coding technique for still images based on the work of Jacquin (1990). A brief overview of Jacquin's method is provided, and several of its features are discussed. In particular, the high order of computational complexity associated with the technique is addressed. This paper proposes that a neural network paradigm known as frequency-sensitive competitive learning (FSCL) be employed to assist the encoder in locating fractal self-similarity within a source image. A judicious choice of network size for optimal time performance is developed. Such an optimally chosen network reduces the time complexity of Jacquin's original encoding algorithm from O(n^4) to O(n^3). In addition, an efficient distance measure for comparing two image segments independent of mean pixel brightness and variance is developed. This measure, not provided by Jacquin, is essential for determining the fractal block transformations. An implementation of fractal block coding employing FSCL is presented, and coding results are compared with other popular image compression techniques. The paper also presents the structure of the associated software simulator.
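The summary does not give the brightness- and variance-independent distance explicitly. As a rough illustration only, one standard way to build such a measure is to normalize both blocks to zero mean and unit variance before a squared-error comparison; the Python sketch below shows that idea, not the paper's exact formula:

```python
import numpy as np

def normalized_distance(block_a, block_b):
    """Squared-error distance after removing mean brightness and variance.
    A conventional construction, not necessarily the measure developed
    in the paper."""
    a = block_a.astype(float)
    b = block_b.astype(float)
    a = (a - a.mean()) / (a.std() + 1e-12)  # zero mean, unit variance
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.sum((a - b) ** 2))
```

Under such a measure, a domain block matches a range block whenever the two differ only by an affine brightness change, which is exactly what the fractal block transforms can absorb.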
Citations: 1
CREW: Compression with Reversible Embedded Wavelets
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515511
A. Zandi, James D. Allen, E. L. Schwartz, M. Boliek
Compression with Reversible Embedded Wavelets (CREW) is a unified lossless and lossy continuous-tone still image compression system. It is wavelet-based, using a "reversible" approximation of one of the best wavelet filters. Reversible wavelets are linear filters with nonlinear rounding which implement exact-reconstruction systems with minimal-precision integer arithmetic. Wavelet coefficients are encoded in a bit-significance embedded order, allowing lossy compression by simply truncating the compressed data. For coding of coefficients, CREW uses a method similar to J. Shapiro's (1993) zero tree, and a completely novel method called Horizon. Horizon coding is a context-based coding that takes advantage of the spatial and spectral information available in the wavelet domain. CREW provides state-of-the-art lossless compression of medical images (greater than 8 bits deep), and lossy and lossless compression of 8-bit-deep images with a single system. CREW has reasonable software and hardware implementations.
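The abstract does not name the reversible filter. The classic S-transform below illustrates the general idea of a linear filter with nonlinear rounding that reconstructs exactly in integer arithmetic; CREW's actual filter is a different reversible approximation of a longer wavelet filter:

```python
def s_transform_pair(a, b):
    """Forward S-transform on a pair of integer samples."""
    low = (a + b) >> 1   # floor of the average (the nonlinear rounding)
    high = a - b         # difference
    return low, high

def inverse_s_transform_pair(low, high):
    """Exact inverse using only integer shifts and adds."""
    b = low - (high >> 1)
    a = b + high
    return a, b

assert inverse_s_transform_pair(*s_transform_pair(5, 2)) == (5, 2)
assert inverse_s_transform_pair(*s_transform_pair(2, 5)) == (2, 5)
```

Because the rounding loss in `low` is recoverable from `high`, a full decode remains lossless, while the bit-significance embedded ordering lets truncation degrade quality gracefully.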
Citations: 254
Hierarchical vector quantization of perceptually weighted block transforms
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515490
N. Chaddha, M. Vishwanath, P. Chou
This paper presents techniques for the design of generic block-transform-based vector quantizer encoders implemented by table lookups. In these table-lookup encoders, input vectors are used directly as addresses in code tables to choose the codewords. There is no need to perform the forward or reverse transforms; they are implemented in the tables. In order to preserve manageable table sizes for large-dimension VQs, we use hierarchical structures to quantize the vector successively in stages. Since both the encoder and decoder are implemented by table lookups, no arithmetic computations are required in the final system implementation. The algorithms are a novel combination of any generic block transform (DCT, Haar, WHT) and hierarchical vector quantization. They use perceptual weighting and subjective distortion measures in the design of the VQs. They are unique in that both the encoder and the decoder are implemented with only table lookups and are amenable to efficient software and hardware solutions.
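As a hedged sketch of the table-lookup idea, a hypothetical two-stage hierarchical encoder for 4-dimensional vectors of 8-bit pixels might look like the following. The table contents here are random placeholders; in the actual system they would be designed offline so that the block transform, perceptual weighting, and codebook are all folded into the lookups:

```python
import numpy as np

rng = np.random.default_rng(0)
# Each stage table fuses two 8-bit indices into one 8-bit index.
# Placeholder contents: real tables are designed, not random.
stage1 = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
stage2 = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)

def encode4(p0, p1, p2, p3):
    """Encode a 4-pixel vector with table lookups only: no transform
    or distortion arithmetic is performed at runtime."""
    i0 = stage1[p0, p1]    # first stage pairs adjacent pixels
    i1 = stage1[p2, p3]
    return stage2[i0, i1]  # second stage pairs first-stage indices
```

Each stage halves the number of indices, so an n-dimensional vector is encoded in n-1 lookups while the table size stays at 2^16 entries per stage instead of growing with the vector dimension.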
Citations: 26
An improved hierarchical lossless text compression algorithm
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515519
Chia-Yuan Teng, D. Neuhoff
Several improvements to the Bugajski-Russo N-gram algorithm are proposed. When applied to English text, these yield an algorithm of comparable complexity and approximately 10 to 30% lower rate than the commonly used COMPRESS algorithm.
Citations: 4
A massively parallel algorithm for vector quantization
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515604
K. S. Prashant, V. J. Mathews
Summary form only given, as follows. This work is concerned with the parallel implementation of a vector quantizer system on the MasPar MP-2, a single-instruction, multiple-data (SIMD) massively parallel computer. A vector quantizer (VQ) consists of two mappings: an encoder and a decoder. The encoder assigns to each input vector the index of the codevector that is closest to it. The decoder uses this index to reconstruct the signal. In our work, we used the Euclidean distortion measure to find the codevector closest to each input vector. The work described in this paper used a MasPar MP-2216 located at the Goddard Space Flight Center, Greenbelt, Maryland. This system has 16,384 processor elements (PEs) arranged in a rectangular array of 128 x 128 nodes. The parallel VQ algorithm is based on pipelining. The codevectors are distributed equally among the PEs in the first row of the PE array. These codevectors are then duplicated on the remaining processor rows. Traversing along any row of the PE array amounts to traversing through the entire codebook. After populating the PEs with the codevectors, the input vectors are presented to the first column of PEs. Each PE receives one vector at a time. The first set of data vectors is then compared with the group of codevectors in the first column. A data packet is associated with each input vector, containing the input vector itself, the minimum distortion between the input vector and the codevectors it has encountered so far, and the index of the codevector that accounted for that minimum. After updating the entries of the data packet, it is shifted one column to the right in the PE array. The next set of input vectors takes its place in the first column. The above process is repeated until all the input vectors are exhausted. The indices for the first set of data vectors are obtained after an appropriate number of shifts; the remaining indices are obtained in subsequent shifts. Results of extensive performance evaluations are presented in the full-length paper. These results suggest that our algorithm makes very efficient use of the parallel capabilities of the MasPar system. The existence of efficient algorithms such as the one presented in this paper should increase the usefulness and applicability of vector quantizers in Earth and space science applications.
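The search the pipeline performs can be simulated serially. The sketch below is illustrative Python that processes one packet at a time; on the MasPar, many packets are in flight at once, one per column, which is where the speedup comes from:

```python
import numpy as np

def pipelined_vq_search(data, codebook, cols):
    """Serial simulation of the column-pipelined VQ search.
    data: (num_vectors, dim); codebook: (num_codevectors, dim)."""
    slices = np.array_split(codebook, cols)     # codevectors held by each PE column
    offsets = np.cumsum([0] + [len(s) for s in slices[:-1]])
    indices = []
    for x in data:                              # one data packet per input vector
        best_dist, best_idx = np.inf, -1
        for part, off in zip(slices, offsets):  # one shift per column
            d = np.sum((part - x) ** 2, axis=1) # Euclidean distortion
            j = int(np.argmin(d))
            if d[j] < best_dist:                # update the packet's running minimum
                best_dist, best_idx = float(d[j]), off + j
        indices.append(best_idx)
    return indices
```

The running minimum and winning index carried along here are exactly the data-packet fields described in the abstract.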
Citations: 8
RD-OPT: an efficient algorithm for optimizing DCT quantization tables
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515523
Viresh Ratnakar, M. Livny
The Discrete Cosine Transform (DCT) is widely used in lossy image and video compression schemes such as JPEG and MPEG. In this paper we describe RD-OPT, an efficient algorithm for constructing DCT quantization tables with optimal rate-distortion tradeoffs for a given image. The algorithm uses DCT coefficient distribution statistics in a novel way and uses a dynamic programming strategy to produce optimal quantization tables over a wide range of rates and distortions. It can be used to compress images at any desired signal-to-noise ratio or compressed size.
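The paper's dynamic program traces the full rate-distortion curve; as a simplified, hedged stand-in for the per-coefficient-statistics idea, each of the 64 table entries can be chosen by a Lagrangian cost D + lambda*R estimated from observed coefficient distributions (lambda and the candidate step set below are illustrative, not from the paper):

```python
import numpy as np

def choose_quant_table(coeff_samples, candidate_steps, lam):
    """coeff_samples[k]: 1-D array of observed DCT coefficients at
    position k (k = 0..63). Returns a 64-entry step-size table.
    Lagrangian selection shown here; RD-OPT itself uses dynamic
    programming to meet an exact rate or size target."""
    table = np.zeros(64)
    for k in range(64):
        c = np.asarray(coeff_samples[k], dtype=float)
        best_cost = np.inf
        for q in candidate_steps:
            levels = np.round(c / q)
            dist = np.mean((c - levels * q) ** 2)   # MSE at this step size
            _, counts = np.unique(levels, return_counts=True)
            p = counts / counts.sum()
            rate = -(p * np.log2(p)).sum()          # entropy estimate, bits/coefficient
            cost = dist + lam * rate
            if cost < best_cost:
                best_cost, table[k] = cost, q
    return table
```

Because distortion and estimated rate are additive across the 64 positions, each entry can be optimized independently for a fixed lambda, and sweeping lambda traces out the tradeoff curve.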
Citations: 50
Adaptive wavelet subband coding for music compression
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515570
K. Ferens, W. Kinsner
This paper describes modelling of the coefficient domain in wavelet subbands of wideband audio signals for low-bit-rate, high-quality compression. The purpose is to develop models of the perception of wideband audio signals in the wavelet domain. The coefficients in the wavelet subbands are quantized with a scheme that adapts to the subband signal: the quantization step size for a particular subband is set inversely proportional to the subband energy, and then, within a subband, the energy-determined step size is modified to be inversely proportional to the amplitude probability density of the coefficient. The amplitude probability density of the coefficients in each subband is modelled using learned vector/scalar quantization employing frequency-sensitive competitive learning. The source data consists of 1-channel, 16-bit linear data sampled at 44.1 kHz from a CD containing major classical and pop music. Preliminary results show a bit rate of 150 kbps, rather than 705.6 kbps, with no perceptual loss in quality. The wavelet transform provides better results for representing multifractal signals, such as wideband audio, than do other standard transforms, such as the Fourier transform.
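A minimal sketch of the two-level step-size rule described above follows; the constant c, the bin count, and the histogram density model are illustrative stand-ins (the paper models the amplitude density with FSCL-trained vector/scalar quantizers, not a histogram):

```python
import numpy as np

def quantize_subband(coeffs, c=1.0, nbins=32):
    """Step size inversely proportional to subband energy, then modified
    per coefficient inversely to the amplitude probability density
    (smaller steps where amplitudes are more probable). A real codec
    must share the density model with the decoder."""
    coeffs = np.asarray(coeffs, dtype=float)
    energy = np.mean(coeffs ** 2)
    base_step = c / max(energy, 1e-12)          # energy-determined step
    hist, edges = np.histogram(np.abs(coeffs), bins=nbins, density=True)
    which = np.clip(np.digitize(np.abs(coeffs), edges) - 1, 0, nbins - 1)
    density = np.maximum(hist[which], 1e-12)
    step = base_step / density                  # density-modified step
    return np.round(coeffs / step) * step
```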
Citations: 4
Lossless compression by simulated annealing
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515562
R. Bowen-Wright, K. Sayood
Summary form only given. Linear predictive schemes are some of the simplest techniques in lossless image compression. In spite of their simplicity, they have proven to be surprisingly efficient. The current JPEG image coding standard uses linear predictive coders in its lossless mode. Predictive coding was originally used in lossy compression techniques such as differential pulse code modulation (DPCM). In these techniques the prediction error is quantized, and the quantized value is transmitted to the receiver. In order to reduce the quantization error it is necessary to reduce the prediction error variance; therefore, techniques for generating "optimum" predictor coefficients generally attempt to minimize some measure of the prediction error variance. In lossless compression the objective is instead to minimize the entropy of the prediction error, so techniques geared toward minimizing the variance of the prediction error may not be best suited to obtaining the predictor coefficients. We have attempted to obtain the predictor coefficients for lossless image compression by minimizing the first-order entropy of the prediction error, using simulated annealing to perform the minimization. One way to improve the performance of linear predictive techniques is to first remap the pixel values such that a histogram of the remapped image contains no "holes" in it.
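A minimal sketch of the search described above, assuming a three-neighbor (west, north, north-west) linear predictor; the starting coefficients, move size, and cooling schedule are illustrative choices, not the paper's:

```python
import numpy as np

def residual_entropy(img, w):
    """First-order entropy (bits/pixel) of the rounded prediction error."""
    pred = w[0] * img[1:, :-1] + w[1] * img[:-1, 1:] + w[2] * img[:-1, :-1]
    err = np.round(img[1:, 1:] - pred).astype(int)
    _, counts = np.unique(err, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def anneal_predictor(img, steps=2000, t0=1.0, alpha=0.999, seed=0):
    rng = np.random.default_rng(seed)
    w = np.array([0.5, 0.5, -0.25])          # illustrative starting point
    h = residual_entropy(img, w)
    t = t0
    for _ in range(steps):
        cand = w + rng.normal(0.0, 0.02, 3)  # small random perturbation
        hc = residual_entropy(img, cand)
        # accept improvements always, worse moves with Boltzmann probability
        if hc < h or rng.random() < np.exp((h - hc) / t):
            w, h = cand, hc
        t *= alpha                           # geometric cooling
    return w, h
```

Note that the objective is the residual entropy itself, not the error variance, which is the point of the paper: the two minima need not coincide.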
Citations: 5
Constraining the size of the instantaneous alphabet in trellis quantizers
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515492
M. F. Larsen, R. L. Frost
A method is developed for decreasing the computational complexity of a trellis quantizer (TQ) encoder. We begin by developing a rate-distortion theory under a constraint on the average instantaneous number of quanta considered. This constraint has practical importance: in a TQ, the average instantaneous number of quanta is exactly the average number of multiplies required at the encoder. The theory shows that if the conditional probability of each quantum is restricted to a finite region of support, the instantaneous number of quanta considered can be made quite small at little or no cost in SQNR performance. Simulations of TQs confirm this prediction. This reduction in complexity makes practical the use of model-based TQs (MTQs), which had previously been considered computationally unreasonable. For speech, performance gains of several dB SQNR over adaptive predictive schemes at a similar computational complexity are obtained using only a first-order MTQ.
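For concreteness, here is a toy trellis quantizer encoder (Viterbi search); the trellis structure is illustrative, not the paper's MTQ. The inner loop over `repro[s]` is where the multiplies occur, so restricting each state's reproduction list is exactly the "instantaneous alphabet" constraint the paper analyzes:

```python
import numpy as np

def tq_encode(samples, repro, next_state):
    """repro[s]: reproduction values selectable from state s (the
    instantaneous alphabet); next_state[s][k]: successor state for
    symbol k. Returns the minimum-distortion symbol sequence."""
    num_states = len(repro)
    cost = np.full(num_states, np.inf)
    cost[0] = 0.0                                   # start in state 0
    back = []
    for x in samples:
        new_cost = np.full(num_states, np.inf)
        ptr = np.zeros((num_states, 2), dtype=int)  # (previous state, symbol)
        for s in range(num_states):
            if not np.isfinite(cost[s]):
                continue
            for k, r in enumerate(repro[s]):        # one multiply per quantum considered
                t = next_state[s][k]
                c = cost[s] + (x - r) ** 2
                if c < new_cost[t]:
                    new_cost[t] = c
                    ptr[t] = (s, k)
        cost = new_cost
        back.append(ptr)
    s = int(np.argmin(cost))                        # trace the best path backwards
    symbols = []
    for ptr in reversed(back):
        s, k = ptr[s]
        symbols.append(int(k))
    return symbols[::-1]
```

The encoder's work per sample is the total length of the `repro` lists it visits, which is why bounding the average instantaneous alphabet size directly bounds the multiply count.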
Citations: 0
Compression of hyperspectral imagery using hybrid DPCM/DCT and entropy-constrained trellis coded quantization
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515522
G. Abousleman
A system is presented for compression of hyperspectral imagery which utilizes trellis coded quantization (TCQ). Specifically, DPCM is used to spectrally decorrelate the hyperspectral data, while a 2-D discrete cosine transform (DCT) coding scheme is used for spatial decorrelation. Entropy-constrained codebooks are designed using a modified version of the generalized Lloyd algorithm. This coder achieves compression ratios of greater than 70:1 with average PSNR of the coded hyperspectral sequence exceeding 40.0 dB.
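A hedged sketch of the decorrelation front end follows; a simple first-difference predictor stands in for the paper's DPCM predictor, and the TCQ stage and entropy-constrained codebooks are omitted entirely:

```python
import numpy as np
from scipy.fft import dctn

def decorrelate(cube):
    """cube: (bands, rows, cols) hyperspectral image.
    Returns per-band 2-D DCT coefficients of the spectral DPCM residual."""
    residual = np.empty_like(cube, dtype=float)
    residual[0] = cube[0]                    # first band coded directly
    residual[1:] = cube[1:] - cube[:-1]      # predict each band from the previous one
    return np.stack([dctn(band, norm='ortho') for band in residual])
```

The spectral difference removes the strong band-to-band correlation, leaving the 2-D DCT to handle the remaining spatial redundancy before quantization.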
Citations: 17