
Proceedings DCC '95 Data Compression Conference: Latest Publications

Fast subband coder for telephone quality audio
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515581
H. Raittinen, K. Kaski
Summary form only given. A simple and fast audio signal compression method that uses subband filtering and quantization is presented. The method is suitable for compression of telephone quality audio signals. It can compress four CCITT 64 kbit/s PCM A-law or μ-law coded speech channels into one channel with sufficient sound quality for telephone use. A straightforward implementation of the compression and decompression methods has the following steps. First the incoming speech signal is converted from a μ-law or A-law coded signal into a 16-bit linear PCM signal and then divided into 16 bands of equal bandwidth by using the analysis filter bank. Then the sampling frequencies of the frequency channels are decreased by a factor of 16. After this decimation the subband samples are fed to a fixed quantizer. Finally the quantized subband values and the side information needed for decoding are packed into a data stream and sent to the receiver.
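A minimal sketch of the pipeline the abstract describes (not the authors' implementation): split an 8 kHz speech signal into 16 equal-width subbands with an FIR analysis bank, decimate each band by 16, and apply a fixed uniform quantizer. The filter length, band edges, and 4-bit quantizer below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import firwin, lfilter

FS = 8000          # telephone-quality sampling rate
N_BANDS = 16
TAPS = 65          # assumed FIR length per analysis filter (odd, so a highpass is allowed)

def analysis_bank():
    """Build 16 equal-bandwidth FIR filters covering 0..FS/2."""
    edges = np.linspace(0, FS / 2, N_BANDS + 1)
    banks = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        if lo == 0:
            h = firwin(TAPS, hi, fs=FS)                      # lowest band: lowpass
        elif hi >= FS / 2:
            h = firwin(TAPS, lo, fs=FS, pass_zero=False)     # highest band: highpass
        else:
            h = firwin(TAPS, [lo, hi], fs=FS, pass_zero=False)
        banks.append(h)
    return banks

def encode(pcm16: np.ndarray, bits: int = 4):
    """Filter, decimate by 16, and quantize each subband with a fixed step."""
    step = 2 ** (16 - bits)                          # fixed uniform quantizer step
    subbands = []
    for h in analysis_bank():
        band = lfilter(h, 1.0, pcm16)[::N_BANDS]     # critical decimation by 16
        subbands.append(np.round(band / step).astype(np.int16))
    return subbands

if __name__ == "__main__":
    x = (np.sin(2 * np.pi * 440 * np.arange(FS) / FS) * 2 ** 14).astype(np.int16)
    coded = encode(x)
    print([len(b) for b in coded])                   # 16 bands, each 1/16 the length
```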
Citations: 0
Algorithm evaluation for synchronous data compression
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515554
M.W. Maier
Summary form only given. As part of an industry standardization effort, we have evaluated compression algorithms for throughput enhancement in a synchronous communication environment. Synchronous data compression systems are link layer compressors used between digital transmission devices in internetworks to increase effective throughput. Compression is capable of speeding up such links, but achievable performance is affected by the interaction of the algorithm, the networking protocols, and implementation details. The compression environment differs from traditional file compression in inducing a trade-off between compression ratio, compression time, and the performance metric (network throughput). In addition, other parameters and behaviors are introduced, including robustness to data retransmission and multiple interleaved streams. Specifically, we have evaluated the following issues through both synchronous queuing and direct network simulation: (1) relative algorithm capability; (2) throughput improvement for various algorithms as a function of compression processor capability; (3) the impact of multiple compression contexts; (4) protocol interactions; and (5) specialized algorithms.
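A back-of-the-envelope sketch of the central trade-off the abstract describes, as my own illustration rather than the paper's simulator: on a synchronous link, deliverable throughput is capped both by the raw link rate times the compression ratio and by how fast the compressor itself can consume data.

```python
def effective_throughput(link_bps: float, ratio: float, compressor_bps: float) -> float:
    """Deliverable payload bits/s for a link-layer compressor.

    link_bps        -- raw synchronous link rate (bits/s)
    ratio           -- compression ratio (uncompressed / compressed), >= 1
    compressor_bps  -- rate at which the compressor can consume input (bits/s)
    """
    return min(link_bps * ratio, compressor_bps)

if __name__ == "__main__":
    # A slow compression processor can erase the benefit of a stronger algorithm:
    print(effective_throughput(64_000, ratio=3.0, compressor_bps=500_000))  # 192000
    print(effective_throughput(64_000, ratio=3.0, compressor_bps=100_000))  # 100000
```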
Citations: 1
Histogram analysis of JPEG compressed images as an aid in image deblocking
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515535
M. Datcu, G. Schwarz, K. Schmidt, C. Reck
Summary form only given, substantially as follows. Transform coded images suffer from specific image degradations. In the case of standard JPEG compression/decompression the image quality losses are known to be blocking effects resulting from mean value discontinuities along the 8×8 pixel block boundaries as well as ringing artifacts due to the limited precision of the reconstruction from linear combinations of quantized or discarded basis functions. The most evident consequence of JPEG compression is the fragmentation of image histograms, mainly caused by blocking in low-activity image subareas. The histogram of the image shows spikes that contain most of the signal amplitudes, while the other values are distributed over the remaining permissible levels. As a measure of the blocking effect, the blocking factor is defined as the ratio of the spike area to the total area of the image histogram. This method represents a promising approach to the control of locally adaptive image deblocking when the necessary enhancement depends on the local image characteristics. The blocking factor is easy to compute and provides a direct measure of the local image degradation due to blocking. A new deblocking algorithm is proposed.
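A minimal sketch of the blocking factor the abstract defines: the share of the grey-level histogram that sits in isolated spikes. The spike test used here (a bin counting more than SPIKE_GAIN times the mean of its two neighbours) is an illustrative assumption, not the authors' criterion.

```python
import numpy as np

SPIKE_GAIN = 4.0   # assumed spike-detection threshold

def blocking_factor(img: np.ndarray) -> float:
    """img: 2-D uint8 array of a JPEG-decoded grey-scale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    spikes = 0.0
    for g in range(1, 255):
        neighbour_mean = 0.5 * (hist[g - 1] + hist[g + 1]) + 1e-9
        if hist[g] > SPIKE_GAIN * neighbour_mean:     # isolated spike bin
            spikes += hist[g]
    return spikes / hist.sum()        # 0 = smooth histogram, 1 = all mass in spikes

if __name__ == "__main__":
    flat = (np.arange(64 * 64) % 256).reshape(64, 64).astype(np.uint8)
    blocky = np.full((64, 64), 128, dtype=np.uint8)   # one huge spike
    print(blocking_factor(flat), blocking_factor(blocky))
```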
Citations: 2
Lossy compression of clustered-dot halftones using sub-cell prediction
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515501
R. A. V. Kam, R. Gray
We propose a predictive coding algorithm for lossy compression of digital halftones produced by clustered-dot dithering. In our scheme, the predictor estimates the size and shape of each halftone dot (cluster) based on the characteristics of neighboring clusters. The prediction template depends on which portion, or sub-cell, of the dithering matrix produced the dot. Information loss is permitted through imperfect representation of the prediction residuals. For some clusters, no residual is transmitted at all, and for others, information about the spatial locations of bit errors is omitted. Specifying only the number of bit errors in the residual is enough to allow the decoder to form an excellent approximation to the original dot structure. We also propose a simple alternative to the ordinary Hamming distance for computing distortion in bi-level images. Experiments with 1024×1024 images, 8×8 dithering cells, and 600 dpi printing have shown that the coding algorithm maintains good image quality while achieving rates below 0.1 bits per pixel.
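A small sketch of the lossy residual idea in the abstract: for each halftone cluster the encoder compares the predicted dot bitmap with the actual one and records only how many pixels differ, not where. The cluster geometry and the toy prediction below are stand-ins; the paper's sub-cell predictor is not reproduced.

```python
import numpy as np

def encode_cluster(actual: np.ndarray, predicted: np.ndarray) -> int:
    """Return the number of bit errors between the predicted and actual dot.

    Both arguments are boolean arrays of the same cluster-sized shape.
    The spatial positions of the errors are deliberately not coded.
    """
    return int(np.count_nonzero(actual ^ predicted))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    actual = rng.random((8, 8)) < 0.4        # toy 8x8 clustered dot
    predicted = rng.random((8, 8)) < 0.4     # toy prediction from neighbours
    print(encode_cluster(actual, predicted), "bit errors in this cluster")
```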
Citations: 6
Application specific hardware compression of ray-casting data
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515594
G. Kedem, T. Alexander
Summary form only given, as follows. Ray-casting, that is, calculating the intersections of a large array of lines with a solid object, is a well-known technique that is central to many algorithms useful in solid modeling. Ray-casting is a compact and elegant way of displaying and calculating the geometrical properties of 3-D objects. The Ray-Casting Engine RCE-1.5 is an application-specific massively parallel computer dedicated to ray-casting 3D objects. We present an application-specific, hardware-oriented data compression algorithm. We developed simple yet powerful data compression hardware specifically tailored to compressing ray-files, the data structure internal to the RCE-1.5. We have used the compression hardware to meet performance goals while reducing the cost of building the RCE-1.5. We had to balance compression performance on the one hand with real-time constraints, development time constraints, and hardware costs on the other. With a modest amount of compression hardware we were able to more than double the internal and external data transfer rates. In addition we more than doubled the effective internal memory buffer size. The increased throughput enabled us to use (slow but inexpensive) DRAM rather than (faster but expensive) SRAM, dramatically reducing the overall system cost. This is but one example where judicious use of data compression techniques can dramatically enhance system performance while at the same time reducing the system cost.
Citations: 0
Lattice-based designs of direct sum codebooks for vector quantization
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515546
C. Barrett, R. L. Frost
Summary form only given. A direct sum codebook (DSC) has the potential to reduce both the memory and computational costs of vector quantization. A DSC consists of several sets, or stages, of vectors. An equivalent code vector is formed from the direct sum of one vector from each stage. Such a structure, with p stages containing m vectors each, has m^p equivalent code vectors while requiring the storage of only mp vectors. DSC quantizers are not only memory efficient, they also have a naturally simple encoding algorithm, called a residual encoding. A residual encoding uses the nearest neighbor at each stage, requiring comparison with mp vectors rather than all m^p possible combinations. Unfortunately, this encoding algorithm is suboptimal because of a problem called entanglement. Entanglement occurs when a different vector from that obtained by a residual encoding is actually a better fit for the input vector. An optimal encoding can be obtained by an exhaustive search, but this sacrifices the savings in computation. Lattice-based DSC quantizers are designed to be optimal under a residual encoding by avoiding entanglement. Successive stages of the codebook produce finer and finer partitions of the space, resulting in equivalent code vectors which are points in a truncated lattice. After the initial design, the codebook can be optimized for a given source, increasing performance beyond that of a simple lattice vector quantizer. Experimental results show that DSC quantizers based on cubical lattices perform as well as exhaustive search quantizers on a scalar source.
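A minimal sketch of the residual (stage-by-stage greedy) encoding of a direct sum codebook: at every stage the nearest stage vector to the current residual is chosen and subtracted, so only mp comparisons are needed instead of searching all m^p equivalent code vectors. The random codebook below is illustrative; the paper designs the stages from lattices to avoid entanglement.

```python
import numpy as np

def residual_encode(x: np.ndarray, stages: list[np.ndarray]):
    """stages: list of p arrays, each of shape (m, dim).  Returns stage indices and the final residual."""
    residual = x.astype(float).copy()
    indices = []
    for stage in stages:
        dists = np.sum((stage - residual) ** 2, axis=1)   # nearest neighbour in this stage
        j = int(np.argmin(dists))
        indices.append(j)
        residual -= stage[j]
    return indices, residual

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dim, m, p = 4, 8, 3
    stages = [rng.normal(scale=1.0 / (s + 1), size=(m, dim)) for s in range(p)]
    x = rng.normal(size=dim)
    idx, err = residual_encode(x, stages)
    reconstruction = sum(stages[s][idx[s]] for s in range(p))  # the equivalent code vector
    print(idx, float(np.sum(err ** 2)))
```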
Citations: 1
Near optimal compression with respect to a static dictionary on a practical massively parallel architecture
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515507
D. Belinskaya, S. Agostino, J. Storer
We consider sublinear massively parallel algorithms for compressing text with respect to a static dictionary. Algorithms for the PRAM model can do this optimally in O(m + log(n)) time with n processors, where m is the length of the longest entry in the dictionary and n is the length of the input string. We consider what is perhaps the most practical model of massively parallel computation imaginable: a linear array of processors where each processor is connected only to its left and right neighbors. We present an algorithm which, in time O(km + m log(m)) with n/(km) processors, is guaranteed to be within a factor of (k+1)/k of optimal, for any integer k ≥ 1. We also present experiments indicating that performance may be even better in practice.
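A sketch of the underlying problem in its simplest sequential form: parse the input greedily into the longest matching entries of a static dictionary. This is only a single-processor baseline; the paper's contribution, doing the same job near-optimally on a linear array of processors, is not shown here.

```python
def greedy_parse(text: str, dictionary: set[str], max_len: int) -> list[str]:
    """Cover `text` with dictionary phrases, taking the longest match at each position.

    Assumes every single character is itself in the dictionary, so a parse always exists.
    """
    out, i = [], 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            phrase = text[i:i + length]
            if phrase in dictionary:
                out.append(phrase)
                i += length
                break
    return out

if __name__ == "__main__":
    d = {"a", "b", "r", "c", "d", "ab", "ra", "abra", "cad"}
    print(greedy_parse("abracadabra", d, max_len=4))
    # ['abra', 'cad', 'abra'] -> 3 phrases instead of 11 single characters
```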
Citations: 29
Context coding of parse trees
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515552
J. Tarhio
Summary form only given. General-purpose text compression normally works at the lexical level, assuming that the symbols to be encoded are independent or depend on preceding symbols within a fixed distance. Traditionally such syntactical models have focused on compression of source programs, but other areas are also feasible. The compression of a parse tree is an important and challenging part of syntactical modeling. A parse tree can be represented by a left parse, which is the sequence of productions applied in preorder. A left parse can be encoded efficiently with arithmetic coding using counts of the production alternatives of each nonterminal. We introduce a more refined method which reduces the size of a compressed tree. A blending scheme, PPM (prediction by partial matching), produces very good compression on text files. In PPM, adaptive models of several context lengths are maintained and blended during processing. The k preceding symbols of the symbol to be encoded form the context of order k. We apply the PPM technique to a left parse so that we use contexts of nodes instead of contexts consisting of preceding symbols in the sequence. We tested our approach with parse trees of Pascal programs. Our method gave on average 20 percent better compression than the standard method based on counts of production alternatives of nonterminals. In our model, an item of the context is a pair (production, branch). The form of the item seems to be crucial. We tested three other variations for an item: production, nonterminal, and (nonterminal, branch), but all three produced clearly worse results.
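A sketch of the baseline scheme the abstract measures against: each production in the left parse is coded with an adaptive model over the alternatives of its left-hand-side nonterminal, and the ideal arithmetic-coding cost is the sum of -log2 of the model probabilities. The toy grammar and parse below are illustrative; the PPM-style node contexts of the paper are not implemented here.

```python
import math
from collections import defaultdict

def left_parse_cost(parse, alternatives):
    """parse: sequence of (nonterminal, production_id) in preorder.
    alternatives: dict nonterminal -> number of its productions.
    Returns the ideal code length in bits, using Laplace-smoothed adaptive counts."""
    counts = defaultdict(lambda: defaultdict(int))
    bits = 0.0
    for nt, prod in parse:
        n_alt = alternatives[nt]
        total = sum(counts[nt].values()) + n_alt          # +1 pseudo-count per alternative
        p = (counts[nt][prod] + 1) / total
        bits += -math.log2(p)
        counts[nt][prod] += 1
    return bits

if __name__ == "__main__":
    alternatives = {"E": 2, "T": 2}                       # E -> E+T | T ; T -> id | (E)
    parse = [("E", 0), ("E", 1), ("T", 0), ("T", 0)]      # preorder left parse of "id+id"
    print(round(left_parse_cost(parse, alternatives), 2), "bits")
```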
Citations: 10
A universal compressed data format for foreign file systems
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515539
T. Kawashima, T. Igarashi, R. Hines, M. Ogawa
The authors have proposed a compressed data format that can be used with any foreign file system and that allows users to access data randomly in a compressed file without entirely decompressing it. Since a compressed file in this format includes all information regarding its compression, a major advantage is that any file system can treat compressed files as ordinary files, even if the file system itself has no compression capability.
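A generic sketch of the idea, not the authors' actual format: the file is compressed in fixed-size blocks and carries its own index of compressed-block offsets, so any byte range can be read by decompressing only the blocks that cover it. The header layout, block size, and use of zlib are illustrative assumptions.

```python
import io
import struct
import zlib

BLOCK = 64 * 1024          # uncompressed bytes per block (assumed)

def pack(data: bytes) -> bytes:
    blocks = [zlib.compress(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]
    out = io.BytesIO()
    out.write(struct.pack("<QI", len(data), len(blocks)))   # total size, block count
    offset = 0
    for b in blocks:                                         # index of block offsets
        out.write(struct.pack("<I", offset))
        offset += len(b)
    for b in blocks:
        out.write(b)
    return out.getvalue()

def read_range(packed: bytes, start: int, length: int) -> bytes:
    total, n_blocks = struct.unpack_from("<QI", packed, 0)
    index = list(struct.unpack_from(f"<{n_blocks}I", packed, 12))
    data_base = 12 + 4 * n_blocks
    index.append(len(packed) - data_base)                    # sentinel end offset
    out = bytearray()
    for blk in range(start // BLOCK, (start + length - 1) // BLOCK + 1):
        raw = packed[data_base + index[blk]: data_base + index[blk + 1]]
        out += zlib.decompress(raw)                           # only the covering blocks
    skip = start - (start // BLOCK) * BLOCK
    return bytes(out[skip: skip + length])

if __name__ == "__main__":
    original = bytes(range(256)) * 1024                       # 256 KiB of test data
    packed = pack(original)
    assert read_range(packed, 100_000, 50) == original[100_000:100_050]
    print(len(original), "->", len(packed), "bytes packed")
```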
Citations: 0
JPEG optimization using an entropy-constrained quantization framework
Pub Date : 1995-03-28 DOI: 10.1109/DCC.1995.515524
M. Crouse, K. Ramchandran
Previous works, including adaptive quantizer selection and adaptive coefficient thresholding, have addressed the optimization of a baseline-decodable JPEG coder in a rate-distortion (R-D) sense. In this work, by developing an entropy-constrained quantization framework, we show that these previous works do not fully realize the attainable coding gain, and we then formulate a computationally efficient way that attempts to fully realize this gain for baseline-JPEG-decodable systems. Interestingly, we find that the gains obtained using the previous algorithms are almost additive. The framework involves viewing a scalar-quantized system with fixed quantizers as a special type of vector quantizer (VQ), and then using techniques akin to entropy-constrained vector quantization (ECVQ) to optimize the system. In the JPEG case, a computationally efficient algorithm can be derived, without training, by jointly performing coefficient thresholding, quantizer selection, and Huffman table customization, all compatible with the baseline JPEG syntax. Our algorithm achieves significant R-D improvement over standard JPEG (about 2 dB for typical images) with performance comparable to that of more complex "state-of-the-art" coders. For example, for the Lenna image coded at 1.0 bits per pixel, our JPEG-compatible coder achieves a PSNR of 39.6 dB, which even slightly exceeds the published performance of Shapiro's wavelet coder. Although PSNR does not guarantee subjective performance, our algorithm can be applied with a flexible range of visually-based distortion metrics.
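A toy sketch of the rate-distortion thresholding decision this line of work builds on: a nonzero quantized DCT coefficient is dropped whenever the squared error it would cost is smaller than lambda times the bits it would save. The per-coefficient bit estimate here is a crude stand-in; the paper optimizes thresholding, quantizer selection, and Huffman tables jointly against the real baseline-JPEG entropy coder.

```python
import math

def threshold_block(quantized, qtable, lam, bits_per_nonzero=4.0):
    """quantized, qtable: length-64 lists (one 8x8 block in zigzag order).
    Returns the thresholded coefficients; lam trades distortion for rate."""
    out = []
    for q, step in zip(quantized, qtable):
        if q == 0:
            out.append(0)
            continue
        distortion_cost = (q * step) ** 2                        # error added by zeroing it
        rate_saving = bits_per_nonzero + math.log2(abs(q) + 1)   # crude bits-saved estimate
        out.append(0 if distortion_cost < lam * rate_saving else q)
    return out

if __name__ == "__main__":
    qtable = [16] * 64
    block = [12, 3, -1, 1, 0, -1] + [0] * 58
    print(threshold_block(block, qtable, lam=300.0))   # small coefficients get dropped
```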
Citations: 15