
Latest publications from [Proceedings] DCC `93: Data Compression Conference

Codes with monotonic codeword lengths
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253145
J. Abrahams
The author studies minimum average codeword length coding under the constraint that the codewords are monotonically non-decreasing in length. She derives bounds on the average length of an optimal 'monotonic' code, and gives sufficient conditions such that algorithms for optimal alphabetic codes can be used to find the optimal 'monotonic' code.
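A minimal sketch of the objects involved (not from the paper): given candidate codeword lengths in the prescribed symbol order, check the monotonic non-decreasing constraint and Kraft's inequality, and compute the average length being minimized. The example probabilities and lengths are assumed for illustration.

```python
def is_monotonic(lengths):
    """Lengths must be non-decreasing in the given symbol order."""
    return all(a <= b for a, b in zip(lengths, lengths[1:]))

def satisfies_kraft(lengths, radix=2):
    """Kraft's inequality: sum(radix**-l) <= 1 guarantees a prefix code exists."""
    return sum(radix ** -l for l in lengths) <= 1.0

def average_length(probs, lengths):
    """Expected codeword length sum(p_i * l_i), the quantity being minimized."""
    return sum(p * l for p, l in zip(probs, lengths))

if __name__ == "__main__":
    probs = [0.4, 0.3, 0.2, 0.1]   # assumed example distribution
    lengths = [1, 2, 3, 3]         # candidate monotonic code lengths
    assert is_monotonic(lengths) and satisfies_kraft(lengths)
    print("average length:", average_length(probs, lengths))
```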
Citations: 2
An embedded hierarchical image coder using zerotrees of wavelet coefficients
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253128
J. M. Shapiro
This paper describes a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance. A fully embedded code represents a sequence of binary decisions that distinguish an image from the 'null' image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. The algorithm consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images, but requires absolutely no training, no pre-stored tables or codebooks, and no prior knowledge of the image source. It is based on four key concepts: (1) wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression achieved via adaptive arithmetic coding.
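The successive-approximation idea can be sketched as follows; this is an illustrative reduction assuming a NumPy array of wavelet coefficients, and it omits the zerotree data structure and the arithmetic coder that the actual algorithm relies on.

```python
import numpy as np

def successive_approximation_bits(coeffs, passes=4):
    """Illustrative sketch of embedded coding by successive approximation:
    emit one significance map per pass against a halving threshold, so the
    most important information appears first.  (The real EZW additionally
    exploits zerotrees of insignificant coefficients across scales and
    entropy-codes the symbols; none of that is modeled here.)"""
    c = np.asarray(coeffs, dtype=float)
    threshold = 2.0 ** np.floor(np.log2(np.max(np.abs(c))))
    stream = []
    for _ in range(passes):
        significant = (np.abs(c) >= threshold)
        stream.append(significant.astype(int).ravel().tolist())
        threshold /= 2.0
    return stream  # truncating this list at any point still yields a coarse description

coeffs = np.array([[63, -34, 49, 10],
                   [-31, 23, 14, -13],
                   [15, 14, 3, -12],
                   [-9, -7, -14, 8]])
for i, bits in enumerate(successive_approximation_bits(coeffs)):
    print("pass", i, bits)
```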
Citations: 139
On-line adaptive vector quantization with variable size codebook entries
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253147
C. Constantinescu, J. Storer
A new image compression algorithm employs some of the most successful approaches to adaptive lossless compression to perform adaptive on-line (single-pass) vector quantization. The authors have tested this algorithm on a host of standard test images (e.g. gray-scale magazine images, medical images, space and scientific images, fingerprint images, and handwriting images); with no prior knowledge of the data and no training, the compression achieved for a given fidelity typically equals or exceeds that of the JPEG standard. The only information that must be specified in advance is the fidelity criterion.
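A hedged sketch of single-pass adaptive vector quantization under a fidelity criterion is shown below; the growth policy and the fixed block size are assumptions for illustration and do not reproduce the authors' variable-size codebook entries.

```python
import numpy as np

def online_adaptive_vq(blocks, max_distortion):
    """Sketch of single-pass adaptive VQ: each block is coded with the index of
    the closest existing codeword if it meets the fidelity criterion, otherwise
    the block is sent literally and added to the codebook.  The real algorithm
    uses variable-size codebook entries and smarter growth policies."""
    codebook, output = [], []
    for block in blocks:
        if codebook:
            dists = [np.mean((block - c) ** 2) for c in codebook]
            best = int(np.argmin(dists))
            if dists[best] <= max_distortion:
                output.append(("index", best))
                continue
        output.append(("literal", block))
        codebook.append(block)            # grow the codebook on-line
    return output, codebook

rng = np.random.default_rng(0)
blocks = [rng.integers(0, 4, size=(2, 2)).astype(float) for _ in range(8)]
out, cb = online_adaptive_vq(blocks, max_distortion=0.5)
print(len(cb), "codewords,", sum(1 for t, _ in out if t == "index"), "blocks coded by index")
```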
Citations: 22
Generalized fractal transforms: complexity issues
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253124
D. Monro
The Bath Fractal Transform (BFT) defines a strategy for obtaining least squares fractal approximations and can be implemented using functions of varying complexity. The approximation method used in ITT-coding is itself the zero-order instance of the BFT. Some of the complexity options available are explored by combining various orders of BFT approximation with various degrees and types of searching. This may be regarded either as the inclusion of searching with the BFT or as a generalization of the matching criterion of ITT-coding. This is considered from the point of view of the cost-fidelity trade-offs incurred, and the implications for practical application to multimedia information retrieval systems and real-time video are discussed.
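As an illustration of least-squares fractal approximation in its simplest (zero-order, affine) form, the sketch below fits a scale and offset mapping a domain block onto a range block; the function name is hypothetical and the BFT's higher-order terms are not modeled.

```python
import numpy as np

def least_squares_collage(range_block, domain_block):
    """Least-squares fit of a scale s and offset o so that
    s * domain_block + o approximates range_block; the squared error of the
    fit is what a searching fractal coder compares across candidate domain
    blocks.  (Illustrative only -- the BFT generalizes this idea to
    higher-order approximating functions.)"""
    d = domain_block.ravel()
    r = range_block.ravel()
    A = np.column_stack([d, np.ones_like(d)])
    (s, o), _, _, _ = np.linalg.lstsq(A, r, rcond=None)
    err = np.sum((A @ np.array([s, o]) - r) ** 2)
    return s, o, err

rng = np.random.default_rng(1)
domain = rng.random((4, 4))
range_blk = 0.6 * domain + 0.2 + 0.01 * rng.random((4, 4))   # nearly affine-related
print(least_squares_collage(range_blk, domain))
```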
Citations: 18
Ziv-Lempel encoding with multi-bit flags
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253136
P. Fenwick
LZ77 and, more recently, LZSS text compression use one-bit flags to identify a following pointer or literal. This paper investigates the use of multi-bit flags to allow a greater variety of entities in the compressed data stream. Two approaches are described. The first uses flags of 2 or 3 bits with operands constrained to be 1, 2 or 3 bytes long. The other codes entirely in units of 2 or 3 bits (instead of the more usual single bits). Both methods are shown to yield compressors of good performance.
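The flag mechanism can be sketched as below; the four token classes and field widths are invented for illustration and are not Fenwick's actual choices.

```python
# Sketch of the flag idea only: a 2-bit flag in front of each token selects one
# of up to four entity types, instead of LZSS's single literal/pointer bit.
FLAG_LITERAL = 0b00      # one literal byte follows
FLAG_SHORT_COPY = 0b01   # 1-byte offset, small length
FLAG_LONG_COPY = 0b10    # 2-byte offset, larger length
FLAG_END = 0b11          # end of stream

def write_bits(bits, value, width):
    for i in reversed(range(width)):
        bits.append((value >> i) & 1)

def encode_tokens(tokens):
    """tokens: list of ('lit', byte) or ('copy', offset, length) tuples."""
    bits = []
    for tok in tokens:
        if tok[0] == 'lit':
            write_bits(bits, FLAG_LITERAL, 2)
            write_bits(bits, tok[1], 8)
        else:
            _, offset, length = tok
            if offset < 256:
                write_bits(bits, FLAG_SHORT_COPY, 2)
                write_bits(bits, offset, 8)
                write_bits(bits, length, 4)
            else:
                write_bits(bits, FLAG_LONG_COPY, 2)
                write_bits(bits, offset, 16)
                write_bits(bits, length, 8)
    write_bits(bits, FLAG_END, 2)
    return bits

tokens = [('lit', ord('a')), ('copy', 10, 5), ('copy', 1000, 40)]
print(len(encode_tokens(tokens)), "bits")
```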
Citations: 12
Full-frame compression of tomographic images using the discrete Fourier transform
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253130
J. Villasenor
The unacceptability of block artifacts in medical image data compression has led to systems employing full-frame discrete cosine transform (DCT) compression. Although the DCT is the optimum fast transform when block coding is used, it is outperformed by the discrete Fourier transform (DFT) and discrete Hartley transform for images obtained using positron emission tomography and magnetic resonance imaging. Such images are characterized by a roughly circular region of non-zero intensity bounded by a region R in which the image intensity is essentially zero. Clipping R to its minimum extent can reduce the number of low-intensity pixels, but the practical requirement that images be stored on a rectangular grid means that a significant region of zero intensity must remain an integral part of the image to be compressed. The DCT therefore loses its advantage over the DFT, because neither transform introduces significant artificial discontinuities.
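A minimal full-frame DFT coder can be sketched with NumPy as follows, using a synthetic circular-support frame; the function name is hypothetical, and quantization and entropy coding of the retained coefficients are omitted.

```python
import numpy as np

def full_frame_dft_compress(image, keep_fraction=0.05):
    """Illustrative full-frame DFT coder: transform the whole image at once
    (no blocks, hence no block artifacts), keep only the largest-magnitude
    fraction of coefficients, and reconstruct.  A real system would also
    quantize and entropy-code the retained coefficients."""
    F = np.fft.fft2(image)
    mags = np.abs(F).ravel()
    k = max(1, int(keep_fraction * mags.size))
    threshold = np.partition(mags, -k)[-k]
    F_kept = np.where(np.abs(F) >= threshold, F, 0)
    recon = np.real(np.fft.ifft2(F_kept))
    return recon, np.sqrt(np.mean((image - recon) ** 2))

# Synthetic test frame: a circular region of non-zero intensity surrounded by a
# zero background, as described in the abstract.
n = 64
y, x = np.mgrid[:n, :n]
image = np.where((x - n / 2) ** 2 + (y - n / 2) ** 2 < (n / 3) ** 2, 100.0 + x, 0.0)
recon, rmse = full_frame_dft_compress(image)
print("RMSE:", rmse)
```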
Citations: 2
Globally optimal bit allocation
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253148
Xiaolin Wu
Given M quantizers of variable rates, scalar and/or vector, the globally optimal allocation of B bits to the M quantizers can be computed in O(MB^2) time with the integer constraint, or O(MB2^B) time without it. The author also considers the nested optimization problem of optimal bit allocation with respect to optimal quantizers. Various algorithmic techniques are proposed to solve this new problem in pseudo-polynomial time.
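A standard dynamic program achieves the integer-constrained O(MB^2) bound and may serve as a sketch of what such an allocation computes; the distortion tables in the example are assumed, and this is not the paper's own code.

```python
def optimal_bit_allocation(distortion, B):
    """Dynamic program for globally optimal integer bit allocation.
    distortion[m][b] is the distortion of quantizer m when given b bits
    (0 <= b <= B).  Runs in O(M * B^2) time, matching the integer-constrained
    bound quoted above."""
    M = len(distortion)
    INF = float("inf")
    # best[b] = minimal total distortion over the quantizers seen so far using b bits
    best = list(distortion[0][:B + 1])
    choice = [[b for b in range(B + 1)]]   # bits given to quantizer 0
    for m in range(1, M):
        new_best = [INF] * (B + 1)
        new_choice = [0] * (B + 1)
        for total in range(B + 1):
            for b in range(total + 1):
                cand = best[total - b] + distortion[m][b]
                if cand < new_best[total]:
                    new_best[total], new_choice[total] = cand, b
        best = new_best
        choice.append(new_choice)
    # Back-track the allocation.
    alloc, remaining = [0] * M, B
    for m in range(M - 1, 0, -1):
        alloc[m] = choice[m][remaining]
        remaining -= alloc[m]
    alloc[0] = remaining
    return best[B], alloc

# Example: three quantizers with decreasing distortion-rate tables (assumed values).
tables = [[16, 8, 4, 2, 1, 1], [20, 10, 5, 3, 2, 1], [12, 7, 4, 3, 2, 2]]
print(optimal_bit_allocation(tables, B=5))
```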
Citations: 11
Multispectral image compression algorithms
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253110
T. Markas, J. Reif
This paper presents a data compression algorithm capable of significantly reducing the amount of information contained in multispectral and hyperspectral images. The loss of information ranges from a perceptually lossless level, achieved at 20-30:1 compression ratios, to one where exploitation of the images is still possible (over 100:1 ratios). A one-dimensional transform coder removes the spectral redundancy, and a two-dimensional wavelet transform removes the spatial redundancy of multispectral images. The transformed images are subsequently divided into active regions that contain significant wavelet coefficients. Each active block is then hierarchically encoded using multidimensional bitmap trees. Application of reversible histogram equalization methods to the spectral bands can significantly increase the compression/distortion performance. Landsat Thematic Mapper data are used to illustrate the performance of the proposed algorithm.
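The spectral-redundancy-removal step can be sketched as a 1-D decorrelating transform applied along the band axis; the DCT used below is an assumption for illustration (the abstract does not name the transform), and the spatial wavelet stage and bitmap-tree coding are omitted.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix, used here as a 1-D decorrelating transform."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def spectral_decorrelate(cube):
    """Apply a 1-D transform along the band axis of a (bands, rows, cols) cube.
    This is the spectral-redundancy-removal step described above; the spatial
    2-D wavelet stage and the bitmap-tree coding are not modeled here."""
    C = dct_matrix(cube.shape[0])
    return np.tensordot(C, cube, axes=([1], [0]))

rng = np.random.default_rng(2)
base = rng.random((32, 32))
cube = np.stack([base * (1.0 + 0.05 * b) for b in range(7)])  # highly correlated bands
t = spectral_decorrelate(cube)
print("energy per transformed band:", np.round(np.sum(t ** 2, axis=(1, 2)), 1))
```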
Citations: 52
Application of AVL trees to adaptive compression of numerical data
Pub Date : 1993-03-30 DOI: 10.1109/DCC.1993.253118
H. Yokoo
This paper discusses the compression of computer files of data whose statistical properties are not given in advance. A new lossless coding method for this purpose, which utilizes Adel'son-Vel'skii-Landis (AVL) trees, is effective for any word length. Its application to the lossless compression of gray-scale images shows wider applicability to any ordered set of 18-bit or 36-bit data.
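One plausible reading of the abstract is that the balanced tree maintains previously seen values in sorted order so that each new value can be coded by its rank among them; the sketch below uses a sorted list with bisect as a stand-in for the AVL tree and is an assumption about the method, not a reproduction of it.

```python
import bisect

def values_to_ranks(values):
    """Stand-in sketch for the role of the balanced tree: map each incoming
    value to its rank among the values seen so far, so that a back-end entropy
    coder can exploit locality in the data.  An AVL tree gives O(log n) insert
    and rank queries; the sorted list + bisect used here is O(n) per insert
    but keeps the sketch short."""
    seen, ranks = [], []
    for v in values:
        r = bisect.bisect_left(seen, v)   # rank = number of smaller values seen
        ranks.append(r)
        bisect.insort(seen, v)
    return ranks

print(values_to_ranks([100, 102, 101, 103, 102, 1000]))
```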
Citations: 1
Efficient compression of wavelet coefficients for smooth and fractal-like data
Pub Date : 1993-02-25 DOI: 10.1109/DCC.1993.253126
K. Culík, S. Dube
The authors show how to integrate wavelet-based and fractal-based approaches for data compression. If the data is self-similar or smooth, one can efficiently store its wavelet coefficients using fractal compression techniques, resulting in high compression ratios.
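The wavelet half of the story can be sketched with a hand-rolled Haar decomposition of a smooth signal; the fractal storage of the coefficients is not modeled, and the test signal is assumed for illustration.

```python
import numpy as np

def haar_step(signal):
    """One Haar analysis step: orthonormally scaled averages and details."""
    s = np.asarray(signal, dtype=float)
    avg = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    det = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return avg, det

def haar_coefficients(signal):
    """Full multi-level Haar decomposition of a length-2^k signal."""
    coeffs, avg = [], np.asarray(signal, dtype=float)
    while len(avg) > 1:
        avg, det = haar_step(avg)
        coeffs.append(det)
    coeffs.append(avg)
    return coeffs

x = np.linspace(0.0, 1.0, 64)
coeffs = haar_coefficients(np.sin(2 * np.pi * x))   # smooth test signal
for level, det in enumerate(coeffs[:-1], start=1):
    print(f"level {level} ({len(det)} coefficients): mean |detail| = {np.mean(np.abs(det)):.4f}")
# The finest levels hold most of the coefficients and, for smooth data, the
# smallest detail magnitudes -- which is what makes them cheap to store.
```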
Citations: 11