
Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096): Latest Publications

Linear global detectors of redundant and rare substrings
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755666
A. Apostolico, M. Bock, S. Lonardi
The identification of strings that are, by some measure, redundant or rare in the context of larger sequences is an implicit goal of any data compression method. In the straightforward approach to searching for unusual substrings, the words (up to a certain length) are enumerated more or less exhaustively and individually checked in terms of observed and expected frequencies, variances, and scores of discrepancy and significance thereof. As is well known, clever methods are available to compute and organize the counts of occurrences of all substrings of a given string. The corresponding tables take up the tree-like structure of a special kind of digital search index or trie. We show here that under several accepted measures of deviation from expected frequency, the candidate over- or under-represented words are restricted to the O(n) words that end at internal nodes of a compact suffix tree, as opposed to the Θ(n^2) possible substrings. This surprising fact is a consequence of properties of the following form: if a word that ends in the middle of an arc is, say, over-represented, then its extension to the nearest node of the tree is even more so. Based on this, we design global linear detectors of favored and unfavored words for our probabilistic framework, and display the results of some preliminary experiments that apply our constructions to the analysis of genomic sequences.
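The "straightforward approach" that the abstract contrasts with its suffix-tree method is easy to make concrete. The sketch below is our illustration, not the authors' linear-time construction: the function name `unusual_words`, the IID symbol model, and the first-order variance estimate are all our own simplifying assumptions. It enumerates words up to a fixed length and flags those whose observed count deviates from the expected count by a large z-score:

```python
from collections import Counter
from math import sqrt

def unusual_words(text, max_len=4, threshold=2.0):
    """Enumerate words of length 2..max_len, score observed vs. expected
    counts under an IID symbol model, and flag large deviations."""
    n = len(text)
    sym = Counter(text)
    p = {c: cnt / n for c, cnt in sym.items()}  # empirical symbol probabilities

    flagged = []
    for m in range(2, max_len + 1):
        counts = Counter(text[i:i + m] for i in range(n - m + 1))
        positions = n - m + 1
        for w, obs in counts.items():
            pw = 1.0
            for c in w:
                pw *= p[c]                      # word probability under IID model
            exp = positions * pw
            var = positions * pw * (1 - pw)     # crude first-order variance estimate
            if var > 0:
                z = (obs - exp) / sqrt(var)
                if abs(z) >= threshold:
                    flagged.append((w, obs, round(z, 2)))
    return flagged
```

Note that this naive scheme only scores words that actually occur, and it inspects all Θ(n²) substrings up to `max_len`; restricting candidates to the O(n) words ending at internal suffix-tree nodes is precisely the paper's contribution.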
Citations: 4
Image coding using Markov models with hidden states
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785681
S. Forchhammer
Summary form only given. Lossless image coding may be performed by applying arithmetic coding sequentially to probabilities conditioned on the past data. Therefore the model is very important. A new image model is applied to image coding. The model is based on a Markov process involving hidden states. An underlying Markov process called the slice process specifies D rows with the width of the image. Each new row of the image coincides with row N of an instance of the slice process. The N-1 previous rows are read from the causal part of the image and the last D-N rows are hidden. This gives a description of the current row conditioned on the N-1 previous rows. From the slice process we may decompose the description into a sequence of conditional probabilities, involving a combination of a forward and a backward pass. In effect the causal part of the last N rows of the image becomes the context. The forward pass obtained directly from the slice process starts from the left for each row with D-N hidden rows. The backward pass starting from the right additionally has the current row as hidden. The backward pass may be described as a completion of the forward pass. It plays the role of normalizing the possible completions of the forward pass for each pixel. The hidden states may effectively be represented in a trellis structure as in an HMM. For the slice process we use a state of D rows and V-1 columns, thus involving V columns in each transition. The new model was applied to a bi-level image (SO9 of the JBIG test set) in a two-part coding scheme.
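The role of the hidden-state model in such a coder is to supply, for every symbol, a conditional probability that a sequential arithmetic coder can consume. A minimal sketch of that interface, using a generic HMM forward recursion rather than the paper's row-structured slice process with its forward and backward passes (the matrices `A` and `B` and the function names are our illustrative assumptions):

```python
import numpy as np

def next_symbol_probs(A, B, alpha):
    """Given HMM transition matrix A (S x S), emission matrix B (S x K),
    and the forward vector alpha (state posterior given the past),
    return P(next symbol | past) and the predictive state distribution."""
    pred = alpha @ A              # predict next hidden state
    probs = pred @ B              # marginalize states -> symbol probabilities
    return probs / probs.sum(), pred

def update_alpha(pred, B, symbol):
    """Condition the predictive state distribution on the observed symbol."""
    post = pred * B[:, symbol]
    return post / post.sum()
```

Feeding `probs` to an arithmetic coder and then calling `update_alpha` with the symbol actually coded yields a lossless scheme whose rate approaches the model's conditional entropy.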
Citations: 0
A video codec based on R/D-optimized adaptive vector quantization
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785713
M. Wagner, Ralf Herz, H. Hartenstein, R. Hamzaoui, D. Saupe
Summary form only given. We present a new AVQ-based video coder for very low bitrates. To encode a block from a frame, the encoder offers three modes: (1) a block from the same position in the last frame can be taken; (2) the block can be represented with a vector from the codebook; or (3) a new vector that sufficiently represents the block can be inserted into the codebook. For mode 2 a mean-removed VQ scheme is used. The decision on how blocks are encoded and how the codebook is updated is made in a rate-distortion (R-D) optimized fashion. The codebook of shape blocks is updated once per frame. First results for an implementation of such a scheme have been reported previously. Here we extend the method to incorporate a wavelet image transform before coding in order to enhance the compression performance. In addition, the rate-distortion optimization is comprehensively discussed. Our R-D optimization is based on an efficient convex-hull computation. This method is compared to common R-D optimizations that use a Lagrangian multiplier approach. In the discussion of our R-D method we show the similarities and differences between our scheme and the generalized threshold replenishment (GTR) method of Fowler et al. (1997). Furthermore, we demonstrate that the translation of our R-D optimized AVQ into the wavelet domain leads to improved coding performance. We present coding results that show that one can achieve the same encoding quality as with comparable standard transform coding (H.263). In addition we offer an empirical analysis of the short- and long-term behavior of the adaptive codebook. This analysis indicates that the AVQ method uses the vectors in its codebook for some kind of long-term prediction.
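The Lagrangian variant of the mode decision that the paper compares against fits in a few lines: each candidate mode for a block is scored by D + λR and the cheapest one wins. A hypothetical sketch (the mode names mirror the three modes above, but the distortion/rate numbers are invented for illustration; the paper itself uses a convex-hull computation instead):

```python
def choose_mode(block_modes, lam):
    """Pick the coding mode minimizing the Lagrangian cost D + lambda * R.
    block_modes: list of (name, distortion, rate_in_bits) candidates."""
    return min(block_modes, key=lambda m: m[1] + lam * m[2])

# Hypothetical candidates for one block: copy from previous frame,
# code with an existing codebook vector, or insert a new vector.
modes = [("skip", 120.0, 0.0), ("codebook", 40.0, 9.0), ("new_vector", 5.0, 70.0)]
best = choose_mode(modes, lam=1.0)
```

Sweeping λ from 0 to ∞ traces out the lower convex hull of the achievable (R, D) points, which is the formal connection between the Lagrangian approach and the convex-hull computation the paper uses.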
Citations: 6
A general joint source-channel matching method for wireless video transmission
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755691
Leiming Qian, Douglas L. Jones, K. Ramchandran, S. Appadwedula
With the rapid growth of multimedia content in wireless communication, there is an increasing demand for efficient image and video transmission systems. We present a joint source-channel matching scheme for wireless video transmission which jointly optimizes the source and channel coder to yield the optimal transmission quality while satisfying real-time delay and buffer constraints. We utilize a parametric model approach which avoids the necessity of having detailed a priori knowledge of the coders, thus making the scheme applicable to a wide variety of source and channel coder pairs. Simulations show that the scheme yields excellent results and works for several different types of source and channel coders.
Citations: 30
Resynchronization properties of arithmetic coding
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785697
P. W. Moo, Xiaolin Wu
Summary form only given. Arithmetic coding is a popular and efficient lossless compression technique that maps a sequence of source symbols to an interval of numbers between zero and one. We consider the important problem of decoding an arithmetic code stream when an initial segment of that code stream is unknown. We call decoding under these conditions resynchronizing an arithmetic code. This problem is important in both error resilience and cryptology. If an initial segment of the code stream is corrupted by channel noise, then the decoder must attempt to determine the original source sequence without full knowledge of the code stream. In this case, the ability to resynchronize helps the decoder to recover from the channel errors. But in the situation of encryption one would like resynchronization to have very high time complexity. We consider the problem of resynchronizing simple arithmetic codes. This research lays the groundwork for future analysis of arithmetic codes with high-order context models. In order for the decoder to achieve full resynchronization, the unknown, initial b bits of the code stream must be determined exactly. When the source is approximately IID, the search complexity associated with choosing the correct sequence is at least O(2^(b/2)). Therefore, when b is 100 or more, the time complexity required to achieve full resynchronization is prohibitively high. To partially resynchronize, the decoder must determine the coding interval after b bits have been output by the encoder. For a stationary source and a finite-precision static binary arithmetic coder, the complexity of determining the code interval is O(2^(2s)), where the precision is s bits.
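The "coding interval" at the heart of the resynchronization problem is simply the subinterval of [0, 1) that remains after each symbol narrows it. A toy static binary coder in idealized real arithmetic (our sketch; it omits the finite-precision renormalization of the s-bit coder the abstract analyzes):

```python
def encode_interval(bits, p0):
    """Narrow the unit interval for a sequence of binary symbols with
    a static probability P(0) = p0; return the final [low, high)
    coding interval. Any number in the interval identifies the sequence."""
    low, high = 0.0, 1.0
    for b in bits:
        split = low + p0 * (high - low)  # boundary between the 0- and 1-subintervals
        if b == 0:
            high = split
        else:
            low = split
    return low, high
```

If the first b output bits are unknown, the decoder does not know which subinterval the encoder had reached, which is exactly why resynchronization requires a search over candidate prefixes.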
Citations: 8
Adaptive linear prediction lossless image coding
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755699
G. Motta, J. Storer, B. Carpentieri
The practical lossless digital image compressors that achieve the best results in terms of compression ratio are also simple and fast algorithms with low complexity both in terms of memory usage and running time. Surprisingly, the compression ratio achieved by these systems cannot be substantially improved even by using image-by-image optimization techniques or more sophisticated and complex algorithms. Meyer and Tischer (1998) were able, with their TMW, to improve some current best results (they do not report results for all test images) by using global optimization techniques and multiple blended linear predictors. Our investigation is directed at determining the effectiveness of an algorithm that uses multiple adaptive linear predictors, locally optimized on a pixel-by-pixel basis. The results we obtained on a test set of nine standard images are encouraging: we improve over CALIC on some images.
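One common way to realize a locally, sample-by-sample adapted linear predictor is the LMS update. The sketch below shows the idea on a 1-D sequence; the authors work with 2-D causal neighborhoods and multiple blended predictors, so the function name, `order`, and step size `mu` here are our illustrative assumptions:

```python
def lms_residuals(samples, order=3, mu=0.01):
    """Predict each sample from its `order` predecessors with a linear
    predictor adapted sample-by-sample via LMS; the residuals are what
    a lossless coder would entropy-code."""
    w = [0.0] * order
    residuals = []
    for i in range(len(samples)):
        ctx = list(samples[max(0, i - order):i])
        ctx = [0.0] * (order - len(ctx)) + ctx       # zero-pad early samples
        pred = sum(wj * xj for wj, xj in zip(w, ctx))
        e = samples[i] - pred                        # prediction error
        residuals.append(e)
        w = [wj + mu * e * xj for wj, xj in zip(w, ctx)]  # LMS weight update
    return residuals
```

On stationary data the residual magnitude shrinks as the weights adapt, concentrating the signal's energy into a low-entropy error sequence.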
Citations: 26
Comparison and application possibilities of JPEG and fractal-based image compressing methods in the development of multimedia-based material
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785674
J. Berke
When developing multimedia-based material, the format in which we want to enclose our images is an important question. The question becomes more crucial if we want our material to appear commercially, because the cost of developing a CD (master copy and copies) changes according to the amount of data on the disk. In the case of fractal-based compression (FIF) we can save space and money. Our studies verified that this compression method also results in an improvement in quality at common raster image sizes (640×480, 800×600, 1024×768). This is especially true for images full of shades, and for the enlargement of parts.
Citations: 2
Performance evaluation of reversible integer-to-integer wavelet transforms for image compression
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785671
M. Adams, F. Kossentini
Summary form only given. There has been a growing interest in reversible integer-to-integer wavelet transforms for image coding applications. In this paper, a number of such transforms are compared on the basis of their objective and subjective lossy compression performance, lossless compression performance, and computational complexity. Of the transforms considered, several were found to perform particularly well, with the best choice for a given application depending on the relative importance of lossless compression performance, lossy compression performance, and computational complexity. Reversible integer-to-integer versions of numerous transforms are also compared to their conventional (i.e., nonreversible real-valued) counterparts for lossy compression. In many cases, the reversible integer-to-integer and conventional versions of a transform were found to yield results with comparable image quality.
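The simplest member of the family under evaluation is the integer Haar transform (the S-transform), obtained by lifting: every step is an integer operation with an exact integer inverse, so the transform is reversible bit-for-bit. A minimal sketch:

```python
def s_transform(pairs):
    """Forward integer Haar (S-transform) via lifting: each (a, b) pair
    becomes a lowpass average l = floor((a + b) / 2) and a highpass
    difference h = a - b, all in integer arithmetic (lossless)."""
    out = []
    for a, b in pairs:
        h = a - b
        l = b + (h >> 1)   # arithmetic shift = floor division, also for negative h
        out.append((l, h))
    return out

def inverse_s_transform(pairs):
    """Exact inverse: recover each (a, b) from (l, h)."""
    out = []
    for l, h in pairs:
        b = l - (h >> 1)
        a = b + h
        out.append((a, b))
    return out
```

In Python, `>>` on negative integers is an arithmetic (floor) shift, which is exactly the rounding the lifting step requires; the same property must be checked carefully in languages where right-shifting negative values is implementation-defined.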
Citations: 22
Eigen wavelet: hyperspectral image compression algorithm
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.785707
S. Srinivasan, L. Kanal
Summary form only given. The increased information content of hyperspectral imagery over multispectral data has attracted significant interest from the defense and remote sensing communities. We develop a mechanism for compressing hyperspectral imagery with no loss of information. The challenge of hyperspectral image compression lies in the non-isotropy and non-stationarity that is displayed across the spectral channels. Short-range dependence is exhibited over the spatial axes due to the finite extent of objects/texture on the imaged area, while long-range dependence is shown by the spectral axis due to the spectral response of the imaged pixel and transmission medium. A secondary, though critical, challenge is one of speed. In order to be of practical interest, a good solution must be able to scale up to speeds of the order of 20 MByte/s. We use an integerizable eigendecomposition along the spectral channel to optimally extract spectral redundancies. Subsequently, we apply wavelet-based encoding to transmit the residuals of eigendecomposition. We use contextual arithmetic encoding implemented with several innovations that guarantee speed and performance. Our implementation attains operating speeds of 550 kBytes of raw imagery per second, and achieves a compression ratio of around 2.7:1 on typical AVIRIS data. This demonstrates the utility and applicability of our algorithm towards realizing a deployable hyperspectral image compression system.
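The spectral eigendecomposition step can be sketched with ordinary PCA along the band axis: rotating each pixel's spectrum onto the eigenvectors of the spectral covariance decorrelates the bands and packs the long-range spectral redundancy into a few components. This is our floating-point illustration; the paper uses an integerizable eigendecomposition so the step remains lossless:

```python
import numpy as np

def spectral_decorrelate(cube):
    """cube: (bands, H, W) hyperspectral image. Project each pixel's
    spectrum onto the eigenvectors of the spectral covariance matrix,
    returning decorrelated components plus what is needed to invert."""
    bands = cube.shape[0]
    X = cube.reshape(bands, -1).astype(float)   # one column per pixel
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc @ Xc.T / Xc.shape[1]               # bands x bands spectral covariance
    w, V = np.linalg.eigh(cov)                  # eigenvalues ascending, V orthonormal
    comps = V.T @ Xc                            # decorrelated spectral components
    return comps.reshape(cube.shape), V, mean
```

Reconstruction is `V @ comps + mean`; in a codec, the components (whose covariance is diagonal) would then be encoded spatially, e.g. with the wavelet stage described above.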
{"title":"Eigen wavelet: hyperspectral image compression algorithm","authors":"S. Srinivasan, L. Kanal","doi":"10.1109/DCC.1999.785707","DOIUrl":"https://doi.org/10.1109/DCC.1999.785707","url":null,"abstract":"Summary form only given. The increased information content of hyperspectral imagery over multispectral data has attracted significant interest from the defense and remote sensing communities. We develop a mechanism for compressing hyperspectral imagery with no loss of information. The challenge of hyperspectral image compression lies in the non-isotropy and non-stationarity that is displayed across the spectral channels. Short-range dependence is exhibited over the spatial axes due to the finite extent of objects/texture on the imaged area, while long-range dependence is shown by the spectral axis due to the spectral response of the imaged pixel and transmission medium. A secondary, though critical, challenge is one of speed. In order to be of practical interest, a good solution must be able to scale up to speeds of the order of 20 MByte/s. We use an integerizable eigendecomposition along the spectral channel to optimally extract spectral redundancies. Subsequently, we apply wavelet-based encoding to transmit the residuals of eigendecomposition. We use contextual arithmetic encoding implemented with several innovations that guarantee speed and performance. Our implementation attains operating speeds of 550 kBytes of raw imagery per second, and achieves a compression ratio of around 2.7:1 on typical AVIRIS data. This demonstrates the utility and applicability of our algorithm towards realizing a deployable hyperspectral image compression system.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. 
PR00096)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124594906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
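The spectral decorrelation step described above can be sketched as projecting each pixel's spectrum onto the eigenbasis of the band-covariance matrix. This is a plain floating-point sketch; the paper's integerizable eigendecomposition (needed for strict losslessness) is not reproduced here, and the function name is illustrative:

```python
import numpy as np

def spectral_decorrelate(cube):
    """Project each pixel's spectrum onto the eigenvectors of the
    band-covariance matrix, strongest component first."""
    bands, rows, cols = cube.shape
    X = cube.reshape(bands, -1).astype(float)   # one column per pixel
    mean = X.mean(axis=1, keepdims=True)
    vals, vecs = np.linalg.eigh(np.cov(X))      # band-covariance eigenbasis
    order = np.argsort(vals)[::-1]              # descending eigenvalues
    basis = vecs[:, order]
    coeffs = basis.T @ (X - mean)               # decorrelated coefficients
    return coeffs.reshape(bands, rows, cols), basis, mean
```

The long-range spectral dependence noted in the abstract is exactly what this projection removes: after it, most of the energy sits in the first few coefficient planes, and the residuals are what a wavelet coder would then encode.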
Variable-to-fixed length codes and plurally parsable dictionaries
Pub Date : 1999-03-29 DOI: 10.1109/DCC.1999.755695
S. Savari
The goal of lossless data compression is to map the set of strings from a given source into a set of binary code strings. A variable-to-fixed length encoding procedure is a mapping from a dictionary of variable length strings of source outputs to the set of codewords of a given length. For memoryless sources, the Tunstall procedure can be applied to construct optimal uniquely parsable dictionaries and the resulting codes are known to work especially well for sources with small entropies. We introduce the idea of plurally parsable dictionaries and show how to design plurally parsable dictionaries that can outperform the Tunstall dictionary of the same size on very predictable binary, memoryless sources.
{"title":"Variable-to-fixed length codes and plurally parsable dictionaries","authors":"S. Savari","doi":"10.1109/DCC.1999.755695","DOIUrl":"https://doi.org/10.1109/DCC.1999.755695","url":null,"abstract":"The goal of lossless data compression is to map the set of strings from a given source into a set of binary code strings. A variable-to-fixed length encoding procedure is a mapping from a dictionary of variable length strings of source outputs to the set of codewords of a given length. For memoryless sources, the Tunstall procedure can be applied to construct optimal uniquely parsable dictionaries and the resulting codes are known to work especially well for sources with small entropies. We introduce the idea of plurally parsable dictionaries and show how to design plurally parsable dictionaries that can outperform the Tunstall dictionary of the same size on very predictable binary, memoryless sources.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130905031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
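For reference, the Tunstall construction mentioned in the abstract grows a uniquely parsable dictionary for a memoryless source by repeatedly splitting the most probable leaf into its single-symbol extensions, so that every leaf can be assigned a fixed-length codeword. A minimal sketch (the paper's plurally parsable refinement is not shown):

```python
import heapq

def tunstall(probs, num_words):
    """Build a Tunstall dictionary of at most num_words strings for a
    memoryless source with symbol probabilities `probs`."""
    # Max-heap of leaves keyed on probability (negated for heapq's min-heap).
    heap = [(-p, s) for s, p in probs.items()]
    heapq.heapify(heap)
    # Each split replaces one leaf with |alphabet| children.
    while len(heap) + len(probs) - 1 <= num_words:
        negp, w = heapq.heappop(heap)           # most probable leaf
        for s, p in probs.items():
            heapq.heappush(heap, (negp * p, w + s))
    return sorted(w for _, w in heap)

tunstall({'a': 0.7, 'b': 0.3}, 4)  # → ['aaa', 'aab', 'ab', 'b']
```

The resulting leaves form a complete prefix-free set, so any source string parses uniquely — the property that a plurally parsable dictionary deliberately relaxes to gain compression on highly skewed sources.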