
Latest publications from Proceedings DCC '97. Data Compression Conference

Redundancy of the Lempel-Ziv-Welch code
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582011
S. Savari
The Lempel-Ziv codes are universal variable-to-fixed length codes that have become virtually standard in practical lossless data compression. For any given source output string from a unifilar Markov source, we upper bound the difference between the number of binary digits needed by the Lempel-Ziv-Welch code (1977, 1978, 1984) to encode the string and the self-information of the string. We use this result to demonstrate that for unifilar, Markov sources, the redundancy of encoding the first n letters of the source output with LZW is O((ln n)⁻¹), and we upper bound the exact form of convergence.
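To make the bounded quantity concrete, here is a minimal sketch (not from the paper) that counts the binary digits a growing-dictionary LZW coder spends on a string and compares them with the string's self-information under an assumed i.i.d. model; the helper names and the toy source are invented for illustration.
```python
# Minimal LZW encoder sketch: count the bits a fixed-rate LZW code
# spends on a string, the quantity the paper compares against the
# string's self-information. Illustrative only.
import math

def lzw_code_length(s: str) -> int:
    """Total bits used by LZW with a growing dictionary."""
    dictionary = {ch: i for i, ch in enumerate(sorted(set(s)))}
    bits, w = 0, ""
    for ch in s:
        if w + ch in dictionary:
            w += ch
        else:
            # Emit the index of w using ceil(log2(|dict|)) bits.
            bits += max(1, math.ceil(math.log2(len(dictionary))))
            dictionary[w + ch] = len(dictionary)
            w = ch
    if w:
        bits += max(1, math.ceil(math.log2(len(dictionary))))
    return bits

def self_information(s: str, p: float) -> float:
    """Self-information of a binary string under an i.i.d. model P(1)=p."""
    ones = s.count("1")
    return -(ones * math.log2(p) + (len(s) - ones) * math.log2(1 - p))

s = ("10" * 500) + "1" * 100   # toy source output
print(lzw_code_length(s), self_information(s, s.count("1") / len(s)))
```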
Citations: 8
Block sorting and compression
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582009
Z. Arnavut, S. Magliveras
The block sorting lossless data compression algorithm (BSLDCA) described by Burrows and Wheeler (1994) has received considerable attention. It achieves compression rates as good as context-based methods, such as PPM, but at execution speeds closer to Ziv-Lempel techniques. This paper describes the lexical permutation sorting algorithm (LPSA) and its theoretical basis, and delineates its relationship to the BSLDCA. In particular, we describe how the BSLDCA can be reduced to the LPSA and show how the LPSA can give better results than the BSLDCA when transmitting permutations. We also introduce a new technique, inversion frequencies, and show that it does as well as move-to-front (MTF) coding when there is locality of reference in the data.
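As a rough illustration of the pipeline the paper builds on (assumed details, not the authors' LPSA), the sketch below runs a naive Burrows-Wheeler transform followed by move-to-front coding; the mostly small MTF indices show the locality of reference that both MTF and the proposed inversion-frequencies technique exploit.
```python
# Toy Burrows-Wheeler transform plus move-to-front (MTF) sketch:
# after the sort, runs of identical symbols yield small MTF indices,
# which are cheap to entropy-code.
def bwt(s):
    s += "\x00"                              # unique sentinel character
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def mtf(s):
    alphabet = sorted(set(s))
    out = []
    for ch in s:
        i = alphabet.index(ch)
        out.append(i)
        alphabet.insert(0, alphabet.pop(i))  # move the symbol to the front
    return out

text = "banana bandana banana"
print(mtf(bwt(text)))   # mostly small indices thanks to the sort
```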
Citations: 43
Multiple descriptions encoding of images
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582138
P. Subrahmanya, T. Berger
Summary form only given. The authors consider the following multiterminal data compression problem. An image is to be compressed and transmitted to a destination over a set of unreliable links. If only one of the links is functional (i.e. data transmitted over all other links is completely lost), we would like to be able to reconstruct a low resolution version of our original image. If more links are functional, we would like the image quality to improve. Finally, with all links functioning, we desire a high resolution, possibly lossless reconstruction of the original image. Motivating applications for this problem are briefly discussed. We present conceptually simple and computationally efficient schemes that are useful in these applications. The schemes can be implemented as simple pre-processing and post-processing operations on existing image compression algorithms.
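A minimal sketch of the flavour of scheme described, under an assumed polyphase (even/odd column) split that the summary does not specify: each description alone yields a low-resolution reconstruction, and both together restore the image exactly.
```python
# Two-description sketch: one description per link; losing a link
# degrades resolution instead of losing the image.
import numpy as np

def encode(img):
    return img[:, 0::2], img[:, 1::2]     # description 1, description 2

def decode(d1, d2, width):
    if d1 is not None and d2 is not None: # both links up: lossless
        img = np.empty((d1.shape[0], width), dtype=d1.dtype)
        img[:, 0::2], img[:, 1::2] = d1, d2
        return img
    surviving = d1 if d1 is not None else d2
    # One link up: repeat columns for a low-resolution approximation.
    return np.repeat(surviving, 2, axis=1)[:, :width]

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
d1, d2 = encode(img)
assert np.array_equal(decode(d1, d2, 8), img)
print(decode(d1, None, 8))  # degraded but usable reconstruction
```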
Citations: 7
Fast weighted universal transform coding: toward optimal, low complexity bases for image compression
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582021
M. Effros
Effros and Chou (see Proceedings of the IEEE International Conference on Image Processing, Washington, DC, 1995) introduce a two-stage universal transform code called the weighted universal transform code (WUTC). By replacing JPEG's single, non-optimal transform code with a collection of optimal transform codes, the WUTC achieves significant performance gains over JPEG. The computational and storage costs of that performance gain are effectively the computation and storage required to operate and store a collection of transform codes rather than a single transform code. We consider two complexity- and storage-constrained variations of the WUTC. The complexity and storage of the algorithm are controlled by constraining the order of the bases. In the first algorithm, called the fast WUTC (FWUTC), complexity is controlled by controlling the maximum order of each transform. On a sequence of combined text and gray-scale images, the FWUTC achieves performance comparable to the WUTC. In the second algorithm, called the jointly optimized fast WUTC (JWUTC), the complexity is controlled by controlling the average order of the transforms. On the same data set and for the same complexity, the performance of the JWUTC always exceeds the performance of the FWUTC. The JWUTC and FWUTC algorithms are interesting both for their complexity and storage savings in data compression and for the insight they lend into the choice of appropriate fixed- and variable-order bases for image representation.
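The two-stage selection underlying such codes can be sketched as follows; the transform collection, quantizer, and rate proxy here are placeholders rather than the WUTC's actual components. Each block is coded with whichever transform in the collection minimizes a Lagrangian cost D + λR.
```python
# Per-block transform selection sketch: pick, from a small collection,
# the transform with lowest Lagrangian rate-distortion cost.
import numpy as np
from scipy.fft import dct

def cost(block, transform, lam=0.1, step=8.0):
    coeffs = transform(block)
    q = np.round(coeffs / step)               # uniform scalar quantizer
    rate = np.count_nonzero(q)                # crude rate proxy
    dist = np.sum((coeffs - q * step) ** 2)   # quantization distortion
    return dist + lam * rate

transforms = {
    "dct": lambda b: dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho"),
    "identity": lambda b: b,
}

block = np.random.default_rng(0).normal(size=(8, 8))
best = min(transforms, key=lambda name: cost(block, transforms[name]))
print("chosen transform:", best)
```
Constraining the maximum (FWUTC) or average (JWUTC) number of basis coefficients retained per transform is then what caps complexity and storage.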
Citations: 2
Word based multiple dictionary scheme for text compression with application to 2D bar code
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582120
K. Ng, L. Cheng
Summary form only given. Research on text compression has mainly concerned documentation applications and has seldom considered others. Significant efforts have previously been made to increase both the data capacity and the information density of bar code symbologies; these efforts produced the 2D bar code formats. We take PDF417 (Pavlidis et al. 1992), developed by Symbol Technologies, as an example. PDF417 is the most popular of the 2D bar code symbologies, but its limited storage capacity has restricted its wider application. Here, we propose a text compression technique with a back-searching algorithm and new storage protocols. Studies on how a word-based multiple-dictionary text compression technique can be used to increase the storage capacity of a 2D bar code are described. To speed up the search of the text, a hashing function is also described. The proposed technique is particularly useful for database retrieval applications. For data stored in 2D bar codes in restricted forms such as part numbers, locations, names and references, the compression ratio can be as high as 2 because the hit ratio can be 100%. The decoder need not be complex, since it only has to distinguish 'light' from 'dark'. To make the dictionaries more 'intelligent', a sub-dictionary is proposed that allows the encoded text to be more independent.
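A toy sketch of word-based dictionary coding on form-like data (dictionary contents and token format invented for illustration): when every field value hits the hashed dictionary, each word collapses to a single small index, which is the effect behind the quoted 100% hit ratio.
```python
# Word-based dictionary coding sketch for structured bar-code payloads
# (part numbers, locations, names, references).
DICTIONARY = ["PART", "LOC", "NAME", "REF", "A100", "B200", "SHELF-3"]
INDEX = {w: i for i, w in enumerate(DICTIONARY)}  # hash-based lookup

def encode(text):
    out = []
    for word in text.split():
        if word in INDEX:             # O(1) hashed dictionary probe
            out.append(INDEX[word])   # word -> small integer token
        else:
            out.append(word)          # literal fallback on a miss
    return out

def decode(tokens):
    return " ".join(DICTIONARY[t] if isinstance(t, int) else t
                    for t in tokens)

msg = "PART A100 LOC SHELF-3"
tokens = encode(msg)
assert decode(tokens) == msg
print(tokens)   # every word hit the dictionary: 100% hit ratio
```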
Citations: 0
Content-based retrieval from compressed-image databases
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582126
P. Ogunbona, P. Sangassapaviriya
Summary form only given. There is an enormous amount of multimedia data, including images, video, speech, audio and text, distributed among the various computer nodes on the Internet. The extent to which a user can derive useful information from these data depends largely on the ease with which required data can be retrieved from the databases. The volume of the data also poses a storage constraint on the databases; hence these data will need to exist in compressed form on the databases. We concentrate on image data and propose a new paradigm in which a compressed-image database can be searched for its contents. In this paradigm, the need for a separate index is obviated by utilising image compression schemes that can support some form of object search in the compressed domain. The central idea is to store the image in layers of different resolutions and to be able to synthesise an edge image from a subset of the layers. This edge image then constitutes a model of the image that can be used as a searchable index. The implication of this approach is that the index is inherent in the compressed image file and does not occupy any additional storage space, as would be the case with a conventional index. The preliminary results obtained from the system simulated in our experiments indicate the feasibility of the proposed paradigm.
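The index idea can be sketched with generic stand-ins (mean-pool layering, forward-difference edges, an arbitrary threshold) for whatever layered codec is actually used: an edge map synthesised from a coarse layer alone serves directly as the searchable index.
```python
# Layered-index sketch: derive a binary edge map from the coarse layer
# and match queries against it; no separate index is stored.
import numpy as np

def coarse_layer(img, factor=4):
    h, w = (s - s % factor for s in img.shape)
    img = img[:h, :w].astype(float)
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def edge_index(img):
    c = coarse_layer(img)
    gx = np.diff(c, axis=1, prepend=c[:, :1])   # horizontal gradient
    gy = np.diff(c, axis=0, prepend=c[:1, :])   # vertical gradient
    return np.hypot(gx, gy) > 10.0              # binary edge map

def similarity(a, b):
    return np.mean(edge_index(a) == edge_index(b))  # fraction agreeing

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64))
print(similarity(img, img))   # 1.0: an image matches its own index
```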
Citations: 4
Optimal fractal coding is NP-hard
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582049
M. Ruhl, H. Hartenstein
In fractal compression a signal is encoded by the parameters of a contractive transformation whose fixed point (attractor) is an approximation of the original data. Thus fractal coding can be viewed as the optimization problem of finding, in a set of admissible contractive transformations, the transformation whose attractor is closest to a given signal. The standard fractal coding scheme based on the collage theorem produces only a suboptimal solution. We demonstrate by a reduction from MAXCUT that the problem of determining the optimal fractal code is NP-hard. To our knowledge, this is the first analysis of the intrinsic complexity of fractal coding. Additionally, we show that standard fractal coding is not an approximation algorithm for this problem.
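For contrast with the optimal problem shown NP-hard, here is a sketch of the standard collage-theorem scheme on a 1-D signal (block sizes, the contractivity clamp, and the search range are assumptions): it greedily minimizes collage error per range block, which in general yields only a suboptimal attractor.
```python
# Standard collage coding sketch: approximate each range block by a
# scaled, offset, downsampled domain block with minimum collage error.
import numpy as np

def collage_code(signal, r=4):
    code = []
    for i in range(0, len(signal), r):
        rng_blk = signal[i:i + r]
        best = None
        for j in range(0, len(signal) - 2 * r + 1):
            dom = signal[j:j + 2 * r].reshape(r, 2).mean(axis=1)  # 2:1 shrink
            # Least-squares fit: rng_blk ~ s * dom + o.
            A = np.column_stack([dom, np.ones(r)])
            (s, o), *_ = np.linalg.lstsq(A, rng_blk, rcond=None)
            s = np.clip(s, -0.9, 0.9)        # keep the map contractive
            err = np.sum((s * dom + o - rng_blk) ** 2)
            if best is None or err < best[0]:
                best = (err, j, s, o)
        code.append(best[1:])                # (domain index, scale, offset)
    return code

x = np.sin(np.linspace(0, 6, 64))
print(collage_code(x)[:3])
```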
Citations: 52
Generalised locally adaptive DPCM
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582142
T. Seemann, P. Tischer
Summary form only given. In differential pulse code modulation (DPCM) we make a prediction f̂ = Σ a(i)·f(i) of the next pixel using a linear combination of neighbouring pixels f(i). It is possible to have the coefficients a(i) constant over a whole image, but better results can be obtained by adapting the a(i) to the local image behaviour as the image is encoded. One difficulty with present schemes is that they can only produce predictors with positive a(i). This is desirable in the presence of noise, but in regions where the intensity varies smoothly, we require at least one negative coefficient to properly estimate a gradient. However, if we consider the four neighbouring pixels as four local sub-predictors W, N, NW and NE, and the gradient measure as the sum of absolute prediction errors of those sub-predictors within the local neighbourhood, then we can use any sub-predictors we choose, even nonlinear ones. In our experiments, we chose to use three additional linear predictors suited for smooth regions, each having one negative coefficient. Results were computed for three versions of the standard JPEG test set and some 12 bpp medical images.
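A sketch of the selection rule just described, with assumed details (a window of three causal neighbours, ties broken by predictor order, and one planar predictor standing in for the paper's three smooth-region predictors): the four single-pixel sub-predictors compete with a predictor whose negative coefficient handles smooth gradients.
```python
# Locally adaptive DPCM sketch: per pixel, pick the sub-predictor with
# the smallest summed absolute error over a small causal neighbourhood.
import numpy as np

PREDICTORS = [
    lambda W, N, NW, NE: W,            # west
    lambda W, N, NW, NE: N,            # north
    lambda W, N, NW, NE: NW,           # north-west
    lambda W, N, NW, NE: NE,           # north-east
    lambda W, N, NW, NE: W + N - NW,   # planar: one negative coefficient
]

def predict(img):
    h, w = img.shape
    pred = np.zeros_like(img, dtype=float)
    err = np.zeros((len(PREDICTORS), h, w))  # per-predictor error history
    for y in range(1, h):
        for x in range(1, w - 1):
            ctx = (img[y, x-1], img[y-1, x], img[y-1, x-1], img[y-1, x+1])
            # Gradient measure: each predictor's absolute error summed
            # over the causal neighbours W, N and NW.
            scores = [err[k, y, x-1] + err[k, y-1, x] + err[k, y-1, x-1]
                      for k in range(len(PREDICTORS))]
            pred[y, x] = PREDICTORS[int(np.argmin(scores))](*ctx)
            for k, p in enumerate(PREDICTORS):
                err[k, y, x] = abs(float(img[y, x]) - p(*ctx))
    return pred

img = np.add.outer(np.arange(16), np.arange(16)).astype(float)  # smooth ramp
residual = img - predict(img)
print(np.abs(residual[2:, 1:-1]).mean())  # near zero on the smooth ramp
```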
Citations: 43
Fractal color compression in the L*a*b* uniform color space
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582090
I. M. Danciu, J. Hart
Summary form only given. We present comparative results obtained in the context of 24-bit true color image encoding by using searchless vs. search-based fractal compression techniques in a perceptually uniform color space. A pixel in the color space is represented as a vector with each component corresponding to a color channel. The least squares approximation of an image block by an iterated function system (IFS) is adapted to reflect the added color dimensions. To account for the nonlinearity of human visual perception, compression in the L*a*b* uniform color space is proposed. In this color space, two pairs of colors with the same Euclidean distance metric are perceptually almost equally similar or different.
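Why L*a*b* helps can be seen in the standard sRGB (D65) to CIE L*a*b* conversion, sketched below: Euclidean distance (ΔE) in that space approximates perceptual colour difference, so a least-squares fit there minimizes a perceptually meaningful error. The fractal coder itself is not reproduced here.
```python
# Standard sRGB (D65) -> CIE L*a*b* conversion and Euclidean delta-E.
import numpy as np

M = np.array([[0.4124, 0.3576, 0.1805],     # linear RGB -> XYZ (D65)
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = np.array([0.95047, 1.0, 1.08883])   # D65 reference white

def rgb_to_lab(rgb):
    rgb = np.asarray(rgb, dtype=float) / 255.0
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = M @ lin / WHITE
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16,        # L*
                     500 * (f[0] - f[1]),    # a*
                     200 * (f[1] - f[2])])   # b*

def delta_e(c1, c2):
    return np.linalg.norm(rgb_to_lab(c1) - rgb_to_lab(c2))

# Equal RGB steps do not give equal perceptual steps:
print(delta_e([0, 0, 0], [10, 10, 10]),
      delta_e([200, 200, 200], [210, 210, 210]))
```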
Citations: 4
Fast and compact volume rendering in the compressed transform domain
Pub Date : 1997-03-25 DOI: 10.1109/DCC.1997.582050
Sefeng Chen, J. Reif
Potentially, data compression techniques may have a broad impact in computing not only by decreasing storage and communication costs, but also by speeding up computation. For many image processing applications, the use of data compression is so pervasive that we can assume the inputs and outputs are in a compressed domain, and it is intriguing to consider doing computations on the data entirely in the compressed domain. We speed up processing by doing computations, including dot product and convolution on vectors and arrays, in a compressed transform domain. To do this, we make use of sophisticated algebraic techniques for evaluation and interpolation of sparse polynomials. We illustrate the basic methodology by applying these techniques to image processing problems, and in particular to speed up the well known splatting algorithm for volume rendering. The splatting algorithm is one of the most efficient of existing high quality volume rendering algorithms; it takes as input three-dimensional volume sample data of size N³ and outputs an N×N image in O(N³f) time, where f is a parameter known as the footprint size (which often is hundreds of pixels in practice). Assuming that the original sample data and the resulting image are stored in the transform domain and can be lossily compressed by a factor ρ with small error, we show that the rendering of the image can be done entirely in the compressed transform domain in decreased time O(ρN³ log N). Hence we obtain a significant speedup over the splatting algorithm when f ≫ ρ log N.
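The core trick can be sketched with an orthonormal DCT standing in for whatever transform a codec uses: because an orthonormal transform preserves inner products (Parseval), a dot product can be approximated directly from the ρ-fraction of retained coefficients, touching ρN terms instead of N.
```python
# Compressed-domain dot product sketch: keep only the largest
# orthonormal-DCT coefficients and take the dot product over the
# coefficients the two sparse representations share.
import numpy as np
from scipy.fft import dct

def compress(x, rho):
    """Keep the rho-fraction of largest orthonormal-DCT coefficients."""
    c = dct(x, norm="ortho")
    keep = np.argsort(np.abs(c))[-max(1, int(rho * len(x))):]
    return keep, c[keep]                 # sparse representation

def dot_compressed(idx1, val1, idx2, val2):
    common, i1, i2 = np.intersect1d(idx1, idx2, return_indices=True)
    return float(val1[i1] @ val2[i2])    # only overlapping coefficients

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=1024))     # smooth signals compress well
y = np.cumsum(rng.normal(size=1024))
approx = dot_compressed(*compress(x, 0.1), *compress(y, 0.1))
print(approx, float(x @ y))              # close, at a tenth of the terms
```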
Citations: 2