The Lempel-Ziv codes are universal variable-to-fixed length codes that have become virtually standard in practical lossless data compression. For any given source output string from a unifilar Markov source, we upper bound the difference between the number of binary digits needed by the Lempel-Ziv-Welch code (1977, 1978, 1984) to encode the string and the self-information of the string. We use this result to demonstrate that for unifilar Markov sources, the redundancy of encoding the first n letters of the source output with LZW is O((ln n)^{-1}), and we upper bound the exact form of convergence.
{"title":"Redundancy of the Lempel-Ziv-Welch code","authors":"S. Savari","doi":"10.1109/DCC.1997.582011","DOIUrl":"https://doi.org/10.1109/DCC.1997.582011","url":null,"abstract":"The Lempel-Ziv codes are universal variable-to-fixed length codes that have become virtually standard in practical lossless data compression. For any given source output string from a Markov of unifilar source, we upper bound the difference between the number of binary digits needed by the Lempel-Ziv-Welch code (1977, 1978, 1984) to encode the string and the self-information of the string. We use this result to demonstrate that for unifilar, Markov sources, the redundancy of encoding the first n letters of the source output with LZW is O((ln n)/sup -1/), and we upper bound the exact form of convergence.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133526150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The block sorting lossless data compression algorithm (BSLDCA) described by Burrows and Wheeler (1994) has received considerable attention. It achieves compression rates as good as context-based methods such as PPM, but at execution speeds closer to Ziv-Lempel techniques. This paper describes the lexical permutation sorting algorithm (LPSA) and its theoretical basis, and delineates its relationship to the BSLDCA. In particular, we describe how the BSLDCA can be reduced to the LPSA and show how the LPSA could give better results than the BSLDCA when transmitting permutations. We also introduce a new technique, inversion frequencies, and show that it does as well as move-to-front (MTF) coding when there is locality of reference in the data.
{"title":"Block sorting and compression","authors":"Z. Arnavut, S. Magliveras","doi":"10.1109/DCC.1997.582009","DOIUrl":"https://doi.org/10.1109/DCC.1997.582009","url":null,"abstract":"The block sorting lossless data compression algorithm (BSLDCA) described by Burrows and Wheeler (1994) has received considerable attention. It achieves as good compression rates as context-based methods, such as PPM, but at execution speeds closer to Ziv-Lempel techniques. This paper, describes the lexical permutation sorting algorithm (LPSA), its theoretical basis, and delineates its relationship to the BSLDCA. In particular we describe how the BSLDCA can be reduced to the LPSA and show how the LPSA could give better results than the BSLDCA when transmitting permutations. We also introduce a new technique, inversion frequencies, and show that it does as well as move-to-front (MTF) coding when there is locality of reference in the data.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133139466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. The authors consider the following multiterminal data compression problem. An image is to be compressed and transmitted to a destination over a set of unreliable links. If only one of the links is functional (i.e. data transmitted over all other links is completely lost), we would like to be able to reconstruct a low resolution version of our original image. If more links are functional, we would like the image quality to improve. Finally, with all links functioning, we desire a high resolution, possibly lossless reconstruction of the original image. Motivating applications for this problem are briefly discussed. We present conceptually simple and computationally efficient schemes that are useful in these applications. The schemes can be implemented as simple pre-processing and post-processing operations on existing image compression algorithms.
{"title":"Multiple descriptions encoding of images","authors":"P. Subrahmanya, T. Berger","doi":"10.1109/DCC.1997.582138","DOIUrl":"https://doi.org/10.1109/DCC.1997.582138","url":null,"abstract":"Summary form only given. The authors consider the following multiterminal data compression problem. An image is to be compressed and transmitted to a destination over a set of unreliable links. If only one of the links is functional (i.e. data transmitted over all other links is completely lost), we would like to be able to reconstruct a low resolution version of our original image. If more links are functional, we would like the image quality to improve. Finally, with all links functioning, we desire a high resolution, possibly lossless reconstruction of the original image. Motivating applications for this problem are briefly discussed. We present conceptually simple and computationally efficient schemes that are useful in these applications. The schemes can be implemented as simple pre-processing and post-processing operations on existing image compression algorithms.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115426895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effros and Chou (see Proceedings of the IEEE International Conference on Image Processing, Washington, DC, 1995) introduce a two-stage universal transform code called the weighted universal transform code (WUTC). By replacing JPEG's single, non-optimal transform code with a collection of optimal transform codes, the WUTC achieves significant performance gains over JPEG. The computational and storage costs of that performance gain are effectively the computation and storage required to operate and store a collection of transform codes rather than a single transform code. We consider two complexity- and storage-constrained variations of the WUTC. The complexity and storage of the algorithm are controlled by constraining the order of the bases. In the first algorithm, called the fast WUTC (FWUTC), complexity is controlled by controlling the maximum order of each transform. On a sequence of combined text and gray-scale images, the FWUTC achieves performance comparable to the WUTC. In the second algorithm, called the jointly optimized fast WUTC (JWUTC), the complexity is controlled by controlling the average order of the transforms. On the same data set and for the same complexity, the performance of the JWUTC always exceeds the performance of the FWUTC. The JWUTC and FWUTC algorithms are interesting both for their complexity and storage savings in data compression and for the insights that they lend into the choice of appropriate fixed- and variable-order bases for image representation.
{"title":"Fast weighted universal transform coding: toward optimal, low complexity bases for image compression","authors":"M. Effros","doi":"10.1109/DCC.1997.582021","DOIUrl":"https://doi.org/10.1109/DCC.1997.582021","url":null,"abstract":"Effros and Chou (see Proceedings of the IEEE International Conference on Image Processing, Washington, DC, 1995) introduce a two-stage universal transform code called the weighted universal transform code (WUTC). By replacing JPEG's single, non-optimal transform code with a collection of optimal transform codes, the WUTC achieves significant performance gains over JPEG. The computational and storage costs of that performance gain are effectively the computation and storage required to operate and store a collection of transform codes rather than a single transform code. We consider two complexity- and storage-constrained variations of the WUTC. The complexity and storage of the algorithm are controlled by constraining the order of the bases. In the first algorithm, called the fast WUTC (FWUTC), complexity is controlled by controlling the maximum order of each transform. On a sequence of combined text and gray-scale images, the FWUTC achieves performance comparable to the WUTC. In the second algorithm, called the jointly optimized fast WUTC (JWUTC), the complexity is controlled by controlling the average order of the transforms. On the same data set and for the same complexity, the performance of the JWUTC always exceeds the performance of the FWUTC. The JWUTC and FWUTC algorithm are interesting both for their complexity and storage savings in data compression and for the insights that they lend into the choice of appropriate fixed- and variable-order bases for image representation.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123963374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. Research on text compression has mainly concerned documentation applications; other applications have seldom been considered. Significant efforts have previously been made to increase both the data capacity and the information density of bar code symbologies, and the results of these efforts created the formats of 2D bar codes. We take PDF417 (Pavlidis et al. 1992), developed by Symbol Technologies, as an example; it is the most popular of the 2D bar code symbologies. However, the storage capacity of PDF417 has limited its wider application. Here, we propose a text compression technique with a back-searching algorithm and new storage protocols. Studies on how a word-based multiple-dictionary text compression technique can be used to increase the storage capacity of a 2D bar code are described. A hashing function is also described to speed up the search of the text. The proposed technique is particularly useful for database retrieval applications. For data stored in 2D bar codes in constrained forms, such as part numbers, locations, names and references, the compression ratio can be as high as 2 because the hit ratio can reach 100%. The decoder need not be complex, as it only has to distinguish 'light' and 'dark'. To make the dictionaries more 'intelligent', a sub-dictionary is proposed which allows the encoded text to be more independent.
{"title":"Word based multiple dictionary scheme for text compression with application to 2D bar code","authors":"K. Ng, L. Cheng","doi":"10.1109/DCC.1997.582120","DOIUrl":"https://doi.org/10.1109/DCC.1997.582120","url":null,"abstract":"Summary form only given. Research on text compression mainly concerns documentation applications; it has seldomly considered other applications. Significant efforts have previously been made to increase both the data capacity and the information density of bar code symbologies. The results of these efforts created the formats of 2D bar codes. We take PDF417 (Pavlidis et al. 1992) developed by Symbol Technologies as a example. PDF417 is the most popular of the 2D bar code symbologies. However the storage capacity in PDF417 has limited its wider application. Here, we propose a text compression technique with the back searching algorithm and new storage protocols. Studies on how a word-based multiple-dictionary text compression technique can be used to increase the storage capacity in a 2D bar code are described. In order to speed up the search of the text, a hashing function is also described. For application in data base retrieval the proposed technique is particularly useful. For data stored in 2D bar codes which are in the form of limited forms such as part numbers, location, name and reference, the compression ratio can be as high as 2 because the hit ratio can be 100%. For the decoder design, the complexity need not be complex as the decoder just requires to know the 'light' and 'dark'. To let the dictionaries become more 'intelligent', a sub-dictionary is proposed which allows the encoded text to be more independent.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128697763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. There is an enormous amount of multimedia data including images, video, speech, audio and text, distributed among the various computer nodes on the Internet. The extent to which a user will be able to derive useful information from these data depends largely on the ease with which required data can be retrieved from the databases. The volume of the data also poses a storage constraint on the databases; hence these data will need to exist in compressed form on the databases. We concentrate on image data and propose a new paradigm in which a compressed-image database can be searched for its contents. In this paradigm, the need for a separate index is obviated by utilising image compression schemes that can support some form of object search in the compressed domain. The central idea is to store the image in layers of different resolutions and to be able to synthesise an edge image from a subset of the layers. This edge image then constitutes a model of the image that can be used as a searchable index. The implication of this approach is that the index is inherent in the compressed image file and does not occupy any additional storage space as would be the case in a conventional index. The preliminary results obtained from the system simulated in our experiments indicate the feasibility of the proposed paradigm.
{"title":"Content-based retrieval from compressed-image databases","authors":"P. Ogunbona, P. Sangassapaviriya","doi":"10.1109/DCC.1997.582126","DOIUrl":"https://doi.org/10.1109/DCC.1997.582126","url":null,"abstract":"Summary form only given. There is an enormous amount of multimedia data including images, video, speech, audio and text, distributed among the various computer nodes on the Internet. The extent to which a user will be able to derive useful information from these data depends largely on the ease with which required data can be retrieved from the databases. The volume of the data also poses a storage constraint on the databases; hence these data will need to exist in the compressed form on the databases. We concentrate on image data and propose a new paradigm in which a compressed-image database can be searched for its contents. In this paradigm, the need for a separate index is obviated by utilising image compression schemes that can support some form of object search in the compressed be domain. The central idea is to store the image in layers of different resolutions and to be able to synthesise an edge image from a subset of the layers. This edge image then constitutes a model of the image that can be used as a searchable index. The implication of this approach is that the index is inherent in the compressed image file and does not occupy any additional storage space as would be the case in a conventional index. The preliminary results obtained from the system simulated in our experiments indicate the feasibility of the proposed paradigm.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128840121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In fractal compression a signal is encoded by the parameters of a contractive transformation whose fixed point (attractor) is an approximation of the original data. Thus fractal coding can be viewed as the optimization problem of finding, in a set of admissible contractive transformations, the transformation whose attractor is closest to a given signal. The standard fractal coding scheme based on the collage theorem produces only a suboptimal solution. We demonstrate by a reduction from MAXCUT that the problem of determining the optimal fractal code is NP-hard. To our knowledge, this is the first analysis of the intrinsic complexity of fractal coding. Additionally, we show that standard fractal coding is not an approximation algorithm for this problem.
{"title":"Optimal fractal coding is NP-hard","authors":"M. Ruhl, H. Hartenstein","doi":"10.1109/DCC.1997.582049","DOIUrl":"https://doi.org/10.1109/DCC.1997.582049","url":null,"abstract":"In fractal compression a signal is encoded by the parameters of a contractive transformation whose fixed point (attractor) is an approximation of the original data. Thus fractal coding can be viewed as the optimization problem of finding in a set of admissible contractive transformations the transformation whose attractor is closest to a given signal. The standard fractal coding scheme based on the collage theorem produces only a suboptimal solution. We demonstrate by a reduction from MAXCUT that the problem of determining the optimal fractal code is NP-hard. To our knowledge, this is the first analysis of the intrinsic complexity of fractal coding. Additionally, we show that standard fractal coding is not an approximating algorithm for this problem.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125267692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. In differential pulse code modulation (DPCM) we form a prediction f̂ = Σ_i a(i) f(i) of the next pixel as a linear combination of neighbouring pixels f(i). It is possible to keep the coefficients a(i) constant over a whole image, but better results can be obtained by adapting the a(i) to the local image behaviour as the image is encoded. One difficulty with present schemes is that they can only produce predictors with positive a(i). This is desirable in the presence of noise, but in regions where the intensity varies smoothly, we require at least one negative coefficient to properly estimate a gradient. However, if we consider the four neighbouring pixels as four local sub-predictors W, N, NW and NE, and take the gradient measure to be the sum of absolute prediction errors of each sub-predictor within the local neighbourhood, then we can use any sub-predictors we choose, even nonlinear ones. In our experiments, we chose to use three additional linear predictors suited to smooth regions, each having one negative coefficient. Results were computed for three versions of the standard JPEG test set and some 12 bpp medical images.
{"title":"Generalised locally adaptive DPCM","authors":"T. Seemann, P. Tischer","doi":"10.1109/DCC.1997.582142","DOIUrl":"https://doi.org/10.1109/DCC.1997.582142","url":null,"abstract":"Summary form only given. In differential pulse code modulation (DPCM) we make a prediction f/spl circ/=/spl Sigma/a(i)-f(i) of the next pixel using a linear combination of neighbouring pixels f(i). It is possible to have the coefficients a(i)s constant over a whole image, but better results can be obtained by adapting the a(i)s to the local image behaviour as the image is encoded. One difficulty with present schemes is that they can only produce predictors with positive a(i)s. This is desirable in the presence of noise, but in regions where the intensity varies smoothly, we require at least one negative coefficient to properly estimate a gradient. However, if we consider the four neighbouring pixels as four local sub-predictors W, N, NW and NE, and the gradient measure as the sum of absolute prediction errors of those sub-predictors within the local neighbourhood, then we can use any sub-predictors we choose, even nonlinear ones. In our experiments, we chose to use three additional linear predictors suited for smooth regions, each having one negative coefficient. Results were computed for three versions of the standard JPEG test set and some 12 bpp medical images.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124034394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. We present comparative results obtained in the context of 24-bit true color image encoding using searchless vs. search-based fractal compression techniques in a perceptually uniform color space. A pixel in the color space is represented as a vector with each component corresponding to a color channel. The least squares approximation of an image block by an iterated function system (IFS) is adapted to reflect the added color dimensions. To account for the nonlinearity of human visual perception, compression in the L*a*b* uniform color space is proposed. In this color space, two pairs of colors separated by the same Euclidean distance are perceptually almost equally similar or different.
{"title":"Fractal color compression in the L*a*b* uniform color space","authors":"I. M. Danciu, J. Hart","doi":"10.1109/DCC.1997.582090","DOIUrl":"https://doi.org/10.1109/DCC.1997.582090","url":null,"abstract":"Summary form only given. We present comparative results obtained in the context of 24-bit true color image encoding by using searchless vs. search-based fractal compression techniques in a perceptually uniform color space. A pixel in the color space is represented as a vector with each component corresponding to a color channel. The least squares approximation of an image block by an iterated function system (IFS) is adapted to reflect the added color dimensions. To account for the nonlinearity of the human visual perception, compression in the L/sup */a/sup */b/sup */ uniform color space is proposed. In this color space, two pairs of colors with the same Euclidean distance metric are perceptually almost equally similar or different.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128788480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Potentially, data compression techniques may have a broad impact in computing not only by decreasing storage and communication costs, but also by speeding up computation. For many image processing applications, the use of data compression is so pervasive that we can assume the inputs and outputs are in a compressed domain, and it is intriguing to consider doing computations on the data entirely in the compressed domain. We speed up processing by doing computations, including dot product and convolution on vectors and arrays, in a compressed transform domain. To do this, we make use of sophisticated algebraic techniques for evaluation and interpolation of sparse polynomials. We illustrate the basic methodology by applying these techniques to image processing problems, and in particular to speed up the well known splatting algorithm for volume rendering. The splatting algorithm is one of the most efficient of existing high quality volume rendering algorithms; it takes as input three-dimensional volume sample data of size N^3 and outputs an N×N image in O(N^3 f) time, where f is a parameter known as the footprint size (which is often hundreds of pixels in practice). Assuming that the original sample data and the resulting image are stored in the transform domain and can be lossily compressed by a factor ρ with small error, we show that the rendering of the image can be done entirely in the compressed transform domain in decreased time O(ρ N^3 log N). Hence we obtain a significant speedup over the splatting algorithm when f ≫ ρ log N.
{"title":"Fast and compact volume rendering in the compressed transform domain","authors":"Sefeng Chen, J. Reif","doi":"10.1109/DCC.1997.582050","DOIUrl":"https://doi.org/10.1109/DCC.1997.582050","url":null,"abstract":"Potentially, data compression techniques may have a broad impact in computing not only by decreasing storage and communication costs, but also by speeding up computation. For many image processing applications, the use of data compression is so pervasive that we can assume the inputs and outputs are in a compressed domain, and it is intriguing to consider doing computations on the data entirely in the compressed domain. We speed up processing by doing computations, including dot product and convolution on vectors and arrays, in a compressed transform domain. To do this, we make use of sophisticated algebraic techniques for evaluation and interpolation of sparse polynomials. We illustrate the basic methodology by applying these techniques to image processing problems, and in particular to speed up the well known splatting algorithm for volume rendering. The splatting algorithm is one of the most efficient of existing high quality volume rendering algorithms; it takes as input three dimensional volume sample data of size N/sup 3/ and outputs an N/spl times/N image in O(N/sup 3/f) time, where f is a parameter known as footprint size (which often is hundreds of pixels in practice). Assuming that the original sample data and the resulting image are stored in the transform domain and can be lossily compressed by a factor /spl rho/ with small error, we show that the rendering of the image can be done entirely in the compressed transform domain in decreased time O(/spl rho/N/sup 3/ log N). Hence we obtain a significant speedup over the splatting algorithm when f/spl Gt//spl rho/ log N.","PeriodicalId":403990,"journal":{"name":"Proceedings DCC '97. Data Compression Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133748937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}