Yuhua Bai and T. Cooklev, "An improved method for lossless data compression," Proceedings of the Data Compression Conference (DCC 2005), p. 451, doi:10.1109/DCC.2005.14.
Summary form only given. This paper describes an improved lossless data compression scheme. The proposed scheme contains three innovations: first, an efficient algorithm is introduced to decide when and how to switch from transparent mode to compressed mode; second, a temporary buffer is introduced at the encoder; and third, an approach to decide when to discard the entire dictionary is advanced. Under the proposed method the changes are confined to the transmitter, and any V.42bis-compatible receiver can be used as a decoder; therefore, devices using V.42bis can adopt the proposed method after a firmware upgrade. To decide when to switch modes, we introduce two look-ahead buffers, $B_C$ and $B_T$, one for each mode of operation of the encoder. Regardless of which mode the encoder is in, the output of both modes of operation is written to the corresponding look-ahead buffer. The simulation results demonstrate that the proposed method achieves higher compression ratios in most cases. Another goal of this work is to analyze the improvement obtained after the dictionary is reset and to determine when is a good time to discard the dictionary. Our results for different file types consistently show a compression ratio of 0.87 before a dictionary reset and 1.07 after, an increase of 22.6%. It is noted that V.44 is a newer compression standard based on a different compression algorithm; while our results do not apply directly to V.44, they may be used after appropriate modifications.

Yushi Shen, P. Cosman, and L. Milstein, "Video coding for a time varying tandem channel with feedback," Proceedings of the Data Compression Conference (DCC 2005), p. 480, doi:10.1109/DCC.2005.95.
Summary form only given. A robust scheme for the efficient transmission of packet video over a tandem wireless Internet channel is extended to a time-varying scenario with a feedback channel. This channel is assumed to have bit errors (due to noise and fading on the wireless portion of the channel) and packet erasures (due to congestion on the wired portion). Simulation results showed that refined estimation can dramatically improve the performance for varying channel conditions, and that combined feedback of both channel conditions and ACK/NACK information can further improve system performance compared with the feedback of just one type of information.

B. Usevitch, "JPEG2000 compliant lossless coding of floating point data," Proceedings of the Data Compression Conference (DCC 2005), p. 484, doi:10.1109/DCC.2005.49.
Summary form only given. Many scientific applications require that image data be stored in floating point format due to the large dynamic range of the data. These applications pose a problem if the data needs to be compressed, since modern image compression standards, such as JPEG2000, are only defined to operate on fixed point or integer data. This paper proposes straightforward extensions to the JPEG2000 image compression standard which allow for the efficient coding of floating point data. The extensions are based on the idea of representing floating point values as "extended integers", and they maintain desirable properties of JPEG2000, such as scalable embedded bit streams and rate distortion optimality. Like JPEG2000, the proposed methods can be applied to both lossy and lossless compression; however, the discussion in this paper focuses on, and the test results are limited to, the lossless case. Test results show that one of the proposed lossless methods improves upon the compression ratio of standard methods such as gzip by an average of 16%.

Qiaofeng Yang and S. Lonardi, "A compression-boosting transform for 2D data," Proceedings of the Data Compression Conference (DCC 2005), p. 492, doi:10.1109/DCC.2005.2.
In this paper, we present an invertible transform for 2D data whose objective is to reorder the matrix to improve its (lossless) compression at later stages. Given a binary matrix, the transform first searches for the largest uniform submatrix, that is, a submatrix composed solely of the same symbol (either 0 or 1) and induced by a subset of rows and columns (which are not necessarily contiguous). Then, the rows and the columns are reordered such that the uniform submatrix is moved to the upper-left corner of the matrix. The transform is recursively applied to the rest of the matrix. The recursion stops when the partition produces a matrix smaller than a predetermined threshold. The inverse transform (decompression) is fast and can be implemented in time linear in the size of the matrix. The effects of the transform on the compressibility of 2D data are studied empirically by comparing the performance of gzip and bzip2 before and after application of the transform on several inputs. The preliminary results show that the transform boosts compression.

Li Wang and G. Shamir, "BWT based universal lossless source controlled channel decoding with low density parity check codes," Proceedings of the Data Compression Conference (DCC 2005), p. 487, doi:10.1109/DCC.2005.24.
Summary form only given. In many channel decoding applications, redundancy is left in the channel coded data. A new method for utilizing this redundancy in channel decoding is proposed. The method is based on the Burrows-Wheeler transform (BWT) and on universal compression techniques for piecewise stationary memoryless sources (PSMS), and is applied to regular low-density parity-check (LDPC) codes. Two settings are proposed. In the first, the BWT-PSMS loop is in the decoder, while in the second, the rearrangement of the data is performed with the BWT before channel encoding, and then the decoder is designed for extracting statistics in a PSMS. After the last iteration, the data is reassembled with the inverse BWT. Simulations show that the bit error rate performance of the new method (in either setting) is almost as good as genie-aided decoding with perfect knowledge of the statistics.

Markus Püschel and J. Kovacevic, "Real, tight frames with maximal robustness to erasures," Proceedings of the Data Compression Conference (DCC 2005), pp. 63-72, doi:10.1109/DCC.2005.77.
Motivated by the use of frames for robust transmission over the Internet, we present a first systematic construction of real tight frames with maximum robustness to erasures. We approach the problem in steps: we first construct maximally robust frames using polynomial transforms. We then add tightness as an additional property with the help of orthogonal polynomials. Finally, we impose the last requirement of equal norm and construct, to the best of our knowledge, the first real, tight, equal-norm frames maximally robust to erasures.

A. Wakatani, "Parallelization of VQ codebook generation by two algorithms: parallel LBG and aggressive PNN [image compression applications]," Proceedings of the Data Compression Conference (DCC 2005), p. 486, doi:10.1109/DCC.2005.69.
Summary form only given. We evaluate two parallel algorithms for codebook generation in VQ compression: parallel LBG and aggressive PNN. Parallel LBG is based on the LBG algorithm with the K-means method; its cost mainly consists of a) the computation part, b) the communication part, and c) the update part. Aggressive PNN is a parallelized version of the PNN (pairwise nearest neighbor) algorithm, whose cost mainly consists of a) the computation part, b) the communication part, and c) the merge part. We measured the speedups and elapsed times of both algorithms on a PC cluster system. When the quality of images compressed by both algorithms is the same, the number of training vectors required by aggressive PNN is much less than that required by parallel LBG, and aggressive PNN is superior in terms of elapsed time.

F. Ghido, "QLFC - a compression algorithm using the Burrows-Wheeler transform," Proceedings of the Data Compression Conference (DCC 2005), p. 459, doi:10.1109/DCC.2005.75.
Summary form only given. In this paper, we propose a novel approach for the second step of the Burrows-Wheeler compression algorithm, based on the idea that the probabilities of events are not continuous valued, but are rather quantized with respect to a specific class of base functions. The first pass of encoding transforms the input sequence $x$ into the sequence $\tilde{x}$. The second pass models and codes $\tilde{x}$ using entropy coding. The entropy decoding, modeling, and context updating used for decoding $\tilde{x}$ are the same as those used for encoding. We have proved that the quantized local frequency transform is optimal in the case of binary and ternary alphabet memoryless sources, showing that $x$ and $\tilde{x}$ have the same entropy; for larger alphabets, we verified this by simulation.

M. Golin and Hyeon-Suk Na, "Generalizing the Kraft-McMillan inequality to restricted languages," Proceedings of the Data Compression Conference (DCC 2005), pp. 163-172, doi:10.1109/DCC.2005.42.
Let $\ell_1, \ell_2, \ldots, \ell_n$ be a (possibly infinite) sequence of nonnegative integers and $\Sigma$ some $D$-ary alphabet. The Kraft inequality states that $\ell_1, \ell_2, \ldots, \ell_n$ are the lengths of the words in some prefix(-free) code over $\Sigma$ if and only if $\sum_{i=1}^{n} D^{-\ell_i} \le 1$. Furthermore, the code is exhaustive if and only if equality holds. The McMillan inequality states that if $\ell_1, \ell_2, \ldots, \ell_n$ are the lengths of the words in some uniquely decipherable code, then the same condition holds. In this paper we examine how the Kraft-McMillan conditions for the existence of a prefix or uniquely decipherable code change when the code is not only required to be prefix but all of the codewords are restricted to belong to a given specific language $L$. For example, $L$ might be all words that end in a particular pattern or, if $\Sigma$ is binary, might be all words in which the number of zeros equals the number of ones.

Urs Bischoff and J. Rossignac, "TetStreamer: compressed back-to-front transmission of Delaunay tetrahedra meshes," Proceedings of the Data Compression Conference (DCC 2005), pp. 93-102, doi:10.1109/DCC.2005.85.
We use the abbreviations tet and tri for tetrahedron and triangle. TetStreamer encodes a Delaunay tet mesh in a back-to-front visibility order and streams it from a server to a client (volumetric visualizer). During compression, the server performs the view-dependent back-to-front sorting of the tets by identifying and deactivating one free tet at a time. A tet is free when all its back faces are on the sheet. The sheet is a tri mesh separating active and inactive tets. It is initialized with the back-facing boundary of the mesh, compressed using EdgeBreaker, and transmitted first. It is maintained by both the server and the client and advanced towards the viewer, passing one free tet at a time. The client receives a compressed bit stream indicating where to attach free tets to the sheet. It renders each free tet and updates the sheet by either flipping a concave edge, removing a concave valence-3 vertex, or inserting a new vertex to split a tri. TetStreamer compresses the connectivity of the whole tet mesh to an average of about 1.7 bits per tet. The footprint (in-core memory required by the client) needs only to hold the evolving sheet, which is a small fraction of the storage that would be required by the entire tet mesh. Hence, TetStreamer permits us to receive, decompress, and visualize or process very large meshes on clients with a small in-core memory. Furthermore, it permits us to use volumetric visualization techniques, which require that the mesh be processed in view-dependent back-to-front order, at no extra memory, performance or transmission cost.
