We present a method for utilizing soft information in decoding of variable length codes (VLCs). When compared with traditional VLC decoding, which is performed using "hard" input bits and a state machine, soft-input VLC decoding offers improved performance in terms of packet and symbol error rates. Soft-input VLC decoding is free from the risk, encountered in hard decision VLC decoders in noisy environments, of terminating the decoding in an unsynchronized state, and it offers the possibility to exploit a priori knowledge, if available, of the number of symbols contained in the packet.
{"title":"Utilizing soft information in decoding of variable length codes","authors":"Jiangtao Wen, J. Villasenor","doi":"10.1109/DCC.1999.755662","DOIUrl":"https://doi.org/10.1109/DCC.1999.755662","url":null,"abstract":"We present a method for utilizing soft information in decoding of variable length codes (VLCs). When compared with traditional VLC decoding, which is performed using \"hard\" input bits and a state machine, soft-input VLC decoding offers improved performance in terms of packet and symbol error rates. Soft-input VLC decoding is free from the risk, encountered in hard decision VLC decoders in noisy environments, of terminating the decoding in an unsynchronized state, and it offers the possibility to exploit a priori knowledge, if available, of the number of symbols contained in the packet.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"230 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130954858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. We present our work on rate-distortion (RD) optimized spatial scalability for MC-DCT based video coding. Extending our work on RD optimized coding from the single-layered to the multi-layered framework, we incorporate the additional inter-layer coding dependencies present in a multi-layered framework into the set of permissible coding parameters. We employ the Lagrangian rate-distortion functional, as it provides an elegant framework for determining the optimal choice of motion vectors, coding modes, and quantized coefficient levels by weighting a distortion term against a resulting rate term. We obtain a simple relationship between the Lagrangian parameter λ, which controls the rate-distortion tradeoff, and the reference and enhancement layer quantization parameters QP, allowing the RD optimized framework to work easily in conjunction with rate control techniques that control the average bit rate by adjusting the quantization parameters. We then incorporate these relationships into our coder and generate two-layer bit streams with both the non-RD optimized coder and the RD optimized coder. We also generate RD optimized single-layer bit streams with the same resolution as the second layer of the two-layer bit streams. For the two-layer bit streams, we obtain a 0.6 to 1.4 dB improvement in PSNR by using RD optimization in both the base and enhancement layers. Compared to the single-layer bit stream, RD optimization in both layers reduces the PSNR penalty from 1.1-1.7 dB to 0.3-0.5 dB.
{"title":"Rate-distortion optimized spatial scalability for DCT-based video coding","authors":"M. Gallant, F. Kossentini","doi":"10.1109/DCC.1999.785682","DOIUrl":"https://doi.org/10.1109/DCC.1999.785682","url":null,"abstract":"Summary form only given. We present our work on rate-distortion (RD) optimized spatial scalability for MC-DCT based video coding. Extending our work on RD optimized coding from the single layered to the multi-layered framework, we incorporate the additional inter-layer coding dependencies present in a multilayered framework into the set of permissible coding parameters. We employ the Lagrangian rate-distortion functional as it provides an elegant framework for determining the optimal choice of motion vectors, coding modes, and quantized coefficient levels by weighting a distortion term against a resulting rate term. We obtain a simple relationship between the Lagrangian parameter /spl lambda/, that controls rate-distortion tradeoffs, and the reference and enhancement layer quantization parameters QP, to allow the RD optimized framework to work easily in conjunction with rate control techniques that control the average bit rate by adjusting the quantization parameters. We then incorporate these relationships into our coder and generate two-layer bit streams with both the non-RD optimized coder and the RD optimized coder. We also generate RD optimized single-layer bit streams with the same resolution as the second layer of the two-layer bit streams. For the two-layer bit streams, we obtain a 0.6 to 1.4 dB improvement in PSNR by using RD optimization in both the base and enhancement layers. Compared to the single-layer bit stream, RD optimization in both the base and enhancement layers causes the decrease in PSNR to be reduced from 1.1 to 1.7 dB, to 0.3 to 0.5 dB.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"699 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126946095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study the problem of memory-efficient scalable image compression and investigate some tradeoffs in the complexity versus coding efficiency space. The focus is on a low-complexity algorithm centered around the use of sub-bit-planes, scan-causal modeling, and a simplified arithmetic coder. This algorithm approaches the lowest possible memory usage for scalable wavelet-based image compression and demonstrates that the generation of a scalable bit-stream is not incompatible with a low-memory architecture.
{"title":"Memory-efficient scalable line-based image coding","authors":"E. Ordentlich, D. Taubman, M. Weinberger, G. Seroussi, M. Marcellin","doi":"10.1109/DCC.1999.755671","DOIUrl":"https://doi.org/10.1109/DCC.1999.755671","url":null,"abstract":"We study the problem of memory-efficient scalable image compression and investigate some tradeoffs in the complexity versus coding efficiency space. The focus is on a low-complexity algorithm centered around the use of sub-bit-planes, scan-causal modeling, and a simplified arithmetic coder. This algorithm approaches the lowest possible memory usage for scalable wavelet-based image compression and demonstrates that the generation of a scalable bit-stream is not incompatible with a low-memory architecture.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127035446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an algorithm based on arithmetic coding that allows decompression to start at any point in the compressed file. This random access requirement poses some restrictions on the implementation of arithmetic coding and on the model used. Our main application area is executable code compression for computer systems where machine instructions are decompressed on-the-fly before execution. We focus on the decompression side of arithmetic coding and we propose a fast decoding scheme based on finite state machines. Furthermore, we present a method to decode multiple bits per cycle, while keeping the size of the decoder small.
{"title":"Random access decompression using binary arithmetic coding","authors":"H. Lekatsas, W. Wolf","doi":"10.1109/DCC.1999.755680","DOIUrl":"https://doi.org/10.1109/DCC.1999.755680","url":null,"abstract":"We present an algorithm based on arithmetic coding that allows decompression to start at any point in the compressed file. This random access requirement poses some restrictions on the implementation of arithmetic coding and on the model used. Our main application area is executable code compression for computer systems where machine instructions are decompressed on-the-fly before execution. We focus on the decompression side of arithmetic coding and we propose a fast decoding scheme based on finite state machines. Furthermore, we present a method to decode multiple bits per cycle, while keeping the size of the decoder small.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"326 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123238571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. 16-bit Asian language texts are difficult to compress using conventional 8-bit sampling text compression schemes. Recently, word-based text compression methods have been studied with the intention of compressing Japanese and Chinese texts individually. In order to compress a large number of small Japanese documents, such as groupware and e-mail, we applied a semi-adaptive word-based method to Japanese at DCC'98. To further enable multilingual text compression, we also applied a static word-based method to both Japanese and Chinese texts and evaluated its compression characteristics and performance using computer simulation.
{"title":"Application of a word-based text compression method to Japanese and Chinese texts","authors":"S. Yoshida, T. Morihara, H. Yahagi, Noriko Itani","doi":"10.1109/DCC.1999.785718","DOIUrl":"https://doi.org/10.1109/DCC.1999.785718","url":null,"abstract":"Summary form only given. 16-bit Asian language texts are difficult to compress using conventional 8-bit sampling text compression schemes. Recently the word-based text compression method has been studied with the intention of compressing Japanese and Chinese texts individually. In order to compress a large number of small-sized Japanese documents, such as groupware and E-mail, we applied a semi-adaptive word-based method to Japanese at DCC'98. To further enable multilingual text compression, we also applied a static word-based method to both the Japanese and Chinese texts and evaluated compression characteristics and performance using a computer simulation.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128965983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. This paper addresses an approach for handling SAR and US images with different statistical properties. The approach is based on an image-structure/speckle-texture decomposition. The image model in this case views an image X(i,j) as the combination of two components: an image structure S(i,j) and a speckle texture T(i,j). An octave-band subband decomposition is performed on the data, and the structure is separated from the speckle by applying soft-thresholding to the high-frequency subband coefficients. The coefficients remaining after this operation are used to synthesize S(i,j), while the complement set of coefficients is a representation of T(i,j). Once the two components are obtained, they are coded separately. S(i,j) has a low-frequency characteristic similar to natural images and is suitable for conventional compression techniques; in the proposed algorithm we use a quadtree coder for S(i,j). The speckle component is parametrized using a texture model. Two texture models have been tested: a 2D-AR model and the pyramid-based algorithm proposed by Heeger and Bergen. For the latter, a compact parametrization of the texture is achieved by modeling the histograms of T(i,j) and its pyramid subbands as generalized Gaussians. The synthesized speckle is visually similar to the original for both models. The image is reconstructed by adding the decoded structure and the synthesized speckle. The subjective quality gains obtained from the proposed approach are evident. We performed a subjective test following CCIR Recommendation 500-4 for image quality assessment; several codecs were included in the tests.
{"title":"Compression of SAR and ultrasound imagery using texture models","authors":"J. Rosiles, Mark J. T. Smith","doi":"10.1109/DCC.1999.785704","DOIUrl":"https://doi.org/10.1109/DCC.1999.785704","url":null,"abstract":"Summary form only given. This paper addresses an approach for handling SAR and US images with different statistical properties. The approach is based on a image-structure/speckle-texture decomposition. The image model in this case views an image X(i,j) as the combination of two components: an image structure S(i,j) and a speckle texture T(i,j). An octave-band subband decomposition is performed on the data and the structure is separated from the speckle by applying soft-thresholding to the high frequency subband coefficients. The coefficients remaining after the operation are used to synthesize S(i,j) while the complement set of coefficients is a representation of T(i,j). Once the two components are obtained, they are coded separately. S(i,j) has a low frequency characteristic similar to natural images and is suitable for conventional compression techniques. In the proposed algorithm we use a quadtree coder for S(i,j). The speckle component is parametrized using a texture model. Two texture models have been tested: a 2D-AR model and the pyramid-based algorithm proposed by Heeger and Bergen. For the latter, a compact parametrization of the texture is achieved by modeling the histograms of T(i,j) and its pyramid subbands as generalized Gaussians. The synthesized speckle is visually similar to the original for both models. The image is reconstructed by adding together the decoded structure and the synthesized speckle. The subjective quality gains obtained from the proposed approach are evident. We performed a subjective test, which followed the CCIR recommendation 500-4 for image quality assessment. Several codecs were included in the tests.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116710535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The proposed novel lossy image compression approach represents an image as segments composed of variable-sized right-angled triangles. The recursive triangular partitioning proposed is shown to be more efficient than square partitioning. A novel and economical blending model (similar to Bezier polynomials) is applied to represent each triangular surface. A framework to design blending surfaces for triangular regions is presented. This economical model allows coefficient (control point) sharing among neighboring triangles. Sharing reduces blockiness compared to block-based techniques. The technique is especially appealing for images with smooth transitions. Compression and visual quality results compare favorably against a wavelet codec using a decomposition into seven bands. As an alternative, a greedy algorithm based on priority queues is proposed to further reduce the entropy of the control-point bitstream. This optimization step achieves better performance in a rate-distortion (R-D) sense than uniform quantization of the control points.
{"title":"A blending model for efficient compression of smooth images","authors":"J. Mayer","doi":"10.1109/DCC.1999.755672","DOIUrl":"https://doi.org/10.1109/DCC.1999.755672","url":null,"abstract":"The proposed novel lossy image compression approach represents an image as segments comprised of variable-sized right-angled triangles. The recursive triangular partitioning proposed is shown to be more efficient than square partitioning. A novel and economic blending model (similar to Bezier polynomials) is applied to represent each triangular surface. A framework to design blending surfaces for triangular regions is presented. This economic model allows coefficient (control point) sharing among neighbor triangles. Sharing results in blockiness reduction as compared to block-based techniques. The technique is specially appealing to images with smooth transitions. Compression and visual quality results compare favorably against a wavelet codec using decomposition into seven bands. As an alternative, a greedy algorithm based on priority queues is proposed to further reduce the entropy of the control point bitstream. This optimization step achieves better performance in a rate-distortion R-D sense when compared to uniform quantization of the control points.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117121560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. We present a new compression method, called WLZW, which is a word-based modification of classic LZW. The algorithm is two-phase; it uses a single table for words and non-words (so-called tokens), and a single data structure for the lexicon that is also usable as a text index. The length of words and non-words is restricted, which improves the compression ratio achieved. With unlimited token lengths, words and non-words alternate as they are read from the input stream. Restricting the token length breaks this alternation, because some tokens are divided into several parts of the same type. To preserve alternation, two special tokens are introduced: the empty word and the empty non-word, which contain no characters. An empty word is inserted between two non-words, and an empty non-word between two words, so that alternation holds for every token sequence. This alternation is an important piece of information: with it, the type of the next token can be predicted. One selected non-word (the so-called victim) can be deleted from the input stream; an algorithm to find the victim is also presented. In the decompression phase, a deleted victim is recognized as a break in the alternation of words and non-words in the sequence. The algorithm was tested on many texts in different formats (ASCII, RTF). The Canterbury corpus (large set) was used as a standard benchmark for the published results. The compression ratio achieved is fairly good, on average 22-25%. Decompression is very fast. Moreover, the algorithm enables evaluation of database queries over the compressed text. This supports the idea of leaving data in the compressed state as long as possible and decompressing it only when necessary.
{"title":"Word-based compression methods for large text documents","authors":"J. Dvorský, J. Pokorný, V. Snás̃el","doi":"10.1109/DCC.1999.785680","DOIUrl":"https://doi.org/10.1109/DCC.1999.785680","url":null,"abstract":"Summary form only given. We present a new compression method, called WLZW, which is a word-based modification of classic LZW. The algorithm is two-phase, it uses only one table for words and non-words (so called tokens), and a single data structure for the lexicon is usable as a text index. The length of words and non-words is restricted. This feature improves the compress ratio achieved. Tokens of unlimited length alternate, when they are read from the input stream. Because of restricted length of tokens alternating of tokens is corrupted, because some tokens are divided into several parts of same type. To save alternating of tokens two special tokens are created. They are empty word and empty non-word. They contain no character. Empty word is inserted between two non-words and empty non-word between two words. Alternating of tokens is saved for all sequences of tokens. The alternating of tokens is an important piece of information. With this knowledge the kind of the next token can be predicted. One selected (so-called victim) non-word can be deleted from input stream. An algorithm to search the victim is also presented. In the decompression phase, a deleted victim is recognized as an error in alternating of words and non-words in sequence. The algorithm was tested on many texts in different formats (ASCII, RTF). The Canterbury corpus, a large set, was used as a standard for publication results. The compression ratio achieved is fairly good, on average 25%-22%. Decompression is very fast. Moreover, the algorithm enables evaluation of database queries in given text. This supports the idea of leaving data in the compressed state as long as possible, and to decompress it when it is necessary.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116144588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. An application program interface (API) for modelling sequential text is described. The API is intended to shield the user from the details of the modelling and probability estimation process. This should enable different implementations of models to be replaced transparently in application programs. The motivation for this API is work on the use of textual models for applications beyond strict data compression. The API is probabilistic; that is, it supplies the probability of the next symbol in the sequence. It is general enough to deal accurately with models that include escape probabilities. The concepts abstracted by the API are explained, together with details of the API calls. Such predictive models can be used for a number of applications other than compression. Users of the models do not want to be concerned with the details of how the models are implemented, how they were trained, or the sources of the training text. The problem considered is how to permit code for different models, and actual trained models themselves, to be interchanged easily between users. The fundamental idea is that it should be possible to write application programs independent of the details of particular modelling code, to implement different modelling code independent of the various applications, and to exchange different pre-trained models easily between users. It is hoped that this independence will foster the exchange and use of high-performance modelling code, the construction of sophisticated adaptive systems based on the best available models, the proliferation of high-quality models of standard text types such as English and other natural languages, and easy comparison of different modelling techniques.
{"title":"An open interface for probabilistic models of text","authors":"J. Cleary, W. Teahan","doi":"10.1109/DCC.1999.785679","DOIUrl":"https://doi.org/10.1109/DCC.1999.785679","url":null,"abstract":"Summary form only given. An application program interface (API) for meddling sequential text is described. The API is intended to shield the user from details of the modelling and probability estimation process. This should enable different implementations of models to be replaced transparently in application programs. The motivation for this API is work on the use of textual models for applications in addition to strict data compression. The API is probabilistic, that is, it supplies the probability of the next symbol in the sequence. It is general enough to deal accurately with models that include escapes for probabilities. The concepts abstracted by the API are explained together with details of the API calls. Such predictive models can be used for a number of applications other than compression. Users of the models do not want to be concerned about the details either of the implementation of the models or how they were trained and the sources of the training text. The problem considered is how to permit code for different models and actual trained models themselves to be interchanged easily between users. The fundamental idea is that it should be possible to write application programs independent of the details of particular modelling code, that it should be possible to implement different modelling code independent of the various applications, and that it should be possible to easily exchange different pre-trained models between users. It is hoped that this independence will foster the exchange and use of high-performance modelling code, the construction of sophisticated adaptive systems based on the best available models, and the proliferation and provision of high-quality models of standard text types such as English or other natural languages, and easy comparison of different modelling techniques.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132324277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The piecewise-constant image model (PWC) is remarkably effective for compressing palette images. This paper describes a new streaming version of PWC that retains the excellent compression efficiency of the original algorithm while dramatically improving compression speed. Further, compression throughput is made more consistent, making it possible to code sparse images very quickly.
{"title":"A streaming piecewise-constant model","authors":"Paul J. Ausbeck","doi":"10.1109/DCC.1999.755670","DOIUrl":"https://doi.org/10.1109/DCC.1999.755670","url":null,"abstract":"The piecewise-constant image model (PWC) is remarkably effective for compressing palette images. This paper discloses a new streaming version of PWC that retains the excellent compression efficiency of the original algorithm while dramatically enhancing compression performance. Further, compression throughput is made more constant, making it possible to code sparse images very quickly.","PeriodicalId":103598,"journal":{"name":"Proceedings DCC'99 Data Compression Conference (Cat. No. PR00096)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121411494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}