B. Carpentieri and G. Motta, "A new trellis vector residual quantizer with applications to speech and image coding," Proceedings DCC '97, Data Compression Conference, 1997. doi:10.1109/DCC.1997.582083
Summary form only given. We present a new trellis coded vector residual quantizer (TCVRQ) that combines trellis coding and vector residual quantization. Our TCVRQ is a general-purpose, sub-optimal vector quantizer with low computational cost and small memory requirements, permitting substantial memory savings compared to traditional quantizers. Our experiments confirm that the TCVRQ is a good compromise between memory/speed requirements and quality, and that it is not sensitive to codebook design errors. We propose a method for computing quantization levels and experimentally analyze the performance of our TCVRQ when applied to speech coding at very low bit rates and to direct image coding. We employed our TCVRQ in a linear-prediction-based speech codec for the quantization of the LP parameters. Several experiments were performed using both SNR and a perceptual measure of distortion known as cepstral distance. The results obtained and some informal listening tests show that nearly transparent quantization can be performed at a rate of 1.9 bits per parameter. The experiments in image coding were performed by encoding several 256-gray-level, 512×512-pixel images using blocks of 3×3 pixels. Our TCVRQ was compared, on the same training and test sets, to an exhaustive-search vector quantizer (built using the generalized Lloyd algorithm) and to a tree quantizer at coding rates ranging from 3 to 10 bits per block.
H. Helfgott and J. Storer, "Generalized node splitting and bilevel image compression," Proceedings DCC '97, Data Compression Conference, 1997. doi:10.1109/DCC.1997.582102
Summary form only given. Among the methods for lossless compression of bilevel images, algorithms that do node splitting on context pixels obtain the highest compression ratios. For the most part, these methods use binary variables to do the splitting. Variables that can adopt more than two values are sometimes used, but each possible value of the variable always determines a separate child of a node. We put forward the use of splitting variables that can adopt a very large number of values, including intervals over the reals. At the same time, the number of children per node is kept as small as needed. We use a greedy algorithm to repeatedly divide the range of the splitting variable so as to maximize entropy reduction at each step. Both non-local information, e.g., position, and functions on neighborhood pixels can go into tree-building. The resulting compression ratios are higher than those of traditional node-splitting methods. We also show that a context-based codebook, i.e., a function from the set of all possible contexts to the real interval [0,1], can be composed with the inverse of a function from the set of all possible contexts to the reals, such as a function based on Gray coding of the context bitstring, to produce a function from the reals to [0,1] that is very amenable to moderately lossy compression. Even though compression of the codebook is lossy, compression of the image itself is lossless.
G. Panagopoulou, S. Sirmakessis and A. Tsakalidis, "Efficient storage compression for 3D regions," Proceedings DCC '97, Data Compression Conference, 1997. doi:10.1109/DCC.1997.582128
Summary form only given. We present the results of a comparison of heuristic algorithms for efficient storage compression of 3D regions. We have implemented five different algorithms and present experimental results comparing them. The first is a simple, space-consuming approach that serves as an upper bound on the storage requirements of the other four; it simply groups cubes into larger parallelepipeds. The second is a variant of the Franzblau-Kleitman algorithm (1984), which we adapted to 3D regions. Our contribution is the development of the other three algorithms, which have lower storage requirements than the Franzblau-Kleitman algorithm. The algorithms have been tested in practice on files containing 3D regions, each file consisting of cubes described by triples of coordinates. We calculated the number of rectangles that each algorithm generates; a small number of rectangles indicates good performance.
P. Fenwick, "Symbol ranking text compressors," Proceedings DCC '97, Data Compression Conference, 1997. doi:10.1109/DCC.1997.582093
Summary form only given. In 1951 Shannon estimated the entropy of English text by giving human subjects a sample of text and asking them to guess the next letters. He found, in one example, that 79% of the attempts were correct on the first try, 8% needed two attempts and 3% needed three attempts. By regarding the number of attempts as an information source he could estimate the entropy of the language. Shannon also stated that an "identical twin" of the original predictor could recover the original text; these ideas are developed here to provide a new taxonomy of text compressors. In all cases these compressors recode the input into "rankings" of "most probable symbol", "next most probable symbol", and so on. The rankings have a very skewed distribution (low entropy) and are processed by a conventional statistical compressor. Several "symbol ranking" compressors have appeared in the literature, though seldom under that name or even with reference to Shannon's work. The author has developed a compressor that uses constant-order contexts and is based on a set-associative cache with LRU update. A software implementation runs at about 1 Mbyte/s with an average compression of 3.6 bits/byte on the Calgary corpus.
R. Bernardini and J. Kovacevic, "Orthonormal sets of filters obtained by modulations and rotations of a prototype," Proceedings DCC '97, Data Compression Conference, 1997. doi:10.1109/DCC.1997.582079
Summary form only given. In the past decade the field of image processing has grown considerably, and although various successful techniques have been developed for tasks such as image compression, understanding and segmentation, one final piece is missing. Bearing in mind that an image is ultimately evaluated by a human observer, it is clear that the usual mean-square error is not appropriate, and we still sorely lack subjective measures of image quality. Physiological results indicate that the early stage of the human visual system (HVS) works like a filter bank on the retinal image. The filters in such a filter bank can be seen as being obtained by rotation and modulation of an original prototype filter. This work is a first step in that direction and concentrates on the design of local bases obtained by unitary transformations of one (or more) prototype filters.
P. Bao and S. Lam, "Calligraphic character boundary coding with rational B-spline based on energy minimization using genetic algorithm," Proceedings DCC '97, Data Compression Conference, 1997. doi:10.1109/DCC.1997.582076
Summary form only given. Traditional salient-point-based approaches fail in coding calligraphic characters because noisy boundaries make the extraction of the salient points difficult. We propose an alternative solution based on a genetic algorithm, which searches through the space of possible parameter values until a globally optimal solution is found. The objective function we employ is a modified version of the total energy function found in the active contour literature.
M. J. Turner and K. Halton, "Facsimile-images of the future," Proceedings DCC '97, Data Compression Conference, 1997. doi:10.1109/DCC.1997.582143
Summary form only given. Fax is immensely successful, with over 100 million machines sold worldwide. It is successful because a fax machine is simple and easy to use as a stand-alone device, yet very versatile when combined with PC technology. This trend highlights the divergence between PCs and fax machines in image resolution. Many PC applications, such as desktop publishing and graphic design, require resolutions as high as 600 dpi, while the highest resolution commonly supported by fax machines is 200 lines/inch. Clearly, transmission of the sort of images that can be generated by a PC places new demands on facsimile. There are at present three main standards for coding facsimile images, introduced within the last few years, and the vast majority of machines use the simplest and oldest technique. This study compares the current standards with a fourth technique, developed at De Montfort University, called the contour tree format. This new format is a strictly two-dimensional representation of regions and offers some intrinsic advantages. A set of criteria was investigated, with emphasis on the compression ratio under many different input conditions.
Xiaohui Xue and Wen Gao, "Arithmetic coding with improved solution for the carry-over problem," Proceedings DCC '97, Data Compression Conference, 1997. doi:10.1109/DCC.1997.582146
Summary form only given. The carry-over problem is inherent in arithmetic coding as a result of using finite-precision arithmetic. As far as we know, the most widely used solution to this problem is the bit-stuffing technique proposed by Rissanen and Langdon (1981). However, this technique is not completely satisfactory: the stuffed bits slightly reduce coding efficiency, and a code stream with several stuffed bits inserted can no longer be interpreted as a real number. This conflicts with the principle that arithmetic coding maps an input stream to an interval on the real line, and it is neither elegant nor convenient for analysis. We present our solution to the carry-over problem, the carry-trap technique, which works without deliberately inserted stuffed bits. We also present a concise termination method, named the medium termination technique. Both are proved rigorously.
S. Bunton, "A percolating state selector for suffix-tree context models," Proceedings DCC '97, Data Compression Conference, 1997. doi:10.1109/DCC.1997.581957
This paper introduces into practice and empirically evaluates a set of techniques for information-theoretic state selection that have so far been developed only in asymptotic results. State selection, which actually implements the selection of an entire model from among a set of competing models, is performed at least trivially by all of the suffix-tree FSMs used for on-line probability estimation. The set of state-selection techniques presented combines orthogonally with the other sets of design options covered in the companion paper by Bunton (Proceedings Data Compression Conference, p. 42, 1997).
R. Arnold and T. Bell, "A corpus for the evaluation of lossless compression algorithms," Proceedings DCC '97, Data Compression Conference, 1997. doi:10.1109/DCC.1997.582019
A number of authors have used the Calgary corpus of texts to provide empirical results for lossless compression algorithms. This corpus was collected in 1987, although it was not published until 1990. Advances in compression algorithms have been achieving relatively small improvements in compression as measured on the Calgary corpus. There is a concern that algorithms are being fine-tuned to this corpus, and that small improvements measured in this way may not carry over to other files. Furthermore, the corpus is almost ten years old, and over this period there have been changes in the kinds of files that are compressed, particularly with the development of the Internet and the rapid growth of high-capacity secondary storage for personal computers. We explore these issues and develop a principled technique for collecting a corpus of test data for compression methods. A corpus, called the Canterbury corpus, is developed using this technique, and we report the performance of a collection of compression methods on the new corpus.