"An investigation of wavelet-based image coding using an entropy-constrained quantization framework" — K. Ramchandran, M. Orchard. DOI: https://doi.org/10.1109/DCC.1994.305942
Wavelet image decompositions generate a tree-structured set of coefficients, providing a hierarchical data structure for representing images. Several recently proposed image compression algorithms have focused on new ways of exploiting dependencies within this hierarchy of wavelet coefficients. This paper presents a new framework for understanding the efficiency of one such algorithm as a simplified attempt at a global entropy-constrained image quantizer. The principal insight offered by the new framework is that improved performance is achieved by more accurately characterizing the joint probabilities of arbitrary sets of wavelet coefficients. The specific algorithm described is designed around one conveniently structured collection of such sets. The efficiency of hierarchical wavelet coding algorithms derives from their success at identifying and exploiting dependencies between coefficients in the hierarchical structure. The second part of the paper presents an empirical study of the distribution of high-band wavelet coefficients, the band responsible for most of the performance improvements of the new algorithms.
"Explicit bit minimization for motion-compensated video coding" — Dzung T. Hoang, Philip M. Long, J. Vitter. DOI: https://doi.org/10.1109/DCC.1994.305925
Compares methods for choosing motion vectors for motion-compensated video compression. The primary focus is on videophone and videoconferencing applications, where very low bit rates are necessary, where the motion is usually limited, and where the frames must be coded in the order they are generated. The authors provide evidence, using established benchmark videos of this type, that choosing motion vectors to minimize codelength subject to (implicit) constraints on quality yields substantially better rate-distortion tradeoffs than minimizing notions of prediction error. They illustrate this point using an algorithm within the p×64 standard. They show that using quadtrees to code the motion vectors in conjunction with explicit codelength minimization yields further improvement. They describe a dynamic-programming algorithm for choosing a quadtree to minimize the codelength.
"Entropy-constrained tree-structured vector quantizer design by the minimum cross entropy principle" — K. Rose, David J. Miller, A. Gersho. DOI: https://doi.org/10.1109/DCC.1994.305908
The authors address the variable rate tree-structured vector quantizer design problem, wherein the rate is measured by the quantizer's entropy. For this problem, tree pruning via the generalized Breiman-Friedman-Olshen-Stone (1980) algorithm obtains solutions which are optimal over the restricted solution space consisting of all pruned trees derivable from an initial tree. However, the restrictions imposed on such solutions have several implications. In addition to depending on the tree initialization, growing and pruning solutions result in tree-structured vector quantizers which use a sub-optimal encoding rule. To remedy the latter problem, they consider a "tree-constrained" version of entropy-constrained vector quantizer design. This leads to an optimal tree-structured encoding rule for the leaves. In practice, though, improvements obtained in this fashion are limited by the tree initialization, as well as by the sub-optimal encoding performed at non-leaf nodes. To address these problems, they develop a joint optimization method which is inspired by the deterministic annealing algorithm for data clustering, and which extends their previous work on tree-structured vector quantization. The method is based on the principle of minimum cross entropy, using informative priors to approximate the unstructured solution while imposing the structural constraint. As in the original deterministic annealing method, the number of distinct codevectors (and hence the tree) grows by a sequence of bifurcations in the process, which occur as solutions of a free energy minimization. Their method obtains performance gains over growing and pruning methods for variable rate quantization of Gauss-Markov and Gaussian mixture sources.
"Compression by induction of hierarchical grammars" — C. Nevill-Manning, I. Witten, D. Maulsby. DOI: https://doi.org/10.1109/DCC.1994.305932
The paper describes a technique that constructs models of symbol sequences in the form of small, human-readable, hierarchical grammars. The grammars are both semantically plausible and compact. The technique can induce structure from a variety of different kinds of sequence, and examples are given of models derived from English text, C source code and a sequence of terminal control codes. It explains the grammatical induction technique, demonstrates its application to three very different sequences, evaluates its compression performance, and concludes by briefly discussing its use as a method for knowledge acquisition.
"Lossless image compression with lossy image using adaptive prediction and arithmetic coding" — Seishi Takamura, M. Takagi. DOI: https://doi.org/10.1109/DCC.1994.305924
Lossless gray scale image compression is necessary for many purposes, such as medical imaging and image databases. Lossy images are important as well because of their high compression ratios. The authors propose a lossless image compression scheme using a lossy image generated with the JPEG-DCT scheme. The concept is to send a JPEG-compressed lossy image first, then send residual information and reconstruct the original image using both the lossy image and the residual information. 3D adaptive prediction and adaptive arithmetic coding are used, which fully exploit the statistical parameters of the symbol source's distribution. The optimal number of neighbor pixels and lossy pixels used for prediction is discussed. The compression ratio is better than previous work and quite close to the original lossless algorithm.
"Differential state quantization of high order Gauss-Markov process" — A. Bist. DOI: https://doi.org/10.1109/DCC.1994.305913
Analyzes a differential technique of tracking and quantizing a continuous-time Gauss-Markov process using the process and its derivatives. Using fine-quantization approximations, the author derives expressions for the time-averaged smoothed error. Analytical bounds are derived on the overall smoothed error, and it is confirmed that the differential scheme outperforms vector quantization of the scalar process, state component quantization, and state vector quantization. It is shown that when the overall rate R in bits per second is high, the optimal smoothed error varies as 1/R^3 for the differential scheme. This is better than the performance of DPCM and a modified vector DPCM, analyzed under the same framework; for both of these schemes the asymptotic variation of the smoothed error is 1/R^2 at rate R. For differential state quantization, the resulting optimal sizes of the vector quantizers are small and can be used in practice.
"Enhancement of block transform coded images using residual spectra adaptive postfiltering" — I. Linares, R. Mersereau, Mark J. T. Smith. DOI: https://doi.org/10.1109/DCC.1994.305940
Image block transform techniques usually introduce several types of spatial periodic distortion which are most noticeable at low bit rates. One way to reduce these artifacts and obtain an acceptable visual quality level is to postfilter the decoded images using nonlinear space-variant adaptive filters derived from the structural relationships and residual spectral information provided by the discrete-time Fourier transform (DTFT) of block transforms such as the discrete cosine transform (DCT) and the lapped orthogonal transform (LOT). A method for analyzing and filtering the DCT blocking noise and the LOT ringing noise for moderately and highly compressed images is described, and several test cases are presented. A generalized Fourier analysis of the block transform distortion as seen in the frequency domain is discussed, together with an outline of a separable adaptive postfiltering algorithm for decoded image enhancement.
"Visibility of DCT basis functions: effects of display resolution" — A. Watson, J. Solomon, A. Ahumada. DOI: https://doi.org/10.1109/DCC.1994.305945
The authors have examined the variation in visibility of single DCT basis functions as a function of display visual resolution. They have shown that the existing model (Ahumada and Peterson, 1992; and Peterson et al., 1993) accommodates resolutions of 16, 32, and 64 pixels/degree, provided that one parameter, the peak sensitivity s0, is allowed to vary. Variations in this parameter are to some extent consistent with spatial summation, although sensitivity is lower at the lowest resolution than summation would predict. Practical DCT quantization matrices must take into account both the visibility of single basis functions and the spatial pooling of artifacts from block to block. Peterson et al. (1993) showed that to a first approximation this pooling is consistent with probability summation. If one considers two images of equivalent size in degrees, but visual resolutions differing by a factor of two, then the sensitivity to individual artifacts would be lower by 4^(1/4) in the higher resolution image due to the smaller block size in degrees, but higher by 4^(1/4) in the same image due to the greater number of blocks. Thus the same matrix should be used with both. The point of the illustration is that the overall gain of the best quantization matrix must take into account both display resolution and image size.
"Multiplication and division free adaptive arithmetic coding techniques for bi-level images" — Linh Huynh. DOI: https://doi.org/10.1109/DCC.1994.305934
Two new approximate methods for coding the binary alphabet with negligible loss of compression efficiency are proposed. An overview is provided of arithmetic coding and bi-level image modeling; the proposed methods are then described, followed by their implementation. A theoretical discussion of the compression performance is also included, along with an empirical evaluation of the proposed techniques. The focus throughout is on encoding; the decoding process is similar.
"Parsing algorithms for dictionary compression on the PRAM" — D. Hirschberg, L. M. Stauffer. DOI: https://doi.org/10.1109/DCC.1994.305921
Parallel algorithms for lossless data compression via dictionary compression using optimal and greedy parsing strategies are described. Dictionary compression removes redundancy by replacing substrings of the input by references to strings stored in a dictionary. Given a static dictionary stored as a suffix tree, the authors present a concurrent-read, concurrent-write parallel random-access machine (CREW PRAM) algorithm for optimal compression which runs in O(M + log M log n) time with O(nM^2) processors, where M is the maximum length of any dictionary entry. They also describe an O(M + log n)-time, O(n)-processor algorithm for greedy parsing given a static or sliding-window dictionary. For sliding-window compression, a different approach finds the greedy parsing in O(log n) time using O(nM log M / log n) processors. Their algorithms are practical in the sense that their analysis elicits small constants.