Summary form only given. In earlier work (Kirovski and Landau, 2004) we introduced a memory-based model of the source signal, which exploits the repetitiveness of multimedia content to improve upon the compression rates achieved by classic memoryless or simple prediction-based audio compression algorithms such as MP3. The representation error is masked using a psycho-acoustic filter. The masking function sets the error so that audible samples are reconstructed exactly, while for inaudible samples the absolute magnitude of the error is minimized. We compute the entropy of the quantized pointers to all blocks, the quantized pointers to the applied transforms, the quantized scalars used to form the linear combination of transformed blocks, and the resulting error vector.
{"title":"Parameter analysis for the generalized LZ compression of audio","authors":"D. Kirovski, Zeph Landau","doi":"10.1109/DCC.2005.70","DOIUrl":"https://doi.org/10.1109/DCC.2005.70","url":null,"abstract":"Summary form only given. We introduced (Kirovski and Landau (2004)) a memory-based model of the source signal, which explores multimedia repetitiveness to improve upon compression rates achieved by classic memoryless or simple prediction-based audio compression algorithms such as MP3. The representation error is masked using a psycho-acoustic filter. The goal of the masking function is to set the error such that reconstruction of audible samples is exact whereas the reconstruction of inaudible samples is such that the absolute magnitude of the error is minimized. We compute the entropy of the quantized pointers to all blocks, the quantized pointers to the applied transforms, the quantized scalars used to create the linear combination of transformed blocks, and the error vector returned.","PeriodicalId":91161,"journal":{"name":"Proceedings. Data Compression Conference","volume":"40 1","pages":"465-"},"PeriodicalIF":0.0,"publicationDate":"2005-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80069132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. In text compression applications, it is important to be able to process compressed data without requiring (complete) decompression. In this context it is crucial to study compression methods that allow time- and space-efficient access to any fragment of a compressed file without forcing complete decompression. We study the real-time recovery of consecutive symbols from compressed files in the context of grammar-based compression. In this setting, a compressed text is represented as a small (a few KB) dictionary D (containing a set of code words) and a very long (a few MB) string over symbols drawn from the dictionary D. The space efficiency of this kind of compression is comparable with standard compression methods based on the Lempel-Ziv approach. We show that one can visit consecutive symbols of the original text, moving from one symbol to the next in constant time and O(|D|) extra space. This algorithm improves on the on-line linear (amortised) time algorithm presented in (L. Gasieniec et al., Proc. 13th Int. Symp. on Fundamentals of Computation Theory, LNCS, vol. 2138, pp. 138-152, 2001).
{"title":"Real-time traversal in grammar-based compressed files","authors":"L. Gąsieniec, R. Kolpakov, I. Potapov, P. Sant","doi":"10.1109/DCC.2005.78","DOIUrl":"https://doi.org/10.1109/DCC.2005.78","url":null,"abstract":"Summary form only given. In text compression applications, it is important to be able to process compressed data without requiring (complete) decompression. In this context it is crucial to study compression methods that allow time/space efficient access to any fragment of a compressed file without being forced to perform complete decompression. We study here the real-time recovery of consecutive symbols from compressed files, in the context of grammar-based compression. In this setting, a compressed text is represented as a small (a few Kb) dictionary D (containing a set of code words), and a very long (a few Mb) string based on symbols drawn from the dictionary D. The space efficiency of this kind of compression is comparable with standard compression methods based on the Lempel-Ziv approach. We show, that one can visit consecutive symbols of the original text, moving from one symbol to another in constant time and extra O(|D|) space. This algorithm is an improvement of the on-line linear (amortised) time algorithm presented in (L. Gasieniec et al, Proc. 13th Int. Symp. on Fund. of Comp. Theo., LNCS, vol.2138, p.138-152, 2001).","PeriodicalId":91161,"journal":{"name":"Proceedings. Data Compression Conference","volume":"9 1","pages":"458-"},"PeriodicalIF":0.0,"publicationDate":"2005-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81949172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider image retrieval based on minimum distortion selection of features of color images modelled by Gauss mixtures. The proposed algorithm retrieves the database image with minimum distortion when the query image is encoded by a separate Gauss mixture codebook representing each image in the database. We use Gauss mixture vector quantization (GMVQ) for clustering Gauss mixtures, instead of the conventional expectation-maximization (EM) algorithm. Experimental comparison shows that the simpler GMVQ algorithm and the EM algorithm yield closely matching Gauss mixture parameters with similar convergence speeds. We also provide a new color-interleaving method, reducing the dimension of the feature vectors and the size of the covariance matrices, thereby reducing computation. This method shows slightly better retrieval performance than the usual color-interleaving method in HSV color space. Our proposed minimum distortion image retrieval performs better than probabilistic image retrieval.
{"title":"Minimum distortion color image retrieval based on Lloyd-clustered Gauss mixtures","authors":"Sangoh Jeong, R. Gray","doi":"10.1109/DCC.2005.52","DOIUrl":"https://doi.org/10.1109/DCC.2005.52","url":null,"abstract":"We consider image retrieval based on minimum distortion selection of features of color images modelled by Gauss mixtures. The proposed algorithm retrieves the image in a database having minimum distortion when the query image is encoded by a separate Gauss mixture codebook representing each image in the database. We use Gauss mixture vector quantization (GMVQ) for clustering Gauss mixtures, instead of the conventional expectation-maximization (EM) algorithm. Experimental comparison shows that the simpler GMVQ and the EM algorithms have close Gauss mixture parameters with similar convergence speeds. We also provide a new color-interleaving method, reducing the dimension of feature vectors and the size of covariance matrices, thereby reducing computation. This method shows a slightly better retrieval performance than the usual color-interleaving method in HSV color space. Our proposed minimum distortion image retrieval performs better than probabilistic image retrieval.","PeriodicalId":91161,"journal":{"name":"Proceedings. Data Compression Conference","volume":"112 1","pages":"279-288"},"PeriodicalIF":0.0,"publicationDate":"2005-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85782650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We extend the rate-distortion function for Wyner-Ziv coding of noisy sources with quadratic distortion, in the jointly Gaussian case, to more general statistics. It suffices that the noisy observation Z be the sum of a function of the side information Y and independent Gaussian noise, while the source data X must be the sum of a function of Y, a linear function of Z, and a random variable N such that the conditional expectation of N given Y and Z is zero, almost surely. Furthermore, the side information Y may be arbitrarily distributed in any alphabet, discrete or continuous. Under these general conditions, we prove that no rate loss is incurred due to the unavailability of the side information at the encoder. In the noiseless Wyner-Ziv case, i.e., when the source data is directly observed, the assumptions are still less restrictive than those recently established in the literature. We confirm, theoretically and experimentally, the consistency of this analysis with some of the main results on high-rate Wyner-Ziv quantization of noisy sources.
{"title":"Generalization of the rate-distortion function for Wyner-Ziv coding of noisy sources in the quadratic-Gaussian case","authors":"D. Rebollo-Monedero, B. Girod","doi":"10.1109/DCC.2005.6","DOIUrl":"https://doi.org/10.1109/DCC.2005.6","url":null,"abstract":"We extend the rate-distortion function for Wyner-Ziv coding of noisy sources with quadratic distortion, in the jointly Gaussian case, to more general statistics. It suffices that the noisy observation Z be the sum of a function of the side information Y and independent Gaussian noise, while the source data X must be the sum of a function of Y, a linear function of Z, and a random variable N such that the conditional expectation of N given Y and Z is zero, almost surely. Furthermore, the side information Y may be arbitrarily distributed in any alphabet, discrete or continuous. Under these general conditions, we prove that no rate loss is incurred due to the unavailability of the side information at the encoder. In the noiseless Wyner-Ziv case, i.e., when the source data is directly observed, the assumptions are still less restrictive than those recently established in the literature. We confirm, theoretically and experimentally, the consistency of this analysis with some of the main results on high-rate Wyner-Ziv quantization of noisy sources.","PeriodicalId":91161,"journal":{"name":"Proceedings. Data Compression Conference","volume":"2 1","pages":"23-32"},"PeriodicalIF":0.0,"publicationDate":"2005-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86028489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We derive an upper bound on the average maximum a posteriori (MAP) decoding error probability of random linear Slepian-Wolf (SW) codes for arbitrary correlated stationary memoryless sources defined on Galois fields. Using this tool, we analyze the performance of SW codes based on low-density parity-check (LDPC) codes and random permutations, and show that, under some conditions, all but a vanishingly small proportion of LDPC encoders and permutations are good enough for the design of practical SW systems when the coding length is very large.
{"title":"On the performance of linear Slepian-Wolf codes for correlated stationary memoryless sources","authors":"Shengtian Yang, Peiliang Qiu","doi":"10.1109/DCC.2005.65","DOIUrl":"https://doi.org/10.1109/DCC.2005.65","url":null,"abstract":"We derive an upper bound on the average MAP decoding error probability of random linear SW codes for arbitrary correlated stationary memoryless sources defined on Galois fields. By using this tool, we analyze the performance of SW codes based on LDPC codes and random permutations, and show that under some conditions, all but a diminishingly small proportion of LDPC encoders and permutations are good enough for the design of practical SW systems when the coding length is very large.","PeriodicalId":91161,"journal":{"name":"Proceedings. Data Compression Conference","volume":"18 1","pages":"53-62"},"PeriodicalIF":0.0,"publicationDate":"2005-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77994994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. We introduce an extension of the Burrows-Wheeler transform (BWT) to a multiset of primitive words. Primitivity is not actually a restrictive hypothesis, since in practice almost all processed texts are primitive (or become primitive once an end-of-string symbol is appended). We prove that, like the BWT, this transformation is reversible. We show how to use the transformation as a preprocessing step for the simultaneous compression of different texts.
{"title":"An extension of the Burrows Wheeler transform to k words","authors":"S. Mantaci, A. Restivo, M. Sciortino","doi":"10.1109/DCC.2005.13","DOIUrl":"https://doi.org/10.1109/DCC.2005.13","url":null,"abstract":"Summary form only given. We introduce an extension of the Burrows-Wheeler transform to a multiset of primitive words. Primitiveness is not actually a restrictive hypothesis, since in practice almost all the processed texts are primitive (or become primitive by adding an end-of-string symbol). We prove that such a transformation as the BWT is reversible. We show how to use the transformation as a preprocessing for the simultaneous compression of different texts.","PeriodicalId":91161,"journal":{"name":"Proceedings. Data Compression Conference","volume":"15 1","pages":"469-"},"PeriodicalIF":0.0,"publicationDate":"2005-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73149064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. This research is undertaken by NOAA/NESDIS for its GOES-R Earth observation satellite series, to be launched in the 2013 time frame, to enable greater distribution of its scientific data both within the US and internationally. We have developed a new lossless algorithm for compressing the signals from NOAA's environmental satellites, using current spacecraft to simulate data from the upcoming GOES-R instruments and focusing on the Aqua spacecraft's AIRS (Atmospheric Infrared Sounder) instrument in our case study. AIRS is a high-resolution instrument that measures infrared radiances at 2378 wavelengths ranging from 3.74 to 15.4 μm. It takes 90 measurements as it scans 48.95 degrees perpendicular to the satellite's orbit every 2.667 seconds. We use Level 1A digital count data granules, which represent 6 minutes (135 scans) of measurements. Our data set therefore consists of a 90 × 135 × 1502 cube of 12- to 14-bit integers. Our compression algorithm consists of the following steps: 1) channel partitioning; 2) adaptive clustering; 3) projection onto principal directions; 4) entropy coding of the residuals.
{"title":"Compression algorithm for infrared hyperspectral sounder data","authors":"I. Gladkova, L. Roytman, M. Goldberg","doi":"10.1109/DCC.2005.27","DOIUrl":"https://doi.org/10.1109/DCC.2005.27","url":null,"abstract":"Summary form only given. The research is undertaken by NOAA/NESDIS, for its GOES-R Earth observation satellite series, to be launched in the 2013 time frame, to enable greater distribution of its scientific data, both within the US and internationally. We have developed a new lossless algorithm for compression of the signals from NOAA's environmental satellites using current spacecraft to simulate data from the upcoming GOES-R instrument, and focusing on Aqua Spacecraft's AIRS (atmospheric infrared sounder) instrument in our case study. The AIRS is a high resolution instrument which measures infrared radiances at 2378 wavelengths ranging from 3.74-15.4 /spl mu/m. The AIRS takes 90 measurements as it scans 48.95 degrees perpendicular to the satellite's orbit every 2.667 seconds. We use Level 1A digital count data granules, which represent 6 minutes (or 135 scans) of measurements. Therefore, our data set consists of a 90/spl times/135/spl times/1502 cube of integers ranging from 12-14 bits. Our compression algorithm consists of the following steps: 1) channel partitioning; 2) adaptive clustering; 3) projection onto principal directions; 4) entropy coding of the residuals.","PeriodicalId":91161,"journal":{"name":"Proceedings. Data Compression Conference","volume":"12 1","pages":"460-"},"PeriodicalIF":0.0,"publicationDate":"2005-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82354247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A joint source-channel coding system for image communication over an additive white Gaussian noise channel is presented. It employs vector quantization based hybrid digital-analog modulation techniques with bandwidth compression and expansion for transmitting and reconstructing the wavelet coefficients of an image. The main advantage of the proposed system is that it achieves good performance at the design channel signal-to-noise ratio (CSNR), while still maintaining a "graceful improvement" characteristic at higher CSNR. Comparisons are made with two purely digital systems and two purely analog systems. Simulation shows that the proposed system is superior to the other investigated systems for a wide range of CSNR.
{"title":"Design of VQ-based hybrid digital-analog joint source-channel codes for image communication","authors":"Yadong Wang, F. Alajaji, T. Linder","doi":"10.1109/DCC.2005.30","DOIUrl":"https://doi.org/10.1109/DCC.2005.30","url":null,"abstract":"A joint source-channel coding system for image communication over an additive white Gaussian noise channel is presented. It employs vector quantization based hybrid digital-analog modulation techniques with bandwidth compression and expansion for transmitting and reconstructing the wavelet coefficients of an image. The main advantage of the proposed system is that it achieves good performance at the design channel signal-to-noise ratio (CSNR), while still maintaining a \"graceful improvement\" characteristic at higher CSNR. Comparisons are made with two purely digital systems and two purely analog systems. Simulation shows that the proposed system is superior to the other investigated systems for a wide range of CSNR.","PeriodicalId":91161,"journal":{"name":"Proceedings. Data Compression Conference","volume":"33 1","pages":"193-202"},"PeriodicalIF":0.0,"publicationDate":"2005-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76237579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. XML is gaining widespread acceptance as a standard for storing and transmitting structured data. One of the drawbacks of XML is that it is quite verbose: an XML representation of a set of data can easily be ten times as large as a more economical representation of the same data. To overcome this limitation, we present AXECHOP, a compression scheme tailored specifically to XML. The compression strategy used in AXECHOP begins by dividing the source XML document into structural and data segments. The former is represented using a byte tokenization scheme that preserves the original structure of the document (i.e., it maintains the proper nesting and ordering of elements, attributes, and data values). The MPM compression algorithm is used to generate a context-free grammar capable of deriving this original structure, and the grammar is passed through an adaptive arithmetic coder before being written to the compressed file. The document's data is organized into a series of containers (where container membership is determined by the identity of the XML element or attribute that encloses the data), and then the Burrows-Wheeler transform (BWT) is applied to the contents of each container, with the results appended to the compressed file.
{"title":"AXECHOP: a grammar-based compressor for XML","authors":"G. Leighton, Jim Diamond, T. Müldner","doi":"10.1109/DCC.2005.20","DOIUrl":"https://doi.org/10.1109/DCC.2005.20","url":null,"abstract":"Summary form only given. XML is gaining widespread acceptance as a standard for storing and transmitting structured data. One of the drawbacks of XML is that it is quite verbose: an XML representation of a set of data can easily be ten times as large as a more economical representation of the data. To overcome this limitation, we present a compression scheme tailored specifically to XML named AXECHOP. The compression strategy used in AXECHOP begins by dividing the source XML document into structural and data segments. The former is represented using a byte tokenization scheme that preserves the original structure of the document (i.e. it maintains the proper nesting and ordering of elements, attributes, and data values). The MPM compression algorithm is used to generate a context-free grammar capable of deriving this original structure, and the grammar is passed through an adaptive arithmetic coder before being written to the compressed file. The document's data is organized into a series of containers (where container membership is determined by the identity of the XML element or attribute that encloses the data) and then the Burrows-Wheeler transform (BWT) is applied to the contents of each dictionary, with the results being appended to the compressed file.","PeriodicalId":91161,"journal":{"name":"Proceedings. Data Compression Conference","volume":"44 1","pages":"467-"},"PeriodicalIF":0.0,"publicationDate":"2005-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76094702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. With an increasing amount of text data being stored in compressed format, the ability to access compressed data randomly and decode it partially is highly desirable for efficient retrieval in many applications. The efficiency of these operations depends on the compression method used. We present a modified LZW algorithm that supports efficient indexing and searching on compressed files. Our method runs in sublinear time, since only a small portion of the file is decoded. The proposed approach not only provides the flexibility of dynamic indexing at different text granularities, but also allows parallel processing on both the encoding and decoding sides, independent of the number of processors available. It also provides good error resilience. The compression ratio is improved by the proposed modified LZW algorithm. Test results show that our public-trie method achieves a compression ratio of 0.34 on the TREC corpus and 0.32 with text preprocessing using a star transform with an optimal static dictionary; this is very close to the efficient word-Huffman and phrase-based word-Huffman schemes, while offering more flexible random access.
{"title":"A flexible compressed text retrieval system using a modified LZW algorithm","authors":"Nan Zhang, Tao Tao, R. Satya, A. Mukherjee","doi":"10.1109/DCC.2005.5","DOIUrl":"https://doi.org/10.1109/DCC.2005.5","url":null,"abstract":"Summary form only given. With an increasing amount of text data being stored in compressed format, being able to access the compressed data randomly and decode it partially is highly desirable for efficient retrieval in many applications. The efficiency of these operations depends on the compression method used. We present a modified LZW algorithm that supports efficient indexing and searching on compressed files. Our method performs in a sublinear complexity, since we only decode a small portion of the file. The proposed approach not only provides the flexibility for dynamic indexing in different text granularities, but also provides the possibility for parallel processing in both encoding and decoding sides, independent of the number of processors available. It also provides good error resilience. The compression ratio is improved using the proposed modified LZW algorithm. Test results show that our public trie method has a compression ratio of 0.34 for the TREC corpus and 0.32 with text preprocessing using a star transform with an optimal static dictionary; this is very close to the efficient word Huffman and phrase based word Huffman schemes, but has a more flexible random access ability.","PeriodicalId":91161,"journal":{"name":"Proceedings. Data Compression Conference","volume":"43 1","pages":"493-"},"PeriodicalIF":0.0,"publicationDate":"2005-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76321207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}