"High performance arithmetic coding for small alphabets" — Xiaohui Xue, Wen Gao. Proceedings DCC '97 (Data Compression Conference), 25 March 1997. DOI: 10.1109/DCC.1997.582149.
Summary form only given. There are two main obstacles to applying arithmetic coding. One is the relatively heavy computational burden of the coding part, since at least two multiplications are needed per symbol. The other is that a highly efficient statistical model is hard to implement. We observe that in some important settings the number of distinct symbols in the data stream is small, and we design both the coding part and the modeling part specifically for this small-alphabet case to obtain a high-performance arithmetic coder. Our method builds on the improved arithmetic coding algorithm, which we further refine to be multiplication-free.
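The abstract does not spell out how the multiplications are removed. As background, here is a minimal sketch of the standard integer interval update (which costs the two multiplications per symbol mentioned above) next to one well-known way — not necessarily the authors' — of trading them for shifts by approximating the range with its leading power of two. The 8-bit frequency total is an assumption for illustration.

```python
TOTAL_BITS = 8            # total frequency count = 2**8 = 256 (assumption)

def update_exact(low, rng, cum_lo, cum_hi):
    """Standard integer interval update: two multiplications per symbol."""
    new_low = low + (rng * cum_lo >> TOTAL_BITS)
    new_rng = rng * (cum_hi - cum_lo) >> TOTAL_BITS
    return new_low, new_rng

def update_shift_only(low, rng, cum_lo, cum_hi):
    """Multiplication-free variant: approximate rng by its leading power of
    two, so rng * f / total becomes a pure shift. The coarser interval costs
    some compression efficiency but removes both multiplications."""
    k = rng.bit_length() - 1          # rng is approximately 2**k
    shift = TOTAL_BITS - k            # may be negative
    scale = (lambda f: f >> shift) if shift >= 0 else (lambda f: f << -shift)
    new_low = low + scale(cum_lo)
    new_rng = max(1, scale(cum_hi - cum_lo))
    return new_low, new_rng
```

When the range happens to be an exact power of two the two updates coincide; otherwise the approximate range is within a factor of two of the exact one, which bounds the coding-efficiency loss per symbol.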
"Word based multiple dictionary scheme for text compression with application to 2D bar code" — K. Ng, L. Cheng. Proceedings DCC '97 (Data Compression Conference), 25 March 1997. DOI: 10.1109/DCC.1997.582120.
Summary form only given. Research on text compression has mainly concerned documentation applications and has seldom considered other uses. Significant efforts have previously been made to increase both the data capacity and the information density of bar code symbologies, and these efforts produced the 2D bar code formats. We take PDF417 (Pavlidis et al. 1992), developed by Symbol Technologies, as an example: it is the most popular of the 2D bar code symbologies, but its storage capacity has limited its wider application. Here we propose a text compression technique based on a back-searching algorithm and new storage protocols, and describe how a word-based multiple-dictionary text compression technique can increase the storage capacity of a 2D bar code. A hashing function is also described to speed up the text search. The proposed technique is particularly useful for database retrieval applications. For data stored in 2D bar codes in restricted forms such as part numbers, locations, names, and references, the compression ratio can be as high as 2 because the hit ratio can reach 100%. The decoder can remain simple, since it only needs to distinguish 'light' from 'dark'. To make the dictionaries more 'intelligent', a sub-dictionary is proposed which allows the encoded text to be more independent.
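To make the word-based dictionary idea concrete, here is a minimal sketch of the 100%-hit-ratio case for restricted fields (part numbers, locations, names). The word list and the two-byte index width are illustrative assumptions, not the paper's PDF417 storage protocol; Python's built-in dict plays the role of the hashed word-to-index lookup.

```python
def build_dictionary(corpus_words):
    # Python's dict provides the hashed word -> index lookup that the
    # paper accelerates with an explicit hashing function.
    words = sorted(set(corpus_words))
    return words, {w: i for i, w in enumerate(words)}

def encode(text, index):
    # Every word must hit the dictionary (the "100% hit ratio" case).
    return [index[w] for w in text.split()]

def decode(codes, words):
    return " ".join(words[c] for c in codes)
```

With two-byte indices, any field whose words average more than roughly four characters (plus separator) compresses by a factor of two or better, matching the ratio quoted in the abstract for fully dictionary-resident data.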
"Text compression via alphabet re-representation" — Philip M. Long, A. Natsev, J. Vitter. Proceedings DCC '97 (Data Compression Conference), 25 March 1997. DOI: 10.1109/DCC.1997.582003.
We consider re-representing the alphabet so that the representation of a character reflects its properties as a predictor of future text. This enables us to use an estimator from a restricted class to map contexts to predictions of upcoming characters. We describe an algorithm that uses this idea in conjunction with neural networks. The performance of this implementation is compared to other compression methods, such as UNIX compress, gzip, PPMC, and an alternative neural network approach.
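The following toy sketch illustrates the re-representation idea only; it is a count-based stand-in, not the paper's neural network method. Each character is hypothetically represented by its empirical next-character distribution, so characters that predict similarly get similar representations, and a trivially restricted estimator reads a prediction straight off the representation of the last context character.

```python
from collections import Counter, defaultdict

def representations(text):
    # Represent each character by the empirical distribution of the
    # characters that follow it (an illustrative re-representation).
    follow = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        follow[a][b] += 1
    reps = {}
    for ch, counts in follow.items():
        total = sum(counts.values())
        reps[ch] = {b: n / total for b, n in counts.items()}
    return reps

def predict_next(reps, ch):
    # Restricted estimator: the prediction is read directly from the
    # representation of the current context character.
    dist = reps.get(ch, {})
    return max(dist, key=dist.get) if dist else None
```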
"Efficient context-based entropy coding for lossy wavelet image compression" — C. Chrysafis, Antonio Ortega. Proceedings DCC '97 (Data Compression Conference), 25 March 1997. DOI: 10.1109/DCC.1997.582047.
We present an adaptive image coding algorithm based on novel backward-adaptive quantization/classification techniques. We use a simple uniform scalar quantizer to quantize the image subbands. Our algorithm puts each coefficient into one of several classes depending on the values of neighboring previously quantized coefficients. These previously quantized coefficients form contexts that characterize the subband data. To each context type corresponds a different probability model, so each subband coefficient is compressed with an arithmetic coder using the model appropriate to that coefficient's neighborhood. We show how context selection can be driven by rate-distortion criteria, choosing the contexts so that the total distortion for a given bit rate is minimized. Moreover, the probability models for each context are initialized and updated so efficiently that practically no overhead information has to be sent to the decoder. Our results are comparable to, and in some cases better than, the recent state of the art, while our algorithm is simpler than most published algorithms of comparable performance.
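The key property of backward-adaptive context formation is that classes are derived only from already-quantized causal neighbours, so the decoder can recompute them with no side information. A minimal sketch, with an assumed three-class scheme and illustrative thresholds (the paper's actual class count and rate-distortion-driven selection are not reproduced here):

```python
THRESHOLDS = (0, 2)   # class 0: neighbours all zero; 1: low activity; 2: active

def classify(q, r, c):
    # Only causal (previously quantized) neighbours: left and above.
    left = abs(q[r][c - 1]) if c > 0 else 0
    top = abs(q[r - 1][c]) if r > 0 else 0
    activity = left + top
    if activity <= THRESHOLDS[0]:
        return 0
    return 1 if activity <= THRESHOLDS[1] else 2

def contexts(quantized):
    # The arithmetic coder would use a separate adaptive probability
    # model for each class value produced here.
    return [[classify(quantized, r, c) for c in range(len(quantized[0]))]
            for r in range(len(quantized))]
```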
"Robust image coding with perceptual-based scalability" — M. G. Ramos, S. Hemami. Proceedings DCC '97 (Data Compression Conference), 25 March 1997. DOI: 10.1109/DCC.1997.582133.
Summary form only given. We present a multiresolution-based image coding technique that achieves high visual quality through perceptual-based scalability and robustness to transmission errors. To achieve perceptual coding, the image is first segmented at the block level (16×16) into smooth, edge, and highly-detailed regions, using the Hölder regularity property of the wavelet coefficients as well as their distributions. The activity classifications are used when coding the high-frequency wavelet coefficients. The image is compressed by first performing a 3-level hierarchical decomposition, yielding 10 subbands which are coded independently. The LL band is coded using reconstruction-optimized lapped orthogonal transforms, followed by quantization, run-length encoding, and Huffman coding. The high-frequency coefficients corresponding to the smooth regions are quantized to zero. The high-frequency coefficients corresponding to the edge regions are uniformly quantized, to maintain Hölder regularity and sharpness of the edges, while those corresponding to the highly-detailed regions are quantized with a modified uniform quantizer with a dead zone. Bits are allocated based on the scale and orientation selectivity of each high-frequency subband as well as the activity regions inside each band corresponding to the edge and highly-detailed regions of the image. The quantized high-frequency bands are then run-length encoded.
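As a point of reference for the dead-zone quantizer mentioned above, here is a common textbook form, sketched with an assumed dead zone of one full step on each side of zero and midpoint reconstruction; the paper's exact parameters are not given in the abstract.

```python
import math

def quantize(x, step):
    # |x| < step maps to 0 (the dead zone); elsewhere uniform bins.
    return int(math.copysign(int(abs(x) / step), x))

def dequantize(q, step):
    # Reconstruct at the centre of each nonzero bin.
    return 0.0 if q == 0 else math.copysign((abs(q) + 0.5) * step, q)
```

Widening the zero bin relative to the others is what suppresses the many near-zero high-frequency coefficients cheaply, at the cost of slightly larger error on small nonzero values.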
"Low-cost prevention of error-propagation for data compression with dynamic dictionaries" — J. Storer, J. Reif. Proceedings DCC '97 (Data Compression Conference), 25 March 1997. DOI: 10.1109/DCC.1997.582007.
In earlier work we presented the k-error protocol, a technique for protecting a dynamic dictionary method from error propagation resulting from any k errors on the communication channel or in the compressed file. Here we further develop this approach and provide experimental evidence that it is highly effective in practice against a noisy channel or faulty storage medium. That is, for LZ2-based methods that "blow up" as a result of a single error, with the protocol in place, high error rates (with far more than the k errors for which the protocol was originally designed) can be sustained with no error propagation: the only corrupted bytes decoded are those belonging to the string represented by a pointer that was itself corrupted.
"Significantly lower entropy estimates for natural DNA sequences" — D. Loewenstern, P. Yianilos. Proceedings DCC '97 (Data Compression Conference), 25 March 1997. DOI: 10.1109/DCC.1997.581998.
If DNA were a random string over its alphabet {A,C,G,T}, an optimal code would assign 2 bits to each nucleotide. We imagine DNA to be a highly ordered, purposeful molecule, and might therefore reasonably expect statistical models of its string representation to produce much lower entropy estimates. Surprisingly, this has not been the case for many natural DNA sequences, including portions of the human genome. We introduce a new statistical model (compression algorithm), the strongest reported to date, for naturally occurring DNA sequences. Conventional techniques code a nucleotide using only slightly fewer bits (1.90) than one obtains by relying only on the frequency statistics of individual nucleotides (1.95). Our method in some cases increases this gap by more than five-fold (1.66) and may lead to better performance in microbiological pattern recognition applications. One of our main contributions, and the principal source of these improvements, is the formal inclusion of inexact match information in the model. The existence of matches at various distances forms a panel of experts which are then combined into a single prediction. The structure of this combination is novel and its parameters are learned using expectation maximization (EM).
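The 1.95-bit baseline quoted above is an order-0 (single-nucleotide frequency) entropy estimate, which the following sketch computes; it sits just below the 2-bit random-string bound for real genomes because base frequencies are slightly skewed.

```python
import math
from collections import Counter

def order0_entropy(seq):
    # Entropy in bits per symbol using only individual-symbol frequencies.
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A uniform sequence over {A,C,G,T} gives exactly 2 bits per nucleotide; any frequency skew pulls the estimate below 2, and the paper's contribution is closing far more of the remaining gap (down to 1.66 in some cases) via inexact-match experts.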
"Efficient approximate adaptive coding" — A. Turpin, Alistair Moffat. Proceedings DCC '97 (Data Compression Conference), 25 March 1997. DOI: 10.1109/DCC.1997.582059.
We describe a mechanism for approximate adaptive coding that uses deferred probability update to obtain good throughput rates with no buffering of symbols from the input message. Our proposed mechanism makes use of a novel code calculation process that allows an approximate code for a message of m symbols to be calculated in O(log m) time, improving upon previous methods. We also give analysis bounding both the total computation time required to encode a message using the approximate code and the inefficiency of the resulting codeword set. Finally, experimental results are given that highlight the role the new method might play in a practical compression system. The current work builds upon two earlier papers. We previously described a mechanism for efficiently calculating a minimum-redundancy code for an alphabet in which many symbols share the same frequency of occurrence. We impose a modest amount of additional discipline upon the input frequencies, and show how the calculation of codewords can be performed in time and space logarithmic in the length of the message. The second area we have previously examined is the process of manipulating a code to actually perform compression. We examined mechanisms for encoding and decoding a prefix code that avoid any need for explicit enumeration of the source codewords. This means that we are free to change the source codewords at will during a message without incurring the additional cost of completely recalculating an n-entry codebook.
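Encoding and decoding a prefix code without enumerating the codebook conventionally relies on canonical codes, where codewords are determined entirely by the sorted codeword lengths. The following is a background sketch of canonical code assignment, not the paper's O(log m) approximate-code algorithm:

```python
def canonical_codes(lengths):
    # Assign canonical codewords from codeword lengths: symbols are
    # processed shortest-length first, and each codeword is the previous
    # one incremented, left-shifted when the length grows.
    order = sorted(range(len(lengths)), key=lambda s: (lengths[s], s))
    codes, code, prev_len = {}, 0, lengths[order[0]]
    for s in order:
        code <<= lengths[s] - prev_len   # lengthen as needed
        prev_len = lengths[s]
        codes[s] = format(code, "0{}b".format(lengths[s]))
        code += 1
    return codes
```

Because the whole code is a function of the length multiset, changing the code mid-message only requires updating a few per-length counters rather than rebuilding an n-entry table.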
"Recursive block structured data compression" — M. Tilgner, M. Ishida, T. Yamaguchi. Proceedings DCC '97 (Data Compression Conference), 25 March 1997. DOI: 10.1109/DCC.1997.582139.
Summary form only given. A simple algorithm for efficient lossless compression of circuit test data with fast decompression is presented. It can easily be converted into a VLSI implementation. The algorithm is based on recursive block-structured run-length coding and compresses at ratios of about 6:1 to 1000:1, higher than most widely known compression techniques.
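The abstract does not detail the recursion, so the following is only a plausible sketch of recursive block-structured run-length coding for sparse test data: a block consisting of a single repeated byte is emitted as one token, and any other block is split in half and coded recursively. The block-size threshold and token format are illustrative assumptions.

```python
MIN_BLOCK = 4

def encode(data):
    # Uniform block -> one run token; tiny mixed block -> literal;
    # otherwise split in half and recurse on each half.
    if len(data) <= MIN_BLOCK or len(set(data)) == 1:
        if data and len(set(data)) == 1:
            return [("run", len(data), data[0])]
        return [("lit", bytes(data))]
    mid = len(data) // 2
    return encode(data[:mid]) + encode(data[mid:])

def decode(tokens):
    out = bytearray()
    for t in tokens:
        if t[0] == "run":
            out.extend(bytes([t[2]]) * t[1])
        else:
            out.extend(t[1])
    return bytes(out)
```

Circuit test vectors are dominated by long constant stretches, which is why a scheme like this can reach very high ratios: an all-zero kilobyte collapses to a single token.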
"Image coding using optimized significance tree quantization" — G. Davis, S. Chawla. Proceedings DCC '97 (Data Compression Conference), 25 March 1997. DOI: 10.1109/DCC.1997.582064.
A number of recent embedded transform coders, including Shapiro's (1993) EZW scheme, Said and Pearlman's (see IEEE Trans. Circuits and Systems for Video Technology, vol.6, no.3, p.243-250, 1996) SPIHT scheme, and Xiong et al.'s (see IEEE Signal Processing Letters, no.11, 1996) EZDCT scheme, employ a common algorithm called significance tree quantization (STQ). Each of these coders has been selected from a large family of significance tree quantizers based on empirical work and a priori knowledge of transform coefficient behavior. We describe an algorithm for selecting a particular form of STQ that is optimized for a given class of images. We apply our optimization procedure to the task of quantizing 8×8 DCT blocks. Our algorithm yields a fully embedded, low-complexity coder with performance from 0.7 to 2.5 dB better than baseline JPEG for standard test images.
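The primitive shared by the STQ family above is the significance test on a tree of coefficients: if every coefficient in a subtree is below the current threshold, the whole subtree is coded with a single "insignificant" symbol. A minimal sketch for one quadtree level over a square block (illustrative only; the tree shapes the paper optimizes over are not specified in the abstract):

```python
def significant(block, r0, c0, size, T):
    # A sub-block is significant at threshold T if it contains any
    # coefficient with magnitude >= T.
    return any(abs(block[r][c]) >= T
               for r in range(r0, r0 + size)
               for c in range(c0, c0 + size))

def significance_map(block, T):
    # One quadtree level: test the four quadrants of a square block.
    half = len(block) // 2
    return [significant(block, r, c, half, T)
            for r in (0, half) for c in (0, half)]
```

Transform coefficients cluster their energy, so at most thresholds most quadrants test insignificant and collapse to single symbols, which is what makes embedded tree coders so cheap at low rates.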