F. Cen, "Design of Punctured LDPC Codes for Rate-Compatible Asymmetric Slepian-Wolf Coding," 2009 Data Compression Conference. doi:10.1109/DCC.2009.25

This paper considers the design of punctured Low-Density Parity-Check (LDPC) codes for rate-compatible asymmetric Slepian-Wolf (SW) coding of correlated binary memoryless sources. A virtual non-uniform channel is employed to model rate-compatible asymmetric SW coding based on the puncturing approach, and the degree distributions of the LDPC codes are then optimized exclusively for the smallest and the largest puncturing ratios. Punctured extended Irregular Repeat-Accumulate (eIRA) codes are introduced and designed as an example to demonstrate the validity of the proposed design method. Simulation results show that the coding efficiency of the designed eIRA codes is better than all previously reported results.
O. Shayevitz and M. Feder, "The Posterior Matching Feedback Scheme for Joint Source-Channel Coding with Bandwidth Expansion," 2009 Data Compression Conference. doi:10.1109/DCC.2009.79

When transmitting a Gaussian source over an AWGN channel with an input power constraint and a quadratic distortion measure, it is well known that optimal performance can be obtained with an analog joint source-channel scalar scheme that merely scales the input and output signals. In the case of bandwidth expansion, a joint source-channel analog scheme attaining optimal performance is no longer simple. However, when feedback is available, a simple, sequential analog linear procedure based on the Schalkwijk-Kailath communication scheme is optimal. Recently, we have introduced a fundamental feedback communication scheme, termed "posterior matching", which generalizes the Schalkwijk-Kailath scheme to arbitrary memoryless channels and input distributions. In this paper, we show how the posterior matching scheme can be adapted to the joint source-channel coding setting with bandwidth expansion and a general distortion measure when feedback is available.
Dana Shapira, "Compressed Transitive Delta Encoding," 2009 Data Compression Conference. doi:10.1109/DCC.2009.46

Given a source file $S$ and two differencing files $\Delta(S,T)$ and $\Delta(T,R)$, where $\Delta(X,Y)$ denotes the delta file of the target file $Y$ with respect to the source file $X$, the objective is to be able to construct $R$. This is intended for the scenario of upgrading software where intermediate releases are missing, or for file system backups, where non-consecutive versions must be recovered. The traditional way is to decompress $\Delta(S,T)$ in order to construct $T$ and then apply $\Delta(T,R)$ to $T$ and obtain $R$. The Compressed Transitive Delta Encoding (CTDE) paradigm, introduced in this paper, is to construct a delta file $\Delta(S,R)$ by working directly on the two given delta files, $\Delta(S,T)$ and $\Delta(T,R)$, without any decompression or use of the base file $S$. A new algorithm for solving CTDE is proposed and its compression performance is compared against the traditional "double delta decompression". Not only does it use constant additional space, as opposed to the traditional method, which uses linear additional memory storage, but experiments show that the size of the delta files involved is reduced by 15% on average.
C. Constantinescu, J. Pieper, and Tiancheng Li, "Block Size Optimization in Deduplication Systems," 2009 Data Compression Conference. doi:10.1109/DCC.2009.51

Data deduplication is a popular dictionary-based compression method in archival storage and backup. The deduplication efficiency ("chunk" matching) improves for smaller chunk sizes; however, the files become highly fragmented, requiring many disk accesses during reconstruction, or "chattiness" in a client-server architecture. Within the sequence of chunks that an object (file) is decomposed into, sub-sequences of adjacent chunks tend to repeat. We exploit this insight to optimize the chunk sizes by joining repeated sub-sequences of small chunks into new "super chunks", with the constraint of achieving practically the same matching performance. We employ suffix arrays to find these repeating sub-sequences and to determine a new encoding that covers the original sequence. With super chunks we significantly reduce fragmentation, improving reconstruction time and the overall deduplication ratio by lowering the amount of metadata (fewer hashes and dictionary entries).
M. Barret, J. Gutzwiller, Isidore Paul Akam Bita, and F. D. Vedova, "Lossy Hyperspectral Images Coding with Exogenous Quasi Optimal Transforms," 2009 Data Compression Conference. doi:10.1109/DCC.2009.8

It is well known in transform coding that the Karhunen-Loève Transform (KLT) can be suboptimal for non-Gaussian sources. However, in many applications using JPEG2000 Part 2 codecs, the KLT is generally considered the optimal linear transform for reducing redundancies between components of hyperspectral images. In previous works, optimal spectral transforms (OST) compatible with the JPEG2000 Part 2 standard have been introduced; they perform better than the KLT but at a heavier computational cost. In this paper, we show that an OST computed on a learning basis consisting of Hyperion hyperspectral images issued from one sensor performs very well, and even better than the KLT, on other images from the same sensor.
Ozgun Y. Bursalioglu, Maria Fresia, G. Caire, and H. Poor, "Joint Source-Channel Coding at the Application Layer," 2009 Data Compression Conference. doi:10.1109/DCC.2009.10

The multicasting of an independent and identically distributed Gaussian source over a binary erasure broadcast channel is considered. This model applies to a one-to-many transmission scenario in which some mechanism at the physical layer delivers information packets with losses represented by erasures, and users are subject to different erasure probabilities. The reconstruction signal-to-noise ratio (SNR) region achieved by concatenating a multiresolution source code with a broadcast channel code is characterized, and four convex optimization problems corresponding to different performance criteria are solved. Each problem defines a particular operating point on the dominant face of the SNR region. Layered joint source-channel codes are constructed based on the concatenation of embedded scalar quantizers with binary raptor encoders. The proposed schemes are shown to operate very close to the theoretical optimum.
I. Tabus and A. Vasilache, "Low Bit Rate Vector Quantization of Outlier Contaminated Data Based on Shells of Golay Codes," 2009 Data Compression Conference. doi:10.1109/DCC.2009.62

In this paper we study how to encode N-long vectors, with N in the range of hundreds, at low bit rates of 0.5 bit per sample or lower. We adopt a vector quantization structure in which an overall gain is encoded with a scalar quantizer and the remaining scaled vector is encoded using a vector quantizer built by combining smaller (length L) binary codes known to be efficient in filling the space, the important examples discussed here being the Golay codes. Because of the typically nonstationary distribution of the long vectors, a piecewise stationary plus contamination model is assumed. The generic solution is to encode the outliers using Golomb-Rice codes and, for each L-long subvector, to encode the vector of absolute values using the nearest neighbor in a chosen shell of a binary {0,1} code, the sign information being transmitted separately. The rate-distortion optimization problem can be organized and solved very efficiently for the unknowns, which include the Hamming weights of the chosen shells for each of the N/L subvectors and the overall gain g. The essential properties that influence the selection of a particular binary code as a building block are its space-filling properties, the number of shells of various Hamming weights (allowing more or less flexibility in the rate-distortion optimization), the closeness of N to a multiple of L, and the existence of a fast nearest-neighbor search on a shell. We show results when using the Golay codes for vector quantization in audio coding applications.
Miguel A. Martínez-Prieto, J. Adiego, F. Sánchez-Martínez, P. Fuente, and Rafael C. Carrasco, "On the Use of Word Alignments to Enhance Bitext Compression," 2009 Data Compression Conference. doi:10.1109/DCC.2009.22

This paper describes a novel approach to compressing bilingual parallel corpora (bitexts). The approach takes advantage of the fact that the two texts that form a bitext are mutual translations. First, the two texts are aligned at both the sentence and the word level. Then, word alignments are used to define biwords, that is, pairs of words, one from each text, that are mutual translations. Finally, a biword-based PPM compressor is applied. The results obtained by compressing the two texts of the bitext together improve on the compression ratios achieved when both texts are independently compressed with a word-based PPM compressor, thus saving storage and transmission costs.
T. Richter and Kil Joong Kim, "A MS-SSIM Optimal JPEG 2000 Encoder," 2009 Data Compression Conference. doi:10.1109/DCC.2009.15

In this work, we present an SSIM-optimal JPEG 2000 rate allocation algorithm. However, our aim is less to improve the visual performance of JPEG 2000 than to study the performance of the SSIM full-reference metric by means beyond correlation measurements. Full-reference image quality metrics assign a quality index to a pair consisting of a reference image and a distorted image. The performance of a metric is then measured by the degree of correlation between the scores obtained from the metric and those from subjective tests. The aim of a rate allocation algorithm is to minimize the distortion created by a lossy image compression scheme under a rate constraint. Noting this relation between objective function and performance evaluation allows us to define an alternative approach to evaluating the usefulness of a candidate metric: we judge the quality of a metric by its ability to define an objective function for rate control purposes, and evaluate images compressed with this scheme subjectively. It turns out that deficiencies of image quality metrics become much more easily visible, even in the literal sense, than under traditional correlation experiments. Our candidate metric in this work is the SSIM index proposed by Sheikh and Bovik, which is simple enough to be implemented efficiently in rate control algorithms yet correlates better with visual quality than MSE; our candidate compression scheme is the highly flexible JPEG 2000 standard.
S. Milani, Carlos Cruz-Reyes, J. Kari, and G. Calvagno, "A Binary Image Scalable Coder Based on Reversible Cellular Automata Transform and Arithmetic Coding," 2009 Data Compression Conference. doi:10.1109/DCC.2009.59

The paper presents an efficient scalable coding approach for bi-level images that relies on reversible non-linear transformations performed by subclasses of Cellular Automata. At each transformation stage the input image is converted into four subimages which are coded separately. In this work we delineate an effective strategy for the entropy coder to code the transformed image into a binary bit stream that outperforms the compression results previously obtained and compares well with the standard JBIG. Experimental results show that our method proves to be more efficient for images where black pixels lie within a connected region and for multiple decomposition levels.