Pub Date : 2014-12-04 DOI: 10.1109/IALP.2014.6973507
Xiao-ru Tan, Lijiao Yang
Corpus concordancing is a popular research topic. Retrieving data from a corpus by supplying non-adjacent keywords is a widely used function. However, the precision of the retrieval results is not high, because the machine cannot recognize the relationship between the non-adjacent keywords. To address this problem, this paper proposes a rule-based method for the “Yi...Jiu...” construction that excludes unrelated data even when the data include both keywords. Experiments show that precision is close to 82%.
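A minimal sketch of the idea: a naive search returns any line containing both keywords, while a rule (here, a simple distance constraint — the paper's actual rules for the “Yi...Jiu...” construction are more elaborate and not reproduced) filters out unrelated hits. The corpus lines and gap limit are illustrative assumptions.

```python
import re

def concordance(lines, first, second, max_gap=10):
    """Return lines where `first` precedes `second` within `max_gap` characters.

    The distance constraint is one simple example of a rule that excludes
    data containing both keywords in unrelated positions.
    """
    pattern = re.compile(f"{re.escape(first)}.{{0,{max_gap}}}?{re.escape(second)}")
    return [line for line in lines if pattern.search(line)]

corpus = [
    "一到周末就去图书馆",        # "Yi...Jiu..." with a short gap: kept
    "一个人在家,晚饭后就睡了",   # both keywords present but far apart: excluded
]
hits = concordance(corpus, "一", "就", max_gap=5)
```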
{"title":"The retrieval research of non-adjacent keywords in Chinese corpus — A case study of “Yi…Jiu…” construction","authors":"Xiao-ru Tan, Lijiao Yang","doi":"10.1109/IALP.2014.6973507","DOIUrl":"https://doi.org/10.1109/IALP.2014.6973507","url":null,"abstract":"Corpus Concordancing is a popular research topic. The function of retrieving data from corpus by providing non-adjacent keywords is widely used by users. However, the precision of retrieval results is not very high because the machine can't recognize the relationship of the non-adjacent keywords. To deal with this problem, this paper proposed a rule-based method for the “Yi...Jiu...” construction, which could exclude the unrelated data, even though the data include the keywords. The experiments show that the precision is close to 82%.","PeriodicalId":117334,"journal":{"name":"2014 International Conference on Asian Language Processing (IALP)","volume":"59 40","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120814048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-12-04 DOI: 10.1109/IALP.2014.6973510
Purushotam G. Radadia, H. Patil
Singer IDentification (SID) is a challenging problem in Music Information Retrieval (MIR) systems. Instrumental accompaniment, the quality of the recording apparatus, and other singing voices (in a chorus) make SID a very difficult research problem. In this paper, we propose an SID system on a large database of 500 Hindi (Bollywood) songs using state-of-the-art Mel Frequency Cepstral Coefficients (MFCC) and Cepstral Mean Subtracted (CMS) features. We compare the performance of a 3rd-order polynomial classifier and a Gaussian Mixture Model (GMM). With the 3rd-order polynomial classifier, we achieved SID accuracies of 78% and 89.5% (and Equal Error Rates (EER) of 6.75% and 6.42%) for MFCC and CMSMFCC, respectively. Furthermore, score-level fusion of MFCC and CMSMFCC reduced EER by 0.95% compared with MFCC alone. The GMM, on the other hand, gave an SID accuracy of 70.75% for both MFCC and CMSMFCC. Finally, we found that CMS-based features are effective in alleviating the album effect in SID.
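Cepstral mean subtraction itself is a one-line operation: subtract each coefficient's mean over the utterance. Removing this stationary component suppresses convolutional channel effects (e.g. the recording apparatus), which is why CMS features help against the album effect. The toy MFCC matrix below is illustrative.

```python
import numpy as np

def cepstral_mean_subtraction(mfcc):
    """Apply CMS to an (n_frames, n_coeffs) MFCC matrix.

    Subtracting the per-coefficient mean over the frames removes the
    stationary convolutional channel component from the features.
    """
    return mfcc - mfcc.mean(axis=0, keepdims=True)

frames = np.array([[1.0, 2.0], [3.0, 4.0]])  # 2 frames, 2 coefficients
cms = cepstral_mean_subtraction(frames)
```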
{"title":"A Cepstral Mean Subtraction based features for Singer Identification","authors":"Purushotam G. Radadia, H. Patil","doi":"10.1109/IALP.2014.6973510","DOIUrl":"https://doi.org/10.1109/IALP.2014.6973510","url":null,"abstract":"Singer IDentification (SID) is a very challenging problem in Music Information Retrieval (MIR) system. Instrumental accompaniments, quality of recording apparatus and other singing voices (in chorus) make SID very difficult and challenging research problem. In this paper, we propose SID system on large database of 500 Hindi (Bollywood) songs using state-of-the-art Mel Frequency Cepstral Coefficients (MFCC) and Cepstral Mean Subtracted (CMS) features. We compare the performance of 3rd order polynomial classifier and Gaussian Mixture Model (GMM). With 3rd order polynomial classifier, we achieved % SID accuracy of 78 % and 89.5 % (and Equal Error Rate (EER) of 6.75 % and 6.42 %) for MFCC and CMSMFCC, respectively. Furthermore, score-level fusion of MFCC and CMSMFCC reduced EER by 0.95 % than MFCC alone. On the other hand, GMM gave % SID accuracy of 70.75 % for both MFCC and CMSMFCC. Finally, we found that CMS-based features are effective to alleviate album effect in SID problem.","PeriodicalId":117334,"journal":{"name":"2014 International Conference on Asian Language Processing (IALP)","volume":"519 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116254988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-12-04 DOI: 10.1109/IALP.2014.6973489
Apurbalal Senapati, Utpal Garain
This paper analyzes one-expressions in Bengali and shows their effectiveness for machine translation. The characteristics of one-expressions are studied in a 177-million-word corpus. A classification scheme is proposed for grouping the one-expressions. The features contributing to the classification are identified, and a CRF-based classifier is trained on an author-generated annotated dataset containing 2,006 instances of one-expressions. The classifier's performance is tested on a test set (containing 300 instances of Bengali one-expressions) that is disjoint from the training data. Evaluation shows that the classifier correctly classifies the one-expressions in 75% of cases. Finally, the utility of this classification task is investigated for Bengali-English machine translation. Translation accuracy improves from 39% (Google Translate) to 60% (the proposed approach), and this improvement is statistically significant. All the annotated datasets (there were none before) are made freely available to facilitate further research on this topic.
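A CRF classifier of this kind consumes per-token feature dictionaries. The sketch below shows generic sequence-labeling features plus one crude morphological cue for one-forms; the tokens and the `is_one_form` heuristic are illustrative assumptions, not the paper's actual feature set.

```python
def token_features(tokens, i):
    """Feature dictionary for token i, as a CRF implementation would consume.

    `is_one_form` is a hypothetical surface cue for Bengali one-expressions
    (romanized here); real features would use the identified linguistic cues.
    """
    tok = tokens[i]
    return {
        "word": tok,
        "prev": tokens[i - 1] if i > 0 else "<s>",
        "next": tokens[i + 1] if i < len(tokens) - 1 else "</s>",
        "is_one_form": tok.startswith("ek"),
    }

feats = token_features(["ekjon", "lok", "eseche"], 0)
```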
{"title":"One-expression classification in Bengali and its role in Bengali-English machine translation","authors":"Apurbalal Senapati, Utpal Garain","doi":"10.1109/IALP.2014.6973489","DOIUrl":"https://doi.org/10.1109/IALP.2014.6973489","url":null,"abstract":"This paper attempts to analyze one-expressions in Bengali and shows its effectiveness for machine translation. The characteristics of one-expressions are studied in 177 million word corpus. A classification scheme has been proposed for the grouping the one-expressions. The features contributing towards the classification are identified and a CRF-based classifier is trained on an authors' generated annotated dataset containing 2006 instances of one-expressions. The classifier's performance is tested on a test set (containing 300 instances of Bengali one-expressions) which is different from the training data. Evaluation shows that the classifier can correctly classify the one-expressions in 75% cases. Finally, the utility of this classification task is investigated for machine translation (Bengali-English). The translation accuracy is improved from 39% (by Google translator) to 60% (by the proposed approach) and this improvement is found to be statistically significant. All the annotated datasets (there was none before) are made free to facilitate further research on this topic.","PeriodicalId":117334,"journal":{"name":"2014 International Conference on Asian Language Processing (IALP)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124588580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-12-04 DOI: 10.1109/IALP.2014.6973520
A. Luthfi, Bayu Distiawan Trisedya, R. Manurung
This paper describes the development of an Indonesian NER system using online data from Wikipedia and DBPedia. The system is based on the Stanford NER system [8] and utilizes training documents constructed automatically from Wikipedia. Each entity, i.e., a word or phrase that has a hyperlink, in the Wikipedia documents is tagged according to information obtained from DBPedia. In this first version, we are only interested in three entity types: Person, Place, and Organization. The system is evaluated using cross-fold validation as well as a manually annotated gold standard. Under cross-validation, our Indonesian NER obtains precision and recall above 90%, whereas evaluation against the gold standard shows high precision but very low recall.
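The core trick — deriving NER labels from hyperlinked spans via DBPedia types — can be sketched as a lookup over two mappings. The type mapping, tokens, and resource names below are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical DBPedia-type-to-NER-tag mapping for the three entity types.
DBPEDIA_TYPE_TO_TAG = {
    "dbo:Person": "PERSON",
    "dbo:Place": "PLACE",
    "dbo:Organisation": "ORGANIZATION",
}

def label_tokens(tokens, links, entity_types):
    """tokens: words; links: {surface form: DBPedia resource};
    entity_types: {resource: dbo type}. Returns one NER tag per token,
    'O' for tokens without a hyperlink or without a mapped type."""
    tags = []
    for tok in tokens:
        resource = links.get(tok)
        dbo_type = entity_types.get(resource)
        tags.append(DBPEDIA_TYPE_TO_TAG.get(dbo_type, "O"))
    return tags

tokens = ["Soekarno", "lahir", "di", "Surabaya"]
links = {"Soekarno": "Soekarno", "Surabaya": "Surabaya"}   # hyperlinked spans
types = {"Soekarno": "dbo:Person", "Surabaya": "dbo:Place"}
tags = label_tokens(tokens, links, types)
```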
{"title":"Building an Indonesian named entity recognizer using Wikipedia and DBPedia","authors":"A. Luthfi, Bayu Distiawan Trisedya, R. Manurung","doi":"10.1109/IALP.2014.6973520","DOIUrl":"https://doi.org/10.1109/IALP.2014.6973520","url":null,"abstract":"This paper describes the development of an Indonesian NER system using online data such as Wikipedia 1 and DBPedia 2. The system is based on the Stanford NER system [8] and utilizes training documents constructed automatically from Wikipedia. Each entity, i.e. word or phrase that has a hyperlink, in the Wikipedia documents are tagged according to information that is obtained from DBPedia. In this very first version, we are only interested in three entities, namely: Person, Place, and Organization. The system is evaluated using cross fold validation and also evaluated using a gold standard that was manually annotated. Using cross validation evaluation, our Indonesian NER managed to obtain precision and recall values above 90%, whereas the evaluation using gold standard shows that the Indonesian NER achieves high precision but very low recall.","PeriodicalId":117334,"journal":{"name":"2014 International Conference on Asian Language Processing (IALP)","volume":"205 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121993509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-12-04 DOI: 10.1109/IALP.2014.6973468
A. Al-Thubaity, Marwa Khan, Saad Alotaibi, Badriyya Alonazi
The availability of machine-readable Arabic special domain text in digital libraries, websites of Arabic university publications, and refereed journals fosters numerous interesting studies and applications. Among these applications is automatic term extraction from special domain corpora. These extracted terms can serve as a foundation for other applications and research, such as special domain dictionary building, terminology resource creation, and special domain ontology construction. Our literature survey shows a lack of such studies for Arabic special domain text; moreover, the few studies that have been identified use complex and computationally expensive methods. In this study, we use two basic methods to automatically extract terms from Arabic special domain corpora. Our methods are based on two simple heuristics. The most frequent words and n-grams in special domain corpora are typically terms, which themselves are typically bounded by functional words. We applied our methods on a corpus of applied Arabic linguistics. We obtained results comparable to those of other Arabic term extraction studies in that they exhibited 87% accuracy when only terms strictly pertaining to the field of applied Arabic linguistics were considered, and 93.7% when related terms were included.
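The two heuristics translate directly into code: split the token stream at function words, then count the n-grams of the remaining content-word runs as term candidates. The sketch uses English stand-ins for readability; the paper works on Arabic text with Arabic function words.

```python
import re
from collections import Counter

# Illustrative function-word list (the paper would use Arabic function words).
FUNCTION_WORDS = {"the", "of", "in", "and", "is", "a"}

def candidate_terms(text, max_n=3):
    """Split tokens at function words; count n-grams of content-word runs."""
    tokens = re.findall(r"\w+", text.lower())
    runs, current = [], []
    for tok in tokens:
        if tok in FUNCTION_WORDS:
            if current:
                runs.append(current)
            current = []
        else:
            current.append(tok)
    if current:
        runs.append(current)
    counts = Counter()
    for run in runs:
        for n in range(1, max_n + 1):
            for i in range(len(run) - n + 1):
                counts[" ".join(run[i:i + n])] += 1
    return counts

c = candidate_terms("the corpus linguistics of the corpus linguistics community")
```

The most frequent candidates are then proposed as terms; a frequency threshold would follow in a full pipeline.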
{"title":"Automatic Arabic term extraction from special domain corpora","authors":"A. Al-Thubaity, Marwa Khan, Saad Alotaibi, Badriyya Alonazi","doi":"10.1109/IALP.2014.6973468","DOIUrl":"https://doi.org/10.1109/IALP.2014.6973468","url":null,"abstract":"The availability of machine-readable Arabic special domain text in digital libraries, websites of Arabic university publications, and refereed journals fosters numerous interesting studies and applications. Among these applications is automatic term extraction from special domain corpora. These extracted terms can serve as a foundation for other applications and research, such as special domain dictionary building, terminology resource creation, and special domain ontology construction. Our literature survey shows a lack of such studies for Arabic special domain text; moreover, the few studies that have been identified use complex and computationally expensive methods. In this study, we use two basic methods to automatically extract terms from Arabic special domain corpora. Our methods are based on two simple heuristics. The most frequent words and n-grams in special domain corpora are typically terms, which themselves are typically bounded by functional words. We applied our methods on a corpus of applied Arabic linguistics. We obtained results comparable to those of other Arabic term extraction studies in that they exhibited 87% accuracy when only terms strictly pertaining to the field of applied Arabic linguistics were considered, and 93.7% when related terms were included.","PeriodicalId":117334,"journal":{"name":"2014 International Conference on Asian Language Processing (IALP)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125886994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-12-04 DOI: 10.1109/IALP.2014.6973512
Maha Alrabiah, A. Al-Salman, E. Atwell
Distributional lexical semantics is an empirical approach mainly concerned with modeling word meanings using distributional statistics gathered from very large corpora. It builds on the Distributional Hypothesis of Zellig Harris (1970), which states that differences in word meaning are associated with differences in distribution in text. These differences in meaning originate from two kinds of relations between words: syntagmatic and paradigmatic. Syntagmatic relations are linear, combinatorial relations between words that co-occur in sequential text, while paradigmatic relations are substitutional relations between words that occur in the same contexts and share neighboring words but do not co-occur in the same text. In this paper, we present a new association measure, the Refined MI, for measuring syntagmatic relations between words, together with an experimental study evaluating its performance. The measure showed outstanding results in identifying significant co-occurrences in Classical Arabic text.
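For reference, the classical baseline that the Refined MI improves on is pointwise mutual information, PMI(x, y) = log2(p(x, y) / (p(x) p(y))). The sketch below computes this standard form; the refined formula itself is not reproduced here, and the toy counts are illustrative.

```python
import math
from collections import Counter

def pmi(bigram_counts, unigram_counts, total, pair):
    """Standard pointwise mutual information for a word pair.

    Positive values indicate the pair co-occurs more often than chance,
    i.e. a candidate syntagmatic relation.
    """
    x, y = pair
    p_xy = bigram_counts[pair] / total
    p_x = unigram_counts[x] / total
    p_y = unigram_counts[y] / total
    return math.log2(p_xy / (p_x * p_y))

unigrams = Counter({"strong": 2, "tea": 2})
bigrams = Counter({("strong", "tea"): 2})
score = pmi(bigrams, unigrams, total=4, pair=("strong", "tea"))
```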
{"title":"The refined MI: A significant improvement to mutual information","authors":"Maha Alrabiah, A. Al-Salman, E. Atwell","doi":"10.1109/IALP.2014.6973512","DOIUrl":"https://doi.org/10.1109/IALP.2014.6973512","url":null,"abstract":"Distributional lexical semantics is an empirical approach that is mainly concerned with modeling words' meanings using word distribution statistics gathered from very large corpora. It is basically built on the Distributional Hypothesis by Zellig Harris in 1970, which states that the difference in words' meanings is associated with the difference in their distribution in text. This difference in meaning originates from two kinds of relations between words, which are syntagmatic and paradigmatic relations. Syntagmatic relations are linear combinatorial relations that are established between words that co-occur together in sequential text; while paradigmatic relations are substitutional relations that are established between words that occur in the same context, share neighboring words, but do not co-occur in the same text. In this paper, we present a new association measure, the Refined MI, for measuring syntagmatic relations between words. In addition, an experimental study to evaluate the performance of the proposed measure is presented. The measure showed outstanding results in identifying significant co-occurrences from Classical Arabic text.","PeriodicalId":117334,"journal":{"name":"2014 International Conference on Asian Language Processing (IALP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130943329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-12-04 DOI: 10.1109/IALP.2014.6973488
Arjun Das, Utpal Garain, Apurbalal Senapati
This paper presents a pioneering attempt at automatic detection of drops in Bengali. The dominant drops in Bengali are subject, object, and verb drops. Bengali is a pro-drop language, and pro-drops fall under the subject/object drops on which this research concentrates. The detection algorithm makes use of off-the-shelf Bengali NLP tools: a POS tagger, a chunker, and a dependency parser. Simple linguistic rules are first applied to quickly annotate a dataset of 8,455 sentences, which are then manually checked. The corrected dataset is used to train two classifiers that classify a sentence as containing a drop or not. Features previously used by other researchers have been considered. Both classifiers show comparable overall performance. As a by-product, the study produces another useful NLP resource (apart from the drop-annotated dataset): a classification of Bengali verbs (all morphological variants of 881 root verbs) by transitivity, which in turn is used as a feature by the classifiers.
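One simple linguistic rule in the spirit of the paper's first annotation pass might flag a sentence as a subject-drop candidate when no nominative noun or pronoun appears before the finite verb. The tag set and romanized tokens below are hypothetical, not the paper's actual rules or tagger output.

```python
def subject_drop_candidate(tagged):
    """tagged: list of (token, pos) pairs from a POS tagger.

    Returns True if the finite verb is reached without seeing a surface
    subject (a nominative noun/pronoun) — a candidate subject drop.
    """
    for token, pos in tagged:
        if pos in {"NN_NOM", "PRP_NOM"}:   # a surface subject exists
            return False
        if pos == "VF":                    # reached the finite verb
            return True
    return False

# "se khabe" (he will eat) has a subject; "bhat khabe" (will eat rice) drops it.
has_subject = subject_drop_candidate([("se", "PRP_NOM"), ("khabe", "VF")])
drops_subject = subject_drop_candidate([("bhat", "NN_ACC"), ("khabe", "VF")])
```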
{"title":"Automatic detection of subject/object drops in Bengali","authors":"Arjun Das, Utpal Garain, Apurbalal Senapati","doi":"10.1109/IALP.2014.6973488","DOIUrl":"https://doi.org/10.1109/IALP.2014.6973488","url":null,"abstract":"This paper presents a pioneering attempt for automatic detection of drops in Bengali. The dominant drops in Bengali refer to subject, object and verb drops. Bengali is a pro-drop language and pro-drops fall under subject/object drops which this research concentrates on. The detection algorithm makes use of off-the-shelf Bengali NLP tools like POS tagger, chunker and a dependency parser. Simple linguistic rules are initially applied to quickly annotate a dataset of 8,455 sentences which are then manually checked. The corrected dataset is then used to train two classifiers that classify a sentence to either one with a drop or no drop. The features previously used by other researchers have been considered. Both the classifiers show comparable overall performance. As a by-product, the current study generates another (apart from the drop-annotated dataset) useful NLP resource, i.e. classification of Bengali verbs (all morphological variants of 881 root verbs) as per their transitivity which in turn used as a feature by the classifiers.","PeriodicalId":117334,"journal":{"name":"2014 International Conference on Asian Language Processing (IALP)","volume":"8 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113978152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-12-04 DOI: 10.1109/IALP.2014.6973493
J. Lee, Y. Kong
We analyze the use of “imagistic language” and “propositional language” in Classical Chinese poems. It is commonly held that the lines in the middle of a poem tend to be imagistic, while those at the end tend to be propositional. Using features proposed by two literary scholars, Yu-kung Kao and Tsu-lin Mei, we report on the distribution of the imagistic and propositional styles in a treebank of Classical Chinese poems. We conclude that imagistic language is indeed rarely found at the end of poems, but propositional language may be more present in the middle of the poem than previously assumed.
{"title":"Imagistic and propositional languages in classical Chinese poetry","authors":"J. Lee, Y. Kong","doi":"10.1109/IALP.2014.6973493","DOIUrl":"https://doi.org/10.1109/IALP.2014.6973493","url":null,"abstract":"We analyze the use of “imagistic lan-guage” and “propositional language” in Classical Chinese poems. It is commonly held that the lines in the middle of a poem tend to be imagistic, while those at the end tend to be propositional. Using features proposed by two literary scholars, Yu-kung Kao and Tsu-lin Mei, we report on the distribution of the imagistic and propositional styles in a tree-bank of Classical Chinese poems. We conclude that imagistic language is indeed rarely found at the end of poems, but propositional language may be more present in the middle of the poem than previously assumed.","PeriodicalId":117334,"journal":{"name":"2014 International Conference on Asian Language Processing (IALP)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134218149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-12-04 DOI: 10.1109/IALP.2014.6973477
H. Cao, Sen Zhang
Emotional tendency refers to people's attitudes towards people or things. It is a kind of subjective judgment and can be divided along several dimensions, such as praise vs. criticism, positive vs. negative, or good vs. bad. Judging the emotional tendency of emotional words, and the problem of assigning emotional words a weight, are the basis of text-tendency analysis. The study of semantic weight has been widely used in text-tendency analysis, public-sentiment monitoring, and text classification. This paper extracts words from the glossary concept library (glossary.dat) of HowNet and polishes the library. To make the calculation of emotional words' weights more accurate, the paper studies synonyms and antonyms, as well as manual selection of seed words. Experiments show that the method attains the expected results in sentiment judgment, weight calculation, and application to text analysis.
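The seed-word idea can be sketched as a small propagation over synonym and antonym links: seeds carry a manually assigned weight, synonyms inherit it, antonyms receive the opposite sign. This is a generic illustration under assumed link dictionaries, not the paper's actual HowNet-based computation.

```python
def propagate(seeds, synonyms, antonyms, iterations=2):
    """seeds: {word: weight}; synonyms/antonyms: {word: set of words}.

    Spreads polarity weights outward from the seed words; words reached
    first keep their weight (setdefault), so seeds are never overwritten.
    """
    weights = dict(seeds)
    for _ in range(iterations):
        for word, w in list(weights.items()):
            for syn in synonyms.get(word, ()):
                weights.setdefault(syn, w)    # same polarity as the source
            for ant in antonyms.get(word, ()):
                weights.setdefault(ant, -w)   # opposite polarity
    return weights

weights = propagate({"good": 1.0}, {"good": {"fine"}}, {"good": {"bad"}})
```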
{"title":"Research on building Chinese semantic lexicon based on the concept definition of HowNet","authors":"H. Cao, Sen Zhang","doi":"10.1109/IALP.2014.6973477","DOIUrl":"https://doi.org/10.1109/IALP.2014.6973477","url":null,"abstract":"Emotional tendency refers to people's attitude towards people or things. It is a kind of subjective judgments and it can be divided into several parts, such as praise or criticize, positive or negative, good or bad. The judgment of emotional words' emotional tendency and the problem of how to give emotional words a weight are the base of text tendency analysis. The study of semantic weight has been widely used in text tendency analysis, public sentiment, as well as text classification. This essay extracts words from glossary concept library (refer to glossary.dat) of HowNet and polish the library. In order to make the calculation study of the emotional words' weight more accurately, the paper studies synonyms and antonyms, as well seed words selection manually. The experiment proved the method attains the results expected in sentiment judgment, weight calculation and application in text analysis.","PeriodicalId":117334,"journal":{"name":"2014 International Conference on Asian Language Processing (IALP)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125169293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2014-12-04 DOI: 10.1109/IALP.2014.6973490
Yinggong Zhao, Shujian Huang, Xinyu Dai, Jianbing Zhang, Jiajun Chen
Continuous-space word representations have demonstrated their effectiveness in many natural language processing (NLP) tasks. The basic idea of embedding training is to update the embedding matrix based on each word's context. However, such context has been constrained to a fixed window of surrounding words, which we believe is not sufficient to represent the actual relations of a given center word. In this work we extend previous approaches by learning distributed representations from the dependency structure of a sentence, which can capture long-distance relations. Such contexts learn better word semantics, as demonstrated on the Semantic-Syntactic Word Relationship task. Competitive results are also achieved by the dependency embeddings on the WordSim-353 task.
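The shift from a window context to a dependency context can be sketched as generating (word, context) training pairs from parser arcs, so a word's contexts are its syntactic neighbors rather than linearly adjacent words. The sentence, arc labels, and pair format below are illustrative assumptions, not the paper's exact scheme.

```python
def dependency_contexts(arcs):
    """arcs: (head, label, dependent) triples from a dependency parser.

    Each arc yields two directed (word, context) pairs: the head sees the
    labeled dependent, and the dependent sees the head via the inverse
    relation — so long-distance syntactic neighbors become contexts.
    """
    pairs = []
    for head, label, dep in arcs:
        pairs.append((head, f"{label}_{dep}"))    # head -> dependent
        pairs.append((dep, f"{label}I_{head}"))   # dependent -> head (inverse)
    return pairs

# "scientist discovers star" with a hypothetical parse
arcs = [("discovers", "nsubj", "scientist"), ("discovers", "dobj", "star")]
pairs = dependency_contexts(arcs)
```

These pairs would then feed a standard embedding trainer in place of window-based pairs.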
{"title":"Learning word embeddings from dependency relations","authors":"Yinggong Zhao, Shujian Huang, Xinyu Dai, Jianbing Zhang, Jiajun Chen","doi":"10.1109/IALP.2014.6973490","DOIUrl":"https://doi.org/10.1109/IALP.2014.6973490","url":null,"abstract":"Continuous-space word representation has demonstrated its effectiveness in many natural language pro-cessing(NLP) tasks. The basic idea for embedding training is to update embedding matrix based on its context. However, such context has been constrained on fixed surrounding words, which we believe are not sufficient to represent the actual relations for given center word. In this work we extend previous approach by learning distributed representations from dependency structure of a sentence which can capture long distance relations. Such context can learn better semantics for words, which is proved on Semantic-Syntactic Word Relationship task. Besides, competitive result is also achieved for dependency embeddings on WordSim-353 task.","PeriodicalId":117334,"journal":{"name":"2014 International Conference on Asian Language Processing (IALP)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125554127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}