Pub Date: 2019-11-01 | DOI: 10.1109/IALP48816.2019.9037705
Rinaldi Andrian Rahmanda, M. Adriani, Dipta Tanaya
This study presents an approach to generating a bilingual language model for a cross-language information retrieval (CLIR) task. Language models for Bahasa Indonesia and English are created from a bilingual parallel corpus, and the bilingual language model is then built by learning a mapping between the Indonesian and English models with a Multilayer Perceptron. Query expansion is also used to boost retrieval results, via pre-Bilingual Mapping, post-Bilingual Mapping, and hybrid approaches. The experiments show that the implemented system, with the addition of pre-Bilingual Mapping query expansion, improves the performance of the CLIR task.
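The mapping step described above can be sketched in miniature. Everything here is invented for illustration: the 2-D toy "embeddings", the word pairs, and a plain linear map trained by gradient descent standing in for the paper's Multilayer Perceptron.

```python
# Toy sketch: learn a mapping from an "Indonesian" embedding space to an
# "English" embedding space. A linear map stands in for the paper's MLP.

def train_linear_map(pairs, lr=0.1, epochs=500):
    """Fit y ~ W x for (x, y) embedding pairs via stochastic gradient descent."""
    W = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(epochs):
        for x, y in pairs:
            pred = [W[0][0] * x[0] + W[0][1] * x[1],
                    W[1][0] * x[0] + W[1][1] * x[1]]
            err = [pred[0] - y[0], pred[1] - y[1]]
            for i in range(2):
                for j in range(2):
                    W[i][j] -= lr * err[i] * x[j]
    return W

# Invented toy "bilingual" pairs: Indonesian vector -> English vector.
pairs = [([1.0, 0.0], [0.0, 1.0]),   # e.g. "makan" -> "eat"
         ([0.0, 1.0], [1.0, 0.0])]   # e.g. "buku"  -> "book"
W = train_linear_map(pairs)

# Map an "Indonesian" query vector into the "English" space.
mapped = [W[0][0] * 1.0 + W[0][1] * 0.0,
          W[1][0] * 1.0 + W[1][1] * 0.0]
```

In the paper's setting the mapped query vector would then be matched against English document representations; the hybrid query-expansion variants expand the query before and/or after this mapping.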
{"title":"Cross Language Information Retrieval Using Parallel Corpus with Bilingual Mapping Method","authors":"Rinaldi Andrian Rahmanda, M. Adriani, Dipta Tanaya","doi":"10.1109/IALP48816.2019.9037705","DOIUrl":"https://doi.org/10.1109/IALP48816.2019.9037705","url":null,"abstract":"This study presents an approach to generating a bilingual language model for a cross-language information retrieval (CLIR) task. Language models for Bahasa Indonesia and English are created from a bilingual parallel corpus, and the bilingual language model is then built by learning a mapping between the Indonesian and English models with a Multilayer Perceptron. Query expansion is also used to boost retrieval results, via pre-Bilingual Mapping, post-Bilingual Mapping, and hybrid approaches. The experiments show that the implemented system, with the addition of pre-Bilingual Mapping query expansion, improves the performance of the CLIR task.","PeriodicalId":208066,"journal":{"name":"2019 International Conference on Asian Language Processing (IALP)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116701374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-11-01 | DOI: 10.1109/IALP48816.2019.9037678
Taro Tada, Kazuhide Yamamoto
A radiology report is a medical document based on an examination image in a hospital. However, the preparation of this report is a burden on busy physicians. To support them, a retrieval system of past documents to prepare radiology reports is required. In recent years, distributed representation has been used in various NLP tasks and its usefulness has been demonstrated. However, there is not much research about Japanese medical documents that use distributed representations. In this study, we investigate preprocessing on a retrieval system with a distributed representation of the radiology report, as a first step. As a result, we confirmed that in word segmentation using Morphological analyzer and dictionaries, medical terms in radiology reports are not handled as long nouns, but are more effective as shorter nouns like subwords. We also confirmed that text segmentation by SentencePiece to obtain sentence distributed representation reflects more sentence characteristics. Furthermore, by removing some phrases from the radiology report based on frequency, we were able to reflect the characteristics of the document and avoid unnecessary high similarity between documents. It was confirmed that preprocessing was effective in this task.
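The frequency-based phrase removal mentioned above can be sketched with document frequency alone. The toy "reports" and the 80% threshold are invented; the paper's actual frequency criterion may differ.

```python
from collections import Counter

# Toy sketch of frequency-based phrase removal: tokens that occur in
# nearly every report (here, >= 80% of documents) carry little
# document-specific information, so they are dropped before computing
# document similarity. The reports are invented stand-ins.
reports = [
    ["no", "abnormality", "detected", "in", "lung"],
    ["no", "abnormality", "detected", "in", "liver"],
    ["mass", "detected", "in", "lung"],
    ["no", "abnormality", "in", "kidney"],
]

doc_freq = Counter()
for doc in reports:
    doc_freq.update(set(doc))  # count documents, not occurrences

threshold = 0.8 * len(reports)
boilerplate = {w for w, c in doc_freq.items() if c >= threshold}
filtered = [[w for w in doc if w not in boilerplate] for doc in reports]
```

Removing such near-universal phrases is what keeps two unrelated reports from looking artificially similar.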
{"title":"Effect of Preprocessing for Distributed Representations: Case Study of Japanese Radiology Reports","authors":"Taro Tada, Kazuhide Yamamoto","doi":"10.1109/IALP48816.2019.9037678","DOIUrl":"https://doi.org/10.1109/IALP48816.2019.9037678","url":null,"abstract":"A radiology report is a medical document based on an examination image taken in a hospital. Preparing these reports is a burden on busy physicians, so a system for retrieving past documents to support report writing is needed. In recent years, distributed representations have been used in various NLP tasks and their usefulness has been demonstrated, yet there is little research on Japanese medical documents that uses them. As a first step, we investigate preprocessing for a retrieval system based on distributed representations of radiology reports. We confirmed that, in word segmentation with a morphological analyzer and dictionaries, medical terms in radiology reports are handled more effectively as shorter, subword-like units than as long nouns. We also confirmed that text segmentation with SentencePiece yields sentence representations that better reflect sentence characteristics. Furthermore, by removing some phrases from the reports based on frequency, we were able to preserve document-specific characteristics and avoid spuriously high similarity between documents. These results confirm that preprocessing is effective for this task.","PeriodicalId":208066,"journal":{"name":"2019 International Conference on Asian Language Processing (IALP)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125064691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-11-01 | DOI: 10.1109/IALP48816.2019.9037685
Hui Feng, Jie Lian, Ying Zhao
Under the guidance of the Theory of Multiple Intelligences, this study examines whether music training can improve English stress production among Chinese English learners without a music background. The major findings are as follows. (1) Music training has a significant influence on stress production by Chinese English learners. Specifically, after music training, the training group showed clear improvement in using pitch and intensity to distinguish stressed and unstressed syllables in disyllabic pseudowords. Moreover, the training group’s accuracy in producing unfamiliar words increased by 11.5% on average, while that of the control group changed little. However, little effect of music training on the duration proportion of stressed syllables was found in this experiment. (2) Chinese English learners’ perception of music can be positively transferred to their production of English lexical stress. These findings provide further evidence for the effect of music training on the production of English lexical stress and suggest a method for Chinese English learners to improve their English pronunciation.
{"title":"Effect of Music Training on the Production of English Lexical Stress by Chinese English Learners","authors":"Hui Feng, Jie Lian, Ying Zhao","doi":"10.1109/IALP48816.2019.9037685","DOIUrl":"https://doi.org/10.1109/IALP48816.2019.9037685","url":null,"abstract":"Under the guidance of the Theory of Multiple Intelligences, this study examines whether music training can improve English stress production among Chinese English learners without a music background. The major findings are as follows. (1) Music training has a significant influence on stress production by Chinese English learners. Specifically, after music training, the training group showed clear improvement in using pitch and intensity to distinguish stressed and unstressed syllables in disyllabic pseudowords. Moreover, the training group’s accuracy in producing unfamiliar words increased by 11.5% on average, while that of the control group changed little. However, little effect of music training on the duration proportion of stressed syllables was found in this experiment. (2) Chinese English learners’ perception of music can be positively transferred to their production of English lexical stress. These findings provide further evidence for the effect of music training on the production of English lexical stress and suggest a method for Chinese English learners to improve their English pronunciation.","PeriodicalId":208066,"journal":{"name":"2019 International Conference on Asian Language Processing (IALP)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122808683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-11-01 | DOI: 10.1109/IALP48816.2019.9037716
Lin Lin, Tao-Hsing Chang, Fu-Yuan Hsu
Standardized tests are an important tool in education. During test preparation, the difficulty of each test item must be defined, which has previously relied largely on expert validation or pretesting, both requiring considerable labor and cost. These problems can be overcome by using machines to predict item difficulty. In this study, long short-term memory (LSTM) networks are used to predict test item difficulty in reading comprehension. Experimental results show that the proposed method achieves a good agreement rate.
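The agreement-rate evaluation mentioned above can be sketched as follows. The `tolerance` parameter and the toy labels are my own illustration; the paper does not specify whether near-miss predictions count.

```python
# Sketch of an agreement-rate metric for predicted item difficulty:
# the fraction of items whose predicted level matches the expert label,
# optionally allowing a prediction within `tolerance` levels to count.
def agreement_rate(pred, gold, tolerance=0):
    hits = sum(1 for p, g in zip(pred, gold) if abs(p - g) <= tolerance)
    return hits / len(gold)

# Invented expert labels and model predictions on a 5-level scale.
gold = [1, 2, 3, 3, 5, 4]
pred = [1, 2, 2, 3, 4, 4]
exact = agreement_rate(pred, gold)         # exact-match agreement
adjacent = agreement_rate(pred, gold, 1)   # within one level
```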
{"title":"Automated Prediction of Item Difficulty in Reading Comprehension Using Long Short-Term Memory","authors":"Lin Lin, Tao-Hsing Chang, Fu-Yuan Hsu","doi":"10.1109/IALP48816.2019.9037716","DOIUrl":"https://doi.org/10.1109/IALP48816.2019.9037716","url":null,"abstract":"Standardized tests are an important tool in education. During test preparation, the difficulty of each test item must be defined, which has previously relied largely on expert validation or pretesting, both requiring considerable labor and cost. These problems can be overcome by using machines to predict item difficulty. In this study, long short-term memory (LSTM) networks are used to predict test item difficulty in reading comprehension. Experimental results show that the proposed method achieves a good agreement rate.","PeriodicalId":208066,"journal":{"name":"2019 International Conference on Asian Language Processing (IALP)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114188439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-11-01 | DOI: 10.1109/IALP48816.2019.9037724
Yueming Du, Lijiao Yang
Traditional measurements of sentence difficulty focus only on lexical features and neglect syntactic features. This paper takes 800 sentences from primary school Chinese textbooks published by People's Education Press as its research object and studies their syntactic features. We use a random forest to select the five most important features and then employ an SVM for the classification experiment. The precision, recall, and F-score for the five-level classification are 50.42%, 50.40%, and 50.41%, respectively, which indicates that the selected features have practical value for related research.
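The three scores the paper reports can be reproduced in miniature. This is a generic macro-averaged precision/recall/F-score computation for multi-level classification; the toy labels are invented, and the paper may average differently (e.g. micro or weighted).

```python
# Macro-averaged precision, recall, and F-score for a multi-level
# difficulty classification task.
def macro_prf(gold, pred, labels):
    ps, rs, fs = [], [], []
    for c in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == c and p == c)
        fp = sum(1 for g, p in zip(gold, pred) if g != c and p == c)
        fn = sum(1 for g, p in zip(gold, pred) if g == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        ps.append(prec); rs.append(rec); fs.append(f1)
    n = len(labels)
    return sum(ps) / n, sum(rs) / n, sum(fs) / n

# Invented gold difficulty levels and SVM predictions.
gold = [1, 1, 2, 2, 3]
pred = [1, 2, 2, 2, 3]
p, r, f = macro_prf(gold, pred, [1, 2, 3])
```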
{"title":"What affects the difficulty of Chinese syntax?","authors":"Yueming Du, Lijiao Yang","doi":"10.1109/IALP48816.2019.9037724","DOIUrl":"https://doi.org/10.1109/IALP48816.2019.9037724","url":null,"abstract":"Traditional measurements of sentence difficulty focus only on lexical features and neglect syntactic features. This paper takes 800 sentences from primary school Chinese textbooks published by People's Education Press as its research object and studies their syntactic features. We use a random forest to select the five most important features and then employ an SVM for the classification experiment. The precision, recall, and F-score for the five-level classification are 50.42%, 50.40%, and 50.41%, respectively, which indicates that the selected features have practical value for related research.","PeriodicalId":208066,"journal":{"name":"2019 International Conference on Asian Language Processing (IALP)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129959442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-11-01 | DOI: 10.1109/IALP48816.2019.9037567
Akihiro Katsuta, Kazuhide Yamamoto
Automatic sentence simplification aims to reduce the complexity of vocabulary and expressions in a sentence while retaining its original meaning. We constructed a simplification model that requires no parallel corpus by using an unsupervised translation model. To learn simplification in an unsupervised manner, we show that a pseudo-corpus can be constructed from a web corpus and that this corpus expansion helps the model output more simplified sentences. In addition, we confirm that the simplification operation can be learned by preparing large-scale pseudo data even when only a non-parallel corpus is available.
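One plausible ingredient of such pseudo-corpus construction is splitting a raw web corpus into a "complex" side and a "simple" side by a surface complexity score, so the two sides can serve as the unpaired source and target of unsupervised translation. The scoring rule and sentences below are invented; the paper's actual construction procedure may differ.

```python
# Split a raw corpus into "simple" and "complex" halves by a crude
# surface complexity score (sentence length plus average word length).
def complexity(tokens):
    return len(tokens) + sum(len(t) for t in tokens) / len(tokens)

corpus = [
    ["the", "cat", "sat"],
    ["the", "feline", "positioned", "itself", "comfortably"],
    ["dogs", "bark"],
    ["canines", "frequently", "vocalize", "loudly"],
]
scored = sorted(corpus, key=complexity)
half = len(scored) // 2
simple_side, complex_side = scored[:half], scored[half:]
```

An unsupervised translation model would then be trained to map between the two unaligned sides.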
{"title":"Improving text simplification by corpus expansion with unsupervised learning","authors":"Akihiro Katsuta, Kazuhide Yamamoto","doi":"10.1109/IALP48816.2019.9037567","DOIUrl":"https://doi.org/10.1109/IALP48816.2019.9037567","url":null,"abstract":"Automatic sentence simplification aims to reduce the complexity of vocabulary and expressions in a sentence while retaining its original meaning. We constructed a simplification model that requires no parallel corpus by using an unsupervised translation model. To learn simplification in an unsupervised manner, we show that a pseudo-corpus can be constructed from a web corpus and that this corpus expansion helps the model output more simplified sentences. In addition, we confirm that the simplification operation can be learned by preparing large-scale pseudo data even when only a non-parallel corpus is available.","PeriodicalId":208066,"journal":{"name":"2019 International Conference on Asian Language Processing (IALP)","volume":"46 26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123345932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-11-01 | DOI: 10.1109/IALP48816.2019.9037727
C. Y. Yeung, J. Lee, Benjamin Ka-Yin T'sou
This paper presents the first data-driven model for selecting carrier sentences with word and context embeddings. In computer-assisted language learning systems, fill-in-the-blank items help users review or learn new vocabulary. A crucial step in automatic generation of fill-in-the-blank items is the selection of carrier sentences that illustrate the usage and meaning of the target word. Previous approaches for carrier sentence selection have mostly relied on features related to sentence length, vocabulary difficulty and word association strength. We train a statistical classifier on a large-scale, automatically constructed corpus of sample carrier sentences for learning Chinese as a foreign language, and use it to predict the suitability of a candidate carrier sentence for a target word. Human evaluation shows that our approach leads to substantial improvement over a word co-occurrence heuristic, and that context embeddings further enhance selection performance.
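The word co-occurrence heuristic that serves as the paper's baseline can be sketched simply: score each candidate carrier sentence by how many known collocates of the target word it contains. The collocate set and candidate sentences below are invented toy data.

```python
# Baseline sketch: co-occurrence scoring of candidate carrier sentences
# for a target word. A real system would mine collocates from a corpus.
collocates = {"喝": {"水", "茶", "咖啡"}}  # invented collocates of "drink"

def cooccurrence_score(sentence_tokens, target):
    hits = collocates.get(target, set()) & set(sentence_tokens)
    return len(hits)

candidates = [
    ["我", "喝", "茶"],        # contains a known collocate of the target
    ["喝", "了", "一", "口"],  # target word with no known collocates
]
scores = [cooccurrence_score(s, "喝") for s in candidates]
best = candidates[scores.index(max(scores))]
```

The paper's contribution is to replace this heuristic with a statistical classifier over word and context embeddings, which the human evaluation found substantially better.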
{"title":"Carrier Sentence Selection with Word and Context Embeddings","authors":"C. Y. Yeung, J. Lee, Benjamin Ka-Yin T'sou","doi":"10.1109/IALP48816.2019.9037727","DOIUrl":"https://doi.org/10.1109/IALP48816.2019.9037727","url":null,"abstract":"This paper presents the first data-driven model for selecting carrier sentences with word and context embeddings. In computer-assisted language learning systems, fill-in-the-blank items help users review or learn new vocabulary. A crucial step in automatic generation of fill-in-the-blank items is the selection of carrier sentences that illustrate the usage and meaning of the target word. Previous approaches for carrier sentence selection have mostly relied on features related to sentence length, vocabulary difficulty and word association strength. We train a statistical classifier on a large-scale, automatically constructed corpus of sample carrier sentences for learning Chinese as a foreign language, and use it to predict the suitability of a candidate carrier sentence for a target word. Human evaluation shows that our approach leads to substantial improvement over a word co-occurrence heuristic, and that context embeddings further enhance selection performance.","PeriodicalId":208066,"journal":{"name":"2019 International Conference on Asian Language Processing (IALP)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131933070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-11-01 | DOI: 10.1109/IALP48816.2019.9037659
Kohei Yamamoto, Kazutaka Shimada
In this paper, we propose a knowledge acquisition method for non-task-oriented dialogue systems. Such systems need a wide variety of knowledge to generate appropriate and sophisticated responses, but constructing this knowledge is costly. To address the problem, we focus on the relation between each tweet and its posting time. First, we extract event words, such as verbs, from tweets. Second, we generate frequency distributions over five different time divisions, e.g., a monthly basis. Then, we remove burst words on the basis of variance to obtain refined distributions. Inspecting the top-ranked words in each time division, we obtained not only common-sense activities, such as “sleep” at night, but also interesting ones, such as “recruit” in April and May (April is the beginning of the recruitment process for the new year in Japan) and “raise the spirits/plow into” around 9 AM, used to inspire oneself at the start of the workday. The knowledge our method extracts is likely to contribute not only to dialogue systems but also to text mining and behavior analysis of social media data.
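The counting and burst-removal steps can be sketched end to end. The tweets, the hourly time division, and the variance threshold are all invented for illustration; the paper uses five different time divisions and its own threshold.

```python
from collections import Counter
from statistics import pvariance

# Count event words per time division (here, hour of day), then drop
# "burst" words whose hourly counts have high variance: a one-off event
# spikes in a single hour, while habitual activities spread out.
tweets = [
    (23, "sleep"), (0, "sleep"), (1, "sleep"),
    (9, "work"), (9, "work"), (10, "work"),
    (12, "earthquake"), (12, "earthquake"), (12, "earthquake"),
    (12, "earthquake"), (12, "earthquake"),
]

counts = {}
for hour, word in tweets:
    counts.setdefault(word, Counter())[hour] += 1

def hourly_variance(c):
    return pvariance([c.get(h, 0) for h in range(24)])

# Threshold chosen for this toy data; "earthquake" is concentrated in
# one hour and far exceeds it, so it is removed as a burst word.
burst = {w for w, c in counts.items() if hourly_variance(c) > 0.5}
refined = {w: c for w, c in counts.items() if w not in burst}
```

What survives is the habitual, time-anchored knowledge ("sleep" at night, "work" in the morning) that a dialogue system can exploit.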
{"title":"Acquisition of Knowledge with Time Information from Twitter","authors":"Kohei Yamamoto, Kazutaka Shimada","doi":"10.1109/IALP48816.2019.9037659","DOIUrl":"https://doi.org/10.1109/IALP48816.2019.9037659","url":null,"abstract":"In this paper, we propose a knowledge acquisition method for non-task-oriented dialogue systems. Such systems need a wide variety of knowledge to generate appropriate and sophisticated responses, but constructing this knowledge is costly. To address the problem, we focus on the relation between each tweet and its posting time. First, we extract event words, such as verbs, from tweets. Second, we generate frequency distributions over five different time divisions, e.g., a monthly basis. Then, we remove burst words on the basis of variance to obtain refined distributions. Inspecting the top-ranked words in each time division, we obtained not only common-sense activities, such as “sleep” at night, but also interesting ones, such as “recruit” in April and May (April is the beginning of the recruitment process for the new year in Japan) and “raise the spirits/plow into” around 9 AM, used to inspire oneself at the start of the workday. The knowledge our method extracts is likely to contribute not only to dialogue systems but also to text mining and behavior analysis of social media data.","PeriodicalId":208066,"journal":{"name":"2019 International Conference on Asian Language Processing (IALP)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129961694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-11-01 | DOI: 10.1109/IALP48816.2019.9037689
Andi Suciati, I. Budi
The goal of opinion mining is to extract the sentiment, emotions, or judgement expressed in reviews and to classify them. Such reviews are important because they can affect a person's decision-making. In this paper, we conduct aspect-based opinion mining on customer reviews of restaurants in Indonesia, focusing on a code-mixed dataset. The evaluation uses four preprocessing scenarios: stopword removal without stemming, stemming without stopword removal, neither stopword removal nor stemming, and both stopword removal and stemming. We compare five algorithms: Random Forest (RF), Multinomial Naive Bayes (NB), Logistic Regression (LR), Decision Tree (DT), and the Extra Trees classifier (ET). The models were evaluated with 10-fold cross-validation, and the results show that the best scores for the four aspects were achieved by different algorithms: LR achieved the highest scores for the food (81.76%) and ambience (77.29%) aspects, while the highest scores for the price (78.71%) and service (85.07%) aspects were obtained by DT.
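The four preprocessing scenarios amount to toggling two switches independently, which can be sketched as follows. The stopword list (mixing Indonesian and English, as in code-mixed text) and the one-suffix "stemmer" are invented stand-ins for the paper's actual resources.

```python
# Sketch of the four preprocessing scenarios: stopword removal and
# stemming toggled independently on code-mixed review tokens.
STOPWORDS = {"yang", "the", "is", "di"}  # invented mixed-language list

def naive_stem(word):
    # Toy stemmer: strip one common Indonesian suffix if present.
    return word[:-3] if word.endswith("nya") else word

def preprocess(tokens, remove_stopwords, stem):
    out = [t for t in tokens if not (remove_stopwords and t in STOPWORDS)]
    return [naive_stem(t) if stem else t for t in out]

tokens = ["makanannya", "is", "enak", "di", "resto", "yang", "ini"]
scenarios = {
    "stop_only": preprocess(tokens, True, False),
    "stem_only": preprocess(tokens, False, True),
    "neither":   preprocess(tokens, False, False),
    "both":      preprocess(tokens, True, True),
}
```

Each scenario's output would then feed the five classifiers under 10-fold cross-validation.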
{"title":"Aspect-based Opinion Mining for Code-Mixed Restaurant Reviews in Indonesia","authors":"Andi Suciati, I. Budi","doi":"10.1109/IALP48816.2019.9037689","DOIUrl":"https://doi.org/10.1109/IALP48816.2019.9037689","url":null,"abstract":"The goal of opinion mining is to extract the sentiment, emotions, or judgement expressed in reviews and to classify them. Such reviews are important because they can affect a person's decision-making. In this paper, we conduct aspect-based opinion mining on customer reviews of restaurants in Indonesia, focusing on a code-mixed dataset. The evaluation uses four preprocessing scenarios: stopword removal without stemming, stemming without stopword removal, neither stopword removal nor stemming, and both stopword removal and stemming. We compare five algorithms: Random Forest (RF), Multinomial Naive Bayes (NB), Logistic Regression (LR), Decision Tree (DT), and the Extra Trees classifier (ET). The models were evaluated with 10-fold cross-validation, and the results show that the best scores for the four aspects were achieved by different algorithms: LR achieved the highest scores for the food (81.76%) and ambience (77.29%) aspects, while the highest scores for the price (78.71%) and service (85.07%) aspects were obtained by DT.","PeriodicalId":208066,"journal":{"name":"2019 International Conference on Asian Language Processing (IALP)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133888832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-11-01 | DOI: 10.1109/IALP48816.2019.9037668
Changai He, Sibao Chen, Shilei Huang, Jian Zhang, Xiao Song
We propose an Intent Determination (ID) method that combines a single-layer Convolutional Neural Network (CNN) with Bidirectional Encoder Representations from Transformers (BERT). The ID task is usually treated as a classification problem, and user queries are typically short texts; CNNs have proven well suited to short-text classification. We use BERT as a sentence encoder, which accurately captures the contextual representation of a sentence. Our method improves ID performance through its ability to capture semantic and long-distance dependencies in sentences. Experimental results demonstrate that our model outperforms the state-of-the-art approach, improving accuracy by 0.67% on the ATIS dataset. On the Chinese dataset, as intent granularity increases, our method improves accuracy by 15.99%, 4.75%, 4.69%, 6.29%, and 4.12% over the baseline.
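The CNN half of such a model can be sketched without any deep-learning framework: one convolution filter slides over the sequence of contextual token vectors, followed by max-over-time pooling. The 2-D "embeddings" below are invented stand-ins for BERT outputs, and a real model would use many filters feeding a softmax over intents.

```python
# Minimal sketch of a single 1-D convolution filter with max-over-time
# pooling applied to (invented) contextual token vectors.
def conv1d_max_pool(token_vecs, filt, width=2):
    scores = []
    for i in range(len(token_vecs) - width + 1):
        # Flatten the window of `width` token vectors and dot with filter.
        window = [x for vec in token_vecs[i:i + width] for x in vec]
        scores.append(sum(w * x for w, x in zip(filt, window)))
    return max(scores)  # max-over-time pooling

# Four 2-D "contextual embeddings" for a short query (invented values).
sentence = [[0.1, 0.0], [0.9, 0.2], [0.8, 0.1], [0.0, 0.3]]
filt = [1.0, 0.0, 1.0, 0.0]  # one filter spanning two 2-D vectors

feature = conv1d_max_pool(sentence, filt)
```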
{"title":"Using Convolutional Neural Network with BERT for Intent Determination","authors":"Changai He, Sibao Chen, Shilei Huang, Jian Zhang, Xiao Song","doi":"10.1109/IALP48816.2019.9037668","DOIUrl":"https://doi.org/10.1109/IALP48816.2019.9037668","url":null,"abstract":"We propose an Intent Determination (ID) method that combines a single-layer Convolutional Neural Network (CNN) with Bidirectional Encoder Representations from Transformers (BERT). The ID task is usually treated as a classification problem, and user queries are typically short texts; CNNs have proven well suited to short-text classification. We use BERT as a sentence encoder, which accurately captures the contextual representation of a sentence. Our method improves ID performance through its ability to capture semantic and long-distance dependencies in sentences. Experimental results demonstrate that our model outperforms the state-of-the-art approach, improving accuracy by 0.67% on the ATIS dataset. On the Chinese dataset, as intent granularity increases, our method improves accuracy by 15.99%, 4.75%, 4.69%, 6.29%, and 4.12% over the baseline.","PeriodicalId":208066,"journal":{"name":"2019 International Conference on Asian Language Processing (IALP)","volume":"67 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131896266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}