Title: Image Processing for Classification of Rice Varieties with Deep Convolutional Neural Networks
Authors: Mathuros Panmuang, Chonnikarn Rodmorn, Suriya Pinitkan
Pub Date: 2021-12-21 | DOI: 10.1109/iSAI-NLP54397.2021.9678184
This research applied deep convolutional neural networks, using the VGG16 model, to screen rice varieties from images. Five varieties were selected for the experiment: KorKhor 23, Suphanburi 1, Pathum Thani 1, Chainat 1, and Hom Mali Rice 105, totaling 1,500 images. Experiments and model testing showed that training on the rice-seed images yields an accuracy of 85%, which is highly reliable. The model was therefore used to build a website, accessible from web browsers and mobile apps, where farmers and other stakeholders can upload rice-seed images and the system predicts the variety; in system testing, the system forecast rice varieties accurately.
Title: Robustness Improvement against G.726 Speech Codec for Semi-fragile Watermarking in Speech Signals with Singular Spectrum Analysis and Quantization Index Modulation
Authors: Norranat Songsriboonsit, Kasorn Galajit, Jessada Karnjana, W. Kongprawechnon, P. Aimmanee
Pub Date: 2021-12-21 | DOI: 10.1109/iSAI-NLP54397.2021.9678181
Semi-fragile watermarking in speech signals has been proposed to address unauthorized speech modification. However, previous methods are fragile against some non-malicious attacks or white noise with a high signal-to-noise ratio. This paper addresses that problem with a new watermarking technique based on singular spectrum analysis and quantization index modulation. Singular spectrum analysis is used to extract singular values from segments of the speech signal, and a watermark bit is embedded into each frame by slightly modifying its singular values according to quantization index modulation. Experimental results show that the sound quality of a watermarked signal is comparable to that of the original, and watermark-bit extraction precision is similar to that of existing methods. Unlike those methods, however, the proposed method is robust against non-malicious attacks such as the G.726 speech codec and white noise with a high signal-to-noise ratio.
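The quantization index modulation step the abstract describes can be illustrated on a single scalar (in the paper, a singular value of a speech segment): each bit selects one of two interleaved quantization lattices. This is a minimal sketch, not the authors' implementation, and `delta` is an illustrative quantization step, not a value from the paper.

```python
def qim_embed(value, bit, delta=0.5):
    """Quantize `value` onto one of two interleaved lattices to hide `bit`.

    Bit 0 uses multiples of delta; bit 1 uses multiples of delta shifted
    by delta/2, so the embedding distortion is at most delta/2.
    """
    offset = delta / 2 if bit == 1 else 0.0
    return round((value - offset) / delta) * delta + offset

def qim_extract(value, delta=0.5):
    """Recover the hidden bit by finding the nearer of the two lattices."""
    d0 = abs(value - round(value / delta) * delta)
    d1 = abs(value - (round((value - delta / 2) / delta) * delta + delta / 2))
    return 0 if d0 <= d1 else 1

# Embed bit 1 into a (hypothetical) singular value, then extract it back.
sv = 3.14159
wm = qim_embed(sv, 1)
assert qim_extract(wm) == 1
assert abs(wm - sv) <= 0.25   # distortion bounded by delta/2
```

The semi-fragile property comes from the lattice spacing: mild, non-malicious processing moves a value less than delta/2, so the bit survives, while heavier tampering pushes it onto the wrong lattice.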
Title: sylbreak4all: Regular Expressions for Syllable Breaking of Nine Major Ethnic Languages of Myanmar
Authors: Ye Kyaw Thu, Hlaing Myat New, Hninn Aye Thant, Hay Man Htun, H. Mon, May Myat Myat Khaing, Hsu Pan Oo, Pale Phyu, Nang Aeindray Kyaw, T. Oo, T. Oo, Thet Thet Zin, T. Oo
Pub Date: 2021-12-21 | DOI: 10.1109/iSAI-NLP54397.2021.9678188
Unlike many Western languages, the languages of Myanmar use syllabic writing systems with no spaces between words. Syllable segmentation is a necessary preprocessing step for natural language processing (NLP) tasks such as grapheme-to-phoneme (g2p) conversion, machine translation, and romanization. In this study, sylbreak4all, a syllable segmentation tool based on regular-expression (RE) patterns, was developed for nine major ethnic languages of Myanmar: Burmese, Shan, Pa'o, Pwo Kayin, S'gaw Kayin, Rakhine, Myeik, Dawei, and Mon.
Title: Welcome Message from the General Co-Chair
Pub Date: 2021-12-21 | DOI: 10.1109/isai-nlp54397.2021.9678186
Title: Deep Learning-Based Acoustic Emission Scheme for Rail Crack Monitoring
Authors: W. Suwansin, P. Phasukkit
Pub Date: 2021-12-21 | DOI: 10.1109/iSAI-NLP54397.2021.9678162
This research proposes a single-sensor acoustic emission (AE) scheme for detecting and localizing cracks in steel rail (rail head, rail web, and rail foot) under load. In operation, AE signals captured by the AE sensor were converted into digital data by an AE data acquisition module. The digital data were denoised with a total variation denoising (TVD) algorithm to remove ambient and wheel/rail contact noise, and the denoised data were processed and classified with a deep learning model to localize cracks in the rail. AE signals from pencil-lead breaks at the head, web, and foot of the rail were used to train (80% of the input data) and test (20%) the model. For training and testing, the AE signals were divided into two groupings (150 and 300 signals) and the classification accuracies compared: the total accuracies under the first and second groupings were 86.6% and 96.6%, respectively. The deep learning-based AE scheme was also deployed on-site to detect cracks in steel rail. The novelty of this research lies in the use of a single AE sensor and an AE signal-driven deep learning algorithm to detect and localize cracks, unlike conventional AE crack-localization technology, which relies on two or more sensors and human interpretation.
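The TVD step can be sketched in plain Python: minimize a data-fidelity term plus a penalty on the total variation of the signal. This is a minimal gradient-descent version with a smoothed absolute value, not the authors' implementation; `lam`, `step`, and `iters` are illustrative values.

```python
def tv(x):
    """Total variation: sum of absolute first differences."""
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def tv_denoise(y, lam=0.5, step=0.05, iters=500, eps=1e-6):
    """Minimize 0.5*||x - y||^2 + lam*TV(x) by gradient descent,
    smoothing |t| as sqrt(t^2 + eps) to make it differentiable."""
    x = list(y)
    n = len(x)
    for _ in range(iters):
        g = [x[i] - y[i] for i in range(n)]        # data-fidelity gradient
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            s = d / (d * d + eps) ** 0.5           # smoothed sign(d)
            g[i] -= lam * s
            g[i + 1] += lam * s
        x = [x[i] - step * g[i] for i in range(n)]
    return x

# A step signal corrupted with alternating "noise":
noisy = [0.0 + 0.2 * (-1) ** i for i in range(8)] + \
        [1.0 + 0.2 * (-1) ** i for i in range(8)]
clean = tv_denoise(noisy)
assert tv(clean) < tv(noisy)   # the denoised signal varies less
```

TVD suits AE data because it suppresses oscillatory wheel/rail contact noise while preserving the sharp edges that mark a burst event.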
Title: Incorporation of Contextual Information into BERT for Dialog Act Classification in Japanese
Authors: Shun Katada, Kiyoaki Shirai, S. Okada
Pub Date: 2021-12-21 | DOI: 10.1109/iSAI-NLP54397.2021.9678172
The recently developed Bidirectional Encoder Representations from Transformers (BERT) outperforms the state of the art in many natural language processing tasks in English. Although contextual information is known to be useful for dialog act classification, fine-tuning BERT with contextual information has not been investigated, especially in head-final languages such as Japanese. This paper investigates whether BERT with contextual information performs well on dialog act classification in Japanese open-domain conversation. In the proposed model, not only the utterance itself but also information about previous utterances and turn-taking is taken into account. Experiments on a Japanese dialog corpus showed that incorporating the contextual information improved the F1-score by 6.7 points.
Title: Using Citation Contexts in Scholarly Papers for Research Data Search
Pub Date: 2021-12-21 | DOI: 10.1109/iSAI-NLP54397.2021.9678165
Research data are increasingly being published, and a search function on a research data repository is crucial for accessibility. Generally, search is executed over metadata written by the creators of the research data. However, such metadata may not be sufficiently descriptive, because the data can have features or usages that the creators did not anticipate. Information about user-discovered features or usages may instead appear in the citation contexts of research data in scholarly papers. In this study, we set and discuss the hypothesis that citation contexts in scholarly papers are useful for research data search. First, we investigated whether adding citation contexts to the metadata enriches the information about the research data: the existing metadata and the citation contexts were collected and their overlap examined. A retrieval experiment was then conducted to confirm the effectiveness of the citation contexts. The results indicated that citation contexts in scholarly papers are indeed useful.
Title: Identification of Important Utterances in Narrative Speech Using Attentive Listening Responses
Pub Date: 2021-12-21 | DOI: 10.1109/iSAI-NLP54397.2021.9678154
Since narratives contain a variety of knowledge, it is useful to record their contents. However, even a direct transcription of narrative speech is difficult to read, because it contains redundant content; it is therefore effective to summarize narrative transcriptions. This paper proposes a method for identifying important utterances in narratives, with the aim of summarizing them. The method uses not only the speaker's narrative but also the attentive listening responses of the listeners. Attentive listening responses are conversational responses that positively show the listeners are attentively following the narrative, e.g., back-channel feedback. The more important an utterance in a narrative is, the more likely listeners are to react to it. In this study, we focus on attentive listening responses as the listeners' reactions, and we experimentally evaluated their effectiveness for identifying important utterances in narratives.
Title: Deep Learning-Based Human Recognition Through the Wall using UWB radar
Authors: Pongpol Assawaroongsakul, Mawin Khumdee, P. Phasukkit, Nongluck Houngkamhang
Pub Date: 2021-12-21 | DOI: 10.1109/iSAI-NLP54397.2021.9678182
Human activity detection in obscured or invisible areas, for instance detecting humans through a wall, has become an interesting topic because of its potential for security, rescue, and activity-analysis applications. UWB radar is a detection system that produces short radio-frequency pulses and measures the reflected signals; because UWB pulses have high spatial resolution and penetrate dielectric materials, this research used UWB radar at a frequency of 3 GHz to collect through-wall human-activity signals. We then applied a deep neural network to the signal data to classify five classes of human activity, standing, walking, sitting, lying, and no human, achieving an F1 score of up to 96.94%.
Title: HoogBERTa: Multi-task Sequence Labeling using Thai Pretrained Language Representation
Authors: Peerachet Porkaew, P. Boonkwan, T. Supnithi
Pub Date: 2021-12-21 | DOI: 10.1109/iSAI-NLP54397.2021.9678190
Recently, pretrained language representations like BERT and RoBERTa have drawn increasing attention in NLP. In this work we propose a pretrained language representation for Thai, based on the RoBERTa architecture. The monolingual training data were collected from publicly available resources, including Wikipedia, OpenSubtitles, news, and articles. Although the pretrained model can be fine-tuned for a wide range of individual tasks, fine-tuning it with multiple objectives also yields a surprisingly effective model. We evaluated our multi-task model on part-of-speech tagging, named-entity recognition, and clause boundary prediction; it achieves performance comparable to strong single-task baselines. Our code and models are available at https://github.com/lstnlp/hoogberta.