In the medical field, a large amount of unstructured information expressed in natural language exists in medical literature, technical documentation, and medical records. Information Extraction (IE), one of the most important research directions in natural language processing, aims to help humans extract information of interest automatically. Named Entity Recognition (NER) is a subsystem of IE and directly influences the quality of IE. Medical NER has not yet reached ideal precision, largely because of the complexity of medical knowledge: medical entities are hard to characterize precisely during feature selection, and the features currently selected are neither rich nor semantically informative. Moreover, different medical entities with similar surface features easily lead classification algorithms to wrong judgments. Combined classification algorithms such as SVM-KNN can overcome the disadvantages of each component algorithm and achieve higher performance, but the current SVM-KNN algorithm may still produce wrong classifications when the K value is set inappropriately or the examples are unevenly distributed. In this paper, we introduce a two-level modelling tool that helps specialists build semantic models and select features from them. We design and implement a medical named entity recognition analysis engine based on the UIMA framework and adopt an improved SVM-KNN algorithm, EK-SVM-KNN (Extending K SVM-KNN), for classification. The experimental data come from the SLE (Systemic Lupus Erythematosus) clinical information system of Renji Hospital: 50 pathology reports serve as training data and 1,000 pathology reports as test data. Experiments show that medical NER based on the semantic model and the improved SVM-KNN algorithm enhances NER quality, with precision, recall, and F-value all reaching 86%.
Xia Han and Ruonan Rao, "The Method of Medical Named Entity Recognition Based on Semantic Model and Improved SVM-KNN Algorithm," 2011 Seventh International Conference on Semantics, Knowledge and Grids, Oct. 2011. doi:10.1109/SKG.2011.24
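The paper gives no pseudocode for EK-SVM-KNN, but the idea described in the abstract above — trust the SVM away from its decision boundary, fall back to KNN near it, and extend K until the vote is decisive — can be sketched as follows. The toy data, margin threshold, and K schedule are hypothetical, not the authors' settings:

```python
# Sketch of the EK-SVM-KNN idea: samples near the SVM decision boundary
# are re-classified by KNN, and K is extended until one class wins a
# strict majority vote. All parameters here are illustrative.
import math

def ek_svm_knn(x, svm_decision, train, k0=3, margin=1.0, k_max=9):
    d = svm_decision(x)                      # signed distance from hyperplane
    if abs(d) >= margin:                     # confident region: trust the SVM
        return 1 if d > 0 else -1
    # boundary region: KNN with an extending K
    neighbours = sorted(train, key=lambda p: math.dist(x, p[0]))
    k = k0
    while k <= min(k_max, len(neighbours)):
        votes = [label for _, label in neighbours[:k]]
        pos, neg = votes.count(1), votes.count(-1)
        if pos != neg:                       # strict majority reached
            return 1 if pos > neg else -1
        k += 2                               # extend K and re-vote
    return 1 if d > 0 else -1                # fall back to the SVM sign

# toy training set with one ambiguous point near the boundary
train = [((0.0, 0.0), -1), ((0.2, 0.1), -1), ((1.0, 1.0), 1),
         ((0.9, 1.1), 1), ((0.5, 0.5), 1), ((0.4, 0.6), -1)]
decision = lambda x: x[0] + x[1] - 1.0       # stand-in for a trained SVM

print(ek_svm_knn((2.0, 2.0), decision, train))   # far from boundary -> SVM: 1
print(ek_svm_knn((0.45, 0.5), decision, train))  # near boundary -> KNN vote: -1
```

The extending-K step is what distinguishes this sketch from plain SVM-KNN: a tied or near-tied vote widens the neighbourhood instead of returning an arbitrary answer.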
Hadoop has shown great power in processing vast amounts of data in parallel. Hive, the database on Hadoop, enables more experts to process relational data by providing an SQL-like interface. However, Hive does not provide an efficient approach to join, a common but expensive operator in relational databases. Given the importance of join, this paper proposes a novel hybrid algorithm, HJA, which automatically chooses the relatively better method among divide-and-memory-copy merge, Partition Join (PJ), and the naïve Hive join. Experiments show that HJA achieves the best performance in most situations.
Weisong Hu, Lili Ma, Xiaowei Liu, Hongwei Qi, L. Zha, Huaming Liao, and Yuezhuo Zhang, "A Hybrid Join Algorithm on Top of Map Reduce," 2011 Seventh International Conference on Semantics, Knowledge and Grids, Oct. 2011. doi:10.1109/SKG.2011.13
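The strategy selection that HJA automates can be illustrated with a toy in-memory sketch. The size threshold and the two simplified join methods below are hypothetical stand-ins, not the paper's actual Hive implementations:

```python
# Hybrid join sketch: broadcast the small table into an in-memory hash
# map when it fits, otherwise fall back to a sort-then-scan join that
# stands in for a MapReduce reduce-side join.
def broadcast_hash_join(small, big, key=0):
    table = {}
    for row in small:                  # build phase: hash the small side
        table.setdefault(row[key], []).append(row)
    # probe phase; output columns follow (small, big) order in this sketch
    return [l + r for r in big for l in table.get(r[key], [])]

def reduce_side_join(a, b, key=0):
    a, b = sorted(a), sorted(b)        # sorting stands in for shuffle/sort
    return [l + r for l in a for r in b if l[key] == r[key]]

def hybrid_join(a, b, mem_limit=1000):
    small, big = (a, b) if len(a) <= len(b) else (b, a)
    if len(small) <= mem_limit:        # small side fits in memory
        return broadcast_hash_join(small, big)
    return reduce_side_join(a, b)

users  = [(1, "ann"), (2, "bob")]
orders = [(1, "book"), (1, "pen"), (3, "cup")]
print(hybrid_join(users, orders))
```

The real cost model would weigh table sizes, memory, and shuffle cost rather than a single row-count threshold, but the shape of the decision is the same.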
To address the problems of Chinese text similarity calculation based on word frequency statistics, this paper proposes a method that uses machine translation to translate Chinese text into English and then calculates the similarity of the given texts indirectly. This method avoids some shortcomings of Chinese word segmentation, exploits the natural word segmentation of English, and uses machine translation to take the semantics of some words into account indirectly. Experiments compared the method with computing similarity directly on the Chinese text, and a detailed analysis was performed. The results show that the method improves similarity computation for most social texts and increases the overall accuracy of the computation.
Yu Xu, Jianxun Liu, Mingdong Tang, and Yiping Wen, "Empirical Study of Chinese Text Similarity Computation Based on Machine Translation," 2011 Seventh International Conference on Semantics, Knowledge and Grids, Oct. 2011. doi:10.1109/SKG.2011.19
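A minimal sketch of the pipeline the abstract describes — translate, then compare bag-of-words vectors — might look like this. The `translate()` stub and the example sentences are hypothetical stand-ins for a real MT engine:

```python
# Translate both Chinese texts to English, then compare word-count
# vectors with cosine similarity, relying on English whitespace rather
# than Chinese word segmentation.
import math
from collections import Counter

def translate(chinese_text):
    # hypothetical MT stub; a real system would call an MT service here
    lookup = {"我喜欢读书": "i like reading books",
              "我爱看书": "i love reading books"}
    return lookup.get(chinese_text, chinese_text)

def cosine_similarity(a, b):
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

sim = cosine_similarity(translate("我喜欢读书"), translate("我爱看书"))
print(sim)  # "like"/"love" differ, the other three words overlap -> 0.75
```

Note how translation also captures some semantics indirectly: if the MT engine rendered both verbs as "like", the similarity would rise, which is exactly the effect the paper exploits.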
Clustering is an important technique for intelligence computation such as trust, recommendation, reputation, and requirement elicitation. Given the user-centric nature of services and the user's lack of prior knowledge of the distribution of the raw data, one challenge is how to associate user quality requirements on the clustering results with the algorithmic output properties (e.g., the number of clusters to be targeted). In this paper, we focus on the hierarchical clustering process and propose two quality-driven hierarchical clustering algorithms, HBH (homogeneity-based hierarchical) and HDH (homogeneity-driven hierarchical), which take a minimum acceptable homogeneity and a relative population for each output cluster as their input criteria. We also give an HDH-approximation algorithm to address the time performance issue. An experimental study on data sets with different density distributions and dispersion levels shows that HDH gives the best-quality results and that HDH-approximation can significantly improve execution time.
Y. Zhao, Chi-Hung Chi, and Chen Ding, "Quality-Driven Hierarchical Clustering Algorithm for Service Intelligence Computation," 2011 Seventh International Conference on Semantics, Knowledge and Grids, Oct. 2011. doi:10.1109/SKG.2011.49
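The homogeneity-driven merging idea can be sketched as a small agglomerative loop. The homogeneity measure used here, 1/(1 + cluster diameter), and the threshold are hypothetical stand-ins for the paper's criteria:

```python
# Quality-driven agglomerative clustering sketch: keep merging the pair
# of clusters whose union is most homogeneous, but stop as soon as the
# next merge would violate the minimum acceptable homogeneity.
import math
from itertools import combinations

def homogeneity(cluster):
    if len(cluster) < 2:
        return 1.0
    diameter = max(math.dist(p, q) for p, q in combinations(cluster, 2))
    return 1.0 / (1.0 + diameter)

def quality_driven_cluster(points, min_homogeneity=0.4):
    clusters = [[p] for p in points]
    while len(clusters) > 1:
        # pick the merge whose result is most homogeneous
        i, j = max(combinations(range(len(clusters)), 2),
                   key=lambda ij: homogeneity(clusters[ij[0]] + clusters[ij[1]]))
        merged = clusters[i] + clusters[j]
        if homogeneity(merged) < min_homogeneity:
            break                      # quality criterion would be violated
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters

points = [(0, 0), (0, 1), (5, 5), (5, 6)]
result = quality_driven_cluster(points)
print(sorted(sorted(c) for c in result))  # two well-separated clusters survive
```

Unlike classic agglomerative clustering, the stopping rule comes from the user's quality requirement, not from a pre-set number of clusters — which is the association the paper is after.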
SLN (Semantic Link Network) is a loosely coupled semantic data model designed for network resource management. Integrating fuzzy logic into SLN makes it possible to handle ambiguity, uncertainty, and imprecision. This paper proposes a novel fuzzy semantic link network data model and provides membership functions that define the fuzzy semantic association degree between semantic nodes. Its nodes can be resources of any type, and its edges can be semantic relations of any kind; potential fuzzy compound semantic links can be derived from fuzzy reasoning rules based on the fuzzy semantic relations.
Ke Ren, Zhixing Huang, Anping Zhao, and Yuhui Qiu, "Integrate Fuzzy Logic into SLN," 2011 Seventh International Conference on Semantics, Knowledge and Grids, Oct. 2011. doi:10.1109/SKG.2011.15
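Fuzzy reasoning over compound links can be illustrated with the standard min/max fuzzy norms; the concrete links and membership degrees below are hypothetical:

```python
# A compound semantic link's membership degree is computed with the
# t-norm (min) along each path of links, and alternative paths between
# the same pair of nodes combine with the s-norm (max).
links = {                      # (source, target) -> membership degree
    ("paper", "author"): 0.9,
    ("author", "institute"): 0.7,
    ("paper", "conference"): 0.8,
    ("conference", "institute"): 0.5,
}

def compound_degree(paths):
    # each path is a list of links: min along a path, max across paths
    return max(min(links[edge] for edge in path) for path in paths)

paths = [[("paper", "author"), ("author", "institute")],
         [("paper", "conference"), ("conference", "institute")]]
print(compound_degree(paths))  # max(min(0.9, 0.7), min(0.8, 0.5)) = 0.7
```

The paper's membership functions would assign the per-link degrees; the min/max composition shown here is the textbook fuzzy-logic rule for chaining and combining them.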
Based on the theories and findings of Old Chinese phonology, this paper uses the mathematical graph model to describe the initial-rhyme relationships of Old Chinese, and the Euclidean distance on a 2-dimensional plane to represent the distance between sounds in phonological harmony. Although the model is still in its infancy and leaves much room for refinement and development by experts in Old Chinese phonology, it provides a more intuitive method of quantitative analysis for the long-established qualitative study of Old Chinese phonology, with practical uses in the field such as research on the Xiesheng system of Chinese characters or on Chinese etymology.
Jiajia Hu and Ning Wang, "Computer Description of Old Chinese Phonological System Based on Graph Theory," 2011 Seventh International Conference on Semantics, Knowledge and Grids, Oct. 2011. doi:10.1109/SKG.2011.8
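The 2-D distance idea is straightforward to sketch; the coordinates below are invented placements for illustration, not the authors' actual chart of sound categories:

```python
# Each sound category is placed as a point on the plane; phonological
# closeness is then just the Euclidean distance between the points.
import math

coords = {                     # hypothetical plane positions of rhyme groups
    "zhi": (0.0, 0.0),
    "zhi2": (1.0, 0.0),
    "yu": (4.0, 3.0),
}

def sound_distance(a, b):
    return math.dist(coords[a], coords[b])

print(sound_distance("zhi", "zhi2"))  # 1.0 -> near groups harmonize readily
print(sound_distance("zhi", "yu"))    # 5.0 -> distant groups rarely rhyme
```

The research value lies in choosing the coordinates so that distances reproduce the qualitative closeness judgments of traditional phonology; the computation itself is this simple.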
Cloud computing has emerged as a new paradigm that realizes the long-held dream of computing as a utility, with customers obtaining it on an on-demand model. When the performance requirements and QoS (Quality of Service) of a service are discussed, it is hard to tackle these problems in an all-round way. In this work we propose an SLA-aware (Service Level Agreement aware) framework, the cloud service provider (CSP) framework, for cloud service (cloud infrastructure) delivery, making allowance for the benefits of both stakeholders: the cloud service provider and the service consumer. Using the proposed system and a hierarchical SLA monitoring model, we reach the win-win objective of maximizing the cloud provider's revenue while minimizing SLA violations.
Zhilin Wang, Xinhuai Tang, and Xiangfeng Luo, "Policy-Based SLA-Aware Cloud Service Provision Framework," 2011 Seventh International Conference on Semantics, Knowledge and Grids, Oct. 2011. doi:10.1109/SKG.2011.10
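A minimal sketch of clause-by-clause SLA monitoring, in the spirit of the framework's violation checking (the metric names and thresholds below are hypothetical, not from the paper):

```python
# Each SLA clause is a bound on a measured metric; a measurement window
# is checked clause by clause and the violated clauses are reported.
sla = {"response_time_ms": 200, "availability": 0.999}   # agreed bounds

def check_sla(measurements):
    violations = []
    if measurements["response_time_ms"] > sla["response_time_ms"]:
        violations.append("response_time_ms")
    if measurements["availability"] < sla["availability"]:
        violations.append("availability")
    return violations

print(check_sla({"response_time_ms": 250, "availability": 0.9995}))
```

A hierarchical monitor as described in the abstract would aggregate such checks across infrastructure, platform, and service layers; this sketch shows only a single layer.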
The work developed in this paper aims to contribute to research on Quality of Service (QoS) for Web services. We begin by providing a comprehensive review and description of QoS specifications, covering some existing factors contributing to QoS and some newly proposed ones, including availability, accessibility, reliability, integrity, response time, security, and performance. The aim of this research is thus twofold: first, it helps designers and developers provide better Web services; second, it helps ensure a consistent software architecture as a reference model for many applications. To achieve this, we first model the meta-QoS model of Web services. We then formalize the QoS of Web services using ARMANI, which provides a predicate language powerful enough to reconcile mismatches and thus support reliable composition of Web services. We also handle the mediation of composite Web services with ACME using an automatic MDE approach and implement a tool for this purpose: Web service compositions are transformed into ACME specifications. We are then able to check the composition of the Web services with the ACME verification tools.
Amel Mhamdi, Raoudha Maraoui, Mohamed Graiet, Mourad Kmimech, Mohamed Tahar Bhiri, and Eric Cariou, "Towards an IDM Approach of Transforming Web Services into ACME Providing Quality of Service," 2011 Seventh International Conference on Semantics, Knowledge and Grids, Oct. 2011. doi:10.1109/SKG.2011.38
This paper proposes an agent-based framework for incorporating distributed semantics and reasoning into a conventional distributed system, to support intelligent decisions and behaviours by the system's distributed nodes. In the framework, service agents residing in distributed nodes provide semantic services specific to the local environment, and mobile agents provide semantic services to applications that use the underlying distributed system.
W. Du, "Towards Intelligent Distributed Applications Based on Distributed Semantics and Reasoning," 2011 Seventh International Conference on Semantics, Knowledge and Grids, Oct. 2011. doi:10.1109/SKG.2011.27
In this paper we propose a methodology for semantically aligning heterogeneous databases without changing the databases themselves. The alignment achieved is at the data instance level and does not require the semantics of the data at the type level. We first introduce semantic alignment in the context of our database research project and then discuss in detail how semantically aligned heterogeneous databases may be integrated. In general, we align heterogeneous databases under 'Global as View'. We show that, by drawing on Barwise and Seligman's information-flow channel theory and making use of the Enhanced Entity-Relationship model, semantically aligned heterogeneous databases can be successfully integrated, which enables non-conventional queries over the constituent databases. Our methodology integrates any chosen databases at the data instance level without modifying them and without loss of information, which is our contribution over existing work. Furthermore, Barwise and Seligman's information-flow channel theory represents a novel and mathematically sound approach to linking independent systems. Because we use this theory as the theoretical basis of our work, we believe the ideas presented here have general applicability in tackling issues in distributed systems.
Yi Yang and Junkang Feng, "Integration of Semantically Aligned Heterogeneous Databases," 2011 Seventh International Conference on Semantics, Knowledge and Grids, Oct. 2011. doi:10.1109/SKG.2011.30
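The instance-level 'Global as View' alignment can be illustrated with two toy sources. The schemas, records, and the alignment rule below are hypothetical; the point is that the sources are queried as-is, with no modification:

```python
# GAV sketch: the global relation is defined as a view over the local
# sources, and rows align on shared instance values rather than on
# matching schema types. The source databases stay untouched.
db_hospital = [{"pid": 1, "full_name": "Ann Lee"}]
db_lab      = [{"patient_no": "P-1", "name": "Ann Lee", "test": "CBC"}]

def global_view():
    rows = []
    for h in db_hospital:
        for l in db_lab:
            if h["full_name"] == l["name"]:      # instance-level alignment
                rows.append({"patient": h["full_name"],
                             "hospital_id": h["pid"],
                             "lab_id": l["patient_no"],
                             "test": l["test"]})
    return rows

print(global_view())
```

Queries against the global view can now combine hospital and lab facts about the same patient — a "non-conventional" query neither source could answer alone, which is the effect the paper describes.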