Big data processing is one of the pressing scientific issues of current social development, and MapReduce is an important foundation for it. In this paper, we propose a semantic++ MapReduce. The study comprises four parts. (1) Semantic++ extraction and management for big data: methods for automatically extracting, labeling, and managing the semantic++ information of big data. (2) SMRPL (Semantic++ MapReduce Programming Language): a declarative programming language that is close to human thinking and is used to program big data applications. (3) Semantic++ MapReduce compilation methods. (4) Semantic++ MapReduce computing technology, which itself includes three parts: 1) analysis of the semantic++ index information of data blocks, description of the semantic++ index structure, and a method for automatically loading semantic++ index information; 2) analysis of semantic++ operations, such as semantic++ sorting, semantic++ grouping, semantic++ merging, and semantic++ querying, in the map and reduce phases; and 3) a shuffle scheduling strategy based on semantic++ techniques. This research optimizes MapReduce and enhances its processing efficiency and capability, providing theoretical and technological groundwork for the intelligent processing of big data.
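The abstract gives no implementation, so the following is a rough illustration only: a minimal Python sketch of what semantic++ map and reduce phases could look like, where semantic_tag is a hypothetical stand-in for the paper's semantic++ index lookup and the toy index dictionary replaces real semantic++ metadata.

```python
from collections import defaultdict

def semantic_tag(record, index):
    # Hypothetical stand-in for the paper's semantic++ index lookup:
    # map a raw record to a semantic label (e.g. an ontology concept).
    return index.get(record, "unknown")

def map_phase(records, index):
    # Emit (semantic label, record) pairs instead of raw keys, so the
    # shuffle groups records by meaning rather than by literal key.
    for r in records:
        yield semantic_tag(r, index), r

def reduce_phase(pairs):
    # Semantic grouping: collect all records sharing a semantic label,
    # then sort the groups by label (a toy "semantic sorting").
    groups = defaultdict(list)
    for label, record in pairs:
        groups[label].append(record)
    return dict(sorted(groups.items()))

index = {"Beijing": "city", "Paris": "city", "IBM": "company"}
print(reduce_phase(map_phase(["Paris", "IBM", "Beijing"], index)))
# {'city': ['Paris', 'Beijing'], 'company': ['IBM']}
```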
{"title":"A Semantic++ MapReduce: A Preliminary Report","authors":"Guigang Zhang, Jian Wang, Weixing Huang, C. Li, Yong Zhang, Chunxiao Xing","doi":"10.1109/ICSC.2014.63","DOIUrl":"https://doi.org/10.1109/ICSC.2014.63","url":null,"abstract":"Big data processing is one of the hot scientific issues in the current social development. MapReduce is an important foundation for big data processing. In this paper, we propose a semantic++ MapReduce. This study includes four parts. (1) Semantic++ extraction and management for big data. We will do research about the automatically extracting, labeling and management methods for big data's semantic++ information. (2) SMRPL (Semantic++ MapReduce Programming Language). It is a declarative programming language which is close to the human thinking and be used to program for big data's applications. (3) Semantic++ MapReduce compilation methods. (4) Semantic++ MapReduce computing technology. It includes three parts. 1) Analysis of semantic++ index information of the data block, the description of the semantic++ index structure and semantic++ index information automatic loading method. 2) Analysis of all kinds of semantic++ operations such as semantic++ sorting, semantic++ grouping, semantic+++ merging and semantic++ query in the map and reduce phases. 3) Shuffle scheduling strategy based on semantic++ techniques. This paper's research will optimize the MapReduce and enhance its processing efficiency and ability. Our research will provide theoretical and technological accumulation for intelligent processing of big data.","PeriodicalId":175352,"journal":{"name":"2014 IEEE International Conference on Semantic Computing","volume":"97 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128035995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D. Schneider, Denny Stohr, J. Tingvold, A. B. Amundsen, Lydia Weiland, S. Kopf, W. Effelsberg, A. Scherp
Multimedia documents such as PowerPoint presentations or Flash documents are widely used on the Internet and exist in the context of many different topics. However, there is so far no user-friendly way to explore and search this content. This work addresses the issue by developing a new, easy-to-use user-interface approach and a prototype search engine. Our system, called fulgeo, specifically focuses on a suitable multimedia interface for visualizing the query results of semantically enriched Flash documents.
{"title":"Fulgeo -- Towards an Intuitive User Interface for a Semantics-Enabled Multimedia Search Engine","authors":"D. Schneider, Denny Stohr, J. Tingvold, A. B. Amundsen, Lydia Weiland, S. Kopf, W. Effelsberg, A. Scherp","doi":"10.1109/ICSC.2014.52","DOIUrl":"https://doi.org/10.1109/ICSC.2014.52","url":null,"abstract":"Multimedia documents like PowerPoint presentations or Flash documents are widely adopted in the Internet and exist in context of lots of different topics. However, so far there is no user friendly way to explore and search for this content. The aim of this work is to address this issue by developing a new, easy-to-use user interface approach and prototype search engine. Our system is called fulgeo and specifically focuses on a suitable multimedia interface for visualizing the query results of semantically-enriched Flash documents.","PeriodicalId":175352,"journal":{"name":"2014 IEEE International Conference on Semantic Computing","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114061580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a simple approach to handling recursive SPARQL queries, that is, nested queries that may contain references to the query itself. This powerful feature is obtained by implementing a custom SPARQL function that takes a SPARQL query as a parameter and executes it over a specified endpoint. The behaviour is similar to the SPARQL 1.1 SERVICE clause, with a few fundamental differences: (1) the query passed as an argument can be arbitrarily complex, (2) being a string, the query can be created at runtime by the calling (outer) query, and (3) it can reference itself, enabling recursion. These features make SPARQL Turing-equivalent without introducing special constructs or requiring another interpreter on the endpoint server engine. The feature is implemented using the standard Extensible Value Testing described in the recommendations since version 1.0; our proposal is therefore standards-compliant and also compatible with older endpoints that do not support the 1.1 specification, where it can also serve as a replacement for the missing SERVICE clause.
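The paper's mechanism is a server-side SPARQL extension function; as a client-side approximation only, the following Python sketch shows the calling pattern it relies on: a query held as an ordinary string, assembled at runtime, and executed against an endpoint via the standard SPARQL protocol. The endpoint URL and the run_query helper are illustrative, not part of the paper.

```python
import json
import urllib.parse
import urllib.request

def run_query(endpoint, query):
    # Execute a SPARQL query string against an endpoint and return the
    # JSON result bindings (standard SPARQL protocol: GET + Accept header).
    url = endpoint + "?" + urllib.parse.urlencode({"query": query})
    req = urllib.request.Request(
        url, headers={"Accept": "application/sparql-results+json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["results"]["bindings"]

# Because the inner query is an ordinary string, it can be assembled at
# runtime (even from the results of an outer query), which is the property
# the paper exploits to obtain recursion.
endpoint = "https://dbpedia.org/sparql"  # example public endpoint
inner = ("SELECT ?p WHERE { "
         "<http://dbpedia.org/resource/Berlin> ?p ?o } LIMIT 3")
for row in run_query(endpoint, inner):
    print(row["p"]["value"])
```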
{"title":"Computing Recursive SPARQL Queries","authors":"M. Atzori","doi":"10.1109/ICSC.2014.54","DOIUrl":"https://doi.org/10.1109/ICSC.2014.54","url":null,"abstract":"We present a simple approach to handle recursive SPARQL queries, that is, nested queries that may contain references to the query itself. This powerful feature is obtained by implementing a custom SPARQL function that takes a SPARQL query as a parameter and executes it over a specified endpoint. The behaviour is similar to the SPARQL 1.1 SERVICE clause, with a few fundamental differences: (1) the query passed as argument can be arbitrarily complex, (2) being a string, the query can be created at runtime in the calling (outer) query, and (3) it can reference to itself, enabling recursion. These features transform the SPARQL language into a Turing-equivalent one without introducing special constructs or needing another interpreter implemented on the endpoint server engine. The feature is implemented using the standard Estensible Value Testing described in the recommendations since 1.0, therefore, our proposal is standard compliant and also compatible with older endpoints not supporting 1.1 Specifications, where it can be also a replacement for the missing SERVICE clause.","PeriodicalId":175352,"journal":{"name":"2014 IEEE International Conference on Semantic Computing","volume":"2017 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125647667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Basharat, I. Arpinar, Shima Dastgheib, Ugur Kursuncu, K. Kochut, Erdogan Dogdu
Crowdsourcing is an emerging paradigm that exploits human computation to solve problems that machine-based solutions alone cannot solve accurately. We use crowdsourcing for large-scale link management in the Semantic Web. More specifically, we develop CrowdLink, which employs crowd workers to verify and create triples in Linking Open Data (LOD). LOD incorporates the core datasets of the Semantic Web, yet it does not fully conform to the guidelines for publishing high-quality linked data on the Web. Our approach can help enrich and improve the quality of mission-critical links in LOD. Scalable LOD link management requires a hybrid approach in which human-intelligence and machine-intelligence tasks interleave in a workflow execution. Likewise, many other crowdsourcing applications require sophisticated workflow specifications covering not only human-intelligence tasks but also machine-intelligence tasks for data and control flow, a capability that existing crowdsourcing platforms sorely lack. Hence, we investigate the interplay of crowdsourcing and semantically enriched workflows for better human-machine cooperation in task completion. We demonstrate the usefulness of our approach through various link creation and verification tasks and workflows using Amazon Mechanical Turk. Experimental evaluation shows promising results in terms of the accuracy of the links created and verified by the crowd workers.
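As a rough sketch of the hybrid human/machine workflow idea (not the CrowdLink system itself), the following Python stub interleaves a machine task that proposes candidate owl:sameAs triples with a human task that accepts them by majority vote; verify_with_crowd is a hypothetical placeholder for posting a task to Amazon Mechanical Turk and collecting worker answers, and the candidate generator is deliberately trivial.

```python
def machine_candidates(entity):
    # Machine-intelligence step: propose candidate owl:sameAs links,
    # e.g. by cross-dataset label matching (stubbed here for illustration).
    return [(entity, "owl:sameAs", entity.replace("dbpedia", "wikidata"))]

def verify_with_crowd(triple, votes=None):
    # Human-intelligence step: ask workers whether the triple is correct
    # and accept it on a majority vote. Worker votes are stubbed here;
    # a real system would post a HIT and collect the answers.
    votes = votes or [True, True, False]
    return sum(votes) > len(votes) / 2

def workflow(entities):
    # Interleave machine and human tasks, as the hybrid approach requires.
    verified = []
    for e in entities:
        for triple in machine_candidates(e):   # machine task
            if verify_with_crowd(triple):      # human task
                verified.append(triple)
    return verified

print(workflow(["http://dbpedia.org/resource/Paris"]))
```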
{"title":"CrowdLink: Crowdsourcing for Large-Scale Linked Data Management","authors":"A. Basharat, I. Arpinar, Shima Dastgheib, Ugur Kursuncu, K. Kochut, Erdogan Dogdu","doi":"10.1109/ICSC.2014.14","DOIUrl":"https://doi.org/10.1109/ICSC.2014.14","url":null,"abstract":"Crowd sourcing is an emerging paradigm to exploit the notion of human-computation for solving various computational problems, which cannot be accurately solved solely by the machine-based solutions. We use crowd sourcing for large-scale link management in the Semantic Web. More specifically, we develop Crowd Link, which utilizes crowd workers for verification and creation of triples in Linking Open Data (LOD). LOD incorporates the core data sets in the Semantic Web, yet is not in full conformance with the guidelines for publishing high quality linked data on the Web. Our approach can help in enriching and improving quality of mission-critical links in LOD. Scalable LOD link management requires a hybrid approach, where human intelligent and machine intelligent tasks interleave in a workflow execution. Likewise, many other crowd sourcing applications require a sophisticated workflow specification not only on human intelligent tasks, but also machine intelligent tasks to handle data and control-flow, which is strictly deficient in the existing crowd sourcing platforms. Hence, we are strongly motivated to investigate the interplay of crowd sourcing, and semantically enriched workflows for better human-machine cooperation in task completion. We demonstrate usefulness of our approach through various link creation and verification tasks, and workflows using Amazon Mechanical Turk. Experimental evaluation demonstrates promising results in terms of accuracy of the links created, and verified by the crowd workers.","PeriodicalId":175352,"journal":{"name":"2014 IEEE International Conference on Semantic Computing","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134485933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents our experiments with the "Badea" system, a system designed for the automated extraction of semantic relations from text using a seed ontology and a pattern-based approach. We describe an experiment that uses a set of Arabic-language corpora to extract the antonymy semantic relation. Antonym pairs from the seed ontology are used to extract patterns from the corpora; these patterns are then used to discover new antonym pairs, thus enriching the ontology. Evaluation results show that the system successfully enriched the ontology, increasing its size by over 400%. The results also showed that only 2.7% of the patterns were useful for extracting new antonyms, so recommendations for pattern scoring are also presented.
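A minimal Python sketch of the seed-and-pattern loop described above, using a made-up English toy corpus in place of the paper's Arabic corpora: seed pairs induce lexical patterns, and the patterns are re-applied to harvest new candidate pairs.

```python
import re

# Toy corpus and seed antonym pairs stand in for the paper's Arabic
# corpora and seed ontology.
corpus = [
    "the difference between hot and cold is obvious",
    "the difference between big and small is obvious",
    "the difference between day and night is obvious",
]
seeds = [("hot", "cold"), ("day", "night")]

def extract_patterns(corpus, seeds):
    # Replace each co-occurring seed pair with slots to induce a pattern.
    patterns = set()
    for a, b in seeds:
        for sent in corpus:
            if a in sent and b in sent:
                patterns.add(sent.replace(a, "X").replace(b, "Y"))
    return patterns

def apply_patterns(corpus, patterns):
    # Turn each pattern into a regex and harvest new candidate pairs.
    pairs = set()
    for p in patterns:
        rx = re.escape(p).replace("X", r"(\w+)").replace("Y", r"(\w+)")
        for sent in corpus:
            m = re.fullmatch(rx, sent)
            if m:
                pairs.add(m.groups())
    return pairs

patterns = extract_patterns(corpus, seeds)
print(apply_patterns(corpus, patterns) - set(seeds))  # {('big', 'small')}
```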
{"title":"A Pattern-Based Approach to Semantic Relation Extraction Using a Seed Ontology","authors":"M. Al-Yahya, L. Aldhubayi, Sawsan Al-Malak","doi":"10.1109/ICSC.2014.42","DOIUrl":"https://doi.org/10.1109/ICSC.2014.42","url":null,"abstract":"This paper presents our experiment on the \"Badea\" system. A system designed for the automated extraction of semantic relations from text using a seed ontology and a pattern based approach. We describe the experiment using a set of Arabic language corpora for extracting the antonym semantic relation. Antonyms from the seed ontology are used to extract patterns from the corpora, these patterns are then used to discover new antonym pairs, thus enriching the ontology. Evaluation results show that the system was successful in enriching the ontology with over 400% increase in size. The results also showed that only 2.7% of the patterns were useful in extracting new antonyms, and thus recommendations for pattern scoring are presented in this paper.","PeriodicalId":175352,"journal":{"name":"2014 IEEE International Conference on Semantic Computing","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133935582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vector Symbolic Architectures (VSAs) are methods designed to enable distributed representation and manipulation of semantically structured information, such as natural language. Recently, a new VSA based on multiplying distributed vectors by random matrices was proposed, known as Matrix Binding of Additive Terms (MBAT). We propose an enhancement that introduces an important additional capability to MBAT: the ability to 'unbind' symbols. We show that our method, which exploits the inherent properties of orthogonal matrices, endows MBAT with the 'question answering' ability found in other VSAs. We compare our results with another popular VSA that was recently demonstrated to have high utility in brain-inspired machine learning applications.
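A minimal numpy sketch of the unbinding idea, under our illustrative assumption that roles are bound by multiplying with random orthogonal matrices, so that the transpose undoes the binding up to cross-talk noise; this shows the general mechanism the abstract names, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256

def random_orthogonal(d):
    # QR decomposition of a Gaussian matrix gives a random orthogonal Q,
    # so Q.T @ Q = I and Q.T exactly inverts the binding.
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

def random_vector(d):
    return rng.normal(size=d) / np.sqrt(d)

# Bind role and filler by matrix multiplication (MBAT-style binding);
# the sum of bound pairs forms a composite "sentence" vector.
R_color, R_shape = random_orthogonal(d), random_orthogonal(d)
red, circle = random_vector(d), random_vector(d)
sentence = R_color @ red + R_shape @ circle

# "Question answering": unbind with the transpose. R_color.T @ sentence
# equals red plus cross-talk noise, so a dot-product cleanup recovers it.
probe = R_color.T @ sentence
candidates = {"red": red, "circle": circle}
best = max(candidates, key=lambda k: candidates[k] @ probe)
print("color is:", best)  # expected: red
```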
{"title":"Enabling 'Question Answering' in the MBAT Vector Symbolic Architecture by Exploiting Orthogonal Random Matrices","authors":"M. Tissera, M. McDonnell","doi":"10.1109/ICSC.2014.38","DOIUrl":"https://doi.org/10.1109/ICSC.2014.38","url":null,"abstract":"Vector Symbolic Architectures (VSA) are methods designed to enable distributed representation and manipulation of semantically-structured information, such as natural languages. Recently, a new VSA based on multiplication of distributed vectors by random matrices was proposed, this is known as Matrix-Binding-of-Additive-Terms (MBAT). We propose an enhancement that introduces an important additional feature to MBAT: the ability to 'unbind' symbols. We show that our method, which exploits the inherent properties of orthogonal matrices, imparts MBAT with the 'question answering' ability found in other VSAs. We compare our results with another popular VSA that was recently demonstrated to have high utility in brain-inspired machine learning applications.","PeriodicalId":175352,"journal":{"name":"2014 IEEE International Conference on Semantic Computing","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132504520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Internet technologies are improving continuously, and so are harmful websites such as pornography or illegal gambling sites. Moreover, it is characteristic of websites that changes to a web address or its contents take effect almost instantaneously. It is therefore not easy to distinguish harmful websites from benign ones. Two ways to make this decision exist, manual examination and automated searching for certain texts, videos, or sounds, and both require a lot of time. In this paper, we propose a method for identifying harmful websites by analyzing the relationships between websites instead of their contents.
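A minimal sketch of the link-relation idea under an assumed scoring rule of our own, for illustration: score a site by the fraction of its outgoing links that point at known harmful seeds, with a made-up toy graph standing in for crawled data. The abstract does not specify the actual scoring method.

```python
# Toy outgoing-link graph and a seed set of known harmful sites;
# both are fabricated for illustration.
links = {
    "siteA": ["seedHarmful1", "siteB"],
    "siteB": ["seedHarmful1", "seedHarmful2"],
    "siteC": ["newsSite", "blogSite"],
}
harmful_seeds = {"seedHarmful1", "seedHarmful2"}

def harm_score(site):
    # Fraction of a site's outgoing links pointing at known harmful
    # sites; no content inspection is needed.
    out = links.get(site, [])
    return sum(t in harmful_seeds for t in out) / len(out) if out else 0.0

for site in links:
    print(site, round(harm_score(site), 2))  # siteB scores highest
```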
{"title":"Semantic Approach for Identifying Harmful Sites Using the Link Relations","authors":"Junghoon Shin, Sangjun Lee, Taehyung Wang","doi":"10.1109/ICSC.2014.53","DOIUrl":"https://doi.org/10.1109/ICSC.2014.53","url":null,"abstract":"Technologies based on the internet are improving continuously and so are the harmful websites such as pornography or illegal gambling websites. In addition, it is the characteristics of websites that the changes made to the web address or its contents take effect almost instantaneously. Therefore, it is not easy to identify harmful websites from those that are not. There are two ways to make such decision: manual examination and automated searching for certain texts, videos, or sounds. These two methods require a lot of time. In this paper we propose a method for identifying harmful websites by analyzing the relationship between websites instead of the contents.","PeriodicalId":175352,"journal":{"name":"2014 IEEE International Conference on Semantic Computing","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131994164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. Big data analytics is the process of examining large amounts of data of a variety of types (big data) to uncover hidden patterns, unknown correlations, and other useful information. Its revolutionary potential is now universally recognized. Data complexity, heterogeneity, scale, and timeliness make data analysis a clear bottleneck in many biomedical applications, due to the complexity of the patterns and the lack of scalability of the underlying algorithms. Advanced machine learning and data mining algorithms are being developed to address one or more of these challenges. Typically, the complexity of potential patterns may grow exponentially with data complexity, and so does the size of the pattern space. To avoid an exhaustive search through the pattern space, machine learning and data mining algorithms usually employ a greedy approach to search for a local optimum in the solution space, or use a branch-and-bound approach to seek optimal solutions, and consequently are often implemented as iterative or recursive procedures. To improve efficiency, these algorithms often exploit the dependencies between potential patterns to maximize in-memory computation, and/or leverage special hardware (such as GPUs and FPGAs) for acceleration. This leads to strong data, operation, and hardware dependencies, and sometimes to ad hoc solutions that cannot be generalized to a broader scope. In this talk, I will present some open challenges faced by data scientists in biomedical fields and the current approaches taken to tackle them.
{"title":"Big Data, Big Challenges","authors":"Wei Wang","doi":"10.1109/ICSC.2014.65","DOIUrl":"https://doi.org/10.1109/ICSC.2014.65","url":null,"abstract":"Summary form only given. Big data analytics is the process of examining large amounts of data of a variety of types (big data) to uncover hidden patterns, unknown correlations and other useful information. Its revolutionary potential is now universally recognized. Data complexity, heterogeneity, scale, and timeliness make data analysis a clear bottleneck in many biomedical applications, due to the complexity of the patterns and lack of scalability of the underlying algorithms. Advanced machine learning and data mining algorithms are being developed to address one or more challenges listed above. It is typical that the complexity of potential patterns may grow exponentially with respect to the data complexity, and so is the size of the pattern space. To avoid an exhaustive search through the pattern space, machine learning and data mining algorithms usually employ a greedy approach to search for a local optimum in the solution space, or use a branch-and-bound approach to seek optimal solutions, and consequently, are often implemented as iterative or recursive procedures. To improve efficiency, these algorithms often exploit the dependencies between potential patterns to maximize in-memory computation and/or leverage special hardware (such as GPU and FPGA) for acceleration. These lead to strong data dependency, operation dependency, and hardware dependency, and sometimes ad hoc solutions that cannot be generalized to a broader scope. In this talk, I will present some open challenges faced by data scientist in biomedical fields and the current approaches taken to tackle these challenges.","PeriodicalId":175352,"journal":{"name":"2014 IEEE International Conference on Semantic Computing","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128110484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Taheriyan, Craig A. Knoblock, Pedro A. Szekely, J. Ambite
Semantic models of data sources describe the meaning of the data in terms of the concepts and relationships defined by a domain ontology. Building such models is an important step toward integrating data from different sources, where we need to provide the user with a unified view of the underlying sources. In this paper, we present a scalable approach to automatically learning the semantic model of a structured data source by exploiting the knowledge of previously modeled sources. Our evaluation shows that the approach generates expressive semantic models with minimal user input, and that it scales to large ontologies and to data sources with many attributes.
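One ingredient of such systems is labeling the attributes of a new source with ontology classes learned from previously modeled sources; the scikit-learn sketch below illustrates that labeling step only (the paper's approach also infers relationships between the labeled attributes, which this toy omits). The training values and class names are made up for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Attribute values from already-modeled sources, tagged with the
# ontology classes assigned to them in earlier semantic models.
train_values = ["John Smith", "Mary Jones", "New York", "Los Angeles"]
train_labels = ["Person.name", "Person.name", "City.name", "City.name"]

# Character n-grams capture surface regularities of attribute values.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 3)),
    MultinomialNB(),
)
model.fit(train_values, train_labels)

# Label columns of a new, unmodeled source by their sample values.
print(model.predict(["San Francisco", "Alice Brown"]))
```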
{"title":"A Scalable Approach to Learn Semantic Models of Structured Sources","authors":"M. Taheriyan, Craig A. Knoblock, Pedro A. Szekely, J. Ambite","doi":"10.1109/ICSC.2014.13","DOIUrl":"https://doi.org/10.1109/ICSC.2014.13","url":null,"abstract":"Semantic models of data sources describe the meaning of the data in terms of the concepts and relationships defined by a domain ontology. Building such models is an important step toward integrating data from different sources, where we need to provide the user with a unified view of underlying sources. In this paper, we present a scalable approach to automatically learn semantic models of a structured data source by exploiting the knowledge of previously modeled sources. Our evaluation shows that the approach generates expressive semantic models with minimal user input, and it is scalable to large ontologies and data sources with many attributes.","PeriodicalId":175352,"journal":{"name":"2014 IEEE International Conference on Semantic Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129095643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we propose a simple and completely automatic methodology for analyzing the sentiment of Twitter users. First, we build a Twitter corpus by grouping tweets that express positive and negative polarity through a completely automatic procedure that uses only the emoticons in the tweets. Then, we build a simple sentiment classifier in which an actual stream of tweets from Twitter is processed and its content classified as positive, negative, or neutral. The classification is performed without any pre-defined polarity lexicon; the lexicon is inferred automatically from the tweet stream. Experimental results show that our method reduces human intervention and, consequently, the cost of the whole classification process. We observe that our simple system captures polarity distinctions that match reasonably well the classifications made by human judges.
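A minimal sketch of the distant-supervision idea described above: emoticons supply the polarity labels, those labels train a classifier with no hand-built lexicon, and the classifier then scores new tweets. The tiny in-memory stream and the scikit-learn pipeline are illustrative choices, not the authors' implementation.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A toy in-memory "stream" standing in for the Twitter API.
stream = ["great match today :)", "so sad about the news :(",
          "love this phone :)", "worst service ever :("]

def emoticon_label(tweet):
    # Label tweets by emoticon only: no pre-defined polarity lexicon.
    if ":)" in tweet:
        return "positive"
    if ":(" in tweet:
        return "negative"
    return None  # unlabeled; a real system would leave these as neutral

# Strip the emoticons from the training text so the classifier must
# learn the words, i.e. infer the lexicon from the stream itself.
labeled = [(t.replace(":)", "").replace(":(", ""), emoticon_label(t))
           for t in stream if emoticon_label(t)]
texts, labels = zip(*labeled)

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["what a great phone", "sad news today"]))
# expected: ['positive' 'negative']
```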
{"title":"Automatic Unsupervised Polarity Detection on a Twitter Data Stream","authors":"D. Terrana, A. Augello, G. Pilato","doi":"10.1109/ICSC.2014.17","DOIUrl":"https://doi.org/10.1109/ICSC.2014.17","url":null,"abstract":"In this paper we propose a simple and completely automatic methodology for analyzing sentiment of users in Twitter. Firstly, we built a Twitter corpus by grouping tweets expressing positive and negative polarity through a completely automatic procedure by using only emoticons in tweets. Then, we have built a simple sentiment classifier where an actual stream of tweets from Twitter is processed and its content classified as positive, negative or neutral. The classification is made without the use of any pre-defined polarity lexicon. The lexicon is automatically inferred from the streaming of tweets. Experimental results show that our method reduces human intervention and, consequently, the cost of the whole classification process. We observe that our simple system captures polarity distinctions matching reasonably well the classification done by human judges.","PeriodicalId":175352,"journal":{"name":"2014 IEEE International Conference on Semantic Computing","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122458824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}