Enhancing the traffic experience in congested urban areas is one of the main challenges of Intelligent Transportation Systems, constrained by road infrastructure, time, and investment cost. Since road infrastructure is unlikely to change and changing it is expensive, it is worth pursuing research solutions that have low cost and exploit the properties of existing Intelligent Transportation System infrastructure. Following this vision, in this work we propose a realistic, cost-efficient urban traffic simulation methodology that accurately simulates vehicular traffic. Such synthetic traffic can be used to implement and evaluate congestion avoidance solutions in the context of connected vehicles that share information through a centralized Vehicle-to-Cloud infrastructure. The proposed traffic simulation methodology was validated on a real urban area map using the fundamental traffic flow diagram metrics. Our findings show the realistic behaviour and valuable output of the proposed model, which can be used as input to traffic congestion avoidance solutions.
"Urban Traffic Simulation Methodology for Connected Vehicles Congestion Avoidance". Ioan Stan, Raul Ghisa, R. Potolea. Proceedings of the 22nd International Conference on Information Integration and Web-based Applications & Services, 30 November 2020. DOI: 10.1145/3428757.3429102.
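The validation step mentioned in the abstract relies on the fundamental traffic flow relation q = k · v (flow = density × space-mean speed). A minimal sketch of such a check, with all names and numbers invented for illustration:

```python
# Sketch: deriving fundamental-diagram points from simulated traffic.
# The relation q = k * v (flow = density * space-mean speed) links the three
# macroscopic quantities; a realistic simulation should reproduce the expected
# density/flow shape per road segment. All names below are illustrative.

def fundamental_diagram(observations):
    """observations: list of (vehicle_count, segment_length_km, mean_speed_kmh)
    per measurement interval; returns (density, flow) pairs."""
    points = []
    for count, length_km, speed_kmh in observations:
        density = count / length_km          # vehicles per km
        flow = density * speed_kmh           # vehicles per hour (q = k * v)
        points.append((density, flow))
    return points

# Example: past the critical density, speeds drop and flow falls again.
synthetic = [(10, 1.0, 50.0), (40, 1.0, 30.0), (80, 1.0, 10.0)]
for k, q in fundamental_diagram(synthetic):
    print(f"density={k:.0f} veh/km  flow={q:.0f} veh/h")
```

Plotting flow against density for every segment then gives the curve against which the synthetic traffic can be judged.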
We propose a relation-oriented faceted search method for knowledge bases (KBs) that allows users to explore relations between entities. KBs store a wide range of knowledge about real-world entities in a structured form as (subject, predicate, object) triples. Although it is possible to query entities and the relations among them with appropriate SPARQL expressions or keyword queries, the structure and vocabulary are complicated, and it is hard for non-expert users to get the desired information. For this reason, many researchers have proposed faceted search interfaces for KBs. Nevertheless, existing ones are designed for finding entities and are insufficient for finding relations. To address this problem, we propose a novel "relation facet" for finding relations between entities. To generate it, we apply clustering over predicates based on the Jaccard similarity. We experimentally show that the proposed scheme performs better than existing ones in the task of searching for relations.
"Relation-oriented faceted search method for knowledge bases". Taro Aso, T. Amagasa, H. Kitagawa. Proceedings of the 22nd International Conference on Information Integration and Web-based Applications & Services, 30 November 2020. DOI: 10.1145/3428757.3429254.
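The core of the relation facet is clustering predicates by the Jaccard similarity of the entity pairs they connect. A minimal sketch (the single-link grouping and the threshold here are illustrative assumptions, not the paper's exact algorithm):

```python
# Sketch: grouping KB predicates by Jaccard similarity of the (subject, object)
# pairs they connect, as a basis for a "relation facet". The clustering below is
# a simple single-link grouping with a fixed threshold; the paper's actual
# clustering procedure and parameters may differ.

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_predicates(triples, threshold=0.5):
    """triples: iterable of (subject, predicate, object)."""
    pairs = {}  # predicate -> set of (subject, object) pairs it connects
    for s, p, o in triples:
        pairs.setdefault(p, set()).add((s, o))
    clusters = []
    for p in pairs:
        for c in clusters:
            if any(jaccard(pairs[p], pairs[q]) >= threshold for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

kb = [("a", "worksFor", "x"), ("b", "worksFor", "y"),
      ("a", "employedBy", "x"), ("b", "employedBy", "y"),
      ("a", "bornIn", "p")]
print(cluster_predicates(kb))  # worksFor and employedBy connect the same pairs
```

Predicates that connect largely the same entity pairs (here `worksFor` and `employedBy`) end up in one cluster, which can then be shown to the user as a single relation facet value.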
Gerd Hübscher, V. Geist, D. Auer, Nicole Hübscher, J. Küng
Digitalisation of knowledge work, especially in communication-intensive domains, is one of the greatest challenges, but also one of the greatest opportunities, for improving today's working environments. It demands a flexible system that supports both knowledge-intensive creative work and highly individual processes. Smooth integration has so far been hindered by the lack of task context in knowledge management systems. Furthermore, a model for defining and handling mental concepts, which typically evolve during daily work, is missing; such a model would allow for targeted use of appropriate knowledge in process tasks. In this paper, we propose a bottom-up approach to model and store the static and dynamic aspects of knowledge in terms of data objects and tasks that are connected with each other. The proposed solution leverages the flexibility of a graph-based model to enable open, continuously evolving, user-centred processes for knowledge work, as well as predefined administrative processes. Besides our approach, we show results from testing a prototypical implementation in a real-life setting in the domain of intellectual property management applications.
"Integration of Knowledge and Task Management in an Evolving, Communication-intensive Environment". Gerd Hübscher, V. Geist, D. Auer, Nicole Hübscher, J. Küng. Proceedings of the 22nd International Conference on Information Integration and Web-based Applications & Services, 30 November 2020. DOI: 10.1145/3428757.3429260.
Jacques Chabin, Cédric Eichler, Mirian Halfeld-Ferrari, Nicolas Hiot
This paper introduces SetUp, a theoretical and applied framework for managing RDF/S database evolution on the basis of graph rewriting rules. Rewriting rules formalize instance or schema changes while ensuring the graph's consistency with respect to given constraints. The constraints considered in this paper are a well-known variant of the RDF/S semantics, but the approach can be adapted to user-defined constraints. Furthermore, SetUp manages updates by ensuring rule applicability through the generation of side-effects: new updates that guarantee that rule application conditions hold. We provide formal validation and an experimental evaluation of SetUp.
"Graph Rewriting Rules for RDF Database Evolution Management". Jacques Chabin, Cédric Eichler, Mirian Halfeld-Ferrari, Nicolas Hiot. Proceedings of the 22nd International Conference on Information Integration and Web-based Applications & Services, 30 November 2020. DOI: 10.1145/3428757.3429126.
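The side-effect mechanism can be illustrated on a toy triple store: inserting a triple whose application condition (here, an assumed RDFS-style domain constraint) does not yet hold triggers a generated update that makes it hold. The constraint and all names are invented for this sketch and are not taken from the paper:

```python
# Sketch: update application with side-effect generation, in the spirit of
# SetUp's rule applicability. Assumed constraint: every subject of a predicate
# must carry the type declared as that predicate's domain (RDFS-style).

def insert_triple(graph, triple, domain_constraints):
    """graph: set of (s, p, o) triples. Returns the list of updates actually
    applied: generated side-effects first, then the requested insertion."""
    s, p, o = triple
    applied = []
    required_type = domain_constraints.get(p)
    if required_type and (s, "type", required_type) not in graph:
        side_effect = (s, "type", required_type)
        graph.add(side_effect)      # side-effect makes the rule applicable
        applied.append(side_effect)
    graph.add(triple)
    applied.append(triple)
    return applied

g = set()
constraints = {"supervises": "Professor"}
updates = insert_triple(g, ("alice", "supervises", "bob"), constraints)
print(updates)  # the domain side-effect precedes the requested insertion
```

The point is that the user issues one update but the system applies two: the generated side-effect restores the rule's application condition so the insertion can proceed without violating the constraint.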
Over the last few years, NoSQL systems have gained strong popularity, and a number of decision makers are using them to implement their warehouses. In recent years, many web applications have moved towards using data in the form of graphs. For example, social media, with the emergence of Facebook, LinkedIn and Twitter, has accelerated the rise of NoSQL databases, in particular graph-oriented databases, which represent the basic format in which data in these media is stored. Based on these findings, and given the absence of a clear approach for creating a data warehouse under a NoSQL model, we propose in this paper an approach to create a graph-oriented data warehouse. We propose the transformation of the Dimensional Fact Model into a Graph Dimensional Model. We then implement the Graph Dimensional Model using Java routines in the Talend data integration tool (TOS).
"Graph NoSQL Data Warehouse Creation". Amal Sellami, Ahlem Nabli, F. Gargouri. Proceedings of the 22nd International Conference on Information Integration and Web-based Applications & Services, 30 November 2020. DOI: 10.1145/3428757.3429141.
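One plausible reading of the fact-model-to-graph transformation is: one node per fact row, one node per dimension member, and an edge from each fact to its dimensions. A minimal sketch with invented field names (the paper's actual mapping rules may differ):

```python
# Sketch: mapping a star-schema fact table (Dimensional Fact Model) into a
# property-graph shape: fact nodes, dimension-member nodes, and HAS_* edges.
# Column names and labels are illustrative assumptions.

def fact_table_to_graph(fact_rows, dimension_cols, measure_cols):
    nodes, edges = [], []
    for i, row in enumerate(fact_rows):
        fact_id = f"fact_{i}"
        nodes.append({"id": fact_id, "label": "Fact",
                      "props": {m: row[m] for m in measure_cols}})
        for dim in dimension_cols:
            dim_id = f"{dim}_{row[dim]}"
            node = {"id": dim_id, "label": dim, "props": {"value": row[dim]}}
            if node not in nodes:        # dimension members are shared
                nodes.append(node)
            edges.append((fact_id, "HAS_" + dim.upper(), dim_id))
    return nodes, edges

sales = [{"product": "p1", "store": "s1", "amount": 120},
         {"product": "p1", "store": "s2", "amount": 80}]
nodes, edges = fact_table_to_graph(sales, ["product", "store"], ["amount"])
print(len(nodes), "nodes,", len(edges), "edges")
```

Shared dimension members (here product `p1`) become a single node reached from several fact nodes, which is what makes the graph form convenient for traversal-style analytical queries.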
Rodrigo G. C. Rocha, Danillo Bion, R. Azevedo, Arthur Gomes, Diogo Cordeiro, Renan Leandro, Israel Silva, F. Freitas
Globalization has allowed organizations to intensify the search for solutions that minimize challenges, reduce costs and optimize processes. In this context, global software development has emerged as an attempt to make the best use of available resources despite its limitations. In distributed environments, the use of ontologies brings benefits such as a uniform understanding of information among teams and ease of communication, and it also makes up for the lack of a reference model applicable in a distributed context. This work proposes a viable form of validation for DKDonto, a domain ontology developed for Global Software Engineering. This validation allowed a broader and more targeted assessment than the original one, which was carried out in a controlled environment and limited to answering questions already known to the knowledge base itself. The main result of this work is a satisfactory evaluation of the ontology, enabling it to be used and shared by companies and institutions, along with the presentation of a set of methods and ways to evaluate and verify domain ontologies across different domains.
"A Syntactic and Semantic Assessment of a Global Software Engineering Domain Ontology". Rodrigo G. C. Rocha, Danillo Bion, R. Azevedo, Arthur Gomes, Diogo Cordeiro, Renan Leandro, Israel Silva, F. Freitas. Proceedings of the 22nd International Conference on Information Integration and Web-based Applications & Services, 30 November 2020. DOI: 10.1145/3428757.3429143.
Nowadays, automatic speech recognition (ASR) systems can achieve ever higher accuracy rates, depending on the methodology applied and the datasets used. The accuracy decreases significantly when the ASR system is used by a non-native speaker of the language to be recognized. The main reason for this is the specific pronunciation and accent features related to the speaker's mother tongue. At the same time, the extremely limited volume of labeled non-native speech datasets makes it difficult to train sufficiently accurate ASR systems for non-native speakers from the ground up. In this research we address the problem and its influence on ASR accuracy using the style transfer methodology. We designed a pipeline for modifying the speech of a non-native speaker so that it more closely resembles native speech. This paper covers experiments on accent modification using different setups and approaches, including neural style transfer and an autoencoder. The experiments were conducted on English spoken by Japanese speakers (the UME-ERJ dataset). The results show a significant relative improvement in speech recognition accuracy. Our methodology reduces the need to train new algorithms for non-native speech (thus overcoming the obstacle of data scarcity) and can be used as a wrapper for any existing ASR system. The modification can be performed in real time, before a sample is passed to the speech recognition system itself.
"Support software for Automatic Speech Recognition systems targeted for non-native speech". K. Radzikowski, O. Yoshie, R. Nowak. Proceedings of the 22nd International Conference on Information Integration and Web-based Applications & Services, 30 November 2020. DOI: 10.1145/3428757.3429971.
We have been developing a "facial expression sensing service" for emotional analysis and quantitative evaluation of care based on subtle facial movements, and we conducted a preliminary experiment on its practicality. In this research, focusing on both obtaining facial expression data and searching for an efficient care method that helps elderly people stay active and relieve their stress, we developed a "video player service" that can easily play videos and automatically collect facial expression data. After developing the service, we asked people engaged in elderly care to try it and obtained feedback. As a result, we received favorable comments on the usefulness of the service, and we were able to collect facial expression data for four people.
"Evaluating Video Playing Application for Elderly People at Home by Facial Expression Sensing Service". K. Hirayama, S. Saiki, Masahide Nakamura. Proceedings of the 22nd International Conference on Information Integration and Web-based Applications & Services, 30 November 2020. DOI: 10.1145/3428757.3429113.
To reduce the cost of administrative services, many local governments provide a frequently asked questions (FAQ) page on their websites that lists the questions received from local inhabitants along with official responses. The number of Q&A items posted on the FAQ page, however, varies across local governments. To address this issue, we propose a method for augmenting local government FAQs by using a community-based Q&A (cQA) service. We also propose a new FAQ augmentation task that identifies the regional dependence of Q&As to achieve this goal. In our experiments, we fine-tuned the bidirectional encoder representations from transformers (BERT) model for this task using a labeled local-government FAQ dataset. We found that the regional dependence of Q&As can be identified with high accuracy by using both the question and the answer as clues and by fine-tuning the deeper layers of BERT.
"Augmentation of Local Government FAQs using Community-based Question-answering Data". Yohei Seki, Masaki Oguni, Sumio Fujita. Proceedings of the 22nd International Conference on Information Integration and Web-based Applications & Services, 30 November 2020. DOI: 10.1145/3428757.3429137.
SuperSQL is an extended version of SQL. By structuring the output of relational databases, SuperSQL enables the user to generate various types of structured documents with layouts that cannot be expressed in SQL. The larger and more complicated a SuperSQL query is, however, the more difficult it is to detect errors and the more time is spent on debugging. In this study, we propose a system that automatically detects and corrects syntax errors in user queries. When query parsing fails, the system reanalyzes the query and predicts a correction using deep learning. To modify the query, we use a recurrent neural network with an attention mechanism. By presenting the predicted modifications to users, the burden of debugging can be reduced and the efficiency of users' work improved.
"Automatic Correction of Syntax Errors in SuperSQL Queries". Shunsuke Otawa, Kento Goto, Motomichi Toyama. Proceedings of the 22nd International Conference on Information Integration and Web-based Applications & Services, 30 November 2020. DOI: 10.1145/3428757.3429131.
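For illustration only (this is not the paper's RNN-with-attention model): the detect-and-suggest loop can be approximated with plain string similarity against a toy keyword list, to show the kind of correction such a system presents to the user on a parse failure:

```python
# Toy illustration of "detect misspelled token, suggest closest keyword".
# The keyword list and whitespace tokenizer are invented assumptions; the
# actual system parses SuperSQL and predicts corrections with a neural model.

import difflib

KEYWORDS = {"GENERATE", "HTML", "FROM", "WHERE", "SELECT"}

def suggest_corrections(query):
    """Map each unrecognized alphabetic token to its closest known keyword."""
    suggestions = {}
    for token in query.replace(",", " ").split():
        word = token.upper()
        if word.isalpha() and word not in KEYWORDS:
            close = difflib.get_close_matches(word, KEYWORDS, n=1, cutoff=0.8)
            if close:
                suggestions[token] = close[0]
    return suggestions

print(suggest_corrections("GENERAT HTML SELEC name FROM users"))
```

Identifiers such as `name` and `users` fall below the similarity cutoff and are left alone, so only plausible keyword typos are flagged; the neural model in the paper plays the same role but learns the correction from query context rather than from edit distance.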