In this paper, we introduce SocioPedia, a real-time automatic system for efficiently visualizing and analyzing the variations, characteristics, and evolution of social knowledge over time. SocioPedia has been developed to provide a full knowledge graph life cycle and integrates temporal information into each unit of processed knowledge. To benefit different classes of users, SocioPedia provides a user-friendly and intuitive environment with different visualization types, including static knowledge visualization, timeline knowledge visualization, timeline characteristic visualization, and dynamic timeline visualization.
{"title":"SocioPedia: Visualizing Social Knowledge over Time","authors":"Try My Nguyen, Jason J. Jung","doi":"10.1145/3555776.3577660","DOIUrl":"https://doi.org/10.1145/3555776.3577660","url":null,"abstract":"In this paper, we introduce SocioPedia, a real-time automatic system for efficiently visualizing and analyzing the variations, characteristics, and evolution of social knowledge over time. SocioPedia has been developed to provide a full knowledge graph life cycle and integrates temporal information into each unit of processed knowledge. To benefit different classes of users, SocioPedia provides a user-friendly and intuitive environment with different visualization types, including static knowledge visualization, timeline knowledge visualization, timeline characteristic visualization, and dynamic timeline visualization.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78734189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A threat modeling exercise involves systematically assessing the likelihood and potential impact of diverse threat scenarios. As threat modeling approaches and tools act at the level of a software architecture or design (e.g., a data flow diagram), they consider threat scenarios at the level of classes or types of system elements. More fine-grained analyses in terms of concrete instances of these elements are typically conducted neither explicitly nor rigorously. This hinders (i) expressiveness, as threats that require articulation at the level of instances cannot be expressed or managed properly, and (ii) systematic risk calculation, as risk cannot be expressed and estimated with respect to instance-level properties. In this paper, we present a novel threat modeling approach that acts on two layers: (i) the design layer defines the classes and entity types in the system, and (ii) the instance layer models concrete instances and their properties. This, in turn, allows both rough risk estimates at the design level and more precise ones at the instance level. Motivated by a connected vehicles application, we present the key challenges, the modeling approach, and a tool prototype. The presented approach is a key enabler for more continuous and frequent threat (re-)assessment, the integration of threat analysis models in CI/CD pipelines and agile development environments on the one hand (development perspective), and in risk management approaches at run-time on the other (operations perspective).
{"title":"Expressive and Systematic Risk Assessments with Instance-Centric Threat Models","authors":"Stef Verreydt, Dimitri Van Landuyt, W. Joosen","doi":"10.1145/3555776.3577668","DOIUrl":"https://doi.org/10.1145/3555776.3577668","url":null,"abstract":"A threat modeling exercise involves systematically assessing the likelihood and potential impact of diverse threat scenarios. As threat modeling approaches and tools act at the level of a software architecture or design (e.g., a data flow diagram), they consider threat scenarios at the level of classes or types of system elements. More fine-grained analyses in terms of concrete instances of these elements are typically conducted neither explicitly nor rigorously. This hinders (i) expressiveness, as threats that require articulation at the level of instances cannot be expressed or managed properly, and (ii) systematic risk calculation, as risk cannot be expressed and estimated with respect to instance-level properties. In this paper, we present a novel threat modeling approach that acts on two layers: (i) the design layer defines the classes and entity types in the system, and (ii) the instance layer models concrete instances and their properties. This, in turn, allows both rough risk estimates at the design level and more precise ones at the instance level. Motivated by a connected vehicles application, we present the key challenges, the modeling approach, and a tool prototype. The presented approach is a key enabler for more continuous and frequent threat (re-)assessment, the integration of threat analysis models in CI/CD pipelines and agile development environments on the one hand (development perspective), and in risk management approaches at run-time on the other (operations perspective).","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78860802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Thamilselvam B, Y. Ramesh, S. Kalyanasundaram, M. Rao
The analysis of traffic policies, for instance, the duration of green and red phases at intersections, can be quite challenging. While the introduction of communication systems can potentially lead to better solutions, it is important to analyse and formulate policies in the presence of potential communication failures and delays. Given the stochastic nature of traffic, posing the problem as a model checking problem in probabilistic epistemic temporal logic seems promising. In this work, we propose an approach that uses epistemic modalities to model the effect of communication between multiple intersections and temporal modalities to model the progression of traffic volumes over time. We validate our approach in a non-stochastic setting, using the tool Model Checker for Multi-Agent Systems (MCMAS). We develop a Statistical Model Checking module and use it in conjunction with a tool chain that integrates a traffic simulator (SUMO) and a network simulator (OMNeT++/Veins) to study the impact of communications on traffic policies.
{"title":"Traffic Intersections as Agents: A model checking approach for analysing communicating agents","authors":"Thamilselvam B, Y. Ramesh, S. Kalyanasundaram, M. Rao","doi":"10.1145/3555776.3577720","DOIUrl":"https://doi.org/10.1145/3555776.3577720","url":null,"abstract":"The analysis of traffic policies, for instance, the duration of green and red phases at intersections, can be quite challenging. While the introduction of communication systems can potentially lead to better solutions, it is important to analyse and formulate policies in the presence of potential communication failures and delays. Given the stochastic nature of traffic, posing the problem as a model checking problem in probabilistic epistemic temporal logic seems promising. In this work, we propose an approach that uses epistemic modalities to model the effect of communication between multiple intersections and temporal modalities to model the progression of traffic volumes over time. We validate our approach in a non-stochastic setting, using the tool Model Checker for Multi-Agent Systems (MCMAS). We develop a Statistical Model Checking module and use it in conjunction with a tool chain that integrates a traffic simulator (SUMO) and a network simulator (OMNeT++/Veins) to study the impact of communications on traffic policies.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87657020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rodrigo de Magalhães Marques dos Santos Silva, Cláudio Correia, M. Correia, Luís Rodrigues
Users often encrypt files they store on cloud storage services to ensure data privacy. Unfortunately, without additional mechanisms, encrypting files prevents the use of server-side deduplication, as two identical files will differ once encrypted. Encrypted deduplication techniques combine file encryption and data deduplication. This combination usually requires some form of direct or indirect coordination between the different clients. In this paper, we address the problem of reconciling the need to encrypt data with the advantages of deduplication. In particular, we study techniques that achieve this objective while avoiding frequency analysis attacks, i.e., attacks that infer the content of an encrypted file based on how frequently the file is stored and/or accessed. We propose a new protocol for assigning encryption keys to files that leverages trusted execution environments to hide the frequencies of chunks from the adversary.
{"title":"Deduplication vs Privacy Tradeoffs in Cloud Storage","authors":"Rodrigo de Magalhães Marques dos Santos Silva, Cláudio Correia, M. Correia, Luís Rodrigues","doi":"10.1145/3555776.3577711","DOIUrl":"https://doi.org/10.1145/3555776.3577711","url":null,"abstract":"Users often encrypt files they store on cloud storage services to ensure data privacy. Unfortunately, without additional mechanisms, encrypting files prevents the use of server-side deduplication, as two identical files will differ once encrypted. Encrypted deduplication techniques combine file encryption and data deduplication. This combination usually requires some form of direct or indirect coordination between the different clients. In this paper, we address the problem of reconciling the need to encrypt data with the advantages of deduplication. In particular, we study techniques that achieve this objective while avoiding frequency analysis attacks, i.e., attacks that infer the content of an encrypted file based on how frequently the file is stored and/or accessed. We propose a new protocol for assigning encryption keys to files that leverages trusted execution environments to hide the frequencies of chunks from the adversary.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86785354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The emergence of Alexa and Siri, and more recently, OpenAI's ChatGPT, raises the question of whether ad hoc biological queries can also be computed without end-users' active involvement in the code-writing process. While advances have been made, current querying architectures for biological databases still assume some degree of computational competence and significant structural awareness of the underlying network of databases by biologists, if not active code writing. Given that biological databases are highly distributed and heterogeneous, and most are not FAIR compliant, a significant amount of expertise in data integration is essential for a query to be accurately crafted and meaningfully executed. In this paper, we introduce a flexible and intelligent query reformulation assistant, called Needle, as a back-end query execution engine of a natural language query interface to online biological databases. Needle leverages a data model called BioStar, which relies on a meta-knowledgebase, called the schema graph, to map natural language queries to relevant databases and biological concepts. The implementation of Needle using BioStar is the focus of this article.
{"title":"Mapping Strategies for Declarative Queries over Online Heterogeneous Biological Databases for Intelligent Responses","authors":"H. Jamil, Kallol Naha","doi":"10.1145/3555776.3577652","DOIUrl":"https://doi.org/10.1145/3555776.3577652","url":null,"abstract":"The emergence of Alexa and Siri, and more recently, OpenAI's ChatGPT, raises the question of whether ad hoc biological queries can also be computed without end-users' active involvement in the code-writing process. While advances have been made, current querying architectures for biological databases still assume some degree of computational competence and significant structural awareness of the underlying network of databases by biologists, if not active code writing. Given that biological databases are highly distributed and heterogeneous, and most are not FAIR compliant, a significant amount of expertise in data integration is essential for a query to be accurately crafted and meaningfully executed. In this paper, we introduce a flexible and intelligent query reformulation assistant, called Needle, as a back-end query execution engine of a natural language query interface to online biological databases. Needle leverages a data model called BioStar, which relies on a meta-knowledgebase, called the schema graph, to map natural language queries to relevant databases and biological concepts. The implementation of Needle using BioStar is the focus of this article.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89067093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rubén Alonso, D. Dessí, Antonello Meloni, Diego Reforgiato Recupero
Natural Language Processing (NLP) is crucial for recommending items that can only be described by natural language. However, using NLP within recommendation modules is difficult and usually requires a significant initial effort, thus limiting its widespread adoption. To overcome this limitation, we introduce FORESEE, a novel architecture that can be instantiated with NLP and Machine Learning (ML) modules to recommend items that are described by natural language features. Furthermore, we describe an instantiation of this architecture to provide a service for the job market where applicants can verify whether their curriculum vitae (CV) is eligible for a given job position, receive suggestions about which skills and abilities they should obtain, and, finally, obtain recommendations about online resources which might strengthen their CVs.
{"title":"A General and NLP-based Architecture to perform Recommendation: A Use Case for Online Job Search and Skills Acquisition","authors":"Rubén Alonso, D. Dessí, Antonello Meloni, Diego Reforgiato Recupero","doi":"10.1145/3555776.3577844","DOIUrl":"https://doi.org/10.1145/3555776.3577844","url":null,"abstract":"Natural Language Processing (NLP) is crucial for recommending items that can only be described by natural language. However, using NLP within recommendation modules is difficult and usually requires a significant initial effort, thus limiting its widespread adoption. To overcome this limitation, we introduce FORESEE, a novel architecture that can be instantiated with NLP and Machine Learning (ML) modules to recommend items that are described by natural language features. Furthermore, we describe an instantiation of this architecture to provide a service for the job market where applicants can verify whether their curriculum vitae (CV) is eligible for a given job position, receive suggestions about which skills and abilities they should obtain, and, finally, obtain recommendations about online resources which might strengthen their CVs.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80721294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nayara Cristina da Silva, M. Albertini, A. R. Backes, G. Pena
Pediatric hospital readmission imposes greater burdens on the patient, their family network, and the health system. Machine learning can be a good strategy to expand knowledge in this area and to assist in identifying patients at readmission risk. The objective of this study was to develop a predictive model to identify children and adolescents at high risk of potentially avoidable 30-day readmission using a machine learning approach. We conducted a retrospective cohort study with patients under 18 years old admitted to a tertiary university hospital. We collected demographic, clinical, and nutritional data from electronic databases and applied machine learning techniques to build the predictive models. The 30-day hospital readmission rate was 9.50%. The accuracy of the CART model with bagging was 0.79, and the sensitivity and specificity were 76.30% and 64.40%, respectively. Machine learning approaches can predict avoidable 30-day pediatric hospital readmission in tertiary care.
{"title":"Prediction of readmissions in hospitalized children and adolescents by machine learning","authors":"Nayara Cristina da Silva, M. Albertini, A. R. Backes, G. Pena","doi":"10.1145/3555776.3577592","DOIUrl":"https://doi.org/10.1145/3555776.3577592","url":null,"abstract":"Pediatric hospital readmission imposes greater burdens on the patient, their family network, and the health system. Machine learning can be a good strategy to expand knowledge in this area and to assist in identifying patients at readmission risk. The objective of this study was to develop a predictive model to identify children and adolescents at high risk of potentially avoidable 30-day readmission using a machine learning approach. We conducted a retrospective cohort study with patients under 18 years old admitted to a tertiary university hospital. We collected demographic, clinical, and nutritional data from electronic databases and applied machine learning techniques to build the predictive models. The 30-day hospital readmission rate was 9.50%. The accuracy of the CART model with bagging was 0.79, and the sensitivity and specificity were 76.30% and 64.40%, respectively. Machine learning approaches can predict avoidable 30-day pediatric hospital readmission in tertiary care.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91138581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lilian Berton, Felipe Mitsuishi, Didier Vega Oliveros
In many real-world applications, labeled instances are costly, and obtaining large training sets is infeasible. Consequently, learning strategies that do the most with fewer labels, such as semi-supervised learning (SSL) and active learning (AL), are attracting attention. Active learning queries instances to be labeled in the uncertain region, and semi-supervised learning classifies with a small set of labeled data. We combine both strategies to investigate how AL improves SSL performance, considering both classification results and computational cost. We present experimental results comparing five AL strategies on seven benchmark datasets encompassing synthetic data, handwritten digit and image recognition, and brain-computer interaction tasks. The best single AL strategy was the ranked batch mode, but it has the highest computational cost. On the other hand, a consensus committee approach yields the best results with a low processing footprint.
{"title":"Analysis of active semi-supervised learning","authors":"Lilian Berton, Felipe Mitsuishi, Didier Vega Oliveros","doi":"10.1145/3555776.3577621","DOIUrl":"https://doi.org/10.1145/3555776.3577621","url":null,"abstract":"In many real-world applications, labeled instances are costly, and obtaining large training sets is infeasible. Consequently, learning strategies that do the most with fewer labels, such as semi-supervised learning (SSL) and active learning (AL), are attracting attention. Active learning queries instances to be labeled in the uncertain region, and semi-supervised learning classifies with a small set of labeled data. We combine both strategies to investigate how AL improves SSL performance, considering both classification results and computational cost. We present experimental results comparing five AL strategies on seven benchmark datasets encompassing synthetic data, handwritten digit and image recognition, and brain-computer interaction tasks. The best single AL strategy was the ranked batch mode, but it has the highest computational cost. On the other hand, a consensus committee approach yields the best results with a low processing footprint.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89748429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Because a logic bomb performs malicious behaviors only within the branch that triggers them, if that branch can be easily found, malicious behaviors can be detected efficiently. Existing malicious app analysis tools look for branches that trigger malicious behaviors based on static analysis, so if reflection is used in the app, these branch statements cannot be found properly. Therefore, in this paper, we propose an app execution log-based suspicious conditional statement detection tool that can detect suspicious conditional statements even when reflection is used. The proposed detection tool, implemented on the android-10.0.0_r47 version of AOSP (Android Open Source Project), can check the branch statements and information about the called methods while the app is executing, including methods called by reflection. Also, since suspicious conditional statements are detected by checking the method call flow related to branch statements in the execution log, there is no need to examine all branch statements in the app. Experimental results show that the proposed detection tool can detect suspicious conditional statements regardless of the use of reflection.
{"title":"Detecting Suspicious Conditional Statement using App Execution Log","authors":"Sumin Lee, Minho Park, Jiman Hong","doi":"10.1145/3555776.3577722","DOIUrl":"https://doi.org/10.1145/3555776.3577722","url":null,"abstract":"Because a logic bomb performs malicious behaviors only within the branch that triggers them, if that branch can be easily found, malicious behaviors can be detected efficiently. Existing malicious app analysis tools look for branches that trigger malicious behaviors based on static analysis, so if reflection is used in the app, these branch statements cannot be found properly. Therefore, in this paper, we propose an app execution log-based suspicious conditional statement detection tool that can detect suspicious conditional statements even when reflection is used. The proposed detection tool, implemented on the android-10.0.0_r47 version of AOSP (Android Open Source Project), can check the branch statements and information about the called methods while the app is executing, including methods called by reflection. Also, since suspicious conditional statements are detected by checking the method call flow related to branch statements in the execution log, there is no need to examine all branch statements in the app. Experimental results show that the proposed detection tool can detect suspicious conditional statements regardless of the use of reflection.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90075158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prabir Mondal, Daipayan Chakder, Subham Raj, S. Saha, N. Onoe
Developing recommendation systems (RSs) that suggest customers' preferred products to them is a highly desirable goal in today's digital market. Most RSs are based mainly on textual information about the entities engaged in the platform and the ratings users give to products. This paper develops a movie recommendation system in which the cold-start problem arising from dependency on rating information is addressed and a multi-modality approach is introduced. The proposed method differs from existing approaches in three main aspects: (a) implementation of a knowledge graph for text embedding; (b) besides textual information, other modalities of movies, such as video and audio, are employed instead of rating information for generating movie/user representations, which deals with the cold-start problem effectively; (c) utilization of a graph convolutional network (GCN) for generating further hidden features and for developing the regression system.
{"title":"Graph Convolutional Neural Network for Multimodal Movie Recommendation","authors":"Prabir Mondal, Daipayan Chakder, Subham Raj, S. Saha, N. Onoe","doi":"10.1145/3555776.3577853","DOIUrl":"https://doi.org/10.1145/3555776.3577853","url":null,"abstract":"Developing recommendation systems (RSs) that suggest customers' preferred products to them is a highly desirable goal in today's digital market. Most RSs are based mainly on textual information about the entities engaged in the platform and the ratings users give to products. This paper develops a movie recommendation system in which the cold-start problem arising from dependency on rating information is addressed and a multi-modality approach is introduced. The proposed method differs from existing approaches in three main aspects: (a) implementation of a knowledge graph for text embedding; (b) besides textual information, other modalities of movies, such as video and audio, are employed instead of rating information for generating movie/user representations, which deals with the cold-start problem effectively; (c) utilization of a graph convolutional network (GCN) for generating further hidden features and for developing the regression system.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76865314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}