Pub Date: 2013-09-08  DOI: 10.1109/BRICS-CCI-CBIC.2013.58
Modeling Defeasible Reasoning for Argumentation
V. Vagin, O. Morosin
This paper describes an argumentation system that uses a defeasible reasoning mechanism. The main idea and key points are given, along with the principal algorithms for detecting conflicts and determining the statuses of arguments. Solutions to several problems that are not solvable in classical logic are presented.
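The paper's own status-assignment algorithms are not reproduced in the abstract; as a point of reference, a minimal sketch of how argument statuses can be computed over an abstract attack graph (standard grounded-semantics labelling, which defeasible argumentation systems commonly build on) looks like this:

```python
# Hypothetical sketch: computing argument statuses via grounded-semantics
# labelling over an abstract attack graph. The paper's actual algorithms
# may differ; this only illustrates the general idea.

def grounded_statuses(arguments, attacks):
    """arguments: iterable of names; attacks: set of (attacker, target) pairs.
    Returns a dict mapping each argument to 'in', 'out', or 'undecided'."""
    status = {a: None for a in arguments}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if status[a] is not None:
                continue
            attackers = [x for (x, t) in attacks if t == a]
            # An argument is 'in' when every attacker is already 'out'
            # (vacuously true for unattacked arguments).
            if all(status[x] == "out" for x in attackers):
                status[a] = "in"
                changed = True
            # It is 'out' when some attacker is already 'in'.
            elif any(status[x] == "in" for x in attackers):
                status[a] = "out"
                changed = True
    return {a: (s if s is not None else "undecided") for a, s in status.items()}
```

For a chain A attacks B attacks C, this labels A and C 'in' and B 'out'; mutual attacks with no outside support stay 'undecided', which is exactly the kind of conflict classical logic cannot express.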
Pub Date: 2013-09-08  DOI: 10.1109/BRICS-CCI-CBIC.2013.99
Generating Synthetic Data for Context-Aware Recommender Systems
Marden B. Pasinato, Carlos E. Mello, Marie-Aude Aufaure, Geraldo Zimbrão
Context-Aware Recommender Systems (CARS) have emerged as a way of providing more precise and interesting recommendations by using data about the context in which consumers buy goods and/or services. CARS consider not only the ratings given to items by consumers (users), but also the context attributes related to those ratings. Several algorithms and methods have been proposed in the literature to deal with context-aware ratings. Although many proposals and approaches exist for this kind of recommendation, adequate public datasets containing users' context-aware ratings of items are limited, and even these are usually not large enough to evaluate the proposed CARS thoroughly. One solution to this issue is to crawl such data from e-commerce websites. However, crawling can be very time-consuming and is complicated by legal and privacy concerns. In addition, data crawled from e-commerce websites may not suffice for a complete evaluation, since it cannot simulate all possible user behaviors and characteristics. In this article, we propose a methodology for generating synthetic datasets for context-aware recommender systems, enabling researchers and developers to create their own datasets with the characteristics against which they want to evaluate their algorithms and methods. Our methodology lets researchers define users' rating behavior through the Probability Distribution Function (PDF) associated with their profiles.
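The core idea — ratings drawn from a profile-specific PDF, with context attributes attached — can be sketched as follows. The profile names, Gaussian parameters, and context values below are invented for illustration; the paper's methodology lets the researcher choose them.

```python
import random

# Illustrative sketch of the paper's idea: each user profile carries a
# rating PDF (here a Gaussian per profile), and each synthetic rating is
# drawn from it together with a context attribute. All parameter values
# below are hypothetical.

PROFILES = {
    "enthusiast": (4.5, 0.5),   # (mean, std) of the profile's rating PDF
    "critic": (2.0, 0.8),
}
CONTEXTS = ["weekday", "weekend", "holiday"]

def synth_rating(profile, rng):
    mean, std = PROFILES[profile]
    r = rng.gauss(mean, std)
    return max(1, min(5, round(r)))          # clip to a 1..5 rating scale

def synth_dataset(n_users, n_items, rng=None):
    rng = rng or random.Random(42)
    rows = []
    for u in range(n_users):
        profile = rng.choice(list(PROFILES))  # assign a profile per user
        for i in range(n_items):
            rows.append({
                "user": u, "item": i,
                "context": rng.choice(CONTEXTS),
                "rating": synth_rating(profile, rng),
            })
    return rows
```

Swapping the Gaussian for another distribution (or making the PDF context-dependent) changes the simulated behavior without touching the generation loop, which is the flexibility the methodology aims for.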
Pub Date: 2013-09-08  DOI: 10.1109/BRICS-CCI-CBIC.2013.74
Using Curves of Permanence to Study the Contribution of Input Variables in Artificial Neural Network Models: A New Proposed Methodology
H. Alves, M. Valença
Understanding the influence of certain factors on a particular phenomenon can be highly relevant to decision-making. An example is identifying the level of influence that factors such as smoking, stress, and lack of exercise have on the predisposition to heart disease. Knowing which of these inputs are relevant for a person to become a cardiac patient makes it possible to take preventive measures. This article presents a new method to assist the far-from-simple task of feature selection, using the statistical function called the curve of permanence. We show permanence curves applied to the result data from executions of several existing feature selection algorithms, all of them based on Artificial Neural Networks (ANNs). The objective of this study is to propose a technique that adds robustness to the process of determining the contribution values of an ANN's inputs.
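A permanence (duration) curve sorts observed values in decreasing order and pairs each with the fraction of observations that equal or exceed it. A minimal sketch, assuming the paper applies this to the per-run contribution values produced by the feature selection algorithms:

```python
# Sketch of a permanence curve: each value is paired with the fraction of
# observations at or above it. How the paper feeds contribution values
# into the curve is an assumption here, not taken from the abstract.

def permanence_curve(values):
    """Returns (exceedance_fraction, value) pairs with values descending."""
    ordered = sorted(values, reverse=True)
    n = len(ordered)
    return [((i + 1) / n, v) for i, v in enumerate(ordered)]
```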
Pub Date: 2013-09-08  DOI: 10.1109/BRICS-CCI-CBIC.2013.20
New Genetic Operators for the Evolutionary Algorithm for Clustering
D. Ferrari, L. N. de Castro
Finding a good clustering solution for an unknown problem is a challenging task. Evolutionary algorithms have proved to be reliable methods for finding high-quality solutions to complex problems. This paper proposes a new set of genetic operators for the Fast Evolutionary Algorithm for Clustering (Fast-EAC) to improve solution quality and computational efficiency. The new algorithm, called EAC-II, is compared with its original version in terms of solution quality and efficiency on several problems from the literature.
Pub Date: 2013-09-08  DOI: 10.1109/BRICS-CCI-CBIC.2013.59
Comparing Strategies to Play a 2-Sided Dominoes Game
Andre R. Da Cruz, F. Guimarães, R. Takahashi
This work presents four agents with different strategies for playing a version of the 2-sided dominoes game commonly played in the state of Minas Gerais, Brazil. This incomplete-information game is played by two players, and the goal is to be the first to discard all tiles according to the rules. Each pair of agents was tested in a computational experiment of 1,000,000 matches in order to evaluate individual effectiveness. In the first strategy, the agent uses random rules to select a tile. The second agent observes the tiles already on the table and in its hand, and selects one using simple probability information computed in an amateur fashion. The third strategy also observes the tiles on the table and in the hand, but computes probability information from the two open ends of the table and the candidate tiles' opposite values in order to decide which tile to play. In the last strategy, the agent combines the third strategy with Boltzmann exploration and a roulette wheel to select the tile. The results show that the last strategy is the best, and that even the random strategy wins a significant number of matches.
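The selection step of the fourth strategy can be sketched as follows: heuristic scores (produced by the third strategy, which is not reproduced here) are turned into a Boltzmann (softmax) distribution and a tile is drawn by roulette wheel. The temperature parameter is an assumption; the paper's exact scoring and parameters may differ.

```python
import math, random

# Hedged sketch of Boltzmann exploration with roulette-wheel selection:
# higher-scored tiles are exponentially more likely to be chosen, while
# low temperature makes the choice nearly greedy.

def boltzmann_choice(tiles, scores, temperature=1.0, rng=None):
    rng = rng or random.Random()
    weights = [math.exp(s / temperature) for s in scores]
    total = sum(weights)
    spin = rng.uniform(0, total)              # roulette-wheel draw
    acc = 0.0
    for tile, w in zip(tiles, weights):
        acc += w
        if spin <= acc:
            return tile
    return tiles[-1]                          # guard against rounding
```

At high temperature this degenerates toward the random first strategy; at low temperature it approaches always playing the best-scored tile, which is why the combined strategy can outperform the purely greedy third one.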
Pub Date: 2013-09-08  DOI: 10.1109/BRICS-CCI-CBIC.2013.65
State Operation Optimization in Electrical Networks
Paulo Pereira, S. Leitão, E. Pires
This paper studies the optimal supply of the energy service using simulations of network operation scenarios, with the aim of optimizing resources and minimizing four variables: operation cost, energy losses, generation cost, and consumer shedding. These simulations produce optimal operation models of the network, giving the system operator the knowledge to apply pre-established procedures in contingency situations so as to anticipate and minimize drawbacks. The simulations were performed with a multiobjective particle swarm optimization algorithm. The algorithm was applied to the IEEE 14-bus network, where the optimal power flow was evaluated with the MATPOWER tool to establish an optimal operating model that minimizes the associated costs.
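With four objectives to minimize at once, the multiobjective particle swarm optimizer has to compare candidate operating points by Pareto dominance rather than a single cost. A minimal sketch of that comparison (the paper's full algorithm and MATPOWER coupling are not reproduced):

```python
# Core comparison used when maintaining a Pareto archive in multiobjective
# PSO: cost vector a dominates b if it is no worse in every objective and
# strictly better in at least one (all objectives minimized).

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```

Here a vector would hold (operation cost, energy losses, generation cost, consumer shedding); non-dominated vectors form the trade-off front the operator chooses from.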
Pub Date: 2013-09-08  DOI: 10.1109/BRICS-CCI-CBIC.2013.39
Using Survey and Weighted Functions to Generate Node Probability Tables for Bayesian Networks
M. Perkusich, A. Perkusich, Hyggo Oliveira de Almeida
Bayesian networks have recently become a popular technique for representing knowledge about uncertain domains and have been successfully applied in various areas. Even though there are many success stories and Bayesian networks have proved capable of representing uncertainty in many different domains, two significant barriers to building large-scale Bayesian networks remain: building the Directed Acyclic Graph (DAG) and building the Node Probability Tables (NPTs). In this paper, we focus on the second barrier and present a method that generates NPTs through weighted expressions built from data collected from domain experts via a survey. Our method is limited to Bayesian networks composed only of ranked nodes. It consists of five steps: (i) define the network's DAG, (ii) run the survey, (iii) order the NPTs' relationships by their relative magnitudes, (iv) generate weighted functions, and (v) generate the NPTs. The advantage of our method over existing ones that use weighted expressions to generate NPTs is the ability to quickly collect data from domain experts located around the world. We describe one case in which the method was used for validation purposes, showing that it requires less time from each domain expert than other existing methods.
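Step (v) can be sketched for one parent configuration of a ranked node: ranked states are mapped onto [0, 1], a weighted mean of the parents' states is taken, and the child's column is a discretized Gaussian around that mean. The state names, weights, and variance below are illustrative; the paper's weighted functions come from the survey data.

```python
import math

# Hedged sketch of generating one NPT column for a ranked node from a
# weighted mean of its parents' states. All numeric parameters here are
# hypothetical placeholders for the survey-derived weighted functions.

STATES = ["low", "medium", "high"]                       # ranked scale
CENTRES = {s: (i + 0.5) / len(STATES) for i, s in enumerate(STATES)}

def npt_column(parent_states, weights, sigma=0.15):
    """Distribution over the child's states for one parent configuration."""
    total_w = sum(weights)
    mean = sum(w * CENTRES[s] for w, s in zip(weights, parent_states)) / total_w
    # Discretized Gaussian centred on the weighted mean, then normalized.
    dens = [math.exp(-((CENTRES[s] - mean) ** 2) / (2 * sigma ** 2))
            for s in STATES]
    z = sum(dens)
    return {s: d / z for s, d in zip(STATES, dens)}
```

Repeating this over every combination of parent states fills the whole NPT, which is what makes the weighted-function approach tractable for large tables.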
Pub Date: 2013-09-08  DOI: 10.1109/BRICS-CCI-CBIC.2013.23
Estimation of Distribution Algorithm Based on a Multivariate Extension of the Archimedean Copula
Harold D. De Mello, A. V. Abs da Cruz, M. Vellasco
This paper presents a copula-based Estimation of Distribution Algorithm with parameter updating for numerical optimization problems. The model implements an estimation of distribution algorithm that uses a multivariate extension of the Archimedean copula (MEC-EDA) to estimate the conditional probabilities used to generate a population of individuals. Moreover, the model applies traditional crossover and elitism operators during the optimization. We show that this approach improves overall optimization performance when compared with other copula-based EDAs.
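The paper's multivariate extension is not reproduced here, but the building block such copula-based EDAs rely on can be sketched: drawing dependent uniform pairs from a bivariate Clayton copula (an Archimedean family member) by conditional inversion, which is how new individuals inherit the dependence structure estimated from the selected population.

```python
import random

# Sketch under stated assumptions: bivariate Clayton copula sampling by
# conditional inversion; theta > 0 controls dependence strength. The
# paper's MEC-EDA generalizes beyond this two-variable case.

def sample_clayton(theta, rng=None):
    rng = rng or random.Random()
    u1 = 1 - rng.random()                    # in (0, 1]
    v = 1 - rng.random()
    # Invert the conditional CDF of u2 given u1 for the Clayton copula.
    u2 = (u1 ** (-theta) * (v ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    return u1, u2
```

In an EDA loop, each uniform coordinate would then be pushed through the inverse marginal CDF of the corresponding decision variable to produce a candidate solution.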
Pub Date: 2013-09-08  DOI: 10.1109/BRICS-CCI-CBIC.2013.92
Combination of Biased Artificial Neural Network Forecasters
T. F. Oliveira, Ricardo T. A. De Oliveira, P. Firmino, Paulo S. G. de Mattos Neto, T. Ferreira
Artificial neural networks (ANNs) have been paramount for modeling and forecasting time series phenomena. It is usually assumed that each ANN model generates white-noise prediction errors. However, mostly because of disturbances not captured by each model, this assumption may be violated. Furthermore, adopting a single ANN model may lead to statistical bias and underestimation of uncertainty. This paper introduces a two-step maximum likelihood method for correcting and combining ANN models. Applications involving single ANN models for the Dow Jones Industrial Average index and S&P 500 series illustrate the usefulness of the proposed framework.
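The paper's two-step method is not spelled out in the abstract; a hedged sketch of the standard maximum-likelihood version of the same idea (under independent Gaussian errors) is: first remove each forecaster's estimated bias, then combine with inverse-variance weights.

```python
# Sketch, assuming independent Gaussian forecast errors: step 1 corrects
# each model's bias (mean past error), step 2 combines the corrected
# forecasts with inverse-variance weights, which is the ML-optimal
# combination under that assumption. The paper's actual estimator may differ.

def combine_forecasts(forecasts, errors):
    """forecasts: list of point forecasts; errors: per-model lists of past
    errors (actual minus forecast), used to estimate bias and variance."""
    corrected, inv_vars = [], []
    for f, errs in zip(forecasts, errors):
        bias = sum(errs) / len(errs)                        # mean error
        var = sum((e - bias) ** 2 for e in errs) / len(errs)
        corrected.append(f + bias)                          # bias correction
        inv_vars.append(1.0 / var if var > 0 else 1.0)
    total = sum(inv_vars)
    return sum(w * c for w, c in zip(inv_vars, corrected)) / total
```

A forecaster whose residuals are large and erratic thus contributes little to the combination, while a biased but stable one is rehabilitated by the correction step rather than discarded.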
Pub Date: 2013-09-08  DOI: 10.1109/BRICS-CCI-CBIC.2013.101
Tracking Objects in a Smart Home
Vinicius Prado da Fonseca, P. Rosa
This work proposes an RSSI-based localization system on a home wireless sensor network. To support a robot assistant in pick-and-place tasks, our current system estimates the location of an object from the signal strength received by a mobile device in a ZigBee sensor network. Two models were used: (a) log-distance path loss, in which signal loss has a random component following a log-normal distribution, and (b) free-space decay, based on the decay law for a signal in open space. RSSI measurements were taken in the laboratory to apply the estimation method. Moreover, experiments with satisfactory results were performed on a public dataset to benchmark our results.
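Model (a), log-distance path loss, predicts that received power falls off with the log of distance; inverting it yields a distance estimate from a measured RSSI. The reference RSSI and path-loss exponent below are placeholder calibration values, which in practice come from measurements like the paper's laboratory campaign.

```python
# Deterministic part of the log-distance path loss model:
#   RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0)
# solved for d. rssi_d0 (RSSI at reference distance d0) and the path-loss
# exponent n are environment-dependent; the values here are illustrative.

def distance_from_rssi(rssi, rssi_d0=-40.0, d0=1.0, n=2.5):
    """Estimate distance in metres from a measured RSSI in dBm."""
    return d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))
```

The model's log-normal random term means single readings are noisy, so a practical system averages several RSSI samples (or fuses several anchors) before inverting.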