Distributed genetic process mining
Carmen Bratosin, N. Sidorova, Wil M.P. van der Aalst
Pub Date: 2010-07-18 | DOI: 10.1109/CEC.2010.5586250 | IEEE Congress on Evolutionary Computation (CEC 2010), pp. 1-8
Process mining aims at discovering process models from event logs in order to offer insight into the real use of information systems. Most existing process mining algorithms fail to discover complex constructs or have problems dealing with noise and infrequent behavior. The genetic process mining algorithm overcomes these issues by using genetic operators to search for the fittest solution in the space of all possible process models. The main disadvantage of genetic process mining is the required computation time. In this paper we present a coarse-grained distributed variant of the genetic miner that reduces the computation time. The degree of improvement obtained depends strongly on the parameter values and the characteristics of the event logs. We perform an empirical evaluation to determine guidelines for setting the parameters of the distributed algorithm.
Evolving efficient limit order strategy using Grammatical Evolution
Wei Cui, A. Brabazon, M. O’Neill
Pub Date: 2010-07-18 | DOI: 10.1109/CEC.2010.5586040 | IEEE Congress on Evolutionary Computation (CEC 2010), pp. 1-6
Trade execution is concerned with the actual mechanics of buying or selling the desired amount of a financial instrument of interest. A practical problem in trade execution is how to trade a large order as efficiently as possible. A trade execution strategy is designed for this task with the aim of minimizing the total trade cost. Grammatical Evolution (GE) is an evolutionary automatic programming methodology that can be used to evolve rule sets; in our previous work it was shown to evolve high-quality trade execution strategies. In this paper, that work is extended by adopting two different limit order lifetimes and three benchmark limit order strategies. GE is used to evolve efficient limit order strategies that determine the aggressiveness levels of limit orders. We found that the GE-evolved limit order strategies were highly competitive against the three benchmark strategies, and that the limit order strategies with the long-term lifetime performed better than those with the short-term lifetime.
Constrained optimization by the ε constrained differential evolution with an archive and gradient-based mutation
T. Takahama, S. Sakai
Pub Date: 2010-07-18 | DOI: 10.1109/CEC.2010.5586484 | IEEE Congress on Evolutionary Computation (CEC 2010), pp. 1-9
The ε constrained method is an algorithm transformation method that converts algorithms for unconstrained problems into algorithms for constrained problems by means of the ε level comparison, which compares search points based on their pairs of objective value and constraint violation. We have previously proposed the ε constrained differential evolution (εDE), the combination of the ε constrained method and differential evolution (DE), and shown that the εDE runs very fast and finds solutions of very high quality. We also proposed the εDE with gradient-based mutation (εDEg), which utilizes gradients of the constraints in order to solve problems with difficult constraints. In this study, we propose the ε constrained DE with an archive and gradient-based mutation (εDEag). The εDEag utilizes an archive to maintain the diversity of individuals and adopts a new way of selecting the ε level control parameter in the εDEg. The 18 problems given in the CEC 2010 special session on “Single Objective Constrained Real-Parameter Optimization” are solved by the εDEag, and the results are reported in this paper.
Functionalization of microarray devices: Process optimization using a multiobjective PSO and multiresponse MARS modeling
Laura Villanova, P. Falcaro, D. Carta, I. Poli, Rob J Hyndman, K. Smith‐Miles
Pub Date: 2010-07-18 | DOI: 10.1109/CEC.2010.5586165 | IEEE Congress on Evolutionary Computation (CEC 2010), pp. 1-8
An evolutionary approach for the optimization of microarray coatings produced via sol-gel chemistry is presented. The aim of the methodology is to address the challenging aspects of the problem: an unknown objective function, a high-dimensional variable space, constraints on the independent variables, multiple responses, expensive or time-consuming experimental trials, and the expected complexity of the functional relationships between independent and response variables. The proposed approach iteratively selects a set of experiments by combining a multiobjective Particle Swarm Optimization (PSO) and a multiresponse Multivariate Adaptive Regression Splines (MARS) model. At each iteration of the algorithm the selected experiments are implemented and evaluated, and the system response is used as feedback for the selection of the new trials. The performance of the approach is measured in terms of improvements with respect to the best coating obtained by changing one variable at a time (the method typically used by scientists). Substantial enhancements have been observed, and the proposed evolutionary approach is shown to be a useful methodology for process optimization with great promise for industrial applications.
Optimising multi-modal polynomial mutation operators for multi-objective problem classes
Kent McClymont, E. Keedwell
Pub Date: 2010-07-18 | DOI: 10.1109/CEC.2010.5586076 | IEEE Congress on Evolutionary Computation (CEC 2010), pp. 1-8
This paper presents a novel method for generating new probability distributions tailored to specific problem classes for use in optimisation mutation operators. A range of tailored operators with varying behaviours is created using the proposed technique, and the evolved multi-modal polynomial distributions are found to match the performance of a tuned Gaussian distribution when applied to a mutation operator incorporated in a simple (1+1) Evolution Strategy. The generated heuristics are shown to display a range of desirable characteristics, such as speed of convergence, on the DTLZ test problems 1, 2 and 7.
Unveiling Skype encrypted tunnels using GP
Riyad Alshammari, A. N. Zincir-Heywood
Pub Date: 2010-07-18 | DOI: 10.1109/CEC.2010.5586288 | IEEE Congress on Evolutionary Computation (CEC 2010), pp. 1-8
The classification of encrypted traffic, namely Skype, from network traffic represents a particularly challenging problem. Solutions should ideally be both simple — and therefore efficient to deploy — and accurate. Recent advances in team-based Genetic Programming provide the opportunity to decompose the original problem into a subset of classifiers with non-overlapping behaviors. Thus, in this work we investigate the identification of Skype encrypted traffic using the Symbiotic Bid-Based (SBB) paradigm of team-based Genetic Programming (GP), based on flow features and without using IP addresses, port numbers or payload data. Evaluation of SBB-GP against C4.5 and AdaBoost — representing current best practice — indicates that SBB-GP is capable of providing simpler solutions in terms of the number of features used and the complexity of the solution/model without sacrificing accuracy.
Cooperative Co-evolution for large scale optimization through more frequent random grouping
M. Omidvar, Xiaodong Li, Zhenyu Yang, X. Yao
Pub Date: 2010-07-18 | DOI: 10.1109/CEC.2010.5586127 | IEEE Congress on Evolutionary Computation (CEC 2010), pp. 1-8
In this paper we propose three techniques to improve the performance of one of the major algorithms for large scale continuous global function optimization. Multilevel Cooperative Co-evolution (MLCC) is based on a Cooperative Co-evolutionary framework and employs a technique called random grouping in order to group interacting variables in one subcomponent. It also uses another technique called adaptive weighting for the co-adaptation of subcomponents. We prove that the probability of grouping interacting variables in one subcomponent using random grouping drops significantly as the number of interacting variables increases. This calls for more frequent random grouping of variables. We show how to increase the frequency of random grouping without increasing the number of fitness evaluations. We also show that adaptive weighting is ineffective and in most cases fails to improve the quality of the found solutions, and hence wastes a considerable amount of CPU time on extra evaluations of the objective function. Finally, we propose a new technique for the self-adaptation of subcomponent sizes in Cooperative Co-evolution. We demonstrate how a substantial improvement can be gained by applying these three techniques.
Evolutionary design of reversible digital circuits using IMEP: the case of the even parity problem
F. Hadjam, C. Moraga
Pub Date: 2010-07-18 | DOI: 10.1109/CEC.2010.5586252 | IEEE Congress on Evolutionary Computation (CEC 2010), pp. 1-6
Reversible logic is an emerging research area that has attracted significant attention in recent years. Developing systematic logic synthesis algorithms for reversible logic is still an open area of research. Unlike other areas of application, there are relatively few publications on applications of genetic programming (and evolutionary algorithms in general) to reversible logic synthesis. In this paper we introduce a new method, a variant of IMEP. The case of digital circuits for the even-parity problem is investigated. The type of gate used to evolve circuits for this problem is the Fredkin gate.
A genetic hyperheuristic algorithm for the resource constrained project scheduling problem
K. Anagnostopoulos, G. Koulinas
Pub Date: 2010-07-18 | DOI: 10.1109/CEC.2010.5586488 | IEEE Congress on Evolutionary Computation (CEC 2010), pp. 1-6
The resource constrained project scheduling problem is one of the most important issues project managers have to deal with during project implementation, as constrained resource availabilities very often lead to delays in project completion and budget overruns. For solving this NP-hard optimization problem, we propose a genetic-based hyperheuristic, i.e. an algorithm controlling a set of low-level heuristics that work in the solution domain. Chromosomes impose the sequence in which the algorithm applies the low-level heuristics. Implemented within a commercial project management software system, the hyperheuristic operates on the priority values that the software uses for scheduling activities. We perform a series of computational experiments with randomly generated projects. The results show that the algorithm is very promising for finding good solutions in reasonable time.
A comparative analysis of genetic algorithm and ant colony optimization to select attributes for an heterogeneous ensemble of classifiers
L. E. A. Santana, Ligia Silva, A. Canuto, F. Pintro, K. Vale
Pub Date: 2010-07-18 | DOI: 10.1109/CEC.2010.5586080 | IEEE Congress on Evolutionary Computation (CEC 2010), pp. 1-8
In the context of ensemble systems, feature selection methods can be used to provide different subsets of attributes for the individual classifiers, aiming to reduce redundancy among the attributes of a pattern and to increase the diversity in such systems. Among the several techniques that have been proposed in the literature, optimization methods have been used to find the optimal subset of attributes for an ensemble system. In this paper, two optimization techniques, genetic algorithms and ant colony optimization, are investigated to guide the distribution of the features among the classifiers. The analysis is conducted in the context of heterogeneous ensembles and using different ensemble sizes.