A. Berlanga, P. I. Viñuela, A. Sanchis, J. M. Molina
Neural networks (NN) can be used as controllers in autonomous robots. The specific features of the navigation problem in robotics make it difficult to generate good training sets for the NN. An evolution strategy (ES) is therefore introduced to learn the weights of the NN in place of the network's own learning method. The ES is used to learn high-performance reactive behavior for navigation and collision avoidance. No subjective information about "how to accomplish the task" has been included in the fitness function. The learned behaviors are able to solve the problem in different environments; the learning process has thus proven its ability to obtain a specialized behavior. All the behaviors obtained have been tested in a set of environments, and the capability of generalization is shown for each learned behavior. A simulator based on the Khepera mini-robot has been used to learn each behavior.
"Neural networks robot controller trained with evolution strategies," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), IEEE, 1999. doi:10.1109/CEC.1999.781954
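The weight-evolution scheme the abstract describes can be illustrated with a minimal sketch: a (μ+λ) evolution strategy over the flat weight vector of a tiny two-motor controller. Everything here (the network size, the fixed batch of fake sensor readings, the forward-minus-turning objective) is a toy stand-in, not the paper's Khepera simulator or fitness function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed batch of fake sensor readings so the fitness is deterministic
# (a stand-in for the simulator, which this sketch does not model).
READINGS = np.random.default_rng(42).random((50, 8))

def controller(weights, sensors):
    """Tiny feedforward net: 8 IR inputs -> 2 motor speeds in (-1, 1)."""
    return np.tanh(weights.reshape(2, 8) @ sensors)

def fitness(weights):
    # Illustrative objective only: reward forward motion, penalize turning.
    speeds = np.array([controller(weights, s) for s in READINGS])
    forward = speeds.mean(axis=1)
    turning = np.abs(speeds[:, 0] - speeds[:, 1])
    return float(np.mean(forward - 0.5 * turning))

def evolve(mu=5, lam=20, sigma=0.1, generations=40):
    """(mu + lambda) evolution strategy over the flat weight vector."""
    parents = [rng.normal(0.0, 1.0, 16) for _ in range(mu)]
    for _ in range(generations):
        offspring = [parents[rng.integers(mu)] + rng.normal(0.0, sigma, 16)
                     for _ in range(lam)]
        pool = parents + offspring
        pool.sort(key=fitness, reverse=True)   # elitist truncation selection
        parents = pool[:mu]
    return parents[0]
```

No gradient information is used at any point, which is the core of the approach: the ES only needs scalar fitness evaluations of complete behaviors.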
The use of multiple missiles to improve the kill probability against a target is studied. Using the same guidance law or strategy for two missiles fired from approximately the same position does not make the best use of the two-to-one numerical advantage during the engagement. The use of different guidance strategies is put forward as a method to improve the kill probability; the objective is to produce different intercept trajectories for the two missiles. In this study a medium- to short-range air-to-air engagement scenario using two active monopulse-radar-based homing missiles is considered. A genetic algorithm (GA) is used to generate two guidance laws which produce different intercept trajectories and also improve the overall performance of the two-missile system. The individual guidance laws produced by the GA are implemented using radial basis function neural networks (RBFNs). The laws generate significantly different trajectories for the two missiles, producing a combination of side-on and head-on intercepts in some scenarios. Their performance and robustness are demonstrated and compared to two modern guidance laws by simulation. The dual RBFN laws are shown to outperform the two analytical laws with a similar level of robustness.
P. A. Creaser and B. A. Stacey, "Generation of dual missile strategies using genetic algorithms," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), IEEE, 1999. doi:10.1109/CEC.1999.781915
The Neocognitron, inspired by the mammalian visual system, is a complex neural network with numerous parameters and weights that must be trained before it can be used for pattern recognition. However, it is not easy to optimise these parameters and weights by gradient descent algorithms. We present a staged training approach using evolutionary algorithms. The experiments demonstrate that evolutionary algorithms can successfully train the Neocognitron to perform image recognition on real world problems.
Z. Pan, T. Sabisch, R. Adams, and H. Bolouri, "Staged training of Neocognitron by evolutionary algorithms," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), IEEE, 1999. doi:10.1109/CEC.1999.785515
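Staged training — optimizing one stage's parameters with an evolutionary search while the already-trained stages stay frozen — can be sketched as follows. The per-stage quadratic errors, the stage targets, and the (1+1)-style hill climb are illustrative assumptions; the actual system evaluates partially built Neocognitrons on image data, with later stages seeing the outputs of the frozen earlier ones.

```python
import random

random.seed(3)

# Hypothetical per-stage parameter targets standing in for the per-layer
# parameters of the Neocognitron (purely illustrative values).
STAGE_TARGETS = [[0.2, 0.7], [0.5, 0.1], [0.9, 0.3]]

def train_stage(error_fn, dim, iters=300, sigma=0.1):
    """(1+1)-style evolutionary hill climb for one stage's parameters."""
    params = [random.random() for _ in range(dim)]
    best = error_fn(params)
    for _ in range(iters):
        cand = [p + random.gauss(0.0, sigma) for p in params]
        err = error_fn(cand)
        if err <= best:
            params, best = cand, err
    return params, best

def staged_training():
    frozen = []   # stages trained so far stay fixed, as in staged training
    errors = []
    for target in STAGE_TARGETS:
        # Toy quadratic error per stage; the real per-stage objective would
        # depend on the outputs of the frozen earlier stages.
        error_fn = lambda p, t=target: sum((pi - ti) ** 2
                                           for pi, ti in zip(p, t))
        params, err = train_stage(error_fn, len(target))
        frozen.append(params)
        errors.append(err)
    return frozen, errors
```

The point of the staging is that each evolutionary search works in a small parameter space instead of the full network's, which keeps the search tractable.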
An algorithm is presented for learning concept classification rules. It is a hybrid between evolutionary computing and inductive logic programming (ILP). Given input of positive and negative examples, the algorithm constructs a logic program to classify these examples. The algorithm has several attractive features, including the ability to use explicit background (user-supplied) knowledge and to produce comprehensible output. We present results of applying the algorithm to a natural language processing problem, part-of-speech tagging. The results indicate that using an evolutionary algorithm to direct a population of ILP learners can increase accuracy. This result is further improved when crossover is used to exchange rules at intermediate stages in learning. The improvement over Progol, a greedy ILP algorithm, is statistically significant (P<0.005).
Philip G. K. Reiser and Patricia J. Riddle, "Evolution of logic programs: part-of-speech tagging," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), IEEE, 1999. doi:10.1109/CEC.1999.782604
The article analyzes the behavior of evolution strategies and their current mutation variants on a simple rotating dynamic problem. The degree of rotation parameterizes the dynamism involved, which enables systematic examination. As a result, the complex covariance matrix adaptation proves superior under slow rotation, but with increasing dynamism its adaptation mechanism seldom finds the optimum, whereas the simple uniform adaptation produces stable results. Moreover, this examination gives rise to questioning the principle of making small mutation changes with high probability in the dynamic context.
Karsten Weicker and N. Weicker, "On evolution strategy optimization in dynamic environments," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), IEEE, 1999. doi:10.1109/CEC.1999.785525
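A rotating dynamic problem of the kind analyzed can be sketched as an optimum circling at a fixed angular step, chased by a plain (1+1)-ES. The radius, angular step, and mutation strength below are arbitrary illustrative choices, not the paper's settings, and the simple fixed-step ES stands in for the mutation variants being compared.

```python
import math
import random

random.seed(1)

def moving_optimum(t, radius=5.0, step=0.02):
    """The optimum rotates on a circle; `step` (radians per generation)
    plays the role of the rotation parameter controlling the dynamism."""
    return (radius * math.cos(step * t), radius * math.sin(step * t))

def cost(x, t):
    ox, oy = moving_optimum(t)
    return (x[0] - ox) ** 2 + (x[1] - oy) ** 2

def track(generations=500, sigma=0.4):
    """A plain (1+1)-ES with fixed mutation strength chasing the optimum."""
    x = (0.0, 0.0)
    distances = []
    for t in range(generations):
        cand = (x[0] + random.gauss(0.0, sigma),
                x[1] + random.gauss(0.0, sigma))
        # The parent must be re-evaluated at time t: the landscape has moved.
        if cost(cand, t) <= cost(x, t):
            x = cand
        distances.append(math.sqrt(cost(x, t)))
    return distances
```

Increasing `step` makes the optimum outrun the mutation operator, which is exactly the regime where the choice of mutation variant starts to matter.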
We consider a team of classifier systems (CSs) operating in the distributed environment of a game-theoretic model. This distributed model, a game with limited interaction, is a variant of the N-person Prisoner's Dilemma game. The payoff of each CS in this model depends only on its own action and on the actions of a limited number of its neighbors in the game. The CSs coevolve while competing for their payoffs. We show how such classifiers learn Nash equilibria, and what variety of behavior is generated: from pure competition to pure cooperation.
F. Seredyński and C. Janikow, "Learning Nash equilibria by coevolving distributed classifier systems," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), IEEE, 1999. doi:10.1109/CEC.1999.785468
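The "limited interaction" payoff structure can be sketched on a ring, where each player's payoff depends only on its own action and its two immediate neighbors' actions. The payoff values are the usual illustrative Prisoner's Dilemma numbers, not taken from the paper.

```python
# One PD round from the row player's view (T=5 > R=3 > P=1 > S=0).
PD = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def payoffs(actions):
    """Limited interaction on a ring: player i's payoff depends only on
    its own action and those of its two immediate neighbors."""
    n = len(actions)
    return [PD[(actions[i], actions[(i - 1) % n])] +
            PD[(actions[i], actions[(i + 1) % n])]
            for i in range(n)]

def unilateral_gain(actions, i):
    """Payoff change for player i if it alone switches its action."""
    flipped = list(actions)
    flipped[i] = 'C' if actions[i] == 'D' else 'D'
    return payoffs(flipped)[i] - payoffs(actions)[i]
```

With these payoffs, all-defect is a Nash equilibrium (no player gains by switching alone), while all-cooperate is not, which is what makes learned cooperation in such games noteworthy.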
Marília Oliveira, J. Barreiros, E. Costa, F. B. Pereira
The study of evolutionary processes presents a major challenge due to their physical and temporal scales. Artificial life systems allow the realization of experiments concerning evolution that overcome these constraints. One widely discussed aspect of the evolution of species is the role played by learning in the evolutionary process. We developed an artificial environment, LamBaDa, whose main purpose is the experimental study of interactions between learning in individual agents and evolution of populations. Agents have an internal state and a neural network that can endow them with learning faculties through a reinforcement learning algorithm. The evolution of populations is modeled through genetic mechanisms applied to the neural network weights during the reproduction process. In this paper we describe LamBaDa, its architecture and its dynamics. We present the simulation settings and discuss the results obtained, with special emphasis on the comparison of populations of agents with and without learning capabilities. The analysis of our results shows that populations of agents with learning capabilities have an advantage over populations whose agents cannot learn, even though learned characteristics are not genetically encoded. We also observed that this advantage is significant only if the agents live long enough to learn anything useful.
"LamBaDa: an artificial environment to study the interaction between evolution and learning," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), IEEE, 1999. doi:10.1109/CEC.1999.781919
A hybrid evolutionary algorithm, called mixed-integer hybrid differential evolution (MIHDE), is proposed in this study. In the hybrid method, a mixed coding is used to represent the continuous and discrete variables. A rounding operation is introduced into the mutation to handle the integer variables, so that the method can solve not only mixed-integer nonlinear optimization problems but also purely real or purely integer nonlinear optimization problems. An accelerated phase and a migrating phase are implemented in MIHDE. These two phases act as a balancing operator, used to explore the search space and to exploit the best solution. Two mechanical design examples are solved with MIHDE. The computational results demonstrate that MIHDE is superior to other methods in terms of solution quality and robustness.
Yung-Chien Lin, Feng-Sheng Wang, and Kao-Shing Hwang, "A hybrid method of evolutionary algorithms for mixed-integer nonlinear optimization problems," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), IEEE, 1999. doi:10.1109/CEC.1999.785543
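The rounding operation inside the mutation is the key trick for mixed coding; a stripped-down sketch of it inside a basic DE/rand/1 loop follows. The accelerated and migrating phases are omitted, and the two-variable objective (one continuous, one integer dimension) is invented for illustration.

```python
import random

random.seed(2)

INT_DIMS = {1}  # dimension 1 is integer-valued; dimension 0 is continuous

def f(x):
    # Toy mixed-integer objective (not from the paper); optimum at (2.3, 4).
    return (x[0] - 2.3) ** 2 + (x[1] - 4.0) ** 2

def repair(v):
    """The rounding operation: coerce integer dimensions after mutation."""
    return [float(round(x)) if j in INT_DIMS else x for j, x in enumerate(v)]

def mihde_sketch(pop_size=20, F=0.5, CR=0.9, generations=100):
    pop = [repair([random.uniform(-10.0, 10.0) for _ in range(2)])
           for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            r1, r2, r3 = random.sample(
                [p for j, p in enumerate(pop) if j != i], 3)
            # DE/rand/1 mutation, then rounding of the integer dimensions.
            mutant = repair([a + F * (b - c) for a, b, c in zip(r1, r2, r3)])
            trial = [m if random.random() < CR else pi
                     for m, pi in zip(mutant, pop[i])]
            if f(trial) <= f(pop[i]):   # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)
```

Because every vector entering the population passes through `repair`, the integer dimensions stay integral throughout, while the continuous dimensions evolve as in ordinary differential evolution.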
E. Gómez-Ramírez, A. Poznyak, A. Gonzalez-Yunes, M. Avila-Alvarez
There are two important ways in which artificial neural networks are adapted for dynamic system identification: preprocessing the training values, and adapting the architecture of the network. The article describes an adaptive process for the architecture of a polynomial artificial neural network (PANN) using a genetic algorithm (GA) to improve the learning process. The optimal structure is obtained without previous knowledge of the behavior of the system to be identified. Due to the nature of the PANN structure, it is possible to extract the necessary information from the nonlinear time series in order to minimize the training error. The importance of this work lies in adapting the architecture of the PANN and processing the necessary inputs to minimize this error at the same time. The training error is compared with that of other networks used in the field to forecast chaotic time series.
"Adaptive architecture of polynomial artificial neural network to forecast nonlinear time series," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), IEEE, 1999. doi:10.1109/CEC.1999.781942
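Evolving a network's structure with a GA while fitting the remaining parameters directly can be sketched with a much simpler stand-in: a bitmask over candidate polynomial terms, fitted by least squares, with the GA searching over masks. The term set, the logistic-map data, and the small elitist GA below are all assumptions for illustration, not the PANN architecture or the paper's GA.

```python
import numpy as np

rng = np.random.default_rng(4)

# Candidate terms for a one-step predictor y = f(x); an individual is a
# bitmask selecting which terms the "architecture" includes.
TERMS = [lambda x: np.ones_like(x), lambda x: x,
         lambda x: x ** 2, lambda x: x ** 3]

def fit_error(mask, x, y):
    """Least-squares fit of the selected terms; training MSE is the fitness."""
    cols = [t(x) for t, b in zip(TERMS, mask) if b]
    if not cols:
        return np.inf
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean((A @ coef - y) ** 2))

def ga_structure_search(x, y, pop=16, generations=30, pmut=0.2):
    P = rng.integers(0, 2, (pop, len(TERMS)))
    for _ in range(generations):
        errs = np.array([fit_error(m, x, y) for m in P])
        P = P[np.argsort(errs)]
        # Elitist: keep the best half, refill with mutated copies of it.
        for i in range(pop // 2, pop):
            child = P[i - pop // 2].copy()
            flip = rng.random(len(TERMS)) < pmut
            child[flip] ^= 1
            P[i] = child
    errs = np.array([fit_error(m, x, y) for m in P])
    return P[int(np.argmin(errs))]

# Logistic map y = 3.8 * x * (1 - x): the terms x and x^2 suffice exactly.
x = rng.random(200)
y = 3.8 * x * (1 - x)
best = ga_structure_search(x, y)
```

The GA only decides *which* terms appear; the coefficients are recovered analytically for each candidate structure, mirroring the split between architecture adaptation and parameter learning.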
We discuss the evolution of cooperative behavior among neighboring players in a spatial IPD (Iterated Prisoner's Dilemma) game where every player is located in a cell of a two-dimensional grid-world. In our game, a player in a cell plays against players in its neighboring cells. The game strategy of a player is denoted by a bit string, which determines the next action based on a finite history of previous rounds of the IPD game. Genetic operations for generating a new strategy of a player are also performed within its neighborhood. We first compare the evolution of cooperative behavior in the spatial IPD game with that in the standard non-spatial IPD game. Next, we examine the effect of the existence of cooperative (or hostile) players on the evolution of cooperative behavior. For representing such players with high flexibility, we use a generalized fitness function defined as a weighted sum of the player's payoff and its opponent's payoff. The fitness of a player thus depends not only on its own payoff but also on its opponent's payoff. Every player has its own weight vector in the generalized fitness function; that is, every player is characterized by its weight vector. Then we consider a more general situation where every player has a different weight vector for each of its neighbors. In this situation, we can examine the evolution of a neighborly relation between every pair of neighboring players. A weight vector of a player for a neighbor is updated based on the result of the IPD game between them. Finally, we examine the spatial IPD game with a different matchmaking scheme where the opponent of a player is randomly selected from its neighbors at every round of the IPD game. In such a spatial IPD game, the next action of a player is determined by its strategy (i.e., bit string) based on a finite history of previous rounds of the IPD game with different opponents.
H. Ishibuchi, Tatsuo Nakari, and T. Nakashima, "Evolution of neighborly relations in a spatial IPD game with cooperative players and hostile players," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), IEEE, 1999. doi:10.1109/CEC.1999.782522
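The generalized fitness function — a weighted sum of the player's own payoff and its opponent's payoff — can be written down directly. The PD payoff values and the example weight vectors are illustrative, not the paper's parameter settings.

```python
# Bimatrix PD payoffs: (row player's payoff, column player's payoff).
PD = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
      ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def generalized_fitness(own, opp, w_own, w_opp):
    """Weighted sum of the player's payoff and its opponent's payoff;
    the weight vector (w_own, w_opp) characterizes the player."""
    p_own, p_opp = PD[(own, opp)]
    return w_own * p_own + w_opp * p_opp

def best_reply(opp, w_own, w_opp):
    """Action maximizing the generalized fitness against a fixed opponent move."""
    return max(['C', 'D'],
               key=lambda a: generalized_fitness(a, opp, w_own, w_opp))
```

With weights (1, 0) the player is purely selfish and defects against a cooperator, while with weights (0.5, 0.5) valuing the opponent's payoff equally makes cooperation the best reply — which is how the weight vector models cooperative versus hostile dispositions.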