"Evolutionary many-objective optimization using preference on hyperplane", Kaname Narukawa, Yuki Tanigaki, H. Ishibuchi. DOI: 10.1145/2598394.2598420. In: Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation, 2014-07-12.
Abstract: This paper proposes to represent the preference of a decision maker by Gaussian functions on a hyperplane. The preference is used to evaluate non-dominated solutions as a second criterion instead of the crowding distance in NSGA-II. High performance of the proposal is demonstrated for many-objective DTLZ problems.
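As an illustration of the core idea, here is a minimal Python sketch of a Gaussian preference value on the unit hyperplane. The function names, the normalization used as the projection, and the single-sigma Gaussian are assumptions for illustration, not the paper's exact formulation:

```python
import math

def project_to_hyperplane(objectives):
    """Map an objective vector onto the unit hyperplane sum(f_i) = 1
    by normalizing (a common simplification; the paper's exact
    projection may differ)."""
    total = sum(objectives)
    return [f / total for f in objectives]

def gaussian_preference(objectives, centers, sigma=0.1):
    """Preference of a solution: the largest value among Gaussian
    functions centred at the decision maker's preferred points on
    the hyperplane."""
    p = project_to_hyperplane(objectives)
    best = 0.0
    for c in centers:
        d2 = sum((pi - ci) ** 2 for pi, ci in zip(p, c))
        best = max(best, math.exp(-d2 / (2 * sigma ** 2)))
    return best
```

Such a value could then replace the crowding distance as the secondary sorting criterion within NSGA-II's non-dominated fronts.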
"Blind no more: constant time non-random improving moves and exponentially powerful recombination", L. D. Whitley. DOI: 10.1145/2598394.2605349. (No abstract available.)
"Multi-swarm particle swarm optimization with multiple learning strategies", Mengyuan Peng, Yue-jiao Gong, Jingjing Li, Ying-biao Lin. DOI: 10.1145/2598394.2598418.
Abstract: Inspired by the division of labor and migration behavior in nature, this paper proposes a novel particle swarm optimization algorithm with multiple learning strategies (PSO-MLS). Particles are randomly divided into three sub-swarms, and a learning strategy with a different motivation is applied to each sub-swarm. The Traditional Learning Strategy (TLS) inherits the basic operations of PSO to guarantee stability. A Periodically Stochastic Learning Strategy (PSLS) employs a random learning vector to increase diversity and thus enhance global search ability. A Random Mutation Learning Strategy (RMLS) adopts mutation to let particles escape local optima when trapped. In addition, information migrates between the sub-swarms, and after a certain number of generations the sub-swarms aggregate to continue the search, aiming at global convergence. Through these learning strategies and swarm aggregation, PSO-MLS possesses both good exploration and exploitation abilities. On a set of benchmark functions, PSO-MLS achieved higher accuracy on unimodal functions and better solution quality on multimodal functions than several PSO variants.
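The strategy split described above can be sketched as follows. This is a hedged toy version on the sphere function, with simplified PSLS/RMLS terms, a round-robin split (the paper divides particles randomly), and an arbitrary aggregation point; it is not the authors' exact update rules or parameter settings:

```python
import random

def pso_mls_sketch(dim=5, swarm_size=30, iters=200, seed=1):
    """Toy multi-swarm PSO: three sub-swarms with different learning
    strategies, merged into one swarm for the second half of the run."""
    random.seed(seed)
    sphere = lambda x: sum(v * v for v in x)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sphere)[:]
    strategy = [i % 3 for i in range(swarm_size)]  # TLS / PSLS / RMLS
    w, c1, c2 = 0.7, 1.5, 1.5
    for t in range(iters):
        aggregated = t > iters // 2       # sub-swarms merge late in the run
        for i in range(swarm_size):
            s = 0 if aggregated else strategy[i]
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v = (w * vel[i][d]
                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                     + c2 * r2 * (gbest[d] - pos[i][d]))
                if s == 1:                # PSLS-like: extra stochastic term
                    v += 0.1 * random.uniform(-1, 1)
                vel[i][d] = v
                pos[i][d] += v
            if s == 2 and random.random() < 0.05:  # RMLS-like mutation
                d = random.randrange(dim)
                pos[i][d] = random.uniform(-5, 5)
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
                if sphere(pos[i]) < sphere(gbest):
                    gbest = pos[i][:]
    return sphere(gbest)
```

The TLS branch is plain inertia-weight PSO; the other two branches only add their strategy-specific perturbations on top of it, mirroring the paper's division of labor.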
"Tagging in metaheuristics", Ben Kovitz, J. Swan. DOI: 10.1145/2598394.2609844.
Abstract: Could decisions made during some search iterations use information discovered by other search iterations? Then store that information in tags: data that persist between search iterations.
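A minimal sketch of the tag idea, assuming a plain dictionary as the tag store and a toy local search that remembers which neighbour index has improved the solution before (all names here are hypothetical, not from the paper):

```python
import random

def tagged_local_search(evaluate, neighbors, start, iters=100, seed=0):
    """Local search whose tag store (a dict) persists across iterations:
    it counts how often each move index has produced an improvement and
    biases later move selection toward those indices."""
    random.seed(seed)
    tags = {"improving_moves": {}}        # persists between iterations
    current, best = start, evaluate(start)
    for _ in range(iters):
        cands = neighbors(current)
        weights = [1 + tags["improving_moves"].get(i, 0)
                   for i in range(len(cands))]
        i = random.choices(range(len(cands)), weights=weights)[0]
        cand = cands[i]
        score = evaluate(cand)
        if score < best:
            tags["improving_moves"][i] = tags["improving_moves"].get(i, 0) + 1
            current, best = cand, score
    return current, best, tags
```

The point is only that the dict outlives any single iteration, so knowledge discovered early can steer decisions made later.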
"Session details: Workshop: student workshop", Tea Tušar, B. Naujoks. DOI: 10.1145/3250287.
It is our great pleasure to welcome you to the GECCO'14 Student Workshop! The goal of the Student Workshop, organized as a joint event for graduate and undergraduate students, is to assist students with their research in the field of Evolutionary Computation. Exceeding our expectations in both the number and quality of submissions, 14 peer-reviewed papers have been accepted for presentation at the workshop. They cover a wide range of subjects in evolutionary computation, presenting advances in theory as well as applications such as robotics and the travelling salesman problem. The topics include particle swarm algorithms as well as flood evolution, reinforcement learning, parallelism, niching, parameter tuning, and many more, all yielding interesting contributions to the field. During the workshop, the students will receive useful feedback on the quality of their work and their presentation style, assured by a question-and-answer period after each talk led by a mentor panel of established researchers. The students are encouraged to use this opportunity to obtain highly qualified feedback not only on the presented subject but also on future research directions. As has been good practice in past years, the best contributions will receive a small award sponsored by GECCO. In addition, the contributing students are invited to present their work as posters at the GECCO'14 Poster Session -- an excellent opportunity to network with industrial and academic members of the community. We hope that the variety of covered topics will catch the attention of a wide range of GECCO'14 attendees, who will learn about fresh research ideas and meet young researchers with related interests. Other students are encouraged to attend the workshop to learn from the work of their colleagues and broaden their (scientific) horizons.
"Time-series forecasting with evolvable partially connected artificial neural network", Mina Moradi Kordmahalleh, M. G. Sefidmazgi, A. Homaifar, Dukka Bahadur, A. Guiseppi-Elie. DOI: 10.1145/2598394.2598435.
Abstract: In nonlinear and chaotic time-series prediction, constructing a mathematical model of the system dynamics is not an easy task. The Partially connected Artificial Neural Network with Evolvable Topology (PANNET) is a new paradigm for predicting chaotic time series without access to the dynamics or the essential memory depth of the system. The evolvable topology of PANNET provides flexibility in recognizing systems, in contrast to the fixed layered topology of traditional ANNs. This evolvable topology governs the relationship between observation nodes and hidden nodes, where hidden nodes are extra nodes that play the role of memory or internal states of the system. In the proposed variable-length Genetic Algorithm (GA), internal neurons can be connected arbitrarily to any type of node, and the number of neurons, the inputs and outputs of each neuron, and the origin and weight of each connection all evolve in order to find the best configuration of the network.
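The variable-length genome described above might be encoded as a connection list with structural mutation operators. The encoding below is a hypothetical illustration of such a genome, not the paper's exact representation:

```python
import random

def random_genome(n_nodes, rng):
    """A variable-length genome: a list of (source, target, weight)
    connections over n_nodes nodes; different individuals may have
    different numbers of connections."""
    n_conn = rng.randint(1, 2 * n_nodes)
    return [(rng.randrange(n_nodes), rng.randrange(n_nodes),
             rng.uniform(-1, 1)) for _ in range(n_conn)]

def mutate(genome, n_nodes, rng):
    """Structural mutation: add, delete, or reweight one connection --
    the kind of operator a variable-length GA over topologies needs."""
    g = list(genome)
    op = rng.choice(["add", "delete", "reweight"])
    if op == "add" or not g:
        g.append((rng.randrange(n_nodes), rng.randrange(n_nodes),
                  rng.uniform(-1, 1)))
    elif op == "delete" and len(g) > 1:
        g.pop(rng.randrange(len(g)))
    else:
        i = rng.randrange(len(g))
        s, t, _ = g[i]
        g[i] = (s, t, rng.uniform(-1, 1))
    return g
```

Because connections rather than layers are the unit of variation, arbitrary links between observation and hidden nodes arise naturally from this representation.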
"GECCO 2014 tutorial on evolutionary multiobjective optimization", D. Brockhoff. DOI: 10.1145/2598394.2605339.
Many optimization problems are multiobjective in nature, in the sense that multiple, conflicting criteria need to be optimized simultaneously. Due to the conflict between objectives, there is usually no single optimal solution. Instead, the optimum corresponds to a set of so-called Pareto-optimal solutions: solutions for which no other solution has better function values in all objectives. Evolutionary Multiobjective Optimization (EMO) algorithms are widely used in practice for solving multiobjective optimization problems, for several reasons. As stochastic blackbox algorithms, EMO approaches can tackle problems with nonlinear, nondifferentiable, or noisy objective functions. As set-based algorithms, they can compute or approximate the full set of Pareto-optimal solutions in a single algorithm run, as opposed to classical solution-based techniques from the multicriteria decision making (MCDM) field. Using EMO approaches in practice has two further advantages: they make it possible to learn about a problem formulation, for example by automatically revealing common design principles among (Pareto-optimal) solutions (innovization), and it has been shown that certain single-objective problems become easier to solve with randomized search heuristics if the problem is reformulated as a multiobjective one (multiobjectivization). This tutorial aims to give a broad introduction to the EMO field and to present some of its recent research results in more detail. More specifically, we are going to (i) introduce the basic principles of EMO algorithms in comparison with classical solution-based approaches, (ii) show a few practical examples which motivate the use of EMO in terms of the innovization and multiobjectivization principles mentioned above, and (iii) present a general overview of state-of-the-art algorithms and techniques. Moreover, we will present some of the most important research results in areas such as indicator-based EMO, preference articulation, and performance assessment. Though classified as introductory, this tutorial is intended for both novices and regular users of EMO. Those without any prior knowledge will learn about the foundations of multiobjective optimization and the basic working principles of state-of-the-art EMO algorithms. Open questions, presented throughout the tutorial, can serve for all participants as a starting point for future research and/or discussions during the conference.
"Template method hyper-heuristics", J. Woodward, J. Swan. DOI: 10.1145/2598394.2609843.
The optimization literature is awash with metaphorically-inspired metaheuristics and their subsequent variants and hybridizations. This results in a plethora of methods, with descriptions that are often polluted with the language of the metaphor which inspired them [8]. Within such a fragmented field, the traditional approach of manual 'operator tweaking' makes it difficult to establish the contribution of individual metaheuristic components to the overall success of a methodology. Irrespective of whether it happens to best the state of the art, such 'tweaking' is so labour-intensive that it does relatively little to advance scientific understanding. In order to introduce further structure and rigour, it is therefore desirable not only to be able to specify entire families of metaheuristics (rather than individual metaheuristics), but also to be able to generate and test them. In particular, the adoption of a model-agnostic approach towards the generation of metaheuristics would help to establish which metaheuristic components are useful contributors to a solution.
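One way to picture specifying a family of metaheuristics is the classic template-method pattern: an invariant search skeleton with overridable variation points, each override yielding one family member. This is a sketch of the general idea under our own assumptions, not the authors' framework:

```python
import random
from abc import ABC, abstractmethod

class LocalSearchTemplate(ABC):
    """Template method: the invariant search skeleton lives here; a
    family member is generated by overriding only the variation
    points (perturb, accept)."""

    def run(self, start, evaluate, iters=100):
        current, cost = start, evaluate(start)
        for _ in range(iters):
            cand = self.perturb(current)
            c = evaluate(cand)
            if self.accept(c, cost):
                current, cost = cand, c
        return current, cost

    @abstractmethod
    def perturb(self, x): ...

    @abstractmethod
    def accept(self, new_cost, old_cost): ...

class GreedyBitFlip(LocalSearchTemplate):
    """One family member: flip a random bit, accept non-worsening moves."""

    def perturb(self, x):
        i = random.randrange(len(x))
        return x[:i] + (1 - x[i],) + x[i + 1:]

    def accept(self, new_cost, old_cost):
        return new_cost <= old_cost
```

Enumerating or evolving the overrides, rather than hand-tweaking one algorithm, is what lets the contribution of each component be tested in isolation.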
"Theory of swarm intelligence", Dirk Sudholt. DOI: 10.1145/2598394.2605354. (No abstract available.)
"Medical applications of evolutionary computation", S. Smith. DOI: 10.1145/2598394.2605364.
The application of genetic and evolutionary computation to problems in medicine has increased rapidly over the past five years, but there are specific issues and challenges that distinguish it from other real-world applications. Obtaining reliable and coherent patient data, establishing the clinical need and demonstrating value in the results obtained are all aspects that require careful and detailed consideration. This tutorial is based on research which uses genetic programming (a representation of Cartesian Genetic Programming) in the diagnosis and monitoring of Parkinson's disease, Alzheimer's disease and other neurodegenerative conditions, as well as in the early detection of breast cancer through automated assessment of mammograms. The work is supported by multiple clinical studies in progress in the UK (Leeds General Infirmary), USA (UCSF), UAE (Dubai Rashid Hospital), Australia (Monash Medical Center) and Singapore (National Neuroscience Institute). The technology is protected through three patent applications and a university spin-out company marketing four medical devices. The tutorial covers the following topics: an introduction to medical applications of genetic and evolutionary computation and how these differ from other real-world applications; an overview of past work from both a medical and an evolutionary computation point of view; three case examples of medical applications (i. diagnosis and monitoring of Parkinson's disease, ii. detection of breast cancer from mammograms, iii. cancer screening using Raman spectroscopy); practical advice on how to get started working on medical applications, including existing medical databases, conducting new medical studies, commercialization and protecting intellectual property; and a summary with further reading and links.