Pub Date: 2013-04-16 | DOI: 10.1109/SIS.2013.6615159
C. Iacopino, P. Palmer, A. Brewer, N. Policella, A. Donati
In the last decade, Dynamic Optimization Problems (DOPs) have received increasing attention. Changes in the problem structure pose a great challenge for optimization techniques. The Ant Colony Optimization (ACO) metaheuristic has considerable potential in this field owing to its adaptability and flexibility; however, the design and analysis of such systems remain critical issues. This is where research on formal methods can increase the reliability of these systems and improve the understanding of their dynamics in complex problems such as DOPs. This paper presents a novel ACO algorithm based on an analytical model describing the long-term behaviours of ACO systems in problems represented as binary chains, a type of DOP. These behaviours are described using modelling techniques already developed for studying dynamical systems. The algorithm takes advantage of the new insights offered by this model to regulate the exploration/exploitation trade-off, resulting in an ACO system able to adapt its long-term behaviours to problem changes and to improve its performance by exploiting the experience gained from previous explorations. An empirical evaluation is used to validate the algorithm's adaptability and optimization capabilities.
{"title":"A novel ACO algorithm for dynamic binary chains based on changes in the system's stability","authors":"C. Iacopino, P. Palmer, A. Brewer, N. Policella, A. Donati","doi":"10.1109/SIS.2013.6615159","DOIUrl":"https://doi.org/10.1109/SIS.2013.6615159","url":null,"abstract":"In the last decade, Dynamic Optimization Problems (DOP) have received increasing attention. Changes in the problem structure pose a great challenge for the optimization techniques. The Ant Colony Optimization (ACO) metaheuristic has a number of potentials in this field due to its adaptability and flexibility. However their design and analysis are still critical issues. This is where research on formal methods can increase the reliability of these systems and improve the understanding of their dynamics in complex problems such as DOPs. This paper presents a novel ACO algorithm based on an analytical model describing the long-terms behaviours of the ACO systems in problems represented as binary chains, a type of DOP. These behaviours are described using modelling techniques already developed for studying dynamical systems. The algorithm developed takes advantage of new insights offered by this model to regulate the tradeoff of exploration/exploitation resulting in a ACO system able to adapt its long-term behaviours to the problem changes and to improve its performance due to the experiences learnt from the previous explorations. An empirical evaluation is used to validate the algorithm capabilities of adaptability and optimization.","PeriodicalId":444765,"journal":{"name":"2013 IEEE Symposium on Swarm Intelligence (SIS)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121078951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-04-16 | DOI: 10.1109/SIS.2013.6615153
J. Rueda, I. Erlich
Based on swarm intelligence principles and an enhanced mapping scheme, the extension of the original single-particle mean-variance mapping optimization (MVMO) to its swarm variant (MVMOS) is investigated in this paper. Numerical experiments and comparisons with other heuristic optimization methods, which were conducted on several composition test functions, demonstrate the feasibility and effectiveness of MVMOS when solving multimodal optimization problems. Sensitivity analysis of the algorithm parameters highlights its robust performance.
{"title":"Evaluation of the mean-variance mapping optimization for solving multimodal problems","authors":"J. Rueda, I. Erlich","doi":"10.1109/SIS.2013.6615153","DOIUrl":"https://doi.org/10.1109/SIS.2013.6615153","url":null,"abstract":"Based on swarm intelligence principles and an enhanced mapping scheme, the extension of the original single-particle mean-variance mapping optimization (MVMO) to its swarm variant (MVMOS) is investigated in this paper. Numerical experiments and comparisons with other heuristic optimization methods, which were conducted on several composition test functions, demonstrate the feasibility and effectiveness of MVMOS when solving multimodal optimization problems. Sensitivity analysis of the algorithm parameters highlights its robust performance.","PeriodicalId":444765,"journal":{"name":"2013 IEEE Symposium on Swarm Intelligence (SIS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128777443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-04-16 | DOI: 10.1109/SIS.2013.6615162
N. Lynn, P. N. Suganthan
In this paper, the comprehensive learning particle swarm optimizer (CLPSO) is integrated with guidance vector selection. To update a particle's velocity and position, several candidate guidance positions are constructed based on all particles' best positions; the candidate guidance vector with the best fitness is then selected to guide the particle. A simulation study is performed on the CEC 2005 benchmark problems, and the results show that CLPSO with guidance vector selection performs better when solving shifted and rotated optimization problems.
{"title":"Comprehensive learning particle swarm optimizer with guidance vector selection","authors":"N. Lynn, P. N. Suganthan","doi":"10.1109/SIS.2013.6615162","DOIUrl":"https://doi.org/10.1109/SIS.2013.6615162","url":null,"abstract":"In this paper, comprehensive learning particle swarm optimizer (CLPSO) is integrated with guidance vector selection. To update a particle's velocity and position, several candidate guidance positions are constructed based on all particles' best positions. Then the candidate guidance vector with the best fitness is selected to guide the particle. Simulation study is performed on CEC 2005 benchmark problems and the results show that the CLPSO with guidance vector selection has better performance when solving shifted and rotated optimization problems.","PeriodicalId":444765,"journal":{"name":"2013 IEEE Symposium on Swarm Intelligence (SIS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129936755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-04-16 | DOI: 10.1109/SIS.2013.6615158
S. Doi
Recently, for security reasons, the need to solve the patrolling problem has become increasingly urgent. We consider a problem in which agents patrol a graph at the shortest regular intervals possible for each node. To solve the problem, we propose an autonomous distributed algorithm called pheromone-based Probabilistic Vertex-Ant-Walk (pPVAW), an improved version of Probabilistic Vertex-Ant-Walk (PVAW) that uses a pheromone model for agent communication and cooperative work. In our algorithm, an agent at a node perceives pheromones related to the difference between the current time and the time when the agent previously visited each neighbor node. The agent chooses the next node by roulette selection with probabilities proportional to these pheromones. Agents using pPVAW do not go back to the previously visited node, in contrast to PVAW, whose agents may return to it because they select the next neighbor at random. A comparison of pPVAW with PVAW in dynamic environments indicates that pPVAW performs better than PVAW.
{"title":"Proposal and evaluation of a pheromone-based algorithm for the patrolling problem in dynamic environments","authors":"S. Doi","doi":"10.1109/SIS.2013.6615158","DOIUrl":"https://doi.org/10.1109/SIS.2013.6615158","url":null,"abstract":"Recently, for security reasons, the need to solve the patrolling problem has become increasingly urgent. We consider a problem in which agents patrol a graph at the shortest regular intervals possible for each node. To solve the problem, we propose an autonomous distributed algorithm called pheromone-based Probabilistic Vertex-Ant-Walk (pPVAW), an improved version of Probabilistic Vertex-Ant-Walk (PVAW) that uses a pheromone model for agent communication and cooperative work. In our algorithm, an agent at a node perceives pheromones related to the difference between the current time and the time when the agent previously visited each neighbor node. The agent determines the next node to select using roulette selection that is proportional to the pheromone. Agents using pPVAW do not go back to the previously visited node. This is in contrast to PVAW, in which agents may go back to the previously visited node because agents using PVAW can randomly select a neighbor node to visit. A comparison of pPVAW with PVAW for dynamic environments indicates that the performance of pPVAW is better than that of PVAW.","PeriodicalId":444765,"journal":{"name":"2013 IEEE Symposium on Swarm Intelligence (SIS)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127759460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-04-16 | DOI: 10.1109/SIS.2013.6615185
Hongwei Mo, Lifang Xu
Magnetotactic bacteria (MTB) are a polyphyletic group of prokaryotes whose magnetotaxis makes them orient and swim along geomagnetic field lines. These bacteria are a product of nature's long process of evolution. In this paper, a new optimization algorithm inspired by these characteristics, the magnetotactic bacteria optimization algorithm (MBOA), is investigated on multimodal problems. It is compared with the classical genetic algorithm and several relatively new optimization algorithms, all tested on 10 standard multimodal benchmark functions. The experimental results show that the proposed MBOA is effective on these optimization problems and performs better than the other algorithms.
{"title":"Magnetotactic bacteria optimization algorithm for multimodal optimization","authors":"Hongwei Mo, Lifang Xu","doi":"10.1109/SIS.2013.6615185","DOIUrl":"https://doi.org/10.1109/SIS.2013.6615185","url":null,"abstract":"Magnetotactic bacteria (MTB) is a kind of polyphyletic group of prokaryotes with the characteristics of magnetotaxis that make them orient and swim along geomagnetic field lines. Magnetotactic bacteria is the optimized product of nature by long process of evolution. A new optimization algorithm called magnetotactic bacteria optimization algorithm (MBOA), which is inspired by the characteristics of magnetotactic bacteria is researched on multimodal problems in the paper. It is compared with classical genetic algorithm and some relatively new optimization algorithms. All of them are tested on 10 standard multimodal functions problems. The experiment results show that the proposed MBOA is effective in optimization problems and has better performance than the other algorithms.","PeriodicalId":444765,"journal":{"name":"2013 IEEE Symposium on Swarm Intelligence (SIS)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122440892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-04-16 | DOI: 10.1109/SIS.2013.6615152
Jing J. Liang, B. Qu
Portfolio optimization problems involve selecting assets to invest in so that the investor is able to maximize the overall return and minimize the overall risk. The complexity of an asset allocation problem increases with the number of assets available for investing. When the number of assets/stocks increases to several hundred, it becomes difficult for classical methods to construct a good portfolio. In this paper, the Multi-objective Dynamic Multi-Swarm Particle Swarm Optimizer is employed to solve a portfolio optimization problem with 500 assets (stocks). The results obtained by the proposed method are compared with those of several other optimization methods. The experimental results show that this approach is efficient and confirm its potential for solving large-scale portfolio optimization problems.
{"title":"Large-scale portfolio optimization using multiobjective dynamic mutli-swarm particle swarm optimizer","authors":"Jing J. Liang, B. Qu","doi":"10.1109/SIS.2013.6615152","DOIUrl":"https://doi.org/10.1109/SIS.2013.6615152","url":null,"abstract":"Portfolio optimization problems involve selection of different assets to invest so that the investor is able to maximize the overall return and minimize the overall risk. The complexity of an asset allocation problem increases with the increasing number of assets available for investing. When the number of assets/stocks increase to several hundred, it is difficult for classical method to optimize (construct a good portfolio). In this paper, the Multi-objective Dynamic Multi-Swarm Particle Swarm Optimizer is employed to solve a portfolio optimization problem with 500 assets (stocks). The results obtained by the proposed method are compared several other optimization methods. The experimental results show that this approach is efficient and confirms its potential to solve the large scale portfolio optimization problem.","PeriodicalId":444765,"journal":{"name":"2013 IEEE Symposium on Swarm Intelligence (SIS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126518574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-04-16 | DOI: 10.1109/SIS.2013.6615172
Shafiq Alam, G. Dobbie, Yun Sing Koh, Patricia J. Riddle
Data clustering aims to group data based on similarities between the data elements. Recently, due to the increasing complexity and volume of heterogeneous data, modeling such data for clustering has become a serious challenge. In this paper we tackle the problem of modeling heterogeneous web usage data for clustering. The main contribution is a new similarity measure that we propose for clustering heterogeneous web usage data. We then use this similarity measure in our Particle Swarm Optimization (PSO) based clustering algorithm, Hierarchical Particle Swarm Optimization based clustering (HPSO-clustering), which combines the qualities of hierarchical and partitional clustering to cluster data in a hierarchical agglomerative manner. We present the clustering results and explain the effects of the new similarity measure on inter-cluster and intra-cluster distances. These results verify the applicability of the proposed similarity measure to web usage data.
{"title":"Clustering heterogeneous web usage data using Hierarchical Particle Swarm Optimization","authors":"Shafiq Alam, G. Dobbie, Yun Sing Koh, Patricia J. Riddle","doi":"10.1109/SIS.2013.6615172","DOIUrl":"https://doi.org/10.1109/SIS.2013.6615172","url":null,"abstract":"Data clustering aims to group data based on similarities between the data elements. Recently, due to the increasing complexity and amount of heterogenous data, modeling of such data for clustering has become a serious challenge. In this paper we tackle the problem of modeling heterogeneous web usage data for clustering. The main contribution is a new similarity measure which we propose to cluster heterogeneous web usage data. We then use this similarity measure in our Particle Swarm Optimization (PSO) based clustering algorithm, Hierarchical Particle Swarm Optimization based clustering (HPSO-clustering). HPSO-clustering combines the qualities of hierarchical and partitional clustering to cluster data in a hierarchical agglomerative manner. We present the clustering results and explain the effects of the new similarity measure on inter-cluster and intra-cluster distances. These measures verify the applicability of the proposed similarity measure on web usage data.","PeriodicalId":444765,"journal":{"name":"2013 IEEE Symposium on Swarm Intelligence (SIS)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132914521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-04-16 | DOI: 10.1109/SIS.2013.6615161
Akugbe Martins Arasomwan, A. Adewumi
Inertia weight is one of the control parameters that influence the performance of Particle Swarm Optimization (PSO). Since the introduction of the inertia weight parameter into the PSO technique, different inertia weight strategies have been proposed to enhance the performance of PSO in handling optimization problems. Each of these inertia weights has shown a varying degree of efficiency in improving the PSO algorithm, and research in this area is still ongoing. This paper proposes two adaptive chaotic inertia weight strategies based on the swarm success rate. Experimental results show that these strategies further improve the speed of convergence and the ability to locate near-optimal solutions. The performance of PSO using the proposed inertia weights is verified through empirical studies on several benchmark global optimization problems, in comparison with PSO using the chaotic random and chaotic linearly decreasing inertia weights as well as an inertia weight based on a decreasing exponential function.
{"title":"On adaptive chaotic inertia weights in Particle Swarm Optimization","authors":"Akugbe Martins Arasomwan, A. Adewumi","doi":"10.1109/SIS.2013.6615161","DOIUrl":"https://doi.org/10.1109/SIS.2013.6615161","url":null,"abstract":"Inertia weight is one of the control parameters that influence the performance of Particle Swarm Optimization (PSO). Since the introduction of the inertia weight parameter into PSO technique, different inertia weight strategies have been proposed to enhance the performance of PSO in handling optimization problems. Each of these inertia weights has shown varying degree of efficiency in improving the PSO algorithm. Research is however still ongoing in this area. This paper proposes two adaptive chaotic inertia weight strategies based on swarm success rate. Experimental results show that these strategies further enhance the speed of convergence and the location of best near optimal solutions. The performance of the PSO algorithm using proposed inertia weights compared with PSO using the chaotic random and chaotic linear decreasing inertia weights as well as the inertia weight based on decreasing exponential function adopted for comparison in this paper are verified through empirical studies using some benchmark global optimization problems.","PeriodicalId":444765,"journal":{"name":"2013 IEEE Symposium on Swarm Intelligence (SIS)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126608967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-04-16 | DOI: 10.1109/SIS.2013.6615166
Zhi-hui Zhan, Wei-neng Chen, Ying-biao Lin, Yue-jiao Gong, Yuan-Long Li, Jun Zhang
Human beings are the most intelligent organisms in the world, and the brainstorming process they commonly use has been demonstrated to be a significant and promising way to generate ideas for problem solving. Brain storm optimization (BSO) is a new kind of swarm intelligence algorithm inspired by the human creative problem-solving process. BSO transplants the brainstorming process into optimization algorithm design and has achieved notable success. BSO generally uses grouping, replacing, and creating operators to produce as many ideas as possible, approaching the problem solution generation by generation. Within these operators, BSO involves mainly three control parameters: (1) p_replace, which controls the replacing operator; (2) p_one, which controls whether the creating operator generates new ideas from one cluster or from two clusters; and (3) p_center (p_one_center and p_two_center), which controls whether a cluster center or a random idea is used to create the new idea. In this paper, we investigate how these parameters affect the performance of BSO. More importantly, a new BSO variant designed according to the investigation results is proposed and its performance is evaluated.
{"title":"Parameter investigation in brain storm optimization","authors":"Zhi-hui Zhan, Wei-neng Chen, Ying-biao Lin, Yue-jiao Gong, Yuan-Long Li, Jun Zhang","doi":"10.1109/SIS.2013.6615166","DOIUrl":"https://doi.org/10.1109/SIS.2013.6615166","url":null,"abstract":"Human being is the most intelligent organism in the world and the brainstorming process popularly used by them has been demonstrated to be a significant and promising way to create great ideas for problem solving. Brain storm optimization (BSO) is a new kind of swarm intelligence algorithm inspired by human being creative problem solving process. BSO transplants the brainstorming process in human being into optimization algorithm design and gains successes. BSO generally uses the grouping, replacing, and creating operators to produce ideas as many as possible to approach the problem solution generation by generation. In these operators, BSO involves mainly three control parameters named: (1) p_replce to control the replacing operator; (2) p_one to control the creating operator to create new ideas between one cluster and two clusters; and (3) p_center (p_one_center and p_two_center) to control using cluster center or random idea to create new idea. In this paper, we make investigations on these parameters to see how they affect the performance of BSO. More importantly, a new BSO variant designed according to the investigation results is proposed and its performance is evaluated.","PeriodicalId":444765,"journal":{"name":"2013 IEEE Symposium on Swarm Intelligence (SIS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128628903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-04-16 | DOI: 10.1109/SIS.2013.6615165
T. Sharma, M. Pant, C. Ahn
Foraging behavior has inspired different algorithms for solving real-parameter optimization problems. One of the most popular algorithms in this class is the Artificial Bee Colony (ABC). In the present study, the food sources are initialized by comparing the food source with the worst fitness against the mean of the randomly generated food sources (the population). Furthermore, the scout bee operator is modified to increase the searching capability of the algorithm by sampling solutions within the search range defined by the current population. The proposed variant, called IFS-ABC, is tested on six unconstrained benchmark functions. To further assess its efficiency, we also applied the proposed variant to five constrained engineering optimization problems.
{"title":"Improved food sources in Artificial Bee Colony","authors":"T. Sharma, M. Pant, C. Ahn","doi":"10.1109/SIS.2013.6615165","DOIUrl":"https://doi.org/10.1109/SIS.2013.6615165","url":null,"abstract":"Foraging behavior has inspired different algorithms to solve real-parameter optimization problems. One of the most popular algorithms within this class is the Artificial Bee Colony (ABC). In the present study the food source is initialized by comparing the food source with worst fitness and the evaluated mean of randomly generated food sources (population). Further the scout bee operator is modified to increase searching capabilities of the algorithm to sample solutions within the range of search defined by the current population. The proposed variant is called IFS-ABC and is tested on six unconstrained benchmark function. Further to test the efficiency of the proposed variant we implemented it on five constrained engineering optimization problems.","PeriodicalId":444765,"journal":{"name":"2013 IEEE Symposium on Swarm Intelligence (SIS)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125650407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}