Production of various strategies and position control for Monte-Carlo Go — Entertaining human players
Pub Date: 2013-10-17 · DOI: 10.1109/CIG.2013.6633625
Kokolo Ikeda, Simon Viennot
Thanks to the continued development of tree search algorithms, more precise evaluation functions, and faster hardware, computer Go and computer Shogi have now reached a level of strength sufficient for most amateur players. However, research on entertaining and coaching human players of board games is still very limited. In this paper, we first try to define the requirements for entertaining human players in computer board games. We then describe the different approaches that we have experimented with in the case of Monte-Carlo computer Go.
{"title":"Production of various strategies and position control for Monte-Carlo Go — Entertaining human players","authors":"Kokolo Ikeda, Simon Viennot","doi":"10.1109/CIG.2013.6633625","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633625","url":null,"abstract":"Thanks to the continued development of tree search algorithms, of more precise evaluation functions, and of faster hardware, computer Go and computer Shogi have now reached a level of strength sufficient for most amateur players. However, the research about entertaining and coaching human players of board games is still very limited. In this paper, we try first to define what are the requirements for entertaining human players in computer board games. Then, we describe the different approaches that we have experimented in the case of Monte-Carlo computer Go.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125862546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The impact of connection topology and agent size on cooperation in the iterated prisoner's dilemma
Pub Date: 2013-10-17 · DOI: 10.1109/CIG.2013.6633611
Lee-Ann Barlow, D. Ashlock
This study revisits earlier work concerning the evolutionary trajectory of agents trained to play the iterated prisoner's dilemma on a combinatorial graph. The impact of different connection topologies, used to mediate both the play of the prisoner's dilemma and the flow of genes during selection and replacement, is examined. The variety of connection topologies, stored as combinatorial graphs, is revisited, and the analysis tools used are substantially improved. A novel tool called the play profile summarizes the distribution of behaviors over multiple replicates of the basic evolutionary algorithm and through multiple evolutionary epochs. The impact of changing the number of states used to encode agents is also examined. Changing the combinatorial graph on which the population resides is found to yield statistically significant differences in the play profiles. Changing the number of states in agents is also found to produce statistically significant differences in behavior. The use of multiple epochs in the analysis of agent behavior demonstrates that the distribution of behaviors changes substantially over the course of evolution. The most common pattern is for agents to move toward the cooperative state over time, but this pattern is not universal. Another clear trend is that agents implemented with more states are less cooperative.
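For readers unfamiliar with the agent model, the following is a minimal sketch of a finite-state player for the iterated prisoner's dilemma under the standard payoff matrix; the transition-table encoding and the example two-state machine are illustrative assumptions, not the authors' exact representation or parameters.

```python
# Illustrative finite-state agent for the iterated prisoner's dilemma.
# The encoding (one move and one next state per opponent action, per state)
# and the example machines below are assumptions for illustration only.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

class FSAgent:
    def __init__(self, initial_move, transitions):
        # transitions[state][opponent_move] = (my_next_move, next_state)
        self.initial_move = initial_move
        self.transitions = transitions
        self.reset()

    def reset(self):
        self.state = 0
        self.next_move = self.initial_move

    def play(self):
        return self.next_move

    def observe(self, opponent_move):
        self.next_move, self.state = self.transitions[self.state][opponent_move]

def play_ipd(a, b, rounds=150):
    """Return the total payoffs of agents a and b over an iterated game."""
    a.reset(); b.reset()
    score_a = score_b = 0
    for _ in range(rounds):
        ma, mb = a.play(), b.play()
        pa, pb = PAYOFF[(ma, mb)]
        score_a += pa; score_b += pb
        a.observe(mb); b.observe(ma)
    return score_a, score_b

# A two-state machine that behaves like tit-for-tat: cooperate while the
# opponent cooperates, defect after a defection until cooperation resumes.
tft_like = FSAgent('C', [{'C': ('C', 0), 'D': ('D', 1)},
                         {'C': ('C', 0), 'D': ('D', 1)}])
always_defect = FSAgent('D', [{'C': ('D', 0), 'D': ('D', 0)}])

print(play_ipd(tft_like, always_defect))
```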
{"title":"The impact of connection topology and agent size on cooperation in the iterated prisoner's dilemma","authors":"Lee-Ann Barlow, D. Ashlock","doi":"10.1109/CIG.2013.6633611","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633611","url":null,"abstract":"This study revisits earlier work, concerning the evolutionary trajectory of agents trained to play iterated prisoner's dilemma on a combinatorial graph. The impact of different connection topologies, used to mediate both the play of prisoner's dilemma and the flow of genes during selection and replacement, is examined. The variety of connection topologies, stored as combinatorial graphs, is revisited and the analysis tools used are substantially improved. A novel tool called the play profile summarizes the distribution of behaviors over multiple replicates of the basic evolutionary algorithm and through multiple evolutionary epochs. The impact of changing the number of states used to encode agents is also examined. Changing the combinatorial graph on which the population resides is found to yield statistically significant differences in the play profiles. Changing the number of states in agents is also found to produce statistically significant differences in behavior. The use of multiple epochs in analysis of agent behavior demonstrates that the distribution of behaviors changes substantially over the course of evolution. The most common pattern is for agents to move toward the cooperative state over time, but this pattern is not universal. Another clear trend is that agents implemented with more states are less cooperative.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124585311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Behavior evolution in Tomb Raider Underworld
Pub Date: 2013-10-17 · DOI: 10.1109/CIG.2013.6633637
R. Sifa, Anders Drachen, C. Bauckhage, Christian Thurau, Alessandro Canossa
Behavioral datasets from major commercial “AAA”-grade game titles generally feature high dimensionality, large sample sizes ranging from tens of thousands to millions, time scales stretching into several years of real time, and evolving user populations. This makes dimensionality-reduction methods such as clustering and classification useful for discovering and defining patterns in player behavior. The goal from the perspective of game development is the formation of behavioral profiles that provide actionable insights into how a game is being played and enable the detection of, e.g., problems hindering player progression. Due to its unsupervised nature, clustering is notably useful in cases where no predefined classes exist. Previous research in this area has successfully applied clustering algorithms to behavioral datasets from different games. In this paper, the focus is on examining the behavior of 62,000 players from the major commercial game Tomb Raider: Underworld, as it unfolds from the beginning of the game and throughout its seven main levels. Where previous research has focused on aggregated behavioral datasets spanning an entire game, or conversely a limited slice or snapshot viewed in isolation, this is, to the best knowledge of the authors, the first study to examine the application of clustering methods to player behavior as it evolves throughout an entire game.
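As a generic illustration of clustering player behavior (the paper's actual features, preprocessing, and clustering method differ), here is a small sketch using scikit-learn and hypothetical per-level features.

```python
# Illustrative sketch only: cluster players by per-level behavioral features
# using k-means. The features, feature count, and cluster count are
# hypothetical; the paper's own pipeline differs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical feature matrix: one row per player, columns such as
# completion time, deaths, and help-on-demand requests for each level.
X = rng.gamma(shape=2.0, scale=1.0, size=(62000, 21))

X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)

# Inspect the mean feature profile of each behavioral cluster.
for k in range(4):
    print(k, X[labels == k].mean(axis=0).round(2)[:5])
```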
{"title":"Behavior evolution in Tomb Raider Underworld","authors":"R. Sifa, Anders Drachen, C. Bauckhage, Christian Thurau, Alessandro Canossa","doi":"10.1109/CIG.2013.6633637","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633637","url":null,"abstract":"Behavioral datasets from major commercial game titles of the “AAA” grade generally feature high dimensionality and large sample sizes, from tens of thousands to millions, covering time scales stretching into several years of real-time, and evolving user populations. This makes dimensionality-reduction methods such as clustering and classification useful for discovering and defining patterns in player behavior. The goal from the perspective of game development is the formation of behavioral profiles that provide actionable insights into how a game is being played, and enables the detection of e.g. problems hindering player progression. Due to its unsupervised nature, clustering is notably useful in cases where no prior-defined classes exist. Previous research in this area has successfully applied clustering algorithms to behavioral datasets from different games. In this paper, the focus is on examining the behavior of 62,000 players from the major commercial game Tomb Raider: Underworld, as it unfolds from the beginning of the game and throughout the seven main levels of the game. Where previous research has focused on aggregated behavioral datasets spanning an entire game, or conversely a limited slice or snapshot viewed in isolation, this is to the best knowledge of the authors the first study to examine the application of clustering methods to player behavior as it evolves throughout an entire game.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130123457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Turing Test Track of the 2012 Mario AI Championship: Entries and evaluation
Pub Date: 2013-10-17 · DOI: 10.1109/CIG.2013.6633634
Noor Shaker, J. Togelius, Georgios N. Yannakakis, Likith Poovanna, Vinay Sudha Ethiraj, S. Johansson, R. Reynolds, Leonard Kinnaird-Heether, T. Schumann, M. Gallagher
The Turing Test Track of the Mario AI Championship focused on developing human-like controllers for a clone of the popular game Super Mario Bros. Competitors participated by submitting AI agents that imitate human playing style. This paper presents the rules of the competition, the software used, the voting interface, the scoring procedure, the submitted controllers, and the results of the 2012 edition of the competition. We also discuss what can be learnt from this competition in terms of believability in platform games. The discussion is supported by a statistical analysis of behavioural similarities and differences among the agents, and between agents and humans. The paper is co-authored by the organizers of the competition (the first three authors) and the competitors.
{"title":"The turing test track of the 2012 Mario AI Championship: Entries and evaluation","authors":"Noor Shaker, J. Togelius, Georgios N. Yannakakis, Likith Poovanna, Vinay Sudha Ethiraj, S. Johansson, R. Reynolds, Leonard Kinnaird-Heether, T. Schumann, M. Gallagher","doi":"10.1109/CIG.2013.6633634","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633634","url":null,"abstract":"The Turing Test Track of the Mario AI Championship focused on developing human-like controllers for a clone of the popular game Super Mario Bros. Competitors participated by submitting AI agents that imitate human playing style. This paper presents the rules of the competition, the software used, the voting interface, the scoring procedure, the submitted controllers and the recent results of the competition for the year 2012. We also discuss what can be learnt from this competition in terms of believability in platform games. The discussion is supported by a statistical analysis of behavioural similarities and differences among the agents, and between agents and humans. The paper is co-authored by the organizers of the competition (the first three authors) and the competitors.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133786878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stacked calibration of off-policy policy evaluation for video game matchmaking
Pub Date: 2013-10-17 · DOI: 10.1109/CIG.2013.6633642
Eric Thibodeau-Laufer, Raul Chandias Ferrari, Li Yao, Olivier Delalleau, Yoshua Bengio
We consider an industrial-strength application of recommendation systems for video-game matchmaking in which off-policy policy evaluation is important but where standard approaches can hardly be applied. The objective of the policy is to sequentially form teams of players from those waiting to be matched, in such a way as to produce well-balanced matches. Unfortunately, the available training data comes from a policy that is not known perfectly and that is not stochastic, making it impossible to use methods based on importance weights. Furthermore, we observe that when the estimated reward function and the policy are obtained by training on the same off-policy dataset, policy evaluation using the estimated reward function is biased. We present a simple calibration procedure, similar to stacked regression, that removes most of this bias in the experiments we performed. Data collected during beta tests of Ghost Recon Online, a first-person shooter from Ubisoft, were used for the experiments.
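The following is a rough sketch of a calibration layer in the spirit of stacked regression, built from out-of-fold reward-model predictions; the features, models, and data are assumptions for illustration and do not reproduce the paper's exact procedure.

```python
# Generic sketch of reward-model calibration in the spirit of stacked
# regression: predictions for each data point come from a model trained on
# the other folds, and a simple linear map from predicted to observed reward
# is fitted on those out-of-fold predictions. The paper's exact procedure,
# features, and models differ.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 10))                                   # hypothetical match features
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000)    # hypothetical observed balance reward

# Out-of-fold predictions of the reward model (avoids scoring on training data).
oof_pred = np.zeros_like(y)
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = GradientBoostingRegressor().fit(X[train_idx], y[train_idx])
    oof_pred[test_idx] = model.predict(X[test_idx])

# Stacked (calibration) layer: map raw predictions to calibrated reward estimates.
calibrator = LinearRegression().fit(oof_pred.reshape(-1, 1), y)

def calibrated_reward(raw_prediction):
    """Apply the calibration layer to a single reward-model prediction."""
    return calibrator.predict(np.array([[raw_prediction]]))[0]
```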
{"title":"Stacked calibration of off-policy policy evaluation for video game matchmaking","authors":"Eric Thibodeau-Laufer, Raul Chandias Ferrari, Li Yao, Olivier Delalleau, Yoshua Bengio","doi":"10.1109/CIG.2013.6633642","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633642","url":null,"abstract":"We consider an industrial strength application of recommendation systems for video-game matchmaking in which off-policy policy evaluation is important but where standard approaches can hardly be applied. The objective of the policy is to sequentially form teams of players from those waiting to be matched, in such a way as to produce well-balanced matches. Unfortunately, the available training data comes from a policy that is not known perfectly and that is not stochastic, making it impossible to use methods based on importance weights. Furthermore, we observe that when the estimated reward function and the policy are obtained by training from the same off-policy dataset, the policy evaluation using the estimated reward function is biased. We present a simple calibration procedure that is similar to stacked regression and that removes most of the bias, in the experiments we performed. Data collected during beta tests of Ghost Recon Online, a first person shooter from Ubisoft, were used for the experiments.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123606783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploration and analysis of the evolution of strategies for Mancala variants
Pub Date: 2013-10-17 · DOI: 10.1109/CIG.2013.6633628
Colin Divilly, C. O'Riordan, Seamus Hill
This paper describes approaches to evolving strategies for Mancala variants. The results are compared, and the robustness of both the strategies and the heuristics across variants of Mancala is analysed. The aim of this research is to evaluate the performance of a collection of heuristics across a selection of Mancala games. The performance of the individual heuristics can be evaluated on games with different capture rules, different numbers of pits per row, and different numbers of seeds per pit at the start of the game.
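As one concrete, purely illustrative example of the kind of heuristic such an evaluation could cover, a simple store-and-side-count evaluation of a generic Mancala position is sketched below; the board encoding and weights are assumptions, not heuristics taken from the paper.

```python
# Illustrative only: one simple Mancala evaluation heuristic (not from the
# paper). The board is assumed to be, for each player, a list of pit counts
# plus a store; real variants differ in pit counts and capture rules.
def evaluate(board, player):
    """Score a position as store difference plus a small bonus for seeds
    still on the player's side; higher is better for `player`."""
    me, opp = board[player], board[1 - player]
    store_diff = me["store"] - opp["store"]
    side_diff = sum(me["pits"]) - sum(opp["pits"])
    return store_diff + 0.25 * side_diff

# Example: 6 pits per row, 4 seeds per pit at the start of a standard game.
board = [{"pits": [4] * 6, "store": 0}, {"pits": [4] * 6, "store": 0}]
print(evaluate(board, 0))  # symmetric start position scores 0.0
```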
{"title":"Exploration and analysis of the evolution of strategies for Mancala variants","authors":"Colin Divilly, C. O'Riordan, Seamus Hill","doi":"10.1109/CIG.2013.6633628","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633628","url":null,"abstract":"This paper describes approaches to evolving strategies for Mancala variants. The results are compared and the robustness of both the strategies and heuristics across variants of Mancala is analysed. The aim of this research is to evaluate the performance of a collection of heuristics across a selection of Mancala games. The performance of the individual heuristics can be evaluated on games with varying rules regarding capture rules, varying number of pits per row and for different seeds per pit at the start of the game.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126230409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Creativity and competitiveness in polyomino-developing game playing agents
Pub Date: 2013-10-17 · DOI: 10.1109/CIG.2013.6633647
D. Ashlock, J. Gilbert
This study proposes a new mathematical game called Polyomination, which involves the competitive placement of polyominoes to capture area. The game-playing agents used are able to encode both their strategy and the game pieces they will play with. Strategy is encoded in a finite-state representation called a binary decision automaton, which has access to a variety of pieces of information abstracted from the game state. Playing pieces are encoded by a developmental representation. An extensive parameter study is performed. The elite fraction used by the evolutionary algorithm that trains the agents is found to be relatively unimportant. The number of states in the automata and the maximum number of squares used to build polyominoes are found to have a significant impact on competitive ability. The polyomino playing pieces are found to evolve in a strategic manner, with pieces specializing in area occupation, area denial, and cleanup, in which small pieces fill in small remaining areas. This study serves as an initial study of Polyomination, intended as a springboard for the design of simpler, related games.
{"title":"Creativity and competitiveness in polyomino-developing game playing agents","authors":"D. Ashlock, J. Gilbert","doi":"10.1109/CIG.2013.6633647","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633647","url":null,"abstract":"This study proposes a new mathematical game called Polyomination which involves the competitive placement of polyominoes to capture area. The game playing agents used are able to encode both their strategy and the game pieces they will play with. Strategy is encoded in a finite state representation called a binary decision automata which has access to a variety of pieces of information abstracted from the game state. Playing pieces are encoded by a developmental representation. An extensive parameter study is performed. The elite-fraction used by the evolutionary algorithm that trains the agents is found to be relatively unimportant. The number of states in the automata and the maximum number of squares used to build polyominoes are found to have a significant impact on competitive ability. The polyomino playing pieces are found to evolve in a strategic manner with playing pieces specializing for area-occupation, area-denial, and cleanup in which small pieces can fill in small remaining areas. This study serves as an initial study of Polyomination, intended to serve as a springboard for the design of simpler, related games.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115898862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UCT for PCG
Pub Date: 2013-10-17 · DOI: 10.1109/CIG.2013.6633650
C. Browne
This paper describes initial experiments in the use of UCT-based algorithms for procedural content generation in creative game-like domains. UCT search offers potential benefits for this task, as its systematic method of node expansion constitutes an inherent form of exhaustive local search. A new variant called upper confidence bounds for graphs (UCG) is described, suitable for bitstring domains with reversible operations, such as those to which genetic algorithms are typically applied. We compare the performance of UCT-based methods with known search methods for two test domains, with encouraging results.
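For reference, the node-selection rule at the core of standard UCT is the UCB1 formula; the sketch below shows it in a generic form, with the exploration constant and tie-breaking as illustrative choices (the UCG variant described in the paper adapts this machinery to graph-structured, reversible bitstring domains).

```python
# Standard UCB1 selection rule used by UCT, shown generically. The
# exploration constant c and the tie-breaking policy are illustrative;
# parent visit count is approximated by the sum of child visits.
import math
import random

def ucb1_select(children, c=math.sqrt(2)):
    """Pick the child maximizing mean reward plus an exploration bonus.
    Each child is a dict with 'visits' and 'total_reward'; unvisited
    children are expanded first."""
    parent_visits = sum(ch["visits"] for ch in children)
    unvisited = [ch for ch in children if ch["visits"] == 0]
    if unvisited:
        return random.choice(unvisited)
    def ucb(ch):
        mean = ch["total_reward"] / ch["visits"]
        return mean + c * math.sqrt(math.log(parent_visits) / ch["visits"])
    return max(children, key=ucb)
```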
{"title":"UCT for PCG","authors":"C. Browne","doi":"10.1109/CIG.2013.6633650","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633650","url":null,"abstract":"This paper describes initial experiments in the use of UCT-based algorithms for procedural content generation in creative game-like domains. UCT search offers potential benefits for this task, as its systematic method of node expansion constitutes an inherent form of exhaustive local search. A new variant called upper confidence bounds for graphs (UCG) is described, suitable for bitstring domains with reversible operations, such as those to which genetic algorithms are typically applied. We compare the performance of UCT-based methods with known search methods for two test domains, with encouraging results.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122958464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Soft computing for content generation: Trading market in a basketball management video game
Pub Date: 2013-10-17 · DOI: 10.1109/CIG.2013.6633620
J. Sánchez, Ernestina Menasalvas Ruiz, S. Muelas, A. Latorre, Luis Peña, Sascha Ossowski
Although procedural and assisted content generation have attracted a lot of attention in both academic and industrial research in video games, there are few cases in the literature in which they have been applied to sport management games. The on-line variants of these games produce a lot of information concerning how the users interact with each other in the game. This contribution presents the application of soft computing techniques in the context of content generation for an on-line massive basketball management simulation game (in particular in the virtual trading market of the game). This application is developed in two different directions: (1) a machine learning model to analyze the appeal of the trading market contents (the virtual basketball players in the game), and (2) an evolutionary algorithm to assist users in the design of new contents (training of virtual basketball players).
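As a loose illustration of direction (2), a generic evolutionary loop that mutates a virtual player's attribute vector toward a desired profile is sketched below; the attributes, fitness function, and operators are hypothetical and not taken from the paper.

```python
# Generic evolutionary-algorithm sketch for tuning a virtual player's
# attribute vector toward a target profile. Attributes, fitness, and
# operators are hypothetical; the paper's assistant uses its own
# game-specific model of appeal and training constraints.
import random

ATTRIBUTES = ["shooting", "passing", "defense", "rebounding"]
TARGET = {"shooting": 80, "passing": 60, "defense": 70, "rebounding": 65}

def fitness(player):
    # Negative distance to the desired profile: higher is better.
    return -sum(abs(player[a] - TARGET[a]) for a in ATTRIBUTES)

def mutate(player, step=5):
    child = dict(player)
    attr = random.choice(ATTRIBUTES)
    child[attr] = min(99, max(1, child[attr] + random.randint(-step, step)))
    return child

def evolve(generations=200, pop_size=20):
    population = [{a: random.randint(40, 70) for a in ATTRIBUTES}
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]          # truncation selection
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

print(evolve())
```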
{"title":"Soft computing for content generation: Trading market in a basketball management video game","authors":"J. Sánchez, Ernestina Menasalvas Ruiz, S. Muelas, A. Latorre, Luis Peña, Sascha Ossowski","doi":"10.1109/CIG.2013.6633620","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633620","url":null,"abstract":"Although procedural and assisted content generation have attracted a lot of attention in both academic and industrial research in video games, there are few cases in the literature in which they have been applied to sport management games. The on-line variants of these games produce a lot of information concerning how the users interact with each other in the game. This contribution presents the application of soft computing techniques in the context of content generation for an on-line massive basketball management simulation game (in particular in the virtual trading market of the game). This application is developed in two different directions: (1) a machine learning model to analyze the appeal of the trading market contents (the virtual basketball players in the game), and (2) an evolutionary algorithm to assist users in the design of new contents (training of virtual basketball players).","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133672386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online and offline learning in multi-objective Monte Carlo Tree Search
Pub Date: 2013-10-17 · DOI: 10.1109/CIG.2013.6633621
Diego Perez Liebana, Spyridon Samothrakis, S. Lucas
Multi-objective optimization has traditionally been applied to manufacturing, engineering, or finance, with little impact on games research. However, its application to this field of study may provide interesting results, especially for games that are complex or long enough that long-term planning is not trivial and/or a good level of play depends on balancing several strategies within the game. This paper proposes a new multi-objective algorithm based on Monte Carlo Tree Search (MCTS). The algorithm is tested in two different scenarios and its learning capabilities are measured in an online and offline fashion. Additionally, it is compared with a state-of-the-art multi-objective evolutionary algorithm (NSGA-II) and with a previously published multi-objective MCTS algorithm. The results show that our proposed algorithm provides results similar to or better than those of the other techniques.
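Both multi-objective MCTS and NSGA-II rest on the notion of Pareto dominance between reward vectors; a minimal sketch, assuming maximization of all objectives, is given below.

```python
# Pareto dominance test underlying both multi-objective MCTS and NSGA-II,
# shown as a generic illustration (maximization assumed for all objectives;
# the paper's reward definitions are its own).
def dominates(a, b):
    """Return True if reward vector `a` Pareto-dominates `b`:
    at least as good in every objective and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated reward vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

print(pareto_front([(3, 1), (2, 2), (1, 3), (2, 1)]))  # (2, 1) is dominated
```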
{"title":"Online and offline learning in multi-objective Monte Carlo Tree Search","authors":"Diego Perez Liebana, Spyridon Samothrakis, S. Lucas","doi":"10.1109/CIG.2013.6633621","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633621","url":null,"abstract":"Multi-Objective optimization has traditionally been applied to manufacturing, engineering or finance, with little impact in games research. However, its application to this field of study may provide interesting results, especially for games that are complex or long enough that long-term planning is not trivial and/or a good level of play depends on balancing several strategies within the game. This paper proposes a new Multi-Objective algorithm based on Monte Carlo Tree Search (MCTS). The algorithm is tested in two different scenarios and its learning capabilities are measured in an online and offline fashion. Additionally, it is compared with a state of the art multi-objective evolutionary algorithm (NSGA-II) and with a previously published Multi-Objective MCTS algorithm. The results show that our proposed algorithm provides similar or better results than other techniques.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125226127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}