INFORMATIONAL MINDS: FROM ARISTOTLE TO LAPTOPS (BOOK EXTRACT)
I. Aleksander, H. Morton
Pub Date: 2011-12-01 | DOI: 10.1142/S1793843011000844
In a forthcoming book, "Aristotle's Laptop: The Discovery of Our Informational Mind" [Aleksander and Morton, 2012], we explore the idea that the long struggle to provide a scientific analysis of the conscious mind received a major gift in the form of Shannon's formalization of information and the logic of digital systems. We argue, however, that progress is made not through the conventional route of algorithmic information processing and artificial intelligence, but through an understanding of how information and logic work in networks of neurons to support what we call the conscious mind. We approach the discourse with a close eye on the history of the discoveries and on what drove the inventors. This paper is the introductory chapter, which sets out the path followed by our approach to explaining the "informational mind."
THE ASSUMPTIONS ON KNOWLEDGE AND RESOURCES IN MODELS OF RATIONALITY
Pei Wang
Pub Date: 2011-06-01 | DOI: 10.1142/S1793843011000686
Intelligence can be understood as a form of rationality, in the sense that an intelligent system does its best when its knowledge and resources are insufficient with respect to the problems to be solved. Traditional models of rationality typically assume some form of sufficiency of knowledge and resources, and so cannot solve many theoretical and practical problems in Artificial Intelligence (AI). New models based on the Assumption of Insufficient Knowledge and Resources (AIKR) cannot be obtained by minor revisions or extensions of the traditional models; they have to be established from the ground up, according to the restrictions and freedoms that AIKR provides. The practice of NARS, an AI project, shows that such new models are feasible and promising as a new theoretical foundation for the study of rationality, intelligence, consciousness, and mind.
TOWARDS MACHINE CONSCIOUSNESS: GROUNDING ABSTRACT MODELS AS π-PROCESSES
P. Bonzon
Pub Date: 2011-06-01 | DOI: 10.1142/S1793843011000595
We present a two-level model of concurrent communicating systems (CCS) to serve as a basis for machine consciousness. A language implementing threads within logic programming is first introduced. This high-level framework allows for the definition of abstract processes that can be executed on a virtual machine. We then look for a possible grounding of these processes in the brain. Toward this end, we map abstract definitions (including logical expressions representing compiled knowledge) into a variant of the π-calculus. We illustrate the approach through a series of examples extending from purely reactive behavior to patterns of consciousness.
ON THE STATUS OF COMPUTATIONALISM AS A LAW OF NATURE
Colin G. Hales
Pub Date: 2011-06-01 | DOI: 10.1142/S1793843011000613
Scientific behavior is used as a benchmark to examine the truth status of computationalism (COMP) as a law of nature. A COMP-based artificial scientist is examined from three simple perspectives to see whether they shed light on the truth or falsehood of COMP through its ability, or otherwise, to deliver authentic original science on the a priori unknown, as humans do. The first perspective (A) looks at the handling of ignorance and supports a claim that COMP is "trivially true" or "pragmatically false," in the sense that you can simulate a scientist if you already know everything, a state that renders the simulation possible but pointless. The second scenario (B) is more conclusive and unusual in that it reveals that the COMP scientist can never propose or debate that COMP is a law of nature. This marked difference between the human and the artificial scientist in this single, very specific circumstance means that COMP cannot be true as a general claim. The third scenario (C) examines the artificial scientist's ability to do science on itself/humans to uncover the "law of nature" that results in itself. This scenario reveals that a successful test for scientific behavior by a COMP-based artificial scientist would support a claim that COMP is true. Such a test is quite practical and can be applied to an artificial scientist based on any design principle, not merely COMP. Scenario (C) also reveals a practical example of the COMP scientist's inability to handle informal systems (in the form of liars), which further undermines COMP. Overall, the result is that COMP is false, with certainty in one very specific, critical place. This lends support to the claims that (i) artificial general intelligence will not succeed based on COMP principles, and (ii) computationally enacted abstract models of human cognition will never create a mind.
Problem awareness for skilled humanoid robots
F. Mastrogiovanni, Antonello Scalmato, A. Sgorbissa, R. Zaccaria
Pub Date: 2011-06-01 | DOI: 10.1142/S1793843011000625
This paper describes research aimed at designing realistic reasoning techniques for humanoid robots equipped with advanced skills. Robots operating in real-world environments are expected to exhibit very complex behaviors, such as manipulating everyday objects, moving in crowded environments, and interacting with people both socially and physically. Such capabilities, yet to be achieved, pose the problem of reasoning over hundreds or even thousands of different objects, places, and possible actions, each one relevant to the robot's goals or motivations. This article proposes a functional representation of everyday objects, places, and actions, described in terms of such abstractions as affordances and capabilities. The main contribution is twofold: (i) affordances and capabilities are represented as neural maps grounded in proper metric spaces; (ii) the reasoning process is decomposed into two phases, namely problem awareness (the focus of this work) and action selection. Experiments in simulation show that large-scale reasoning problems can be easily managed in the proposed framework.
CONSCIOUSNESS FOR THE OUROBOROS MODEL
Knud Thomsen
Pub Date: 2011-06-01 | DOI: 10.1142/S1793843011000662
The Ouroboros Model features a biologically inspired cognitive architecture. At its core lies a self-referential recursive process with alternating phases of data acquisition and evaluation. Memory entries are organized in schemata. Activation of part of a schema at a given time biases the whole structure, and in particular its missing features, thus triggering expectations. An iterative recursive monitor process termed "consumption analysis" then checks how well such expectations fit with successive activations. Mismatches between anticipations based on previous experience and actual current data are highlighted and used to control the allocation of attention. In case no directly fitting filler for an open slot is found, activation spreads more widely to include data relating to the actor, and Higher-Order Personality Activation (HOPA) ensues. It is briefly outlined how the Ouroboros Model produces many diverse characteristics and thus addresses established criteria for consciousness. Coarse-grained relationships to selected previous conceptualizations of consciousness, and a sketch of how the Ouroboros Model could shed light on current research themes in artificial general intelligence and consciousness, conclude the paper.
CONCEPTUAL SPACES AND CONSCIOUSNESS: INTEGRATING COGNITIVE AND AFFECTIVE PROCESSES
Alfredo Pereira Junior, Leonardo Ferreira Almada
Pub Date: 2011-06-01 | DOI: 10.1142/S1793843011000649
In the book "Conceptual Spaces: The Geometry of Thought" [2000], Peter Gärdenfors proposes a new framework for cognitive science. Complementary to symbolic and subsymbolic (connectionist) descriptions, conceptual spaces are semantic structures, constructed from empirical data, that represent the universe of mental states. We argue that Gärdenfors' modeling can be used in consciousness research to describe the phenomenal conscious world, its elements, and their intrinsic relations. The conceptual space approach affords the construction of a universal state space of human consciousness, in which all possible kinds of human conscious states could be mapped. Starting from this approach, we discuss the inclusion of feelings and emotions in conceptual spaces, and their relation to perceptual and cognitive states. The current debate on the integration of affect/emotion and perception/cognition allows three descriptive alternatives: emotion resulting from basic cognition; cognition resulting from basic emotion; and both as relatively independent functions integrated by brain mechanisms. Finding a solution to this issue is an important step in any attempt at successfully modeling natural or artificial consciousness. After a brief review of proposals in this area, we summarize the essentials of a new model of consciousness based on neuro-astroglial interactions.
Hegelian phenomenology and robotics
D. Borrett, David Shih, M. Tomko, Sarah Borrett, H. Kwan
Pub Date: 2011-06-01 | DOI: 10.1142/S1793843011000698
A formalism is developed that treats a robot as a subject that can interpret its own experience, rather than as an object that is interpreted within our experience. A regulative definition of meaningful experience in robots is proposed, in which the present sensible experience is considered meaningful to the agent, as the subject of the experience, if it can be related to the agent's temporal horizons. This definition is validated by demonstrating that such an experience in evolutionary autonomous agents is embodied, contextual, and normative, as required for the maintenance of phenomenological accuracy. With this formalism, it is shown how a dialectic similar to that described in Hegelian phenomenology can emerge in robotic experience, and why the presence of such a dialectic can serve as a constraint on the further development of cognitive agents.
Attitude Change Induced by Different Appearances of Interaction Agents
S. Nishio, H. Ishiguro
Pub Date: 2011-06-01 | DOI: 10.1007/978-981-10-8702-8_15
HYPERSET MODELS OF SELF, WILL AND REFLECTIVE CONSCIOUSNESS
B. Goertzel
Pub Date: 2011-06-01 | DOI: 10.1142/S1793843011000601
A novel theory of reflective consciousness, will, and self is presented, based on modeling each of these entities using self-referential mathematical structures called hypersets. Pattern theory is used to argue that these exotic mathematical structures may meaningfully be considered parts of the minds of physical systems, even finite computational systems. The hyperset models presented are hypothesized to occur as patterns within the "moving bubble of attention" of the human brain and of any roughly human-mind-like AI system. These ideas appear to be compatible with both panpsychist and materialist views of consciousness, and probably with other views as well. Their relationship to the CogPrime AI design and its implementation in the OpenCog software framework is elucidated in detail.