SYNTHETIC PHENOMENOLOGY AND HIGH-DIMENSIONAL BUFFER HYPOTHESIS
A. Chella, S. Gaglio
Pub Date: 2012-12-01 | DOI: 10.1142/S1793843012400203
Synthetic phenomenology typically focuses on the analysis of simplified perceptual signals with small or reduced dimensionality. Instead, synthetic phenomenology should be analyzed in terms of perceptual signals with huge dimensionality. Effective phenomenal processes actually exploit the entire richness of the dynamic perceptual signals coming from the retina. The hypothesis of a high-dimensional buffer at the basis of the perception loop that generates the robot's synthetic phenomenology is analyzed in terms of a cognitive architecture for robot vision that the authors have developed over the years. Despite the obvious computational problems when dealing with high-dimensional vectors, spaces with increased dimensionality could be a boon when searching for global minima. A simplified setup based on static scene analysis and a more complex setup based on the CiceRobot robot are discussed.
INNER SPEECH GENERATION IN A VIDEO GAME NON-PLAYER CHARACTER: FROM EXPLANATION TO SELF?
Raúl Arrabales
Pub Date: 2012-12-01 | DOI: 10.1142/S1793843012400215
The use of human language is a hallmark of human consciousness, even when it is not used publicly. Inner speech is the way humans consciously communicate with themselves, and arguably a key factor contributing to the formation of more self-aware selves. From the perspective of cognitive science and artificial cognitive architectures, inner speech can also be seen as a meta-management system that modulates some cognitive processes of the subject. In this paper, we describe a preliminary version of a computational model of inner speech generation based on the cognitive architecture CERA-CRANIUM. This inner speech generation method is illustrated using a video game non-player character as the subject of the first-person narratives to be produced. We also use this model of inner speech generation to discuss the possibilities of using such a first-person narrative stream as a meta-control input to the artificial cognitive architecture. We argue that this verbal input might be used as an integrated self-explanation of the agent in the world and thus contribute to the formation of self.
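The meta-control loop described in the abstract above can be caricatured in a few lines: an agent verbalizes its own state as a first-person narrative, and that narrative re-enters the agent's input stream, where it can steer behavior. This is a minimal sketch of the loop only; the class, state variable, and rules are invented for illustration and have no relation to the internals of CERA-CRANIUM.

```python
# Toy inner-speech loop (illustrative assumption, not the CERA-CRANIUM
# model): the NPC narrates its state, and the narrative feeds back in
# as an ordinary input that modulates the chosen action.
class NPC:
    def __init__(self):
        self.health = 100
        self.inputs = []   # perception stream, including own narratives

    def inner_speech(self):
        # First-person narrative about the agent's current state.
        if self.health < 30:
            return "I am badly hurt; I should retreat."
        return "I am fine; I can keep exploring."

    def step(self, damage=0):
        self.health -= damage
        narrative = self.inner_speech()
        # Meta-control: the narrative re-enters the perception stream.
        self.inputs.append(narrative)
        return "retreat" if "retreat" in narrative else "explore"
```

In this toy, the verbal report is the only channel through which the damage influences action selection, which is the sense in which the narrative acts as a meta-control input rather than a mere log.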
CAN FUNCTIONAL AND PHENOMENAL CONSCIOUSNESS BE DIVIDED?
J. Taylor
Pub Date: 2012-12-01 | DOI: 10.1142/S1793843012400264
We answer the question raised by the title by developing a neural architecture for the attention control system in animals in a hierarchical manner, following what we conjecture is an evolutionary path. The resulting evolutionary model (based on CODAM at the highest level) and answer to the question allow us to consider both different forms of consciousness and how machine consciousness could itself possess a variety of forms.
WORLD-RELATED INTEGRATED INFORMATION: ENACTIVIST AND PHENOMENAL PERSPECTIVES
Michael Beaton, I. Aleksander
Pub Date: 2012-12-01 | DOI: 10.1142/S1793843012400252
Information integration is a measure, developed by Tononi and co-researchers, of the capacity for dynamic neural networks to be in informational states which are unique and indivisible. This is supposed to correspond to the intuitive "feel" of a mental state: highly discriminative and yet fundamentally integrated. Recent versions of the theory include a definition of qualia, which measures the geometric contribution of individual neural structures to the overall measure. In this paper, we examine these approaches from two philosophical perspectives, enactivism (externalism) and phenomenal states (internalism). We suggest that a promising enactivist response is to agree with Tononi that consciousness consists of integrated information, but to argue for a radical rethink about the nature of information itself. We argue that information is most naturally viewed as a three-place relation, involving a Bayesian-rational subject, the subject's evidence and the world (as brought under the subject's evolving understanding). To have (or gain) information is to behave in a Bayesian-rational way in response to evidence. Information only ever belongs to whole, rationally behaving agents; information is only "in the brain" from the point of view of a theorist seeking to explain behavior. Rational behavior (hence information) will depend on brain, body and world — embodiment matters. Then, from a phenomenal states perspective, we examine the way that internal states of a network can be not only unique and indivisible but also reflect this coherence as it might exist in an external world. Extending previously published material, we propose that two systems could both score well on traditional integration measures even though one had meaningful world-representing states and the other did not. A model which involves iconic learning and depiction is discussed and tested in order to show how internal states can be about the world and how measures of integration influence this process. This retains some of the structure of Tononi's integration measurements but operates within sets of states of the world as filtered by receptors and repertoires of internal states achieved by depiction. This suggests a formalization of qualia which does not ignore world-reflecting content and relates to internal states that aid the conscious organism's ability to act appropriately in the world of which it is conscious. Thus, a common theme emerges: Tononi has good intuition about the necessary nature of consciousness, but his is not the only theory of experience able to do justice to these key intuitions. Tononi's theory has an apparent weakness, in that it treats conscious "information" as something intrinsically meaningless (i.e., without any necessary connection to the world), whereas both the approaches canvassed here naturally relate experienced information to the world.
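The claim that "to have (or gain) information is to behave in a Bayesian-rational way in response to evidence" can be given a toy rendering: a subject holds a prior over world states, updates it on evidence, and the information the evidence carried for that subject is scored as the divergence from prior to posterior. All names and numbers below are illustrative assumptions, not the authors' formalism; the point is only that the information is defined relative to a subject, its evidence, and the world states it distinguishes.

```python
# Information as a three-place relation (toy sketch): a Bayesian subject,
# its evidence, and the world states it brings under its understanding.
# Information gained is the KL divergence between posterior and prior.
import math

def bayes_update(prior, likelihood):
    """Posterior over world states after one piece of evidence.

    prior: {state: P(state)}; likelihood: {state: P(evidence | state)}.
    """
    unnorm = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

def information_gain(posterior, prior):
    """Bits the evidence carried for this subject (KL divergence)."""
    return sum(p * math.log2(p / prior[s])
               for s, p in posterior.items() if p > 0)

# Two world states; the subject starts maximally uncertain.
prior = {"cat_present": 0.5, "cat_absent": 0.5}
# Purring is far more likely to be heard if a cat is present.
likelihood = {"cat_present": 0.9, "cat_absent": 0.1}

posterior = bayes_update(prior, likelihood)
gain = information_gain(posterior, prior)
```

On this picture the same evidence carries different amounts of information for subjects with different priors or different repertoires of world states, which is the sense in which information belongs to the whole agent rather than sitting "in the brain".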
REMEMBERING JOHN TAYLOR (1931–2012)
A. Chella
Pub Date: 2012-12-01 | DOI: 10.1142/S1793843012400306
SUPER-INTELLIGENCE AND (SUPER-)CONSCIOUSNESS
S. Torrance
Pub Date: 2012-12-01 | DOI: 10.1142/S1793843012400288
In this paper, the notion of super-intelligence (or "AI++", as Chalmers has termed it) is considered in the context of machine consciousness (MC) research. Suppose AI++ were to come about: would real MC have then also arrived, "for free"? (I call this the "drop-out question".) Does the idea tempt you, as an MC investigator? What are the various positions that might be adopted on the issue of whether an AI++ would necessarily (or with strong likelihood) be a conscious AI++? Would a conscious super-intelligence also be a super-consciousness? (Indeed, what meaning might be attached to the notion of "super-consciousness"?) What ethical and social consequences might be drawn from the idea of conscious super-AIs, or from that of artificial super-consciousness? And what implications does this issue have for technical progress on MC in a pre-AI++ world? These and other questions are considered.
THE COMPUTATIONAL STANCE IS UNFIT FOR CONSCIOUSNESS
R. Manzotti
Pub Date: 2012-12-01 | DOI: 10.1142/S1793843012400239
It is customary to assume that agents receive information from the environment through their sensors. It is equally customary to assume that an agent is capable of information processing and thus of computation. These two assumptions may be misleading, particularly because so much basic theoretical work relies on the concepts of information and computation. By analogy with Dennett's intentional stance, I suggest that much of the discussion in cognitive science, neuroscience and artificial intelligence is biased by a naive notion of computation resulting from the adoption of a computational stance. As a case study, I focus on David Chalmers' view of computation in cognitive agents. In particular, I challenge the thesis of computational sufficiency. I argue that computation is no more than the ascription of an abstract model to a series of states and dynamic transitions in a physical agent. As a result, computation is akin to centers of mass and other epistemic shortcuts, which are insufficient to serve as the underpinnings of a baffling-yet-physical phenomenon like consciousness.
EMPIRICALLY GROUNDED CLAIMS ABOUT CONSCIOUSNESS IN COMPUTERS
D. Gamez
Pub Date: 2012-12-01 | DOI: 10.1142/S1793843012400240
Research is starting to identify correlations between consciousness and some of the spatiotemporal patterns in the physical brain. For theoretical and practical reasons, the results of experiments on the correlates of consciousness have ambiguous interpretations. At any point in time, a number of hypotheses about the correlates of consciousness in the brain co-exist, all of which are compatible with the current experimental results. This paper argues that consciousness should be attributed to any system that exhibits spatiotemporal physical patterns matching those hypotheses about the correlates of consciousness that are compatible with the current experimental results. Some computers running some programs should be attributed consciousness, because they produce spatiotemporal patterns in the physical world that match those that are potentially linked with consciousness in the human brain.
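The attribution rule the abstract above argues for can be rendered as a toy decision procedure: hypotheses about the correlates of consciousness are kept only while they remain compatible with the experimental record, and a system is credited with consciousness when its observed patterns realize every pattern some surviving hypothesis requires. This is my illustration, not the paper's formalism; all labels are invented.

```python
# Toy attribution rule (illustrative sketch, not Gamez's formalism).
def live_hypotheses(hypotheses, experimental_results):
    # A hypothesis survives if every result it depends on has been observed.
    return [h for h in hypotheses
            if h["required_results"] <= experimental_results]

def attribute_consciousness(system_patterns, hypotheses, experimental_results):
    # Attribute consciousness if the system exhibits all the patterns
    # demanded by at least one still-live hypothesis.
    return any(h["patterns"] <= system_patterns
               for h in live_hypotheses(hypotheses, experimental_results))

# One invented hypothesis tying consciousness to two pattern labels.
H = [{"required_results": {"gamma_correlates_with_report"},
      "patterns": {"gamma_synchrony", "reentrant_loops"}}]
results = {"gamma_correlates_with_report"}
robot = {"gamma_synchrony", "reentrant_loops", "slow_waves"}
```

The ambiguity the paper stresses shows up directly here: with several live hypotheses, different systems can each be credited via different hypotheses, and attributions change as `experimental_results` grows and prunes the hypothesis set.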
Special Issue on Machine Consciousness: Self, Integration and Explanation (Selected Papers from the 2011 AISB Workshop): Guest Editors' Introduction
Ron Chrisley, Robert W. Clowes
Pub Date: 2012-12-01 | DOI: 10.1142/S1793843012020027
Self-System in a Model of Cognition
U. Ramamurthy, S. Franklin, Pulin Agrawal
Pub Date: 2012-12-01 | DOI: 10.1142/S1793843012400185
Philosophers, psychologists and neuroscientists have proposed various forms of a "self" in humans and animals. All of these selves seem to have a basis in some form of consciousness. The Global Workspace Theory (GWT) [Baars, 1988, 2003] suggests a mostly unconscious, many-layered self-system. In this paper, we consider several issues that arise from attempts to include a self-system in a software agent/cognitive robot. We explore these issues in the context of the LIDA model [Baars and Franklin, 2009; Ramamurthy et al., 2006], which implements the Global Workspace Theory.
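Global Workspace Theory, which the LIDA model implements, is often summarized as specialist processes competing for access to a workspace whose winning content is then broadcast to all specialists. The sketch below is only that textbook summary in code; the salience heuristic and the names are invented and bear no relation to LIDA's actual modules.

```python
# Minimal global-workspace sketch (illustrative toy, not the LIDA
# implementation): specialists bid with an activation level; the
# winner's content is broadcast back to every specialist.
class Specialist:
    def __init__(self, name, keywords):
        self.name = name
        self.keywords = keywords   # toy tuning: what this specialist reacts to
        self.broadcasts = []       # contents received from the workspace

    def activation(self, stimulus):
        # Toy salience: count of this specialist's keywords in the stimulus.
        return sum(1 for k in self.keywords if k in stimulus)

def workspace_cycle(specialists, stimulus):
    # Competition: the most activated specialist wins workspace access.
    winner = max(specialists, key=lambda s: s.activation(stimulus))
    content = (winner.name, stimulus)
    # Broadcast: the winning content reaches every specialist.
    for s in specialists:
        s.broadcasts.append(content)
    return content

agents = [Specialist("vision", ["red", "light"]),
          Specialist("audition", ["loud", "siren"])]
content = workspace_cycle(agents, "a loud siren")
```

In GWT terms, the broadcast is what makes the winning content globally available; a self-system of the kind the paper discusses would be a further structure built from the history of such broadcasts, which this toy does not attempt.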