
Latest publications in the International Journal of Machine Consciousness

INFORMATIONAL MINDS: FROM ARISTOTLE TO LAPTOPS (BOOK EXTRACT)
Pub Date : 2011-12-01 DOI: 10.1142/S1793843011000844
I. Aleksander, H. Morton
In a forthcoming book, "Aristotle's Laptop: The Discovery of Our Informational Mind" [Aleksander and Morton, 2012] we explore the idea that the long struggle for providing a scientific analysis of a conscious mind received a major gift in the guise of Shannon's formalization of information and the logic of digital systems. We argue, however, that progress is made not through the conventional route of algorithmic information processing and artificial intelligence, but through an understanding of how information and logic work in networks of neurons in order to support what we call the conscious mind. We approach the discourse with a close eye on the history of discoveries and what drove the inventors. This paper is the introductory chapter which sets out the path followed by our approach to explaining the "informational mind."
Citations: 2
THE ASSUMPTIONS ON KNOWLEDGE AND RESOURCES IN MODELS OF RATIONALITY
Pub Date : 2011-06-01 DOI: 10.1142/S1793843011000686
Pei Wang
Intelligence can be understood as a form of rationality, in the sense that an intelligent system does its best when its knowledge and resources are insufficient with respect to the problems to be solved. The traditional models of rationality typically assume some form of sufficiency of knowledge and resources, so cannot solve many theoretical and practical problems in Artificial Intelligence (AI). New models based on the Assumption of Insufficient Knowledge and Resources (AIKR) cannot be obtained by minor revisions or extensions of the traditional models, and have to be established fully according to the restrictions and freedoms provided by AIKR. The practice of NARS, an AI project, shows that such new models are feasible and promising in providing a new theoretical foundation for the study of rationality, intelligence, consciousness, and mind.
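The core of AIKR, doing one's best under insufficient knowledge and resources, can be illustrated with a minimal sketch. This is hypothetical code, not NARS itself: the agent integrates whatever evidence it can within a fixed time budget and answers with its current best estimate, rather than waiting for complete knowledge.

```python
import time

def best_effort_answer(evidence, budget_s=0.01):
    """Return the best belief estimate computable within the time budget."""
    belief = 0.5          # prior reflecting total ignorance
    weight = 1.0          # accumulated evidence weight
    deadline = time.monotonic() + budget_s
    for value, w in evidence:              # the evidence stream may be unbounded
        if time.monotonic() > deadline:
            break                          # resources exhausted: answer anyway
        belief = (belief * weight + value * w) / (weight + w)
        weight += w
    return belief                          # best estimate given what was seen

# Two unit-weight positive observations move the estimate from 0.5 to 5/6.
print(best_effort_answer([(1.0, 1.0), (1.0, 1.0)]))
```

The point of the sketch is the anytime behavior: the answer degrades gracefully with less time or evidence instead of failing, which is the kind of freedom and restriction AIKR imposes.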
Citations: 25
TOWARDS MACHINE CONSCIOUSNESS: GROUNDING ABSTRACT MODELS AS π-PROCESSES
Pub Date : 2011-06-01 DOI: 10.1142/S1793843011000595
P. Bonzon
We present a two-level model of concurrent communicating systems (CCS) to serve as a basis for machine consciousness. A language implementing threads within logic programming is first introduced. This high-level framework allows for the definition of abstract processes that can be executed on a virtual machine. We then look for a possible grounding of these processes into the brain. Towards this end, we map abstract definitions (including logical expressions representing compiled knowledge) into a variant of the π-calculus. We illustrate this approach through a series of examples extending from a purely reactive behavior to patterns of consciousness.
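As a loose illustration of the idea (hypothetical Python, standing in for the paper's logic-programming threads and π-calculus mapping), two concurrent processes can communicate over a named channel, with a send and a receive synchronizing a purely reactive behavior:

```python
import queue
import threading

def sensor(chan):
    chan.put("stimulus")              # send, in pi-calculus spirit: chan<stimulus>

def reactor(chan, log):
    msg = chan.get()                  # receive: chan(x), blocks until a message arrives
    log.append("react:" + msg)        # purely reactive behavior

chan, log = queue.Queue(), []         # queue.Queue stands in for a channel
t1 = threading.Thread(target=sensor, args=(chan,))
t2 = threading.Thread(target=reactor, args=(chan, log))
t2.start(); t1.start()                # order is irrelevant: the receive blocks
t1.join(); t2.join()
print(log)                            # ['react:stimulus']
```

Here Python threads play the role of the virtual machine's threads; the paper's actual mapping into a π-calculus variant is, of course, a formal construction rather than a library.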
Citations: 3
ON THE STATUS OF COMPUTATIONALISM AS A LAW OF NATURE
Pub Date : 2011-06-01 DOI: 10.1142/S1793843011000613
Colin G. Hales
Scientific behavior is used as a benchmark to examine the truth status of computationalism (COMP) as a law of nature. A COMP-based artificial scientist is examined from three simple perspectives to see if they shed light on the truth or falsehood of COMP through its ability or otherwise, to deliver authentic original science on the a priori unknown like humans do. The first perspective (A) looks at the handling of ignorance and supports a claim that COMP is "trivially true" or "pragmatically false" in the sense that you can simulate a scientist if you already know everything, which is a state that renders the simulation possible but pointless. The second scenario (B) is more conclusive and unusual in that it reveals that the COMP scientist can never propose/debate that COMP is a law of nature. This marked difference between the human and the artificial scientist in this single, very specific circumstance, means that COMP cannot be true as a general claim. The third scenario (C) examines the artificial scientist's ability to do science on itself/humans to uncover the "law of nature" which results in itself. This scenario reveals that a successful test for scientific behavior by a COMP-based artificial scientist supports a claim that COMP is true. Such a test is quite practical and can be applied to an artificial scientist based on any design principle, not merely COMP. Scenario (C) also reveals a practical example of the COMP scientist's inability to handle informal systems (in the form of liars), which further undermines COMP. Overall, the result is that COMP is false, with certainty in one very specific, critical place. This lends support to the claims (i) that artificial general intelligence will not succeed based on COMP principles, and (ii) computationally enacted abstract models of human cognition will never create a mind.
Citations: 3
Problem awareness for skilled humanoid robots
Pub Date : 2011-06-01 DOI: 10.1142/S1793843011000625
F. Mastrogiovanni, Antonello Scalmato, A. Sgorbissa, R. Zaccaria
This paper describes research work aimed at designing realistic reasoning techniques for humanoid robots provided with advanced skills. Robots operating in real-world environments are expected to exhibit very complex behaviors, such as manipulating everyday objects, moving in crowded environments or interacting with people, both socially and physically. Such — yet to be achieved — capabilities pose the problem of being able to reason upon hundreds or even thousands of different objects, places and possible actions to carry out, each one relevant for achieving robot goals or motivations. This article proposes a functional representation of everyday objects, places and actions described in terms of such abstractions as affordances and capabilities. The main contribution is twofold: (i) affordances and capabilities are represented as neural maps grounded in proper metric spaces; (ii) the reasoning process is decomposed into two phases, namely problem awareness (which is the focus of this work) and action selection. Experiments in simulation show that large-scale reasoning problems can be easily managed in the proposed framework.
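The two-phase decomposition can be caricatured in a few lines (hypothetical names and toy data, not the authors' system): problem awareness filters the large set of known objects down to those whose affordances match the goal, and action selection then picks among the survivors.

```python
affordances = {                 # object -> affordances it offers (toy data)
    "cup": {"grasp", "fill"},
    "door": {"push", "pull"},
    "chair": {"sit", "push"},
}

def problem_awareness(goal, affordances):
    """Phase 1: keep only objects whose affordances match the goal."""
    return {obj for obj, acts in affordances.items() if goal in acts}

def select_action(goal, candidates):
    """Phase 2: pick one concrete action among the aware candidates."""
    return f"{goal} {sorted(candidates)[0]}" if candidates else None

relevant = problem_awareness("push", affordances)   # {"door", "chair"}
print(select_action("push", relevant))              # push chair
```

The separation matters for scale: awareness is a cheap filter over thousands of items, so the expensive selection step only ever sees a handful of candidates.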
Citations: 9
CONSCIOUSNESS FOR THE OUROBOROS MODEL
Pub Date : 2011-06-01 DOI: 10.1142/S1793843011000662
Knud Thomsen
The Ouroboros Model features a biologically inspired cognitive architecture. At its core lies a self-referential recursive process with alternating phases of data acquisition and evaluation. Memory entries are organized in schemata. The activation at a time of part of a schema biases the whole structure and, in particular, missing features, thus triggering expectations. An iterative recursive monitor process termed "consumption analysis" then checks how well such expectations fit with successive activations. Mismatches between anticipations based on previous experience and actual current data are highlighted and used for controlling the allocation of attention. In case no directly fitting filler for an open slot is found, activation spreads more widely and includes data relating to the actor, and Higher-Order Personality Activation, HOPA, ensues. It is briefly outlined how the Ouroboros Model produces many diverse characteristics and thus addresses established criteria for consciousness.
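A minimal sketch of the expectation-matching loop (hypothetical code; the actual model is far richer): activating part of a schema generates expectations for its missing features, and consumption analysis flags mismatches, which then claim attention.

```python
schema = {"greeting": ["hello", "name", "smile"]}   # a schema and its features

def consumption_analysis(slots, observed):
    """Compare schema-driven expectations against incoming activations."""
    expectations = [s for s in slots if s not in observed]   # still-open slots
    mismatches = [o for o in observed if o not in slots]     # surprises
    attention = mismatches or expectations    # mismatches claim attention first
    return expectations, attention

exp, att = consumption_analysis(schema["greeting"], ["hello", "frown"])
# exp == ["name", "smile"]; att == ["frown"]  (the mismatch grabs attention)
```

In the full model this check runs iteratively and recursively; the sketch only shows a single pass of matching one schema against one batch of activations.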
Citations: 11
CONCEPTUAL SPACES AND CONSCIOUSNESS: INTEGRATING COGNITIVE AND AFFECTIVE PROCESSES
Pub Date : 2011-06-01 DOI: 10.1142/S1793843011000649
Alfredo Pereira Junior, Leonardo Ferreira Almada
In the book "Conceptual Spaces: the Geometry of Thought" [2000] Peter Gardenfors proposes a new framework for cognitive science. Complementary to symbolic and subsymbolic [connectionist] descriptions, conceptual spaces are semantic structures — constructed from empirical data — representing the universe of mental states. We argue that Gardenfors' modeling can be used in consciousness research to describe the phenomenal conscious world, its elements and their intrinsic relations. The conceptual space approach affords the construction of a universal state space of human consciousness, where all possible kinds of human conscious states could be mapped. Starting from this approach, we discuss the inclusion of feelings and emotions in conceptual spaces, and their relation to perceptual and cognitive states. Current debate on integration of affect/emotion and perception/cognition allows three possible descriptive alternatives: emotion resulting from basic cognition; cognition resulting from basic emotion, and both as relatively independent functions integrated by brain mechanisms. Finding a solution for this issue is an important step in any attempt of successful modeling of natural or artificial consciousness. After making a brief review of proposals in this area, we summarize the essentials of a new model of consciousness based on neuro-astroglial interactions.
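A toy rendering of the idea that affective and cognitive states can live in one conceptual space (hypothetical code and dimensions, not the authors' model): concepts are prototype points in a metric space, and a state is categorized by its nearest prototype. The valence/arousal axes here are illustrative choices.

```python
import math

# Prototype points in a 2-D (valence, arousal) space: affective concepts
# placed in the same metric space a cognitive state would be mapped into.
prototypes = {"joy": (0.8, 0.7), "calm": (0.6, 0.2), "fear": (-0.7, 0.8)}

def categorize(point, prototypes):
    """Assign a state to the concept whose prototype is metrically nearest."""
    return min(prototypes, key=lambda c: math.dist(point, prototypes[c]))

print(categorize((0.7, 0.6), prototypes))   # joy
```

Nearest-prototype categorization is one standard reading of how regions in a conceptual space carve up experience; mapping "all possible conscious states" would mean choosing far richer quality dimensions than these two.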
Citations: 15
Hegelian phenomenology and robotics
Pub Date : 2011-06-01 DOI: 10.1142/S1793843011000698
D. Borrett, David Shih, M. Tomko, Sarah Borrett, H. Kwan
A formalism is developed that treats a robot as a subject that can interpret its own experience rather than an object that is interpreted within our experience. A regulative definition of a meaningful experience in robots is proposed in which the present sensible experience is considered meaningful to the agent, as the subject of the experience, if it can be related to the agent's temporal horizons. This definition is validated by demonstrating that such an experience in evolutionary autonomous agents is embodied, contextual and normative, as is required for the maintenance of phenomenological accuracy. With this formalism it is shown how a dialectic similar to that described in Hegelian phenomenology can emerge in the robotic experience and why the presence of such a dialectic can serve as a constraint in the further development of cognitive agents.
Citations: 5
Attitude Change Induced by Different Appearances of Interaction Agents
Pub Date : 2011-06-01 DOI: 10.1007/978-981-10-8702-8_15
S. Nishio, H. Ishiguro
Citations: 10
HYPERSET MODELS OF SELF, WILL AND REFLECTIVE CONSCIOUSNESS
Pub Date : 2011-06-01 DOI: 10.1142/S1793843011000601
B. Goertzel
A novel theory of reflective consciousness, will and self is presented, based on modeling each of these entities using self-referential mathematical structures called hypersets. Pattern theory is used to argue that these exotic mathematical structures may meaningfully be considered as parts of the minds of physical systems, even finite computational systems. The hyperset models presented are hypothesized to occur as patterns within the "moving bubble of attention" of the human brain and any roughly human-mind-like AI system. These ideas appear to be compatible with both panpsychist and materialist views of consciousness, and probably other views as well. Their relationship with the CogPrime AI design and its implementation in the OpenCog software framework is elucidated in detail.
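Hypersets are non-well-founded: unlike classical sets, a set may contain itself (x = {x}). Ordinary Python objects can model such a cycle directly, as this toy fragment (not the paper's formalism) shows:

```python
# A self-referential structure in the spirit of a hyperset, built as a
# dict cycle: the structure contains itself as one of its own parts.
will = {}
will["part_of"] = will

# Following the self-reference any number of times returns the same object.
print(will["part_of"]["part_of"] is will)   # True
```

This only illustrates that self-referential structure is representable in a finite system; the paper's claim is the stronger one that such structures occur as patterns in minds.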
Citations: 13