
Journal of Artificial General Intelligence: Latest Publications

Fuzzy Networks for Modeling Shared Semantic Knowledge
Pub Date : 2023-03-01 DOI: 10.2478/jagi-2023-0001
Farshad Badie, Luís M. Augusto
Abstract: Shared conceptualization, in the sense we take it here, is as recent a notion as the Semantic Web, but its relevance for a large variety of fields requires efficient methods of extraction and representation for both quantitative and qualitative data. This notion is particularly relevant for the investigation into, and construction of, semantic structures such as knowledge bases and taxonomies, but given the required large, often inaccurate, corpora available for search we can get only approximations. We see fuzzy description logic as an adequate medium for the representation of human semantic knowledge and propose a means to couple it with fuzzy semantic networks via the propositional Łukasiewicz fuzzy logic, such that these suffice for the decidability of queries over a semantic-knowledge base such as “to what degree of sharedness does it entail the instantiation C(a) for some concept C” or “what are the roles R that connect the individuals a and b to degree of sharedness ε.”
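The propositional Łukasiewicz connectives the abstract appeals to are easy to state concretely. The sketch below is a minimal illustration of graded assertions and their combination; the concept name "Bird", the role "eats", and the individuals are invented for the example, not taken from the paper.

```python
# Łukasiewicz fuzzy connectives over degrees in [0, 1].
def t_norm(a: float, b: float) -> float:
    """Łukasiewicz conjunction: max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def implication(a: float, b: float) -> float:
    """Łukasiewicz implication: min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

def negation(a: float) -> float:
    """Łukasiewicz negation: 1 - a."""
    return 1.0 - a

# Graded assertions: a concept instantiation C(a) and a role R(a, b),
# each held to some degree of sharedness in [0, 1].
concept_deg = {("Bird", "tweety"): 0.9}
role_deg = {("eats", "tweety", "worm"): 0.7}

# Degree to which "tweety is a Bird AND tweety eats worm" is shared (about 0.6):
combined = t_norm(concept_deg[("Bird", "tweety")],
                  role_deg[("eats", "tweety", "worm")])
```

A query of the paper's second kind ("which roles R connect a and b to degree ε") would then reduce to scanning such a graded role table for entries meeting the threshold.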
Citations: 0
Extending Environments to Measure Self-reflection in Reinforcement Learning
Pub Date : 2021-10-13 DOI: 10.2478/jagi-2022-0001
S. Alexander, Michael Castaneda, K. Compher, Oscar Martinez
Abstract: We consider an extended notion of reinforcement learning in which the environment can simulate the agent and base its outputs on the agent’s hypothetical behavior. Since good performance usually requires paying attention to whatever things the environment’s outputs are based on, we argue that for an agent to achieve on-average good performance across many such extended environments, it is necessary for the agent to self-reflect. Thus weighted-average performance over the space of all suitably well-behaved extended environments could be considered a way of measuring how self-reflective an agent is. We give examples of extended environments and introduce a simple transformation which experimentally seems to increase some standard RL agents’ performance in a certain type of extended environment.
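The central construction, an environment that runs the agent on input the agent never actually received, can be sketched in a few lines. The consistency-based reward and the toy agents below are illustrative assumptions, not the paper's actual environment suite.

```python
def extended_env(agent, history, hypothetical_obs=0):
    """Reward agreement between the agent's real and hypothetical behavior."""
    real_action = agent(history)
    # The environment simulates the agent on an input it never actually saw:
    hypothetical_action = agent(history + [hypothetical_obs])
    return 1.0 if real_action == hypothetical_action else 0.0

constant_agent = lambda h: 0          # acts the same regardless of history
parity_agent = lambda h: len(h) % 2   # keys its action on history length

reward_constant = extended_env(constant_agent, [1, 2])  # 1.0
reward_parity = extended_env(parity_agent, [1, 2])      # 0.0
```

An agent that models its own hypothetical behavior can do well here, while one that ignores it cannot, which is the sense in which average performance over many such environments probes self-reflection.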
Citations: 5
Feature Reinforcement Learning: Part II. Structured MDPs
Pub Date : 2021-01-01 DOI: 10.2478/jagi-2021-0003
Marcus Hutter
Abstract: The Feature Markov Decision Processes (ΦMDPs) model developed in Part I (Hutter, 2009b) is well-suited for learning agents in general environments. Nevertheless, unstructured ΦMDPs are limited to relatively simple environments. Structured MDPs like Dynamic Bayesian Networks (DBNs) are used for large-scale real-world problems. In this article I extend ΦMDP to ΦDBN. The primary contribution is to derive a cost criterion that allows the most relevant features to be extracted automatically from the environment, leading to the “best” DBN representation. I discuss all building blocks required for a complete general learning algorithm, and compare the novel ΦDBN model to the prevalent POMDP approach.
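One way to read the cost criterion is MDL-style: candidate feature maps Φ are scored by how compactly the induced state abstraction codes the observed history, with a penalty for model size. The toy below is only a sketch under that reading, not Hutter's exact criterion.

```python
import math
from collections import Counter

def code_length(history, phi):
    """Bits to code the phi-abstracted state sequence, plus a size penalty."""
    states = [phi(x) for x in history]
    counts = Counter(states)
    n = len(states)
    data_bits = -sum(c * math.log2(c / n) for c in counts.values())
    model_bits = len(counts)  # crude complexity penalty: one bit per state
    return data_bits + model_bits

history = [0, 1, 2, 3, 0, 1, 2, 3]
coarse = lambda x: x % 2  # maps observations onto two abstract states
fine = lambda x: x        # keeps all four abstract states

# The criterion trades fit against complexity; here the coarser map wins.
best = min([coarse, fine], key=lambda p: code_length(history, p))
```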
Citations: 0
The Synthesis and Decoding of Meaning
Pub Date : 2021-01-01 DOI: 10.2478/jagi-2021-0002
H. G. Schulze
Abstract: Thinking machines must be able to use language effectively in communication with humans. This requires of them the ability to generate meaning and transfer this meaning to a communicating partner. Machines must also be able to decode meaning communicated via language. This work is about meaning in the context of building an artificial general intelligent system. It starts with an analysis of the Turing test and some of the main approaches to explaining meaning. It then considers the generation of meaning in the human mind and argues that meaning has a dual nature. The quantum component reflects the relationships between objects, and the orthogonal quale component the value of these relationships to the self. Both components are necessary, simultaneously, for meaning to exist. This parallel existence permits the formulation of ‘meaning coordinates’ as ordered pairs of quantum and quale strengths. Meaning coordinates represent the contents of meaningful mental states. Spurred by a currently salient meaningful mental state in the speaker, language is used to induce a meaningful mental state in the hearer. Therefore, thinking machines must be able to produce and respond to meaningful mental states in ways similar to their functioning in humans. It is explained how quanta and qualia arise, how they generate meaningful mental states, how these states propagate to produce thought, how they are communicated and interpreted, and how they can be simulated to create thinking machines.
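The 'meaning coordinates' of the abstract, ordered pairs of a quantum strength and a quale strength, admit a minimal sketch. The field names and the both-components-present test below are an illustrative reading, not the paper's formalism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MeaningCoordinate:
    quantum: float  # strength of the relationship between objects
    quale: float    # value of that relationship to the self

    def is_meaningful(self) -> bool:
        # Both components must be present simultaneously for meaning to exist.
        return self.quantum > 0.0 and self.quale > 0.0

# A mental content with a strong relational component of modest value to the self:
state = MeaningCoordinate(quantum=0.8, quale=0.3)
```

On this reading, a pure relation with zero value to the self (or a pure valuation with no relational content) carries no meaning, matching the abstract's dual-nature claim.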
Citations: 2
Measuring Intelligence and Growth Rate: Variations on Hibbard’s Intelligence Measure
Pub Date : 2021-01-01 DOI: 10.2478/jagi-2021-0001
S. Alexander, B. Hibbard
Abstract: In 2011, Hibbard suggested an intelligence measure for agents who compete in an adversarial sequence prediction game. We argue that Hibbard’s idea should actually be considered as two separate ideas: first, that the intelligence of such agents can be measured based on the growth rates of the runtimes of the competitors that they defeat; and second, one specific (somewhat arbitrary) method for measuring said growth rates. Whereas Hibbard’s intelligence measure is based on the latter growth-rate-measuring method, we survey other methods for measuring function growth rates, and exhibit the resulting Hibbard-like intelligence measures and taxonomies. Of particular interest, we obtain intelligence taxonomies based on Big-O and Big-Theta notation systems, which taxonomies are novel in that they challenge conventional notions of what an intelligence measure should look like. We discuss how intelligence measurement of sequence predictors can indirectly serve as intelligence measurement for agents with Artificial General Intelligence (AGIs).
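The first of the two separated ideas can be sketched directly: grade an agent by the growth rates of the runtimes of the opponents it defeats. The empirical dominance check below stands in for a proper asymptotic comparison and is purely illustrative.

```python
def dominates(f, g, n0=10, n1=1000, step=97):
    """Crude empirical check that f(n) >= g(n) over a range of n."""
    return all(f(n) >= g(n) for n in range(n0, n1, step))

# Runtimes of sequence predictors this hypothetical agent has defeated:
runtimes_of_defeated = [lambda n: n, lambda n: n * n]

# Grade the agent against a linear-growth benchmark class: does it beat
# some opponent whose runtime grows at least that fast?
benchmark = lambda n: n
grade_at_least_linear = any(dominates(f, benchmark) for f in runtimes_of_defeated)
```

Swapping the comparison method (a fixed rate function, Big-O classes, Big-Theta classes) is what yields the different Hibbard-like taxonomies the paper surveys.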
Citations: 4
A New Approach to Creation of an Artificial Intellect and Method of its Implementation
Pub Date : 2021-01-01 DOI: 10.2478/jagi-2021-0004
Wladimir Stalski
Abstract: On the basis of the author’s earlier works, the article proposes a new approach to creating an artificial intellect system in a model of a human being that is presented as the unification of an intellectual agent and a humanoid robot (ARb). In accordance with the proposed new approach, the development of an artificial intellect is achieved by teaching a natural language to an ARb, and by its utilization for communication with ARbs and humans, as well as for reflections. A method is proposed for the implementation of the approach. Within the framework of that method, a human model is “brought up” like a child, in a collective of automatons and children, whereupon an ARb must master a natural language and reflection, and possess self-awareness. Agent robots (ARbs) propagate and their population evolves; that is, ARbs develop cognitively from generation to generation. ARbs must perform the tasks they were given, such as computing, whereupon they are then assigned time for “private life” for improving their education as well as for searching for partners for propagation. After having received an education, every agent robot may be viewed as a “person” who is capable of activities that contain elements of creativity. ARbs develop thanks to the evolution of their population, their education, and their personal “life” experience, including “work” experience, which is mastered in a collective of humans and automatons.
Citations: 1
Special Issue “On Defining Artificial Intelligence”—Commentaries and Author’s Response
Pub Date : 2020-02-01 DOI: 10.2478/jagi-2020-0003
Dagmar Monett, Colin W. P. Lewis, K. Thórisson, Joscha Bach, G. Baldassarre, Giovanni Granato, Istvan S. N. Berkeley, François Chollet, Matthew Crosby, Henry Shevlin, John Fox, J. Laird, S. Legg, Peter Lindes, Tomas Mikolov, W. Rapaport, R. Rojas, Marek Rosa, Peter Stone, R. Sutton, Roman V Yampolskiy, Pei Wang, R. Schank, A. Sloman, A. Winfield
Citations: 27
Combining Evolution and Learning in Computational Ecosystems
Pub Date : 2020-01-01 DOI: 10.2478/jagi-2020-0001
Claes Strannegård, Wen Xu, N. Engsner, J. Endler
Abstract: Although animals such as spiders, fish, and birds have very different anatomies, the basic mechanisms that govern their perception, decision-making, learning, reproduction, and death have striking similarities. These mechanisms have apparently allowed the development of general intelligence in nature. This led us to the idea of approaching artificial general intelligence (AGI) by constructing a generic artificial animal (animat) with a configurable body and fixed mechanisms of perception, decision-making, learning, reproduction, and death. One instance of this generic animat could be an artificial spider, another an artificial fish, and a third an artificial bird. The goal of all decision-making in this model is to maintain homeostasis. Thus actions are selected that might promote survival and reproduction to varying degrees. All decision-making is based on knowledge that is stored in network structures. Each animat has two such network structures: a genotype and a phenotype. The genotype models the initial nervous system that is encoded in the genome (“the brain at birth”), while the phenotype represents the nervous system in its present form (“the brain at present”). Initially the phenotype and the genotype coincide, but then the phenotype keeps developing as a result of learning, while the genotype essentially remains unchanged. The model is extended to ecosystems populated by animats that develop continuously according to fixed mechanisms for sexual or asexual reproduction, and death. Several examples of simple ecosystems are given. We show that our generic animat model possesses general intelligence in a primitive form. In fact, it can learn simple forms of locomotion, navigation, foraging, language, and arithmetic.
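Homeostasis-driven action selection, the stated goal of all decision-making in the model, can be sketched as picking the action whose predicted next state lies closest to the set points. The internal variables and dynamics below are invented for illustration, not taken from the animat implementation.

```python
def homeostatic_error(state, set_points):
    """Total distance of internal variables from their set points."""
    return sum(abs(state[k] - set_points[k]) for k in set_points)

def select_action(state, actions, set_points):
    """Choose the action whose predicted next state minimizes total error."""
    return min(actions, key=lambda a: homeostatic_error(a(state), set_points))

set_points = {"energy": 1.0, "water": 1.0}
state = {"energy": 0.2, "water": 0.9}  # very hungry, barely thirsty

eat = lambda s: {"energy": min(1.0, s["energy"] + 0.5), "water": s["water"] - 0.1}
drink = lambda s: {"energy": s["energy"] - 0.1, "water": min(1.0, s["water"] + 0.5)}

chosen = select_action(state, [eat, drink], set_points)  # eating wins here
```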
Citations: 2
Artificial Motivation for Cognitive Software Agents
Pub Date : 2020-01-01 DOI: 10.2478/jagi-2020-0002
R. McCall, S. Franklin, U. Faghihi, Javier Snaider, Sean Kugele
Abstract: Natural selection has imbued biological agents with motivations moving them to act for survival and reproduction, as well as to learn so as to support both. Artificial agents also require motivations to act in a goal-directed manner and to learn appropriately into various memories. Here we present a biologically inspired motivation system, based on feelings (including emotions) integrated within the LIDA cognitive architecture at a fundamental level. This motivational system, operating within LIDA’s cognitive cycle, provides a repertoire of motivational capacities operating over a range of time scales of increasing complexity. These include alarms, appraisal mechanisms, appetence and aversion, and deliberation and planning.
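One reading of feelings-based motivation is that currently active feelings weight which of several competing behaviors gets selected. The sketch below illustrates only that idea; the feeling names and the additive weighting are assumptions, not LIDA's actual mechanism.

```python
def select_behavior(behaviors, feelings):
    """Pick the behavior whose currently active relevant feelings are strongest."""
    def motivation(b):
        return sum(feelings.get(f, 0.0) for f in b["relevant_feelings"])
    return max(behaviors, key=motivation)

feelings = {"hunger": 0.8, "fear": 0.1, "curiosity": 0.4}
behaviors = [
    {"name": "forage", "relevant_feelings": ["hunger", "curiosity"]},
    {"name": "flee", "relevant_feelings": ["fear"]},
]

chosen = select_behavior(behaviors, feelings)  # "forage" wins here
```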
Citations: 19
The Archimedean trap: Why traditional reinforcement learning will probably not yield AGI
Pub Date : 2020-01-01 DOI: 10.2478/jagi-2020-0004
S. Alexander
Abstract: After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways traditional reinforcement learning could be altered to remove this roadblock.
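The obstacle can be made concrete with lexicographic preferences, a standard non-Archimedean example: no finite number of minor rewards outweighs one major reward, which no single real-valued reward signal encodes exactly. The tuple representation below is illustrative.

```python
def lex_better(r1, r2):
    """Lexicographic comparison of (major, minor) reward pairs."""
    return r1 > r2  # Python compares tuples lexicographically

major = (1, 0)           # one unit of the dominant reward component
minor_heap = (0, 10**9)  # a billion units of the minor component

# Non-Archimedean: the single major reward still wins.
assert lex_better(major, minor_heap)

# Any real-valued encoding gives the minor component some weight eps > 0,
# so roughly 1/eps minor rewards would (wrongly) outweigh a major one.
```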
Citations: 9