
Latest Publications in IEEE Transactions on Autonomous Mental Development

A Spike-Based Model of Neuronal Intrinsic Plasticity
Pub Date: 2013-03-01 | DOI: 10.1109/TAMD.2012.2211101
Chunguang Li, Yuke Li
The discovery of neuronal intrinsic plasticity (IP) processes that persistently modify a neuron's excitability necessitates a new concept of the neuronal plasticity mechanism and may profoundly influence our ideas on learning and memory. In this paper, we propose a spike-based IP model/adaptation rule for an integrate-and-fire (IF) neuron to model this biological phenomenon. By utilizing spikes denoted by Dirac delta functions rather than computing instantaneous firing rates for the time-dependent stimulus, this simple adaptation rule adjusts two parameters of an individual IF neuron to modify its excitability. As a result, this adaptation rule helps an IF neuron to keep its firing activity at a relatively “low but not too low” level and makes the spike-count distributions computed with adjusted window sizes similar to the experimental results.
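As a rough illustration of the kind of mechanism described above (not the authors' model), the sketch below simulates a leaky integrate-and-fire neuron whose firing threshold and leak conductance — assumed here to be the two adapted parameters — are nudged by a spike-driven rule so that a running firing-rate estimate settles near a low target. All parameter names, values, and the specific update rule are illustrative assumptions.

```python
import numpy as np

def simulate_if_with_ip(stimulus, dt=1e-3, target_rate=2.0,
                        eta=1e-2, v_rest=-65.0, v_reset=-65.0):
    """Leaky integrate-and-fire neuron with a toy intrinsic-plasticity rule.

    Two excitability parameters (threshold and leak conductance, an assumption
    made for illustration) are adapted from the spike train alone so that the
    neuron's running firing-rate estimate settles near a low target rate.
    """
    v = v_rest
    theta = -50.0           # firing threshold (adapted)
    g_leak = 0.1            # leak conductance (adapted)
    rate_est = target_rate  # running firing-rate estimate (Hz)
    tau_rate = 2.0          # time constant of the rate estimate (s)
    spike_times = []

    for step, current in enumerate(stimulus):
        v += dt * (-g_leak * (v - v_rest) + current)
        spiked = v >= theta
        if spiked:
            v = v_reset
            spike_times.append(step * dt)
        # spikes enter the rate estimate as unit impulses (Dirac deltas)
        rate_est += dt * (-rate_est / tau_rate) + (1.0 / tau_rate if spiked else 0.0)
        # IP rule: if the rate is too high, become less excitable, and vice versa
        err = rate_est - target_rate
        theta += eta * err
        g_leak = max(1e-3, g_leak + 0.01 * eta * err)
    return np.array(spike_times), theta, g_leak

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stim = 20.0 + rng.normal(0.0, 5.0, size=60000)   # 60 s of noisy input
    spikes, theta, g_leak = simulate_if_with_ip(stim)
    print(f"{len(spikes)} spikes over 60 s; final threshold {theta:.2f}, leak {g_leak:.3f}")
```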
Citations: 18
Predicting Visual Stimuli From Self-Induced Actions: An Adaptive Model of a Corollary Discharge Circuit
Pub Date: 2012-12-01 | DOI: 10.1109/TAMD.2012.2199989
Jonas Ruesch, R. Ferreira, A. Bernardino
Neural circuits that route motor activity to sensory structures play a fundamental role in perception. Their purpose is to aid basic cognitive processes by integrating knowledge about an organism's actions and to predict the perceptual consequences of those actions. This work develops a biologically inspired model of a visual stimulus prediction circuit and proposes a mathematical formulation for a computational implementation. We consider an agent with a visual sensory area consisting of an unknown rigid configuration of light-sensitive receptive fields which move with respect to the environment and according to a given number of degrees of freedom. From the agent's perspective, every movement induces an initially unknown change to the recorded stimulus. In line with evidence collected from studies on ontogenetic development and the plasticity of neural circuits, the proposed model adapts its structure with respect to experienced stimuli collected during the execution of a set of exploratory actions. We discuss the tendency of the proposed model to organize such that the prediction function is built using a particularly sparse feedforward network which requires a minimum amount of wiring and computational operations. We also observe a dualism between the organization of an intermediate layer of the network and the concept of self-similarity.
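A minimal sketch of the general idea, not the paper's sparse self-organizing circuit: an agent learns to predict its next sensory frame from the current frame plus an efference copy of its own motor command, using a plain delta rule on a linear predictor driven only by prediction error during random exploratory actions. The hidden "world" dynamics, dimensions, and learning rate are illustrative assumptions.

```python
import numpy as np

def learn_forward_model(n_pixels=32, n_motor=3, steps=5000, lr=0.05, seed=0):
    """Toy corollary-discharge learner: predict the next sensory frame from the
    current frame plus a copy of the agent's own motor command.

    The hidden world applies a fixed linear transformation (A, B) that the
    agent never sees directly; W and M are learned purely from prediction
    error while the agent performs random exploratory actions.
    """
    rng = np.random.default_rng(seed)
    A = 0.9 * np.eye(n_pixels)                     # true sensory dynamics (hidden)
    B = rng.normal(0.0, 0.5, (n_pixels, n_motor))  # true effect of each motor DOF (hidden)
    W = np.zeros((n_pixels, n_pixels))             # learned stimulus weights
    M = np.zeros((n_pixels, n_motor))              # learned efference-copy weights

    s = rng.normal(0.0, 1.0, n_pixels)
    for _ in range(steps):
        a = rng.normal(0.0, 1.0, n_motor)          # exploratory action
        s_next = A @ s + B @ a                     # what the receptors actually record
        prediction = W @ s + M @ a                 # corollary-discharge prediction
        err = s_next - prediction
        W += lr * np.outer(err, s) / n_pixels      # delta-rule updates
        M += lr * np.outer(err, a) / n_motor
        s = s_next
    return np.abs(M - B).mean()

if __name__ == "__main__":
    print(f"mean |learned - true| motor mapping: {learn_forward_model():.3f}")
```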
Citations: 8
A Developmental Approach to Structural Self-Organization in Reservoir Computing
Pub Date: 2012-12-01 | DOI: 10.1109/TAMD.2012.2182765
Jun Yin, Y. Meng, Yaochu Jin
Reservoir computing (RC) is a computational framework for neural network based information processing. Little work, however, has been conducted on adapting the structure of the neural reservoir. In this paper, we propose a developmental approach to structural self-organization in reservoir computing. More specifically, a recurrent spiking neural network is adopted for building up the reservoir, whose synaptic and structural plasticity are regulated by a gene regulatory network (GRN). Meanwhile, the expression dynamics of the GRN is directly influenced by the activity of the neurons in the reservoir. We term this proposed model as GRN-regulated self-organizing RC (GRN-SO-RC). Contrary to a randomly initialized and fixed structure used in most existing RC models, the structure of the reservoir in the GRN-SO-RC model is self-organized to adapt to the specific task using the GRN-based mechanism. To evaluate the proposed model, experiments have been conducted on several benchmark problems widely used in RC models, such as memory capacity and nonlinear auto-regressive moving average. In addition, we apply the GRN-SO-RC model to solving complex real-world problems, including speech recognition and human action recognition. Our experimental results on both the benchmark and real-world problems demonstrate that the GRN-SO-RC model is effective and robust in solving different types of problems.
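For orientation, the sketch below implements the conventional baseline that GRN-SO-RC departs from — a fixed, randomly wired, rate-based reservoir with a trained linear readout — on a toy delayed-recall (memory-capacity-style) task. The GRN-regulated synaptic and structural plasticity itself is not reproduced here; reservoir size, scaling, and the task setup are illustrative assumptions.

```python
import numpy as np

def make_reservoir(n_res=200, spectral_radius=0.9, seed=1):
    """Fixed, randomly wired reservoir (the baseline GRN-SO-RC self-organizes away from)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0, (n_res, n_res))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state scaling
    w_in = rng.normal(0.0, 1.0, n_res)
    return W, w_in

def run_reservoir(W, w_in, inputs, leak=0.5):
    """Drive the reservoir with a 1-D input sequence and collect its states."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = (1.0 - leak) * x + leak * np.tanh(W @ x + w_in * u)
        states.append(x.copy())
    return np.array(states)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = rng.uniform(-1.0, 1.0, 2000)
    delay = 5                                # memory-capacity-style task: recall u(t - delay)
    target = np.roll(u, delay)
    W, w_in = make_reservoir()
    X = run_reservoir(W, w_in, u)
    # discard a washout period, train a linear readout, test on held-out data
    X_train, y_train = X[100:1500], target[100:1500]
    X_test, y_test = X[1500:], target[1500:]
    w_out = np.linalg.lstsq(X_train, y_train, rcond=None)[0]
    mse = np.mean((X_test @ w_out - y_test) ** 2)
    print(f"test MSE recalling the input {delay} steps back: {mse:.4f}")
```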
Citations: 33
Human-Recognizable Robotic Gestures
Pub Date: 2012-12-01 | DOI: 10.1109/TAMD.2012.2208962
J. Cabibihan, W. So, S. Pramanik
For robots to be accommodated in human spaces and in daily human activities, robots should be able to understand messages from their human conversation partner. In the same light, humans must also understand the messages that are being communicated to them by robots, including nonverbal messages. We conducted a Web-based video study wherein participants interpreted the iconic gestures and emblems produced by an anthropomorphic robot. Out of the 15 robotic gestures presented, we found 6 that can be accurately recognized by the human observer. These were nodding, clapping, hugging, expressing anger, walking, and flying. We review these gestures for their meaning from literature on human and animal behavior. We conclude by discussing the possible implications of these gestures for the design of social robots that are able to have engaging interactions with humans.
Citations: 45
Model-Free Reinforcement Learning of Impedance Control in Stochastic Environments
Pub Date: 2012-12-01 | DOI: 10.1109/TAMD.2012.2205924
F. Stulp, J. Buchli, Alice Ellmer, M. Mistry, Evangelos A. Theodorou, S. Schaal
For humans and robots, variable impedance control is an essential component for ensuring robust and safe physical interaction with the environment. Humans learn to adapt their impedance to specific tasks and environments, a capability that we continually develop and improve until we are well into our twenties. In this article, we reproduce functionally interesting aspects of learning impedance control in humans on a simulated robot platform. As demonstrated in numerous force field tasks, humans combine two strategies to adapt their impedance to perturbations, thereby minimizing position error and energy consumption: 1) if perturbations are unpredictable, subjects increase their impedance through cocontraction; and 2) if perturbations are predictable, subjects learn a feed-forward command to offset the perturbation. We show how a 7-DOF simulated robot demonstrates similar behavior with our model-free reinforcement learning algorithm PI2, by applying deterministic and stochastic force fields to the robot's end-effector. We show the qualitative similarity between the robot and human movements. Our results provide a biologically plausible approach to learning appropriate impedances purely from experience, without requiring a model of either body or environment dynamics. Not requiring models also facilitates autonomous development for robots, as prespecified models cannot be provided for each environment a robot might encounter.
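The sketch below conveys the flavor of a PI²-style, model-free update on a one-dimensional variable-impedance task: rollouts with noisy policy parameters are reweighted by exponentiated cost, and the cost trades position error against stiffness (a stand-in for the energetic cost of cocontraction). Under a constant, predictable push the feed-forward term can absorb the perturbation, while unpredictable noise favors higher stiffness — qualitatively echoing the two strategies described above. This is not the authors' 7-DOF setup; the dynamics, cost weights, and parameterization are assumptions made for illustration.

```python
import numpy as np

def rollout_cost(theta, bias, noise_std, rng, T=50, dt=0.05):
    """Cost of holding x = 0 with a 1-D point mass under impedance (PD) control.

    theta = [log stiffness, feed-forward force]; the cost penalizes position
    error plus a small stiffness term standing in for the energy of cocontraction.
    """
    stiffness, feedforward = np.exp(theta[0]), theta[1]
    damping = 2.0 * np.sqrt(stiffness)           # keep the toy system critically damped
    x, v, cost = 0.0, 0.0, 0.0
    for _ in range(T):
        perturbation = bias + rng.normal(0.0, noise_std)
        force = -stiffness * x - damping * v + feedforward + perturbation
        v += dt * force
        x += dt * v
        cost += dt * (x ** 2 + 1e-3 * stiffness)
    return cost

def pi2_style_update(theta, bias, noise_std, rng, n_rollouts=20, explore=0.3, h=10.0):
    """One PI^2-flavored update: explore in parameter space, reweight by cost."""
    eps = rng.normal(0.0, explore, (n_rollouts, theta.size))
    costs = np.array([rollout_cost(theta + e, bias, noise_std, rng) for e in eps])
    w = np.exp(-h * (costs - costs.min()) / (costs.max() - costs.min() + 1e-9))
    w /= w.sum()
    return theta + w @ eps

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scenarios = [("predictable constant push", 2.0, 0.0),
                 ("unpredictable random pushes", 0.0, 2.0)]
    for label, bias, noise_std in scenarios:
        theta = np.array([0.0, 0.0])             # start with low stiffness, no feed-forward
        for _ in range(150):
            theta = pi2_style_update(theta, bias, noise_std, rng)
        print(f"{label}: stiffness {np.exp(theta[0]):.1f}, feed-forward {theta[1]:.2f}")
```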
Citations: 54
Intrinsic Motivation and Introspection in Reinforcement Learning
Pub Date: 2012-12-01 | DOI: 10.1109/TAMD.2012.2208457
K. Merrick
Incorporating intrinsic motivation with reinforcement learning can permit agents to independently choose which skills they will develop, or to change their focus of attention to learn different skills at different times. This implies an autonomous developmental process for skills in which a skill-acquisition goal is first identified, then a skill is learned to solve the goal. The learned skill may then be stored, reused, temporarily ignored or even permanently erased. This paper formalizes the developmental process for skills by proposing a goal-lifecycle using the option framework for motivated reinforcement learning agents. The paper shows how the goal-lifecycle can be used as a basis for designing motivational state-spaces that permit agents to reason introspectively and autonomously about when to learn skills to solve goals, when to activate skills, when to suspend activation of skills or when to delete skills. An algorithm is presented that simultaneously learns: 1) an introspective policy mapping motivational states to decisions that change the agent's motivational state, and 2) multiple option policies mapping sensed states and actions to achieve various domain-specific goals. Two variations of agents using this model are compared to motivated reinforcement learning agents without introspection for controlling non-player characters in a computer game scenario. Results show that agents using introspection can focus their attention on learning more complex skills than agents without introspection. In addition, they can learn these skills more effectively.
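A compact sketch of the goal-lifecycle idea, not the paper's option-learning algorithm: each candidate goal carries a competence estimate and an "interest" signal equal to recent learning progress; an introspective choice activates the most interesting goal, stores a goal once mastered, and suspends a goal whose progress has stalled. Goal difficulties, thresholds, and update constants are illustrative assumptions.

```python
import numpy as np

class GoalLifecycleAgent:
    """Toy goal lifecycle driven by intrinsic motivation.

    Each goal has a competence that improves with practice; 'interest' tracks
    recent learning progress. The introspective policy activates the most
    interesting goal, stores a goal once it is mastered, and suspends a goal
    whose progress has stalled.
    """
    def __init__(self, difficulties, rng):
        self.rng = rng
        self.difficulty = np.array(difficulties, dtype=float)
        self.competence = np.zeros(len(difficulties))
        self.interest = np.ones(len(difficulties))        # optimistic initial interest
        self.status = ["identified"] * len(difficulties)

    def practice(self, g):
        """One episode of practicing goal g; returns the learning progress made."""
        before = self.competence[g]
        gain = self.rng.uniform(0.5, 1.5) / self.difficulty[g]
        self.competence[g] = min(1.0, before + gain)
        return self.competence[g] - before

    def step(self):
        candidates = [g for g, s in enumerate(self.status) if s in ("identified", "active")]
        if not candidates:
            return None                                   # everything stored or suspended
        g = max(candidates, key=lambda i: self.interest[i])  # introspective goal choice
        self.status[g] = "active"
        progress = self.practice(g)
        self.interest[g] = 0.8 * self.interest[g] + 0.2 * progress
        if self.competence[g] >= 1.0:
            self.status[g] = "stored"                     # mastered: keep the skill, move on
        elif self.interest[g] < 1e-3:
            self.status[g] = "suspended"                  # stalled: temporarily ignore
        return g

if __name__ == "__main__":
    # three goals: easy, moderate, and one that (by construction) cannot be learned
    agent = GoalLifecycleAgent(difficulties=[5, 20, 1e9], rng=np.random.default_rng(0))
    while agent.step() is not None:
        pass
    print("final status:", agent.status)
    print("competence:", np.round(agent.competence, 2))
```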
Citations: 21
A Unified Account of Gaze Following
Pub Date: 2012-12-01 | DOI: 10.1109/TAMD.2012.2208640
H. Jasso, J. Triesch, G. Deák, J. Lewis
Gaze following, the ability to redirect one's visual attention to look at what another person is seeing, is foundational for imitation, word learning, and theory-of-mind. Previous theories have suggested that the development of gaze following in human infants is the product of a basic gaze following mechanism, plus the gradual incorporation of several distinct new mechanisms that improve the skill, such as spatial inference, and the ability to use eye direction information as well as head direction. In this paper, we offer an alternative explanation based on a single learning mechanism. From a starting state with no knowledge of the implications of another organism's gaze direction, our model learns to follow gaze by being placed in a simulated environment where an adult caregiver looks around at objects. Our infant model matches the development of gaze following in human infants as measured in key experiments that we replicate and analyze in detail.
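A toy sketch of the single-mechanism claim, under strongly simplified assumptions: the environment is reduced to a handful of discrete locations, the caregiver's head direction usually points at the rewarding object, and a plain tabular value update is enough for the "infant" to acquire a gaze-following mapping from experience alone. The grid size, noise level, and learning rule are invented for illustration and are not the paper's simulated environment.

```python
import numpy as np

def learn_gaze_following(n_locations=8, episodes=5000, eps=0.1, lr=0.1, seed=0):
    """Toy single-mechanism account of gaze following.

    The caregiver looks at whichever location holds an interesting object; the
    infant sees only the caregiver's head direction, picks a location to look
    at, and is rewarded when it finds the object. A plain tabular value table
    Q[head_direction, looked_location] suffices to acquire gaze following.
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_locations, n_locations))
    for _ in range(episodes):
        obj = rng.integers(n_locations)                   # where the interesting toy is
        head = obj if rng.random() < 0.9 else rng.integers(n_locations)  # caregiver gaze, 10% noise
        if rng.random() < eps:
            look = rng.integers(n_locations)              # exploratory glance
        else:
            look = int(np.argmax(Q[head]))                # follow the learned mapping
        reward = 1.0 if look == obj else 0.0              # seeing the toy is rewarding
        Q[head, look] += lr * (reward - Q[head, look])
    return Q

if __name__ == "__main__":
    Q = learn_gaze_following()
    follows = np.mean(np.argmax(Q, axis=1) == np.arange(Q.shape[0]))
    print(f"fraction of head directions correctly followed: {follows:.2f}")
```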
Citations: 11
Editorial: Impact Factor and Outstanding Paper Awards
Pub Date: 2012-09-10 | DOI: 10.1109/TAMD.2012.2211475
Zhengyou Zhang
Citations: 0
Context-Based Bayesian Intent Recognition
Pub Date: 2012-09-01 | DOI: 10.1109/TAMD.2012.2211871
Richard Kelley, A. Tavakkoli, Christopher King, A. Ambardekar, M. Nicolescu, M. Nicolescu
One of the foundations of social interaction among humans is the ability to correctly identify interactions and infer the intentions of others. To build robots that reliably function in the human social world, we must develop models that robots can use to mimic the intent recognition skills found in humans. We propose a framework that uses contextual information in the form of object affordances and object state to improve the performance of an underlying intent recognition system. This system represents objects and their affordances using a directed graph that is automatically extracted from a large corpus of natural language text. We validate our approach on a physical robot that classifies intentions in a number of scenarios.
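A minimal sketch of context-based Bayesian intent recognition: the affordances of the object being manipulated supply a prior over intentions, and observed motion features update that prior multiplicatively. The intents, features, and probabilities below are invented for illustration and are not taken from the paper's affordance graph.

```python
import numpy as np

def intent_posterior(action_likelihood, context_prior, observations):
    """Toy context-based Bayesian intent recognizer.

    P(intent | obs, context) is proportional to the context prior over intents
    (here supplied by object affordances) times the product of per-observation
    likelihoods P(obs_t | intent).
    """
    intents = list(context_prior)
    log_post = np.log(np.array([context_prior[i] for i in intents]))
    for obs in observations:
        log_post += np.log(np.array([action_likelihood[i][obs] for i in intents]))
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return dict(zip(intents, post))

if __name__ == "__main__":
    # Likelihood of observed motion features given each candidate intent (illustrative numbers).
    action_likelihood = {
        "drink":   {"reach": 0.5, "lift": 0.4, "tilt": 0.4},
        "pour":    {"reach": 0.5, "lift": 0.4, "tilt": 0.5},
        "handoff": {"reach": 0.6, "lift": 0.3, "tilt": 0.05},
    }
    # Context prior from object affordances: a full cup mostly affords drinking or pouring.
    context_prior = {"drink": 0.5, "pour": 0.3, "handoff": 0.2}
    posterior = intent_posterior(action_likelihood, context_prior, ["reach", "lift", "tilt"])
    for intent, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
        print(f"P({intent} | observations, context) = {p:.2f}")
```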
Citations: 33
Reciprocity and Retaliation in Social Games With Adaptive Agents
Pub Date: 2012-09-01 | DOI: 10.1109/TAMD.2012.2202658
Derrik E. Asher, Andrew Zaldivar, B. Barton, A. Brewer, J. Krichmar
Game theory has been useful for understanding risk-taking and cooperative behavior. However, in studies of the neural basis of decision-making during games of conflict, subjects typically play against opponents with predetermined strategies. The present study introduces a neurobiologically plausible model of action selection and neuromodulation, which adapts to its opponent's strategy and environmental conditions. The model is based on the assumption that dopaminergic and serotonergic systems track expected rewards and costs, respectively. The model controlled both simulated and robotic agents playing Hawk-Dove and Chicken games against subjects. When playing against an aggressive version of the model, there was a significant shift in the subjects' strategy from Win-Stay-Lose-Shift to Tit-For-Tat. Subjects became retaliatory when confronted with agents that tended towards risky behavior. These results highlight the important interactions between subjects and agents utilizing adaptive behavior. Moreover, they reveal neuromodulatory mechanisms that give rise to cooperative and competitive behaviors.
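A rough sketch of the reward/cost-tracking idea (loosely, dopamine-like and serotonin-like signals) in a repeated Hawk-Dove game: the agent keeps running estimates of expected reward and expected cost for each action and chooses by softmax over their difference, so a retaliatory opponent shapes its behavior differently from an aggressive one. The payoffs, opponents, and constants are illustrative assumptions, not the authors' neural model.

```python
import numpy as np

# Hawk-Dove payoffs (agent action, opponent action) -> (reward, cost)
PAYOFF = {
    ("dove", "dove"): (3.0, 0.0),
    ("dove", "hawk"): (1.0, 0.0),
    ("hawk", "dove"): (5.0, 0.0),
    ("hawk", "hawk"): (0.0, 4.0),   # escalated fight: low reward, high cost
}

def play(opponent_policy, rounds=500, lr=0.1, beta=2.0, seed=0):
    """Adaptive agent that tracks expected reward and expected cost per action
    (loosely analogous to dopaminergic and serotonergic signals) and picks
    actions by softmax over (expected reward - expected cost)."""
    rng = np.random.default_rng(seed)
    actions = ["dove", "hawk"]
    exp_reward = {a: 0.0 for a in actions}
    exp_cost = {a: 0.0 for a in actions}
    last_agent_action = "dove"
    hawk_count = 0
    for _ in range(rounds):
        values = np.array([exp_reward[a] - exp_cost[a] for a in actions])
        p = np.exp(beta * values)
        p /= p.sum()
        a = actions[rng.choice(2, p=p)]
        o = opponent_policy(last_agent_action, rng)
        reward, cost = PAYOFF[(a, o)]
        exp_reward[a] += lr * (reward - exp_reward[a])   # reward-expectation update
        exp_cost[a] += lr * (cost - exp_cost[a])         # cost-expectation update
        last_agent_action = a
        hawk_count += (a == "hawk")
    return hawk_count / rounds

if __name__ == "__main__":
    aggressive = lambda last, rng: "hawk" if rng.random() < 0.8 else "dove"
    tit_for_tat = lambda last, rng: last                 # retaliatory: copy the agent's last move
    print(f"hawk rate vs aggressive opponent:  {play(aggressive):.2f}")
    print(f"hawk rate vs tit-for-tat opponent: {play(tit_for_tat):.2f}")
```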
Citations: 22