
Latest Publications in IEEE Transactions on Autonomous Mental Development

Object Learning Through Active Exploration
Pub Date: 2014-03-01 DOI: 10.1109/TAMD.2013.2280614
S. Ivaldi, S. Nguyen, Natalia Lyubova, Alain Droniou, V. Padois, David Filliat, Pierre-Yves Oudeyer, Olivier Sigaud
This paper addresses the problem of active object learning by a humanoid child-like robot, using a developmental approach. We propose a cognitive architecture where the visual representation of the objects is built incrementally through active exploration. We present the design guidelines of the cognitive architecture, its main functionalities, and we outline the cognitive process of the robot by showing how it learns to recognize objects in a human-robot interaction scenario inspired by social parenting. The robot actively explores the objects through manipulation, driven by a combination of social guidance and intrinsic motivation. Besides the robotics and engineering achievements, our experiments replicate some observations about the coupling of vision and manipulation in infants, particularly how they focus on the most informative objects. We discuss the further benefits of our architecture, particularly how it can be improved and used to ground concepts.
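To make the exploration drive concrete, here is a minimal sketch of a learning-progress heuristic for intrinsic motivation, in the spirit of (but not taken from) the paper; the class, its interface, and the error-window rule are illustrative assumptions:

```python
class CuriosityDrivenExplorer:
    """Pick the object whose recognition model is improving fastest
    (learning progress), so exploration focuses on learnable objects.
    Illustrative sketch, not the authors' implementation."""

    def __init__(self, object_ids, window=5):
        self.window = window
        self.errors = {obj: [] for obj in object_ids}

    def record(self, obj, recognition_error):
        self.errors[obj].append(recognition_error)

    def learning_progress(self, obj):
        h, w = self.errors[obj], self.window
        if len(h) < 2 * w:
            return float("inf")   # barely explored objects stay interesting
        older = sum(h[-2 * w:-w]) / w
        recent = sum(h[-w:]) / w
        return older - recent     # positive when error is dropping

    def next_object(self):
        return max(self.errors, key=self.learning_progress)
```

In the paper this intrinsic drive is combined with social guidance (e.g., a caregiver drawing attention to an object); the sketch covers only the intrinsic term.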
{"title":"Object Learning Through Active Exploration","authors":"S. Ivaldi, S. Nguyen, Natalia Lyubova, Alain Droniou, V. Padois, David Filliat, Pierre-Yves Oudeyer, Olivier Sigaud","doi":"10.1109/TAMD.2013.2280614","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2280614","url":null,"abstract":"This paper addresses the problem of active object learning by a humanoid child-like robot, using a developmental approach. We propose a cognitive architecture where the visual representation of the objects is built incrementally through active exploration. We present the design guidelines of the cognitive architecture, its main functionalities, and we outline the cognitive process of the robot by showing how it learns to recognize objects in a human-robot interaction scenario inspired by social parenting. The robot actively explores the objects through manipulation, driven by a combination of social guidance and intrinsic motivation. Besides the robotics and engineering achievements, our experiments replicate some observations about the coupling of vision and manipulation in infants, particularly how they focus on the most informative objects. We discuss the further benefits of our architecture, particularly how it can be improved and used to ground concepts.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"26 1","pages":"56-72"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2280614","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62762391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 69
Development of First Social Referencing Skills: Emotional Interaction as a Way to Regulate Robot Behavior
Pub Date: 2014-03-01 DOI: 10.1109/TAMD.2013.2284065
S. Boucenna, P. Gaussier, L. Hafemeister
In this paper, we study how emotional interactions with a social partner can bootstrap increasingly complex behaviors such as social referencing. Our idea is that social referencing as well as facial expression recognition can emerge from a simple sensory-motor system involving emotional stimuli. Without knowing that the other is an agent, the robot is able to learn some complex tasks if the human partner has some “empathy” with, or at least “resonates” with, the robot head (low-level emotional resonance). Hence, we advocate the idea that social referencing can be bootstrapped from a simple sensory-motor system not dedicated to social interactions.
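As a hedged illustration of the associative core of this idea, the toy sketch below links a scalar facial-expression valence to the currently attended object; the names, thresholds, and update rule are assumptions, not the authors' system:

```python
class SocialReferencingLearner:
    """Associate the partner's facial-expression valence with the
    currently attended object, then let that valence bias behavior."""

    def __init__(self, learning_rate=0.1):
        self.lr = learning_rate
        self.valence = {}   # object id -> learned emotional value

    def observe(self, attended_object, expression_valence):
        # expression_valence in [-1, 1], e.g., smile = +1, frown = -1
        v = self.valence.get(attended_object, 0.0)
        self.valence[attended_object] = v + self.lr * (expression_valence - v)

    def react(self, obj):
        v = self.valence.get(obj, 0.0)
        if v > 0.2:
            return "approach"
        if v < -0.2:
            return "avoid"
        return "explore"
```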
{"title":"Development of First Social Referencing Skills: Emotional Interaction as a Way to Regulate Robot Behavior","authors":"S. Boucenna, P. Gaussier, L. Hafemeister","doi":"10.1109/TAMD.2013.2284065","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2284065","url":null,"abstract":"In this paper, we study how emotional interactions with a social partner can bootstrap increasingly complex behaviors such as social referencing. Our idea is that social referencing as well as facial expression recognition can emerge from a simple sensory-motor system involving emotional stimuli. Without knowing that the other is an agent, the robot is able to learn some complex tasks if the human partner has some “empathy” or at least “resonate” with the robot head (low level emotional resonance). Hence, we advocate the idea that social referencing can be bootstrapped from a simple sensory-motor system not dedicated to social interactions.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"6 1","pages":"42-55"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2284065","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62762202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
Erratum to "Modeling cross-modal interactions in early word learning" [Dec 13 288-297]
Pub Date: 2014-03-01 DOI: 10.1109/TAMD.2014.2310061
Nadja Althaus, D. Mareschal
In the above paper (ibid., vol. 5, no. 4, pp. 288-297, Dec. 2013), Fig. 4 was rendered incorrectly. The correct Fig. 4 is presented here.
{"title":"Erratum to \"Modeling cross-modal interactions in early word learning\" [Dec 13 288-297]","authors":"Nadja Althaus, D. Mareschal","doi":"10.1109/TAMD.2014.2310061","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2310061","url":null,"abstract":"In the above paper (ibid., vol. 5, no. 4, pp. 288-297, Dec. 2013), Fig. 4 was mistakenly misrepresented. The current correct Fig. 4 is presented here.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"6 1","pages":"73-73"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2014.2310061","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62762448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LIDA: A Systems-level Architecture for Cognition, Emotion, and Learning
Pub Date: 2014-03-01 DOI: 10.1109/TAMD.2013.2277589
S. Franklin, Tamas Madl, S. D’Mello, Javier Snaider
We describe a cognitive architecture, the learning intelligent distribution agent (LIDA), that affords attention, action selection, and human-like learning, intended for use in controlling cognitive agents that replicate human experiments as well as perform real-world tasks. LIDA combines sophisticated action selection, motivation via emotions, a centrally important attention mechanism, and multimodal instructionalist and selectionist learning. Empirically grounded in cognitive science and cognitive neuroscience, the LIDA architecture employs a variety of modules and processes, each with its own effective representations and algorithms. LIDA has much to say about motivation, emotion, attention, and autonomous learning in cognitive agents. In this paper, we summarize the LIDA model together with its resulting agent architecture, describe its computational implementation, and discuss results of simulations that replicate known experimental data. We also discuss some of LIDA's conceptual modules, propose nonlinear dynamics as a bridge between LIDA's modules and processes and the underlying neuroscience, and point out some of the differences between LIDA and other cognitive architectures. Finally, we discuss how LIDA addresses some of the open issues in cognitive architecture research.
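LIDA itself is a substantial software framework; the schematic below only illustrates the shape of a perceive-attend-act cycle with a global-workspace competition, under invented Codelet/Scheme interfaces, and should not be read as the LIDA API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Codelet:
    name: str
    salience: Callable[[Dict], float]   # how strongly it bids for attention

@dataclass
class Scheme:
    name: str
    relevance: Callable[[str], float]   # how well it matches the broadcast

def cognitive_cycle(workspace: Dict, codelets: List[Codelet],
                    schemes: List[Scheme]) -> str:
    # Attention: codelets compete; the winner's content is "broadcast"
    # globally (LIDA's functional-consciousness step, schematically).
    winner = max(codelets, key=lambda c: c.salience(workspace))
    broadcast = winner.name
    # Action selection: fire the scheme most activated by the broadcast.
    chosen = max(schemes, key=lambda s: s.relevance(broadcast))
    return chosen.name

if __name__ == "__main__":
    codelets = [Codelet("obstacle", lambda ws: ws.get("obstacle", 0.0)),
                Codelet("target", lambda ws: ws.get("target", 0.0))]
    schemes = [Scheme("avoid", lambda b: 1.0 if b == "obstacle" else 0.0),
               Scheme("approach", lambda b: 1.0 if b == "target" else 0.0)]
    print(cognitive_cycle({"obstacle": 0.9, "target": 0.4}, codelets, schemes))  # avoid
```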
{"title":"LIDA: A Systems-level Architecture for Cognition, Emotion, and Learning","authors":"S. Franklin, Tamas Madl, S. D’Mello, Javier Snaider","doi":"10.1109/TAMD.2013.2277589","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2277589","url":null,"abstract":"We describe a cognitive architecture learning intelligent distribution agent (LIDA) that affords attention, action selection and human-like learning intended for use in controlling cognitive agents that replicate human experiments as well as performing real-world tasks. LIDA combines sophisticated action selection, motivation via emotions, a centrally important attention mechanism, and multimodal instructionalist and selectionist learning. Empirically grounded in cognitive science and cognitive neuroscience, the LIDA architecture employs a variety of modules and processes, each with its own effective representations and algorithms. LIDA has much to say about motivation, emotion, attention, and autonomous learning in cognitive agents. In this paper, we summarize the LIDA model together with its resulting agent architecture, describe its computational implementation, and discuss results of simulations that replicate known experimental data. We also discuss some of LIDA's conceptual modules, propose nonlinear dynamics as a bridge between LIDA's modules and processes and the underlying neuroscience, and point out some of the differences between LIDA and other cognitive architectures. Finally, we discuss how LIDA addresses some of the open issues in cognitive architecture research.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"6 1","pages":"19-41"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2277589","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62762073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 184
Editorial TAMD Update
Pub Date: 2014-01-01 DOI: 10.1109/TAMD.2014.2309431
Zhengyou Zhang
{"title":"Editorial TAMD Update","authors":"Zhengyou Zhang","doi":"10.1109/TAMD.2014.2309431","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2309431","url":null,"abstract":"","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"1 1","pages":"1-2"},"PeriodicalIF":0.0,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75562194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Introduction of New Associate Editors
Pub Date: 2014-01-01 DOI: 10.1109/TAMD.2014.2309443
Zhengyou Zhang
{"title":"Introduction of New Associate Editors","authors":"Zhengyou Zhang","doi":"10.1109/TAMD.2014.2309443","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2309443","url":null,"abstract":"","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"21 1","pages":"3-4"},"PeriodicalIF":0.0,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86425519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Modeling Cross-Modal Interactions in Early Word Learning
Pub Date: 2013-12-01 DOI: 10.1109/TAMD.2013.2264858
Nadja Althaus, D. Mareschal
Infancy research demonstrating a facilitation of visual category formation in the presence of verbal labels suggests that infants' object categories and words develop interactively. This contrasts with the notion that words are simply mapped “onto” previously existing categories. To investigate the computational foundations of a system in which word and object categories develop simultaneously and in an interactive fashion, we present a model of word learning based on interacting self-organizing maps that represent the auditory and visual modalities, respectively. While other models of lexical development have employed similar dual-map architectures, our model uses active Hebbian connections to propagate activation between the visual and auditory maps during learning. Our results show that categorical perception emerges from these early audio-visual interactions in both domains. We argue that the learning mechanism introduced in our model could play a role in the facilitation of infants' categorization through verbal labeling.
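A compact sketch of the dual-map idea, assuming toy feature dimensions and a simplified single-winner SOM update (the paper's model uses neighborhood functions and decaying rates; this is not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

class SOM:
    """A stripped-down self-organizing map with Gaussian activation."""

    def __init__(self, n_units, dim):
        self.w = rng.random((n_units, dim))

    def activation(self, x):
        d = np.linalg.norm(self.w - x, axis=1)
        a = np.exp(-d ** 2)
        return a / a.sum()

    def adapt(self, x, lr=0.1):
        bmu = int(np.argmax(self.activation(x)))
        self.w[bmu] += lr * (x - self.w[bmu])   # move winner toward input

visual, auditory = SOM(25, dim=10), SOM(25, dim=8)
hebbian = np.zeros((25, 25))                    # cross-modal links

def train_step(image_vec, label_vec, lr=0.05):
    global hebbian
    av, aa = visual.activation(image_vec), auditory.activation(label_vec)
    # Hebbian rule: units co-active across modalities strengthen their
    # link, so hearing a label later pre-activates visual prototypes.
    hebbian += lr * np.outer(av, aa)
    visual.adapt(image_vec)
    auditory.adapt(label_vec)
```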
{"title":"Modeling Cross-Modal Interactions in Early Word Learning","authors":"Nadja Althaus, D. Mareschal","doi":"10.1109/TAMD.2013.2264858","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2264858","url":null,"abstract":"Infancy research demonstrating a facilitation of visual category formation in the presence of verbal labels suggests that infants' object categories and words develop interactively. This contrasts with the notion that words are simply mapped “onto” previously existing categories. To investigate the computational foundations of a system in which word and object categories develop simultaneously and in an interactive fashion, we present a model of word learning based on interacting self-organizing maps that represent the auditory and visual modalities, respectively. While other models of lexical development have employed similar dual-map architectures, our model uses active Hebbian connections to propagate activation between the visual and auditory maps during learning. Our results show that categorical perception emerges from these early audio-visual interactions in both domains. We argue that the learning mechanism introduced in our model could play a role in the facilitation of infants' categorization through verbal labeling.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"5 1","pages":"288-297"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2264858","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62761686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Computational Audiovisual Scene Analysis in Online Adaptation of Audio-Motor Maps
Pub Date: 2013-12-01 DOI: 10.1109/TAMD.2013.2257766
Rujiao Yan, Tobias Rodemann, B. Wrede
For sound localization, the binaural auditory system of a robot needs audio-motor maps, which represent the relationship between certain audio features and the position of the sound source. This mapping is normally learned during an offline calibration in controlled environments, but we show that using computational audiovisual scene analysis (CAVSA), it can be adapted online in free interaction with a number of a priori unknown speakers. CAVSA enables a robot to understand dynamic dialog scenarios, such as the number and position of speakers, as well as who is the current speaker. Our system does not require specific robot motions and thus can work during other tasks. The performance of online-adapted maps is continuously monitored by computing the difference between online-adapted and offline-calibrated maps and also comparing sound localization results with ground truth data (if available). We show that our approach is more robust in multiperson scenarios than the state of the art in terms of learning progress. We also show that our system is able to bootstrap with a randomized audio-motor map and adapt to hardware modifications that induce a change in audio-motor maps.
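A minimal one-dimensional sketch of such online adaptation, assuming a single binaural feature (say, an interaural time difference) and a visually confirmed speaker azimuth as the teaching signal; bin count, ranges, and learning rate are illustrative:

```python
import numpy as np

class AudioMotorMap:
    """Tabular map from a binaural feature to source azimuth, nudged
    online whenever vision confirms where the current speaker is."""

    def __init__(self, n_bins=41, feature_range=(-1.0, 1.0)):
        self.centers = np.linspace(*feature_range, n_bins)
        self.azimuth = np.linspace(-90.0, 90.0, n_bins)  # degrees, initial guess

    def localize(self, feature):
        return self.azimuth[np.argmin(np.abs(self.centers - feature))]

    def adapt(self, feature, visual_azimuth, lr=0.2):
        i = np.argmin(np.abs(self.centers - feature))
        self.azimuth[i] += lr * (visual_azimuth - self.azimuth[i])
```

Monitoring adaptation quality, as the abstract describes, could then amount to comparing the adapted table against an offline-calibrated copy.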
{"title":"Computational Audiovisual Scene Analysis in Online Adaptation of Audio-Motor Maps","authors":"Rujiao Yan, Tobias Rodemann, B. Wrede","doi":"10.1109/TAMD.2013.2257766","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2257766","url":null,"abstract":"For sound localization, the binaural auditory system of a robot needs audio-motor maps, which represent the relationship between certain audio features and the position of the sound source. This mapping is normally learned during an offline calibration in controlled environments, but we show that using computational audiovisual scene analysis (CAVSA), it can be adapted online in free interaction with a number of a priori unknown speakers. CAVSA enables a robot to understand dynamic dialog scenarios, such as the number and position of speakers, as well as who is the current speaker. Our system does not require specific robot motions and thus can work during other tasks. The performance of online-adapted maps is continuously monitored by computing the difference between online-adapted and offline-calibrated maps and also comparing sound localization results with ground truth data (if available). We show that our approach is more robust in multiperson scenarios than the state of the art in terms of learning progress. We also show that our system is able to bootstrap with a randomized audio-motor map and adapt to hardware modifications that induce a change in audio-motor maps.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"5 1","pages":"273-287"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2257766","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62761367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
A Robotic Model of Reaching and Grasping Development
Pub Date: 2013-12-01 DOI: 10.1109/TAMD.2013.2264321
Piero Savastano, S. Nolfi
We present a neurorobotic model that develops reaching and grasping skills analogous to those displayed by infants during their early developmental stages. The learning process is realized in an incremental manner, taking into account the reflex behaviors initially possessed by infants and the neurophysiological and cognitive maturation occurring during the relevant developmental period. The behavioral skills acquired by the robots closely match those displayed by children. The comparison between incremental and nonincremental experiments demonstrates how some of the limitations characterizing the initial developmental phase channel the learning process toward better solutions.
{"title":"A Robotic Model of Reaching and Grasping Development","authors":"Piero Savastano, S. Nolfi","doi":"10.1109/TAMD.2013.2264321","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2264321","url":null,"abstract":"We present a neurorobotic model that develops reaching and grasping skills analogous to those displayed by infants during their early developmental stages. The learning process is realized in an incremental manner, taking into account the reflex behaviors initially possessed by infants and the neurophysiological and cognitive maturation occurring during the relevant developmental period. The behavioral skills acquired by the robots closely match those displayed by children. The comparison between incremental and nonincremental experiments demonstrates how some of the limitations characterizing the initial developmental phase channel the learning process toward better solutions.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"5 1","pages":"326-336"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2264321","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62761537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21
Learning to Reproduce Fluctuating Time Series by Inferring Their Time-Dependent Stochastic Properties: Application in Robot Learning Via Tutoring
Pub Date: 2013-12-01 DOI: 10.1109/TAMD.2013.2258019
Shingo Murata, Jun Namikawa, H. Arie, S. Sugano, J. Tani
This study proposes a novel type of dynamic neural network model that can learn to extract stochastic or fluctuating structures hidden in time series data. The network learns to predict not only the mean of the next input state, but also its time-dependent variance. The training method is based on maximum likelihood estimation by using the gradient descent method and the likelihood function is expressed as a function of the estimated variance. Regarding the model evaluation, we present numerical experiments in which training data were generated in different ways utilizing Gaussian noise. Our analysis showed that the network can predict the time-dependent variance and the mean and it can also reproduce the target stochastic sequence data by utilizing the estimated variance. Furthermore, it was shown that a humanoid robot using the proposed network can learn to reproduce latent stochastic structures hidden in fluctuating tutoring trajectories. This learning scheme is essential for the acquisition of sensory-guided skilled behavior.
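The training objective described here is standard enough to sketch: minimize the negative Gaussian log-likelihood so the network fits both a mean and a time-dependent variance. Below is a minimal PyTorch rendition; a GRU stands in for the paper's recurrent network, and layer sizes and data are placeholders:

```python
import torch
import torch.nn as nn

class MeanVarianceRNN(nn.Module):
    """Predicts the mean and log-variance of the next observation."""

    def __init__(self, obs_dim=1, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, obs_dim)
        self.log_var = nn.Linear(hidden, obs_dim)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.mu(h), self.log_var(h)

def gaussian_nll(mu, log_var, target):
    # Negative log-likelihood of target under N(mu, exp(log_var)),
    # up to an additive constant; gradient descent on this is the
    # maximum-likelihood training the abstract describes.
    return 0.5 * (log_var + (target - mu) ** 2 / log_var.exp()).mean()

model = MeanVarianceRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 50, 1)            # a batch of noisy sequences
optimizer.zero_grad()
mu, log_var = model(x[:, :-1])       # predict each next step
loss = gaussian_nll(mu, log_var, x[:, 1:])
loss.backward()
optimizer.step()
```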
{"title":"Learning to Reproduce Fluctuating Time Series by Inferring Their Time-Dependent Stochastic Properties: Application in Robot Learning Via Tutoring","authors":"Shingo Murata, Jun Namikawa, H. Arie, S. Sugano, J. Tani","doi":"10.1109/TAMD.2013.2258019","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2258019","url":null,"abstract":"This study proposes a novel type of dynamic neural network model that can learn to extract stochastic or fluctuating structures hidden in time series data. The network learns to predict not only the mean of the next input state, but also its time-dependent variance. The training method is based on maximum likelihood estimation by using the gradient descent method and the likelihood function is expressed as a function of the estimated variance. Regarding the model evaluation, we present numerical experiments in which training data were generated in different ways utilizing Gaussian noise. Our analysis showed that the network can predict the time-dependent variance and the mean and it can also reproduce the target stochastic sequence data by utilizing the estimated variance. Furthermore, it was shown that a humanoid robot using the proposed network can learn to reproduce latent stochastic structures hidden in fluctuating tutoring trajectories. This learning scheme is essential for the acquisition of sensory-guided skilled behavior.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"10 1","pages":"298-310"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2258019","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62761375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 62