Symbol Emergence in Robotics for Modeling Human-Agent Interaction

T. Nagai
{"title":"机器人学中用于人与智能体交互建模的符号涌现","authors":"T. Nagai","doi":"10.1145/3125739.3134522","DOIUrl":null,"url":null,"abstract":"Human intelligence is deeply dependent on its physical body, and its development requires interaction between its own body and surrounding environment including other agents. However, it is still an open problem that how we can integrate the low level motor control and the high level symbol manipulation system. One of our research goals in the area called \"symbol emergence in robotics\" is to build a computational model of human intelligence from the motor control to the high level symbol manipulation. In this talk, an unsupervised on-line learning algorithm, which uses a hierarchical Bayesian framework for categorizing multimodal sensory signals such as audio, visual, and haptic information by robots, is introduced at first. The robot uses its physical body to grasp and observe an object from various viewpoints as well as listen to the sound during the observation. The basic algorithm for intelligence is to categorize the collected multimodal data so that the robot can infer unobserved information better and we call the generated categorizes as multimodal concepts. The latter half of this talk discusses an integrated computational model of human intelligence from the motor control to the high level cognition. The core idea is to integrate the multimodal concepts and reinforcement learning. Furthermore, this talk attempts to model communication within the same framework since the self-other discrimination process can be seen as the multimodal categorization of sensory-motor signals.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"47 7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Symbol Emergence in Robotics for Modeling Human-Agent Interaction\",\"authors\":\"T. Nagai\",\"doi\":\"10.1145/3125739.3134522\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Human intelligence is deeply dependent on its physical body, and its development requires interaction between its own body and surrounding environment including other agents. However, it is still an open problem that how we can integrate the low level motor control and the high level symbol manipulation system. One of our research goals in the area called \\\"symbol emergence in robotics\\\" is to build a computational model of human intelligence from the motor control to the high level symbol manipulation. In this talk, an unsupervised on-line learning algorithm, which uses a hierarchical Bayesian framework for categorizing multimodal sensory signals such as audio, visual, and haptic information by robots, is introduced at first. The robot uses its physical body to grasp and observe an object from various viewpoints as well as listen to the sound during the observation. The basic algorithm for intelligence is to categorize the collected multimodal data so that the robot can infer unobserved information better and we call the generated categorizes as multimodal concepts. The latter half of this talk discusses an integrated computational model of human intelligence from the motor control to the high level cognition. The core idea is to integrate the multimodal concepts and reinforcement learning. 
Furthermore, this talk attempts to model communication within the same framework since the self-other discrimination process can be seen as the multimodal categorization of sensory-motor signals.\",\"PeriodicalId\":346669,\"journal\":{\"name\":\"Proceedings of the 5th International Conference on Human Agent Interaction\",\"volume\":\"47 7 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-10-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 5th International Conference on Human Agent Interaction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3125739.3134522\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 5th International Conference on Human Agent Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3125739.3134522","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Human intelligence is deeply dependent on its physical body, and its development requires interaction between that body and the surrounding environment, including other agents. However, it remains an open problem how to integrate low-level motor control with a high-level symbol manipulation system. One of our research goals in the area called "symbol emergence in robotics" is to build a computational model of human intelligence that spans motor control to high-level symbol manipulation. This talk first introduces an unsupervised online learning algorithm that uses a hierarchical Bayesian framework to let robots categorize multimodal sensory signals such as audio, visual, and haptic information. The robot uses its physical body to grasp an object, observe it from various viewpoints, and listen to its sound during the observation. The basic operation underlying intelligence is to categorize the collected multimodal data so that the robot can better infer unobserved information; we call the resulting categories multimodal concepts. The latter half of the talk discusses an integrated computational model of human intelligence from motor control to high-level cognition. The core idea is to integrate the multimodal concepts with reinforcement learning. Furthermore, the talk attempts to model communication within the same framework, since the self-other discrimination process can be seen as multimodal categorization of sensorimotor signals.
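
The abstract outlines the categorization step but not the model itself, so the following is only a minimal sketch of the general idea, under stated assumptions, rather than the talk's actual hierarchical Bayesian algorithm: objects described by per-modality bag-of-features histograms are assigned to latent categories by a flat Bayesian mixture with a Gibbs sampler, and the learned categories are then used to predict an unobserved modality from the observed ones. All names, sizes, and hyperparameters below are illustrative.

```python
# Minimal sketch of unsupervised multimodal categorization, NOT the algorithm
# presented in the talk. Each object is represented by per-modality count
# histograms; a Bayesian mixture assigns objects to latent categories via
# Gibbs sampling (with a posterior-mean approximation of the predictive), and
# the learned categories predict an unobserved modality from observed ones.
import numpy as np

rng = np.random.default_rng(0)

K = 5                    # number of latent categories (assumed)
ALPHA, BETA = 1.0, 0.1   # Dirichlet hyperparameters (assumed)
MODALITIES = {"visual": 20, "audio": 15, "haptic": 10}   # toy codebook sizes


def gibbs_categorize(data, n_iter=50):
    """data: list of dicts mapping modality name -> count histogram (1-D array)."""
    n = len(data)
    z = rng.integers(K, size=n)                       # category assignment per object
    nk = np.bincount(z, minlength=K).astype(float)    # objects per category
    counts = {m: np.zeros((K, d)) for m, d in MODALITIES.items()}
    for i, obj in enumerate(data):                    # accumulate per-category feature counts
        for m, hist in obj.items():
            counts[m][z[i]] += hist

    for _ in range(n_iter):
        for i, obj in enumerate(data):
            nk[z[i]] -= 1                             # remove object i from the statistics
            for m, hist in obj.items():
                counts[m][z[i]] -= hist
            logp = np.log(nk + ALPHA)                 # prior term from the other assignments
            for m, hist in obj.items():               # likelihood term per observed modality
                theta = (counts[m] + BETA) / (counts[m] + BETA).sum(axis=1, keepdims=True)
                logp += hist @ np.log(theta).T
            p = np.exp(logp - logp.max())
            z[i] = rng.choice(K, p=p / p.sum())       # resample the category
            nk[z[i]] += 1                             # add object i back
            for m, hist in obj.items():
                counts[m][z[i]] += hist
    return z, counts, nk


def predict_missing(obj_partial, counts, nk):
    """Infer an unobserved modality (e.g. haptic) from the observed ones."""
    logp = np.log(nk + ALPHA)
    for m, hist in obj_partial.items():
        theta = (counts[m] + BETA) / (counts[m] + BETA).sum(axis=1, keepdims=True)
        logp += hist @ np.log(theta).T
    post = np.exp(logp - logp.max())
    post /= post.sum()                                # posterior over categories
    missing = [m for m in MODALITIES if m not in obj_partial]
    return {m: post @ ((counts[m] + BETA) /
                       (counts[m] + BETA).sum(axis=1, keepdims=True))
            for m in missing}
```

A flat mixture like this captures only the categorization step; the talk's framework is hierarchical. The abstract's point is that such categories ("multimodal concepts") are what allow the robot to infer information it has not directly observed, which is what predict_missing illustrates.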
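
For the second half of the talk, integrating multimodal concepts with reinforcement learning, the abstract does not specify an architecture. The sketch below is one hedged reading of the idea: the posterior over learned concepts (e.g. from the categorizer above) serves as the agent's state representation, and a tabular Q-learning update is weighted by that posterior. The action set, hyperparameters, and coupling scheme are assumptions for illustration.

```python
# Minimal sketch of coupling multimodal concepts with reinforcement learning.
# This is an assumed reading of the abstract's "core idea", not the talk's
# architecture: the concept posterior is used as a soft discrete state and a
# tabular Q-learning update is weighted by that posterior.
import numpy as np

rng = np.random.default_rng(1)

K = 5                  # number of multimodal concepts (matches the sketch above)
N_ACTIONS = 4          # e.g. grasp, rotate, shake, release (illustrative)
GAMMA, LR, EPS = 0.95, 0.1, 0.1

Q = np.zeros((K, N_ACTIONS))


def select_action(concept_posterior):
    """Epsilon-greedy selection on the expected Q-value under the concept posterior."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(concept_posterior @ Q))


def update(posterior, action, reward, next_posterior):
    """Q-learning update in which the 'state' is a posterior over concepts."""
    q_sa = posterior @ Q[:, action]
    target = reward + GAMMA * np.max(next_posterior @ Q)
    Q[:, action] += LR * (target - q_sa) * posterior   # spread TD error over concepts
```

Under the same framing, the self-other discrimination mentioned at the end of the abstract would amount to adding proprioceptive and motor channels as additional modalities in the categorizer, so that "self" and "other" emerge as multimodal categories; this, again, is a reading of the abstract rather than a description of the presented model.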