{"title":"机器人学中用于人与智能体交互建模的符号涌现","authors":"T. Nagai","doi":"10.1145/3125739.3134522","DOIUrl":null,"url":null,"abstract":"Human intelligence is deeply dependent on its physical body, and its development requires interaction between its own body and surrounding environment including other agents. However, it is still an open problem that how we can integrate the low level motor control and the high level symbol manipulation system. One of our research goals in the area called \"symbol emergence in robotics\" is to build a computational model of human intelligence from the motor control to the high level symbol manipulation. In this talk, an unsupervised on-line learning algorithm, which uses a hierarchical Bayesian framework for categorizing multimodal sensory signals such as audio, visual, and haptic information by robots, is introduced at first. The robot uses its physical body to grasp and observe an object from various viewpoints as well as listen to the sound during the observation. The basic algorithm for intelligence is to categorize the collected multimodal data so that the robot can infer unobserved information better and we call the generated categorizes as multimodal concepts. The latter half of this talk discusses an integrated computational model of human intelligence from the motor control to the high level cognition. The core idea is to integrate the multimodal concepts and reinforcement learning. Furthermore, this talk attempts to model communication within the same framework since the self-other discrimination process can be seen as the multimodal categorization of sensory-motor signals.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"47 7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Symbol Emergence in Robotics for Modeling Human-Agent Interaction\",\"authors\":\"T. Nagai\",\"doi\":\"10.1145/3125739.3134522\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Human intelligence is deeply dependent on its physical body, and its development requires interaction between its own body and surrounding environment including other agents. However, it is still an open problem that how we can integrate the low level motor control and the high level symbol manipulation system. One of our research goals in the area called \\\"symbol emergence in robotics\\\" is to build a computational model of human intelligence from the motor control to the high level symbol manipulation. In this talk, an unsupervised on-line learning algorithm, which uses a hierarchical Bayesian framework for categorizing multimodal sensory signals such as audio, visual, and haptic information by robots, is introduced at first. The robot uses its physical body to grasp and observe an object from various viewpoints as well as listen to the sound during the observation. The basic algorithm for intelligence is to categorize the collected multimodal data so that the robot can infer unobserved information better and we call the generated categorizes as multimodal concepts. The latter half of this talk discusses an integrated computational model of human intelligence from the motor control to the high level cognition. The core idea is to integrate the multimodal concepts and reinforcement learning. 
Furthermore, this talk attempts to model communication within the same framework since the self-other discrimination process can be seen as the multimodal categorization of sensory-motor signals.\",\"PeriodicalId\":346669,\"journal\":{\"name\":\"Proceedings of the 5th International Conference on Human Agent Interaction\",\"volume\":\"47 7 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-10-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 5th International Conference on Human Agent Interaction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3125739.3134522\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 5th International Conference on Human Agent Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3125739.3134522","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Symbol Emergence in Robotics for Modeling Human-Agent Interaction
Human intelligence is deeply dependent on its physical body, and its development requires interaction between that body and the surrounding environment, including other agents. However, how to integrate low-level motor control with high-level symbol manipulation remains an open problem. One of our research goals in the area called "symbol emergence in robotics" is to build a computational model of human intelligence that spans motor control to high-level symbol manipulation. This talk first introduces an unsupervised online learning algorithm that uses a hierarchical Bayesian framework to let robots categorize multimodal sensory signals such as audio, visual, and haptic information. The robot uses its physical body to grasp and observe an object from various viewpoints, and it listens to the sounds produced during the observation. The basic strategy is to categorize the collected multimodal data so that the robot can better infer unobserved information; we call the resulting categories multimodal concepts. The latter half of the talk discusses an integrated computational model of human intelligence from motor control to high-level cognition. The core idea is to integrate the multimodal concepts with reinforcement learning. Furthermore, the talk attempts to model communication within the same framework, since the self-other discrimination process can be seen as multimodal categorization of sensorimotor signals.
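To make the categorization idea concrete, here is a minimal, self-contained sketch; it is an illustration only, not the hierarchical Bayesian model from the talk. It fits a diagonal-covariance Gaussian mixture over concatenated audio/visual/haptic feature vectors (all dimensions and data are synthetic assumptions) and then infers a missing modality from the learned categories, which corresponds to the "infer unobserved information" step described above.

```python
# Illustration only: NOT the hierarchical Bayesian model from the talk.
# A diagonal-covariance Gaussian mixture categorizes concatenated
# audio/visual/haptic features, then a missing modality is inferred
# from the learned categories. All data and dimensions are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "objects": 3 latent concepts emitting audio (4-D),
# visual (6-D), and haptic (3-D) features around concept-specific means.
D_AUDIO, D_VISION, D_HAPTIC = 4, 6, 3
D = D_AUDIO + D_VISION + D_HAPTIC
concept_means = rng.normal(0.0, 3.0, size=(3, D))
true_labels = rng.integers(0, 3, size=300)
X = concept_means[true_labels] + rng.normal(0.0, 0.5, size=(300, D))

# Unsupervised categorization over the concatenated modalities.
gmm = GaussianMixture(n_components=3, covariance_type="diag",
                      random_state=0).fit(X)

# Infer unobserved information: observe vision + haptics, predict audio.
x = X[0]
obs = np.arange(D_AUDIO, D)   # observed dimensions (vision + haptics)
mis = np.arange(0, D_AUDIO)   # missing dimensions (audio)

# Component responsibilities from the marginal over observed dims;
# diagonal covariance makes marginalization a simple index selection.
log_p = np.array([
    -0.5 * np.sum((x[obs] - gmm.means_[k, obs]) ** 2 / gmm.covariances_[k, obs]
                  + np.log(2 * np.pi * gmm.covariances_[k, obs]))
    for k in range(3)
]) + np.log(gmm.weights_)
resp = np.exp(log_p - log_p.max())
resp /= resp.sum()

# Impute the audio block as the responsibility-weighted component mean.
print("inferred audio:", np.round(resp @ gmm.means_[:, mis], 2))
print("actual audio:  ", np.round(x[mis], 2))
```

The mixture here stands in for the multimodal concepts: once a category is inferred from whatever modalities are available, its statistics supply predictions for the modalities that were not observed.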
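The integration of multimodal concepts with reinforcement learning is only named, not specified, in the abstract. As a hedged sketch of one possible reading, the snippet below feeds an inferred concept index into a tabular Q-learner as its discrete state; the environment, reward, and transitions are toy assumptions made for the example.

```python
# Hedged sketch only: one possible reading of "integrate multimodal
# concepts and reinforcement learning". The inferred concept index acts
# as the discrete state of a tabular Q-learner. The environment, reward,
# and transitions are toy assumptions, not anything from the talk.
import numpy as np

rng = np.random.default_rng(1)
N_CONCEPTS, N_ACTIONS = 3, 2
Q = np.zeros((N_CONCEPTS, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(concept: int, action: int) -> float:
    # Toy rule: each concept has one "correct" action (an assumption).
    return 1.0 if action == concept % N_ACTIONS else 0.0

state = int(rng.integers(N_CONCEPTS))  # stand-in for the categorizer's output
for _ in range(5000):
    a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(Q[state]))
    r = reward(state, a)
    next_state = int(rng.integers(N_CONCEPTS))  # next inferred concept (synthetic)
    Q[state, a] += alpha * (r + gamma * Q[next_state].max() - Q[state, a])
    state = next_state

print(np.round(Q, 2))  # each row's argmax should match that concept's correct action
```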