{"title":"运动触发的人-机器人同步自主获取联合注意","authors":"H. Sumioka, K. Hosoda, Y. Yoshikawa, M. Asada","doi":"10.1109/DEVLRN.2005.1490980","DOIUrl":null,"url":null,"abstract":"Joint attention, a behavior to attend to an object to which another person attends, is an important element not only for human-human communication but also human-robot communication. Building a robot that autonomously acquires the behavior is supposed to be a formidable issue both to establish the design principle of a robot communicating with humans and to understand the developmental process of human communication. To accelerate learning of the behavior, the motion synchronization among the object, the caregiver, and the robot is important since it ensures the information consistency between them. In this paper, we propose a control architecture to utilize the motion information for synchronization necessary to find the consistency. The task given for the caregiver is to pick up an object on the table and to investigate it with his/her hands, which is a quite natural task for humans. If only the caregiver can move the objects in the environment, the observed motion is that of the caregiver's face and/or that of the object moved by him/her. When the caregiver is looking around to find an interesting object, the image flow of the face is observed. After he/she fixates the object and picks it up, the flow of the face stops and that of the object is observed","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Motion-triggered human-robot synchronization for autonomous acquisition of joint attention\",\"authors\":\"H. Sumioka, K. Hosoda, Y. Yoshikawa, M. Asada\",\"doi\":\"10.1109/DEVLRN.2005.1490980\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Joint attention, a behavior to attend to an object to which another person attends, is an important element not only for human-human communication but also human-robot communication. Building a robot that autonomously acquires the behavior is supposed to be a formidable issue both to establish the design principle of a robot communicating with humans and to understand the developmental process of human communication. To accelerate learning of the behavior, the motion synchronization among the object, the caregiver, and the robot is important since it ensures the information consistency between them. In this paper, we propose a control architecture to utilize the motion information for synchronization necessary to find the consistency. The task given for the caregiver is to pick up an object on the table and to investigate it with his/her hands, which is a quite natural task for humans. If only the caregiver can move the objects in the environment, the observed motion is that of the caregiver's face and/or that of the object moved by him/her. When the caregiver is looking around to find an interesting object, the image flow of the face is observed. After he/she fixates the object and picks it up, the flow of the face stops and that of the object is observed\",\"PeriodicalId\":297121,\"journal\":{\"name\":\"Proceedings. 
The 4nd International Conference on Development and Learning, 2005.\",\"volume\":\"37 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2005-07-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings. The 4nd International Conference on Development and Learning, 2005.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DEVLRN.2005.1490980\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DEVLRN.2005.1490980","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Motion-triggered human-robot synchronization for autonomous acquisition of joint attention
Joint attention, the behavior of attending to an object to which another person is attending, is an important element not only in human-human communication but also in human-robot communication. Building a robot that autonomously acquires this behavior is a formidable challenge, both for establishing design principles for robots that communicate with humans and for understanding the developmental process of human communication. To accelerate learning of the behavior, motion synchronization among the object, the caregiver, and the robot is important, since it ensures informational consistency between them. In this paper, we propose a control architecture that uses motion information to achieve the synchronization needed to find this consistency. The task given to the caregiver is to pick up an object on the table and examine it with his/her hands, which is a quite natural task for humans. If only the caregiver can move the objects in the environment, the observed motion is that of the caregiver's face and/or that of the object he/she moves. When the caregiver is looking around for an interesting object, image flow is observed on the face. Once he/she fixates on the object and picks it up, the flow on the face stops and flow is observed on the object.
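As a rough illustration of the motion cue described above, the sketch below (not taken from the paper; the flow-extraction step, thresholds, and function names are assumptions) checks whether image flow on the face region has stopped while flow on the object region is present, which the abstract treats as the trigger for attending to the object the caregiver has picked up.

import numpy as np

# Hypothetical illustration only: the thresholds and the shapes of the flow
# fields are assumed values, not parameters reported in the paper.
FACE_STILL_THRESH = 0.5    # mean flow magnitude below which the face counts as still
OBJECT_MOVE_THRESH = 1.0   # mean flow magnitude above which the object counts as moving

def mean_flow_magnitude(flow):
    """Mean magnitude of a dense optical-flow field of shape (H, W, 2)."""
    return float(np.mean(np.linalg.norm(flow, axis=-1)))

def is_fixation_event(face_flow, object_flow):
    """Return True when the face has stopped moving while the object is moving,
    i.e. the motion-triggered cue for sharing attention on the object."""
    face_still = mean_flow_magnitude(face_flow) < FACE_STILL_THRESH
    object_moving = mean_flow_magnitude(object_flow) > OBJECT_MOVE_THRESH
    return face_still and object_moving

# Example with synthetic flow fields: a still face region and a moving object region.
face_flow = np.zeros((32, 32, 2))                 # no motion in the face region
object_flow = np.full((32, 32, 2), 1.5)           # uniform motion in the object region
print(is_fixation_event(face_flow, object_flow))  # True -> attend to the object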