The Role of Motion Information in Learning Human-Robot Joint Attention

Y. Nagai
Proceedings of the 2005 IEEE International Conference on Robotics and Automation
DOI: 10.1109/ROBOT.2005.1570418
Published: 2005-04-18
Citations: 28

Abstract

To realize natural human-robot interaction and to investigate the developmental mechanism of human communication, an effective approach is to construct models by which a robot imitates human cognitive functions. Building on the knowledge that humans exploit the motion information in others' actions, this paper presents a learning model that enables a robot to acquire the ability to establish joint attention with a human by utilizing both static and motion information. As motion information, the robot uses the optical flow detected while observing a human who shifts his or her gaze from the robot to another object. As static information, it extracts an edge image of the human's face while he or she gazes at that object. The two kinds of information are complementary: the static information gives the exact direction of gaze but is difficult to interpret, whereas the motion information provides a rough but easily interpretable relationship between the direction of the gaze shift and the motor output needed to follow it. A learning model that combines both cues, acquired by observing a human's gaze shifts, enables the robot to acquire joint attention efficiently and to interact naturally with the human. Experimental results show that the motion information accelerates the learning of joint attention, while the static information improves task performance. The results are discussed by analogy with cognitive development in human infants.