Self-Other Motion Equivalence Learning for Head Movement Imitation

Y. Nagai
{"title":"头部动作模仿的自我-他人动作等价学习","authors":"Y. Nagai","doi":"10.1109/DEVLRN.2005.1490958","DOIUrl":null,"url":null,"abstract":"Summary form only given. This paper presents a learning model for head movement imitation using motion equivalence between the actions of the self and the actions of another person. Human infants can imitate head and facial movements presented by adults. An open question regarding the imitation ability of infants is what equivalence between themselves and other infants utilize to imitate actions presented by adults (Meltzolf and Moore, 1997). A self-produced head movement or facial movement cannot be perceived in the same modality that the action of another is perceived. Some researchers have developed robotic models to imitate human head movement. However, their models used human posture data that cannot be detected by robots, and/or the relationships between the actions of humans and robots were fully defined by the designers. The model presented here enables a robot to learn self-other equivalence to imitate human head movement by using only self-detected sensor information. On the basis of the evidence that infants more imitate actions when they observed the actions with movement rather than without movement my model utilizes motion information about actions. The motion of a self-produced action, which is detected by the robot's somatic sensors, is represented as angular displacement vectors of the robot's head. The motion of a human action is detected as optical flow in the robot's visual perception when the robot gazes at a human face. By using these representations, a robot learns self-other motion equivalence for head movement imitation through the experiences of visually tracking a human face. In face-to-face interactions as shown, the robot first looks at the person's face as an interesting target and detects optical flow in its camera image when the person turns her head to one side. The article also shows the optical flow detected when the person turned her head from the center to the robot's left. Then, the ability visually to track a human face enables the robot to turn its head into the same direction as the person because the position of the person's lace moves in the camera image. This also shows the robot's movement vectors detected when it turned its head to the left side by tracking the person's face, in which the lines in the circles denote the angular displacement vectors in the eight motion directions. As a result, the robot finds that the self-movement vectors are activated in the same motion directions as the optical flow of the human head movement. This self-other motion equivalence is acquired through Hebbian learning. Experiments using the robot shown verified that the model enabled the robot to acquire the motion equivalence between itself and a human within a few minutes of online learning. The robot was able to imitate human head movement by using the acquired sensorimotor mapping. This imitation ability could lead to the development of joint visual attention by using an object as a target to be attended (Nagai, 2005)","PeriodicalId":297121,"journal":{"name":"Proceedings. 
The 4nd International Conference on Development and Learning, 2005.","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Self-Other Motion Equivalence Learning for Head Movement Imitation\",\"authors\":\"Y. Nagai\",\"doi\":\"10.1109/DEVLRN.2005.1490958\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Summary form only given. This paper presents a learning model for head movement imitation using motion equivalence between the actions of the self and the actions of another person. Human infants can imitate head and facial movements presented by adults. An open question regarding the imitation ability of infants is what equivalence between themselves and other infants utilize to imitate actions presented by adults (Meltzolf and Moore, 1997). A self-produced head movement or facial movement cannot be perceived in the same modality that the action of another is perceived. Some researchers have developed robotic models to imitate human head movement. However, their models used human posture data that cannot be detected by robots, and/or the relationships between the actions of humans and robots were fully defined by the designers. The model presented here enables a robot to learn self-other equivalence to imitate human head movement by using only self-detected sensor information. On the basis of the evidence that infants more imitate actions when they observed the actions with movement rather than without movement my model utilizes motion information about actions. The motion of a self-produced action, which is detected by the robot's somatic sensors, is represented as angular displacement vectors of the robot's head. The motion of a human action is detected as optical flow in the robot's visual perception when the robot gazes at a human face. By using these representations, a robot learns self-other motion equivalence for head movement imitation through the experiences of visually tracking a human face. In face-to-face interactions as shown, the robot first looks at the person's face as an interesting target and detects optical flow in its camera image when the person turns her head to one side. The article also shows the optical flow detected when the person turned her head from the center to the robot's left. Then, the ability visually to track a human face enables the robot to turn its head into the same direction as the person because the position of the person's lace moves in the camera image. This also shows the robot's movement vectors detected when it turned its head to the left side by tracking the person's face, in which the lines in the circles denote the angular displacement vectors in the eight motion directions. As a result, the robot finds that the self-movement vectors are activated in the same motion directions as the optical flow of the human head movement. This self-other motion equivalence is acquired through Hebbian learning. Experiments using the robot shown verified that the model enabled the robot to acquire the motion equivalence between itself and a human within a few minutes of online learning. The robot was able to imitate human head movement by using the acquired sensorimotor mapping. This imitation ability could lead to the development of joint visual attention by using an object as a target to be attended (Nagai, 2005)\",\"PeriodicalId\":297121,\"journal\":{\"name\":\"Proceedings. 
The 4nd International Conference on Development and Learning, 2005.\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2005-07-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings. The 4nd International Conference on Development and Learning, 2005.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DEVLRN.2005.1490958\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DEVLRN.2005.1490958","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Summary form only given. This paper presents a learning model for head movement imitation based on motion equivalence between the actions of the self and the actions of another person. Human infants can imitate head and facial movements presented by adults. An open question regarding this ability is what equivalence between self and other infants exploit to imitate actions presented by adults (Meltzoff and Moore, 1997): a self-produced head or facial movement cannot be perceived in the same modality in which another person's action is perceived. Some researchers have developed robotic models that imitate human head movement, but their models relied on human posture data that robots cannot detect by themselves, and/or on relationships between human and robot actions fully defined by the designers. The model presented here enables a robot to learn self-other equivalence for imitating human head movement using only self-detected sensor information. On the basis of evidence that infants imitate actions more readily when they observe them with movement than without, the model utilizes motion information about actions. The motion of a self-produced action, detected by the robot's somatic sensors, is represented as angular displacement vectors of the robot's head. The motion of a human action is detected as optical flow in the robot's visual perception while the robot gazes at the human's face. Using these representations, the robot learns self-other motion equivalence for head movement imitation through the experience of visually tracking a human face. In face-to-face interaction, the robot first looks at the person's face as an interesting target and detects optical flow in its camera image when the person turns her head to one side, for example from the center to the robot's left. The ability to visually track the face then makes the robot turn its head in the same direction as the person, because the position of the person's face moves in the camera image; the resulting self-movement is detected as angular displacement vectors over eight motion directions. The robot thus finds that its self-movement vectors are activated in the same motion directions as the optical flow of the human head movement, and this self-other motion equivalence is acquired through Hebbian learning. Experiments with the robot verified that the model enabled it to acquire the motion equivalence between itself and a human within a few minutes of online learning, and the robot was able to imitate human head movement using the acquired sensorimotor mapping. This imitation ability could lead to the development of joint visual attention, using an object as a target to be attended (Nagai, 2005).
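
The core mechanism described above — a Hebbian association between the direction of observed optical flow and the direction of the robot's own head movement, each quantized into eight directions — can be illustrated with a short sketch. This is a minimal reconstruction from the abstract, not the author's implementation: the learning rate and the `quantize_direction` helper are assumptions added for the example.

```python
import numpy as np

N_DIRS = 8  # motion is quantized into eight directions, as in the abstract


def quantize_direction(dx, dy, n=N_DIRS):
    """Map a 2-D motion vector to a one-hot activation over n directions.

    Hypothetical helper: the abstract does not specify how directions
    are binned, so a uniform angular quantization is assumed here.
    """
    angle = np.arctan2(dy, dx) % (2 * np.pi)
    v = np.zeros(n)
    v[int(round(angle / (2 * np.pi / n))) % n] = 1.0
    return v


# Weights associating observed flow directions (other) with
# angular-displacement directions of the robot's head (self).
W = np.zeros((N_DIRS, N_DIRS))


def hebbian_update(self_vec, flow_vec, lr=0.1):
    """Strengthen weights between co-active self and other motion directions."""
    global W
    W += lr * np.outer(self_vec, flow_vec)


def imitate(flow_vec):
    """After learning, read out the self-movement direction for observed flow."""
    return int(np.argmax(W @ flow_vec))
```

Because face tracking makes the robot's head move in the same direction as the optical flow it observes, the co-occurring activations fall on the diagonal of `W`, which is exactly the self-other motion equivalence the model acquires.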
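
The "other" side of the mapping — optical flow detected while gazing at the person's face — could be summarized as a single dominant motion vector per frame pair. Below is a sketch using OpenCV's Farneback dense flow; the choice of flow algorithm is an assumption, since the abstract does not name one.

```python
import cv2
import numpy as np


def flow_direction(prev_gray, curr_gray):
    """Estimate the dominant image motion between two grayscale frames.

    Returns (dx, dy), suitable for quantize_direction() above.
    Farneback dense flow is an illustrative choice, not the paper's method.
    """
    # args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(flow[..., 0].mean()), float(flow[..., 1].mean())
```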
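
Putting the pieces together: the few minutes of online learning reported in the abstract amount to repeated co-occurrences of same-direction flow and self-movement during face tracking. The following synthetic check (continuing the sketches above, with the interaction abstracted to 200 same-direction events) shows the diagonal of `W` being strengthened until the imitation readout reproduces every observed direction.

```python
rng = np.random.default_rng(0)
for _ in range(200):  # interaction events, standing in for minutes of tracking
    angle = rng.uniform(0, 2 * np.pi)      # the person turns her head
    dx, dy = np.cos(angle), np.sin(angle)
    flow_vec = quantize_direction(dx, dy)  # observed optical flow (other)
    self_vec = quantize_direction(dx, dy)  # tracking turns the head the same way (self)
    hebbian_update(self_vec, flow_vec)

# After learning, observed flow maps back to the same movement direction.
for a in np.linspace(0.0, 2 * np.pi, 16, endpoint=False):
    v = quantize_direction(np.cos(a), np.sin(a))
    assert imitate(v) == int(np.argmax(v))
```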