A Trained Humanoid Robot can Perform Human-Like Crossmodal Social Attention and Conflict Resolution

IF 3.8 · CAS Tier 2 (Computer Science) · JCR Q2 (Robotics) · International Journal of Social Robotics · Pub Date: 2023-04-02 · DOI: 10.1007/s12369-023-00993-3
Di Fu, Fares Abawi, Hugo Carneiro, Matthias Kerzel, Ziwei Chen, Erik Strahl, Xun Liu, Stefan Wermter
{"title":"经过训练的仿人机器人可以进行类似人类的跨模态社会关注和冲突解决。","authors":"Di Fu,&nbsp;Fares Abawi,&nbsp;Hugo Carneiro,&nbsp;Matthias Kerzel,&nbsp;Ziwei Chen,&nbsp;Erik Strahl,&nbsp;Xun Liu,&nbsp;Stefan Wermter","doi":"10.1007/s12369-023-00993-3","DOIUrl":null,"url":null,"abstract":"<p><p>To enhance human-robot social interaction, it is essential for robots to process multiple social cues in a complex real-world environment. However, incongruency of input information across modalities is inevitable and could be challenging for robots to process. To tackle this challenge, our study adopted the neurorobotic paradigm of crossmodal conflict resolution to make a robot express human-like social attention. A behavioural experiment was conducted on 37 participants for the human study. We designed a round-table meeting scenario with three animated avatars to improve ecological validity. Each avatar wore a medical mask to obscure the facial cues of the nose, mouth, and jaw. The central avatar shifted its eye gaze while the peripheral avatars generated sound. Gaze direction and sound locations were either spatially congruent or incongruent. We observed that the central avatar's dynamic gaze could trigger crossmodal social attention responses. In particular, human performance was better under the congruent audio-visual condition than the incongruent condition. Our saliency prediction model was trained to detect social cues, predict audio-visual saliency, and attend selectively for the robot study. After mounting the trained model on the iCub, the robot was exposed to laboratory conditions similar to the human experiment. 
While the human performance was overall superior, our trained model demonstrated that it could replicate attention responses similar to humans.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":" ","pages":"1-16"},"PeriodicalIF":3.8000,"publicationDate":"2023-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10067521/pdf/","citationCount":"2","resultStr":"{\"title\":\"A Trained Humanoid Robot can Perform Human-Like Crossmodal Social Attention and Conflict Resolution.\",\"authors\":\"Di Fu,&nbsp;Fares Abawi,&nbsp;Hugo Carneiro,&nbsp;Matthias Kerzel,&nbsp;Ziwei Chen,&nbsp;Erik Strahl,&nbsp;Xun Liu,&nbsp;Stefan Wermter\",\"doi\":\"10.1007/s12369-023-00993-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>To enhance human-robot social interaction, it is essential for robots to process multiple social cues in a complex real-world environment. However, incongruency of input information across modalities is inevitable and could be challenging for robots to process. To tackle this challenge, our study adopted the neurorobotic paradigm of crossmodal conflict resolution to make a robot express human-like social attention. A behavioural experiment was conducted on 37 participants for the human study. We designed a round-table meeting scenario with three animated avatars to improve ecological validity. Each avatar wore a medical mask to obscure the facial cues of the nose, mouth, and jaw. The central avatar shifted its eye gaze while the peripheral avatars generated sound. Gaze direction and sound locations were either spatially congruent or incongruent. We observed that the central avatar's dynamic gaze could trigger crossmodal social attention responses. In particular, human performance was better under the congruent audio-visual condition than the incongruent condition. 
Our saliency prediction model was trained to detect social cues, predict audio-visual saliency, and attend selectively for the robot study. After mounting the trained model on the iCub, the robot was exposed to laboratory conditions similar to the human experiment. While the human performance was overall superior, our trained model demonstrated that it could replicate attention responses similar to humans.</p>\",\"PeriodicalId\":14361,\"journal\":{\"name\":\"International Journal of Social Robotics\",\"volume\":\" \",\"pages\":\"1-16\"},\"PeriodicalIF\":3.8000,\"publicationDate\":\"2023-04-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10067521/pdf/\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Social Robotics\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s12369-023-00993-3\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ROBOTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Social Robotics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s12369-023-00993-3","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ROBOTICS","Score":null,"Total":0}
Citations: 2

Abstract


Graphical abstract (figure not reproduced)


To enhance human-robot social interaction, it is essential for robots to process multiple social cues in a complex real-world environment. However, incongruency of input information across modalities is inevitable and could be challenging for robots to process. To tackle this challenge, our study adopted the neurorobotic paradigm of crossmodal conflict resolution to make a robot express human-like social attention. A behavioural experiment was conducted on 37 participants for the human study. We designed a round-table meeting scenario with three animated avatars to improve ecological validity. Each avatar wore a medical mask to obscure the facial cues of the nose, mouth, and jaw. The central avatar shifted its eye gaze while the peripheral avatars generated sound. Gaze direction and sound locations were either spatially congruent or incongruent. We observed that the central avatar's dynamic gaze could trigger crossmodal social attention responses. In particular, human performance was better under the congruent audio-visual condition than the incongruent condition. Our saliency prediction model was trained to detect social cues, predict audio-visual saliency, and attend selectively for the robot study. After mounting the trained model on the iCub, the robot was exposed to laboratory conditions similar to the human experiment. While the human performance was overall superior, our trained model demonstrated that it could replicate attention responses similar to humans.
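The abstract contrasts spatially congruent and incongruent audio-visual cues: gaze direction and sound location either point to the same place or to different places, and localisation is easier when they agree. The toy sketch below illustrates that idea with an invented weighted-softmax fusion over three seating positions; it is not the authors' trained saliency model, and all function names, weights, and cue values here are assumptions made purely for illustration.

```python
import math

# Three discrete locations, loosely matching the round-table avatar setup.
LOCATIONS = ["left", "centre", "right"]

def softmax(scores):
    """Normalise raw scores into a probability-like saliency map."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fused_saliency(gaze_cue, sound_cue, w_vis=1.0, w_aud=1.0):
    """Combine a visual (gaze-direction) cue and an auditory cue,
    each given as one score per location, into one saliency map.
    The linear weighting is an invented stand-in for a learned fusion."""
    combined = [w_vis * g + w_aud * s for g, s in zip(gaze_cue, sound_cue)]
    return softmax(combined)

# Congruent condition: gaze and sound both favour 'left',
# so the fused map has a single sharp peak there.
congruent = fused_saliency([2.0, 0.0, 0.0], [2.0, 0.0, 0.0])

# Incongruent condition: gaze favours 'left' but sound favours 'right',
# so the cues conflict and the fused map is flatter.
incongruent = fused_saliency([2.0, 0.0, 0.0], [0.0, 0.0, 2.0])

print(LOCATIONS[congruent.index(max(congruent))])
print(max(congruent) > max(incongruent))
```

Under this toy fusion, the congruent condition yields a sharper, more confident peak than the incongruent one, mirroring (in a cartoon way) the behavioural advantage the study reports for congruent audio-visual trials.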

Source journal metrics: CiteScore 9.80 · Self-citation rate 8.50% · Articles per year: 95
About the journal: Social Robotics is the study of robots that are able to interact and communicate among themselves, with humans, and with the environment, within the social and cultural structure attached to their role. The journal covers a broad spectrum of topics related to the latest technologies, new research results, and developments in social robotics on all levels, from core enabling technologies to system integration, aesthetic design, applications, and social implications. It provides a platform for researchers to present their findings and latest developments in social robotics, covering relevant advances in engineering, computing, the arts, and the social sciences. The journal publishes original, peer-reviewed articles on innovative ideas and concepts, new discoveries and improvements, and novel applications by leading researchers and developers: fundamental advances in the core technologies that form the backbone of social robotics, distinguished developmental projects in the area, and seminal works in aesthetic design, ethics and philosophy, and studies of social impact and influence pertaining to social robotics.