
Latest articles in ACM Transactions on Human-Robot Interaction

Data-Driven Communicative Behaviour Generation: A Survey
IF 5.1 Q2 ROBOTICS Pub Date : 2023-08-16 DOI: 10.1145/3609235
Nurziya Oralbayeva, A. Aly, A. Sandygulova, Tony Belpaeme
The development of data-driven behaviour-generating systems has recently become the focus of considerable attention in the fields of human-agent interaction (HAI) and human-robot interaction (HRI). Although rule-based approaches were dominant for years, these proved inflexible and expensive to develop. The difficulty of writing production rules, as well as the need for manual configuration to generate artificial behaviours, limits how complex and diverse rule-based behaviours can be. In contrast, actual human-human interaction data collected using tracking and recording devices makes human-like multimodal co-speech behaviour generation possible using machine learning and, in recent years, deep learning in particular. This survey provides an overview of the state of the art in deep learning-based co-speech behaviour generation models and offers an outlook for future research in this area.
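The survey's contrast between rule-based and data-driven generation can be illustrated with a deliberately tiny sketch (my construction, not from the survey): instead of hand-writing a production rule, a mapping from a speech feature to a gesture parameter is fit from recorded interaction data.

```python
# Illustrative sketch: learn a per-frame mapping from a speech feature
# (frame energy) to a gesture parameter (arm swing amplitude) by 1-D
# least squares on recorded data. The data values are hypothetical.

def fit_linear(x, y):
    """Closed-form least squares for y ~ a*x + b."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

# Hypothetical training pairs: speech energy -> observed gesture amplitude.
energy = [0.1, 0.4, 0.5, 0.9]
amplitude = [0.2, 0.8, 1.0, 1.8]

a, b = fit_linear(energy, amplitude)

def generate_gesture(frame_energy):
    """Predict a gesture amplitude for a new speech frame."""
    return a * frame_energy + b

print(generate_gesture(0.5))
```

Real systems replace the linear fit with a deep network over multimodal features, but the supervised structure (human data in, behaviour parameters out) is the same.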
Citations: 0
New Design Potentials of Non-mimetic Sonification in Human-Robot Interaction
IF 5.1 Q2 ROBOTICS Pub Date : 2023-08-01 DOI: 10.1145/3611646
Elias Naphausen, Andreas Muxel, J. Willmann
With the increasing use and complexity of robotic devices, the requirements for the design of human-robot interfaces are rapidly changing and call for new means of interaction and information transfer. In that scope, the discussed project – developed by the Hybrid Things Lab at the University of Applied Sciences Augsburg and the Design Research Lab at Bauhaus-Universität Weimar – takes a first step in characterizing a novel field of research, exploring the design potentials of non-mimetic sonification in the context of human-robot interaction (HRI). The setup features an industrial 7-axis manipulator and collects multiple streams of information during manipulation (for instance, end-effector position, joint positions, and forces); these data sets are used to create a novel augmented audible presence and thus allow new forms of interaction. As such, this paper considers (1) research parameters for non-mimetic sonification (such as pitch, volume, and timbre); (2) a comprehensive empirical pursuit, including setup, exploration, and validation; and (3) the overall implications of integrating these findings into a unifying human-robot interaction process. The relation between machinic and auditory dimensionality is of particular concern.
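A hedged illustration of the general idea (signal names, normalization, and frequency range are my assumptions, not the project's): robot state variables are mapped to the sound parameters the paper lists, such as pitch, volume, and timbre.

```python
# Sketch of a non-mimetic sonification mapping. Inputs are assumed to be
# normalized to [0, 1]; the parameter choices are illustrative only.

def sonify(end_effector_z, joint_velocity, force):
    """Map normalized robot signals to (pitch in Hz, volume, brightness)."""
    pitch_hz = 220.0 + end_effector_z * (880.0 - 220.0)  # height -> pitch
    volume = min(1.0, joint_velocity)                    # speed  -> loudness
    brightness = min(1.0, force)                         # force  -> timbre
    return pitch_hz, volume, brightness

print(sonify(0.5, 0.3, 0.7))
```

A synthesizer would then render these three parameters continuously as the manipulator moves.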
Citations: 0
Stochastic-Skill-Level-Based Shared Control for Human Training in Urban Air Mobility Scenario
IF 5.1 Q2 ROBOTICS Pub Date : 2023-06-06 DOI: 10.1145/3603194
Sooyung Byeon, Joonwon Choi, Yutong Zhang, Inseok Hwang
This paper proposes a novel stochastic-skill-level-based shared control framework to assist human novices in emulating human experts in complex dynamic control tasks. The proposed framework aims to infer the stochastic skill levels (SSLs) of human novices and provide personalized assistance based on the inferred SSLs. An SSL can be assessed as a stochastic variable that denotes the probability that the novice will behave similarly to experts. We propose a data-driven method that characterizes novice demonstrations as a novice model and expert demonstrations as an expert model. Our SSL inference approach then utilizes the novice and expert models to assess the SSL of the novices in complex dynamic control tasks. The shared control scheme dynamically adjusts the level of assistance based on the inferred SSL, to prevent the frustration or tedium that poorly calibrated assistance can cause during human training. The proposed framework is demonstrated in a human subject experiment in a training scenario for a remotely piloted urban air mobility (UAM) vehicle. The results show that the proposed framework can assess the SSL and tailor the assistance for an individual in real time. The proposed framework is compared to practice-only training (no assistance) and a baseline shared control approach to test human learning rates in the designed training scenario with human subjects. A subjective survey is also examined to monitor the user experience of the proposed framework.
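A minimal sketch of how an inferred skill level could modulate assistance (an assumed convex-blending form for illustration, not necessarily the paper's exact control law):

```python
# Shared control as a convex blend of the novice command and an
# expert/autopilot command, weighted by the inferred stochastic skill
# level ssl in [0, 1]. Higher skill -> more authority to the novice.

def shared_control(u_novice, u_expert, ssl):
    """Return the blended control input applied to the vehicle."""
    return ssl * u_novice + (1.0 - ssl) * u_expert

# A low-skill novice (ssl = 0.25) is pulled toward the expert command.
print(shared_control(u_novice=1.0, u_expert=0.2, ssl=0.25))
```

Because ssl is a probability, the blend degrades gracefully: a fully skilled operator (ssl = 1) receives no correction at all.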
Citations: 0
Introduction to the Special Issue on “Designing the Robot Body: Critical Perspectives on Affective Embodied Interaction”
IF 5.1 Q2 ROBOTICS Pub Date : 2023-05-17 DOI: 10.1145/3594713
M. Paterson, G. Hoffman, C. Zheng
Citations: 0
Affective Corners as a Problematic for Design Interactions
IF 5.1 Q2 ROBOTICS Pub Date : 2023-05-15 DOI: 10.1145/3596452
Katherine M. Harrison, Ericka Johnson
Domestic robots are already commonplace in many homes, while humanoid companion robots like Pepper are increasingly becoming part of different kinds of care work. Drawing on fieldwork at a robotics lab, as well as our personal encounters with domestic robots, we use the metaphor of “hard-to-reach corners” to explore the socio-technical limitations of companion robots and our differing abilities to respond to these limitations. This paper presents “hard-to-reach corners” as a problematic for design interaction, offering them as an opportunity for thinking about context and intersectional aspects of adaptation.
Citations: 2
The Sound of Swarm. Auditory Description of Swarm Robotic Movements
IF 5.1 Q2 ROBOTICS Pub Date : 2023-05-04 DOI: 10.1145/3596203
Maria Mannone, V. Seidita, A. Chella
Movements of robots in a swarm can be mapped to sounds, highlighting the group behavior through coordinated and simultaneous variations of musical parameters across time. The reverse is also possible: sound parameters can be mapped to robotic motion parameters, giving instructions through sound. In this article, we first develop a theoretical framework relating musical parameters such as pitch, timbre, loudness, and articulation (at each point in time) to robotic parameters such as position, identity, motor status, and sensor status. We propose a definition of musical spaces as Hilbert spaces, and of musical paths between parameters as elements of bigroupoids, generalizing existing conceptions of musical spaces. The use of Hilbert spaces allows us to build quantum representations of musical states, inheriting quantum computing resources already used for robotic swarms. We present the theoretical framework and then some case studies as toy examples. In particular, we discuss a 2D video and matrix simulation with two robo-caterpillars; a 2D simulation of 10 robo-ants with Webots; and a 3D simulation of three robo-fish in an underwater search-and-rescue mission.
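A toy version of the movement-to-sound direction (my construction; the workspace ranges and MIDI scaling are assumptions) maps each robot's 2D position to a pitch and a stereo pan, so coordinated motion becomes coordinated sound:

```python
# Sketch: sonify a swarm snapshot. x controls stereo pan (left-right),
# y controls pitch over a three-octave MIDI range (C3 = 48 .. C6 = 84).

def swarm_to_sound(positions, x_range=(0.0, 10.0), y_range=(0.0, 10.0)):
    """positions: list of (x, y); returns list of (midi_pitch, pan)."""
    notes = []
    for x, y in positions:
        pan = (x - x_range[0]) / (x_range[1] - x_range[0])
        pitch = 48 + round((y - y_range[0]) / (y_range[1] - y_range[0]) * 36)
        notes.append((pitch, pan))
    return notes

print(swarm_to_sound([(0.0, 0.0), (5.0, 5.0), (10.0, 10.0)]))
```

When the whole swarm drifts upward together, every voice rises in parallel, which is exactly the kind of group-level audible pattern the article targets.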
Citations: 0
It Takes Two: using Co-creation to Facilitate Child-Robot Co-regulation
IF 5.1 Q2 ROBOTICS Pub Date : 2023-05-02 DOI: 10.1145/3593812
M. Ligthart, Mark Antonius Neerincx, K. Hindriks
While interacting with a social robot, children need to express themselves and have their expressions acknowledged by the robot, a need that often goes unaddressed because of the robot's limitations in understanding the expressions of children. To keep the child-robot interaction manageable, the robot takes control, undermining children's ability to co-regulate the interaction. Co-regulation is important for a fulfilling social interaction. We developed a co-creation activity that aims to facilitate more co-regulation: children create sound effects, gestures, and light animations for the robot to use during their conversation. A crucial additional feature is that children can coordinate their involvement in the co-creation process. Results from a user study (N = 59 school children, 7-11 y.o.) showed that the co-creation activity successfully facilitated co-regulation by improving children's agency. It also positively affected the acceptance of the robot. We furthermore identified five distinct profiles detailing the different needs and motivations children have for the level of involvement they chose during the co-creation process.
Citations: 0
A Computational Model of Coupled Human Trust and Self-confidence Dynamics
IF 5.1 Q2 ROBOTICS Pub Date : 2023-04-27 DOI: 10.1145/3594715
Katherine J. Williams, Madeleine S. Yuh, Neera Jain
Autonomous systems that can assist humans with increasingly complex tasks are becoming ubiquitous. Moreover, it has been established that a human’s decision to rely on such systems is a function of both their trust in the system and their own self-confidence as it relates to executing the task of interest. Given that both under- and over-reliance on automation can pose significant risks to humans, there is motivation for developing autonomous systems that could appropriately calibrate a human’s trust or self-confidence to achieve proper reliance behavior. In this article, a computational model of coupled human trust and self-confidence dynamics is proposed. The dynamics are modeled as a partially observable Markov decision process without a reward function (POMDP/R) that leverages behavioral and self-report data as observations for estimation of these cognitive states. The model is trained and validated using data collected from 340 participants. Analysis of the transition probabilities shows that the proposed model captures the probabilistic relationship between trust, self-confidence, and reliance for all discrete combinations of high and low trust and self-confidence. The use of the proposed model to design an optimal policy to facilitate trust and self-confidence calibration is a goal of future work.
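Estimating the transition probabilities of such a discrete-state model can be sketched as follows. This is a minimal construction of my own that assumes fully labeled state sequences, unlike the paper's partially observable setting, where the states must first be estimated from behavioral and self-report observations.

```python
# Estimate a transition matrix over discrete (trust, self-confidence)
# states from labeled state sequences, as a POMDP/R dynamics model needs.
from collections import Counter

STATES = ["LT/LS", "LT/HS", "HT/LS", "HT/HS"]  # low/high trust & self-conf.

def estimate_transitions(sequences):
    """Maximum-likelihood P(s' | s) from observed state sequences."""
    counts = Counter()
    totals = Counter()
    for seq in sequences:
        for s, s_next in zip(seq, seq[1:]):
            counts[(s, s_next)] += 1
            totals[s] += 1
    return {k: counts[k] / totals[k[0]] for k in counts}

# Hypothetical labeled sequences from two participants.
seqs = [["LT/LS", "HT/LS", "HT/HS"], ["LT/LS", "HT/LS", "HT/LS"]]
P = estimate_transitions(seqs)
print(P[("LT/LS", "HT/LS")])
```

The resulting table of P(s' | s) is what an "analysis of the transition probabilities," as the abstract describes, would examine.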
Citations: 2
“Who said that?” Applying the Situation Awareness Global Assessment Technique to Social Telepresence
IF 5.1 Q2 ROBOTICS Pub Date : 2023-04-25 DOI: 10.1145/3592801
Adam K. Coyne, Keshav Sapkota, C. McGinn
As with all remotely controlled robots, successful teleoperation of social and telepresence robots relies greatly on operator situation awareness. However, existing situation awareness measures, most originally created for military purposes, are not adapted to the context of social interaction. We propose an objective technique for telepresence evaluation based on the widely accepted Situation Awareness Global Assessment Technique (SAGAT), adjusted to suit social contexts. This was trialled in a between-subjects participant study (n = 56) comparing the effect of mono and spatial (binaural) audio feedback on operator situation awareness during robot teleoperation in a simulated social telepresence scenario. Subjective data were also recorded, including questions adapted from Witmer and Singer's Presence Questionnaire, as well as qualitative feedback from participants. No significant differences in situation awareness measurements were detected; however, correlations observed between measures call for further research. This study and its findings are a potential starting point for the development of social situation awareness assessment techniques, which can inform future social and telepresence robot design decisions.
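The core of SAGAT-style scoring can be sketched as follows (the probe questions and answers are invented for illustration): the simulation is frozen, the operator answers probe questions about the social scene, and the answers are scored against simulation ground truth.

```python
# Sketch of freeze-probe scoring: fraction of probe answers that match
# the simulation's ground truth at the moment of the freeze.

def sagat_score(answers, ground_truth):
    """answers, ground_truth: dicts of probe question -> answer."""
    correct = sum(1 for q, a in answers.items() if ground_truth.get(q) == a)
    return correct / len(ground_truth)

# Hypothetical probes for a social telepresence freeze.
truth = {"who_spoke_last": "visitor_2", "robot_heading": "north",
         "num_people_in_room": 3}
answers = {"who_spoke_last": "visitor_2", "robot_heading": "east",
           "num_people_in_room": 3}
print(sagat_score(answers, truth))
```

Per-question accuracies, rather than one aggregate score, are what allow comparisons such as mono versus binaural audio on specific awareness elements.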
Citations: 0
Probing Aesthetics Strategies for Robot Sound: Complexity and Materiality in Movement Sonification
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-17 DOI: 10.1145/3585277
A. Latupeirissa, C. Panariello, R. Bresin
This paper presents three studies in which we probe aesthetic strategies for sound produced by movement sonification of a Pepper robot, mapping its movements to sound models. We developed two sets of sound models. The first set consisted of two sound models, one sawtooth-based and another based on feedback chains, for investigating how the perception of synthesized robot sounds depends on their design complexity. We implemented the second set of sound models to probe the “materiality” of sound made by a robot in motion. This set consisted of a sound synthesis based on an engine, highlighting the robot's internal mechanisms; a metallic sound synthesis, highlighting the robot's typical appearance; and a whoosh sound synthesis, highlighting the movement. We conducted three studies. The first study explores, through an online survey, how the first set of sound models can influence the perception of expressive gestures of a Pepper robot. In the second study, we carried out an experiment in a museum installation with a Pepper robot presented in two scenarios: (1) welcoming patrons into a restaurant and (2) providing information to visitors in a shopping center. Finally, in the third study, we conducted an online survey with stimuli similar to those used in the second study. Our findings suggest that participants preferred more complex sound models for the sonification of robot movements. Concerning materiality, participants preferred subtle sounds that blend well with the ambient sound (i.e., are less distracting) and soundscapes in which sound sources can be identified. Also, sound preferences varied depending on the context in which participants experienced the robot-generated sounds (e.g., a live museum installation vs. an online display).
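A minimal sketch of the sawtooth waveform family behind the paper's simpler sound model (the sample rate, frequency range, and movement-speed mapping are my assumptions):

```python
# Naive sawtooth oscillator whose frequency follows a normalized movement
# speed, so faster gestures sound higher. Illustrative parameters only.

SAMPLE_RATE = 8000

def sawtooth(freq_hz, n_samples, sample_rate=SAMPLE_RATE):
    """Sawtooth in [-1, 1): rises linearly and wraps once per period."""
    period = sample_rate / freq_hz
    return [2.0 * ((i / period) % 1.0) - 1.0 for i in range(n_samples)]

def movement_to_freq(speed, f_min=110.0, f_max=440.0):
    """Map a normalized speed in [0, 1] to an oscillator frequency."""
    return f_min + speed * (f_max - f_min)

wave = sawtooth(movement_to_freq(0.5), 4)
print(wave[0])
```

The feedback-chain model from the paper would add recursion on top of such an oscillator; this sketch only shows the simpler end of the complexity scale.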
Citations: 3
Journal: ACM Transactions on Human-Robot Interaction