
Latest publications in ACM Transactions on Human-Robot Interaction

Influence of Simulation and Interactivity on Human Perceptions of a Robot During Navigation Tasks
IF 4.2 Q2 ROBOTICS Pub Date: 2024-07-16 DOI: 10.1145/3675784
Nathan Tsoi, Rachel Sterneck, Xuan Zhao, Marynel Vázquez
In Human-Robot Interaction, researchers typically utilize in-person studies to collect subjective perceptions of a robot. In addition, videos of interactions and interactive simulations (where participants control an avatar that interacts with a robot in a virtual world) have been used to quickly collect human feedback at scale. How would human perceptions of robots compare between these methodologies? To investigate this question, we conducted a 2x2 between-subjects study (N=160), which evaluated the effect of the interaction environment (Real vs. Simulated environment) and participants’ interactivity during human-robot encounters (Interactive participation vs. Video observations) on perceptions about a robot (competence, discomfort, social presentation, and social information processing) for the task of navigating in concert with people. We also studied participants’ workload across the experimental conditions. Our results revealed a significant difference in the perceptions of the robot between the real environment and the simulated environment. Furthermore, our results showed differences in human perceptions when people watched a video of an encounter versus taking part in the encounter. Finally, we found that simulated interactions and videos of the simulated encounter resulted in a higher workload than real-world encounters and videos thereof. Our results suggest that findings from video and simulation methodologies may not always translate to real-world human-robot interactions. In order to allow practitioners to leverage learnings from this study and future researchers to expand our knowledge in this area, we provide guidelines for weighing the tradeoffs between different methodologies.
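The 2x2 factorial design described above (environment x interactivity) can be illustrated with a toy computation of the two main effects from cell means. All condition names and ratings below are hypothetical, and the paper's actual statistical analysis is not specified in this abstract.

```python
# Minimal sketch (hypothetical data): main effects in a 2x2
# between-subjects design like Environment x Interactivity.

def cell_mean(scores):
    return sum(scores) / len(scores)

# Hypothetical competence ratings per condition (illustrative only).
cells = {
    ("real", "interactive"): [5.8, 6.1, 5.5],
    ("real", "video"):       [5.2, 5.0, 5.4],
    ("sim",  "interactive"): [4.9, 4.7, 5.1],
    ("sim",  "video"):       [4.3, 4.5, 4.1],
}

means = {cond: cell_mean(s) for cond, s in cells.items()}

# Main effect of environment: average over interactivity levels.
real = (means[("real", "interactive")] + means[("real", "video")]) / 2
sim = (means[("sim", "interactive")] + means[("sim", "video")]) / 2
env_effect = real - sim

# Main effect of interactivity: average over environments.
inter = (means[("real", "interactive")] + means[("sim", "interactive")]) / 2
video = (means[("real", "video")] + means[("sim", "video")]) / 2
interactivity_effect = inter - video

print(f"environment main effect: {env_effect:.2f}")
print(f"interactivity main effect: {interactivity_effect:.2f}")
```

In a full analysis these contrasts would be tested with a two-way ANOVA, but the cell-mean arithmetic is the intuition behind each main effect.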
Citations: 0
Converging Measures and an Emergent Model: A Meta-Analysis of Human-Machine Trust Questionnaires
IF 4.2 Q2 ROBOTICS Pub Date: 2024-07-13 DOI: 10.1145/3677614
Yosef Razin, K. Feigh
Trust is crucial for technological acceptance, continued usage, and teamwork. However, human-robot trust, and human-machine trust more generally, suffer from terminological disagreement and construct proliferation. By comparing, mapping, and analyzing well-constructed trust survey instruments, this work uncovers a consensus structure of trust in human-machine interaction. To do so, we identify the most frequently cited and best-validated human-machine and human-robot trust questionnaires as well as the best-established factors that form the dimensions and antecedents of such trust. To reduce both confusion and construct proliferation, we provide a detailed mapping of terminology between questionnaires. Furthermore, we perform a meta-analysis of the regression models which emerged from the experiments that employed multi-factorial survey instruments. Based on this meta-analysis, we provide the most complete, experimentally validated model of human-machine and human-robot trust to date. This convergent model establishes an integrated framework for future research. It determines the current boundaries of trust measurement and where further investigation and validation are necessary. We close by discussing how to choose an appropriate trust survey instrument and how to design for trust. By identifying the internal workings of trust, a more complete basis for measuring trust is developed that is widely applicable.
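The abstract does not detail the pooling procedure, but a common way to meta-analyze correlations of the kind relating trust antecedents to trust is a fixed-effect Fisher-z average. The study values below are hypothetical.

```python
import math

# Minimal sketch (generic technique, not the paper's exact procedure):
# pooling correlation coefficients from several studies via a
# fixed-effect Fisher-z meta-analysis.

def pool_correlations(studies):
    """studies: list of (r, n) pairs -> pooled r via Fisher z."""
    num, den = 0.0, 0.0
    for r, n in studies:
        z = math.atanh(r)        # Fisher z-transform of r
        w = n - 3                # inverse-variance weight for z
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform mean z to r

# Hypothetical (r, n) pairs, e.g. reliability-trust correlations.
studies = [(0.52, 40), (0.61, 120), (0.45, 75)]
print(f"pooled r = {pool_correlations(studies):.3f}")
```

Larger studies get proportionally more weight, so the pooled estimate sits closest to the best-powered result.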
Citations: 0
Generating Pattern-Based Conventions for Predictable Planning in Human-Robot Collaboration
IF 4.2 Q2 ROBOTICS Pub Date: 2024-07-01 DOI: 10.1145/3659061
Clare Lohrmann, Maria Stull, A. Roncone, Bradley Hayes
For humans to effectively work with robots, they must be able to predict the actions and behaviors of their robot teammates rather than merely react to them. While there are existing techniques enabling robots to adapt to human behavior, there is a demonstrated need for methods that explicitly improve humans’ ability to understand and predict robot behavior at multi-task timescales. In this work, we propose a method leveraging the innate human propensity for pattern recognition in order to improve team dynamics in human-robot teams and to make robots more predictable to the humans that work with them. Patterns are a cognitive tool that humans use and rely on often, and the human brain is in many ways primed for pattern recognition and usage. We propose Pattern-Aware Convention-setting for Teaming (PACT), an entropy-based algorithm that identifies and imposes appropriate patterns over a robot’s planner or policy over long time horizons. These patterns are autonomously generated and chosen via an algorithmic process that considers human-perceptible features and characteristics derived from the tasks to be completed, and as such, produces behavior that is easier for humans to identify and predict. Our evaluation shows that PACT contributes to significant improvements in team dynamics and teammate perceptions of the robot, as compared to robots that utilize traditionally ‘optimal’ plans and robots utilizing unoptimized patterns.
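PACT's internals are only sketched in the abstract, but the core idea of entropy-based pattern selection can be illustrated generically: score candidate action sequences by Shannon entropy and prefer the lowest, since a more repetitive sequence is easier for a teammate to predict. The sequences below are hypothetical, and unigram entropy is a simplification that ignores ordering.

```python
import math
from collections import Counter

# Generic sketch of entropy-based predictability scoring
# (not PACT's actual algorithm).

def sequence_entropy(actions):
    """Shannon entropy (bits) of the action distribution in a sequence."""
    counts = Counter(actions)
    total = len(actions)
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

def most_predictable(candidates):
    """Pick the candidate plan whose action distribution has lowest entropy."""
    return min(candidates, key=sequence_entropy)

patterned = ["A", "B", "A", "B", "A", "B"]  # strict alternation: 1.0 bit
mixed = ["A", "B", "C", "A", "C", "B"]      # three tasks: log2(3) bits
print(most_predictable([patterned, mixed]))
```

A planner could use such a score as a tie-breaker among near-optimal plans, trading a little efficiency for human-legible regularity.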
Citations: 0
Classification of Co-manipulation Modus with Human-Human Teams for Future Application to Human-Robot Systems
IF 5.1 Q2 Computer Science Pub Date: 2024-06-13 DOI: 10.1145/3659059
Seth Freeman, Shaden Moss, John L. Salmon, Marc D. Killpack
Despite the existence of robots that can lift heavy loads, robots that can help people move heavy objects are not readily available. This paper makes progress towards effective human-robot co-manipulation by studying 30 human-human dyads that collaboratively manipulated an object weighing 27 kg without being co-located (i.e. participants were at either end of the extended object). Participants maneuvered around different obstacles with the object while exhibiting one of four modi–the manner or objective with which a team moves an object together–at any given time. Using force and motion signals to classify modus or behavior was the primary objective of this work. Our results showed that two of the originally proposed modi were very similar, such that one could effectively be removed while still spanning the space of common behaviors during our co-manipulation tasks. The three modi used in classification were quickly, smoothly and avoiding obstacles. Using a deep convolutional neural network (CNN), we classified three modi with up to 89% accuracy from a validation set. The capability to detect or classify modus during co-manipulation has the potential to greatly improve human-robot performance by helping to define appropriate robot behavior or controller parameters depending on the objective or modus of the team.
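As a rough illustration of how a 1D CNN can separate modi such as "quickly" versus "smoothly" from force or motion signals, the sketch below runs a single hand-set convolution filter over two hypothetical traces. The paper's actual network architecture, filters, and data are not given in this abstract.

```python
# Minimal sketch (hypothetical): the core ops of a 1D CNN over a
# time series -- one convolution filter, ReLU, global average pooling.

def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation, as in CNNs)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def global_avg_pool(xs):
    return sum(xs) / len(xs)

# Hypothetical force traces: a jerky motion has larger sample-to-sample
# changes, so a difference-like kernel responds more strongly to it.
jerky = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
smooth = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
edge_kernel = [1.0, -1.0]  # responds to local change

for name, trace in [("jerky", jerky), ("smooth", smooth)]:
    feat = global_avg_pool(relu(conv1d(trace, edge_kernel)))
    print(name, round(feat, 3))
```

In a trained network the filter weights are learned rather than hand-set, and a classifier head maps many such pooled features to the three modus labels.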
Citations: 0
Perceptions of a Robot that Interleaves Tasks for Multiple Users
IF 5.1 Q2 Computer Science Pub Date: 2024-05-23 DOI: 10.1145/3663486
Elizabeth J. Carter, Peerat Vichivanives, Ruijia Xing, Laura M. Hiatt, Stephanie Rosenthal
When robots have multiple tasks to perform, they must determine the order in which to complete them. Interleaving tasks is efficient for the robot trying to finish its to-do list, but it may be less satisfying for a human whose request was delayed in favor of schedule efficiency. Following online research that examined delays with various motivations [4, 27], we created two in-person studies in which participants’ tasks were impacted by the robot’s other tasks. In the first, participants either requested a task for the robot to complete on their behalf or watched the robot performing tasks for other people. We measured how their opinions changed depending on whether their task’s completion was delayed due to another participant’s task or they were observing without a task of their own. In the second, participants had a robot walk them to an office and became delayed as the robot detoured to another location. We measured how opinions of the robot changed depending on who requested the detour task and the length of the detour. Overall, participants positively viewed task interleaving as long as the delay and inconvenience imposed by someone else’s task were small and the task was well-justified. Also, observers often had lower opinions of the robot than participants who requested tasks, highlighting a concern for online research.
Citations: 0
A Human-Centered View of Continual Learning: Understanding Interactions, Teaching Patterns, and Perceptions of Human Users Towards a Continual Learning Robot in Repeated Interactions
IF 5.1 Q2 Computer Science Pub Date: 2024-05-23 DOI: 10.1145/3659110
Ali Ayub, Zachary De Francesco, Jainish Mehta, Khaled Yaakoub Agha, Patrick Holthaus, C. Nehaniv, Kerstin Dautenhahn
Continual learning (CL) has emerged as an important avenue of research in recent years, at the intersection of Machine Learning (ML) and Human-Robot Interaction (HRI), to allow robots to continually learn in their environments over long-term interactions with humans. Most research in continual learning, however, has been robot-centered to develop continual learning algorithms that can quickly learn new information on systematically collected static datasets. In this paper, we take a human-centered approach to continual learning, to understand how humans interact with, teach, and perceive continual learning robots over the long term, and if there are variations in their teaching styles. We developed a socially guided continual learning system that integrates CL models for object recognition with a mobile manipulator robot and allows humans to directly teach and test the robot in real time over multiple sessions. We conducted an in-person study with 60 participants who interacted with the continual learning robot in 300 sessions with 5 sessions per participant. In this between-participant study, we used three different CL models deployed on a mobile manipulator robot. An extensive qualitative and quantitative analysis of the data collected in the study shows that there is significant variation among the teaching styles of individual users indicating the need for personalized adaptation to their distinct teaching styles. Our analysis shows that the constrained experimental setups that have been widely used to test most CL models are not adequate, as real users interact with and teach continual learning robots in a variety of ways. Finally, our analysis shows that although users have concerns about continual learning robots being deployed in our daily lives, they mention that with further improvements continual learning robots could assist older adults and people with disabilities in their homes.
Citations: 0
Balancing Human Likeness in Social Robots: Impact on Children’s Lexical Alignment and Self-disclosure for Trust Assessment
IF 5.1 Q2 Computer Science Pub Date: 2024-05-23 DOI: 10.1145/3659062
Natalia Calvo-Barajas, Anastasia Akkuzu, Ginevra Castellano
While there is evidence that human-like characteristics in robots could benefit child-robot interaction in many ways, open questions remain about the appropriate degree of human likeness that should be implemented in robots to avoid adverse effects on acceptance and trust. This study investigates how human likeness, appearance and behavior, influence children’s social and competency trust in a robot. We first designed two versions of the Furhat robot with visual and auditory human-like and machine-like cues validated in two online studies. Secondly, we created verbal behaviors where human likeness was manipulated as responsiveness regarding the robot’s lexical matching. Then, 52 children (7-10 years old) played a storytelling game in a between-subjects experimental design. Results show that the conditions did not affect subjective trust measures. However, objective measures showed that human likeness affects trust differently. While low human-like appearance enhanced social trust, high human-like behavior improved children’s acceptance of the robot’s task-related suggestions. This work provides empirical evidence on manipulating facial features and behavior to control human likeness in a robot with a highly human-like morphology. We discuss the implications and importance of balancing human likeness in robot design and its impacts on task performance, as it directly impacts trust-building with children.
Citations: 0
Children's Acceptance of a Domestic Social Robot: How It Evolves over Time
IF 5.1 Q2 Computer Science Pub Date : 2024-02-16 DOI: 10.1145/3638066
Chiara de Jong, J. Peter, R. Kühne, Àlex Barco
Little is known about children's long-term acceptance of social robots, whether different types of users exist, and what reasons children have for not using a robot. Moreover, the literature is inconclusive about how the measurement of children's robot acceptance (i.e., self-report or observational) affects the findings. We relied on both self-report and observational data from a six-wave panel study among 321 children aged eight to nine, who were given a Cozmo robot to play with at home over the course of eight weeks. Children's robot acceptance decreased over time, with the strongest drop after two to four weeks. Children rarely rejected the robot (i.e., they rarely stopped using it before actually adopting it). Rather, they discontinued its use after initial adoption or alternated between using and not using the robot. Competition from other toys and a lack of motivation to play with Cozmo emerged as the strongest reasons for not using the robot. Self-report measures captured patterns of robot acceptance well but seemed suboptimal for precise assessments of robot use.
{"title":"Children's Acceptance of a Domestic Social Robot: How It Evolves over Time","authors":"Chiara de Jong, J. Peter, R. Kühne, Àlex Barco","doi":"10.1145/3638066","DOIUrl":"https://doi.org/10.1145/3638066","url":null,"abstract":"Little is known about children's long-term acceptance of social robots; whether different types of users exist; and what reasons children have not to use a robot. Moreover, the literature is inconclusive about how the measurement of children's robot acceptance (i.e., self-report or observational) affects the findings. We relied on both self-report and observational data from a six-wave panel study among 321 children aged eight to nine, who were given a Cozmo robot to play with at home over the course of eight weeks. Children's robot acceptance decreased over time, with the strongest drop after two to four weeks. Children rarely rejected the robot (i.e., they did not stop using it already prior to actual adoption). They rather discontinued its use after initial adoption or alternated between using and not using the robot. The competition of other toys and lacking motivation to play with Cozmo emerged as strongest reasons for not using the robot. Self-report measures captured patterns of robot acceptance well but seemed suboptimal for precise assessments of robot use.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140454963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Interaction-Shaping Robotics: Robots that Influence Interactions between Other Agents
IF 5.1 Q2 Computer Science Pub Date : 2024-02-02 DOI: 10.1145/3643803
Sarah Gillet, Marynel Vázquez, Sean Andrist, Iolanda Leite, Sarah Sebo
Work in Human-Robot Interaction (HRI) has investigated interactions between one human and one robot as well as human-robot group interactions. Yet, the field lacks a clear definition and understanding of the influence a robot can exert on interactions between other group members (e.g., human-to-human). In this paper, we define Interaction-Shaping Robotics (ISR), a subfield of HRI that investigates robots that influence the behaviors and attitudes exchanged between two (or more) other agents. We highlight key factors of Interaction-Shaping Robots that include the role of the robot, the robot-shaping outcome, the form of robot influence, the type of robot communication, and the timeline of the robot’s influence. We also describe three distinct structures of human-robot groups to highlight the potential of ISR in different group compositions and discuss targets for a robot’s interaction-shaping behavior. Finally, we propose areas of opportunity and challenges for future research in ISR.
{"title":"Interaction-Shaping Robotics: Robots that Influence Interactions between Other Agents","authors":"Sarah Gillet, Marynel Vázquez, Sean Andrist, Iolanda Leite, Sarah Sebo","doi":"10.1145/3643803","DOIUrl":"https://doi.org/10.1145/3643803","url":null,"abstract":"Work in Human-Robot Interaction (HRI) has investigated interactions between one human and one robot as well as human-robot group interactions. Yet, the field lacks a clear definition and understanding of the influence a robot can exert on interactions between other group members (e.g., human-to-human). In this paper, we define Interaction-Shaping Robotics (ISR), a subfield of HRI that investigates robots that influence the behaviors and attitudes exchanged between two (or more) other agents. We highlight key factors of Interaction-Shaping Robots that include the role of the robot, the robot-shaping outcome, the form of robot influence, the type of robot communication, and the timeline of the robot’s influence. We also describe three distinct structures of human-robot groups to highlight the potential of ISR in different group compositions and discuss targets for a robot’s interaction-shaping behavior. Finally, we propose areas of opportunity and challenges for future research in ISR.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139683479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Perception and Action Augmentation for Teleoperation Assistance in Freeform Tele-manipulation
IF 5.1 Q2 Computer Science Pub Date : 2024-01-31 DOI: 10.1145/3643804
Tsung-Chi Lin, Achyuthan Unni Krishnan, Zhi Li
Teleoperation enables controlling complex robot systems remotely, providing the ability to impart human expertise from a distance. However, teleoperation interfaces can be complicated to use, as it is difficult to contextualize information about robot motion in the workspace from limited camera feedback. It is therefore necessary to study how best to provide assistance that reduces interface complexity and the effort required for teleoperation. Techniques that assist the operator during freeform teleoperation include: 1) perception augmentation, such as augmented reality visual cues and additional camera angles, which increases the information available to the operator; and 2) action augmentation, such as assistive autonomy and control augmentation, which is optimized to reduce the effort the operator expends while teleoperating. In this paper we investigate: 1) which aspects of dexterous tele-manipulation require assistance; 2) the impact of perception and action augmentation on teleoperation performance; and 3) what factors affect the use of assistance and how to tailor these interfaces to operators’ needs and characteristics. The findings from this user study and the resulting post-study surveys will help identify task-based and user-preferred perception and action augmentation features for teleoperation assistance.
{"title":"Perception and Action Augmentation for Teleoperation Assistance in Freeform Tele-manipulation","authors":"Tsung-Chi Lin, Achyuthan Unni Krishnan, Zhi Li","doi":"10.1145/3643804","DOIUrl":"https://doi.org/10.1145/3643804","url":null,"abstract":"Teleoperation enables controlling complex robot systems remotely, providing the ability to impart human expertise from a distance. However, these interfaces can be complicated to use as it is difficult to contextualize information about robot motion in the workspace from the limited camera feedback. Thus, it is required to study the best manner in which assistance can be provided to the operator that reduces interface complexity and effort required for teleoperation. Some techniques that provide assistance to the operator while freeform teleoperating include: 1) perception augmentation, like augmented reality visual cues and additional camera angles, increasing the information available to the operator; 2) action augmentation, like assistive autonomy and control augmentation, optimized to reduce the effort required by the operator while teleoperating. In this paper we investigate: 1) which aspects of dexterous tele-manipulation require assistance; 2) the impact of perception and action augmentation in improving teleoperation performance; 3) what factors impact the usage of assistance and how to tailor these interfaces based on the operators’ needs and characteristics. 
The findings from this user study and resulting post-study surveys will help identify task based and user preferred perception and augmentation features for teleoperation assistance.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140479040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
ACM Transactions on Human-Robot Interaction