
Latest Publications in ACM Transactions on Human-Robot Interaction

IMPRINT: Interactional Dynamics-aware Motion Prediction in Teams using Multimodal Context
Q2 ROBOTICS Pub Date: 2023-10-16 DOI: 10.1145/3626954
Mohammad Samin Yasar, Md Mofijul Islam, Tariq Iqbal
Robots are moving from working in isolation to working with humans as a part of human-robot teams. In such situations, they are expected to work with multiple humans and need to understand and predict the team members’ actions. To address this challenge, in this work, we introduce IMPRINT, a multi-agent motion prediction framework that models the interactional dynamics and incorporates the multimodal context (e.g., data from RGB and depth sensors and skeleton joint positions) to accurately predict the motion of all the agents in a team. In IMPRINT, we propose an Interaction module that can extract the intra-agent and inter-agent dynamics before fusing them to obtain the interactional dynamics. Furthermore, we propose a Multimodal Context module that incorporates multimodal context information to improve multi-agent motion prediction. We evaluated IMPRINT by comparing its performance on human-human and human-robot team scenarios against state-of-the-art methods. The results suggest that IMPRINT outperformed all other methods over all evaluated temporal horizons. Additionally, we provide an interpretation of how IMPRINT incorporates the multimodal context information from all the modalities during multi-agent motion prediction. The superior performance of IMPRINT provides a promising direction to integrate motion prediction with robot perception and enable safe and effective human-robot collaboration.
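As a rough illustration of the interaction-module idea described above, the sketch below shows one plausible way to extract intra-agent dynamics (a recurrent encoder over each agent's own motion history) and inter-agent dynamics (attention across agents), then fuse the two. The module structure, dimensions, and layer choices are assumptions for illustration; they are not IMPRINT's actual architecture, which also incorporates the multimodal context module described in the abstract.

```python
import torch
import torch.nn as nn

class InteractionModule(nn.Module):
    """Minimal sketch: intra-agent and inter-agent dynamics, fused into
    interactional dynamics (illustrative, not the paper's architecture)."""
    def __init__(self, feat_dim=64, hidden_dim=128, n_heads=4):
        super().__init__()
        self.intra_rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)                 # per-agent history
        self.inter_attn = nn.MultiheadAttention(hidden_dim, n_heads, batch_first=True)  # across agents
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, x):
        # x: (batch, n_agents, time, feat_dim) -- e.g. skeleton-joint features per agent.
        b, a, t, f = x.shape
        intra, _ = self.intra_rnn(x.reshape(b * a, t, f))
        intra = intra[:, -1, :].reshape(b, a, -1)          # last hidden state per agent
        inter, _ = self.inter_attn(intra, intra, intra)    # each agent attends to the others
        return torch.tanh(self.fuse(torch.cat([intra, inter], dim=-1)))

# Example: a 2-agent team with 30 past frames of 64-d pose features.
dynamics = InteractionModule()(torch.randn(1, 2, 30, 64))   # -> (1, 2, 128)
```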
Citations: 2
Face2Gesture: Translating Facial Expressions Into Robot Movements Through Shared Latent Space Neural Networks
Q2 ROBOTICS Pub Date: 2023-10-04 DOI: 10.1145/3623386
Michael Suguitan, Nick DePalma, Guy Hoffman, Jessica Hodgins
In this work, we present a method for personalizing human-robot interaction by using emotive facial expressions to generate affective robot movements. Movement is an important medium for robots to communicate affective states, but the expertise and time required to craft new robot movements promotes a reliance on fixed preprogrammed behaviors. Enabling robots to respond to multimodal user input with newly generated movements could stave off staleness of interaction and convey a deeper degree of affective understanding than current retrieval-based methods. We use autoencoder neural networks to compress robot movement data and facial expression images into a shared latent embedding space. Then, we use a reconstruction loss to generate movements from these embeddings and triplet loss to align the embeddings by emotion classes rather than data modality. To subjectively evaluate our method, we conducted a user survey and found that generated happy and sad movements could be matched to their source face images. However, angry movements were most often mismatched to sad images. This multimodal data-driven generative method can expand an interactive agent’s behavior library and could be adopted for other multimodal affective applications.
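The training signal described above combines a reconstruction loss with a triplet loss that aligns embeddings by emotion class rather than by modality. A minimal sketch of how those two terms could be combined is shown below; the tensor shapes, batch construction, and triplet-mining strategy are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def shared_latent_losses(face_z, motion_z, motion_hat, motion_true, labels, margin=1.0):
    # Reconstruction loss: decoded movements should match the ground-truth movements.
    recon = F.mse_loss(motion_hat, motion_true)

    # Triplet loss: for each face embedding (anchor), pull in a motion embedding of the
    # same emotion class (positive) and push away one of a different class (negative).
    trip, count = 0.0, 0
    for i in range(len(labels)):
        pos = motion_z[labels == labels[i]]
        neg = motion_z[labels != labels[i]]
        if len(pos) == 0 or len(neg) == 0:
            continue
        trip = trip + F.triplet_margin_loss(face_z[i:i + 1], pos[:1], neg[:1], margin=margin)
        count += 1
    return recon + (trip / count if count else 0.0)

# Example with random stand-ins: 8 samples, 16-d latents, 50-d flattened movements, 3 emotions.
labels = torch.randint(0, 3, (8,))
loss = shared_latent_losses(torch.randn(8, 16), torch.randn(8, 16),
                            torch.randn(8, 50), torch.randn(8, 50), labels)
```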
Citations: 0
“Do this instead” – Robots that Adequately Respond to Corrected Instructions
Q2 ROBOTICS Pub Date: 2023-09-22 DOI: 10.1145/3623385
Christopher Thierauf, Ravenna Thielstrom, Bradley Oosterveld, Will Becker, Matthias Scheutz
Natural language instructions are effective for tasking autonomous robots and for quickly teaching them new knowledge. Yet human instructors are not perfect: they make mistakes at times and correct themselves when they notice errors in their own instructions. In this paper, we introduce a complete system that allows robot behaviors to handle such corrections during both task instruction and action execution. We then demonstrate its operation in an integrated cognitive robotic architecture through spoken language in two tasks: a navigation and retrieval task and a meal assembly task. Verbal corrections occur before, during, and after verbally taught sequences of tasks, demonstrating that the proposed methods enable fast corrections not only of the semantics generated from the instructions, but also of overt robot behavior, in a manner shown to be reasonable when compared to human behavior and expectations.
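The abstract describes handling corrections both while a task is being taught and while it is being executed. The toy sketch below illustrates only the basic "do this instead" semantics, replacing either the step currently being executed or the most recently taught step; the authors' integrated cognitive architecture is far richer, so treat every name here as hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TaskScript:
    """Toy sketch: a taught task as an ordered list of action strings, with a
    correction handler for "do this instead" (illustrative, not the paper's system)."""
    steps: list = field(default_factory=list)
    cursor: int = 0   # index of the step currently being executed

    def teach(self, action: str):
        self.steps.append(action)

    def correct(self, new_action: str, during_execution: bool):
        # Replace the executing step if the correction arrives mid-action,
        # otherwise replace the most recently taught step.
        target = self.cursor if during_execution else len(self.steps) - 1
        if 0 <= target < len(self.steps):
            self.steps[target] = new_action

script = TaskScript()
script.teach("go to the kitchen")
script.teach("pick up the red mug")
script.correct("pick up the blue mug", during_execution=False)  # "do this instead"
print(script.steps)   # -> ['go to the kitchen', 'pick up the blue mug']
```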
Citations: 0
Unified Learning from Demonstrations, Corrections, and Preferences during Physical Human-Robot Interaction
Q2 ROBOTICS Pub Date: 2023-09-22 DOI: 10.1145/3623384
Shaunak A. Mehta, Dylan P. Losey
Humans can leverage physical interaction to teach robot arms. This physical interaction takes multiple forms depending on the task, the user, and what the robot has learned so far. State-of-the-art approaches focus on learning from a single modality, or combine some interaction types. Some methods do so by assuming that the robot has prior information about the features of the task and the reward structure. By contrast, in this paper we introduce an algorithmic formalism that unites learning from demonstrations, corrections, and preferences. Our approach makes no assumptions about the tasks the human wants to teach the robot; instead, we learn a reward model from scratch by comparing the human’s input to nearby alternatives, i.e., trajectories close to the human’s feedback. We first derive a loss function that trains an ensemble of reward models to match the human’s demonstrations, corrections, and preferences. The type and order of feedback is up to the human teacher: we enable the robot to collect this feedback passively or actively. We then apply constrained optimization to convert our learned reward into a desired robot trajectory. Through simulations and a user study we demonstrate that our proposed approach more accurately learns manipulation tasks from physical human interaction than existing baselines, particularly when the robot is faced with new or unexpected objectives. Videos of our user study are available at: https://youtu.be/FSUJsTYvEKU
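The unifying idea in the abstract is that demonstrations, corrections, and preferences can all be reduced to comparisons between the human's input and nearby alternative trajectories, used to train an ensemble of reward models. The sketch below shows one plausible form of such a comparison loss for a single ensemble member; the network sizes, trajectory encoding, and loss form are assumptions for illustration, not the paper's derivation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardNet(nn.Module):
    # One member of a reward-model ensemble: maps a (flattened) trajectory to a scalar score.
    def __init__(self, traj_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(traj_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, traj):
        return self.net(traj).squeeze(-1)

def feedback_loss(reward_net, human_traj, alternative_trajs):
    """Whatever the feedback type (demonstration, correction, or preference), score the
    human's input against nearby alternative trajectories and train the reward model to
    rank the human's input highest (illustrative sketch)."""
    r_human = reward_net(human_traj)             # (batch,)
    r_alt = reward_net(alternative_trajs)        # (batch, n_alternatives)
    logits = torch.cat([r_human.unsqueeze(1), r_alt], dim=1)
    target = torch.zeros(logits.shape[0], dtype=torch.long)   # index 0 = the human's input
    return F.cross_entropy(logits, target)

# Example with random stand-ins: 8 samples, 20-d trajectories, 5 nearby alternatives each.
net = RewardNet(traj_dim=20)
loss = feedback_loss(net, torch.randn(8, 20), torch.randn(8, 5, 20))
```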
Citations: 6
UHTP: A User-Aware Hierarchical Task Planning Framework for Communication-Free, Mutually-Adaptive Human-Robot Collaboration
Q2 ROBOTICS Pub Date: 2023-09-22 DOI: 10.1145/3623387
Kartik Ramachandruni, Cassandra Kent, Sonia Chernova
Collaborative human-robot task execution approaches require mutual adaptation, allowing both the human and robot partners to take active roles in action selection and role assignment to achieve a single shared goal. Prior works have utilized a leader-follower paradigm in which either agent must follow the actions specified by the other agent. We introduce the User-aware Hierarchical Task Planning (UHTP) framework, a communication-free human-robot collaborative approach for adaptive execution of multi-step tasks that moves beyond the leader-follower paradigm. Specifically, our approach enables the robot to observe the human, perform actions that support the human’s decisions, and actively select actions that maximize the expected efficiency of the collaborative task. In turn, the human chooses actions based on their observation of the task and the robot, without being dictated by a scheduler or the robot. We evaluate UHTP both in simulation and in a human subjects experiment of a collaborative drill assembly task. Our results show that UHTP achieves more efficient task plans and shorter task completion times than non-adaptive baselines across a wide range of human behaviors, that interacting with a UHTP-controlled robot reduces the human’s cognitive workload, and that humans prefer to work with our adaptive robot over a fixed-policy alternative.
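At its core, the user-aware selection step described above amounts to the robot choosing the action that minimizes the expected remaining cost of the shared task, marginalizing over what the observed human is likely to do next. A toy sketch of that decision rule follows; the action names, probabilities, and costs are made-up stand-ins for the drill-assembly domain, not values or structures from the paper.

```python
def choose_robot_action(robot_actions, human_action_probs, remaining_cost):
    """Pick the robot action minimizing the expected remaining cost of the shared
    task, marginalizing over the human's likely next action (illustrative sketch)."""
    def expected_cost(r):
        return sum(p * remaining_cost(r, h) for h, p in human_action_probs.items())
    return min(robot_actions, key=expected_cost)

# Hypothetical drill-assembly step: the human appears to be reaching for the shell.
cost = {
    ("fetch_battery", "attach_shell"): 8,  ("fetch_battery", "idle"): 12,
    ("fetch_gearbox", "attach_shell"): 11, ("fetch_gearbox", "idle"): 10,
    ("wait", "attach_shell"): 15,          ("wait", "idle"): 15,
}
best = choose_robot_action(
    robot_actions=["fetch_battery", "fetch_gearbox", "wait"],
    human_action_probs={"attach_shell": 0.7, "idle": 0.3},
    remaining_cost=lambda r, h: cost[(r, h)],
)
print(best)   # -> fetch_battery (lowest expected remaining cost)
```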
Citations: 0
Understanding Human Dynamic Sampling Objectives to Enable Robot-assisted Scientific Decision Making
Q2 ROBOTICS Pub Date: 2023-09-13 DOI: 10.1145/3623383
Shipeng Liu, Cristina G. Wilson, Bhaskar Krishnamachari, Feifei Qian
Truly collaborative scientific field data collection between human scientists and autonomous robot systems requires a shared understanding of the search objectives and the tradeoffs faced when making decisions. Developing intelligent robots that aid human experts therefore depends critically on understanding how scientists make such decisions and how they adapt their data collection strategies when presented with new information in situ. In this study, we examined the dynamic data collection decisions of 108 expert geoscience researchers using a simulated field scenario. Human data collection behaviors suggested two distinct objectives: an information-based objective to maximize information coverage, and a discrepancy-based objective to maximize hypothesis verification. We developed a highly simplified quantitative decision model that allows the robot to predict potential human data collection locations based on the two observed human data collection objectives. Predictions from the simple model revealed a transition from the information-based to the discrepancy-based objective as the level of information increased. The findings will allow robotic teammates to connect experts’ dynamic science objectives with the adaptation of their sampling behaviors and, in the long term, enable the development of more cognitively compatible robotic field assistants.
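As a concrete reading of the two objectives identified above, the sketch below scores candidate sampling locations with an information-coverage term and a hypothesis-discrepancy term, blending them with a weight that shifts toward discrepancy as the information level grows (mirroring the reported transition). The scoring functions and numbers are illustrative assumptions, not the authors' decision model.

```python
import numpy as np

def sampling_scores(candidates, visited, observations, predictions, info_level):
    """Score candidate sampling sites; info_level in [0, 1] shifts the blend from the
    information-based objective toward the discrepancy-based one (illustrative sketch)."""
    candidates = np.asarray(candidates, float)
    visited = np.asarray(visited, float)
    # Information-based term: distance to the nearest already-sampled location (coverage).
    coverage = np.array([np.linalg.norm(visited - c, axis=1).min() for c in candidates])
    # Discrepancy-based term: how much the observation deviates from the current hypothesis.
    discrepancy = np.abs(np.asarray(observations, float) - np.asarray(predictions, float))
    norm = lambda v: v / (v.max() + 1e-9)
    return (1 - info_level) * norm(coverage) + info_level * norm(discrepancy)

# Example: three candidate sites along a transect, two already-visited sites.
scores = sampling_scores(candidates=[[2.0], [5.0], [9.0]], visited=[[0.0], [4.0]],
                         observations=[0.3, 0.9, 0.4], predictions=[0.35, 0.4, 0.45],
                         info_level=0.8)
print(scores.argmax())   # with high info_level, the high-discrepancy site (index 1) wins
```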
Citations: 0
Forging Productive Human-Robot Partnerships Through Task Training
IF 5.1 Q2 ROBOTICS Pub Date: 2023-08-31 DOI: 10.1145/3611657
Maia Stiber, Yuxiang Gao, R. Taylor, Chien-Ming Huang
Productive human-robot partnerships are vital to the successful integration of assistive robots into everyday life. While prior research has explored techniques to facilitate collaboration during human-robot interaction, the work described here aims to forge productive partnerships prior to human-robot interaction, drawing on the way team-building activities help establish effective human teams. Through a 2 (group membership: ingroup and outgroup) × 3 (robot error: main task errors, side task errors, and no errors) online study (N = 62), we demonstrate that 1) a non-social pre-task exercise can help form ingroup relationships; 2) an ingroup robot is perceived as a better, more committed teammate than an outgroup robot (despite the two behaving identically); and 3) participants are more tolerant of negative outcomes when working with an ingroup robot. We discuss how pre-task exercises may serve as an active task failure mitigation strategy.
Citations: 1
Augmented Reality Visualization of Autonomous Mobile Robot Change Detection in Uninstrumented Environments
IF 5.1 Q2 ROBOTICS Pub Date: 2023-08-21 DOI: 10.1145/3611654
Christopher M. Reardon, J. Gregory, Kerstin S Haring, Benjamin Dossett, Ori Miller, A. Inyang
The creation of information transparency solutions to enable humans to understand robot perception is a challenging requirement for autonomous and artificially intelligent robots to impact a multitude of domains. By taking advantage of comprehensive and high-volume data from robot teammates’ advanced perception and reasoning capabilities, humans will be able to make better decisions, with significant impacts from safety to functionality. We present a solution to this challenge by coupling augmented reality (AR) with an intelligent mobile robot that is autonomously detecting novel changes in an environment. We show that the human teammate can understand and make decisions based on information shared via AR by the robot. Sharing of robot-perceived information is enabled by the robot’s online calculation of the human’s relative position, making the system robust to environments without external instrumentation such as GPS. Our robotic system performs change detection by comparing current metric sensor readings against a previous reading to identify differences. We experimentally explore the design of change detection visualizations and the aggregation of information, the impact of instruction on communication understanding, the effects of visualization and alignment error, and the relationship between situated 3D visualization in AR and human movement in the operational environment on shared situational awareness in human-robot teams. We demonstrate this novel capability and assess the effectiveness of human-robot teaming in crowdsourced data-driven studies, as well as an in-person study where participants are equipped with a commercial off-the-shelf AR headset and teamed with a small ground robot which maneuvers through the environment. The mobile robot scans for changes, which are visualized via AR to the participant. The effectiveness of this communication is evaluated through accuracy and subjective assessment metrics to provide insight into interpretation and experience.
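The change-detection step described above compares current metric sensor readings against a previous reading to identify differences. A minimal grid-based sketch of that comparison is shown below; the grid representation, threshold, and output format are illustrative assumptions, and the AR visualization side is not modeled here.

```python
import numpy as np

def detect_changes(prev_grid, curr_grid, threshold=0.2):
    """Return (row, col, magnitude) for every map cell whose metric value differs
    from the previous scan by more than the threshold (illustrative sketch)."""
    diff = np.abs(np.asarray(curr_grid) - np.asarray(prev_grid))
    return [(int(r), int(c), float(diff[r, c])) for r, c in np.argwhere(diff > threshold)]

# Example: a 4x4 occupancy-style grid where one cell changed between scans.
prev = np.zeros((4, 4))
curr = prev.copy()
curr[2, 3] = 0.9          # e.g. a newly appeared object
print(detect_changes(prev, curr))   # -> [(2, 3, 0.9)], candidates for AR markers
```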
Citations: 0
Is Someone There Or Is That The TV? Detecting Social Presence Using Sound
IF 5.1 Q2 ROBOTICS Pub Date: 2023-08-18 DOI: 10.1145/3611658
Nicholas C Georgiou, Rebecca Ramnauth, Emmanuel Adéníran, Michael Lee, Lila Selin, B. Scassellati
Social robots in the home will need to solve audio identification problems to better interact with their users. This paper focuses on the classification between a) natural conversation that includes at least one co-located user and b) media that is playing from electronic sources and does not require a social response, such as television shows. This classification can help social robots detect a user’s social presence using sound. Social robots that are able to solve this problem can apply this information to assist them in making decisions, such as determining when and how to appropriately engage human users. We compiled a dataset from a variety of acoustic environments which contained either natural or media audio, including audio that we recorded in our own homes. Using this dataset, we performed an experimental evaluation on a range of traditional machine learning classifiers, and assessed the classifiers’ abilities to generalize to new recordings, acoustic conditions, and environments. We conclude that a C-Support Vector Classification (SVC) algorithm outperformed other classifiers. Finally, we present a classification pipeline that in-home robots can utilize, and discuss the timing and size of the trained classifiers, as well as privacy and ethics considerations.
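Since the best-performing model reported above is a C-Support Vector Classification (SVC), a minimal scikit-learn pipeline for the conversation-versus-media task might look like the sketch below. The feature representation and the data are random placeholders, not the authors' recorded dataset or tuned hyperparameters.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 200 clips, each summarized by a 40-d acoustic feature vector
# (e.g. MFCC statistics); label 1 = co-located conversation, 0 = media audio.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 2, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))   # near chance on random data
```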
Citations: 0
Sounding Robots: Design and Evaluation of Auditory Displays for Unintentional Human-Robot Interaction
IF 5.1 Q2 ROBOTICS Pub Date: 2023-08-17 DOI: 10.1145/3611655
Bastian Orthmann, Iolanda Leite, R. Bresin, Ilaria Torre
Non-verbal communication is important in HRI, particularly when humans and robots do not need to actively engage in a task together, but rather they co-exist in a shared space. Robots might still need to communicate states such as urgency or availability, and where they intend to go, to avoid collisions and disruptions. Sounds could be used to communicate such states and intentions in an intuitive and non-disruptive way. Here, we propose a multi-layer classification system for displaying various robot information simultaneously via sound. We first conceptualise which robot features could be displayed (robot size, speed, availability for interaction, urgency, and directionality); we then map them to a set of audio parameters. The designed sounds were then evaluated in 5 online studies, where people listened to the sounds and were asked to identify the associated robot features. The sounds were generally understood as intended by participants, especially when they were evaluated one feature at a time, and partially when they were evaluated two features simultaneously. The results of these evaluations suggest that sounds can be successfully used to communicate robot states and intended actions implicitly and intuitively.
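The design above maps each robot feature to its own audio parameter so that several states can be conveyed in one sound. The sketch below shows one hypothetical mapping of that kind; the specific parameters, ranges, and pairings are guesses for illustration, not the sound design evaluated in the paper.

```python
def robot_state_to_audio(size, speed, available, urgency):
    """Map robot features to audio parameters so several states sound at once
    (hypothetical mapping; size, speed, urgency in [0, 1], available is a bool)."""
    return {
        "pitch_hz":   200 + (1.0 - size) * 600,   # smaller robot -> higher pitch
        "tempo_bpm":  60 + speed * 120,           # faster robot -> faster pulses
        "brightness": 0.8 if available else 0.2,  # duller timbre when unavailable
        "pulse_rate": 1 + urgency * 9,            # more urgent -> denser pulses
    }

print(robot_state_to_audio(size=0.3, speed=0.5, available=True, urgency=0.8))
```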
Citations: 0