
Latest Publications: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

Creating a Shared Reality with Robots
Pub Date : 2019-03-01 DOI: 10.1109/HRI.2019.8673191
M. Faizan, Hassan Amel, A. Cleaver, J. Sinapov
This paper outlines the system design, capabilities, and potential applications of an Augmented Reality (AR) framework developed for Robot Operating System (ROS)-powered robots. The goal of this framework is to enable high-level human-robot collaboration and interaction. It allows users to visualize the robot's state in intuitive modalities overlaid onto the real world and to interact with AR objects as a means of communicating with the robot, thereby creating a shared environment in which humans and robots can interact and collaborate.
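The framework's core idea, exposing pieces of robot state as AR objects anchored in a shared world frame, can be sketched as a hypothetical message type. The class name, field names, and JSON wire format below are illustrative assumptions, not the authors' actual API:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AROverlay:
    """One robot-state item rendered in a shared world frame.
    Field names are illustrative assumptions, not the framework's API."""
    frame_id: str      # reference frame the overlay is anchored to
    kind: str          # e.g. "planned_path", "goal_marker", "sensor_range"
    position: tuple    # (x, y, z) in metres within frame_id
    payload: dict      # kind-specific data, e.g. a list of waypoints

def serialize(overlays):
    """Pack overlays for an AR headset client (the wire format is assumed)."""
    return json.dumps([asdict(o) for o in overlays])

# A goal marker the human could inspect or move to communicate intent.
goal = AROverlay("map", "goal_marker", (1.2, 0.5, 0.0), {"label": "pickup"})
wire = serialize([goal])
```

Two-way interaction would then amount to the headset sending edited overlays back over the same channel, closing the communication loop the abstract describes.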
Pages: 614-615
Citations: 25
Apprentice of Oz: Human in the Loop System for Conversational Robot Wizard of Oz
Pub Date : 2019-03-01 DOI: 10.1109/HRI.2019.8673205
Ahnjae Shin, J. Oh, Joonhwan Lee
Conversational robots that exhibit human-level abilities in physical and verbal conversation are widely used in human-robot interaction studies, along with the Wizard of Oz protocol. However, even with the protocol, manipulating the robot to move and talk is cognitively demanding. A preliminary study with a humanoid was conducted to observe difficulties wizards experienced in each of four subtasks: attention, decision, execution, and reflection. Apprentice of Oz is a human-in-the-loop Wizard of Oz system designed to reduce the wizard's cognitive load in each subtask. Each task is co-performed by the wizard and the system. This paper describes the system design from the view of each subtask.
Pages: 516-517
Citations: 4
Authoring Robot Presentation for Promoting Reflection on Presentation Scenario
Pub Date : 2019-03-01 DOI: 10.1109/HRI.2019.8673278
Mitsuhiro Goto, Tatsuya Ishino, Keisuke Inazawa, N. Matsumura, Tadashi Nunobiki, A. Kashihara
In presentations, presenters are required to use non-verbal behavior, such as face direction and gesture, which is important for promoting the audience's understanding. However, it is not simple for presenters to use non-verbal behavior appropriate to the presentation context. To address this issue, this paper proposes a robot presentation system that allows presenters to reflect on their presentation by authoring the presentation scenario used by the robot. The features of the proposed system are that presenters can easily and quickly author and modify their presentation, and that they become aware of points to be modified. In addition, this paper reports a case study using the system with six participants, whose purpose was to compare the proposed system with a conventional system in terms of the effort of authoring the scenario. The results suggest that our system allows presenters to easily and quickly modify their presentation.
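An authored scenario of this kind can be pictured as a sequence of steps pairing an utterance with non-verbal annotations. The schema and field names below are illustrative assumptions, not the paper's actual data model:

```python
from dataclasses import dataclass

@dataclass
class ScenarioStep:
    """One authored step of a robot presentation scenario.
    Field names are illustrative assumptions, not the paper's schema."""
    utterance: str
    face_direction: str = "audience"   # e.g. "audience", "screen"
    gesture: str = "none"              # e.g. "point_at_screen", "open_arms"

scenario = [
    ScenarioStep("This graph shows our main result.",
                 face_direction="screen", gesture="point_at_screen"),
    ScenarioStep("Any questions so far?"),
]

def revise(steps, index, **changes):
    """Modify one step in place, mirroring the system's goal of quick
    authoring and revision of the robot's non-verbal behavior."""
    for key, value in changes.items():
        setattr(steps[index], key, value)
    return steps
```

A presenter who notices, on reflection, that the second step needs a welcoming gesture would call `revise(scenario, 1, gesture="open_arms")` rather than re-authoring the whole scenario.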
Pages: 660-661
Citations: 3
Aquaticus: Publicly Available Datasets from a Marine Human-Robot Teaming Testbed
Pub Date : 2019-03-01 DOI: 10.1109/HRI.2019.8673176
M. Novitzky, P. Robinette, M. Benjamin, Caileigh Fitzgerald, Henrik Schmidt
In this paper, we introduce publicly available human-robot teaming datasets captured during the summer 2018 season using our Aquaticus testbed. The Aquaticus testbed is designed to examine interactions between human-human and human-robot teammates situated in the marine environment in their own vehicles. In particular, we assess these interactions while humans and fully autonomous robots play a competitive game of capture the flag on the water. Our testbed is unique in that the humans are situated in the field with their fully autonomous robot teammates, in vehicles that have similar dynamics. Holding the competition on the water reduces the safety concerns and cost of performing similar experiments in the air or on the ground, and it creates a complex, dynamic, and partially observable view of the world for participants in their motorized kayaks. The main modality for teammate interaction is audio, to better simulate real-world tactical situations, e.g., fighter pilots talking to each other over radios.
We have released our complete datasets publicly to enable researchers throughout the HRI community who do not have access to such a testbed, and who may have expertise beyond our own, to leverage the datasets, perform their own analysis, and contribute to the HRI community.
Pages: 392-400
Citations: 10
Engaging Persons with Neuro-Developmental Disorder with a Plush Social Robot
Pub Date : 2019-03-01 DOI: 10.1109/HRI.2019.8673107
D. Fisicaro, Francesco Pozzi, M. Gelsomini, F. Garzotto
The use of social robots in interventions for persons with Neuro-Developmental Disorder (NDD) has been explored in several studies. This paper describes a plush social robot with an elephant appearance, called ELE, that acts as a conversational companion and is designed to promote the engagement of persons with NDD during interventions. We also present an initial evaluation of ELE and preliminary results on visual-attention improvement in a storytelling context.
Pages: 610-611
Citations: 5
Recognizing F-Formations in the Open World
Pub Date : 2019-03-01 DOI: 10.1109/HRI.2019.8673233
Hooman Hedayati, D. Szafir, Sean Andrist
A key skill for social robots in the wild will be to understand the structure and dynamics of conversational groups in order to fluidly participate in them. Social scientists have long studied the rich complexity underlying such focused encounters, or F-formations. However, the current state-of-the-art algorithms that robots might use to recognize F-formations are highly heuristic and quite brittle. In this report, we explore a data-driven approach to detecting F-formations from sets of tracked human positions and orientations, trained and evaluated on two openly available human-only datasets and a small human-robot dataset that we collected. We also discuss the potential for further computational characterization of F-formations beyond simply detecting their occurrence.
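To see the kind of heuristic baseline the abstract contrasts against, a common geometric trick is to project each tracked person a fixed distance along their facing direction and average the projected points as an estimate of the group's shared o-space centre. This is a minimal sketch of that general idea, not the paper's detection method; the stride value is an assumption:

```python
import math

def o_space_center(people, stride=0.8):
    """Estimate the o-space centre of a candidate F-formation.
    Each person is (x, y, theta): position in metres and facing angle
    in radians. Projects each person `stride` metres along their facing
    direction and averages the projected points (stride is an assumption)."""
    pts = [(x + stride * math.cos(t), y + stride * math.sin(t))
           for x, y, t in people]
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    return cx, cy

# Two people 2 m apart, facing each other: projected points land near the
# midpoint between them, so the estimated centre is approximately (1.0, 0.0).
pair = [(0.0, 0.0, 0.0), (2.0, 0.0, math.pi)]
center = o_space_center(pair)
```

Brittleness shows up immediately: the estimate degrades when people stand at varied distances or glance away, which is exactly the kind of noise a data-driven detector trained on real tracked positions and orientations can absorb.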
Pages: 558-559
Citations: 18
Personal Partner Agents for Cooperative Intelligence
Pub Date : 2019-03-01 DOI: 10.1109/HRI.2019.8673179
Kotaro Funakoshi, Hideaki Shimazaki, T. Kumada, H. Tsujino
We advocate cooperative intelligence (CI), which achieves its goals by cooperating with other agents, particularly human beings, with limited resources in complex and dynamic environments. CI is important because it delivers better performance across a broad range of tasks; furthermore, cooperativeness is key to human intelligence, and the processes of cooperation can help people realize several life values. This paper discusses the elements of CI and our research approach to it. We identify four aspects of CI: adaptive intelligence, collective intelligence, coordinative intelligence, and collaborative intelligence. We take an approach that focuses on implementing coordinative intelligence in the form of personal partner agents (PPAs) and consider the design of our robotic research platform to physically realize PPAs.
Pages: 570-571
Citations: 4
Affective Robot Movement Generation Using CycleGANs
Pub Date : 2019-03-01 DOI: 10.1109/HRI.2019.8673281
Michael Suguitan, Mason Bretan, Guy Hoffman
Social robots use gestures to express internal and affective states, but their interactive capabilities are hindered by relying on preprogrammed or hand-animated behaviors, which can be repetitive and predictable. We propose a method for automatically synthesizing affective robot movements given manually-generated examples. Our approach is based on techniques adapted from deep learning, specifically generative adversarial neural networks (GANs).
Pages: 534-535
Citations: 10
SAIL: Simulation-Informed Active In-the-Wild Learning
Pub Date : 2019-03-01 DOI: 10.1109/HRI.2019.8673019
Elaine Schaertl Short, Adam Allevato, A. Thomaz
Robots in real-world environments may need to adapt context-specific behaviors learned in one environment to new environments with new constraints. In many cases, co-present humans can provide the robot with information, but it may not be safe for them to provide hands-on demonstrations, and there may not be a dedicated supervisor to provide constant feedback. In this work we present the SAIL (Simulation-Informed Active In-the-Wild Learning) algorithm for learning new approaches to manipulation skills starting from a single demonstration. In this three-step algorithm, the robot simulates task execution to choose new potential approaches; collects unsupervised data on task execution in the target environment; and finally, chooses informative actions to show to co-present humans to obtain labels. Our approach enables a robot to learn new ways of executing two different tasks using success/failure labels obtained from naïve users in a public space, performing 496 manipulation actions and collecting 163 labels from users in the wild over six 45-minute to 1-hour deployments. We show that classifiers based on low-level sensor data can accurately distinguish between successful and unsuccessful motions in a multi-step task (p < 0.005), even when trained in the wild. We also show that using the sensor data to choose which actions to sample is more effective than choosing the least-sampled action.
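The three-step loop the abstract names (simulate, collect unsupervised data, actively query humans) can be sketched as a skeleton. Everything here is an assumption layered on the abstract's description: the function boundaries, the Gaussian perturbation of the demonstration, and the variance-based informativeness score are stand-ins, not the authors' implementation:

```python
import random

def perturb(params):
    """Gaussian perturbation of demonstrated motion parameters (assumed)."""
    return [p + random.gauss(0.0, 0.05) for p in params]

def uncertainty(trace):
    """Placeholder informativeness score: variance of the sensor trace."""
    m = sum(trace) / len(trace)
    return sum((t - m) ** 2 for t in trace) / len(trace)

def sail_round(demo, simulate, execute, ask_label, n_candidates=10):
    """One round of the three-step loop sketched in the abstract."""
    # 1. Simulation-informed proposal: perturb the single demonstration
    #    and keep only candidates that succeed in simulation.
    candidates = [c for c in (perturb(demo) for _ in range(n_candidates))
                  if simulate(c)]
    # 2. Unsupervised in-the-wild data: execute each candidate in the
    #    target environment and record a low-level sensor trace, no labels.
    traces = [execute(c) for c in candidates]
    # 3. Active querying: show the most informative execution to a
    #    co-present human and ask for a success/failure label.
    pick = max(range(len(traces)), key=lambda i: uncertainty(traces[i]))
    return candidates[pick], ask_label(candidates[pick])

# Demo with stub environment functions (all stubs are assumptions).
random.seed(0)
best, label = sail_round(
    demo=[0.0, 1.0],
    simulate=lambda c: True,    # every candidate "succeeds" in simulation
    execute=lambda c: c,        # sensor trace == the parameters themselves
    ask_label=lambda c: True,   # the passer-by always reports success
)
```

The division of labor matches the abstract's motivation: simulation filters unsafe candidates before any physical execution, and the human is only consulted for cheap success/failure labels rather than hands-on demonstrations.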
Pages: 468-477
Citations: 6
Reducing Overtrust in Failing Robotic Systems
Pub Date : 2019-03-01 DOI: 10.1109/HRI.2019.8673235
Anders B. H. Christensen, C. R. Dam, Corentin Rasle, Jacob E. Bauer, Ramlo A. Mohamed, L. Jensen
In general, people tend to place too much trust in robotic systems, even in emergency situations. Our study attempts to discover ways of reducing this overtrust by adding vocal error warnings from a robot that guides blindfolded participants through a maze. The results indicate that the tested vocal warnings have no effect on reducing overtrust, but we encourage further testing of similar warnings to fully explore their potential effects.
Pages: 542-543
Citations: 5