
Latest Articles in ACM Transactions on Human-Robot Interaction

Visuo-Textual Explanations of a Robot's Navigational Choices
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580141
Amar Halilovic, F. Lindner
With the rise in the number of robots in our daily lives, human-robot encounters will become more frequent. To improve human-robot interaction (HRI), people will require explanations of robots' actions, especially if they do something unexpected. Our focus is on robot navigation, where we explain why robots make specific navigational choices. Building on methods from the area of Explainable Artificial Intelligence (XAI), we employ a semantic map and techniques from the area of Qualitative Spatial Reasoning (QSR) to enrich visual explanations with knowledge-level spatial information. We outline how a robot can generate visual and textual explanations simultaneously and test our approach in simulation.
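The paper itself includes no code; as an illustrative sketch only (all labels, relations, and thresholds below are hypothetical, not the authors' implementation), a textual explanation could be generated from qualitative spatial relations attached to a semantic map roughly like this:

```python
# Illustrative sketch: verbalizing a navigation choice from coarse qualitative
# spatial relations (QSR-style) over semantic-map objects. All names,
# thresholds, and the relation vocabulary are assumptions for illustration.

def qsr_relation(robot_xy, obj):
    """Map metric positions to a coarse qualitative (proximity, side) pair."""
    dx = obj["x"] - robot_xy[0]
    dy = obj["y"] - robot_xy[1]
    dist = (dx * dx + dy * dy) ** 0.5
    proximity = "near" if dist < 2.0 else "far from"
    side = "left of" if dx < 0 else "right of"  # simplified robot-frame side
    return proximity, side

def explain_choice(robot_xy, avoided, chosen_side):
    """Render one navigational choice as a short natural-language sentence."""
    proximity, side = qsr_relation(robot_xy, avoided)
    return (f"I turned {chosen_side} because the {avoided['label']} "
            f"{proximity} me on my {side.split()[0]} blocked the shorter path.")

print(explain_choice((0.0, 0.0), {"label": "table", "x": 1.0, "y": 0.5}, "left"))
```

The same qualitative relation could equally drive the visual channel, e.g. by highlighting the blocking object on the rendered map.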
Citations: 1
Variable Autonomy for Human-Robot Teaming (VAT)
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568294.3579957
Manolis Chiou, S. Booth, Bruno Lacerda, Andreas Theodorou, S. Rothfuss
As robots are introduced to various domains and applications, Human-Robot Teaming (HRT) capabilities become essential. Such capabilities involve teaming with humans in-, on-, or out-of-the-loop at different levels of abstraction, leveraging the complementary capabilities of humans and robots. This requires robotic systems that can dynamically vary their level or degree of autonomy to collaborate with humans efficiently and overcome challenging circumstances. Variable Autonomy (VA) is an umbrella term for such research, including but not limited to shared control and shared autonomy, mixed initiative, adjustable autonomy, and sliding autonomy. This workshop is driven by the timely need to bring together VA-related research and practices that, because the field is relatively young, are often disconnected across communities. The workshop's goal is to consolidate research in VA. To this end, and given the complexity and span of human-robot systems, the workshop will adopt a holistic, trans-disciplinary approach that aims to a) identify and classify related common challenges and opportunities; b) identify the disciplines that need to come together to tackle those challenges; c) identify and define common terminology, approaches, methodologies, benchmarks, and metrics; and d) define short- and long-term research goals for the community. To achieve these objectives, the workshop aims to bring together industry stakeholders, researchers from fields under the VA banner, and specialists from highly related fields such as human factors and psychology. The workshop will consist of a mix of invited talks, contributed papers, and an interactive discussion panel, working toward a shared vision for VA.
Citations: 0
Human-Drone Interaction: Interacting with People Smoking in Prohibited Areas
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580173
Yermakhan Kassym, Saparkhan Kassymbekov, Kamila Zhumakhanova, A. Sandygulova
Drones continue to enter our daily lives as they are used in a growing number of applications, creating a natural demand for better ways for humans and drones to interact. One possible application that would benefit from improved interaction is the inspection of smoking in prohibited areas. We propose a custom drone flight gesture that we believe delivers the message "not to smoke" better than the ready-made built-in gesture. To this end, we conducted a within-subjects experiment with 19 participants in which we evaluated the gestures on a drone operated through a Wizard-of-Oz interaction design. The results show that the proposed gesture conveyed the message better than the built-in gesture.
Citations: 0
HighLight
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.5040/9781350088733.0124
Alessandro Cabrio, Negin Hashmati, Philip Rabia, Liina Tumma, Hugo Wärnberg, Sjoerd Hendriks, Mohammad Obaid
Citations: 0
Stretch to the Client; Re-imagining Interfaces
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580212
Kay N. Wojtowicz, M. E. Cabrera
This paper presents efforts toward creating a client interface for the Hello Robot Stretch. The goal is an accessible interface that provides the best possible user experience. The interface enables users to control Stretch with basic commands through several modalities. To make it accessible, a simple and clear web interface was crafted so that users of differing abilities can interact with Stretch successfully. A voice-activated option was also added to further broaden the range of possible interactions.
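The abstract's multi-modality idea can be sketched as a single dispatch layer: web buttons and transcribed voice commands both reduce to the same small command set. The command names and the primitive table below are hypothetical, not the paper's or Stretch's actual API:

```python
# Hypothetical sketch of multimodal command dispatch: clicked buttons and
# voice transcripts normalize to one shared set of motion primitives.
# Primitive names and deltas are illustrative assumptions.

MOTION_PRIMITIVES = {
    "forward": {"joint": "base", "delta": 0.1},   # metres
    "back":    {"joint": "base", "delta": -0.1},
    "lift up": {"joint": "lift", "delta": 0.05},
    "arm out": {"joint": "arm",  "delta": 0.05},
}

def dispatch(utterance):
    """Normalize a button label or voice transcript to a motion primitive."""
    key = utterance.strip().lower()
    if key not in MOTION_PRIMITIVES:
        return {"error": f"unknown command: {utterance!r}"}
    return MOTION_PRIMITIVES[key]

print(dispatch("Forward"))  # same primitive whether typed, clicked, or spoken
print(dispatch("dance"))    # unknown commands are rejected, not executed
```

Funnelling every modality through one validated command set is what keeps such an interface safe to extend with new input channels.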
Citations: 0
Practical Development of a Robot to Assist Cognitive Reconstruction in Psychiatric Day Care
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580150
Takuto Akiyoshi, H. Sumioka, Hirokazu Kumazaki, Junya Nakanishi, Hirokazu Kato, M. Shiomi
One important role for social robots is supporting mental health through conversations with people. In this study, we focus on the column method, a cognitive-restructuring technique also used as one of the programs in psychiatric day care, which helps patients think flexibly and understand their own characteristics. To develop a robot that assists psychiatric day care patients in organizing their thoughts about their worries and goals through conversation, we designed the robot's conversation content based on the column method and implemented an autonomous conversation function. This paper reports preliminary experiments conducted to evaluate and improve the effectiveness of this prototype system in an actual psychiatric day care setting, along with comments from the experiment participants and day care staff.
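The column method structures a conversation as a fixed sequence of prompts whose answers fill in one "column" each. As a sketch only (the prompts below are paraphrased placeholders, not the authors' actual conversation content), the flow could look like:

```python
# Illustrative sketch of a column-method dialogue flow: the robot walks the
# user through the columns in order and stores each answer. Prompts are
# hypothetical paraphrases, not the paper's conversation content.

COLUMN_PROMPTS = [
    ("situation",         "What happened?"),
    ("automatic_thought", "What went through your mind?"),
    ("emotion",           "How did that make you feel?"),
    ("alternative",       "Is there another way to look at it?"),
]

def run_column_dialogue(answers):
    """Pair each scripted prompt with the user's answer, preserving order."""
    filled = {}
    for (column, prompt), answer in zip(COLUMN_PROMPTS, answers):
        filled[column] = {"prompt": prompt, "answer": answer}
    return filled

record = run_column_dialogue(
    ["I was late to day care", "Everyone is upset with me",
     "anxious", "Being late once does not mean people are upset"])
print(record["emotion"]["answer"])
```

Keeping the filled columns as structured data is what would let the robot (or staff) review a session afterwards rather than only conversing in the moment.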
Citations: 0
Crowdsourcing Task Traces for Service Robotics
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580112
David J. Porfirio, Allison Sauppé, M. Cakmak, Aws Albarghouthi, Bilge Mutlu
Demonstration is an effective end-user development paradigm for teaching robots how to perform new tasks. In this paper, we posit that demonstration is useful not only as a teaching tool, but also as a way to understand and assist end-user developers in thinking about a task at hand. As a first step toward gaining this understanding, we constructed a lightweight web interface to crowdsource step-by-step instructions of common household tasks, leveraging the imaginations and past experiences of potential end-user developers. As evidence of the utility of our interface, we deployed the interface on Amazon Mechanical Turk and collected 207 task traces that span 18 different task categories. We describe our vision for how these task traces can be operationalized as task models within end-user development tools and provide a roadmap for future work.
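A crowdsourced task trace is essentially an ordered list of free-text steps tagged with a task category. As a sketch under assumed field names (the paper's actual schema is not specified here), such traces might be stored and grouped like this:

```python
# Sketch of a task-trace record and category grouping, as a first step toward
# turning crowdsourced step lists into task models. Field names are
# illustrative assumptions, not the paper's schema.
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class TaskTrace:
    category: str                               # e.g. "make coffee"
    steps: list = field(default_factory=list)   # ordered free-text instructions

def group_by_category(traces):
    """Bucket traces so each category's step-sequence variants can be compared."""
    buckets = defaultdict(list)
    for t in traces:
        buckets[t.category].append(t.steps)
    return dict(buckets)

traces = [
    TaskTrace("make coffee", ["fill kettle", "boil water", "pour over grounds"]),
    TaskTrace("make coffee", ["load pod", "press brew"]),
    TaskTrace("water plants", ["fill can", "water each pot"]),
]
grouped = group_by_category(traces)
print(len(grouped["make coffee"]))
```

Grouping variants per category is the natural precursor to the paper's goal of operationalizing traces as task models: divergent step sequences within one category expose the choice points a task model must represent.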
Citations: 0
Mixed Reality-based Exergames for Upper Limb Robotic Rehabilitation
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580124
Nadia Vanessa Garcia Hernandez, S. Buccelli, M. Laffranchi, L. D. De Michieli
Robotic rehabilitation devices show strong potential for intensive, task-oriented, and personalized motor training. Integrating Mixed Reality (MR) technology and tangible objects into these systems allows the creation of attractive, stimulating, and personalized hybrid environments. Using a gamification approach, MR-based robotic training can increase patients' motivation, engagement, and enjoyment. This paper presents two Mixed Reality-based exergames for performing bimanual exercises assisted by a shoulder rehabilitation exoskeleton and using tangible objects. The system was designed through a user-centered iterative process. It evaluates task performance and cost-function metrics from a kinematic analysis of the hands' movement. A preliminary evaluation shows that the system operates correctly and stimulates the desired upper limb movements.
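The kind of kinematic metric such a system might compute from tracked hand positions can be sketched with two standard quantities, path length and path efficiency; the paper's actual cost functions are not specified here, so these are stand-in examples:

```python
# Sketch of kinematic metrics over sampled 3D hand positions: travelled path
# length, and the straight-line/travelled ratio as a simple efficiency score.
# These are generic stand-ins, not the paper's specific cost functions.

def path_length(points):
    """Sum of Euclidean distances between successive 3D hand samples."""
    total = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
    return total

def path_efficiency(points):
    """Straight-line distance over travelled distance (1.0 = perfectly direct)."""
    (x0, y0, z0), (x1, y1, z1) = points[0], points[-1]
    straight = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
    travelled = path_length(points)
    return straight / travelled if travelled else 1.0

hand = [(0, 0, 0), (0.1, 0, 0), (0.2, 0.1, 0), (0.3, 0.1, 0)]
print(round(path_efficiency(hand), 3))
```

Metrics of this shape are easy to compute per exercise repetition, which is what makes them usable as in-game feedback as well as clinical outcome measures.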
Citations: 0
Transfer Learning of Human Preferences for Proactive Robot Assistance in Assembly Tasks
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568162.3576965
Heramb Nemlekar, N. Dhanaraj, Angelos Guan, S. Gupta, S. Nikolaidis
We focus on enabling robots to proactively assist humans in assembly tasks by adapting to their preferred sequence of actions. Much work on robot adaptation requires human demonstrations of the task. However, human demonstrations of real-world assemblies can be tedious and time-consuming. Thus, we propose learning human preferences from demonstrations in a shorter, canonical task to predict user actions in the actual assembly task. The proposed system uses the preference model learned from the canonical task as a prior and updates the model through interaction when predictions are inaccurate. We evaluate the proposed system in simulated assembly tasks and in a real-world human-robot assembly study and we show that both transferring the preference model from the canonical task, as well as updating the model online, contribute to improved accuracy in human action prediction. This enables the robot to proactively assist users, significantly reduce their idle time, and improve their experience working with the robot, compared to a reactive robot.
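The prior-plus-online-update idea can be sketched minimally (this is not the authors' model, which learns from a canonical task to predict actions in a different actual task; here a simple frequency prior stands in for the transferred preference model):

```python
# Minimal sketch of "canonical-task prior + online correction": action
# frequencies from a short canonical task act as a prior, and counts are
# reinforced by what the user actually does. A stand-in for, not a
# reproduction of, the paper's preference model.
from collections import Counter

class PreferencePredictor:
    def __init__(self, canonical_actions):
        # Prior: action frequencies observed in the canonical task.
        self.counts = Counter(canonical_actions)

    def predict(self):
        """Most frequent action under current counts."""
        return self.counts.most_common(1)[0][0]

    def observe(self, actual_action):
        """Online update: reinforce what the user did; report if we were right."""
        predicted = self.predict()
        self.counts[actual_action] += 1
        return predicted == actual_action

pred = PreferencePredictor(["screw", "screw", "bolt"])
print(pred.predict())   # "screw" under the canonical prior
pred.observe("bolt")
pred.observe("bolt")
print(pred.predict())   # prior overturned by repeated contrary evidence
```

The point mirrored from the paper is that the prior gives useful predictions before any in-task data exists, while the online update keeps wrong predictions from persisting.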
Citations: 0
A Multimodal Dataset for Robot Learning to Imitate Social Human-Human Interaction
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580080
Nguyen Tan Viet Tuyen, A. Georgescu, Irene Di Giulio, O. Çeliktutan
Humans use a variety of nonverbal signals to communicate with their interaction partners. Previous studies have used this channel as an essential clue for developing automatic approaches to understanding, modelling, and synthesizing individual behaviours in human-human and human-robot interaction settings. In small-group interactions, however, an essential aspect of communication is the dynamic exchange of social signals among interlocutors. This paper introduces LISI-HHI (Learning to Imitate Social Human-Human Interaction), a dataset of dyadic human interactions recorded in a wide range of communication scenarios. The dataset contains multiple modalities captured simultaneously by high-accuracy sensors, including motion capture, RGB-D cameras, eye trackers, and microphones. LISI-HHI is designed to be a benchmark for HRI and multimodal learning research, for modelling intra- and interpersonal nonverbal signals in social interaction contexts and for investigating how to transfer such models to social robots.
Citations: 2