
Latest publications: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)

The Maze of Realizing Empathy with Social Robots
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223466
Marialejandra García-Corretjer, Raquel Ros, F. Martin, David Miralles
Current trends envisage an evolution of collaboration, engagement, and relationships between humans and devices, intelligent agents and robots in our everyday life. Some of the key elements under study are affective states, motivation, trust, care, and empathy. This paper introduces an empathy test-bed that serves as a case study for an existing empathy model. The model describes the steps that need to occur in the process to provoke meaning in empathy, as well as the variables and elements that contextualise those steps. Based on this approach we have developed a fun collaborative scenario where a user and a social robot work together to solve a maze. A set of exploratory trials is carried out to gather insights into how users perceive the proposed test-bed in terms of attachment and trust, which are basic elements for the realisation of empathy.
Citations: 2
Social Bonding Increases Unsolicited Helpfulness Towards A Bullied Robot
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223454
Barbara Kühnlenz, K. Kühnlenz
This paper is a first step towards the investigation of civil courage in human-robot interaction (HRI). The main research question is whether human users would help a robot being bullied by other humans. Previous work showed that pro-social behavior towards a robot can be induced in human users by applying mechanisms of social bonding, with the robot pro-actively asking for their help in order to accomplish a specific task. In contrast, this paper investigates unsolicited helpful behavior towards a robot that is bullied by a third person subsequent to an interaction task. To this end, social bonding in the form of small talk, including explicit emotional adaptation to induce a feeling of similarity, is applied to a human-robot dialog scenario in a user study. As an interaction context, a cooperative object classification task is chosen, in which a robot reads objects from a list it needs in order to fulfill another task later. To induce bullying behavior, the list is taken away from the robot by a disruptive third person after the completed interaction. The two experimental conditions of the study differ in whether or not social bonding is applied prior to the interaction. Consistent with previous work, results showed increased ratings for social presence and anthropomorphism, as well as increased unsolicited helpfulness of the participants in the social bonding condition. Surprisingly, unsolicited help occurred only verbally and was directed towards the robot, and none of the human users took action against the bullying third person. It is discussed that this may be due to social-psychological side effects caused by the passive presence of the human experimenter, and that additional channels of emotional adaptation by the robot may be required.
Citations: 1
A Measure to Match Robot Plans to Human Intent: A Case Study in Multi-Objective Human-Robot Path-Planning*
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223330
M. T. Shaikh, M. Goodrich
Measuring how well a potential solution to a problem matches the problem-holder’s intent and detecting when a current solution no longer matches intent is important when designing resilient human-robot teams. This paper addresses intent-matching for a robot path-planning problem that includes multiple objectives and where human intent is represented as a vector in the multi-objective payoff space. The paper introduces a new metric called the intent threshold margin and shows that it can be used to rank paths by how close they match a specified intent. The rankings induced by the metric correlate with average human rankings (obtained in an MTurk study) of how closely different paths match a specified intent. The intuition of the intent threshold margin is that it represents how much the human’s intent must be "relaxed" to match the payoffs for a specified path.
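A hypothetical sketch of how such a margin could be computed, assuming the intent vector holds per-objective payoff thresholds, higher payoffs are better, and the margin is the smallest uniform relaxation of those thresholds that the path's payoff satisfies (the paper's exact formulation may differ):

```python
def intent_threshold_margin(payoff, intent):
    """Smallest uniform relaxation eps >= 0 of the intent thresholds such
    that payoff[i] >= intent[i] - eps for every objective i.
    A margin of 0 means the path already matches the stated intent."""
    return max(0.0, max(t - p for p, t in zip(payoff, intent)))

def rank_paths(paths, intent):
    """Rank candidate paths by how closely they match the intent:
    a smaller margin means a closer match."""
    return sorted(paths, key=lambda p: intent_threshold_margin(p["payoff"], intent))
```

For example, with intent thresholds `[0.8, 0.9]`, a path with payoffs `[0.9, 0.8]` needs only a 0.1 relaxation on the second objective, so it ranks ahead of a path with payoffs `[0.5, 1.0]`, which needs a 0.3 relaxation on the first.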
Citations: 1
Enactively Conscious Robots: Why Enactivism Does Not Commit the Intermediate Level Fallacy *
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223494
A. Scarinzi
Conscious experience is needed to adapt to novel and significant events, to perform actions, to have perceptions. This contribution shows how a robot can be enactively conscious. It questions the view by Manzotti and Chella (2018) according to which the enactive approach to consciousness falls into the so called "intermediate level fallacy" and shows that the authors’ remark is implausible because it is based on a partial and reductive view both of enactivism and of one of its main tenets called embodiment. The original enactive approach to experience as it was developed by Varela/Thompson/Rosch (1991) is discussed. Manzotti’s and Chella’s criticism that in enactivism it is unclear why the knowledge of the effects of movement on sensory stimulation should lead to conscious experience is rejected. In this contribution, it is explained why sensorimotricity and the actionist approach to perception do lead to (robot) conscious experience in the perception of objects located in outer space.
Citations: 0
The Robot Self-Efficacy Scale: Robot Self-Efficacy, Likability and Willingness to Interact Increases After a Robot-Delivered Tutorial
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223535
Nicole L. Robinson, Teah-Neal Hicks, Gavin Suddrey, D. Kavanagh
An individual’s self-efficacy to interact with a robot has important implications for the content, utility and success of the interaction. Individuals need to achieve a high level of self-efficacy in human-robot interaction within a reasonable time-frame for positive effects to occur in short-term human-robot scenarios. This trial explored the impact of a 2-minute automated robot-delivered tutorial, designed to teach people from the general public how to use the robot, as a method to increase robot self-efficacy scores. The trial assessed scores before (T1) and after (T2) an interaction with the robot to investigate changes in self-efficacy, likability and willingness to use it. The 40 participants recruited had, on average, a very low level of robotics experience. After the tutorial, people reported significantly higher robot self-efficacy, with very large effect sizes for operating the robot and applying it to a task ($\eta_p^2 = 0.727$ and $0.660$). Significant increases in likability and willingness to interact with the robot were also found ($\eta_p^2 = 0.465$ and $0.480$). Changes in likability and self-efficacy contributed to 64% of the variance in changes to willingness to use the robot. Initial differences in robot self-efficacy were found for older people and those with less robotics and programming experience compared with other participants, but scores across these subgroups were similar after completion of the tutorial. This demonstrated that high levels of self-efficacy, likeability and willingness to use a social robot can be reached in a very short time, and at comparable levels, regardless of age or prior robotics experience. This outcome has significant implications for future trials using social robots, since these variables can strongly influence experimental outcomes.
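The reported effect sizes are partial eta squared values, a standard ANOVA effect-size measure computed from sums of squares. A minimal helper (the input values below are illustrative, not the study's data):

```python
def partial_eta_squared(ss_effect, ss_error):
    """Partial eta squared: SS_effect / (SS_effect + SS_error), i.e. the
    proportion of variance attributable to an effect after partialling
    out the other effects in the design."""
    return ss_effect / (ss_effect + ss_error)

# An effect accounting for 8 units of variance against 2 units of
# residual error yields a very large effect size of 0.8.
```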
Citations: 8
Resolving Clashing Norms Using Tiered Utility
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223334
Sean Welsh
There are times when norms clash in a given situation. One norm requires an agent to do X. Another requires an agent to not do X but to do Y or nothing (not X) instead. This paper describes a way to resolve clashes between norms using a concept of tiered utility that has the potential to be automated. Classical utility has polarity and magnitude. Tiered utility has polarity, magnitude and tiers. Tiers are used for lexicographic preference orderings that enable correct normative choices by robots.
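Lexicographic orderings over tiers map naturally onto tuple comparison. Below is a hypothetical sketch of how a tiered-utility choice could be automated; the option names and tier values are invented for illustration and are not from the paper:

```python
def best_option(options):
    """Choose among clashing norms by tiered utility.

    Each option carries a tuple of per-tier utilities, ordered from the
    highest-priority tier down. Python's tuple comparison is already
    lexicographic, so a large magnitude in a lower tier can never
    outweigh any advantage in a higher tier."""
    return max(options, key=lambda o: tuple(o["tiers"]))

# Hypothetical clash: one norm says keep a promise (high everyday
# utility), another says prevent harm (decisive safety-tier utility).
norms = [
    {"name": "keep promise", "tiers": (0, 9)},   # (safety tier, convenience tier)
    {"name": "prevent harm", "tiers": (1, -2)},
]
```

Here `prevent harm` wins despite its negative convenience-tier utility, because its safety-tier utility dominates lexicographically.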
Citations: 0
Model Mediated Teleoperation with a Hand-Arm Exoskeleton in Long Time Delays Using Reinforcement Learning
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223477
H. Mohammadi, Matthias Kerzel, Benedikt Pleintinger, T. Hulin, Philipp Reisich, A. Schmidt, Aaron Pereira, S. Wermter, Neal Y. Lii
Telerobotic systems must adapt to new environmental conditions and deal with the high uncertainty caused by long time delays. As one of the best alternatives to human-level intelligence, Reinforcement Learning (RL) may offer a solution to cope with these issues. This paper proposes to integrate RL with the Model Mediated Teleoperation (MMT) concept. The teleoperator interacts with a simulated virtual environment, which provides instant feedback. Whereas feedback from the real environment is delayed, feedback from the model is instantaneous, leading to high transparency. The MMT is realized in combination with an intelligent system with two layers. The first layer utilizes Dynamic Movement Primitives (DMP), which account for certain changes in the avatar environment. The second layer addresses the problems caused by uncertainty in the model using RL methods. Augmented reality is also provided to fuse the avatar device and virtual environment models for the teleoperator. Implemented on DLR’s Exodex Adam hand-arm haptic exoskeleton, the results show that RL methods are able to find different solutions when changes are applied to the object position after the demonstration. The results also show DMPs to be effective at adapting to new conditions where there is no uncertainty involved.
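Dynamic Movement Primitives, as used in the first layer, follow a standard discrete-DMP formulation: a spring-damper transformation system shaped by a phase-driven forcing term. A minimal 1-D rollout sketch (the gains, phase constant, and forcing scaling are illustrative textbook defaults, not the paper's values):

```python
def dmp_rollout(y0, goal, forcing, tau=1.0, alpha=25.0, beta=6.25,
                alpha_x=3.0, dt=0.01, t_end=1.0):
    """Roll out a minimal 1-D discrete Dynamic Movement Primitive.

    Transformation system: tau^2 * ydd = alpha*(beta*(goal - y) - tau*yd) + f,
    with forcing term f = forcing(x) * x * (goal - y0) driven by the
    canonical phase x, which decays from 1 towards 0. With forcing == 0
    the system is a stable point attractor at `goal`."""
    y, yd, x = y0, 0.0, 1.0
    traj = [y]
    for _ in range(int(t_end / dt)):
        f = forcing(x) * x * (goal - y0)
        ydd = (alpha * (beta * (goal - y) - tau * yd) + f) / tau ** 2
        yd += ydd * dt            # Euler integration of the velocity
        y += yd * dt              # ... and of the position
        x += (-alpha_x * x / tau) * dt  # canonical system: phase 1 -> 0
        traj.append(y)
    return traj
```

A learned forcing term shapes the trajectory on the way to the goal; re-targeting the same primitive to a new goal only rescales the forcing, which is what makes DMPs adapt cheaply to changed object positions.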
Citations: 11
Role Switching in Task-Oriented Multimodal Human-Robot Collaboration
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223461
Natawut Monaikul, Bahareh Abbasi, Zhanibek Rysbek, Barbara Di Eugenio, M. Žefran
In a collaborative task and the interaction that accompanies it, the participants often take on distinct roles, and dynamically switch the roles as the task requires. A domestic assistive robot thus needs to have similar capabilities. Using our previously proposed Multimodal Interaction Manager (MIM) framework, this paper investigates how role switching for a robot can be implemented. It identifies a set of primitive subtasks that encode common interaction patterns observed in our data corpus and that can be used to easily construct complex task models. It also describes an implementation on the NAO robot that, together with our original work, demonstrates that the robot can take on different roles. We provide a detailed analysis of the performance of the system and discuss the challenges that arise when switching roles in human-robot interactions.
Citations: 7
How Social Robots Influence People’s Trust in Critical Situations
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223471
Alessandra Rossi, K. Dautenhahn, K. Koay, M. Walters
As we expect that the presence of autonomous robots in our everyday life will increase, we must consider that people will not only have to accept robots as a fundamental part of their lives, but will also have to trust them to reliably and securely engage in collaborative tasks. Several studies showed that people are more comfortable interacting with robots that respect social conventions. However, it is still not clear whether a robot that expresses social conventions will more readily gain people’s trust. In this study, we aimed to assess whether the use of social behaviours and natural communications can affect humans’ sense of trust and companionship towards robots. We conducted a between-subjects study in which participants’ trust was tested in three scenarios of increasing trust criticality (low, medium, high), interacting either with a social or a non-social robot. Our findings showed that participants trusted a social and a non-social robot equally in the low and medium consequences scenarios. On the contrary, we observed that participants’ choices to trust the robot in a more sensitive task were affected more by a robot that expressed social cues, with a consequent decrease of their trust in the robot.
引用次数: 15
Calibrating Trust in Human-Drone Cooperative Navigation 人-无人机协同导航中的信任校准
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223509
Kazuo Okamura, S. Yamada
Trust calibration is essential to successful cooperation between humans and autonomous systems such as self-driving cars and autonomous drones. If users over-estimate the capability of an autonomous system, over-trust occurs, and the users rely on the system even in situations in which they could outperform it. On the contrary, if users under-estimate the capability of a system, under-trust occurs, and they tend not to use the system. Since both situations hamper cooperation in terms of safety and efficiency, it would be highly desirable to have a mechanism that helps users keep an appropriate level of trust in autonomous systems. In this paper, we first propose an adaptive trust calibration framework that can detect over- and under-trust from users’ behaviors and encourage them to keep the appropriate trust level in a "continuous" cooperative task. Then, we conduct experiments to evaluate our method with semi-automatic drone navigation. In the experiments, we introduce ABA situations of weather conditions to investigate our method under bidirectional trust changes. The results show that our method adaptively detected trust changes and encouraged users to calibrate their trust in a continuous cooperative task. We believe that the findings of this study will contribute to better user-interface designs for collaborative systems.
引用次数: 8
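The core idea the abstract describes, detecting over- and under-trust by comparing a user's reliance on the system against the system's actual performance, can be sketched in a few lines. This is a minimal illustrative toy, not the authors' algorithm: the reliance/success signals and the `margin` threshold are invented for demonstration.

```python
# Toy sketch of over/under-trust detection (NOT the paper's method).
# Idea: compare how often the user delegates to the autonomous system
# against how often the system actually succeeds; a large gap in either
# direction signals miscalibrated trust.

def calibration_state(reliance_rate: float, success_rate: float,
                      margin: float = 0.15) -> str:
    """Classify trust as 'over-trust', 'under-trust', or 'calibrated'.

    reliance_rate: fraction of decisions delegated to the system (0..1)
    success_rate:  fraction of tasks the system completes correctly (0..1)
    margin:        tolerance band around perfect calibration (assumed value)
    """
    if reliance_rate > success_rate + margin:
        return "over-trust"    # user relies more than performance warrants
    if reliance_rate < success_rate - margin:
        return "under-trust"   # user relies less than performance warrants
    return "calibrated"


if __name__ == "__main__":
    # Delegating 90% of the time to a drone that succeeds 60% of the time
    print(calibration_state(0.9, 0.6))   # over-trust
    # Delegating 30% of the time to a drone that succeeds 80% of the time
    print(calibration_state(0.3, 0.8))   # under-trust
```

In the paper's setting such a detector would run continuously during the cooperative task, triggering a prompt that encourages the user to adjust their trust whenever the state leaves "calibrated".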