The Maze of Realizing Empathy with Social Robots
Marialejandra García-Corretjer, Raquel Ros, F. Martin, David Miralles
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223466
Current trends envisage an evolution of collaboration, engagement, and relationships between humans and the devices, intelligent agents, and robots in our everyday life. Some of the key elements under study are affective states, motivation, trust, care, and empathy. This paper introduces an empathy test-bed that serves as a case study for an existing empathy model. The model describes the steps that must occur in the process for empathy to acquire meaning, as well as the variables and elements that contextualise those steps. Based on this approach, we have developed a fun collaborative scenario in which a user and a social robot work together to solve a maze. A set of exploratory trials is carried out to gather insights on how users perceive the proposed test-bed in terms of attachment and trust, which are basic elements for the realisation of empathy.
{"title":"The Maze of Realizing Empathy with Social Robots","authors":"Marialejandra García-Corretjer, Raquel Ros, F. Martin, David Miralles","doi":"10.1109/RO-MAN47096.2020.9223466","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223466","url":null,"abstract":"Current trends envisage an evolution of collaboration, engagement, and relationship between humans and devices, intelligent agents and robots in our everyday life. Some of the key elements under study are affective states, motivation, trust, care, and empathy. This paper introduces an empathy test-bed that serves as a case study for an existing empathy model. The model describes the steps that need to occur in the process to provoke meaning in empathy, as well as the variables and elements that contextualise those steps. Based on this approach we have developed a fun collaborative scenario where a user and a social robot work together to solve a maze. A set of exploratory trials are carried out to gather insights on how users perceive the proposed test-bed around attachment and trust, which are basic elements for the realisation of empathy.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"28 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115723202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social Bonding Increases Unsolicited Helpfulness Towards A Bullied Robot
Barbara Kühnlenz, K. Kühnlenz
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223454
This paper is a first step towards the investigation of civil courage in human-robot interaction (HRI). The main research question is whether human users would help a robot being bullied by other humans. Previous work showed that pro-social behavior towards a robot can be induced in human users by applying mechanisms of social bonding, with the robot pro-actively asking for their help to accomplish a specific task. In contrast, this paper investigates unsolicited helpful behavior towards a robot being bullied by a third person subsequent to an interaction task. To this end, social bonding in terms of small talk, including explicit emotional adaptation to induce a feeling of similarity, is applied to a human-robot dialog scenario in a user study. As an interaction context, a cooperative object classification task is chosen, in which a robot reads out from a list the objects it will need to fulfill another task later. To induce bullying behavior, the list is taken away from the robot by a disruptive third person after the completed interaction. The two experimental conditions of the study differ in whether or not social bonding is applied prior to the interaction. Consistent with previous work, results showed increased ratings for social presence and anthropomorphism, as well as increased unsolicited helpfulness of the participants, in the social bonding condition. Surprisingly, unsolicited help occurred only verbally and was directed towards the robot: none of the human users took action against the bullying third person. We discuss that this may be due to social-psychological side-effects caused by the passive presence of the human experimenter, and that additional channels of emotional adaptation by the robot may be required.
{"title":"Social Bonding Increases Unsolicited Helpfulness Towards A Bullied Robot","authors":"Barbara Kühnlenz, K. Kühnlenz","doi":"10.1109/RO-MAN47096.2020.9223454","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223454","url":null,"abstract":"This paper is a first step towards the investigation of civil courage in human-robot interaction (HRI). The main research question is if human users would help a robot being bullied by other humans. Previous work showed that pro-social behavior can be induced in human users towards a robot pro-actively asking for their help in order to accomplish a specific task by applying mechanisms of social bonding. In contrast, this paper investigates unsolicited helpful behavior towards a robot being bullied by a third person subsequent to an interaction task. To this end, social bonding in terms of small talk including explicit emotional adaptation to induce a feeling of similarity is applied to a human-robot dialog scenario in a user study. As an interaction context, a cooperative object classification task is chosen, where a robot reads objects from a list needed by the robot to fulfill another task later. To induce bullying behavior, the list is took away from the robot by a disturbing third person after the completed interaction. The two experimental conditions of the study differ in whether or not social bonding is applied prior to the interaction. According to previous work, results showed increased ratings for social presence and anthropomorphism, as well as increased unsolicited helpfulness of the participants in the social bonding condition. Surprisingly, unsolicited help occurred only verbally and directed towards the robot and none of the human users took action against the bullying third person. It is discussed, that this may be due to social-psychological side-effects caused by the passive presence of the human experimental examiner and that additional channels of emotional adaptation by the robot may be required.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"94 9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128660773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Measure to Match Robot Plans to Human Intent: A Case Study in Multi-Objective Human-Robot Path-Planning*
M. T. Shaikh, M. Goodrich
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223330
Measuring how well a potential solution to a problem matches the problem-holder’s intent, and detecting when a current solution no longer matches that intent, are important when designing resilient human-robot teams. This paper addresses intent-matching for a robot path-planning problem that includes multiple objectives and where human intent is represented as a vector in the multi-objective payoff space. The paper introduces a new metric called the intent threshold margin and shows that it can be used to rank paths by how closely they match a specified intent. The rankings induced by the metric correlate with average human rankings (obtained in an MTurk study) of how closely different paths match a specified intent. The intuition of the intent threshold margin is that it represents how much the human’s intent must be "relaxed" to match the payoffs for a specified path.
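The abstract leaves the metric’s formal definition to the paper; the sketch below is one plausible reading under stated assumptions: intent is a vector of minimum acceptable payoffs, higher payoffs are better on every objective, and the margin is the smallest uniform relaxation of the intent under which a path’s payoffs become acceptable. All names and numbers are hypothetical.

```python
from typing import Sequence

def intent_threshold_margin(intent: Sequence[float],
                            payoff: Sequence[float]) -> float:
    """Smallest uniform relaxation of the intent vector under which this
    path's payoff vector becomes acceptable (0 if it already is).
    Assumes higher payoff is better on every objective."""
    return max(0.0, max(i - p for i, p in zip(intent, payoff)))

def rank_paths(intent: Sequence[float],
               payoffs: Sequence[Sequence[float]]) -> list[int]:
    """Order path indices from best to worst intent match."""
    return sorted(range(len(payoffs)),
                  key=lambda k: intent_threshold_margin(intent, payoffs[k]))

# Hypothetical example: two objectives (safety, speed), three candidate paths.
intent = [0.8, 0.6]
payoffs = [[0.9, 0.5], [0.7, 0.7], [0.4, 0.9]]
print(rank_paths(intent, payoffs))  # [0, 1, 2]; paths 0 and 1 tie at 0.1
```

Under this reading, a margin of zero means the path already satisfies the intent and larger margins mean a worse match, consistent with the ranking behaviour the abstract describes.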
{"title":"A Measure to Match Robot Plans to Human Intent: A Case Study in Multi-Objective Human-Robot Path-Planning*","authors":"M. T. Shaikh, M. Goodrich","doi":"10.1109/RO-MAN47096.2020.9223330","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223330","url":null,"abstract":"Measuring how well a potential solution to a problem matches the problem-holder’s intent and detecting when a current solution no longer matches intent is important when designing resilient human-robot teams. This paper addresses intent-matching for a robot path-planning problem that includes multiple objectives and where human intent is represented as a vector in the multi-objective payoff space. The paper introduces a new metric called the intent threshold margin and shows that it can be used to rank paths by how close they match a specified intent. The rankings induced by the metric correlate with average human rankings (obtained in an MTurk study) of how closely different paths match a specified intent. The intuition of the intent threshold margin is that it represents how much the human’s intent must be \"relaxed\" to match the payoffs for a specified path.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129145366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enactively Conscious Robots: Why Enactivism Does Not Commit the Intermediate Level Fallacy*
A. Scarinzi
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223494
Conscious experience is needed to adapt to novel and significant events, to perform actions, and to have perceptions. This contribution shows how a robot can be enactively conscious. It questions the view of Manzotti and Chella (2018), according to which the enactive approach to consciousness falls into the so-called "intermediate level fallacy", and shows that the authors’ remark is implausible because it is based on a partial and reductive view both of enactivism and of one of its main tenets, embodiment. The original enactive approach to experience, as developed by Varela, Thompson, and Rosch (1991), is discussed. Manzotti and Chella’s criticism that enactivism leaves unclear why knowledge of the effects of movement on sensory stimulation should lead to conscious experience is rejected. This contribution explains why sensorimotricity and the actionist approach to perception do lead to (robot) conscious experience in the perception of objects located in outer space.
{"title":"Enactively Conscious Robots: Why Enactivism Does Not Commit the Intermediate Level Fallacy *","authors":"A. Scarinzi","doi":"10.1109/RO-MAN47096.2020.9223494","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223494","url":null,"abstract":"Conscious experience is needed to adapt to novel and significant events, to perform actions, to have perceptions. This contribution shows how a robot can be enactively conscious. It questions the view by Manzotti and Chella (2018) according to which the enactive approach to consciousness falls into the so called \"intermediate level fallacy\" and shows that the authors’ remark is implausible because it is based on a partial and reductive view both of enactivism and of one of its main tenets called embodiment. The original enactive approach to experience as it was developed by Varela/Thompson/Rosch (1991) is discussed. Manzotti’s and Chella’s criticism that in enactivism it is unclear why the knowledge of the effects of movement on sensory stimulation should lead to conscious experience is rejected. In this contribution, it is explained why sensorimotricity and the actionist approach to perception do lead to (robot) conscious experience in the perception of objects located in outer space.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124578149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Robot Self-Efficacy Scale: Robot Self-Efficacy, Likability and Willingness to Interact Increases After a Robot-Delivered Tutorial
Nicole L. Robinson, Teah-Neal Hicks, Gavin Suddrey, D. Kavanagh
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223535
An individual’s self-efficacy to interact with a robot has important implications for the content, utility and success of the interaction. Individuals need to achieve a high level of self-efficacy in human-robot interaction in a reasonable time-frame for positive effects to occur in short-term human-robot scenarios. This trial explored the impact of a 2-minute automated robot-delivered tutorial, designed to teach members of the general public how to use the robot, as a method to increase robot self-efficacy scores. The trial assessed scores before (T1) and after (T2) an interaction with the robot to investigate changes in self-efficacy, likability and willingness to use it. The 40 participants recruited had, on average, a very low level of robotics experience. After the tutorial, people reported significantly higher robot self-efficacy, with very large effect sizes, for operating a robot and applying the robot to a task ($\eta_p^2 = 0.727$ and $0.660$). Significant increases in likability and willingness to interact with the robot were also found ($\eta_p^2 = 0.465$ and $0.480$). Changes in likability and self-efficacy contributed 64% of the variance in changes to willingness to use the robot. Initial differences in robot self-efficacy were found for older people and those with less robotics and programming experience compared with other participants, but scores across these subgroups were similar after completion of the tutorial. This demonstrates that high levels of self-efficacy, likability and willingness to use a social robot can be reached in a very short time, and at comparable levels, regardless of age or prior robotics experience. This outcome has significant implications for future trials using social robots, since these variables can strongly influence experimental outcomes.
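For reference, the $\eta_p^2$ values reported above are partial eta squared effect sizes; the standard definition (a general statistics fact, not taken from the paper itself) is:

```latex
% Partial eta squared: the proportion of effect-plus-error variance
% attributable to the effect.
\eta_p^2 = \frac{SS_{\mathrm{effect}}}{SS_{\mathrm{effect}} + SS_{\mathrm{error}}}
```

By Cohen’s conventional benchmarks (0.01 small, 0.06 medium, 0.14 large), the reported values of 0.727 and 0.660 are indeed very large effects.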
{"title":"The Robot Self-Efficacy Scale: Robot Self-Efficacy, Likability and Willingness to Interact Increases After a Robot-Delivered Tutorial","authors":"Nicole L. Robinson, Teah-Neal Hicks, Gavin Suddrey, D. Kavanagh","doi":"10.1109/RO-MAN47096.2020.9223535","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223535","url":null,"abstract":"An individual’s self-efficacy to interact with a robot has important implications around the content, utility and success of the interaction. Individuals need to achieve a high level of self-efficacy in human robot-interaction in a reasonable time-frame for positive effects to occur in short-term human-robot scenarios. This trial explored the impact of a 2-minute automated robot-delivered tutorial designed to teach people from the general public how to use the robot as a method to increase robot self-efficacy scores. This trial assessed scores before (T1) and after (T2) an interaction with the robot to investigate changes in self-efficacy, likability and willingness to use it. The 40 participants recruited had on average very low level of robotic experience. After the tutorial, people reported significantly higher robot self-efficacy with very large effect sizes to operate a robot and apply the robot to a task ($eta _p^2 = 0.727$ and 0.660). Significant increases in likability and willingness to interact with the robot were also found ($eta _p^2 = 0.465$ and 0.480). Changes in likability and self-efficacy contributed to 64% of the variance in changes to willingness to use the robot. Initial differences were found in robot self-efficacy for older people and those with less robotics and programming experience compared with other participants, but scores across these subgroups were similar after completion of the tutorial. This demonstrated that high levels of self-efficacy, likeability and willingness to use a social robot can be reached in a very short time, and on comparable levels, regardless of age or prior robotics experience. This outcome has significant implications for future trials using social robots, since these variables can strongly influence experimental outcomes.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121279866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resolving Clashing Norms Using Tiered Utility
Sean Welsh
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223334
There are times when norms clash in a given situation. One norm requires an agent to do X. Another requires an agent to not do X but to do Y or nothing (not X) instead. This paper describes a way to resolve clashes between norms using a concept of tiered utility that has the potential to be automated. Classical utility has polarity and magnitude. Tiered utility has polarity, magnitude and tiers. Tiers are used for lexicographic preference orderings that enable correct normative choices by robots.
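Tiers with lexicographic preference orderings map naturally onto tuple comparison; the sketch below is a minimal illustration of the idea, not the paper’s formalism, and the tier names, options, and magnitudes are invented for the example.

```python
from typing import NamedTuple

class TieredUtility(NamedTuple):
    """Signed utility magnitudes ordered from the highest-priority tier down.
    Python compares tuples lexicographically, so the top tier dominates
    and lower tiers only break ties."""
    safety: float   # tier 1: hypothetical "avoid harm" norm
    duty: float     # tier 2: hypothetical "keep commitments" norm
    comfort: float  # tier 3: hypothetical "user convenience" norm

def choose(options: dict[str, TieredUtility]) -> str:
    """Pick the option whose tiered utility is lexicographically greatest."""
    return max(options, key=options.get)

# A clash: one norm requires doing X, another requires Y or abstaining (not X).
options = {
    "do_X":    TieredUtility(safety=-1.0, duty=+1.0, comfort=+0.5),
    "do_Y":    TieredUtility(safety=+1.0, duty=-0.5, comfort=0.0),
    "abstain": TieredUtility(safety=+1.0, duty=-1.0, comfort=0.0),
}
print(choose(options))  # do_Y: ties with abstain on safety, wins on duty
```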
{"title":"Resolving Clashing Norms Using Tiered Utility","authors":"Sean Welsh","doi":"10.1109/RO-MAN47096.2020.9223334","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223334","url":null,"abstract":"There are times when norms clash in a given situation. One norm requires an agent to do X. Another requires an agent to not do X but to do Y or nothing (not X) instead. This paper describes a way to resolve clashes between norms using a concept of tiered utility that has the potential to be automated. Classical utility has polarity and magnitude. Tiered utility has polarity, magnitude and tiers. Tiers are used for lexicographic preference orderings that enable correct normative choices by robots.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121362483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Model Mediated Teleoperation with a Hand-Arm Exoskeleton in Long Time Delays Using Reinforcement Learning
H. Mohammadi, Matthias Kerzel, Benedikt Pleintinger, T. Hulin, Philipp Reisich, A. Schmidt, Aaron Pereira, S. Wermter, Neal Y. Lii
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223477
Telerobotic systems must adapt to new environmental conditions and deal with high uncertainty caused by long time delays. As one of the best alternatives to human-level intelligence, Reinforcement Learning (RL) may offer a solution to cope with these issues. This paper proposes to integrate RL with the Model Mediated Teleoperation (MMT) concept. The teleoperator interacts with a simulated virtual environment, which provides instant feedback: whereas feedback from the real environment is delayed, feedback from the model is instantaneous, leading to high transparency. The MMT is realized in combination with a two-layer intelligent system. The first layer utilizes Dynamic Movement Primitives (DMPs), which account for certain changes in the avatar environment. The second layer addresses the problems caused by uncertainty in the model using RL methods. Augmented reality is also provided to fuse the avatar device and virtual-environment models for the teleoperator. Implemented on DLR’s Exodex Adam hand-arm haptic exoskeleton, the system shows that RL methods are able to find different solutions when the object position is changed after the demonstration. The results also show DMPs to be effective at adapting to new conditions where there is no uncertainty involved.
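The first layer rests on Dynamic Movement Primitives; as a reference point, here is a minimal one-dimensional discrete DMP in the standard Ijspeert-style formulation. This is a generic sketch, not the authors’ implementation, and the gains and step sizes are illustrative only.

```python
import numpy as np

def dmp_rollout(x0, g, f, tau=1.0, K=150.0, D=25.0, alpha=4.0,
                dt=0.001, steps=1000):
    """Integrate a 1-D discrete DMP transformation system.  Changing the
    goal g re-aims the learned motion at a new target -- the kind of
    adaptation to a moved object described in the abstract."""
    x, v, s = x0, 0.0, 1.0
    path = [x]
    for _ in range(steps):
        fx = (g - x0) * f(s)                  # learned forcing term, goal-scaled
        v += dt / tau * (K * (g - x) - D * v + fx)
        x += dt / tau * v
        s += dt / tau * (-alpha * s)          # canonical system phase decay
        path.append(x)
    return np.array(path)

# With a zero forcing term the DMP reduces to a stable attractor at g.
traj = dmp_rollout(x0=0.0, g=0.4, f=lambda s: 0.0)
print(round(float(traj[-1]), 3))  # ~0.4
```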
Role Switching in Task-Oriented Multimodal Human-Robot Collaboration
Natawut Monaikul, Bahareh Abbasi, Zhanibek Rysbek, Barbara Di Eugenio, M. Žefran
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223461
In a collaborative task and the interaction that accompanies it, the participants often take on distinct roles and dynamically switch roles as the task requires. A domestic assistive robot thus needs to have similar capabilities. Using our previously proposed Multimodal Interaction Manager (MIM) framework, this paper investigates how role switching for a robot can be implemented. It identifies a set of primitive subtasks that encode common interaction patterns observed in our data corpus and that can be used to easily construct complex task models. It also describes an implementation on the NAO robot that, together with our original work, demonstrates that the robot can take on different roles. We provide a detailed analysis of the performance of the system and discuss the challenges that arise when switching roles in human-robot interactions.
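The MIM framework itself is specified in the authors’ earlier work; the sketch below only illustrates the general idea that a task model built from primitive subtasks can be re-executed under different role bindings. The subtask names, roles, and bindings are hypothetical.

```python
from enum import Enum

class Role(Enum):
    LEADER = "leader"      # the agent that initiates a subtask
    FOLLOWER = "follower"  # the agent that responds or assists

# A task model as a sequence of primitive subtasks, each tagged with the
# role that performs it.  Switching roles mid-task just re-binds which
# agent (human or robot) currently holds each role.
TASK = [("request_object", Role.LEADER),
        ("hand_over", Role.FOLLOWER),
        ("confirm_receipt", Role.LEADER)]

def run(task, bindings):
    for name, role in task:
        print(f"{bindings[role]} performs {name}")

run(TASK, {Role.LEADER: "human", Role.FOLLOWER: "robot"})
run(TASK, {Role.LEADER: "robot", Role.FOLLOWER: "human"})  # roles switched
```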
{"title":"Role Switching in Task-Oriented Multimodal Human-Robot Collaboration","authors":"Natawut Monaikul, Bahareh Abbasi, Zhanibek Rysbek, Barbara Di Eugenio, M. Žefran","doi":"10.1109/RO-MAN47096.2020.9223461","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223461","url":null,"abstract":"In a collaborative task and the interaction that accompanies it, the participants often take on distinct roles, and dynamically switch the roles as the task requires. A domestic assistive robot thus needs to have similar capabilities. Using our previously proposed Multimodal Interaction Manager (MIM) framework, this paper investigates how role switching for a robot can be implemented. It identifies a set of primitive subtasks that encode common interaction patterns observed in our data corpus and that can be used to easily construct complex task models. It also describes an implementation on the NAO robot that, together with our original work, demonstrates that the robot can take on different roles. We provide a detailed analysis of the performance of the system and discuss the challenges that arise when switching roles in human-robot interactions.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126894974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How Social Robots Influence People’s Trust in Critical Situations
Alessandra Rossi, K. Dautenhahn, K. Koay, M. Walters
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223471
As we expect the presence of autonomous robots in our everyday life to increase, we must consider that people will not only have to accept robots as a fundamental part of their lives, but will also have to trust them to reliably and securely engage in collaborative tasks. Several studies showed that people are more comfortable interacting with robots that respect social conventions. However, it is still not clear whether a robot that follows social conventions will gain people’s trust more readily. In this study, we aimed to assess whether the use of social behaviours and natural communication can affect humans’ sense of trust and companionship towards robots. We conducted a between-subjects study in which participants’ trust was tested in three scenarios of increasing trust criticality (low, medium, high), interacting either with a social or a non-social robot. Our findings showed that participants trusted a social and a non-social robot equally in the low- and medium-criticality scenarios. In contrast, in the more sensitive task, participants’ decisions to trust the robot were more strongly affected when the robot expressed social cues, with a consequent decrease of their trust in the robot.
{"title":"How Social Robots Influence People’s Trust in Critical Situations","authors":"Alessandra Rossi, K. Dautenhahn, K. Koay, M. Walters","doi":"10.1109/RO-MAN47096.2020.9223471","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223471","url":null,"abstract":"As we expect that the presence of autonomous robots in our everyday life will increase, we must consider that people will have not only to accept robots to be a fundamental part of their lives, but they will also have to trust them to reliably and securely engage them in collaborative tasks. Several studies showed that robots are more comfortable interacting with robots that respect social conventions. However, it is still not clear if a robot that expresses social conventions will gain more favourably people’s trust. In this study, we aimed to assess whether the use of social behaviours and natural communications can affect humans’ sense of trust and companionship towards the robots. We conducted a between-subjects study where participants’ trust was tested in three scenarios with increasing trust criticality (low, medium, high) in which they interacted either with a social or a non-social robot. Our findings showed that participants trusted equally a social and non-social robot in the low and medium consequences scenario. On the contrary, we observed that participants’ choices of trusting the robot in a higher sensitive task was affected more by a robot that expressed social cues with a consequent decrease of their trust in the robot.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133972065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Calibrating Trust in Human-Drone Cooperative Navigation
Kazuo Okamura, S. Yamada
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223509
Trust calibration is essential to successful cooperation between humans and autonomous systems such as self-driving cars and autonomous drones. If users over-estimate the capability of an autonomous system, over-trust occurs, and the users rely on the system even in situations in which they could outperform it. Conversely, if users under-estimate the capability of a system, under-trust occurs, and they tend not to use it. Since both situations hamper cooperation in terms of safety and efficiency, it is highly desirable to have a mechanism that helps users keep an appropriate level of trust in autonomous systems. In this paper, we first propose an adaptive trust calibration framework that can detect over/under-trust from users’ behaviors and encourage them to keep the appropriate trust level in a "continuous" cooperative task. We then conduct experiments to evaluate our method with semi-automatic drone navigation. In the experiments, we introduce ABA-pattern changes in weather conditions to investigate our method under bidirectional trust changes. The results show that our method adaptively detected trust changes and encouraged users to calibrate their trust in a continuous cooperative task. We believe that the findings of this study will contribute to better user-interface designs for collaborative systems.
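The abstract does not state the detection rule; the toy heuristic below shows one way over- and under-trust could be flagged, by comparing the user’s observed reliance rate on the automation against its recent success rate. The tolerance band and prompts are invented for the example.

```python
def trust_status(reliance_rate: float, capability: float,
                 band: float = 0.15) -> str:
    """Flag miscalibrated trust when observed reliance on the automation
    drifts outside a tolerance band around its actual success rate
    (both rates in [0, 1]).  Toy heuristic, not the paper's method."""
    if reliance_rate > capability + band:
        return "over-trust: prompt the user to take manual control"
    if reliance_rate < capability - band:
        return "under-trust: cue the user that automation is reliable"
    return "calibrated"

print(trust_status(reliance_rate=0.9, capability=0.6))  # over-trust
print(trust_status(reliance_rate=0.3, capability=0.6))  # under-trust
```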
{"title":"Calibrating Trust in Human-Drone Cooperative Navigation","authors":"Kazuo Okamura, S. Yamada","doi":"10.1109/RO-MAN47096.2020.9223509","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223509","url":null,"abstract":"Trust calibration is essential to successful cooperation between humans and autonomous systems such as those for self-driving cars and autonomous drones. If users over-estimate the capability of autonomous systems, over-trust occurs, and the users rely on the systems even in situations in which they could outperform the systems. On the contrary, if users under-estimate the capability of a system, undertrust occurs, and they tend not to use the system. Since both situations hamper cooperation in terms of safety and efficiency, it would be highly desirable to have a mechanism that facilitates users in keeping the appropriate level of trust in autonomous systems. In this paper, we first propose an adaptive trust calibration framework that can detect over/under-trust from users’ behaviors and encourage them to keep the appropriate trust level in a \"continuous\" cooperative task. Then, we conduct experiments to evaluate our method with semi-automatic drone navigation. In experiments, we introduce ABA situations of weather conditions to investigate our method in bidirectional trust changes. The results show that our method adaptively detected trust changes and encouraged users to calibrate their trust in a continuous cooperative task. We believe that the findings of this study will contribute to better user-interface designs for collaborative systems.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"23 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132865404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}