This study proposes a method to estimate pedestrian trust in an automated vehicle (AV) from pedestrian behavior. Experiments were conducted in a VR environment in which an AV approached a crosswalk, and participants rated their trust in the AV on a three-level scale before and while crossing the road. The trust level is estimated by a deep learning model from the pedestrian's skeletal coordinates and position, together with the vehicle's position and speed, over the preceding four seconds. The estimation accuracy was 61%.
{"title":"Trust Estimation for Autonomous Vehicles by Measuring Pedestrian Behavior in VR","authors":"Ryota Masuda, Shintaro Ono, T. Hiraoka, Y. Suda","doi":"10.1145/3568294.3580072","DOIUrl":"https://doi.org/10.1145/3568294.3580072","url":null,"abstract":"This study proposes a method to estimate pedestrian trust in an automated vehicle (AV) based on pedestrian behavior. It conducted experiments in a VR environment where an AV approached a crosswalk. Participants rated their trust in the AV at three levels before/while they crossed the road. The level can be estimated by deep learning using their skeletal coordinates, position, vehicle position, and speed during the past four seconds. The estimation accuracy was 61%.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"325 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75047130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Young adults are an often-overlooked group that is heavily affected by loneliness. Their perceived social isolation often stems from attachment insecurities and deficits in social skills. Since robots can act as social interaction partners that exert less social pressure and display less social complexity, they may offer a promising way to alleviate this situation. The goal would not be to replace human interaction partners, but to reduce acute loneliness and its accompanying detrimental effects and to serve as a social-skills coach and practice interaction partner. To explore the potential of this approach, a preregistered quantitative online study (N = 150) incorporating a video-based interaction with a social robot and qualitative elements was conducted. First results show that young adults report less state loneliness after interacting with the robot than before. People with a high affinity for technology evaluate the robot's sociability and the interaction with it more positively, while people with a generally negative attitude towards robots evaluate them less positively. Furthermore, the more trait loneliness people report experiencing, the less sociable they perceive the robot to be.
{"title":"Hey Robot, Can You Help Me Feel Less Lonely?: An Explorative Study to Examine the Potential of Using Social Robots to Alleviate Loneliness in Young Adults","authors":"Aike C. Horstmann","doi":"10.1145/3568294.3580135","DOIUrl":"https://doi.org/10.1145/3568294.3580135","url":null,"abstract":"An often-forgotten group of people which is heavily affected by loneliness are young adults. The perceived social isolation often stems from attachment insecurities and social skill deficiencies. Since robots can function as social interaction partners who exert less social pressure and display less social complexity, they may pose a promising approach to alleviate this problematic situation. The goal would not be to replace human interaction partners, but to diminish acute loneliness and accompanying detrimental effects and to function as social skills coach and practice interaction partner. To explore the potential of this approach, a preregistered quantitative online study (N = 150) incorporating a video-based interaction with a social robot and qualitative elements was conducted. First results show that young adults report less state loneliness after interacting with the robot than before. Technically affine people evaluate the robot's sociability as well as the interaction with it more positively, people with a general negative attitude towards robots less positively. Furthermore, the more trait loneliness people report to experience, the less sociable they perceive the robot.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"34 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75079642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We are developing a social robot that will be deployed in a large, recently built university building designed for learning and teaching. We outline the design process for this robot, which has included consultations with stakeholders including members of university services, students and other visitors to the building, as well as members of the "Reach Out" team who normally provide in-person support in the building. These consultations have resulted in a clear specification of the desired robot functionality, which will combine central helpdesk queries with local information about the building and the surrounding university campus. We outline the technical components that will be used to develop the robot system, and also describe how the success of the deployed robot will be evaluated.
{"title":"Development of a University Guidance and Information Robot","authors":"A. Blair, M. Foster","doi":"10.1145/3568294.3580138","DOIUrl":"https://doi.org/10.1145/3568294.3580138","url":null,"abstract":"We are developing a social robot that will be deployed in a large, recently-built university building designed for learning and teaching. We outline the design process for this robot, which has included consultations with stakeholders including members of university services, students and other visitors to the building, as well as members of the \"Reach Out'' team who normally provide in-person support in the building. These consultations have resulted in a clear specification of the desired robot functionality, which will combine central helpdesk queries with local information about the building and the surrounding university campus. We outline the technical components that will be used to develop the robot system, and also describe how the success of the deployed robot will be evaluated.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"24 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82191215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social robotic behavior is commonly designed using AI algorithms which are trained on human behavioral data. This training process may result in robotic behaviors that echo human biases and stereotypes. In this work, we evaluated whether an interaction with a biased robotic object can increase participants' stereotypical thinking. In the study, a gender-biased robot moderated debates between two participants (man and woman) in three conditions: (1) The robot's behavior matched gender stereotypes (Pro-Man); (2) The robot's behavior countered gender stereotypes (Pro-Woman); (3) The robot's behavior did not reflect gender stereotypes and did not counter them (No-Preference). Quantitative and qualitative measures indicated that the interaction with the robot in the Pro-Man condition increased participants' stereotypical thinking. In the No-Preference condition, stereotypical thinking was also observed but to a lesser extent. In contrast, when the robot displayed counter-biased behavior in the Pro-Woman condition, stereotypical thinking was eliminated. Our findings suggest that HRI designers must be conscious of AI algorithmic biases, as interactions with biased robots can reinforce implicit stereotypical thinking and exacerbate existing biases in society. On the other hand, counter-biased robotic behavior can be leveraged to support present efforts to address the negative impact of stereotypical thinking.
{"title":"Implications of AI Bias in HRI: Risks (and Opportunities) when Interacting with a Biased Robot","authors":"Tom Hitron, Noa Morag Yaar, H. Erel","doi":"10.1145/3568162.3576977","DOIUrl":"https://doi.org/10.1145/3568162.3576977","url":null,"abstract":"Social robotic behavior is commonly designed using AI algorithms which are trained on human behavioral data. This training process may result in robotic behaviors that echo human biases and stereotypes. In this work, we evaluated whether an interaction with a biased robotic object can increase participants' stereotypical thinking. In the study, a gender-biased robot moderated debates between two participants (man and woman) in three conditions: (1) The robot's behavior matched gender stereotypes (Pro-Man); (2) The robot's behavior countered gender stereotypes (Pro-Woman); (3) The robot's behavior did not reflect gender stereotypes and did not counter them (No-Preference). Quantitative and qualitative measures indicated that the interaction with the robot in the Pro-Man condition increased participants' stereotypical thinking. In the No-Preference condition, stereotypical thinking was also observed but to a lesser extent. In contrast, when the robot displayed counter-biased behavior in the Pro-Woman condition, stereotypical thinking was eliminated. Our findings suggest that HRI designers must be conscious of AI algorithmic biases, as interactions with biased robots can reinforce implicit stereotypical thinking and exacerbate existing biases in society. On the other hand, counter-biased robotic behavior can be leveraged to support present efforts to address the negative impact of stereotypical thinking.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"64 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82703349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Industrial cobots can perform variable action sequences. For human-robot interaction (HRI) this can have detrimental effects, as the robot's actions can be difficult to predict. In human interaction, eye gaze intuitively directs attention and communicates upcoming actions. Whether this mechanism can also benefit HRI is not well understood. This study investigated the impact of anthropomorphic eyes as directional cues in robot design. Forty-two participants worked on two consecutive tasks in an embodied HRI with a Sawyer robot. The study used a between-subjects design and presented either anthropomorphic eyes, arrows, or a black screen (control condition) on the robot's display. Results showed that neither the directional stimuli nor the anthropomorphic design in particular led to increased trust. However, anthropomorphic robot eyes improved prediction speed, an effect not found for the non-anthropomorphic cues (arrows). Anthropomorphic eyes therefore seem better suited for implementation on an industrial robot.
{"title":"Effects of Predictive Robot Eyes on Trust and Task Performance in an Industrial Cooperation Task","authors":"L. Onnasch, Paul Schweidler, Maximilian Wieser","doi":"10.1145/3568294.3580123","DOIUrl":"https://doi.org/10.1145/3568294.3580123","url":null,"abstract":"Industrial cobots can perform variable action sequences. For human-robot interaction (HRI) this can have detrimental effects, as the robot's actions can be difficult to predict. In human interaction, eye gaze intuitively directs attention and communicates subsequent actions. Whether this mechanism can benefit HRI, too, is not well understood. This study investigated the impact of anthropomorphic eyes as directional cues in robot design. 42 participants worked on two subsequent tasks in an embodied HRI with a Sawyer robot. The study used a between-subject design and presented either anthropomorphic eyes, arrows or a black screen as control condition on the robot's display. Results showed that neither directional stimuli nor the anthropomorphic design in particular led to increased trust. But anthropomorphic robot eyes improved the prediction speed, whereas this effect could not be found for non-anthropomorphic cues (arrows). Anthropomorphic eyes therefore seem to be better suitable for an implementation on an industrial robot.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"1 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88185611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present a coordinated and reactive human-aware motion planner for performing a handover task with an autonomous aerial manipulator (AAM). We present a method to determine the final state of the AAM for a handover task based on the current state of the human and the surrounding obstacles. We consider the visual field of the human, the effort required to turn the head and see the AAM, and the discomfort caused to the human. We apply these social constraints together with the kinematic constraints of the AAM to determine its coordinated motion along the trajectory.
{"title":"Reactive Planning for Coordinated Handover of an Autonomous Aerial Manipulator","authors":"Jérôme Truc, D. Sidobre, R. Alami","doi":"10.1145/3568294.3580055","DOIUrl":"https://doi.org/10.1145/3568294.3580055","url":null,"abstract":"In this paper, we present a coordinated and reactive human-aware motion planner for performing a handover task by an autonomous aerial manipulator (AAM). We present a method to determine the final state of the AAM for a handover task based on the current state of the human and the surrounding obstacles. We consider the visual field of the human and the effort to turn the head and see the AAM as well as the discomfort caused to the human. We apply these social constraints together with the kinematic constraints of the AAM to determine its coordinated motion along the trajectory.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"68 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91306181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The paper proposes a robot learning framework that empowers a robot to automatically generate a sequence of actions from unstructured spoken language. The robot learning framework was able to distinguish between instructions and unrelated conversations. Data were collected from 25 participants, who were asked to instruct the robot to perform a collaborative cooking task while being interrupted and distracted. The system was able to identify the sequence of instructed actions for a cooking task with an accuracy of 92.85 ± 3.87%.
{"title":"Towards Robot Learning from Spoken Language","authors":"K. Kodur, Manizheh Zand, Maria Kyrarini","doi":"10.1145/3568294.3580053","DOIUrl":"https://doi.org/10.1145/3568294.3580053","url":null,"abstract":"The paper proposes a robot learning framework that empowers a robot to automatically generate a sequence of actions from unstructured spoken language. The robot learning framework was able to distinguish between instructions and unrelated conversations. Data were collected from 25 participants, who were asked to instruct the robot to perform a collaborative cooking task while being interrupted and distracted. The system was able to identify the sequence of instructed actions for a cooking task with an accuracy of of 92.85 ± 3.87%.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"4 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87606085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the development of Industry 4.0, more collaborative robots are being implemented in manufacturing environments. Hence, research in human-robot interaction (HRI) and human-cobot interaction (HCI) is gaining traction. However, the design of how cobots interact with humans has typically focused on the general able-bodied population, and these interactions are sometimes ineffective for specific groups of users. This study's goal is to identify differences in how hearing individuals and deaf or hard-of-hearing individuals interact with cobots. Understanding these differences may promote inclusiveness by detecting ineffective interactions, reasoning about why an interaction failed, and adapting the framework's interaction strategy appropriately.
{"title":"Understanding Differences in Human-Robot Teaming Dynamics between Deaf/Hard of Hearing and Hearing Individuals","authors":"A'di Dust, Carola Gonzalez-Lebron, Shannon Connell, Saurav Singh, Reynold Bailey, Cecilia Ovesdotter Alm, Jamison Heard","doi":"10.1145/3568294.3580146","DOIUrl":"https://doi.org/10.1145/3568294.3580146","url":null,"abstract":"With the development of industry 4.0, more collaborative robots are being implemented in manufacturing environments. Hence, research in human-robot interaction (HRI) and human-cobot interaction (HCI) is gaining traction. However, the design of how cobots interact with humans has typically focused on the general able-bodied population, and these interactions are sometimes ineffective for specific groups of users. This study's goal is to identify interactive differences between hearing and deaf and hard of hearing individuals when interacting with cobots. Understanding these differences may promote inclusiveness by detecting ineffective interactions, reasoning why an interaction failed, and adapting the framework's interaction strategy appropriately.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"19 1 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88051400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One salient function of social robots is to act as a facilitator that enhances the harmony of multi-party social interactions so that every human participant is encouraged and motivated to engage actively. However, it is challenging to handcraft robot behavior to achieve this objective. One promising approach is for the robot to learn from human teachers. This paper reports the findings of an empirical test to determine the optimal experimental condition for a robot to learn verbal and nonverbal strategies for facilitating a multi-party interaction. First, a modified L8 Orthogonal Array (OA) is used to design a fractional factorial experiment with factors such as the type of human facilitator, group size, and stimulus type. The response of the OA is the harmony state, explicitly defined from the speech turn-taking between speakers and represented by metrics extracted from the first-order Markov transition matrix. Main effects analyses and ANOVA suggest that the type of human facilitator and group size are significant factors affecting the harmony state. We therefore propose training a facilitator robot using high school teachers as human teachers and groups of more than four participants.
{"title":"Who to Teach a Robot to Facilitate Multi-party Social Interactions?","authors":"Jouh Yeong Chew, Keisuke Nakamura","doi":"10.1145/3568294.3580056","DOIUrl":"https://doi.org/10.1145/3568294.3580056","url":null,"abstract":"One salient function of social robots is to play the role of facilitator to enhance the harmony state of multi-party social interactions so that every human participant is encouraged and motivated to engage actively. However, it is challenging to handcraft the behavior of social robots to achieve this objective. One promising approach is for the robot to learn from human teachers. This paper reports the findings of an empirical test to determine the optimal experiment condition for a robot to learn verbal and nonverbal strategies to facilitate a multi-party interaction. First, the modified L8 Orthogonal Array (OA) is used to design a fractional factorial experiment condition using factors like the type of human facilitator, group size and stimulus type. The response of OA is the harmony state explicitly defined using the speech turn-taking between speakers and represented using metrics extracted from the first order Markov transition matrix. Analyses of Main Effects and ANOVA suggest the type of human facilitator and group size are significant factors affecting the harmony state. Therefore, we propose the optimal experiment condition to train a facilitator robot using high school teachers as human teachers and group size larger than four participants.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"7 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89012519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In Human-Robot Collaboration (HRC) tasks, the classical Perception-Action cycle cannot fully explain the collaborative behaviour of the human-robot pair until it is extended to a Perception-Intention-Action (PIA) cycle, which gives the human's intention a key role at the same level as the robot's perception rather than treating it as a sub-block of perception. Although part of the human's intention can be perceived or inferred by the other agent, such inference is prone to misunderstandings, so in some cases the true intention has to be communicated explicitly to fulfill the task. Here, we explore both types of intention and combine them with the robot's perception through the concept of Situation Awareness (SA). We validate the PIA cycle and its acceptance by the user with a preliminary experiment in an object transportation task, showing that its use can increase trust in the robot.
{"title":"Perception-Intention-Action Cycle as a Human Acceptable Way for Improving Human-Robot Collaborative Tasks","authors":"J. E. Domínguez-Vidal, Nicolás Rodríguez, A. Sanfeliu","doi":"10.1145/3568294.3580149","DOIUrl":"https://doi.org/10.1145/3568294.3580149","url":null,"abstract":"In Human-Robot Collaboration (HRC) tasks, the classical Perception-Action cycle can not fully explain the collaborative behaviour of the human-robot pair until it is extended to Perception-Intention-Action (PIA) cycle, giving to the human's intention a key role at the same level of the robot's perception and not as a subblock of this. Although part of the human's intention can be perceived or inferred by the other agent, this is prone to misunderstandings so the true intention has to be explicitly informed in some cases to fulfill the task. Here, we explore both types of intention and we combine them with the robot's perception through the concept of Situation Awareness (SA). We validate the PIA cycle and its acceptance by the user with a preliminary experiment in an object transportation task showing that its usage can increase trust in the robot.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"8 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89696048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}