This study proposes a method to estimate pedestrian trust in an automated vehicle (AV) from pedestrian behavior. We conducted experiments in a VR environment in which an AV approached a crosswalk. Participants rated their trust in the AV on three levels before and while crossing the road. The trust level can be estimated by deep learning from the participants' skeletal coordinates and position, together with the vehicle's position and speed, over the preceding four seconds. The estimation accuracy was 61%.
{"title":"Trust Estimation for Autonomous Vehicles by Measuring Pedestrian Behavior in VR","authors":"Ryota Masuda, Shintaro Ono, T. Hiraoka, Y. Suda","doi":"10.1145/3568294.3580072","DOIUrl":"https://doi.org/10.1145/3568294.3580072","url":null,"abstract":"This study proposes a method to estimate pedestrian trust in an automated vehicle (AV) based on pedestrian behavior. It conducted experiments in a VR environment where an AV approached a crosswalk. Participants rated their trust in the AV at three levels before/while they crossed the road. The level can be estimated by deep learning using their skeletal coordinates, position, vehicle position, and speed during the past four seconds. The estimation accuracy was 61%.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"325 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75047130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Young adults are an often-overlooked group heavily affected by loneliness. Their perceived social isolation often stems from attachment insecurities and deficient social skills. Since robots can function as social interaction partners that exert less social pressure and display less social complexity, they may offer a promising approach to alleviating this situation. The goal would not be to replace human interaction partners, but to reduce acute loneliness and its accompanying detrimental effects, and to serve as a social-skills coach and practice interaction partner. To explore the potential of this approach, a preregistered quantitative online study (N = 150) incorporating a video-based interaction with a social robot and qualitative elements was conducted. First results show that young adults report less state loneliness after interacting with the robot than before. People with an affinity for technology evaluate the robot's sociability and the interaction with it more positively; people with a generally negative attitude towards robots evaluate them less positively. Furthermore, the more trait loneliness people report experiencing, the less sociable they perceive the robot to be.
{"title":"Hey Robot, Can You Help Me Feel Less Lonely?: An Explorative Study to Examine the Potential of Using Social Robots to Alleviate Loneliness in Young Adults","authors":"Aike C. Horstmann","doi":"10.1145/3568294.3580135","DOIUrl":"https://doi.org/10.1145/3568294.3580135","url":null,"abstract":"An often-forgotten group of people which is heavily affected by loneliness are young adults. The perceived social isolation often stems from attachment insecurities and social skill deficiencies. Since robots can function as social interaction partners who exert less social pressure and display less social complexity, they may pose a promising approach to alleviate this problematic situation. The goal would not be to replace human interaction partners, but to diminish acute loneliness and accompanying detrimental effects and to function as social skills coach and practice interaction partner. To explore the potential of this approach, a preregistered quantitative online study (N = 150) incorporating a video-based interaction with a social robot and qualitative elements was conducted. First results show that young adults report less state loneliness after interacting with the robot than before. Technically affine people evaluate the robot's sociability as well as the interaction with it more positively, people with a general negative attitude towards robots less positively. 
Furthermore, the more trait loneliness people report to experience, the less sociable they perceive the robot.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"34 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75079642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We are developing a social robot to be deployed in a large, recently built university building designed for learning and teaching. We outline the design process for this robot, which has included consultations with stakeholders: members of university services, students and other visitors to the building, and members of the "Reach Out" team who normally provide in-person support there. These consultations have resulted in a clear specification of the desired robot functionality, which will combine central helpdesk queries with local information about the building and the surrounding university campus. We outline the technical components that will be used to develop the robot system and describe how the success of the deployed robot will be evaluated.
{"title":"Development of a University Guidance and Information Robot","authors":"A. Blair, M. Foster","doi":"10.1145/3568294.3580138","DOIUrl":"https://doi.org/10.1145/3568294.3580138","url":null,"abstract":"We are developing a social robot that will be deployed in a large, recently-built university building designed for learning and teaching. We outline the design process for this robot, which has included consultations with stakeholders including members of university services, students and other visitors to the building, as well as members of the \"Reach Out'' team who normally provide in-person support in the building. These consultations have resulted in a clear specification of the desired robot functionality, which will combine central helpdesk queries with local information about the building and the surrounding university campus. We outline the technical components that will be used to develop the robot system, and also describe how the success of the deployed robot will be evaluated.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"24 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82191215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social robotic behavior is commonly designed using AI algorithms which are trained on human behavioral data. This training process may result in robotic behaviors that echo human biases and stereotypes. In this work, we evaluated whether an interaction with a biased robotic object can increase participants' stereotypical thinking. In the study, a gender-biased robot moderated debates between two participants (man and woman) in three conditions: (1) The robot's behavior matched gender stereotypes (Pro-Man); (2) The robot's behavior countered gender stereotypes (Pro-Woman); (3) The robot's behavior did not reflect gender stereotypes and did not counter them (No-Preference). Quantitative and qualitative measures indicated that the interaction with the robot in the Pro-Man condition increased participants' stereotypical thinking. In the No-Preference condition, stereotypical thinking was also observed but to a lesser extent. In contrast, when the robot displayed counter-biased behavior in the Pro-Woman condition, stereotypical thinking was eliminated. Our findings suggest that HRI designers must be conscious of AI algorithmic biases, as interactions with biased robots can reinforce implicit stereotypical thinking and exacerbate existing biases in society. On the other hand, counter-biased robotic behavior can be leveraged to support present efforts to address the negative impact of stereotypical thinking.
{"title":"Implications of AI Bias in HRI: Risks (and Opportunities) when Interacting with a Biased Robot","authors":"Tom Hitron, Noa Morag Yaar, H. Erel","doi":"10.1145/3568162.3576977","DOIUrl":"https://doi.org/10.1145/3568162.3576977","url":null,"abstract":"Social robotic behavior is commonly designed using AI algorithms which are trained on human behavioral data. This training process may result in robotic behaviors that echo human biases and stereotypes. In this work, we evaluated whether an interaction with a biased robotic object can increase participants' stereotypical thinking. In the study, a gender-biased robot moderated debates between two participants (man and woman) in three conditions: (1) The robot's behavior matched gender stereotypes (Pro-Man); (2) The robot's behavior countered gender stereotypes (Pro-Woman); (3) The robot's behavior did not reflect gender stereotypes and did not counter them (No-Preference). Quantitative and qualitative measures indicated that the interaction with the robot in the Pro-Man condition increased participants' stereotypical thinking. In the No-Preference condition, stereotypical thinking was also observed but to a lesser extent. In contrast, when the robot displayed counter-biased behavior in the Pro-Woman condition, stereotypical thinking was eliminated. Our findings suggest that HRI designers must be conscious of AI algorithmic biases, as interactions with biased robots can reinforce implicit stereotypical thinking and exacerbate existing biases in society. 
On the other hand, counter-biased robotic behavior can be leveraged to support present efforts to address the negative impact of stereotypical thinking.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"64 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82703349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Industrial cobots can perform variable action sequences, which can have detrimental effects on human-robot interaction (HRI) because the robot's actions become difficult to predict. In human interaction, eye gaze intuitively directs attention and communicates upcoming actions. Whether this mechanism can also benefit HRI is not well understood. This study investigated the impact of anthropomorphic eyes as directional cues in robot design. Forty-two participants worked on two consecutive tasks in an embodied HRI with a Sawyer robot. The study used a between-subjects design and presented either anthropomorphic eyes, arrows, or a black screen (control condition) on the robot's display. Results showed that neither the directional stimuli in general nor the anthropomorphic design in particular increased trust. However, anthropomorphic robot eyes improved prediction speed, an effect not found for the non-anthropomorphic cues (arrows). Anthropomorphic eyes therefore seem better suited for implementation on an industrial robot.
{"title":"Effects of Predictive Robot Eyes on Trust and Task Performance in an Industrial Cooperation Task","authors":"L. Onnasch, Paul Schweidler, Maximilian Wieser","doi":"10.1145/3568294.3580123","DOIUrl":"https://doi.org/10.1145/3568294.3580123","url":null,"abstract":"Industrial cobots can perform variable action sequences. For human-robot interaction (HRI) this can have detrimental effects, as the robot's actions can be difficult to predict. In human interaction, eye gaze intuitively directs attention and communicates subsequent actions. Whether this mechanism can benefit HRI, too, is not well understood. This study investigated the impact of anthropomorphic eyes as directional cues in robot design. 42 participants worked on two subsequent tasks in an embodied HRI with a Sawyer robot. The study used a between-subject design and presented either anthropomorphic eyes, arrows or a black screen as control condition on the robot's display. Results showed that neither directional stimuli nor the anthropomorphic design in particular led to increased trust. But anthropomorphic robot eyes improved the prediction speed, whereas this effect could not be found for non-anthropomorphic cues (arrows). Anthropomorphic eyes therefore seem to be better suitable for an implementation on an industrial robot.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"1 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88185611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Endogenous smartphone interruptions affect many aspects of people's everyday lives, especially when studying or working at a desk. To mitigate this, we built a robot that persuades intrinsically by augmenting the desk lamp with specific postures and light. This paper presents our design considerations and a first prototype, demonstrating the possibility of alleviating people's endogenous interruptions through robots.
{"title":"A Persuasive Robot that Alleviates Endogenous Smartphone-related Interruption","authors":"Hanyang Hu, Mengyu Chen, Ruhan Wang, Yijie Guo","doi":"10.1145/3568294.3580097","DOIUrl":"https://doi.org/10.1145/3568294.3580097","url":null,"abstract":"The endogenous interruptions of smartphones have impacted people's everyday life in many aspects, especially in the study and work scene under a lamp. To mitigate this, we make a robot that could persuade you intrinsically by augmenting the lamp on your desk with specific posture and light. This paper will present our design considerations and the first prototype to show the possibility of alleviating people's endogenous interruptions through robots.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"26 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75616299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the rise in the number of robots in our daily lives, human-robot encounters will become more frequent. To improve human-robot interaction (HRI), people will require explanations of robots' actions, especially if they do something unexpected. Our focus is on robot navigation, where we explain why robots make specific navigational choices. Building on methods from the area of Explainable Artificial Intelligence (XAI), we employ a semantic map and techniques from the area of Qualitative Spatial Reasoning (QSR) to enrich visual explanations with knowledge-level spatial information. We outline how a robot can generate visual and textual explanations simultaneously and test our approach in simulation.
{"title":"Visuo-Textual Explanations of a Robot's Navigational Choices","authors":"Amar Halilovic, F. Lindner","doi":"10.1145/3568294.3580141","DOIUrl":"https://doi.org/10.1145/3568294.3580141","url":null,"abstract":"With the rise in the number of robots in our daily lives, human-robot encounters will become more frequent. To improve human-robot interaction (HRI), people will require explanations of robots' actions, especially if they do something unexpected. Our focus is on robot navigation, where we explain why robots make specific navigational choices. Building on methods from the area of Explainable Artificial Intelligence (XAI), we employ a semantic map and techniques from the area of Qualitative Spatial Reasoning (QSR) to enrich visual explanations with knowledge-level spatial information. We outline how a robot can generate visual and textual explanations simultaneously and test our approach in simulation.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"96 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76664176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Manolis Chiou, S. Booth, Bruno Lacerda, Andreas Theodorou, S. Rothfuss
As robots are introduced to various domains and applications, Human-Robot Teaming (HRT) capabilities are essential. Such capabilities involve teaming with humans in, on, or out of the loop at different levels of abstraction, leveraging the complementary capabilities of humans and robots. This requires robotic systems that can dynamically vary their level or degree of autonomy to collaborate with the human(s) efficiently and overcome challenging circumstances. Variable Autonomy (VA) is an umbrella term encompassing such research, including but not limited to shared control and shared autonomy, mixed-initiative, adjustable autonomy, and sliding autonomy. This workshop is driven by the timely need to bring together VA-related research and practice, which are often disconnected across communities because the field is relatively young. The workshop's goal is to consolidate research in VA. To this end, and given the complexity and span of human-robot systems, the workshop will adopt a holistic, transdisciplinary approach aiming to: a) identify and classify common challenges and opportunities; b) identify the disciplines that need to come together to tackle these challenges; c) identify and define common terminology, approaches, methodologies, benchmarks, and metrics; and d) define short- and long-term research goals for the community. To achieve these objectives, the workshop aims to bring together industry stakeholders, researchers from fields under the VA banner, and specialists from highly related fields such as human factors and psychology. The workshop will consist of a mix of invited talks, contributed papers, and an interactive discussion panel, working toward a shared vision for VA.
{"title":"Variable Autonomy for Human-Robot Teaming (VAT)","authors":"Manolis Chiou, S. Booth, Bruno Lacerda, Andreas Theodorou, S. Rothfuss","doi":"10.1145/3568294.3579957","DOIUrl":"https://doi.org/10.1145/3568294.3579957","url":null,"abstract":"As robots are introduced to various domains and applications, Human-Robot Teaming (HRT) capabilities are essential. Such capabilities involve teaming with humans in on out-the-loop at different levels of abstraction, leveraging the complementing capabilities of humans and robots. This requires robotic systems with the ability to dynamically vary their level or degree of autonomy to collaborate with the human(s) efficiently and overcome various challenging circumstances. Variable Autonomy (VA) is an umbrella term encompassing such research, including but not limited to shared control and shared autonomy, mixed-initiative, adjustable autonomy, and sliding autonomy. This workshop is driven by the timely need to bring together VA-related research and practices that are often disconnected across different communities as the field is relatively young. The workshop's goal is to consolidate research in VA. To this end, and given the complexity and span of Human-Robot systems, this workshop will adopt a holistic trans-disciplinary approach aiming to a) identify and classify related common challenges and opportunities; b) identify the disciplines that need to come together to tackle the challenges; c) identify and define common terminology, approaches, methodologies, benchmarks, and metrics; d) define short- and long-term research goals for the community. To achieve these objectives, this workshop aims to bring together industry stakeholders, researchers from fields under the banner of VA, and specialists from other highly related fields such as human factors and psychology. 
The workshop will consist of a mix of invited talks, contributed papers, and an interactive discussion panel, toward a shared vision for VA.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"223 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76914450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yermakhan Kassym, Saparkhan Kassymbekov, Kamila Zhumakhanova, A. Sandygulova
Drones are continually entering our daily lives through a growing number of applications, creating a natural demand for better ways for humans and drones to interact. One application that would benefit from improved interaction is detecting and discouraging smoking in prohibited areas. We propose our own drone flight gesture, which we believe delivers the message "do not smoke" better than the ready-made built-in gesture. To this end, we conducted a within-subjects experiment with 19 participants, evaluating the gestures on a drone operated through a Wizard-of-Oz interaction design. The results demonstrate that the proposed gesture conveyed the message better than the built-in gesture.
{"title":"Human-Drone Interaction: Interacting with People Smoking in Prohibited Areas","authors":"Yermakhan Kassym, Saparkhan Kassymbekov, Kamila Zhumakhanova, A. Sandygulova","doi":"10.1145/3568294.3580173","DOIUrl":"https://doi.org/10.1145/3568294.3580173","url":null,"abstract":"Drones are continually entering our daily lives by being used in a number of different applications. This creates a natural demand for better interaction ways between humans and drones. One of the possible applications that would benefit from improved interaction is the inspection of smoking in prohibited areas. We propose our own gesture of drone flight that we believe would deliver the message \"not to smoke\" better than the ready-made built-in gesture. To this end, we conducted a within-subject experiment involving 19 participants, where we evaluated the gestures on a drone operated through the Wizard-of-Oz interaction design. The results demonstrate that the proposed gesture was better at conveying the message compared to the built-in gesture.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"18 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79818653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-13; DOI: 10.5040/9781350088733.0124
Alessandro Cabrio, Negin Hashmati, Philip Rabia, Liina Tumma, Hugo Wärnberg, Sjoerd Hendriks, Mohammad Obaid
{"title":"HighLight","authors":"Alessandro Cabrio, Negin Hashmati, Philip Rabia, Liina Tumma, Hugo Wärnberg, Sjoerd Hendriks, Mohammad Obaid","doi":"10.5040/9781350088733.0124","DOIUrl":"https://doi.org/10.5040/9781350088733.0124","url":null,"abstract":"","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"1 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79916250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}