"SEAN-VR: An Immersive Virtual Reality Experience for Evaluating Social Robot Navigation"
Qiping Zhang, Nathan Tsoi, Marynel Vázquez
ACM Transactions on Human-Robot Interaction, 2023-03-13. DOI: https://doi.org/10.1145/3568294.3580039

Abstract: We propose a demonstration of the Social Environment for Autonomous Navigation with Virtual Reality (VR) for advancing research in Human-Robot Interaction. In our demonstration, a user controls a virtual avatar in simulation and performs directed navigation tasks with a mobile robot in a warehouse environment. Our demonstration shows how researchers can leverage the immersive nature of VR to study robot navigation from a user-centered perspective in densely populated environments while avoiding physical safety concerns common with operating robots in the real world. This is important for studying interactions with robots driven by algorithms that are early in their development lifecycle.

"Nudging or Waiting?: Automatically Synthesized Robot Strategies for Evacuating Noncompliant Users in an Emergency Situation"
Yuhan Hu, Jin Ryu, David Gundana, Kirstin H. Petersen, H. Kress-Gazit, G. Hoffman
ACM Transactions on Human-Robot Interaction, 2023-03-13. DOI: https://doi.org/10.1145/3568162.3576955

Abstract: Robots have the potential to assist in emergency evacuation tasks, but it is not clear how robots should behave to evacuate people who are not fully compliant, perhaps due to panic or other priorities in an emergency. In this paper, we compare two robot strategies: an actively nudging robot that initiates evacuation and pulls toward the exit, and a passively waiting robot that stays near users and waits for instruction. Both strategies were automatically synthesized from a description of the desired behavior. We conducted a within-participants study (N = 20) in a simulated environment to compare the evacuation effectiveness of the two robot strategies. Our results indicate an advantage of the nudging robot for effective evacuation when participants are exposed to the evacuation scenario for the first time. The waiting robot results in lower efficiency, higher mental load, and more physical conflicts. However, participants liked the waiting robot equally or slightly more when they repeated the evacuation scenario and were more familiar with the situation. Our qualitative analysis of the participants' feedback suggests several design implications for future emergency evacuation robots.

"The Eye of the Robot Beholder: Ethical Risks of Representation, Recognition, and Reasoning over Identity Characteristics in Human-Robot Interaction"
T. Williams
ACM Transactions on Human-Robot Interaction, 2023-03-13. DOI: https://doi.org/10.1145/3568294.3580031

Abstract: Significant segments of the HRI literature rely on or promote the ability to reason about human identity characteristics, including age, gender, and cultural background. However, attempting to handle identity characteristics raises a number of critical ethical concerns, especially given the spatiotemporal dynamics of these characteristics. In this paper I question whether human identity characteristics can and should be represented, recognized, or reasoned about by robots, with special attention paid to the construct of race, due to its relative lack of consideration within the HRI community. As I will argue, while there are a number of well-warranted reasons why HRI researchers might want to enable robotic consideration of identity characteristics, these reasons are outweighed by a number of key ontological, perceptual, and deployment-oriented concerns. This argument raises troubling questions as to whether robots should even be able to understand or generate descriptions of people, and how they would do so while avoiding these ethical concerns. Finally, I conclude with a discussion of what this means for the HRI community, in terms of both algorithm and robot design, and speculate as to possible paths forward.

"Development of a University Guidance and Information Robot"
A. Blair, M. Foster
ACM Transactions on Human-Robot Interaction, 2023-03-13. DOI: https://doi.org/10.1145/3568294.3580138

Abstract: We are developing a social robot that will be deployed in a large, recently-built university building designed for learning and teaching. We outline the design process for this robot, which has included consultations with stakeholders including members of university services, students and other visitors to the building, as well as members of the "Reach Out" team who normally provide in-person support in the building. These consultations have resulted in a clear specification of the desired robot functionality, which will combine central helpdesk queries with local information about the building and the surrounding university campus. We outline the technical components that will be used to develop the robot system, and also describe how the success of the deployed robot will be evaluated.

"Human-Robot Conversational Interaction (HRCI)"
Donald Mcmillan, Razan N. Jaber, Benjamin R. Cowan, J. Fischer, Bahar Irfan, Ronald Cumbal, Nima Zargham, Minha Lee
ACM Transactions on Human-Robot Interaction, 2023-03-13. DOI: https://doi.org/10.1145/3568294.3579954

Abstract: Conversation is one of the primary methods of interaction between humans and robots. It provides a natural way of communicating with the robot, reducing the obstacles posed by other interfaces (e.g., text or touch) for certain populations, such as the elderly or those with disabilities, and thereby promoting inclusivity in Human-Robot Interaction (HRI). Work in HRI has contributed significantly to the design, understanding and evaluation of human-robot conversational interactions. Concurrently, the Conversational User Interfaces (CUI) community has developed with similar aims, though with a wider focus on conversational interactions across a range of devices and platforms. This workshop aims to bring together the CUI and HRI communities through a one-day workshop to outline key shared opportunities and challenges in developing conversational interactions with robots, resulting in collaborative publications targeted at the CUI 2023 provocations track.

"Social Robotics meets Sociolinguistics: Investigating Accent Bias and Social Context in HRI"
M. Foster, J. Stuart-Smith
ACM Transactions on Human-Robot Interaction, 2023-03-13. DOI: https://doi.org/10.1145/3568294.3580063

Abstract: Deploying a social robot in the real world means that it must interact with speakers from diverse backgrounds, who in turn are likely to show substantial accent and dialect variation. Linguistic variation in social context has been well studied in human-human interaction; however, the influence of these factors on human interactions with digital agents, especially embodied agents such as robots, has received less attention. Here we present an ongoing project whose goal is to develop a social robot that is suitable for deployment in ethnically-diverse areas with distinctive regional accents. To help in developing this robot, we carried out an online survey of Scottish adults to understand their expectations for conversational interaction with a robot. The results confirm that social factors constraining accent and dialect are likely to be significant issues for human-robot interaction in this context, and so must be taken into account in the design of the system at all levels.

"Towards Robot Learning from Spoken Language"
K. Kodur, Manizheh Zand, Maria Kyrarini
ACM Transactions on Human-Robot Interaction, 2023-03-13. DOI: https://doi.org/10.1145/3568294.3580053

Abstract: The paper proposes a robot learning framework that empowers a robot to automatically generate a sequence of actions from unstructured spoken language. The robot learning framework was able to distinguish between instructions and unrelated conversations. Data were collected from 25 participants, who were asked to instruct the robot to perform a collaborative cooking task while being interrupted and distracted. The system was able to identify the sequence of instructed actions for a cooking task with an accuracy of 92.85 ± 3.87%.

"Understanding Differences in Human-Robot Teaming Dynamics between Deaf/Hard of Hearing and Hearing Individuals"
A'di Dust, Carola Gonzalez-Lebron, Shannon Connell, Saurav Singh, Reynold Bailey, Cecilia Ovesdotter Alm, Jamison Heard
ACM Transactions on Human-Robot Interaction, 2023-03-13. DOI: https://doi.org/10.1145/3568294.3580146

Abstract: With the development of Industry 4.0, more collaborative robots are being implemented in manufacturing environments. Hence, research in human-robot interaction (HRI) and human-cobot interaction (HCI) is gaining traction. However, the design of how cobots interact with humans has typically focused on the general able-bodied population, and these interactions are sometimes ineffective for specific groups of users. This study's goal is to identify interactive differences between hearing and deaf/hard of hearing individuals when interacting with cobots. Understanding these differences may promote inclusiveness by detecting ineffective interactions, reasoning about why an interaction failed, and adapting the framework's interaction strategy appropriately.

"Implications of AI Bias in HRI: Risks (and Opportunities) when Interacting with a Biased Robot"
Tom Hitron, Noa Morag Yaar, H. Erel
ACM Transactions on Human-Robot Interaction, 2023-03-13. DOI: https://doi.org/10.1145/3568162.3576977

Abstract: Social robotic behavior is commonly designed using AI algorithms that are trained on human behavioral data. This training process may result in robotic behaviors that echo human biases and stereotypes. In this work, we evaluated whether an interaction with a biased robotic object can increase participants' stereotypical thinking. In the study, a gender-biased robot moderated debates between two participants (a man and a woman) in three conditions: (1) the robot's behavior matched gender stereotypes (Pro-Man); (2) the robot's behavior countered gender stereotypes (Pro-Woman); (3) the robot's behavior neither reflected nor countered gender stereotypes (No-Preference). Quantitative and qualitative measures indicated that the interaction with the robot in the Pro-Man condition increased participants' stereotypical thinking. In the No-Preference condition, stereotypical thinking was also observed, but to a lesser extent. In contrast, when the robot displayed counter-biased behavior in the Pro-Woman condition, stereotypical thinking was eliminated. Our findings suggest that HRI designers must be conscious of AI algorithmic biases, as interactions with biased robots can reinforce implicit stereotypical thinking and exacerbate existing biases in society. On the other hand, counter-biased robotic behavior can be leveraged to support present efforts to address the negative impact of stereotypical thinking.

"Who to Teach a Robot to Facilitate Multi-party Social Interactions?"
Jouh Yeong Chew, Keisuke Nakamura
ACM Transactions on Human-Robot Interaction, 2023-03-13. DOI: https://doi.org/10.1145/3568294.3580056

Abstract: One salient function of social robots is to play the role of facilitator to enhance the harmony state of multi-party social interactions, so that every human participant is encouraged and motivated to engage actively. However, it is challenging to handcraft the behavior of social robots to achieve this objective. One promising approach is for the robot to learn from human teachers. This paper reports the findings of an empirical test to determine the optimal experimental condition for a robot to learn verbal and nonverbal strategies to facilitate a multi-party interaction. First, a modified L8 Orthogonal Array (OA) is used to design a fractional factorial experiment with factors such as the type of human facilitator, group size, and stimulus type. The response of the OA is the harmony state, explicitly defined using the speech turn-taking between speakers and represented using metrics extracted from the first-order Markov transition matrix. Analyses of main effects and ANOVA suggest the type of human facilitator and group size are significant factors affecting the harmony state. Therefore, we propose an optimal experimental condition for training a facilitator robot: high school teachers as human teachers and groups larger than four participants.

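The abstract above does not specify which metrics are extracted from the transition matrix, but the underlying representation it names is standard: a first-order Markov transition matrix estimated from the observed sequence of speaker turns. As an illustrative sketch (the function name, input format, and normalization choices here are assumptions, not the authors' implementation), such a matrix can be estimated as follows:

```python
def turn_transition_matrix(turns):
    """Estimate a first-order Markov transition matrix from a sequence
    of speaker turns, e.g. ["A", "B", "A", "C", ...].

    Returns the sorted speaker labels and a row-stochastic matrix where
    entry [i][j] is the estimated probability that speaker j takes the
    floor immediately after speaker i.
    """
    speakers = sorted(set(turns))
    index = {s: i for i, s in enumerate(speakers)}
    counts = [[0.0] * len(speakers) for _ in range(len(speakers))]
    # Count consecutive turn pairs (prev speaker -> next speaker).
    for prev, nxt in zip(turns, turns[1:]):
        counts[index[prev]][index[nxt]] += 1
    # Normalize each row so it sums to 1; rows with no outgoing
    # transitions are left as all zeros.
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total if total else 0.0 for c in row])
    return speakers, matrix
```

From such a matrix, one could derive harmony-style metrics such as how evenly the off-diagonal transition mass is spread across participants; a highly uneven matrix would indicate that some participants rarely receive the floor.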