"Probing Aesthetics Strategies for Robot Sound: Complexity and Materiality in Movement Sonification" by A. Latupeirissa, C. Panariello, R. Bresin. ACM Transactions on Human-Robot Interaction, 2023-03-17. https://doi.org/10.1145/3585277

This paper presents three studies probing aesthetic strategies for sound produced by movement sonification of a Pepper robot, mapping its movements to sound models. We developed two sets of sound models. The first set consisted of two sound models, one sawtooth-based and one based on feedback chains, for investigating how the perception of synthesized robot sounds depends on their design complexity. We implemented the second set of sound models to probe the "materiality" of sound made by a robot in motion. This set consisted of an engine-like sound synthesis highlighting the robot's internal mechanisms, a metallic sound synthesis highlighting the robot's typical appearance, and a whoosh sound synthesis highlighting the movement itself. In the first study, an online survey, we explored how the first set of sound models can influence the perception of expressive gestures of a Pepper robot. In the second study, we carried out an experiment in a museum installation with a Pepper robot presented in two scenarios: (1) welcoming patrons into a restaurant and (2) providing information to visitors in a shopping center. Finally, in the third study, we conducted an online survey with stimuli similar to those used in the second study. Our findings suggest that participants preferred more complex sound models for the sonification of robot movements. Concerning materiality, participants preferred subtle sounds that blend well with the ambient sound (i.e., are less distracting) and soundscapes in which sound sources can be identified. Moreover, sound preferences varied depending on the context in which participants experienced the robot-generated sounds (e.g., a live museum installation vs. an online display).
"Fielded Human-Robot Interaction for a Heterogeneous Team in the DARPA Subterranean Challenge" by Danny G. Riley, E. Frew. ACM Transactions on Human-Robot Interaction, 2023-03-16, pp. 1-24. https://doi.org/10.1145/3588325

Human supervision of multiple fielded robots is a challenging task that requires thoughtful design and implementation of both the underlying infrastructure and the human interface. It also requires a skilled human operator able to manage the workload and judge when to trust the autonomy and when to intervene manually. We present an end-to-end system for human-robot interaction with a heterogeneous team of robots in complex, communication-limited environments. The system includes the communication infrastructure, autonomy interaction, and human interface elements. Results from the DARPA Subterranean Challenge Final Systems Competition are presented as a case study of the design, and the shortcomings of the system are analyzed.
"Affective Robots Need Therapy" by Paul Bucci, David Marino, Ivan Beschastnikh. ACM Transactions on Human-Robot Interaction, 2023-03-15, pp. 1-22. https://doi.org/10.1145/3543514

Emotion researchers have begun to converge on the theory that emotions are psychologically and socially constructed. A common assumption in affective robotics is that emotions are categorical brain-body states that can be confidently modeled. But if emotions are constructed, then they are interpretive, ambiguous, and specific to an individual's unique experience. Constructivist views of emotion pose several challenges to affective robotics: first, they call into question the validity of attempting to obtain objective measures of emotion through rating scales or biometrics. Second, ambiguous subjective data pose a challenge to computational systems that need structured and definite data to operate. How can a constructivist view of emotion be reconciled with these challenges? In this article, we look to psychotherapy for ontological, epistemic, and methodological guidance, a field that (1) already understands emotions to be intrinsically embodied, relative, and metaphorical and (2) has built up substantial knowledge informed by everyday practice. It is our hope that by using interpretive methods inspired by therapeutic approaches, HRI researchers will be able to focus on the practicalities of designing effective embodied emotional interactions.
"RoPi" by Sebastian Caballa, David Lizano, Manuel Aranda, Diego Zegarra. ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580198

Burn injuries are traumatic events, especially for children. The combined use of pharmacological and non-pharmacological strategies in hospitals favors positive emotional experiences. However, in Latin American hospitals, which operate on low budgets, it is often not possible to hire therapeutic support personnel, such as hospital clowns, due to the lack of available human resources. RoPi is a social robot created to provide emotional support to hospitalized children through its multicolor interchangeable pieces and its interactive functions.
"Comparison of Attitudes Towards Robots of Different Population Samples in Norway" by Marten Bloch, Alexandra Fernandes. ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580151

Acceptance of robots is known to be directly influenced by the perceptions and attitudes potential users have of them. In particular, negative attitudes, if left unaddressed, can keep robot deployments from unlocking their full potential and ultimately cause them to fail. We employed the popular Negative Attitude Towards Robots Scale (NARS) across four different studies to assess how different populations in Norway perceive robots. All four studies included exposure to at least one robot, but the setup of each individual study differed from the others. We summarized the results across studies and made comparisons between the different samples. We also analyzed the effect of gender and age on attitudes towards robots as measured by the NARS. The results indicate that there are significant differences between samples and that females score significantly higher than males, thus holding a less favorable opinion of robots and potentially avoiding interaction with them. We touch upon possible explanations and implications of our results and highlight the need for more research into this topic.
"Carla - Making Transport Hubs Accessible" by A. Girish, Suchithra Selladurai, Vidhya Vidhya Sagar Murugan, Viboosithasri N S. ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580204

Accessibility in transport hubs is indispensable for improving the social and economic spheres of persons with disabilities. Carla, a smart, autonomous personal assistant, is proposed as an affordable solution for bringing inclusivity to transport hubs. Carla can serve as a guide to a destination, an information kiosk, and a luggage carrier. It is equipped with a rope guide and digital braille to serve people with multi-sensory impairments. Carla can also be set in motion by a modest push from the user, a novel feature introduced to enhance interaction. The system architecture and the algorithm for autonomous navigation are also discussed. Finally, interactions with Carla in a number of use cases are presented.
"Human-Robot Conversational Interaction (HRCI)" by Donald Mcmillan, Razan N. Jaber, Benjamin R. Cowan, J. Fischer, Bahar Irfan, Ronald Cumbal, Nima Zargham, Minha Lee. ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3579954

Conversation is one of the primary methods of interaction between humans and robots. It provides a natural way of communicating with the robot, reducing the obstacles posed by other interfaces (e.g., text or touch) that may cause difficulties for certain populations, such as the elderly or those with disabilities, and thereby promoting inclusivity in Human-Robot Interaction (HRI). Work in HRI has contributed significantly to the design, understanding, and evaluation of human-robot conversational interactions. Concurrently, the Conversational User Interfaces (CUI) community has developed with similar aims, though with a wider focus on conversational interactions across a range of devices and platforms. This one-day workshop aims to bring together the CUI and HRI communities to outline key shared opportunities and challenges in developing conversational interactions with robots, resulting in collaborative publications targeted at the CUI 2023 provocations track.
"SEAN-VR: An Immersive Virtual Reality Experience for Evaluating Social Robot Navigation" by Qiping Zhang, Nathan Tsoi, Marynel Vázquez. ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580039

We propose a demonstration of the Social Environment for Autonomous Navigation (SEAN) with Virtual Reality (VR) for advancing research in Human-Robot Interaction. In our demonstration, a user controls a virtual avatar in simulation and performs directed navigation tasks with a mobile robot in a warehouse environment. Our demonstration shows how researchers can leverage the immersive nature of VR to study robot navigation from a user-centered perspective in densely populated environments while avoiding the physical safety concerns common to operating robots in the real world. This is important for studying interactions with robots driven by algorithms that are early in their development lifecycle.
"Robotic Interventions for Learning (ROB-I-LEARN): Examining Social Robotics for Learning Disabilities through Business Model Canvas" by A. Arora, Amit Arora, K. Sivakumar, John R. McIntyre. ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580088

This ROB-I-LEARN research utilizes a versatile framework, the Business Model Canvas (BMC), for robot design and curriculum development aimed at students diagnosed with autism spectrum disorder (ASD). Robotic intervention and human-robot interaction (HRI) field experiments with high school students were conducted as an outcome of the BMC framework and customer discovery interviews. These curriculum-related robotic interventions and interactive scenarios were designed to support cognitive rehabilitation for high school students with ASD, enabling a higher-quality learning environment that corresponds with students' learning requirements and prepares them for future learning and workforce environments.
"On Using Social Signals to Enable Flexible Error-Aware HRI" by Maia Stiber, R. Taylor, Chien-Ming Huang. ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568162.3576990

Prior error management techniques often lack the versatility to appropriately address robot errors across tasks and scenarios. Their fundamental framework involves explicit, manual error management and implicit error management driven by domain-specific information, tailoring responses to specific interaction contexts. We present a framework for approaching error-aware systems that adds implicit social signals as another information channel, creating more flexibility in application. To support this notion, we introduce a novel dataset (composed of three data collections) focused on understanding natural facial action unit (AU) responses to robot errors during physical human-robot interactions, varying across tasks, errors, people, and scenarios. Analysis of the dataset reveals that, through the lens of error detection, using AUs as input to error management affords flexibility to the system and has the potential to improve the error detection response rate. In addition, we provide an example of a real-time interactive robot error management system using the error-aware framework.