This paper presents three studies in which we probe aesthetic strategies for sound produced by movement sonification of a Pepper robot, mapping its movements to sound models. We developed two sets of sound models. The first set consisted of two sound models, one sawtooth-based and the other based on feedback chains, for investigating how the perception of synthesized robot sounds depends on their design complexity. We implemented the second set of sound models to probe the “materiality” of sound made by a robot in motion. This set consisted of an engine-like sound synthesis highlighting the robot’s internal mechanisms, a metallic sound synthesis highlighting the robot’s typical appearance, and a whoosh sound synthesis highlighting the movement itself. The first study explores, through an online survey, how the first set of sound models can influence the perception of expressive gestures of a Pepper robot. In the second study, we carried out an experiment in a museum installation with a Pepper robot presented in two scenarios: (1) welcoming patrons into a restaurant and (2) providing information to visitors in a shopping center. Finally, in the third study, we conducted an online survey with stimuli similar to those used in the second study. Our findings suggest that participants preferred more complex sound models for the sonification of robot movements. Concerning materiality, participants preferred subtle sounds that blend well with the ambient sound (i.e., are less distracting) and soundscapes in which sound sources can be identified. Sound preferences also varied with the context in which participants experienced the robot-generated sounds (e.g., a live museum installation vs. an online display).
Latupeirissa, A., Panariello, C., & Bresin, R. "Probing Aesthetics Strategies for Robot Sound: Complexity and Materiality in Movement Sonification." ACM Transactions on Human-Robot Interaction, 2023-03-17. https://doi.org/10.1145/3585277
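As an illustration of the kind of movement-to-sound mapping described above, here is a minimal sketch (our own, not the authors' synthesis models): the magnitude of a joint velocity drives both the pitch and the loudness of a naive sawtooth oscillator.

```python
import numpy as np

def sonify_velocity(velocities, sr=44100, frame_dur=0.05, base_freq=110.0):
    """Map a sequence of joint-velocity magnitudes (0..1) to audio:
    faster motion -> higher pitch and louder sawtooth (illustrative only)."""
    n = int(sr * frame_dur)          # samples per velocity frame
    out = []
    sample_offset = 0
    for v in velocities:
        freq = base_freq * (1.0 + 3.0 * v)           # pitch rises with speed
        t = (sample_offset + np.arange(n)) / sr
        saw = 2.0 * ((t * freq) % 1.0) - 1.0          # naive sawtooth in [-1, 1]
        out.append(0.2 * v * saw)                     # louder when moving faster
        sample_offset += n
    return np.concatenate(out)

signal = sonify_velocity([0.1, 0.5, 0.9, 0.4])
```

This sketch ignores phase continuity between frames and anti-aliasing; a real sonification engine would smooth the parameter changes.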
Human supervision of multiple fielded robots is a challenging task that requires thoughtful design and implementation of both the underlying infrastructure and the human interface. It also requires a skilled human able to manage the workload and understand when to trust the autonomy or to intervene manually. We present an end-to-end system for human-robot interaction with a heterogeneous team of robots in complex, communication-limited environments. The system includes the communication infrastructure, autonomy interaction, and human interface elements. Results of the DARPA Subterranean Challenge Final Systems Competition are presented as a case study of the design, and the shortcomings of the system are analyzed.
Riley, D. G., & Frew, E. "Fielded Human-Robot Interaction for a Heterogeneous Team in the DARPA Subterranean Challenge." ACM Transactions on Human-Robot Interaction, 2023-03-16. https://doi.org/10.1145/3588325
Emotion researchers have begun to converge on the theory that emotions are psychologically and socially constructed. A common assumption in affective robotics is that emotions are categorical brain-body states that can be confidently modeled. But if emotions are constructed, then they are interpretive, ambiguous, and specific to an individual’s unique experience. Constructivist views of emotion pose several challenges to affective robotics: first, they call into question the validity of attempting to obtain objective measures of emotion through rating scales or biometrics. Second, ambiguous subjective data pose a challenge to computational systems that need structured and definite data to operate. How can a constructivist view of emotion be reconciled with these challenges? In this article, we look to psychotherapy for ontological, epistemic, and methodological guidance. Therapeutic fields (1) already understand emotions to be intrinsically embodied, relative, and metaphorical and (2) have built up substantial knowledge informed by everyday practice. It is our hope that by using interpretive methods inspired by therapeutic approaches, HRI researchers will be able to focus on the practicalities of designing effective embodied emotional interactions.
Bucci, P., Marino, D., & Beschastnikh, I. "Affective Robots Need Therapy." ACM Transactions on Human-Robot Interaction, 2023-03-15. https://doi.org/10.1145/3543514
Hugging has many positive benefits, and several studies have explored its application in human-robot interaction. However, due to limitations in robot performance, previous robots have only touched the human's back. In this study, we developed a hug robot named "Moffuly-II." The robot not only hugs with intra-hug gestures but can also touch the user's back or head. This paper describes the robot system and users' impressions of hugging the robot.
Onishi, Y., Sumioka, H., & Shiomi, M. "Designing a Robot which Touches the User's Head with Intra-Hug Gestures." ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580096
Music brings people together; it is a universal language that can help us be more expressive and better understand our feelings and emotions. The "Hospiano" robot is a prototype developed with the goal of making music accessible to all, regardless of physical ability. The robot acts as a pianist and can be placed in hospital lobbies and wards, playing the piano in response to the gestures and facial expressions of patients (i.e., head movement, eye and mouth movement, and proximity). It has three main modes of operation: "Robot Pianist mode", in which it plays pre-existing songs; "Play Along mode", which allows anyone to interact with the music; and "Composer mode", which allows patients to create their own music. The software that controls the prototype's actions runs on the Robot Operating System (ROS). The prototype demonstrates that humans and robots can interact fluently through a robot's vision, which opens up a wide range of possibilities for further interaction between such machines and humans, with the potential to improve users' quality of life and increase inclusivity.
Lertyosbordin, C., Khurukitwanit, N., Asavareongchai, T., & Liukasemsarn, S. "Making Music More Inclusive with Hospiano." ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580184
Human-robot trust research often measures people's trust in robots in individual scenarios. However, humans may update their trust dynamically as they continuously interact with a robot. In a well-powered study (n = 220), we investigate the trust updating process across a 15-trial interaction. In a novel paradigm, participants act in the role of teacher to a simulated robot on a smartphone-based platform, and we assess trust at multiple levels (momentary trust feelings, perceptions of trustworthiness, and intended reliance). Results reveal that people are highly sensitive to the robot's learning progress trial by trial: they take into account previous-task performance, current-task difficulty, and cumulative learning across training. More integrative perceptions of robot trustworthiness steadily grow as people gather more evidence from observing robot performance, especially of faster-learning robots. Intended reliance on the robot in novel tasks increased only for faster-learning robots.
Chi, V. B., & Malle, B. "People Dynamically Update Trust When Interactively Teaching Robots." ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568162.3576962
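The trial-by-trial dynamic described above can be illustrated with a simple update rule. This is our own sketch, not the authors' measurement model: observed evidence is weighted by task difficulty, so succeeding at a hard task raises trust more and failing at an easy task lowers it more.

```python
def update_trust(trust, success, difficulty, rate=0.2):
    """One illustrative trust-update step after observing a training trial.
    trust and difficulty are in [0, 1]; rate controls update speed."""
    target = 1.0 if success else 0.0
    # Hard successes and easy failures are the most diagnostic observations.
    weight = difficulty if success else (1.0 - difficulty)
    return trust + rate * weight * (target - trust)

trust = 0.5  # neutral prior before the interaction
for success, difficulty in [(True, 0.8), (True, 0.6), (False, 0.3), (True, 0.9)]:
    trust = update_trust(trust, success, difficulty)
```

Running the four-trial example leaves trust above the neutral prior, since the robot mostly succeeds, including on hard tasks.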
The paper proposes a robot learning framework that empowers a robot to automatically generate a sequence of actions from unstructured spoken language. The robot learning framework was able to distinguish between instructions and unrelated conversations. Data were collected from 25 participants, who were asked to instruct the robot to perform a collaborative cooking task while being interrupted and distracted. The system was able to identify the sequence of instructed actions for a cooking task with an accuracy of 92.85 ± 3.87%.
Kodur, K., Zand, M., & Kyrarini, M. "Towards Robot Learning from Spoken Language." ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580053
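To make the instruction-vs-conversation distinction concrete, here is a deliberately simple toy heuristic (purely illustrative; the paper's framework is presumably far more sophisticated): an utterance counts as an instruction when it opens with an imperative verb from a small cooking vocabulary, and the action sequence is whatever survives the filter.

```python
# Hypothetical verb list for a cooking scenario; not from the paper.
ACTION_VERBS = {"chop", "stir", "pour", "add", "heat", "mix", "cut", "place"}

def extract_actions(utterances):
    """Return (verb, arguments) pairs for utterances that look like
    imperative cooking instructions; drop everything else as chatter."""
    actions = []
    for u in utterances:
        words = u.lower().rstrip(".!?").split()
        if words and words[0] in ACTION_VERBS:
            actions.append((words[0], " ".join(words[1:])))
    return actions

dialogue = [
    "Chop the onions",
    "By the way, how was your weekend?",
    "Stir the sauce gently",
    "I think it might rain later",
    "Add a pinch of salt",
]
```

On this dialogue the filter keeps the three instructions and discards the two interruptions, mirroring (in miniature) the filtering task the study evaluates.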
With the development of Industry 4.0, more collaborative robots (cobots) are being deployed in manufacturing environments. Hence, research in human-robot interaction (HRI) and human-cobot interaction (HCI) is gaining traction. However, the design of how cobots interact with humans has typically focused on the general able-bodied population, and these interactions are sometimes ineffective for specific groups of users. This study's goal is to identify interaction differences between hearing and deaf or hard of hearing individuals when working with cobots. Understanding these differences may promote inclusiveness by detecting ineffective interactions, reasoning about why an interaction failed, and adapting the framework's interaction strategy appropriately.
Dust, A., Gonzalez-Lebron, C., Connell, S., Singh, S., Bailey, R., Alm, C. O., & Heard, J. "Understanding Differences in Human-Robot Teaming Dynamics between Deaf/Hard of Hearing and Hearing Individuals." ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580146
One salient function of social robots is to play the role of facilitator, enhancing the harmony state of multi-party social interactions so that every human participant is encouraged and motivated to engage actively. However, it is challenging to handcraft robot behavior that achieves this objective. One promising approach is for the robot to learn from human teachers. This paper reports the findings of an empirical test to determine the optimal experimental condition for a robot to learn verbal and nonverbal strategies for facilitating a multi-party interaction. First, a modified L8 Orthogonal Array (OA) is used to design a fractional factorial experimental condition using factors such as the type of human facilitator, group size, and stimulus type. The response of the OA is the harmony state, explicitly defined using speech turn-taking between speakers and represented using metrics extracted from the first-order Markov transition matrix. Analyses of main effects and ANOVA suggest that the type of human facilitator and group size are significant factors affecting the harmony state. Therefore, we propose an optimal experimental condition for training a facilitator robot: high school teachers as human teachers and groups larger than four participants.
Chew, J. Y., & Nakamura, K. "Who to Teach a Robot to Facilitate Multi-party Social Interactions?" ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580056
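The abstract's harmony representation — a first-order Markov transition matrix over speaking turns — can be sketched directly. The entropy summary below is our own illustrative choice of metric, not necessarily the one used in the paper.

```python
import numpy as np

def turn_transition_matrix(turns, n_speakers):
    """First-order Markov transition matrix of speaking turns:
    M[i, j] = P(next speaker is j | current speaker is i)."""
    counts = np.zeros((n_speakers, n_speakers))
    for a, b in zip(turns, turns[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # avoid division by zero for silent speakers
    return counts / row_sums

def turn_entropy(matrix):
    """Mean row entropy (bits): higher means turns are spread more evenly
    across next speakers -- one plausible proxy for a balanced discussion."""
    safe = np.where(matrix > 0, matrix, 1.0)  # log2(1) = 0 for empty cells
    return float(-(matrix * np.log2(safe)).sum(axis=1).mean())

turns = [0, 1, 0, 2, 1, 0, 2, 0, 1, 2]  # speaker IDs in order of turns
M = turn_transition_matrix(turns, 3)
```

Each row of `M` is a probability distribution over the next speaker, so metrics such as row entropy or dominance of a single column can quantify how evenly the conversation flows.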
In Human-Robot Collaboration (HRC) tasks, the classical Perception-Action cycle cannot fully explain the collaborative behaviour of the human-robot pair until it is extended to a Perception-Intention-Action (PIA) cycle, which gives the human's intention a key role at the same level as the robot's perception rather than treating it as a sub-block of perception. Although part of the human's intention can be perceived or inferred by the other agent, inference is prone to misunderstandings, so in some cases the true intention has to be communicated explicitly to fulfill the task. Here, we explore both types of intention and combine them with the robot's perception through the concept of Situation Awareness (SA). We validate the PIA cycle and its acceptance by the user with a preliminary experiment in an object transportation task, showing that its usage can increase trust in the robot.
Domínguez-Vidal, J. E., Rodríguez, N., & Sanfeliu, A. "Perception-Intention-Action Cycle as a Human Acceptable Way for Improving Human-Robot Collaborative Tasks." ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580149
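The two kinds of intention in the PIA cycle — inferred versus explicitly stated — can be sketched as a small fusion step. All names here (`fuse_intention`, `pia_step`, the toy intentions) are hypothetical, chosen only to illustrate that an explicit statement overrides error-prone inference before the action is planned.

```python
def fuse_intention(inferred, stated):
    """An explicitly communicated intention overrides the inferred one,
    since inference is prone to misunderstanding."""
    return stated if stated is not None else inferred

def plan_action(situation):
    """Toy planner mapping the fused situation to an action."""
    if situation["intention"] == "move_left":
        return "steer_left"
    if situation["intention"] == "move_right":
        return "steer_right"
    return "hold"

def pia_step(percept, inferred_intention, stated_intention=None):
    """One Perception-Intention-Action step: fuse perception with the
    (possibly corrected) intention into situation awareness, then act."""
    intention = fuse_intention(inferred_intention, stated_intention)
    situation = {"percept": percept, "intention": intention}
    return plan_action(situation)
```

For example, if the robot infers "move_left" but the human explicitly states "move_right", the explicit intention wins and the planned action changes accordingly.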