Title: Towards Designing Companion Robots with the End in Mind
Authors: Waki Kamino
DOI: 10.1145/3568294.3580046

This paper presents an early-stage idea of using 'robot death' as an integral component of human-robot interaction design for companion robots. Reviewing previous discussions around the deaths of companion robots in real-life and popular culture contexts, and analyzing the lifelike design of current companion robots on the market, the paper explores the potential advantages of designing companion robots and human-robot interaction with their 'death' in mind.
Title: The Robot Made Us Hear Each Other: Fostering Inclusive Conversations among Mixed-Visual Ability Children
Authors: Isabel Neto, Filipa Correia, Filipa Rocha, Patricia Piedade, Ana Paiva, Hugo Nicolau
DOI: 10.1145/3568162.3576997

Inclusion is key in group work and collaborative learning. We developed a mediator robot to support and promote inclusion in group conversations, particularly in groups composed of children with and without visual impairments. We investigate the effect of two mediation strategies on group dynamics, inclusion, and perception of the robot. We conducted a within-subjects study with 78 children, 26 of whom had visual impairments, in a decision-making activity. Results indicate that the robot can foster inclusion in mixed-visual-ability group conversations. The robot succeeds in balancing participation, particularly when using a highly intervening mediation strategy (directive strategy). However, children feel more heard by their peers when the robot is less intervening (organic strategy). We extend prior work on social robots that assist group work and contribute a mediator robot that enables children with visual impairments to engage equally in group conversations. We conclude by discussing design implications for inclusive social robots.
Title: Designing a Robot which Touches the User's Head with Intra-Hug Gestures
Authors: Yuya Onishi, H. Sumioka, M. Shiomi
DOI: 10.1145/3568294.3580096

Hugging has many positive benefits, and several studies have applied it in human-robot interaction. However, due to limitations in robot performance, these robots only touched the human's back. In this study, we developed a hug robot named "Moffuly-II." This robot can not only hug with intra-hug gestures but also touch the user's back or head. This paper describes the robot system and the users' impressions of hugging the robot.
Title: Making Music More Inclusive with Hospiano
Authors: Chacharin Lertyosbordin, Nichaput Khurukitwanit, Teeratas Asavareongchai, Sirin Liukasemsarn
DOI: 10.1145/3568294.3580184

Music brings people together; it is a universal language that can help us be more expressive and understand our feelings and emotions better. The "Hospiano" robot is a prototype developed with the goal of making music accessible to all, regardless of physical ability. The robot acts as a pianist and can be placed in hospital lobbies and wards, playing the piano in response to patients' gestures and facial expressions (i.e., head movement, eye and mouth movement, and proximity). It has three main modes of operation: "Robot Pianist mode", in which it plays pre-existing songs; "Play Along mode", which allows anyone to interact with the music; and "Composer mode", which allows patients to create their own music. The software that controls the prototype's actions runs on the Robot Operating System (ROS). The prototype shows that humans and robots can interact fluently via a robot's vision, which opens up a wide range of possibilities for further interaction between these logical machines and more emotive beings like humans, improving the quality of life of those who use it, increasing inclusivity, and contributing to a better world for future generations.
Title: People Dynamically Update Trust When Interactively Teaching Robots
Authors: V. B. Chi, B. Malle
DOI: 10.1145/3568162.3576962

Human-robot trust research often measures people's trust in robots in individual scenarios. However, humans may update their trust dynamically as they continuously interact with a robot. In a well-powered study (n = 220), we investigate the trust-updating process across a 15-trial interaction. In a novel paradigm, participants act in the role of teacher to a simulated robot on a smartphone-based platform, and we assess trust at multiple levels (momentary trust feelings, perceptions of trustworthiness, and intended reliance). Results reveal that people are highly sensitive to the robot's learning progress trial by trial: they take into account previous-task performance, current-task difficulty, and cumulative learning across training. More integrative perceptions of robot trustworthiness steadily grow as people gather more evidence from observing robot performance, especially for faster-learning robots. Intended reliance on the robot in novel tasks increased only for faster-learning robots.
Title: A Persuasive Robot that Alleviates Endogenous Smartphone-related Interruption
Authors: Hanyang Hu, Mengyu Chen, Ruhan Wang, Yijie Guo
DOI: 10.1145/3568294.3580097

Endogenous smartphone interruptions affect many aspects of people's everyday lives, especially when studying or working at a desk under a lamp. To mitigate this, we built a robot that persuades users intrinsically by augmenting the desk lamp with specific postures and light. This paper presents our design considerations and a first prototype, showing the possibility of alleviating endogenous interruptions through robots.
Title: Visuo-Textual Explanations of a Robot's Navigational Choices
Authors: Amar Halilovic, F. Lindner
DOI: 10.1145/3568294.3580141

With the rise in the number of robots in our daily lives, human-robot encounters will become more frequent. To improve human-robot interaction (HRI), people will require explanations of robots' actions, especially if they do something unexpected. Our focus is on robot navigation, where we explain why robots make specific navigational choices. Building on methods from the area of Explainable Artificial Intelligence (XAI), we employ a semantic map and techniques from the area of Qualitative Spatial Reasoning (QSR) to enrich visual explanations with knowledge-level spatial information. We outline how a robot can generate visual and textual explanations simultaneously and test our approach in simulation.
Title: Variable Autonomy for Human-Robot Teaming (VAT)
Authors: Manolis Chiou, S. Booth, Bruno Lacerda, Andreas Theodorou, S. Rothfuss
DOI: 10.1145/3568294.3579957

As robots are introduced to various domains and applications, Human-Robot Teaming (HRT) capabilities are essential. Such capabilities involve teaming with humans who are in, on, or out of the loop, at different levels of abstraction, leveraging the complementary capabilities of humans and robots. This requires robotic systems with the ability to dynamically vary their level or degree of autonomy to collaborate with the human(s) efficiently and overcome various challenging circumstances. Variable Autonomy (VA) is an umbrella term encompassing such research, including but not limited to shared control and shared autonomy, mixed initiative, adjustable autonomy, and sliding autonomy. This workshop is driven by the timely need to bring together VA-related research and practices that are often disconnected across different communities, as the field is relatively young. The workshop's goal is to consolidate research in VA. To this end, and given the complexity and span of human-robot systems, this workshop will adopt a holistic, trans-disciplinary approach aiming to a) identify and classify related common challenges and opportunities; b) identify the disciplines that need to come together to tackle the challenges; c) identify and define common terminology, approaches, methodologies, benchmarks, and metrics; and d) define short- and long-term research goals for the community. To achieve these objectives, this workshop aims to bring together industry stakeholders, researchers from fields under the banner of VA, and specialists from other highly related fields such as human factors and psychology. The workshop will consist of a mix of invited talks, contributed papers, and an interactive discussion panel, working toward a shared vision for VA.
Title: Human-Drone Interaction: Interacting with People Smoking in Prohibited Areas
Authors: Yermakhan Kassym, Saparkhan Kassymbekov, Kamila Zhumakhanova, A. Sandygulova
DOI: 10.1145/3568294.3580173

Drones are continually entering our daily lives and are being used in a growing number of applications. This creates a natural demand for better ways for humans and drones to interact. One possible application that would benefit from improved interaction is monitoring smoking in prohibited areas. We propose a custom drone flight gesture that we believe delivers the message "not to smoke" better than the ready-made built-in gesture. To this end, we conducted a within-subjects experiment with 19 participants, in which we evaluated the gestures on a drone operated through a Wizard-of-Oz interaction design. The results demonstrate that the proposed gesture conveyed the message better than the built-in gesture.
Title: HighLight
Authors: Alessandro Cabrio, Negin Hashmati, Philip Rabia, Liina Tumma, Hugo Wärnberg, Sjoerd Hendriks, Mohammad Obaid
DOI: 10.5040/9781350088733.0124