Title: Demonstrating the Potential of Interactive Product Packaging for Enriching Human-Robot Interaction
Authors: Christine P. Lee, Bengisu Cagiltay, Dakota Sullivan, Bilge Mutlu
DOI: 10.1145/3568294.3580038 (ACM Transactions on Human-Robot Interaction, 2023-03-13)
Abstract: While social robots are increasingly introduced into domestic settings, few have explored the utility of the robots' packaging. Here we highlight the potential of product packaging in human-robot interaction to facilitate, expand, and enrich user experience with the robot. We present a social robot's box as interactive product packaging, designed to be reused as a "home" for the robot. Through co-design sessions with children, a narrative-driven and socially engaging box was developed to support initial interactions between the child and the robot. Our findings emphasize the importance of packaging design in producing positive outcomes for successful human-robot interaction.
Title: Investigating the Integration of Human-Like and Machine-Like Robot Behaviors in a Shared Elevator Scenario
Authors: Danilo Gallo, P. Bioche, J. Willamowski, T. Colombino, Shreepriya Gonzalez-Jimenez, Herve Poirier, Cécile Boulard
DOI: 10.1145/3568162.3576974 (ACM Transactions on Human-Robot Interaction, 2023-03-13)
Abstract: This paper examines the advantages and disadvantages of combining Human-Like and Machine-Like behaviors for a robot taking a shared elevator with a bystander as part of an office delivery service scenario. We present findings of an in-person wizard-of-oz experiment that builds on and implements behavior policies developed in a previous study. In this experiment, we found that the combination of Machine-Like and Human-Like behaviors was perceived as better than Human-Like behaviors alone. We discuss possible reasons and point to key capabilities that a socially competent robot should have to achieve better Human-Like behaviors in order to seamlessly negotiate a social encounter with bystanders in a shared elevator or similar scenario. We found that establishing and maintaining a shared transactional space is one of these key requirements.
Title: Creative AI for HRI Design Explorations
Authors: Marius Hoggenmueller, M. Lupetti, Willem van der Maden, Kazjon Grace
DOI: 10.1145/3568294.3580035 (ACM Transactions on Human-Robot Interaction, 2023-03-13)
Abstract: Design fixation, a phenomenon in which designers adhere to pre-existing ideas or concepts that constrain design outcomes, is particularly prevalent in human-robot interaction (HRI), for example due to collectively held and stabilised imaginations of what a robot should look like and how it should behave. In this paper, we explore the contribution of creative AI tools to overcoming design fixation and enhancing creative processes in HRI design. In a four-week design exploration, we used generative text-to-image models to ideate and visualise robotic artefacts and robot sociotechnical imaginaries. We exchanged results, along with reflections, through a digital postcard format. We demonstrate the usefulness of our approach for imagining novel robot concepts, surfacing existing assumptions and robot stereotypes, and situating robotic artefacts in context. We discuss the contribution to designerly HRI practices and conclude with lessons learnt for using creative AI tools as an emerging design practice in HRI research and beyond.
Title: "Who's that?": Identity Self-Perception and Projection in the Use of Telepresence Robots in Hybrid Classrooms
Authors: Houda Elmimouni, Amy Kinney, Elizabeth C. Brooks, Hannah Li, S. Šabanović
DOI: 10.1145/3568294.3580090 (ACM Transactions on Human-Robot Interaction, 2023-03-13)
Abstract: Robotic Telepresence (RT) is a promising medium for students who are unable to attend in-person classes. It enables remote students to be present in the classroom and interact with their classmates and instructors. However, it can limit their identity self-perception and projection, with possible repercussions for social dynamics and inclusion within the classroom. We present preliminary findings of a qualitative analysis of 12 observations and interviews with RT attendees. We examine aspects of RT design and use that either supported identity self-perception and projection or limited it. Finally, we offer design and use recommendations for telepresence robots in the classroom context.
Title: PLATYPUS
Authors: Jose Pablo De la Rosa Gutierrez, A. S. Sørensen
DOI: 10.1145/3568294.3580102 (ACM Transactions on Human-Robot Interaction, 2023-03-13)
Abstract: When robots are used for physical therapy, programming becomes too important to be left to programmers. Developing programs for training robots is time-consuming and requires expertise within multiple engineering domains, combined with physical training, therapy, and human interaction competencies. In this paper, we present Platypus: an end-user development environment that encompasses the design and execution of custom activities for robot-assisted physical training. The current version ships a set of plugins for Eclipse's IDE and uses a block-based visual language to specify the robot's behaviors at a high abstraction level, which are translated into the low-level code specifications followed by the robot. As a use case, we present its implementation on RoboTrainer, a modular, rope-based pulling device for training at home. While user tests suggest that the platform has the potential to reduce the technical obstacles for building custom training scenarios, informational and design learning barriers were revealed during the tests.
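The high-to-low-level translation step described above can be sketched as follows. This is a hypothetical illustration only: the block schema, block types, and command strings below are invented for the sketch and are not the actual Platypus plugin API or RoboTrainer command set.

```python
# Hypothetical sketch: flatten high-level training "blocks" (as a
# block-based visual editor might emit them) into low-level robot
# commands, one command string per line sent to the device.

def translate(blocks):
    """Translate a list of block dicts into low-level command strings."""
    commands = []
    for block in blocks:
        kind = block["type"]
        if kind == "set_resistance":
            # Configure motor torque for the rope-pulling device.
            commands.append(f"MOTOR_TORQUE {block['newtons']:.1f}")
        elif kind == "pull_reps":
            # One repetition = one retract/release cycle of the rope.
            for _ in range(block["count"]):
                commands.append("ROPE_RETRACT")
                commands.append("ROPE_RELEASE")
        elif kind == "rest":
            commands.append(f"WAIT {block['seconds']}")
        else:
            raise ValueError(f"unknown block type: {kind}")
    return commands

program = [
    {"type": "set_resistance", "newtons": 20.0},
    {"type": "pull_reps", "count": 2},
    {"type": "rest", "seconds": 30},
]
print(translate(program))
```

The point of the abstraction gap is visible even in this toy version: the therapist-facing block ("two repetitions at 20 N") expands into device-level steps the end user never has to see.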
Title: Open-source Natural Language Processing on the PAL Robotics ARI Social Robot
Authors: S. Lemaignan, S. Cooper, Raquel Ros, L. Ferrini, Antonio Andriella, Aina Irisarri
DOI: 10.1145/3568294.3580041 (ACM Transactions on Human-Robot Interaction, 2023-03-13)
Abstract: We demonstrate how state-of-the-art open-source tools for automatic speech recognition (vosk) and dialogue management (rasa) can be integrated on a social robotic platform (PAL Robotics' ARI robot) to provide rich verbal interactions. Our open-source, ROS-based pipeline implements the ROS4HRI standard, and the demonstration presents the details of the integration in a way that will enable attendees to replicate it on their own robots. The demonstration takes place in the context of assistive robotics and robots for elderly care, two application domains with unique interaction challenges for which the ARI robot has been designed and extensively tested in real-world settings.
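The shape of such a pipeline (audio in, transcript out, intent out) can be sketched with plain stubs. To be clear, the two functions below are hypothetical stand-ins, not the real vosk or rasa APIs, and the real integration runs as ROS nodes exchanging ROS4HRI messages rather than direct function calls.

```python
# Hypothetical sketch of an ASR -> dialogue-management pipeline.
# recognize_speech() stands in for a vosk recognizer and
# parse_intent() for a rasa NLU parse; both are stubs for illustration.

def recognize_speech(audio_chunk: bytes) -> str:
    """Stand-in for ASR: raw audio in, transcript out."""
    return "please bring me my glasses"  # canned transcript for the sketch

def parse_intent(utterance: str) -> dict:
    """Stand-in for NLU: transcript in, structured intent out."""
    if "bring" in utterance:
        return {"intent": "fetch_object", "object": utterance.split()[-1]}
    return {"intent": "fallback"}

transcript = recognize_speech(b"\x00" * 3200)  # fake 16-bit PCM frame
print(parse_intent(transcript))
```

Even at this level of abstraction, the sketch shows why the two stages are worth decoupling: the dialogue manager only ever sees text, so either component can be swapped without touching the other.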
Title: Utilizing Prior Knowledge to Improve Automatic Speech Recognition in Human-Robot Interactive Scenarios
Authors: Pradip Pramanick, Chayan Sarkar
DOI: 10.1145/3568294.3580129 (ACM Transactions on Human-Robot Interaction, 2023-03-13)
Abstract: The success of human-robot interaction depends not only on a robot's ability to understand the intent and content of the human utterance but also on the automatic speech recognition (ASR) system. Modern ASR can provide highly accurate (grammatically and syntactically) transcriptions. Yet general-purpose ASR often misses the semantics of an utterance through incorrect word prediction due to open-vocabulary modeling. ASR inaccuracy can have significant repercussions, as it can lead to a completely different action by the robot in the real world. Can prior knowledge be helpful in such a scenario? In this work, we explore how prior knowledge can be utilized in ASR decoding, and our experiments demonstrate that the resulting system significantly improves ASR transcription of robotic task instructions.
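One simple way to exploit prior knowledge in decoding (purely illustrative, not necessarily the authors' method) is to rescore an ASR n-best list with a bonus for hypotheses containing known task-domain words, so that an acoustically plausible but semantically wrong word like "cap" loses to the in-domain "cup":

```python
# Illustrative sketch: rescore ASR n-best hypotheses with a
# domain-vocabulary prior. The vocabulary and weight are invented
# for the example.

DOMAIN_WORDS = {"pick", "place", "cup", "table", "shelf"}

def rescore(nbest, weight=0.5):
    """nbest: list of (hypothesis, asr_score) pairs, higher is better.
    Returns the pair whose ASR score plus in-domain-word coverage
    bonus is highest."""
    def total(item):
        text, score = item
        words = text.lower().split()
        coverage = sum(w in DOMAIN_WORDS for w in words) / len(words)
        return score + weight * coverage
    return max(nbest, key=total)

nbest = [
    ("pick up the cap from the table", 0.93),  # top ASR hypothesis
    ("pick up the cup from the table", 0.91),  # semantically correct
]
print(rescore(nbest)[0])  # → "pick up the cup from the table"
```

The lower-ranked hypothesis wins because three of its seven words are in-domain versus two for the top hypothesis, and the coverage bonus outweighs the small acoustic-score gap.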
Title: Collaborative Planning and Negotiation in Human-Robot Teams
Authors: Christine T. Chang, Mitchell Hebert, Bradley Hayes
DOI: 10.1145/3568294.3579978 (ACM Transactions on Human-Robot Interaction, 2023-03-13)
Abstract: Our work aims to apply iterative communication techniques to improve the functionality of human-robot teams working in space and other high-risk environments. Forms of iterative communication include progressive incorporation of human preference and otherwise latent task specifications. Our prior work found that humans would choose not to comply with robot-provided instructions and then proceed to self-justify their choices despite the risks of physical harm and blatant disregard for rules. Results clearly showed that humans working near robots are willing to sacrifice safety for efficiency. Current work aims to improve communication by iteratively incorporating human preference into optimized path planning for human-robot teams operating over large areas. Future work will explore the extent to which negotiation can be used as a mechanism for improving task planning and joint task execution for humans and robots.
Title: Interactive Policy Shaping for Human-Robot Collaboration with Transparent Matrix Overlays
Authors: Jake Brawer, Debasmita Ghose, Kate Candon, Meiying Qin, A. Roncone, Marynel Vázquez, B. Scassellati
DOI: 10.1145/3568162.3576983 (ACM Transactions on Human-Robot Interaction, 2023-03-13)
Abstract: One important aspect of effective human-robot collaborations is the ability for robots to adapt quickly to the needs of humans. While techniques like deep reinforcement learning have demonstrated success as sophisticated tools for learning robot policies, the fluency of human-robot collaborations is often limited by these policies' inability to integrate changes to a user's preferences for the task. To address these shortcomings, we propose a novel approach that can modify learned policies at execution time via symbolic if-this-then-that rules corresponding to a modular and superimposable set of low-level constraints on the robot's policy. These rules, which we call Transparent Matrix Overlays, function not only as succinct and explainable descriptions of the robot's current strategy but also as an interface by which a human collaborator can easily alter a robot's policy via verbal commands. We demonstrate the efficacy of this approach on a series of proof-of-concept cooking tasks performed in simulation and on a physical robot.
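A minimal sketch of the overlay idea as described in the abstract (the paper's actual rule representation may differ): each if-this-then-that rule becomes a mask over the policy's action probabilities, rules superimpose by applying their masks in sequence, and the result is renormalised so the shaped policy is still a distribution.

```python
# Hypothetical sketch of superimposable policy overlays. The action
# names, rule, and probabilities are invented for the example.

ACTIONS = ["chop", "stir", "season", "serve"]

def apply_overlays(policy_probs, overlays):
    """Apply (condition, mask) rules to a learned action distribution.
    A mask entry of 0.0 forbids an action; entries between 0 and 1
    attenuate it. Active masks compose multiplicatively."""
    probs = list(policy_probs)
    for condition, mask in overlays:
        if condition():  # the "if-this" part of the rule
            probs = [p * m for p, m in zip(probs, mask)]
    total = sum(probs)
    return [p / total for p in probs]

# Rule induced by a verbal command like "don't season the dish":
no_seasoning = (lambda: True, [1.0, 1.0, 0.0, 1.0])

learned = [0.2, 0.3, 0.4, 0.1]  # pretend output of a learned policy
adjusted = apply_overlays(learned, [no_seasoning])
print(ACTIONS[adjusted.index(max(adjusted))])  # → "stir"
```

Because the learned policy is untouched and each rule is a separate overlay, a verbal command can be added or retracted at execution time without any retraining, which is the property the abstract emphasizes.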