Theory of mind (ToM) refers to the human ability to infer other people's desires, beliefs, and intentions. Acquiring ToM skills is crucial for natural interaction between robots and humans. A core component of ToM is the ability to attribute false beliefs. In this paper, a collaborative robot assists a human partner who plays a trust-based card game against another human. The robot infers its partner's trust in the robot's decision system via reinforcement learning. Robot ToM here refers to the ability to implicitly anticipate the human collaborator's strategy and inject that prediction into the robot's optimal decision model for better team performance. In our experiments, the robot learns when its human partner does not trust it and adapts the recommendations in its optimal policy to keep the team effective. Interestingly, the optimal robot policy resorts to reverse psychology on its human collaborator when trust is low. This finding offers guidance for the study of trustworthy robot decision models with a human partner in the loop.
Chuang Yu, Baris Serhan, M. Romeo, A. Cangelosi. "Robot Theory of Mind with Reverse Psychology." ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580144
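The reverse-psychology effect described above can be illustrated with a toy sketch. The MDP below is invented for illustration and does not reproduce the paper's actual game, states, or rewards: a tabular learner observes a binary trust state and discovers that, when the partner distrusts the robot (and therefore does the opposite of what it recommends), recommending the worse option yields the better team outcome.

```python
import numpy as np

# Toy contextual bandit, purely illustrative (not the paper's actual setup):
# states:  0 = partner distrusts the robot, 1 = partner trusts the robot
# actions: 0 = recommend the truly better card, 1 = recommend the worse card
# A distrustful partner does the opposite of the recommendation, so under
# distrust the "worse" recommendation (reverse psychology) pays off.
R = np.array([[0.2, 0.8],   # distrust: honest advice backfires
              [0.8, 0.2]])  # trust: honest advice works

rng = np.random.default_rng(0)
Q = np.zeros((2, 2))
alpha, epsilon = 0.1, 0.1
for _ in range(5000):
    s = rng.integers(2)                       # trust state observed this round
    a = rng.integers(2) if rng.random() < epsilon else int(Q[s].argmax())
    r = R[s, a] + rng.normal(0, 0.05)         # noisy reward signal
    Q[s, a] += alpha * (r - Q[s, a])          # incremental Q update

policy = Q.argmax(axis=1)
print(policy)  # expected: [1 0] -> reverse psychology only under distrust
```

The learned policy recommends honestly when trusted and inverts its recommendation when distrusted, which is the qualitative behaviour the abstract reports.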
Marius Hoggenmueller, M. Lupetti, Willem van der Maden, Kazjon Grace
Design fixation, a phenomenon in which designers adhere to pre-existing ideas or concepts that constrain design outcomes, is particularly prevalent in human-robot interaction (HRI), for example due to collectively held and stabilised imaginations of what a robot should look like or how it should behave. In this paper, we explore how creative AI tools can help overcome design fixation and enhance creative processes in HRI design. In a four-week design exploration, we used generative text-to-image models to ideate and visualise robotic artefacts and robot sociotechnical imaginaries, exchanging results and reflections through a digital postcard format. We demonstrate the usefulness of our approach for imagining novel robot concepts, surfacing existing assumptions and robot stereotypes, and situating robotic artefacts in context. We discuss the contribution to designerly HRI practices and conclude with lessons learnt for using creative AI tools as an emerging design practice in HRI research and beyond.
Marius Hoggenmueller, M. Lupetti, Willem van der Maden, Kazjon Grace. "Creative AI for HRI Design Explorations." ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580035
When robots are used for physical therapy, programming becomes too important to be left to programmers. Developing programs for training robots is time-consuming and requires expertise across multiple engineering domains, combined with competencies in physical training, therapy, and human interaction. In this paper, we present Platypus: an end-user development environment that encompasses the design and execution of custom activities for robot-assisted physical training. The current version ships as a set of plugins for the Eclipse IDE and uses a block-based visual language to specify the robot's behaviors at a high level of abstraction; these are translated into the low-level code specifications followed by the robot. As a use case, we present its implementation on RoboTrainer, a modular, rope-based pulling device for training at home. While user tests suggest that the platform can reduce the technical obstacles to building custom training scenarios, they also revealed informational and design learning barriers.
Jose Pablo De la Rosa Gutierrez, A. S. Sørensen. "PLATYPUS." ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580102
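The block-to-code translation step can be sketched minimally. The block names and robot commands below are invented for illustration; Platypus's actual block language and RoboTrainer's command set are not described in the abstract.

```python
# Hypothetical translation of high-level training blocks into low-level
# rope-trainer commands, in the spirit of a block-based visual language.
# All block names and commands here are invented, not Platypus's real ones.
BLOCKS = {
    "warm_up":   ["set_resistance 2", "pull 10"],
    "strength":  ["set_resistance 8", "pull 5"],
    "cool_down": ["set_resistance 1", "pull 10"],
}

def translate(program):
    """Flatten a sequence of high-level blocks into low-level commands."""
    return [cmd for block in program for cmd in BLOCKS[block]]

print(translate(["warm_up", "strength"]))
# -> ['set_resistance 2', 'pull 10', 'set_resistance 8', 'pull 5']
```

The point of such a layer is that therapists compose `BLOCKS`-level units visually, while only the translator needs to know the device's command set.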
This paper presents our efforts towards a client interface for the Hello Robot Stretch. The goal is an accessible interface that enables the best possible user experience. The interface lets users control Stretch with basic commands through several modalities. To make it accessible, we crafted a simple and clear web interface so that users of differing abilities can successfully interact with Stretch. A voice-activated option further widens the range of possible interactions.
Kay N. Wojtowicz, M. E. Cabrera. "Stretch to the Client; Re-imagining Interfaces." ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580212
Ching-Chih Tsao, Cheng-Yi Tang, Yu-Wen Chang, Y. Sung, S. Chien, Szu-Yin Lin
The present study examines the influence of a robot recommender system on human impulse-buying tendency in online e-commerce contexts. We conducted an empirical user study in which different marketing strategies (limited quantity vs. discount rate) were applied to the products and intimate designs were used for the robotic agent. An electroencephalogram (EEG) headset captured users' brain activity, allowing us to investigate participants' real-time cognitive responses to the different experimental conditions (i.e., marketing plans and robotic agents). Our preliminary results reveal that marketing strategies and robot recommender applications can trigger impulsive buying behavior and elicit different cognitive activities.
Ching-Chih Tsao, Cheng-Yi Tang, Yu-Wen Chang, Y. Sung, S. Chien, Szu-Yin Lin. "The Influence of a Robot Recommender System on Impulse Buying Tendency." ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580171
Danilo Gallo, P. Bioche, J. Willamowski, T. Colombino, Shreepriya Gonzalez-Jimenez, Herve Poirier, Cécile Boulard
This paper examines the advantages and disadvantages of combining Human-Like and Machine-Like behaviors for a robot taking a shared elevator with a bystander as part of an office delivery service scenario. We present findings of an in-person wizard-of-oz experiment that builds on and implements behavior policies developed in a previous study. In this experiment, we found that the combination of Machine-Like and Human-Like behaviors was perceived as better than Human-Like behaviors alone. We discuss possible reasons and point to key capabilities that a socially competent robot should have to achieve better Human-Like behaviors in order to seamlessly negotiate a social encounter with bystanders in a shared elevator or similar scenario. We found that establishing and maintaining a shared transactional space is one of these key requirements.
Danilo Gallo, P. Bioche, J. Willamowski, T. Colombino, Shreepriya Gonzalez-Jimenez, Herve Poirier, Cécile Boulard. "Investigating the Integration of Human-Like and Machine-Like Robot Behaviors in a Shared Elevator Scenario." ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568162.3576974
S. Lemaignan, S. Cooper, Raquel Ros, L. Ferrini, Antonio Andriella, Aina Irisarri
We demonstrate how state-of-the-art open-source tools for automatic speech recognition (vosk) and dialogue management (rasa) can be integrated on a social robotic platform (PAL Robotics' ARI robot) to provide rich verbal interactions. Our open-source, ROS-based pipeline implements the ROS4HRI standard, and the demonstration presents the details of the integration in a way that will enable attendees to replicate it on their own robots. The demonstration takes place in the context of assistive robotics and robots for elderly care, two application domains with unique interaction challenges for which the ARI robot has been designed and extensively tested in real-world settings.
S. Lemaignan, S. Cooper, Raquel Ros, L. Ferrini, Antonio Andriella, Aina Irisarri. "Open-source Natural Language Processing on the PAL Robotics ARI Social Robot." ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580041
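To give a flavour of the dialogue-management side, a minimal rasa domain file might look as follows. The intents and responses are invented for an elderly-care scenario and are not taken from the actual demonstration; only the file format follows rasa 3.x conventions.

```yaml
# domain.yml -- illustrative sketch only; intents and responses are
# hypothetical, not those used on the ARI robot.
version: "3.1"
intents:
  - greet
  - ask_reminder
responses:
  utter_greet:
    - text: "Hello! How can I help you today?"
  utter_ask_reminder:
    - text: "Of course, I will remind you about your medication."
```

In the integrated pipeline, vosk would transcribe the user's speech, the transcript would be published over ROS topics following ROS4HRI, and rasa would map it to an intent and pick the robot's spoken response.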
Nguyen Tan Viet Tuyen, A. Georgescu, Irene Di Giulio, O. Çeliktutan
Humans use various nonverbal signals to communicate with their interaction partners. Previous studies have used this channel as an essential cue for developing automatic approaches to understanding, modelling, and synthesizing individual behaviours in human-human and human-robot interaction settings. In small-group interactions, on the other hand, an essential aspect of communication is the dynamic exchange of social signals among interlocutors. This paper introduces LISI-HHI (Learning to Imitate Social Human-Human Interaction), a dataset of dyadic human interactions recorded in a wide range of communication scenarios. The dataset contains multiple modalities captured simultaneously by high-accuracy sensors, including motion capture, RGB-D cameras, eye trackers, and microphones. LISI-HHI is designed to be a benchmark for HRI and multimodal learning research, for modelling intra- and interpersonal nonverbal signals in social interaction contexts and investigating how to transfer such models to social robots.
Nguyen Tan Viet Tuyen, A. Georgescu, Irene Di Giulio, O. Çeliktutan. "A Multimodal Dataset for Robot Learning to Imitate Social Human-Human Interaction." ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580080
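Working with simultaneously captured modalities usually means aligning streams recorded at different rates. The sketch below shows a generic nearest-neighbour timestamp alignment; the stream names and sampling rates are illustrative assumptions, not LISI-HHI's actual specification.

```python
import numpy as np

def align_nearest(ts_ref, ts_other):
    """For each reference timestamp, return the index of the nearest
    sample in the other (sorted) timestamp array."""
    idx = np.searchsorted(ts_other, ts_ref)
    idx = np.clip(idx, 1, len(ts_other) - 1)
    left = ts_other[idx - 1]
    right = ts_other[idx]
    idx -= ts_ref - left < right - ts_ref  # step back when the left sample is closer
    return idx

# Hypothetical clocks: motion capture at 120 Hz, eye tracker at 60 Hz.
mocap_t = np.arange(0.0, 1.0, 1 / 120)
eye_t = np.arange(0.0, 1.0, 1 / 60)
pairs = align_nearest(mocap_t, eye_t)   # one eye-tracker index per mocap frame
print(pairs[:4])
```

Each motion-capture frame is paired with the temporally closest eye-tracker sample, after which per-frame multimodal feature vectors can be assembled.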
Seong Hee Lee, Nicholas Britten, Avram Block, A. Pandya, Malte F. Jung, Paul Schmitt
Lane changes by autonomous vehicles (AVs) should not only succeed as maneuvers but also provide a positive interaction experience for other drivers. Because lane changes involve complex interactions, a set of behaviors for AV lane-change communication is difficult to define. This study investigates different movements that communicate AV lane-change intent, in order to identify which ones communicate effectively and positively affect other drivers' decisions. In a virtual reality environment, each of 14 participants was placed in the driver's seat of a car and experienced four different AV lane-change signals. Our findings suggest that expressive lane-change behaviors such as lateral movement are highly legible at the cost of high perceived aggressiveness. We propose further investigation into how tuning key parameters of lateral movement can balance legibility and aggressiveness to provide the best AV interaction experience for human drivers.
Seong Hee Lee, Nicholas Britten, Avram Block, A. Pandya, Malte F. Jung, Paul Schmitt. "Coming In! Communicating Lane Change Intent in Autonomous Vehicles." ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3580113
Wearable robots can maintain physical contact with the user and interact with them to assist in daily life. However, since most wearable robots operate at a single point on the user's body, the user must be constantly aware of their presence. This imposes a physical and mental burden on the user and discourages wearing the robot daily. One solution is for the robot to move around the user's body: when the user is not interacting with it, the robot can move to an unobtrusive position and attract less attention. This research aims to develop a wearable robot that reduces this burden, by developing an arm-movement mechanism for wearable robots and a self-localization method for autonomous movement, and that supports the user's daily life through supportive interactions.
Koji Kimura, F. Tanaka. "Development of a Wearable Robot that Moves on the Arm to Support the Daily Life of the User." ACM Transactions on Human-Robot Interaction, 2023-03-13. https://doi.org/10.1145/3568294.3579983