Robots for Learning 7 (R4L): A Look from Stakeholders' Perspective
D. Tozadore, Jauwairia Nasir, Sarah Gillet, Rianne van den Berghe, Arzu Guneysu, W. Johal
DOI: 10.1145/3568294.3579958

This year's conference theme, "HRI for all," not only raises the importance of reflecting on how to promote inclusion for every type of user, but also calls for careful consideration of the different layers of people potentially impacted by such systems. In educational setups, for instance, the users to be considered first and foremost are the learners. However, teachers, school directors, therapists, and parents form a secondary layer of users in this ecosystem. The 7th edition of R4L focuses on the issues that HRI experiments in educational environments may cause for stakeholders and on how to better bring the stakeholders' point of view into the loop. The workshop pursues this goal in a practical and dynamic way by means of: (i) lightning talks from the participants; (ii) two discussion panels with special guests, one with active researchers from academia and industry sharing their experience with and views on the inclusion of stakeholders, and another with teachers, school directors, and parents who are or were involved in HRI experiments and will share their viewpoints; and (iii) semi-structured group discussions and hands-on activities with participants and panellists to evaluate and propose guidelines for good practice in including stakeholders, especially teachers, in educational HRI activities. By gathering the viewpoints of experimenters and stakeholders and analysing them in the same workshop, we expect to identify current gaps, propose practical solutions to bridge them, and capitalise on existing synergies with the collective intelligence of the two communities.
{"title":"Robots for Learning 7 (R4L): A Look from Stakeholders' Perspective","authors":"D. Tozadore, Jauwairia Nasir, Sarah Gillet, Rianne van den Berghe, Arzu Guneysu, W. Johal","doi":"10.1145/3568294.3579958","DOIUrl":"https://doi.org/10.1145/3568294.3579958","url":null,"abstract":"This year's conference theme \"HRI for all\" not just raises the importance of reflecting on how to promote inclusion for every type of user but also calls for careful consideration of the different layers of people potentially impacted by such systems. In educational setups, for instance, the users to be considered first and foremost are the learners. However, teachers, school directors, therapists and parents also form a more secondary layer of users in this ecosystem. The 7th edition of R4L focuses on the issues that HRI experiments in educational environments may cause to stakeholders and how we could improve on bringing the stakeholders' point of view into the loop. This goal is expected to be achieved in a very practical and dynamic way by the means of: (i) lightening talks from the participants; (ii) two discussion panels with special guests: One with active researchers from academia and industry about their experience and point of view regarding the inclusion of stakeholders; another panel with teacher, school directors, and parents that are/were involved in HRI experiments and will share their viewpoint; (iii) semi-structured group discussions and hands-on activities with participants and panellists to evaluate and propose guidelines for good practices regarding how to promote the inclusion of stakeholders, especially teachers, in educational HRI activities. By acquiring the viewpoint from the experimenters and stakeholders and analysing them in the same workshop, we expect to identify current gaps, propose practical solutions to bridge these gaps, and capitalise on existing synergies with the collective intelligence of the two communities.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"43 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79979988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Language Models for Human-Robot Interaction
E. Billing, Julia Rosén, M. Lamb
DOI: 10.1145/3568294.3580040

Recent advances in large-scale language models have significantly changed the landscape of automatic dialogue systems and chatbots. We believe these models also have great potential for changing the way we interact with robots. Here, we present the first integration of the OpenAI GPT-3 language model with the Aldebaran Pepper and Nao robots. The present work transforms the text-based GPT-3 API into an open verbal dialogue with the robots. The system will be presented live during the HRI 2023 conference, and the source code of this integration is shared in the hope that it will serve the community in designing and evaluating new dialogue systems for robots.
{"title":"Language Models for Human-Robot Interaction","authors":"E. Billing, Julia Rosén, M. Lamb","doi":"10.1145/3568294.3580040","DOIUrl":"https://doi.org/10.1145/3568294.3580040","url":null,"abstract":"Recent advances in large scale language models have significantly changed the landscape of automatic dialogue systems and chatbots. We believe that these models also have a great potential for changing the way we interact with robots. Here, we present the first integration of the OpenAI GPT-3 language model for the Aldebaran Pepper and Nao robots. The present work transforms the text-based API of GPT-3 into an open verbal dialogue with the robots. The system will be presented live during the HRI2023 conference and the source code of this integration is shared with the hope that it will serve the community in designing and evaluating new dialogue systems for robots.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"25 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78011440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robot Theory of Mind with Reverse Psychology
Chuang Yu, Baris Serhan, M. Romeo, A. Cangelosi
DOI: 10.1145/3568294.3580144

Theory of mind (ToM) corresponds to the human ability to infer other people's desires, beliefs, and intentions. Acquiring ToM skills is crucial for natural interaction between robots and humans, and a core component of ToM is the ability to attribute false beliefs. In this paper, a collaborative robot tries to assist a human partner who plays a trust-based card game against another human. The robot infers its partner's trust in the robot's decision system via reinforcement learning. Robot ToM here refers to the ability to implicitly anticipate the human collaborator's strategy and inject that prediction into the robot's optimal decision model for better team performance. In our experiments, the robot learns when its human partner does not trust it and adapts the recommendations in its optimal policy accordingly to keep the team effective. Interestingly, the optimal robotic policy attempts to use reverse psychology on its human collaborator when trust is low. This finding provides guidance for the study of trustworthy robot decision models with a human partner in the loop.
{"title":"Robot Theory of Mind with Reverse Psychology","authors":"Chuang Yu, Baris Serhan, M. Romeo, A. Cangelosi","doi":"10.1145/3568294.3580144","DOIUrl":"https://doi.org/10.1145/3568294.3580144","url":null,"abstract":"Theory of mind (ToM) corresponds to the human ability to infer other people's desires, beliefs, and intentions. Acquisition of ToM skills is crucial to obtain a natural interaction between robots and humans. A core component of ToM is the ability to attribute false beliefs. In this paper, a collaborative robot tries to assist a human partner who plays a trust-based card game against another human. The robot infers its partner's trust in the robot's decision system via reinforcement learning. Robot ToM refers to the ability to implicitly anticipate the human collaborator's strategy and inject the prediction into its optimal decision model for a better team performance. In our experiments, the robot learns when its human partner does not trust the robot and consequently gives recommendations in its optimal policy to ensure the effectiveness of team performance. The interesting finding is that the optimal robotic policy attempts to use reverse psychology on its human collaborator when trust is low. This finding will provide guidance for the study of a trustworthy robot decision model with a human partner in the loop.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"32 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85125818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
L2 Vocabulary Learning Through Lexical Inferencing Stories With a Social Robot
Hoi Ki Tang, Matthijs H. J. Smakman, M. De Haas, Rianne van den Berghe
DOI: 10.1145/3568294.3580140

Vocabulary is a crucial part of second language (L2) learning. Children learn new vocabulary by forming relations in their mental lexicon with their existing knowledge. This is called lexical inferencing: using the available clues and knowledge to guess the meaning of an unknown word. This study explored the potential of second-language vocabulary acquisition through lexical inferencing in child-robot interaction. A storytelling robot read a book in Dutch to Dutch kindergartners (N = 36, aged 4-6 years) in which a few key words were translated into French (the L2), with the robot either providing additional word-explanation cues or not. The results showed that the children successfully learned the key words from the reading session with the storytelling robot, but there was no significant effect of the additional word-explanation cues. Overall, lexical inferencing seems promising as a new and different way to teach kindergartners a second language.
{"title":"L2 Vocabulary Learning Through Lexical Inferencing Stories With a Social Robot","authors":"Hoi Ki Tang, Matthijs H. J. Smakman, M. De Haas, Rianne van den Berghe","doi":"10.1145/3568294.3580140","DOIUrl":"https://doi.org/10.1145/3568294.3580140","url":null,"abstract":"Vocabulary is a crucial part of second language (L2) learning. Children learn new vocabulary by forming mental lexicon relations with their existing knowledge. This is called lexical inferencing: using the available clues and knowledge to guess the meaning of the unknown word. This study explored the potential of second language vocabulary acquisition through lexical inferencing in child-robot interaction. A storytelling robot read a book to Dutch kindergartners (N = 36, aged 4-6 years) in Dutch in which a few key words were translated into French (L2), and with a robot providing additional word explanation cues or not. The results showed that the children learned the key words successfully as a result of the reading session with the storytelling robot, but that there was no significant effect of additional word explanation cues by the robot. Overall, it seems promising that lexical inferencing can act as a new and different way to teach kindergartners a second language.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"53 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84558779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robots in Real Life: Putting HRI to Work
A. Thomaz
DOI: 10.1145/3568162.3578810

This talk will focus on the unique challenges of deploying a mobile manipulation robot into an environment where it works closely with people on a daily basis. Diligent Robotics' first product, Moxi, is a mobile manipulation service robot at work in hospitals today, assisting nurses and other front-line staff with materials management tasks. The talk will dive into the computational complexity of developing a mobile manipulator with social intelligence. Dr. Thomaz will focus on how human-robot interaction theories and algorithms translate into the real world, and on the impact on the functionality and perception of robots that perform delivery tasks in a busy human environment. The talk will include many examples and data from the field, with commentary and discussion of both the expected and unexpected hard problems in building robots that operate 24/7 as reliable teammates.

BIO: Andrea Thomaz is the CEO and co-founder of Diligent Robotics. Her accolades include recognition as a Kavli Fellow by the National Academy of Sciences, by the US President's Council of Advisors on Science and Technology (PCAST), on the MIT Technology Review TR35 list, and as a featured TEDx keynote speaker on social robotics. Dr. Thomaz has received numerous research grants, including the NSF CAREER award and the Office of Naval Research Young Investigator Award. She has published in the areas of artificial intelligence, robotics, and human-robot interaction. Her research aims to computationally model mechanisms of human social learning and interaction in order to build social robots and other machines that are intuitive for everyday people to teach. She earned her Ph.D. from MIT and her B.S. in Electrical and Computer Engineering from UT Austin, and was a robotics professor at UT Austin and Georgia Tech (where she directed the Socially Intelligent Machines Lab). She co-founded Diligent Robotics in 2018 to pursue her vision of socially intelligent robot assistants that collaborate with humans by doing their chores, so humans have more time for the work they care most about.
{"title":"Robots in Real Life: Putting HRI to Work","authors":"A. Thomaz","doi":"10.1145/3568162.3578810","DOIUrl":"https://doi.org/10.1145/3568162.3578810","url":null,"abstract":"This talk will be focused on the unique challenges in deploying a mobile manipulation robot into an environment where the robot is working closely with people on a daily basis. Diligent Robotics' first product, Moxi, is a mobile manipulation service robot that is at work in hospitals today assisting nurses and other front line staff with materials management tasks. This talk will dive into the computational complexity of developing a mobile manipulator with social intelligence. Dr. Thomaz will focus on how human-robot interaction theories and algorithms translate into the real-world and the impact on functionality and perception of robots that perform delivery tasks in a busy human environment. The talk will include many examples and data from the field, with commentary and discussion around both the expected and unexpected hard problems in building robots operating 24/7 as reliable teammates. BIO: Andrea Thomaz is the CEO and Co-Founder of Diligent Robotics. Her accolades include being recognized by the National Academy of Science as a Kavli Fellow, the US President's Council of Advisors on Science and Tech (PCAST), MIT Technology Review TR35 list, and TEDx as a featured keynote speaker on social robotics. Dr. Thomaz has received numerous research grants including the NSF CAREER award and the Office of Naval Research Young Investigator Award. Andrea has published in the areas of Artificial Intelligence, Robotics, and Human-Robot Interaction. Her research aims to computationally model mechanisms of human social learning and interaction, in order to build social robots and other machines that are intuitive for everyday people to teach. She earned her Ph.D. from MIT and B.S. in Electrical and Computer Engineering from UT Austin, and was a Robotics Professor at UT Austin and Georgia Tech (where she directed the Socially Intelligent Machines Lab). Andrea co-founded Diligent Robotics in 2018, to pursue her vision of creating socially intelligent robot assistants that collaborate with humans by doing their chores so humans can have more time for the work they care most about.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"4 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87684195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a Wearable Robot that Moves on the Arm to Support the Daily Life of the User
Koji Kimura, F. Tanaka
DOI: 10.1145/3568294.3579983

Wearable robots can maintain physical contact with the user and interact with them to assist in daily life. However, since most wearable robots operate at a single point on the user's body, the user must be constantly aware of their presence. This imposes both a physical and a mental burden on the user and discourages wearing the robot daily. One solution to this problem is for the robot to move around the user's body: when the user is not interacting with it, the robot can move to an unobtrusive position and attract less attention. This research aims to reduce that burden by developing an arm-movement mechanism for wearable robots and a self-localization method for autonomous movement, so that the robot can support the user's daily life through supportive interactions.
{"title":"Development of a Wearable Robot that Moves on the Arm to Support the Daily Life of the User","authors":"Koji Kimura, F. Tanaka","doi":"10.1145/3568294.3579983","DOIUrl":"https://doi.org/10.1145/3568294.3579983","url":null,"abstract":"Wearable robots can maintain physical contact with the user and interact with them to assist in daily life. However, since most wearable robots operate at a single point on the user's body, the user must be constantly aware of their presence. This imposes a burden on the user, both physically and mentally, and prevents them from wearing the robot daily. One solution to this problem is for the robot to move around the user's body. When the user does not interact with the robot, it can move to an unobtrusive position and attract less attention from the user. This research aims to develop a wearable robot that reduces the burden by developing an arm movement mechanism for wearable robots and a self-localization method for autonomous movement and helps the user's daily life by providing supportive interactions.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"35 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89217416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Internet of Robotic Cat Toys to Deepen Bond and Elevate Mood
I. X. Han, Sarah Witzman
DOI: 10.1145/3568294.3580183

Pets provide important mental support for human beings. Recent advances in robotics and HRI have led to research and commercial products offering smart solutions to enrich indoor pets' lives. However, most of these products focus on satisfying pets' basic needs, such as feeding and litter cleaning, rather than their mental well-being. In this paper, we present the internet of robotic cat toys, in which a group of connected robotic agents plays with our furry friends. Through three design iterations, we demonstrate an affordable and flexible design of clip-on robotic agents that transform a static household into an interactive wonderland for pets.
{"title":"Internet of Robotic Cat Toys to Deepen Bond and Elevate Mood","authors":"I. X. Han, Sarah Witzman","doi":"10.1145/3568294.3580183","DOIUrl":"https://doi.org/10.1145/3568294.3580183","url":null,"abstract":"Pets provide important mental support for human beings. Recent advancements in robotics and HRI have led to research and commercial products providing smart solutions to enrich indoor pets' lives. However, most of these products focus on satisfying pets' basic needs, such as feeding and litter cleaning, rather than their mental well-being. In this paper, we present the internet of robotic cat toys, where a group of robotic agents connects to play with our furry friends. Through three iterations, we demonstrate an affordable and flexible design of clip-on robotic agents to transform a static household into an interactive wonderland for pets.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"11 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87881234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Gesture Recognition with a Flow-based Model for Human Robot Interaction
Lanmiao Liu, Chuang Yu, Siyang Song, Zhidong Su, A. Tapus
DOI: 10.1145/3568294.3580145

Human skeleton-based gesture classification plays an important role in social robotics. Learning the variety of human skeleton-based gestures can help a robot interact continuously and appropriately in natural human-robot interaction (HRI). In this paper, we propose a flow-based model to classify human gesture actions from skeletal data. Instead of inferring new human skeleton actions from noisy data with a retrained model, our end-to-end model can expand the set of recognizable gesture labels from noisy data without retraining. The model initially targets five human gesture actions (come on, right up, left up, hug, and a noise-random action). The accuracy of our online gesture recognition system matches that of the offline system, and both attain 100% accuracy on the first four actions. Our method is also efficient at inferring new gesture actions without retraining, achieving about 90% accuracy on the noise-random action. The gesture recognition system has been applied to drive the robot's reaction to human gestures, which promises to facilitate natural human-robot interaction.
{"title":"Human Gesture Recognition with a Flow-based Model for Human Robot Interaction","authors":"Lanmiao Liu, Chuang Yu, Siyang Song, Zhidong Su, A. Tapus","doi":"10.1145/3568294.3580145","DOIUrl":"https://doi.org/10.1145/3568294.3580145","url":null,"abstract":"Human skeleton-based gesture classification plays a dominant role in social robotics. Learning the variety of human skeleton-based gestures can help the robot to continuously interact in an appropriate manner in a natural human-robot interaction (HRI). In this paper, we proposed a Flow-based model to classify human gesture actions with skeletal data. Instead of inferring new human skeleton actions from noisy data using a retrained model, our end-to-end model can expand the diversity of labels for gesture recognition from noisy data without retraining the model. At first, our model focuses on detecting five human gesture actions (i.e., come on, right up, left up, hug, and noise-random action). The accuracy of our online human gesture recognition system is as well as the offline one. Meanwhile, both attain 100% accuracy among the first four actions. Our proposed method is more efficient for inference of new human gesture action without retraining, which acquires about 90% accuracy for noise-random action. The gesture recognition system has been applied to the robot's reaction toward the human gesture, which is promising to facilitate a natural human-robot interaction.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"28 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87611736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sawarimōto
Aidan Edward Fox-Tierney, Kurima Sakai, Masahiro Shiomi, Takashi Minato, Hiroshi Ishiguro
DOI: 10.1145/3568294.3580131

Although robot-to-human touch experiments have been performed, they have all relied on direct tele-operation with a remote controller, pre-programmed hand motions, or wearable trackers on the human. This report introduces a project that aims to visually track and touch a person's face with a humanoid android, using a single RGB-D camera for 3D pose estimation. There are three major components: 3D pose estimation, a touch sensor for the android's hand, and a controller that combines the pose and sensor information to direct the android's actions. The pose estimation is working and has been released as open source. A touch-sensor glove has been built, and we have begun work on an under-skin version. Finally, we have tested android face-touch control. These tests revealed many hurdles still to be overcome, but also how convincing the experience already is, pointing to this technology's potential to elicit strong emotional responses.
{"title":"Sawarimōto","authors":"Aidan Edward Fox-Tierney, Kurima Sakai, Masahiro Shiomi, Takashi Minato, Hiroshi Ishiguro","doi":"10.1145/3568294.3580131","DOIUrl":"https://doi.org/10.1145/3568294.3580131","url":null,"abstract":"Although robot-to-human touch experiments have been performed, they have all used direct tele-operation with a remote controller, pre-programmed hand motions, or tracked the human with wearable trackers. This report introduces a project that aims to visually track and touch a person's face with a humanoid android using a single RGB-D camera for 3D pose estimation. There are three major components: 3D pose estimation, a touch sensor for the android's hand, and a controller that combines the pose and sensor information to direct the android's actions. The pose estimation is working and released under as open-source. A touch sensor glove has been built and we have begun work on creating an under-skin version. Finally, we have tested android face-touch control. These tests showed many hurdles that will need to be overcome, but also how convincing the experience already is for the potential of this technology to elicit strong emotional responses.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"102 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74262536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can a Robot's Hand Bias Human Attention?
Giulia Scorza Azzarà, Joshua Zonca, F. Rea, Joo-Hyun Song, A. Sciutti
DOI: 10.1145/3568294.3580074

Previous studies have revealed that humans prioritize attention to the space near their hands (the so-called near-hand effect). This effect can also occur for a human partner's hand, but only after a shared physical joint action. Hence, in human dyads, interaction leads to a shared body representation that may influence basic attentional mechanisms. Our project investigates whether a collaborative interaction with a robot might similarly influence attention. To this aim, we designed an experiment to assess whether the mere presence of a robot with an anthropomorphic hand could bias the human partner's attention, replicating a classical psychological paradigm for measuring this attentional bias (i.e., the near-hand effect) with an added robotic condition. Preliminary results reproduced the near-hand effect when participants performed the task with their own hand near the screen, with shorter reaction times on the same side as the hand. By contrast, we found no effect for the robot's hand in the absence of prior collaborative interaction with the robot, in line with studies involving human partners.
{"title":"Can a Robot's Hand Bias Human Attention?","authors":"Giulia Scorza Azzarà, Joshua Zonca, F. Rea, Joo-Hyun Song, A. Sciutti","doi":"10.1145/3568294.3580074","DOIUrl":"https://doi.org/10.1145/3568294.3580074","url":null,"abstract":"Previous studies have revealed that humans prioritize attention to the space near their hands (the so-called near-hand effect). This effect may also occur towards a human partner's hand, but only after sharing a physical joint action. Hence, in human dyads, interaction leads to a shared body representation that may influence basic attentional mechanisms. Our project investigates whether a collaborative interaction with a robot might similarly influence attention. To this aim, we designed an experiment to assess whether the mere presence of a robot with an anthropomorphic hand could bias the human partner's attention. We replicated a classical psychological paradigm to measure this attentional bias (i.e., the near-hand effect) by adding a robotic condition. Preliminary results found the near-hand effect when performing the task with the self-hand near the screen, leading to shorter reaction times on the same side of the hand. On the contrary, we found no effect on the robot's hand in the absence of previous collaborative interaction with the robot, in line with studies involving human partners.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"12 1","pages":""},"PeriodicalIF":5.1,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74615342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}