Nathan Tsoi, Rachel Sterneck, Xuan Zhao, Marynel Vázquez
In Human-Robot Interaction, researchers typically utilize in-person studies to collect subjective perceptions of a robot. In addition, videos of interactions and interactive simulations (where participants control an avatar that interacts with a robot in a virtual world) have been used to quickly collect human feedback at scale. How would human perceptions of robots compare between these methodologies? To investigate this question, we conducted a 2x2 between-subjects study (N=160), which evaluated the effect of the interaction environment (Real vs. Simulated environment) and participants’ interactivity during human-robot encounters (Interactive participation vs. Video observations) on perceptions about a robot (competence, discomfort, social presentation, and social information processing) for the task of navigating in concert with people. We also studied participants’ workload across the experimental conditions. Our results revealed a significant difference in the perceptions of the robot between the real environment and the simulated environment. Furthermore, our results showed differences in human perceptions when people watched a video of an encounter versus taking part in the encounter. Finally, we found that simulated interactions and videos of the simulated encounter resulted in a higher workload than real-world encounters and videos thereof. Our results suggest that findings from video and simulation methodologies may not always translate to real-world human-robot interactions. In order to allow practitioners to leverage learnings from this study and future researchers to expand our knowledge in this area, we provide guidelines for weighing the tradeoffs between different methodologies.
{"title":"Influence of Simulation and Interactivity on Human Perceptions of a Robot During Navigation Tasks","authors":"Nathan Tsoi, Rachel Sterneck, Xuan Zhao, Marynel Vázquez","doi":"10.1145/3675784","DOIUrl":"https://doi.org/10.1145/3675784","url":null,"abstract":"In Human-Robot Interaction, researchers typically utilize in-person studies to collect subjective perceptions of a robot. In addition, videos of interactions and interactive simulations (where participants control an avatar that interacts with a robot in a virtual world) have been used to quickly collect human feedback at scale. How would human perceptions of robots compare between these methodologies? To investigate this question, we conducted a 2x2 between-subjects study (N=160), which evaluated the effect of the interaction environment (Real vs. Simulated environment) and participants’ interactivity during human-robot encounters (Interactive participation vs. Video observations) on perceptions about a robot (competence, discomfort, social presentation, and social information processing) for the task of navigating in concert with people. We also studied participants’ workload across the experimental conditions. Our results revealed a significant difference in the perceptions of the robot between the real environment and the simulated environment. Furthermore, our results showed differences in human perceptions when people watched a video of an encounter versus taking part in the encounter. Finally, we found that simulated interactions and videos of the simulated encounter resulted in a higher workload than real-world encounters and videos thereof. Our results suggest that findings from video and simulation methodologies may not always translate to real-world human-robot interactions. In order to allow practitioners to leverage learnings from this study and future researchers to expand our knowledge in this area, we provide guidelines for weighing the tradeoffs between different methodologies.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":4.2,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141640693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trust is crucial for technological acceptance, continued usage, and teamwork. However, human-robot trust, and human-machine trust more generally, suffer from terminological disagreement and construct proliferation. By comparing, mapping, and analyzing well-constructed trust survey instruments, this work uncovers a consensus structure of trust in human-machine interaction. To do so, we identify the most frequently cited and best-validated human-machine and human-robot trust questionnaires as well as the best-established factors that form the dimensions and antecedents of such trust. To reduce both confusion and construct proliferation, we provide a detailed mapping of terminology between questionnaires. Furthermore, we perform a meta-analysis of the regression models which emerged from the experiments that employed multi-factorial survey instruments. Based on this meta-analysis, we provide the most complete, experimentally validated model of human-machine and human-robot trust to date. This convergent model establishes an integrated framework for future research. It determines the current boundaries of trust measurement and where further investigation and validation are necessary. We close by discussing how to choose an appropriate trust survey instrument and how to design for trust. By identifying the internal workings of trust, a more complete basis for measuring trust is developed that is widely applicable.
{"title":"Converging Measures and an Emergent Model: A Meta-Analysis of Human-Machine Trust Questionnaires","authors":"Yosef Razin, K. Feigh","doi":"10.1145/3677614","DOIUrl":"https://doi.org/10.1145/3677614","url":null,"abstract":"Trust is crucial for technological acceptance, continued usage, and teamwork. However, human-robot trust, and human-machine trust more generally, suffer from terminological disagreement and construct proliferation. By comparing, mapping, and analyzing well-constructed trust survey instruments, this work uncovers a consensus structure of trust in human-machine interaction. To do so, we identify the most frequently cited and best-validated human-machine and human-robot trust questionnaires as well as the best-established factors that form the dimensions and antecedents of such trust. To reduce both confusion and construct proliferation, we provide a detailed mapping of terminology between questionnaires. Furthermore, we perform a meta-analysis of the regression models which emerged from the experiments that employed multi-factorial survey instruments. Based on this meta-analysis, we provide the most complete, experimentally validated model of human-machine and human-robot trust to date. This convergent model establishes an integrated framework for future research. It determines the current boundaries of trust measurement and where further investigation and validation are necessary. We close by discussing how to choose an appropriate trust survey instrument and how to design for trust. By identifying the internal workings of trust, a more complete basis for measuring trust is developed that is widely applicable.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":4.2,"publicationDate":"2024-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141651115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clare Lohrmann, Maria Stull, A. Roncone, Bradley Hayes
For humans to effectively work with robots, they must be able to predict the actions and behaviors of their robot teammates rather than merely react to them. While there are existing techniques enabling robots to adapt to human behavior, there is a demonstrated need for methods that explicitly improve humans’ ability to understand and predict robot behavior at multi-task timescales. In this work, we propose a method leveraging the innate human propensity for pattern recognition in order to improve team dynamics in human-robot teams and to make robots more predictable to the humans that work with them. Patterns are a cognitive tool that humans use and rely on often, and the human brain is in many ways primed for pattern recognition and usage. We propose Pattern-Aware Convention-setting for Teaming (PACT), an entropy-based algorithm that identifies and imposes appropriate patterns over a robot’s planner or policy over long time horizons. These patterns are autonomously generated and chosen via an algorithmic process that considers human-perceptible features and characteristics derived from the tasks to be completed, and as such, produces behavior that is easier for humans to identify and predict. Our evaluation shows that PACT contributes to significant improvements in team dynamics and teammate perceptions of the robot, as compared to robots that utilize traditionally ‘optimal’ plans and robots utilizing unoptimized patterns.
{"title":"Generating Pattern-Based Conventions for Predictable Planning in Human-Robot Collaboration","authors":"Clare Lohrmann, Maria Stull, A. Roncone, Bradley Hayes","doi":"10.1145/3659061","DOIUrl":"https://doi.org/10.1145/3659061","url":null,"abstract":"For humans to effectively work with robots, they must be able to predict the actions and behaviors of their robot teammates rather than merely react to them. While there are existing techniques enabling robots to adapt to human behavior, there is a demonstrated need for methods that explicitly improve humans’ ability to understand and predict robot behavior at multi-task timescales. In this work, we propose a method leveraging the innate human propensity for pattern recognition in order to improve team dynamics in human-robot teams and to make robots more predictable to the humans that work with them. Patterns are a cognitive tool that humans use and rely on often, and the human brain is in many ways primed for pattern recognition and usage. We propose Pattern-Aware Convention-setting for Teaming (PACT), an entropy-based algorithm that identifies and imposes appropriate patterns over a robot’s planner or policy over long time horizons. These patterns are autonomously generated and chosen via an algorithmic process that considers human-perceptible features and characteristics derived from the tasks to be completed, and as such, produces behavior that is easier for humans to identify and predict. Our evaluation shows that PACT contributes to significant improvements in team dynamics and teammate perceptions of the robot, as compared to robots that utilize traditionally ‘optimal’ plans and robots utilizing unoptimized patterns.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":4.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141693071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Seth Freeman, Shaden Moss, John L. Salmon, Marc D. Killpack
Despite the existence of robots that can lift heavy loads, robots that can help people move heavy objects are not readily available. This paper makes progress towards effective human-robot co-manipulation by studying 30 human-human dyads that collaboratively manipulated an object weighing 27 kg without being co-located (i.e., participants were at either end of the extended object). Participants maneuvered around different obstacles with the object while exhibiting, at any given time, one of four modi (the manner or objective with which a team moves an object together). Using force and motion signals to classify modus or behavior was the primary objective of this work. Our results showed that two of the originally proposed modi were very similar, such that one could effectively be removed while still spanning the space of common behaviors during our co-manipulation tasks. The three modi used in classification were "quickly," "smoothly," and "avoiding obstacles." Using a deep convolutional neural network (CNN), we classified the three modi with up to 89% accuracy on a validation set. The capability to detect or classify modus during co-manipulation has the potential to greatly improve human-robot performance by helping to define appropriate robot behavior or controller parameters depending on the objective or modus of the team.
{"title":"Classification of Co-manipulation Modus with Human-Human Teams for Future Application to Human-Robot Systems","authors":"Seth Freeman, Shaden Moss, John L. Salmon, Marc D. Killpack","doi":"10.1145/3659059","DOIUrl":"https://doi.org/10.1145/3659059","url":null,"abstract":"Despite the existence of robots that can lift heavy loads, robots that can help people move heavy objects are not readily available. This paper makes progress towards effective human-robot co-manipulation by studying 30 human-human dyads that collaboratively manipulated an object weighing 27 kg without being co-located (i.e. participants were at either end of the extended object). Participants maneuvered around different obstacles with the object while exhibiting one of four modi–the manner or objective with which a team moves an object together–at any given time. Using force and motion signals to classify modus or behavior was the primary objective of this work. Our results showed that two of the originally proposed modi were very similar, such that one could effectively be removed while still spanning the space of common behaviors during our co-manipulation tasks. The three modi used in classification were quickly, smoothly and avoiding obstacles. Using a deep convolutional neural network (CNN), we classified three modi with up to 89% accuracy from a validation set. The capability to detect or classify modus during co-manipulation has the potential to greatly improve human-robot performance by helping to define appropriate robot behavior or controller parameters depending on the objective or modus of the team.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141349806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Elizabeth J. Carter, Peerat Vichivanives, Ruijia Xing, Laura M. Hiatt, Stephanie Rosenthal
When robots have multiple tasks to perform, they must determine the order in which to complete them. Interleaving tasks is efficient for the robot trying to finish its to-do list, but it may be less satisfying for a human whose request was delayed in favor of schedule efficiency. Following online research that examined delays with various motivations [4, 27], we created two in-person studies in which participants’ tasks were impacted by the robot’s other tasks. In the first, participants either requested a task for the robot to complete on their behalf or watched the robot performing tasks for other people. We measured how their opinions changed depending on whether their task’s completion was delayed due to another participant’s task or they were observing without a task of their own. In the second, participants had a robot walk them to an office and became delayed as the robot detoured to another location. We measured how opinions of the robot changed depending on who requested the detour task and the length of the detour. Overall, participants positively viewed task interleaving as long as the delay and inconvenience imposed by someone else’s task were small and the task was well-justified. Also, observers often had lower opinions of the robot than participants who requested tasks, highlighting a concern for online research.
{"title":"Perceptions of a Robot that Interleaves Tasks for Multiple Users","authors":"Elizabeth J. Carter, Peerat Vichivanives, Ruijia Xing, Laura M. Hiatt, Stephanie Rosenthal","doi":"10.1145/3663486","DOIUrl":"https://doi.org/10.1145/3663486","url":null,"abstract":"When robots have multiple tasks to perform, they must determine the order in which to complete them. Interleaving tasks is efficient for the robot trying to finish its to-do list, but it may be less satisfying for a human whose request was delayed in favor of schedule efficiency. Following online research that examined delays with various motivations [4, 27], we created two in-person studies in which participants’ tasks were impacted by the robot’s other tasks. In the first, participants either requested a task for the robot to complete on their behalf or watched the robot performing tasks for other people. We measured how their opinions changed depending on whether their task’s completion was delayed due to another participant’s task or they were observing without a task of their own. In the second, participants had a robot walk them to an office and became delayed as the robot detoured to another location. We measured how opinions of the robot changed depending on who requested the detour task and the length of the detour. Overall, participants positively viewed task interleaving as long as the delay and inconvenience imposed by someone else’s task were small and the task was well-justified. Also, observers often had lower opinions of the robot than participants who requested tasks, highlighting a concern for online research.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141103333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ali Ayub, Zachary De Francesco, Jainish Mehta, Khaled Yaakoub Agha, Patrick Holthaus, C. Nehaniv, Kerstin Dautenhahn
Continual learning (CL) has emerged as an important avenue of research in recent years, at the intersection of Machine Learning (ML) and Human-Robot Interaction (HRI), to allow robots to continually learn in their environments over long-term interactions with humans. Most research in continual learning, however, has been robot-centered, focused on developing continual learning algorithms that can quickly learn new information from systematically collected static datasets. In this paper, we take a human-centered approach to continual learning, to understand how humans interact with, teach, and perceive continual learning robots over the long term, and whether there are variations in their teaching styles. We developed a socially guided continual learning system that integrates CL models for object recognition with a mobile manipulator robot and allows humans to directly teach and test the robot in real time over multiple sessions. We conducted an in-person study with 60 participants who interacted with the continual learning robot in 300 sessions (5 sessions per participant). In this between-participant study, we used three different CL models deployed on a mobile manipulator robot. An extensive qualitative and quantitative analysis of the data collected in the study shows that there is significant variation among the teaching styles of individual users, indicating the need for personalized adaptation to their distinct teaching styles. Our analysis shows that the constrained experimental setups that have been widely used to test most CL models are not adequate, as real users interact with and teach continual learning robots in a variety of ways. Finally, our analysis shows that although users have concerns about continual learning robots being deployed in our daily lives, they mention that with further improvements continual learning robots could assist older adults and people with disabilities in their homes.
{"title":"A Human-Centered View of Continual Learning: Understanding Interactions, Teaching Patterns, and Perceptions of Human Users Towards a Continual Learning Robot in Repeated Interactions","authors":"Ali Ayub, Zachary De Francesco, Jainish Mehta, Khaled Yaakoub Agha, Patrick Holthaus, C. Nehaniv, Kerstin Dautenhahn","doi":"10.1145/3659110","DOIUrl":"https://doi.org/10.1145/3659110","url":null,"abstract":"\u0000 Continual learning (CL) has emerged as an important avenue of research in recent years, at the intersection of Machine Learning (ML) and Human-Robot Interaction (HRI), to allow robots to continually learn in their environments over long-term interactions with humans. Most research in continual learning, however, has been\u0000 robot-centered\u0000 to develop continual learning algorithms that can quickly learn new information on systematically collected static datasets. In this paper, we take a\u0000 human-centered\u0000 approach to continual learning, to understand how humans interact with, teach, and perceive continual learning robots over the long term, and if there are variations in their teaching styles. We developed a socially guided continual learning system that integrates CL models for object recognition with a mobile manipulator robot and allows humans to directly teach and test the robot in real time over multiple sessions. We conducted an in-person study with 60 participants who interacted with the continual learning robot in 300 sessions with 5 sessions per participant. In this between-participant study, we used three different CL models deployed on a mobile manipulator robot. An extensive qualitative and quantitative analysis of the data collected in the study shows that there is significant variation among the teaching styles of individual users indicating the need for personalized adaptation to their distinct teaching styles. Our analysis shows that the constrained experimental setups that have been widely used to test most CL models are not adequate, as real users interact with and teach continual learning robots in a variety of ways. Finally, our analysis shows that although users have concerns about continual learning robots being deployed in our daily lives, they mention that with further improvements continual learning robots could assist older adults and people with disabilities in their homes.\u0000","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141107929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While there is evidence that human-like characteristics in robots could benefit child-robot interaction in many ways, open questions remain about the appropriate degree of human likeness that should be implemented in robots to avoid adverse effects on acceptance and trust. This study investigates how human likeness, in both appearance and behavior, influences children’s social and competency trust in a robot. We first designed two versions of the Furhat robot with visual and auditory human-like and machine-like cues, validated in two online studies. Second, we created verbal behaviors in which human likeness was manipulated through the robot’s responsiveness in lexical matching. Then, 52 children (7-10 years old) played a storytelling game in a between-subjects experimental design. Results show that the conditions did not affect subjective trust measures. However, objective measures showed that human likeness affects trust differently: low human-like appearance enhanced social trust, whereas high human-like behavior improved children’s acceptance of the robot’s task-related suggestions. This work provides empirical evidence on manipulating facial features and behavior to control human likeness in a robot with a highly human-like morphology. We discuss the implications and importance of balancing human likeness in robot design and its impact on task performance, as it directly affects trust-building with children.
{"title":"Balancing Human Likeness in Social Robots: Impact on Children’s Lexical Alignment and Self-disclosure for Trust Assessment","authors":"Natalia Calvo-Barajas, Anastasia Akkuzu, Ginevra Castellano","doi":"10.1145/3659062","DOIUrl":"https://doi.org/10.1145/3659062","url":null,"abstract":"While there is evidence that human-like characteristics in robots could benefit child-robot interaction in many ways, open questions remain about the appropriate degree of human likeness that should be implemented in robots to avoid adverse effects on acceptance and trust. This study investigates how human likeness, appearance and behavior, influence children’s social and competency trust in a robot. We first designed two versions of the Furhat robot with visual and auditory human-like and machine-like cues validated in two online studies. Secondly, we created verbal behaviors where human likeness was manipulated as responsiveness regarding the robot’s lexical matching. Then, 52 children (7-10 years old) played a storytelling game in a between-subjects experimental design. Results show that the conditions did not affect subjective trust measures. However, objective measures showed that human likeness affects trust differently. While low human-like appearance enhanced social trust, high human-like behavior improved children’s acceptance of the robot’s task-related suggestions. This work provides empirical evidence on manipulating facial features and behavior to control human likeness in a robot with a highly human-like morphology. We discuss the implications and importance of balancing human likeness in robot design and its impacts on task performance, as it directly impacts trust-building with children.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141106639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Little is known about children's long-term acceptance of social robots, whether different types of users exist, and what reasons children have for not using a robot. Moreover, the literature is inconclusive about how the measurement of children's robot acceptance (i.e., self-report or observational) affects the findings. We relied on both self-report and observational data from a six-wave panel study among 321 children aged eight to nine, who were given a Cozmo robot to play with at home over the course of eight weeks. Children's robot acceptance decreased over time, with the strongest drop after two to four weeks. Children rarely rejected the robot (i.e., they seldom stopped using it before actually adopting it). Rather, they discontinued its use after initial adoption or alternated between using and not using the robot. Competition from other toys and a lack of motivation to play with Cozmo emerged as the strongest reasons for not using the robot. Self-report measures captured patterns of robot acceptance well but seemed suboptimal for precise assessments of robot use.
{"title":"Children's Acceptance of a Domestic Social Robot: How It Evolves over Time","authors":"Chiara de Jong, J. Peter, R. Kühne, Àlex Barco","doi":"10.1145/3638066","DOIUrl":"https://doi.org/10.1145/3638066","url":null,"abstract":"Little is known about children's long-term acceptance of social robots; whether different types of users exist; and what reasons children have not to use a robot. Moreover, the literature is inconclusive about how the measurement of children's robot acceptance (i.e., self-report or observational) affects the findings. We relied on both self-report and observational data from a six-wave panel study among 321 children aged eight to nine, who were given a Cozmo robot to play with at home over the course of eight weeks. Children's robot acceptance decreased over time, with the strongest drop after two to four weeks. Children rarely rejected the robot (i.e., they did not stop using it already prior to actual adoption). They rather discontinued its use after initial adoption or alternated between using and not using the robot. The competition of other toys and lacking motivation to play with Cozmo emerged as strongest reasons for not using the robot. Self-report measures captured patterns of robot acceptance well but seemed suboptimal for precise assessments of robot use.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140454963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sarah Gillet, Marynel Vázquez, Sean Andrist, Iolanda Leite, Sarah Sebo
Work in Human-Robot Interaction (HRI) has investigated interactions between one human and one robot as well as human-robot group interactions. Yet, the field lacks a clear definition and understanding of the influence a robot can exert on interactions between other group members (e.g., human-to-human). In this paper, we define Interaction-Shaping Robotics (ISR), a subfield of HRI that investigates robots that influence the behaviors and attitudes exchanged between two (or more) other agents. We highlight key factors of Interaction-Shaping Robots that include the role of the robot, the robot-shaping outcome, the form of robot influence, the type of robot communication, and the timeline of the robot’s influence. We also describe three distinct structures of human-robot groups to highlight the potential of ISR in different group compositions and discuss targets for a robot’s interaction-shaping behavior. Finally, we propose areas of opportunity and challenges for future research in ISR.
{"title":"Interaction-Shaping Robotics: Robots that Influence Interactions between Other Agents","authors":"Sarah Gillet, Marynel Vázquez, Sean Andrist, Iolanda Leite, Sarah Sebo","doi":"10.1145/3643803","DOIUrl":"https://doi.org/10.1145/3643803","url":null,"abstract":"Work in Human-Robot Interaction (HRI) has investigated interactions between one human and one robot as well as human-robot group interactions. Yet, the field lacks a clear definition and understanding of the influence a robot can exert on interactions between other group members (e.g., human-to-human). In this paper, we define Interaction-Shaping Robotics (ISR), a subfield of HRI that investigates robots that influence the behaviors and attitudes exchanged between two (or more) other agents. We highlight key factors of Interaction-Shaping Robots that include the role of the robot, the robot-shaping outcome, the form of robot influence, the type of robot communication, and the timeline of the robot’s influence. We also describe three distinct structures of human-robot groups to highlight the potential of ISR in different group compositions and discuss targets for a robot’s interaction-shaping behavior. Finally, we propose areas of opportunity and challenges for future research in ISR.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139683479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Teleoperation enables remote control of complex robot systems, making it possible to apply human expertise from a distance. However, teleoperation interfaces can be difficult to use because limited camera feedback makes it hard to contextualize information about robot motion in the workspace. It is therefore important to study how best to assist the operator in ways that reduce interface complexity and the effort required for teleoperation. Techniques that assist the operator during freeform teleoperation include: 1) perception augmentation, such as augmented reality visual cues and additional camera angles, which increases the information available to the operator; and 2) action augmentation, such as assistive autonomy and control augmentation, which reduces the effort required of the operator while teleoperating. In this paper we investigate: 1) which aspects of dexterous tele-manipulation require assistance; 2) the impact of perception and action augmentation on improving teleoperation performance; and 3) what factors impact the usage of assistance and how to tailor these interfaces based on the operators’ needs and characteristics. The findings from this user study and the resulting post-study surveys will help identify task-based and user-preferred perception and action augmentation features for teleoperation assistance.
{"title":"Perception and Action Augmentation for Teleoperation Assistance in Freeform Tele-manipulation","authors":"Tsung-Chi Lin, Achyuthan Unni Krishnan, Zhi Li","doi":"10.1145/3643804","DOIUrl":"https://doi.org/10.1145/3643804","url":null,"abstract":"Teleoperation enables controlling complex robot systems remotely, providing the ability to impart human expertise from a distance. However, these interfaces can be complicated to use as it is difficult to contextualize information about robot motion in the workspace from the limited camera feedback. Thus, it is required to study the best manner in which assistance can be provided to the operator that reduces interface complexity and effort required for teleoperation. Some techniques that provide assistance to the operator while freeform teleoperating include: 1) perception augmentation, like augmented reality visual cues and additional camera angles, increasing the information available to the operator; 2) action augmentation, like assistive autonomy and control augmentation, optimized to reduce the effort required by the operator while teleoperating. In this paper we investigate: 1) which aspects of dexterous tele-manipulation require assistance; 2) the impact of perception and action augmentation in improving teleoperation performance; 3) what factors impact the usage of assistance and how to tailor these interfaces based on the operators’ needs and characteristics. The findings from this user study and resulting post-study surveys will help identify task based and user preferred perception and augmentation features for teleoperation assistance.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140479040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}