Matthew V. Law, Nnamdi Nwagwu, Amritansh Kwatra, Seo-young Lee, Daniel M. DiAngelis, Naifang Yu, Gonzalo Gonzalez-Pumariega, Amit Rajesh, Guy Hoffman
We investigate what it might look like for a robot to work with a human on a needfinding design task using an affinity diagram. While some recent projects have examined how human-robot teams might explore solutions to design problems, human-robot collaboration in the sensemaking aspects of the design process has not been studied. Designers use affinity diagrams to make sense of unstructured information by clustering paper notes on a work surface. To explore human-robot collaboration on a sensemaking design activity, we developed HIRO, an autonomous robot that constructs affinity diagrams with humans. In a within-user study, 56 participants affinity-diagrammed themes to characterize needs in quotes taken from real-world user data, once alone, and once with HIRO. Users spent more time on the task with HIRO than alone, without strong evidence for corresponding effects on cognitive load. In addition, a majority of participants said they preferred to work with HIRO. From post-interaction interviews, we identified eight themes leading to four guidelines for robots that collaborate with humans on sensemaking design tasks: (1) account for the robot’s speed; (2) pursue mutual understanding rather than just correctness; (3) identify opportunities for constructive disagreements; (4) use other modes of communication in addition to physical materials.
{"title":"Affinity Diagramming with a Robot","authors":"Matthew V. Law, Nnamdi Nwagwu, Amritansh Kwatra, Seo-young Lee, Daniel M. DiAngelis, Naifang Yu, Gonzalo Gonzalez-Pumariega, Amit Rajesh, Guy Hoffman","doi":"10.1145/3641514","DOIUrl":"https://doi.org/10.1145/3641514","url":null,"abstract":"We investigate what it might look like for a robot to work with a human on a needfinding design task using an affinity diagram. While some recent projects have examined how human-robot teams might explore solutions to design problems, human-robot collaboration in the sensemaking aspects of the design process has not been studied. Designers use affinity diagrams to make sense of unstructured information by clustering paper notes on a work surface. To explore human-robot collaboration on a sensemaking design activity, we developed HIRO, an autonomous robot that constructs affinity diagrams with humans. In a within-user study, 56 participants affinity-diagrammed themes to characterize needs in quotes taken from real-world user data, once alone, and once with HIRO. Users spent more time on the task with HIRO than alone, without strong evidence for corresponding effects on cognitive load. In addition, a majority of participants said they preferred to work with HIRO. 
From post-interaction interviews, we identified eight themes leading to four guidelines for robots that collaborate with humans on sensemaking design tasks: (1) account for the robot’s speed; (2) pursue mutual understanding rather than just correctness; (3) identify opportunities for constructive disagreements; (4) use other modes of communication in addition to physical materials.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140476126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tom Williams, Cynthia Matuszek, Ross Mead, Nick Depalma
The proliferation of Large Language Models (LLMs) presents both a critical design challenge and a remarkable opportunity for the field of Human–Robot Interaction (HRI). While the direct deployment of LLMs on interactive robots may be unsuitable for reasons of ethics, safety, and control, LLMs might nevertheless provide a promising baseline technique for many elements of HRI. Specifically, in this article, we argue for the use of LLMs as Scarecrows: “brainless,” straw-man black-box modules integrated into robot architectures for the purpose of quickly enabling full-pipeline solutions, much like the use of “Wizard of Oz” (WoZ) and other human-in-the-loop approaches. We explicitly acknowledge that these Scarecrows, rather than providing a satisfying or scientifically complete solution, incorporate a form of the wisdom of the crowd and, in at least some cases, will ultimately need to be replaced or supplemented by a robust and theoretically motivated solution. We provide examples of how Scarecrows could be used in language-capable robot architectures as useful placeholders and suggest initial reporting guidelines for authors, mirroring existing guidelines for the use and reporting of WoZ techniques.
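The Scarecrow idea is architectural: a deliberately "brainless" stand-in fills one slot of a robot architecture behind the same interface that a future, theoretically motivated component would implement, so the full pipeline can run end to end. A minimal sketch of that pattern (the module name, the reference-resolution task, and the keyword-matching stub are all hypothetical illustrations, not taken from the article):

```python
from typing import Protocol

class ReferenceResolver(Protocol):
    """Architectural slot: resolve a referring expression to an object ID.
    A principled model, an LLM call, or a Scarecrow can all fill this slot."""
    def resolve(self, utterance: str, scene: dict[str, str]) -> str: ...

class ScarecrowResolver:
    """Brainless placeholder: naive keyword matching, no model of reference."""
    def resolve(self, utterance: str, scene: dict[str, str]) -> str:
        for obj_id, description in scene.items():
            # Match only if every word of the object description appears verbatim.
            if all(word in utterance.lower() for word in description.split()):
                return obj_id
        return "unknown"

scene = {"obj1": "red cup", "obj2": "blue plate"}
resolver: ReferenceResolver = ScarecrowResolver()
print(resolver.resolve("hand me the red cup", scene))  # obj1
```

Because downstream components depend only on the `ReferenceResolver` interface, the Scarecrow can later be swapped out without touching the rest of the pipeline.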
{"title":"Scarecrows in Oz: The Use of Large Language Models in HRI","authors":"Tom Williams, Cynthia Matuszek, Ross Mead, Nick Depalma","doi":"10.1145/3606261","DOIUrl":"https://doi.org/10.1145/3606261","url":null,"abstract":"\u0000 The proliferation of Large Language Models (LLMs) presents both a critical design challenge and a remarkable opportunity for the field of Human–Robot Interaction (HRI). While the direct deployment of LLMs on interactive robots may be unsuitable for reasons of ethics, safety, and control, LLMs might nevertheless provide a promising baseline technique for many elements of HRI. Specifically, in this article, we argue for the use of LLMs as\u0000 Scarecrows\u0000 : “brainless,” straw-man black-box modules integrated into robot architectures for the purpose of quickly enabling full-pipeline solutions, much like the use of “Wizard of Oz” (WoZ) and other human-in-the-loop approaches. We explicitly acknowledge that these Scarecrows, rather than providing a satisfying or scientifically complete solution, incorporate a form of the wisdom of the crowd and, in at least some cases, will ultimately need to be replaced or supplemented by a robust and theoretically motivated solution. 
We provide examples of how Scarecrows could be used in language-capable robot architectures as useful placeholders and suggest initial reporting guidelines for authors, mirroring existing guidelines for the use and reporting of WoZ techniques.\u0000","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140485011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Trafton, J. McCurry, Kevin Zish, Chelsea R. Frazier
The perception of agency in human-robot interaction has become increasingly important as robots become more capable and more social. There are, however, no accepted or consistent methods of measuring perceived agency; researchers currently use a wide range of techniques and surveys. We provide a definition of perceived agency, and from that definition we create and psychometrically validate a scale to measure it. We then perform a scale evaluation by comparing the PA scale constructed in Experiment 1 to two other existing scales. We find that our PA and PA-R (Perceived Agency-Rasch) scales provide a better fit to empirical data than existing measures. We also perform scale validation by showing that our scale exhibits the hypothesized relationship between perceived agency and morality.
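Psychometric validation of a scale typically starts with an internal-consistency check such as Cronbach's alpha; the article's PA-R scale additionally relies on Rasch modelling, which this generic sketch does not reproduce, and the response data below are made up purely for illustration:

```python
def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha: items[i][p] = score of participant p on scale item i."""
    k = len(items)            # number of items

    def var(xs: list[float]) -> float:
        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    n = len(items[0])
    totals = [sum(items[i][p] for i in range(k)) for p in range(n)]
    item_var_sum = sum(var(row) for row in items)
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Hypothetical responses: three items that move together across four participants,
# so internal consistency should be high.
items = [[4, 5, 2, 3], [4, 4, 2, 3], [5, 5, 1, 3]]
print(round(cronbach_alpha(items), 2))  # 0.95
```

High alpha alone does not establish validity, which is why the paper pairs such checks with model fit comparisons and a hypothesized external relationship (here, with morality).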
{"title":"The Perception of Agency","authors":"J. Trafton, J. McCurry, Kevin Zish, Chelsea R. Frazier","doi":"10.1145/3640011","DOIUrl":"https://doi.org/10.1145/3640011","url":null,"abstract":"The perception of agency in human robot interaction has become increasingly important as robots become more capable and more social. There are, however, no accepted or consistent methods of measuring perceived agency; researchers currently use a wide range of techniques and surveys. We provide a definition of perceived agency and from that definition we create and psychometrically validate a scale to measure perceived agency. We then perform a scale evaluation by comparing the PA scale constructed in experiment 1 to two other existing scales. We find that our PA and PA-R (Perceived Agency - Rasch) scales provide a better fit to empirical data than existing measures. We also perform scale validation by showing that our scale shows the hypothesized relationship between perceived agency and morality.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140487944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jongmin M. Lee, Temesgen Gebrekristos, Dalia De Santis, Mahdieh Nejati-Javaremi, Deepak Gopinath, Biraj Parikh, F. Mussa-Ivaldi, B. Argall
When individuals are paralyzed from injury or damage to the brain, upper body movement and function can be compromised. While the use of body motions to interface with machines has been shown to be an effective noninvasive strategy to provide movement assistance and to promote physical rehabilitation, learning to use such interfaces to control complex machines is not well understood. In a five-session study, we demonstrate that a subset of an uninjured population is able to learn and improve their ability to use a high-dimensional Body-Machine Interface (BoMI) to control a robotic arm. We use a sensor net of four inertial measurement units, placed bilaterally on the upper body, and a BoMI with the capacity to directly control a robot in six dimensions. We consider whether the way in which the robot control space is mapped from human inputs has any impact on learning. Our results suggest that the space of robot control does play a role in the evolution of human learning: specifically, though robot control in joint space appears to be more intuitive initially, control in task space is found to have a greater capacity for longer-term improvement and learning. Our results further suggest that there is an inverse relationship between control dimension couplings and task performance.
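A BoMI of this kind reduces high-dimensional body signals to a low-dimensional robot command; joint-space and task-space control differ only in how that command is interpreted (joint velocities vs. an end-effector twist). A purely illustrative sketch, not the authors' implementation (the decoder values, signal dimensions, and posture vector are hypothetical):

```python
# Four IMUs each contributing two orientation angles give an 8-D body
# vector h; a calibrated 6x8 linear decoder W maps it to a 6-D robot
# command u = W h.
n_body, n_robot = 8, 6

# Hypothetical decoder learned during a calibration phase (row i weights
# the body signals contributing to robot dimension i).
W = [[0.1 * ((i + j) % 3 - 1) for j in range(n_body)] for i in range(n_robot)]

# Current body posture signal from the sensor net (made-up values).
h = [0.5, -0.2, 0.1, 0.0, 0.3, -0.1, 0.2, 0.4]

# u = W h: plain matrix-vector product.
u = [sum(w * x for w, x in zip(row, h)) for row in W]
print(len(u))  # 6
```

Under this framing, coupling between control dimensions corresponds to rows of `W` sharing the same body signals, which the abstract's last finding suggests degrades task performance.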
{"title":"Learning to Control Complex Robots Using High-Dimensional Body-Machine Interfaces","authors":"Jongmin M. Lee, Temesgen Gebrekristos, Dalia De Santis, Mahdieh Nejati-Javaremi, Deepak Gopinath, Biraj Parikh, F. Mussa-Ivaldi, B. Argall","doi":"10.1145/3630264","DOIUrl":"https://doi.org/10.1145/3630264","url":null,"abstract":"When individuals are paralyzed from injury or damage to the brain, upper body movement and function can be compromised. While the use of body motions to interface with machines has shown to be an effective noninvasive strategy to provide movement assistance and to promote physical rehabilitation, learning to use such interfaces to control complex machines is not well understood. In a five session study, we demonstrate that a subset of an uninjured population is able to learn and improve their ability to use a high-dimensional Body-Machine Interface (BoMI), to control a robotic arm. We use a sensor net of four inertial measurement units, placed bilaterally on the upper body, and a BoMI with the capacity to directly control a robot in six dimensions. We consider whether the way in which the robot control space is mapped from human inputs has any impact on learning. Our results suggest that the space of robot control does play a role in the evolution of human learning: specifically, though robot control in joint space appears to be more intuitive initially, control in task space is found to have a greater capacity for longer-term improvement and learning. 
Our results further suggest that there is an inverse relationship between control dimension couplings and task performance.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139527856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The field of end-user robot programming seeks to develop methods that empower non-expert programmers to task and modify robot operations. In doing so, researchers may enhance robot flexibility and broaden the scope of robot deployments into the real world. We introduce PRogramAR (Programming Robots using Augmented Reality), a novel end-user robot programming system that combines the intuitive visual feedback of augmented reality (AR) with the simplistic and responsive paradigm of trigger-action programming (TAP) to facilitate human-robot collaboration. Through PRogramAR, users are able to rapidly author task rules and desired reactive robot behaviors, while specifying task constraints and observing program feedback contextualized directly in the real world. PRogramAR provides feedback by simulating the robot’s intended behavior and providing instant evaluation of TAP rule executability to help end-users better understand and debug their programs during development. In a system validation, 17 end-users ranging from ages 18 to 83 used PRogramAR to program a robot to assist them in completing three collaborative tasks. Our results demonstrate how merging the benefits of AR and TAP using elements from prior robot programming research into a single novel system can successfully enhance the robot programming process for non-expert users.
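Trigger-action programming pairs a condition over world state with a robot behavior, and PRogramAR's instant executability feedback flags rules the robot cannot carry out. A minimal sketch of that rule structure (the rule format, action names, and check below are hypothetical illustrations, not the system's actual API):

```python
from dataclasses import dataclass
from typing import Callable

State = dict[str, bool]

@dataclass
class TapRule:
    """One trigger-action rule: when the trigger holds, run the action."""
    name: str
    trigger: Callable[[State], bool]
    action: str  # symbolic robot action

AVAILABLE_ACTIONS = {"pick_up_part", "hand_over_tool", "stop"}

def executable(rule: TapRule) -> bool:
    """Instant feedback: can this rule's action actually run on the robot?"""
    return rule.action in AVAILABLE_ACTIONS

def fire(rules: list[TapRule], state: State) -> list[str]:
    """Return the actions of all executable rules whose trigger holds."""
    return [r.action for r in rules if executable(r) and r.trigger(state)]

rules = [
    TapRule("assist", lambda s: s["human_reaching"], "hand_over_tool"),
    TapRule("bad",    lambda s: True,                "fly"),  # flagged: unavailable
]
print(fire(rules, {"human_reaching": True}))  # ['hand_over_tool']
```

In the AR interface this evaluation is surfaced visually, by simulating the intended behavior in place rather than returning a list, but the underlying rule semantics are the same.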
{"title":"PRogramAR: Augmented Reality End-User Robot Programming","authors":"Bryce Ikeda, D. Szafir","doi":"10.1145/3640008","DOIUrl":"https://doi.org/10.1145/3640008","url":null,"abstract":"The field of end-user robot programming seeks to develop methods that empower non-expert programmers to task and modify robot operations. In doing so, researchers may enhance robot flexibility and broaden the scope of robot deployments into the real world. We introduce PRogramAR (Programming Robots using Augmented Reality), a novel end-user robot programming system that combines the intuitive visual feedback of augmented reality (AR) with the simplistic and responsive paradigm of trigger-action programming (TAP) to facilitate human-robot collaboration. Through PRogramAR, users are able to rapidly author task rules and desired reactive robot behaviors, while specifying task constraints and observing program feedback contextualized directly in the real world. PRogramAR provides feedback by simulating the robot’s intended behavior and providing instant evaluation of TAP rule executability to help end-users better understand and debug their programs during development. In a system validation, 17 end-users ranging from ages 18 to 83 used PRogramAR to program a robot to assist them in completing three collaborative tasks. 
Our results demonstrate how merging the benefits of AR and TAP using elements from prior robot programming research into a single novel system can successfully enhance the robot programming process for non-expert users.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139531889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Within human-robot interaction (HRI), research on robot personality has largely drawn on trait theories and models, such as the Big Five and OCEAN. We argue that reliance on trait models in HRI has led to a limited understanding of robot personality as a question of stable traits that can be designed into a robot plus how humans with certain traits respond to particular robots. However, trait-based approaches exist alongside other ways of understanding personality including approaches focusing on more dynamic constructs such as adaptations and narratives. We suggest that a deep understanding of robot personality is only possible through a cross-disciplinary effort to integrate these different approaches. We propose an Integrative Framework for Robot Personality Research (IF), wherein robot personality is defined not as a property of the robot, nor of the human perceiving the robot, but as a complex assemblage of components at the intersection of robot design and human factors. With the IF, we aim to establish a common theoretical grounding for robot personality research that incorporates personality constructs beyond traits and treats these constructs as complementary and fundamentally interdependent.
{"title":"Towards an Integrative Framework for Robot Personality Research","authors":"Anna Dobrosovestnova, Tim Reinboth, Astrid Weiss","doi":"10.1145/3640010","DOIUrl":"https://doi.org/10.1145/3640010","url":null,"abstract":"Within human-robot interaction (HRI), research on robot personality has largely drawn on trait theories and models, such as the Big Five and OCEAN. We argue that reliance on trait models in HRI has led to a limited understanding of robot personality as a question of stable traits that can be designed into a robot plus how humans with certain traits respond to particular robots. However, trait-based approaches exist alongside other ways of understanding personality including approaches focusing on more dynamic constructs such as adaptations and narratives. We suggest that a deep understanding of robot personality is only possible through a cross-disciplinary effort to integrate these different approaches. We propose an Integrative Framework for Robot Personality Research (IF), wherein robot personality is defined not as a property of the robot, nor of the human perceiving the robot, but as a complex assemblage of components at the intersection of robot design and human factors. 
With the IF, we aim to establish a common theoretical grounding for robot personality research that incorporates personality constructs beyond traits and treats these constructs as complementary and fundamentally interdependent.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139439125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Telepresence technology creates the opportunity for people who have traditionally been left out of the workforce to work remotely. In the service industry, a pool of novice remote workers could teleoperate robots to perform short work stints to fill in the gaps left by the dwindling workforce. A hurdle is that consistently talking appropriately and politely imposes a severe mental burden on such novice operators, and the quality of the service may suffer. In this study, we propose a teleoperation support system that lets novice remote workers talk freely, without considering appropriateness and politeness, while maintaining the quality of the service. The proposed system exploits intent recognition to transform casual utterances into predefined appropriate and polite utterances. We conducted a within-subject user study where 23 participants played the role of novice remote operators controlling a guardsman robot in charge of monitoring customers’ behaviors. We measured the workload with and without the proposed support system using NASA Task Load Index questionnaires. The workload was significantly lower (p < .001) when using the proposed support system (M = 46.07, SD = 14.36) than when not using it (M = 62.74, SD = 12.70). The effect size was large (Cohen’s d = 1.23).
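The reported effect size can be reproduced from the abstract's own numbers. Using a pooled-SD formula for Cohen's d (one common convention; the paper may have used a paired-design variant, but this one recovers the reported value):

```python
import math

# Reported NASA-TLX workload scores:
# with the support system:    M = 46.07, SD = 14.36
# without the support system: M = 62.74, SD = 12.70
m_with, sd_with = 46.07, 14.36
m_without, sd_without = 62.74, 12.70

# Pooled standard deviation (equal group sizes).
sd_pooled = math.sqrt((sd_with**2 + sd_without**2) / 2)

# Cohen's d: standardized mean difference.
d = (m_without - m_with) / sd_pooled
print(round(d, 2))  # 1.23, matching the reported effect size
```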
{"title":"Effortless Polite Telepresence using Intention Recognition","authors":"Morteza Daneshmand, Jani Even, Takayuki Kanda","doi":"10.1145/3636433","DOIUrl":"https://doi.org/10.1145/3636433","url":null,"abstract":"Telepresence technology creates the opportunity for people that were traditionally left out of the workforce to work remotely. In the service industry, a pool of novice remote workers could teleoperate robots to perform short work stints to fill in the gaps left by the dwindling workforce. A hurdle is that consistently talking appropriately and politely imposes a severe mental burden on such novice operators and the quality of the service may suffer. In this study, we propose a teleoperation support system that lets novice remote workers talk freely without considering appropriateness and politeness while maintaining the quality of the service. The proposed system exploits intent recognition to transform casual utterances into predefined appropriate and polite utterances. We conducted a within subject user study where 23 participants played the role of novice remote operators controlling a guardsman robot in charge of monitoring customers’ behaviors. We measured the workload with and without using the proposed support system using NASA task load index questionnaires. The workload was significantly lower (p <.001) when using the proposed support system (M = 46.07, SD = 14.36) than when not using it (M = 62.74, SD = 12.70). 
The effect size was large (Cohen’s d = 1.23).","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138976430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
F. Robinson, Hannah R. M. Pelikan, Katsumi Watanabe, Luisa Damiano, Oliver Bown, Mari Velonaki
{"title":"Introduction to the Special Issue on Sound in Human-Robot Interaction","authors":"F. Robinson, Hannah R. M. Pelikan, Katsumi Watanabe, Luisa Damiano, Oliver Bown, Mari Velonaki","doi":"10.1145/3632185","DOIUrl":"https://doi.org/10.1145/3632185","url":null,"abstract":"","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139006017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. Reinmund, P. Salvini, Lars Kunze, Marina Jirotka, A. Winfield
Physically embodied artificial agents, or robots, are being incorporated into various practical and social contexts, from self-driving cars for personal transportation to assistive robotics in social care. To enable these systems to better perform under changing conditions, designers have proposed to endow robots with varying degrees of autonomous capabilities and the capacity to move between them – an approach known as variable autonomy. Researchers are beginning to understand how robots with fixed autonomous capabilities influence a person’s sense of autonomy, social relations, and, as a result, notions of responsibility; however, addressing these topics in scenarios where robot autonomy dynamically changes is underexplored. To establish a research agenda for variable autonomy that emphasises the responsible design and use of robotics, we conduct a developmental review. Based on a sample of 42 papers, we provide a synthesised definition of variable autonomy to connect currently disjointed research efforts, detail research approaches in variable autonomy to strengthen the empirical basis for subsequent work, characterise the dimensions of variable autonomy, and present design guidelines for variable autonomy research based on responsible robotics.
{"title":"Variable Autonomy Through Responsible Robotics: Design Guidelines and Research Agenda","authors":"T. Reinmund, P. Salvini, Lars Kunze, Marina Jirotka, A. Winfield","doi":"10.1145/3636432","DOIUrl":"https://doi.org/10.1145/3636432","url":null,"abstract":"Physically embodied artificial agents, or robots, are being incorporated into various practical and social contexts, from self-driving cars for personal transportation to assistive robotics in social care. To enable these systems to better perform under changing conditions, designers have proposed to endow robots with varying degrees of autonomous capabilities and the capacity to move between them – an approach known as variable autonomy. Researchers are beginning to understand how robots with fixed autonomous capabilities influence a person’s sense of autonomy, social relations, and, as a result, notions of responsibility; however, addressing these topics in scenarios where robot autonomy dynamically changes is underexplored. To establish a research agenda for variable autonomy that emphasises the responsible design and use of robotics, we conduct a developmental review. 
Based on a sample of 42 papers, we provide a synthesised definition of variable autonomy to connect currently disjointed research efforts, detail research approaches in variable autonomy to strengthen the empirical basis for subsequent work, characterise the dimensions of variable autonomy, and present design guidelines for variable autonomy research based on responsible robotics.","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138590343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-01. Epub Date: 2023-09-28. DOI: 10.1145/3611656.
Verónica Ahumada-Newhart, Margaret Schneider, Laurel D Riek
Tele-operated collaborative robots are used by many children for academic learning. However, as child-directed play is important for social-emotional learning, it is also important to understand how robots can facilitate play. In this article, we present findings from an analysis of a national, multi-year case study, where we explore how 53 children in grades K-12 (n = 53) used robots for self-directed play activities. The contributions of this article are as follows. First, we present empirical data on novel play scenarios that remote children created using their tele-operated robots. These play scenarios emerged in five categories of play: physical, verbal, visual, extracurricular, and wished-for play. Second, we identify two unique themes that emerged from the data: robot-mediated play as a foundational support of general friendships, and as a foundational support of self-expression and identity. Third, our work found that robot-mediated play provided benefits similar to in-person play. Findings from our work will inform novel robot and HRI design for tele-operated and social robots that facilitate self-directed play. Findings will also inform future interdisciplinary studies on robot-mediated play.
{"title":"The Power of Robot-mediated Play: Forming Friendships and Expressing Identity.","authors":"Verónica Ahumada-Newhart, Margaret Schneider, Laurel D Riek","doi":"10.1145/3611656","DOIUrl":"10.1145/3611656","url":null,"abstract":"<p><p>Tele-operated collaborative robots are used by many children for academic learning. However, as child-directed play is important for social-emotional learning, it is also important to understand how robots can facilitate play. In this article, we present findings from an analysis of a national, multi-year case study, where we explore how 53 children in grades K-12 (<i>n</i> = 53) used robots for self-directed play activities. The contributions of this article are as follows. First, we present empirical data on novel play scenarios that remote children created using their tele-operated robots. These play scenarios emerged in five categories of play: physical, verbal, visual, extracurricular, and wished-for play. Second, we identify two unique themes that emerged from the data-robot-mediated play as a foundational support of general friendships and as a foundational support of self-expression and identity. Third, our work found that robot-mediated play provided benefits similar to in-person play. Findings from our work will inform novel robot and HRI design for tele-operated and social robots that facilitate self-directed play. 
Findings will also inform future interdisciplinary studies on robot-mediated play.</p>","PeriodicalId":36515,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":4.2,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10593410/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50158967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}