Towards an Integrative Framework for Robot Personality Research
Anna Dobrosovestnova, Tim Reinboth, Astrid Weiss
DOI: 10.1145/3640010 · Published 2024-01-10

Within human-robot interaction (HRI), research on robot personality has largely drawn on trait theories and models such as the Big Five (OCEAN). We argue that reliance on trait models in HRI has led to a limited understanding of robot personality as a question of stable traits that can be designed into a robot, plus how humans with certain traits respond to particular robots. However, trait-based approaches exist alongside other ways of understanding personality, including approaches focusing on more dynamic constructs such as adaptations and narratives. We suggest that a deep understanding of robot personality is only possible through a cross-disciplinary effort to integrate these different approaches. We propose an Integrative Framework for Robot Personality Research (IF), wherein robot personality is defined not as a property of the robot, nor of the human perceiving the robot, but as a complex assemblage of components at the intersection of robot design and human factors. With the IF, we aim to establish a common theoretical grounding for robot personality research that incorporates personality constructs beyond traits and treats these constructs as complementary and fundamentally interdependent.
Effortless Polite Telepresence using Intention Recognition
Morteza Daneshmand, Jani Even, Takayuki Kanda
DOI: 10.1145/3636433 · Published 2023-12-13

Telepresence technology creates the opportunity for people who were traditionally left out of the workforce to work remotely. In the service industry, a pool of novice remote workers could teleoperate robots to perform short work stints to fill the gaps left by a dwindling workforce. One hurdle is that consistently talking appropriately and politely imposes a severe mental burden on such novice operators, and the quality of the service may suffer. In this study, we propose a teleoperation support system that lets novice remote workers talk freely, without considering appropriateness and politeness, while maintaining the quality of the service. The proposed system exploits intent recognition to transform casual utterances into predefined appropriate and polite utterances. We conducted a within-subjects user study in which 23 participants played the role of novice remote operators controlling a guardsman robot in charge of monitoring customers' behaviors. We measured the workload with and without the proposed support system using NASA Task Load Index questionnaires. The workload was significantly lower (p < .001) when using the proposed support system (M = 46.07, SD = 14.36) than when not using it (M = 62.74, SD = 12.70). The effect size was large (Cohen's d = 1.23).
Introduction to the Special Issue on Sound in Human-Robot Interaction
F. Robinson, Hannah R. M. Pelikan, Katsumi Watanabe, Luisa Damiano, Oliver Bown, Mari Velonaki
DOI: 10.1145/3632185 · Published 2023-12-13 · pp. 1-5
Variable Autonomy Through Responsible Robotics: Design Guidelines and Research Agenda
T. Reinmund, P. Salvini, Lars Kunze, Marina Jirotka, A. Winfield
DOI: 10.1145/3636432 · Published 2023-12-07

Physically embodied artificial agents, or robots, are being incorporated into various practical and social contexts, from self-driving cars for personal transportation to assistive robotics in social care. To enable these systems to perform better under changing conditions, designers have proposed to endow robots with varying degrees of autonomous capabilities and the capacity to move between them, an approach known as variable autonomy. Researchers are beginning to understand how robots with fixed autonomous capabilities influence a person's sense of autonomy, social relations, and, as a result, notions of responsibility; however, these topics remain underexplored in scenarios where robot autonomy changes dynamically. To establish a research agenda for variable autonomy that emphasises the responsible design and use of robotics, we conduct a developmental review. Based on a sample of 42 papers, we provide a synthesised definition of variable autonomy to connect currently disjointed research efforts, detail research approaches in variable autonomy to strengthen the empirical basis for subsequent work, characterise the dimensions of variable autonomy, and present design guidelines for variable autonomy research based on responsible robotics.
The Power of Robot-mediated Play: Forming Friendships and Expressing Identity
Verónica Ahumada-Newhart, Margaret Schneider, Laurel D. Riek
DOI: 10.1145/3611656 · Published 2023-12-01 (epub 2023-09-28) · Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10593410/pdf/

Tele-operated collaborative robots are used by many children for academic learning. However, as child-directed play is important for social-emotional learning, it is also important to understand how robots can facilitate play. In this article, we present findings from an analysis of a national, multi-year case study, in which we explore how 53 children in grades K-12 used robots for self-directed play activities. The contributions of this article are as follows. First, we present empirical data on novel play scenarios that remote children created using their tele-operated robots. These play scenarios emerged in five categories of play: physical, verbal, visual, extracurricular, and wished-for play. Second, we identify two unique themes that emerged from the data: robot-mediated play as a foundational support of general friendships, and as a foundational support of self-expression and identity. Third, our work found that robot-mediated play provided benefits similar to in-person play. Findings from our work will inform novel robot and HRI design for tele-operated and social robots that facilitate self-directed play. Findings will also inform future interdisciplinary studies on robot-mediated play.
Brain-Behavior Relationships of Trust in Shared Space Human-Robot Collaboration
Sarah K. Hopko, Yinsu Zhang, Aakash Yadav, Prabhakar R. Pagilla, Ranjana K. Mehta
DOI: 10.1145/3632149 · Published 2023-11-10

Trust in human-robot collaboration is an essential consideration that relates to operator performance, utilization, and experience. While the importance of trust is understood, the state-of-the-art methods for studying trust in automation, such as surveys, drastically limit the types of insights that can be made. Improvements in measurement techniques can provide a granular understanding of influencers such as robot reliability and of their subsequent impact on human behavior and experience. This investigation quantifies the brain-behavior relationships associated with trust manipulation in shared-space human-robot collaboration (HRC) to advance the scope of metrics for studying trust. Thirty-eight participants, balanced by sex, were recruited to perform an assembly task with a collaborative robot under reliable and unreliable robot conditions. Brain imaging, psychological and behavioral eye-tracking, quantitative and qualitative performance, and subjective experiences were monitored. Results identify the specific information-processing and cognitive strategies behind the observed trust-related behaviors, which were found to be sex-specific. Covert measurements of trust can reveal insights that humans cannot consciously report, shedding light on processes systematically overlooked by subjective measures. Our findings connect a trust influencer (robot reliability) to upstream cognition and downstream human behavior, and are enabled by the utilization of granular metrics.
Which Voice for which Robot? Designing Robot Voices that Indicate Robot Size
Kerstin Fischer, Oliver Niebuhr
DOI: 10.1145/3632124 · Published 2023-11-08

Many social robots will have the capacity to interact via speech in the future, and thus they will need a voice. However, so far it is unclear how we can create voices that fit their robotic speakers. In this paper, we explore how robot voices can be designed to fit the size of the respective robot. We therefore investigate the acoustic correlates of human voices and body size. In Study I, we analyzed 163 speech samples in connection with their speakers' body size and body height. Our results show that specific acoustic parameters are significantly associated with body height, and to a lesser degree with body weight, but that different features are relevant for female and male voices. In Study II, we then tested, for female and male voices, to what extent the identified acoustic features can be used to create voices that are reliably associated with the size of robots. The results show that the identified acoustic features provide reliable clues to whether a large or a small robot is speaking.
Assistance in Teleoperation of Redundant Robots through Predictive Joint Maneuvering
Connor Brooks, Wyatt Rees, Daniel Szafir
DOI: 10.1145/3630265 · Published 2023-11-03

In teleoperation of redundant robotic manipulators, translating an operator's end-effector motion command to joint space can be a tool for maintaining feasible and precise robot motion. By optimizing redundancy resolution, the control system can ensure that the end effector maintains maneuverability by avoiding joint limits and kinematic singularities. In autonomous motion planning, this optimization can be performed over an entire trajectory, improving performance over local optimization. However, teleoperation involves a human in the loop who determines the trajectory to be executed through a dynamic sequence of motion commands. We present two systems, PrediKCT and PrediKCS, that utilize a predictive model of operator commands to accomplish this redundancy resolution in a manner that considers expected future motion during teleoperation. Using a probabilistic model of operator commands allows optimization over an expected trajectory of future motion rather than consideration of local motion alone. Evaluation through a user study demonstrates improved control outcomes from this predictive redundancy resolution over minimum-joint-velocity solutions and inverse kinematics-based motion controllers.
Robots' "Woohoo" and "Argh" can Enhance Users' Emotional and Social Perceptions: An Exploratory Study on Non-Lexical Vocalizations and Non-Linguistic Sounds
Xiaozhen Liu, Jiayuan Dong, Myounghoon Jeon
DOI: 10.1145/3626185 · Published 2023-10-17

As robots have become more pervasive in our everyday life, the social aspects of robots have attracted researchers' attention. Because emotions play a crucial role in social interactions, research has been conducted on conveying emotions via speech. Our study sought to investigate the synchronization of multimodal interaction in human-robot interaction (HRI). We conducted a within-subjects exploratory study with 40 participants to investigate the effects of non-speech sounds (natural voice, synthesized voice, musical sound, and no sound), combined with emotional body gestures of an anthropomorphic robot (Pepper), on user perception of basic emotions (anger, fear, happiness, sadness, and surprise). While listening to a fairytale with the participant, the humanoid robot responded to the story with recorded emotional non-speech sounds and gestures. Participants showed significantly higher emotion recognition accuracy for the natural voice than for the other sounds. The confusion matrix showed that happiness and sadness had the highest emotion recognition accuracy, in line with previous research. The natural voice also induced higher trust, naturalness, and preference compared to the other sounds. Interestingly, the musical sound mostly received lower perception ratings, even compared to the no-sound condition. Results are discussed with design guidelines for emotional cues from social robots and future research directions.
Experimental Assessment of Human-Robot Teaming for Multi-Step Remote Manipulation with Expert Operators
Claudia Pérez-D'Arpino, Rebecca P. Khurshid, Julie A. Shah
DOI: 10.1145/3618258 · Published 2023-10-17

Remote robot manipulation with human control enables applications where safety and environmental constraints are adverse to humans (e.g., underwater, space robotics, and disaster response) or where the complexity of the task demands human-level cognition and dexterity (e.g., robotic surgery and manufacturing). These systems typically use direct teleoperation at the motion level and are usually limited to low-DOF arms and 2D perception. Improving dexterity and situational awareness demands new interaction and planning workflows. We explore the use of human-robot teaming through teleautonomy with assisted planning for remote control of a dual-arm dexterous robot for multi-step manipulation, and conduct a within-subjects experimental assessment (n = 12 expert users) comparing it with direct teleoperation via an imitation controller with 2D and 3D perception, as well as with teleoperation through a teleautonomy interface. The proposed assisted-planning approach achieves task times comparable with direct teleoperation while improving other objective and subjective metrics, including re-grasps, collisions, and TLX workload. Assisted planning in the teleautonomy interface achieves faster task execution and removes a significant interaction with the operator's expertise level, acting as a performance equalizer across users. Our study protocol, metrics, and models for statistical analysis might also serve as a general benchmarking framework in teleoperation domains. Accompanying video and reference R code: https://people.csail.mit.edu/cdarpino/THRIteleop/