Pub Date: 2021-08-08. DOI: 10.1109/RO-MAN50785.2021.9515313
M. Lieser, Ulrich Schwanecke, J. Berdux
The increasing autonomy and presence of Unmanned Aerial Vehicles (UAVs), especially quadrotors, in everyday applications require in-depth studies of proxemics in Human-Drone Interaction (HDI) and novel methods of user interaction suitable for different distances. This paper presents a user study (N=32) that evaluates proxemics with a miniature quadrotor (92 mm wheelbase) approaching from four directions (front, back, left, right) in a seated setting, investigating preferred approach directions and distances for future home or workplace scenarios. The goal of this study is to determine whether humans are willing to allow flying robots of this size and mechanical appearance to approach close enough to enable tactile interaction. Moreover, the participants' inclination to physically interact with the quadrotor is examined. Studies evaluating proxemics in HDI depend heavily on repeatable results and robots that actually fly. In most comparable studies, the quadrotors used did not fly freely or at all: they were moved by hand, manually controlled, or flew barely repeatable trajectories due to unstable onboard navigation. Only a few studies have used pose estimation systems that ensure smooth, reproducible trajectories and thus reliable findings. For this reason, in addition to the presented study and its results, we provide insight into the testbed used, which integrates full-skeleton pose estimation rather than tracking participants with only a single marker.
Title: "Evaluating Distances in Tactile Human-Drone Interaction". Published in: 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 1275-1282.
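A testbed like the one described has to resolve, for every trial, how far the quadrotor is from the participant and from which side it approaches. The following is a minimal sketch of that bookkeeping, assuming 2D positions from a tracking system and a torso yaw from skeleton pose estimation; all names, the body-frame convention, and the four-sector split are illustrative, not the authors' implementation.

```python
import math

def approach_info(person_xy, person_yaw, drone_xy):
    """Return (direction, distance) of the drone relative to a seated person.

    person_yaw: heading of the person's torso in radians (0 = facing +x).
    """
    dx = drone_xy[0] - person_xy[0]
    dy = drone_xy[1] - person_xy[1]
    distance = math.hypot(dx, dy)
    # Angle of the drone in the person's body frame (x forward, y to the left).
    rel = math.atan2(dy, dx) - person_yaw
    rel = (rel + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    if -math.pi / 4 <= rel < math.pi / 4:
        direction = "front"
    elif math.pi / 4 <= rel < 3 * math.pi / 4:
        direction = "left"
    elif -3 * math.pi / 4 <= rel < -math.pi / 4:
        direction = "right"
    else:
        direction = "back"
    return direction, distance
```

With per-trial logs of this form, preferred approach directions and minimum comfortable distances can be aggregated across participants.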
Pub Date: 2021-08-08. DOI: 10.1109/RO-MAN50785.2021.9515428
M. L. Chang, J. Trafton, J. McCurry, A. Thomaz
How team members are treated influences their performance in the team and their desire to be part of the team in the future. Prior research in human-robot teamwork proposes fairness definitions for human-robot teaming that are based on the work completed by each team member. However, metrics that properly capture people’s perception of fairness in human-robot teaming remain a research gap. We present work on assessing how well objective metrics capture people’s perception of fairness. First, we extend prior fairness metrics based on team members’ capabilities and workload to a larger team. We also develop a new metric to quantify the amount of time that the robot spends working on the same task as each person. We conduct an online user study (n=95) and show that these metrics align with perceived fairness. Importantly, we discover bleed-over effects in people’s assessments of fairness: when asked to rate fairness based on the amount of time the robot spends working with each person, participants used two factors (fairness based on the robot’s time and on teammates’ capabilities). This bleed-over effect is stronger when people are asked to assess fairness based on capability. From these insights, we propose design guidelines for algorithms that enable robotic teammates to consider fairness in their decision-making, maintaining positive team social dynamics and team task performance.
Title: "Unfair! Perceptions of Fairness in Human-Robot Teams" (pp. 905-912).
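The abstract does not spell out the time-based metric, so the following is only a plausible sketch of a fairness index over the time the robot spends working with each teammate; the min/max form and all names are assumptions, not the paper's actual definition.

```python
def time_fairness(seconds_with_each):
    """Toy fairness index in [0, 1] over the robot's time with each teammate.

    1.0 = perfectly even split; approaches 0 as the split becomes lopsided.
    Illustrative min/max ratio only, not the metric proposed in the paper.
    """
    if not seconds_with_each or min(seconds_with_each) < 0:
        raise ValueError("need non-negative times for at least one teammate")
    hi = max(seconds_with_each)
    return 1.0 if hi == 0 else min(seconds_with_each) / hi
```

An index like this could feed a task allocator that prefers assignments keeping the score close to 1 while the team's task throughput is preserved.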
Pub Date: 2021-08-08. DOI: 10.1109/RO-MAN50785.2021.9515502
Björn Petrak, J. Stapels, Katharina Weitz, F. Eyssel, E. André
Various works show that proxemics plays an important role in human-robot interaction and that appropriate proxemic behavior depends on many characteristics of both humans and robots. However, no prior work has examined the relationship between an emotional state expressed by a user and the robot’s proxemic reaction to it in a social interaction between these interactants. In the current experiment (N = 82), an online study, we examine which proxemic response (i.e., approaching, not moving, moving away) to a person’s expressed emotional state (i.e., anger, fear, disgust, surprise, sadness, joy) is perceived as appropriate. The quantitative and qualitative data collected suggest that approaching was considered appropriate in response to expressed fear, sadness, and joy, whereas moving away was perceived as inappropriate in most scenarios. Further exploratory findings underline the importance of appropriate nonverbal behavior for the perception of the robot.
Title: "To Move or Not to Move? Social Acceptability of Robot Proxemics Behavior Depending on User Emotion" (pp. 975-982).
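Read as a lookup policy, the reported findings might be sketched as follows. The mapping is an illustrative interpretation of the study's results (approach for fear, sadness, joy; default to staying put elsewhere, since moving away was mostly judged inappropriate), not the authors' system.

```python
# Emotion -> proxemic response, derived loosely from the reported ratings.
# "stay" is the conservative default; the dictionary values are assumptions.
PROXEMIC_POLICY = {
    "fear": "approach",
    "sadness": "approach",
    "joy": "approach",
    "anger": "stay",
    "disgust": "stay",
    "surprise": "stay",
}

def proxemic_response(emotion):
    """Pick a proxemic reaction for a recognized emotion label."""
    return PROXEMIC_POLICY.get(emotion, "stay")
```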
Pub Date: 2021-08-08. DOI: 10.1109/RO-MAN50785.2021.9515373
Zhijie Zhang, Jianmin Zheng, N. Magnenat-Thalmann
As intelligent agents (IAs) become increasingly common in daily life, they are expected to have reasonable social intelligence, interacting with people by appropriately interpreting human behavior and intention. This paper presents a method to estimate whether people are willing to join a conversation, which helps endow IAs with the capability of detecting potential participants. The method is built on a CNN-LSTM network that takes image features and social signals as input, making use of general information conveyed in images, semantic social cues supported by social psychology studies, and temporal information in the input sequence. The network has a multi-branch structure with the flexibility to accommodate different types of inputs. We also discuss signal transitions in multiparty human-robot interaction scenarios. The method is evaluated on three datasets with social signals and/or images as inputs. The results show that the proposed method infers human engagement intention well.
Title: "Engagement Intention Estimation in Multiparty Human-Robot Interaction" (pp. 117-122).
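A shape-level sketch of the multi-branch idea: each input modality is processed by its own branch and the branch outputs are concatenated before a final classifier. Real branches would be CNN and LSTM stacks; here they are stand-in linear maps so the data flow stays visible. All weights and names below are illustrative, not the paper's architecture.

```python
def linear(x, w):
    # y_i = sum_j w[i][j] * x[j]
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def fuse_and_score(image_feat, social_feat, w_img, w_soc, w_head):
    """Two-branch late fusion: per-modality branches, concatenate, score."""
    branch_img = linear(image_feat, w_img)    # stand-in for the CNN branch
    branch_soc = linear(social_feat, w_soc)   # stand-in for the social-signal branch
    fused = branch_img + branch_soc           # concatenation of branch outputs
    return linear(fused, w_head)[0]           # engagement logit
```

The point of the structure is that a branch can be added or dropped (e.g., when a dataset lacks images) without touching the other branches, matching the flexibility claimed in the abstract.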
Pub Date: 2021-08-08. DOI: 10.1109/RO-MAN50785.2021.9515323
R. Gomez, Deborah Szapiro, Kerl Galindo, L. Merino, H. Brock, Keisuke Nakamura, Yu Fang, Eric Nichols
In this paper, we explore the storytelling potential of a robot. We exploit creative content that maximizes the embodied communication affordances of the empathic robot Haru. We identify key elements of storytelling, such as narration, agency, engagement, and education, and synthesize these in the robot. Through careful design, we investigated how to address the limitations and challenges of developing storytelling applications through a robotic medium. Our preliminary findings show that using an embodied agent such as a robot for storytelling is only meaningful when its communicative affordances (i.e. embodiment, expressiveness, and other modalities) are tapped, adding a new dimension to the experience; otherwise, a traditional storytelling medium (e.g. a tablet) will suffice. Hence, robots need to be performers rather than mere props in storytelling.
Title: "Exploring Affective Storytelling with an Embodied Agent" (pp. 1249-1255).
Pub Date: 2021-08-08. DOI: 10.1109/RO-MAN50785.2021.9515528
Kanishk Rana, Rahul Madaan, Jainendra Shukla
Chatbots are among the emerging intelligent systems that interact with customers to resolve queries across a wide range of domains. In social interaction, politeness plays a vital role in achieving effective communication. Consequently, it is essential to understand how a chatbot’s politeness affects the user experience during the interaction. To this end, we conducted a between-subjects user study with two chatbots: one employs polite triggers, while the other simply answers the queries. To introduce politeness into normal chatbot responses, we used the state-of-the-art tag-and-generate approach. We first analyzed how different personality traits influence individuals’ responses to polite triggers. In addition, we investigated the effects of polite triggers across genders and age groups using a cross-sectional analysis.
Title: "Effect of Polite Triggers in Chatbot Conversations on User Experience across Gender, Age, and Personality" (pp. 813-819).
Pub Date: 2021-08-08. DOI: 10.1109/RO-MAN50785.2021.9515320
H. Park, Jung Hoon Kim
Motivated by the fact that many existing state estimation methods require presumed conditions and may not achieve the desired accuracy when applied to real systems, this paper provides a new framework for state estimation. We first introduce some existing state estimation methods and describe their weaknesses with respect to unknown bounded persistent elements. Aiming at more practical situations in real systems, which cannot be treated by the existing methods, this paper derives a new state estimation method from the l1 optimal control theory. More precisely, the new method, called the l1 optimal state estimation, accounts for unknown bounded persistent elements such as external disturbances and measurement noise, which often occur in real systems and make estimation difficult. The problem of minimizing the effect of these bounded persistent elements on the state estimation error can be formulated mathematically using the arguments on l1 optimal state estimation introduced in this paper. Finally, the effectiveness of the l1 optimal state estimation is demonstrated through simulation results on center of mass (CoM) estimation for a bipedal robot using its linear inverted pendulum model (LIPM).
Title: "On the l1 Optimal State Estimator with Applications to Bipedal Robots" (pp. 792-797).
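To make the application concrete: on the LIPM, the CoM position obeys x_ddot = (g / z) * (x - p), with p the zero-moment point. The sketch below shows only the structure of a discrete-time observer for this model, with fixed, hand-picked correction gains; the paper's contribution is synthesizing such gains via l1 optimal control, which is not reproduced here. Constants and gain values are illustrative.

```python
# LIPM parameters: gravity, pendulum height, sample time (all assumed values).
G, Z, DT = 9.81, 0.8, 0.005
OMEGA2 = G / Z  # natural frequency squared of the LIPM

def observer_step(x_hat, v_hat, zmp, x_meas, l1=0.4, l2=2.0):
    """One predict/correct step; returns the updated (x_hat, v_hat).

    l1, l2 are hand-picked observer gains, NOT the l1-optimal gains
    synthesized in the paper.
    """
    # Predict with the LIPM dynamics (explicit Euler).
    a = OMEGA2 * (x_hat - zmp)
    x_pred = x_hat + DT * v_hat
    v_pred = v_hat + DT * a
    # Correct with the CoM position measurement.
    innov = x_meas - x_pred
    return x_pred + l1 * innov, v_pred + l2 * innov
```

Bounded disturbances and measurement noise enter through `zmp` and `x_meas`; the l1-optimal design minimizes their worst-case amplification into the estimation error, which fixed gains like these do not guarantee.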
Pub Date: 2021-08-08. DOI: 10.1109/RO-MAN50785.2021.9515550
Karen Tatarian, Marine Chamoux, A. Pandey, M. Chetouani
With more social robots entering industries such as education, health care, and even airports, it is important to tackle problems that may hinder high-quality interactions in the wild, including group conversations. This paper presents an autonomous conversational-role coordinator system based on the proxemics of the group participants relative to the robot, including their distances and orientations. The system assigns one of three statuses to the participants around the robot: active, bystander, and overhearer. Once the statuses are estimated, the robot autonomously adjusts its gaze pattern to adapt to the group dynamics and allocates its attention according to the role each member is playing in the group. The system was evaluated in a pilot study (N=16) in which two participants at a time played a trivia game with the robot, taking on different roles within the interaction. The primary results imply that participants interacting with a robot that exhibits this adaptive, role-coordinated gaze behavior are more likely to stand closer to the robot. In addition, the robot was perceived as more adaptable, sociable, and socially present, as well as more likely to make the participants feel attended to.
Title: "Robot Gaze Behavior and Proxemics to Coordinate Conversational Roles in Group Interactions" (pp. 1297-1304).
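In the spirit of the described coordinator, status assignment from distance and orientation might look like the following sketch: participants close to the robot and facing it are "active"; close but not facing it, "bystander"; everyone else, "overhearer". The thresholds and names are hypothetical, not the authors' calibrated values.

```python
def assign_status(distance_m, facing_offset_deg):
    """Classify one participant's conversational status around the robot.

    distance_m: participant-to-robot distance in meters.
    facing_offset_deg: angle between the participant's body orientation
    and the direction toward the robot (0 = facing the robot directly).
    """
    facing = abs(facing_offset_deg) <= 30.0  # assumed facing tolerance
    if distance_m <= 1.2 and facing:         # assumed "active" radius
        return "active"
    if distance_m <= 2.5:                    # assumed conversational radius
        return "bystander"
    return "overhearer"
```

The gaze controller would then weight its attention toward "active" participants while still occasionally acknowledging bystanders and overhearers.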
Pub Date: 2021-08-08. DOI: 10.1109/RO-MAN50785.2021.9515470
Johannes Møgster, M. Stoelen, E. Kyrkjebø
Cooperative physical Human-Robot Interaction (pHRI) aims to combine the best of human problem-solving skills with the strength, speed, and accuracy of a robot. When humans and robots physically interact, there will be interaction Forces and Torques (F/Ts) that must stay within safe limits to avoid threats to human life and unacceptable damage to equipment. When measured, these F/Ts can be limited by safety-rated emergency stops, and one can design compliant robot behavior to reduce interaction F/Ts and avoid unnecessary emergency stops. Several recent collaborative robots measure interaction F/Ts using torque sensors in the joints or observers for joint torque, and the classical end-effector F/T sensor can measure interaction at the working end of the robot. The end-effector wrench can be calculated from joint torques if and only if there is no interaction on the robot body. Typically, safety limits are evaluated around a single point of contact, on the end-effector or elsewhere on the robot body. This approach fails when a human uses both hands to interact with a robot, e.g. when hand guiding or otherwise cooperating with the robot by placing one hand on the robot end-effector and the other on the robot elbow. Evaluating two points of contact as one limits the allowed F/Ts based on the sum of the contacts rather than on each contact individually. In this paper, we introduce the body torque as the interaction on the body that is not the result of interactions on the end-effector. We then use this body torque, which is a more accurate representation of the forces applied to the robot body, to limit the body interaction F/Ts and ensure safe human-robot interaction. Furthermore, the body torque can be used to design null-space compliance for a redundant robot.
Distinguishing body torque is a step towards safe cooperative pHRI, where body torque, unknown end-effector loads, and end-effector interaction F/T are all important measurements for safety, control and compliance.
Title: "Estimating Robot Body Torque for Two-Handed Cooperative Physical Human-Robot Interaction" (pp. 279-284).
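The core relation can be sketched directly: joint torques measured at the joints contain both end-effector and body interaction, so mapping the measured end-effector wrench through the Jacobian transpose and subtracting it leaves the body torque, tau_body = tau_measured - J^T * F_ee. The toy numbers below (a square Jacobian for a planar 2-DoF arm) are illustrative only.

```python
def mat_vec(m, v):
    # Matrix-vector product: y_i = sum_j m[i][j] * v[j]
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

def transpose(m):
    return [list(col) for col in zip(*m)]

def body_torque(tau_measured, jacobian, f_ee):
    """tau_body = tau_measured - J^T * F_ee.

    tau_measured: joint torques from joint sensors/observers.
    jacobian: end-effector Jacobian (rows = wrench dims, cols = joints).
    f_ee: measured end-effector wrench.
    """
    jt_f = mat_vec(transpose(jacobian), f_ee)
    return [t - j for t, j in zip(tau_measured, jt_f)]
```

Per-contact safety limits can then be evaluated on `tau_measured - J^T F_ee` (body) and `f_ee` (end-effector) separately, instead of on their sum.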
Pub Date : 2021-08-08DOI: 10.1109/RO-MAN50785.2021.9515535
G. Giannopoulou, Elsi-Mari Borrelli, Fiona McMaster
Technology can be empowering: It can lift the burden from tasks we find dreadful and help us excel in tasks we enjoy. Equally, a mismatch between the user interface (UI) and our skills can make us feel incompetent and reluctant towards the technology. With the trend to increase the level of automation in fields beyond traditional manufacturing, such as laboratories or small workshops, new technologies such as collaborative robots (cobots) are entering the workplace. The technology literacy levels of professionals in these fields may vary greatly depending on their age, gender, education and personal interests, creating a challenge in designing universal cobot UIs. In this qualitative study, we address the question of how introducing interaction skills and intelligence to cobots may inhibit or encourage users to use them in their work context. The interviews, performed with 15 individuals working in laboratory settings, gave rise to numerous themes relevant to the design of user-empowering cobot interfaces for individuals with varying technology literacy levels: As "programming may not be for a normal person", talking to a robot may not be for another. Incorporating the unique interests, fears, personal and domain experience of the end users can contribute to the design and development of cobot interfaces as diverse as their needs, thus maximizing the likelihood of successful integration across diverse work environments.
{"title":"\"Programming - It’s not for Normal People\": A Qualitative Study on User-Empowering Interfaces for Programming Collaborative Robots","authors":"G. Giannopoulou, Elsi-Mari Borrelli, Fiona McMaster","doi":"10.1109/RO-MAN50785.2021.9515535","DOIUrl":"https://doi.org/10.1109/RO-MAN50785.2021.9515535","url":null,"abstract":"Technology can be empowering: It can lift the burden from tasks we find dreadful and help us excel in tasks we enjoy. Equally, a mismatch between the user interface (UI) and our skills can make us feel incompetent and reluctant towards the technology. With the trend to increase the level of automation in fields beyond traditional manufacturing, such as laboratories or small workshops, new technologies such as collaborative robots (cobots) are entering the workplace. The technology literacy levels of professionals in these fields may vary greatly depending on their age, gender, education and personal interests, creating a challenge in designing universal cobot UIs. In this qualitative study, we address the question of how introducing interaction skills and intelligence to cobots may inhibit or encourage users to use them in their work context. The interviews, performed with 15 individuals working in laboratory settings, gave rise to numerous themes relevant to the design of user-empowering cobot interfaces for individuals with varying technology literacy levels: As \"programming may not be for a normal person\", talking to a robot may not be for another. 
Incorporating the unique interests, fears, personal and domain experience of the end users can contribute to the design and development of cobot interfaces as diverse as their needs, thus maximizing the likelihood of successful integration across diverse work environments.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"18 1","pages":"37-44"},"PeriodicalIF":0.0,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87770284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}