Robot Self-defense: Robots Can Use Force on Human Attackers to Defend Victims*
E. K. Duarte, M. Shiomi, A. Vinel, M. Cooney
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900814
Could a social robot use force to prevent violence directed toward humans in its care? Might crime be eradicated, or, conversely, could excessive use of force proliferate and human dignity be trampled beneath cold robotic wheels? Such speculation is one part of a larger, increasingly important question of how social robots will be expected to behave in our societies as robotic technologies develop and become increasingly widespread. Here, to gain some insight into this topic of "robot self-defense", we proposed a simplified heuristic based on perceived risk of loss to predict acceptability, and conducted a user survey with 304 participants, who watched eight animated videos of robots and humans in a violent altercation. The results indicated that people largely accept the idea that a humanoid robot can use force on attackers to help others. Furthermore, self-defense was perceived as more acceptable when the appearance of the defender was humanoid rather than mechanical, and when the force disparity between attacker and defender was high. The immediate suggestion is that it could be beneficial to re-examine common assumptions that a robot should never harm or risk harming humans, and to discuss and consider the possibilities of robot self-defense.
Context and Intention for 3D Human Motion Prediction: Experimentation and User Study in Handover Tasks
Javier Laplaza, A. Garrell, F. Moreno-Noguer, A. Sanfeliu
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900743
In this work we present a novel attention-based deep learning model that uses context and human intention for 3D human body motion prediction in handover human-robot tasks. The model uses a multi-head attention architecture that takes as inputs the human motion, the robot end effector, and the positions of obstacles. Its outputs are the predicted motion of the human body and the predicted human intention. We use this model to analyze a collaborative handover task in which the robot predicts the future motion of the human and uses this information in its planner. Several experiments are performed in which human volunteers fill out a standard questionnaire to rate different features, comparing trials in which the robot uses the prediction against trials in which it does not.
Bots of a Feather: Exploring User Perceptions of Group Cohesiveness for Application in Robotic Swarms
Rebecca Stower, E. Zibetti, D. St-Onge
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900567
Behaviours of robot swarms often take inspiration from biological models, such as ant colonies and bee hives. Yet, understanding how these behaviours are actually perceived by human users has so far received limited attention. In this paper, we use animations to represent different kinds of possible swarm motions intended to communicate specific messages to a human. We explore how these animations relate to the perceived group cohesiveness of the swarm, composed of five parameters: synchronising, grouping, following, reacting, and shape forming. We conducted an online user study where 98 participants viewed nine animations of a swarm displaying different behaviours and rated them for perceived group cohesiveness. We found that the parameters of group cohesiveness correlated with the messages the swarm was perceived as communicating. In particular, the message of initiating communication was highly positively correlated with all group parameters, whereas broken communication was negatively correlated. In addition, the importance of specific group parameters differed within each animation. For example, the parameter of grouping was most associated with animations signalling an intervention is needed. These findings are discussed within the context of designing intuitive behaviour for robot swarms.
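The five cohesiveness parameters above are perceptual ratings collected from participants, but two of them have natural physical analogues that a swarm designer could compute from agent state. The sketch below is my own hedged mapping, not the paper's measures: "grouping" approximated as mean distance to the swarm centroid, and "synchronising" as a Vicsek-style polar order parameter over headings.

```python
import numpy as np

def grouping_score(positions):
    """Mean distance to the swarm centroid; lower means a tighter group."""
    centroid = positions.mean(axis=0)
    return float(np.mean(np.linalg.norm(positions - centroid, axis=1)))

def synchrony_score(velocities):
    """Vicsek-style polar order parameter in [0, 1]; 1 = perfectly aligned headings."""
    headings = velocities / np.linalg.norm(velocities, axis=1, keepdims=True)
    return float(np.linalg.norm(headings.mean(axis=0)))

tight = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
spread = tight * 10.0
aligned = np.tile([1.0, 0.0], (5, 1))   # all agents heading along +x
```

A designer animating "initiating communication" versus "broken communication" could then check whether the intended physical cohesion actually differs, before testing how humans perceive it.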
Head Pose for Object Deixis in VR-Based Human-Robot Interaction
Padraig Higgins, Ryan Barron, Cynthia Matuszek
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900631
Modern robotics heavily relies on machine learning and has a growing need for training data. Advances and commercialization of virtual reality (VR) present an opportunity to use VR as a tool to gather such data for human-robot interactions. We present the Robot Interaction in VR simulator, which allows human participants to interact with simulated robots and environments in real-time. We are particularly interested in spoken interactions between the human and robot, which can be combined with the robot’s sensory data for language grounding. To demonstrate the utility of the simulator, we describe a study which investigates whether a user’s head pose can serve as a proxy for gaze in a VR object selection task. Participants were asked to describe a series of known objects, providing approximate labels for the focus of attention. We demonstrate that using a concept of gaze derived from head pose can be used to effectively narrow the set of objects that are the target of participants’ attention and linguistic descriptions.
Exploring Older Adults’ Acceptance, Needs, and Design Requirements towards Applying Social Robots in a Rehabilitation Context
Baisong Liu, D. Tetteroo, A. Timmermans, P. Markopoulos
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900804
This paper presents a qualitative study that uses video prototypes and interviews to explore older adults’ acceptance, needs, and design requirements towards a social robotic application for physical rehabilitation. Our study identified the benefits of applying social robots (SR) in physical rehabilitation. Further, we discovered participants’ preference for an anthropomorphic social robot design. The data revealed that a desire for social interaction could increase both older adults’ motivation to engage in an active lifestyle and their acceptance of social robots. However, participants showed low motivation for technology adoption and anthropomorphized the social robot negatively, which lowered acceptance of such applications. This work complements current user-centered explorations of SR in rehabilitation and provides considerations for SR design in rehabilitative applications.
From Task Analysis to Wireframe Design: An Approach to User-Centered Design of a GUI for Mobile HRI at Assembly Workplaces
C. Colceriu, B. Leichtmann, S. Brell-Çokcan, W. Jonas, V. Nitsch
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900679
While user-centered design philosophy and corresponding design recommendations are central pillars of human-robot interaction (HRI) research, the process of moving from such abstract and generalized design recommendations to concrete, context-specific design implementations remains under-researched and vague in the literature. The goal of this paper is therefore to show an approach for moving from abstract design recommendations to a concrete interface, thereby illustrating a design process that is rarely described in concrete terms in HRI. This is done using a real-world use case: designing a possible user-centered interface for mobile cooperative manufacturing robots for assembly work in a medium-sized company. A study is presented to conceptualize and test a Research-through-Design approach, which combines transdisciplinary methods to determine the relevant information that should be displayed on a graphical user interface (GUI) for HRI. Based on the use case, a Goal-Directed Task Analysis (GDTA) was conducted, consisting of participatory observation and interviews with subject matter experts to analyze an assembly task from the work objective down to the information units. The acquired information was transferred to a physical model. A wireframe was created to show how the results of the GDTA and the physical model can be applied to a GUI. The wireframe design was evaluated through qualitative interviews with end users (n = 12) to obtain first estimates of its relevance. In order to validate the applied methods, design and engineering students (n = 10) repeated the process in stages, followed by interviews. The results indicate that the method mix shows potential and leads to supportive user interfaces.
An AI-powered Hierarchical Communication Framework for Robust Human-Robot Collaboration in Industrial Settings
D. Mukherjee, Kashish Gupta, H. Najjaran
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900601
Cohesive human-robot collaboration (HRC) on an industrial task requires an intelligent robot capable of functioning in uncertain and noisy environments. This can be achieved through seamless and natural communication between human and robot partners. Introducing naturalness into communication is highly complex due to both aleatoric variability and epistemic uncertainty originating from the components of the HRC system, including the human, sensors, robot(s), and the environment. The presented work proposes the artificial intelligence (AI)-powered multimodal, robust fusion (AI-MRF) architecture, which combines communication modalities from the human for more natural communication. The proposed architecture utilizes fuzzy inferencing and Dempster-Shafer theory to deal with different manifestations of uncertainty. AI-MRF is scalable and modular, and is evaluated for safety and robustness in case studies mimicking real-world conditions. While the architecture has been evaluated for HRC in industrial settings, it can readily be applied to other human-machine communication scenarios.
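The Dempster-Shafer machinery mentioned in the abstract has a compact core: Dempster's rule of combination fuses belief masses from two evidence sources, discarding the mass that falls on contradictory hypotheses. The sketch below is a minimal textbook implementation, with hypothetical "speech" and "gesture" modalities standing in for AI-MRF's actual inputs; it is not the paper's code.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over the same frame of discernment."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to contradictory focal elements
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    # Normalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical modalities, both leaning toward the command "go" but with
# some mass left on the whole frame {"go", "stop"} (i.e., "don't know"):
speech  = {frozenset({"go"}): 0.8, frozenset({"go", "stop"}): 0.2}
gesture = {frozenset({"go"}): 0.6, frozenset({"go", "stop"}): 0.4}
fused = dempster_combine(speech, gesture)
print(fused)  # mass on {"go"} rises to 0.92; residual ignorance drops to 0.08
```

Keeping explicit mass on the full frame is what lets this style of fusion represent epistemic uncertainty ("don't know") separately from aleatoric noise, which is presumably why the abstract pairs it with fuzzy inferencing.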
Privacy Expectations for Human-Autonomous Vehicle Interactions
Cara Bloom, Josiah Emery
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900615
Robots operating in public spaces, such as autonomous vehicles, will necessarily collect images and other data concerning the people and vehicles in their vicinity, raising privacy concerns. Common conceptions of privacy in robotics do not include the challenges of many-to-many surveillance, where fleets of many individual robots collect data on many people during operation. Technologists, legal scholars, and privacy researchers recommend such technologies fulfill the reasonable privacy expectations of society, but there is no standard method for measuring privacy expectations. We propose a method informed by Contextual Integrity Theory for identifying societal privacy expectations for autonomous vehicle-collected data and codifying the contextual expectations as norms. We present a study (n = 600) that identifies twelve distinct norms, which are made up of contextual factors such as the subject of data collection and the data use. In a model for tolerance of autonomous vehicle data collection, we find that both contextual factors related to the data processing and factors related to the individual are significant predictors.
Push and Pull Feedback in Mobile Robotic Telepresence - A Telecare Case Study
Omer Keidar, S. Olatunji, Y. Edan
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900596
Mobile robotic telepresence (MRP) has emerged as a possible solution for supporting health caregivers in a multitude of tasks such as monitoring, pre-diagnosis, and delivery of items. Improved interaction with the system is an important part of using such MRP systems. The current study compared two feedback types ('push' and 'pull') for controlling mobile robots via telepresence. An experimental system that represented a hospital environment was developed. A remote operator (defined as a user) teleoperated a mobile robot to deliver medication supplies to a patient and receive samples from the patient while attending to a secondary task involving medical records. The influence of the feedback types on different aspects of performance and user perception was investigated. User studies were performed with 20 participants drawn from two groups: users with and without technological backgrounds. Results revealed that for both user types, the 'push' feedback enhances performance, situation awareness, and satisfaction compared to the 'pull' feedback. The study highlights the potential of improving the telecare experience with MRPs through different feedback types.
Evaluation of Expressive Motions based on the Framework of Laban Effort Features for Social Attributes of Robots*
Ebru Emir, C. Burns
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900645
In today’s world, it is not uncommon to see robots adopted in various domains and environments. Robots take over roles and tasks in settings ranging from manufacturing facilities to households and offices. It is crucial to measure people’s judgment of robots' social attributes, since the findings can shape future designs for social robots. Using only a simple, mono-functional robotic vacuum cleaner, this paper investigates the impact of expressive motions on how people perceive the social attributes of the robot. The Laban Effort Features framework, a movement-analysis approach that emerged from dance, was modified to design expressive motions for a simple cleaning task. Participants were asked to rate the social attributes of the robot under several treatment conditions using a video-based online survey. The results indicated that velocity influenced people’s ratings of the robot’s warmth and competence, while path planning behavior influenced people’s ratings of the robot’s competence and discomfort. Limitations of this study include the kinematic constraints of the robot, potential issues with survey design, and technical constraints related to the open interface provided by the robot’s developer. The findings should be considered when incorporating expressive motions into domestic service robots operating in social settings.