An Efficient Algorithm for Visualization and Interpretation of Grounded Language Models
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900835
Jacob Arkin, Siddharth Patki, J. Rosser, T. Howard
Contemporary approaches to grounded language communication accept an utterance and a current world representation as input and produce symbols representing the meaning as output. Since modern approaches to language understanding for human-robot interaction use techniques rooted in machine learning, the quality or sensitivity of the solution is often opaque with respect to small changes in input. Although it is possible to sample and visualize solutions over a large space of inputs, naïve application of current techniques is often prohibitively expensive for real-time feedback. In this paper, we address this problem by reformulating the inference process of Distributed Correspondence Graphs to recompute only the subsets of spatially dependent constituent features over a space of sampled environment models. We quantitatively evaluate the speed of inference in physical experiments involving a tabletop robot manipulation scenario. We demonstrate the ability to visualize, in real time, configurations of the environment where symbol grounding produces consistent solutions, and illustrate how these techniques can be used to identify and repair gaps or inaccuracies in training data.
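To make the reformulation concrete, here is a minimal Python sketch of the incremental recomputation idea: feature values are cached, and only those marked as spatially dependent are re-evaluated for each sampled environment model. The factor and feature interfaces are hypothetical stand-ins, not the authors' Distributed Correspondence Graph implementation.

```python
# Sketch: recompute only spatially dependent features across sampled environments;
# language-only feature values are cached and reused. Interfaces are hypothetical.
from typing import Callable, Dict, Tuple

class IncrementalFactor:
    def __init__(self, features: Dict[str, Tuple[Callable, bool]]):
        # features maps a name to (feature_fn(world), is_spatially_dependent)
        self.features = features
        self.cache: Dict[str, float] = {}

    def evaluate(self, world) -> float:
        total = 0.0
        for name, (fn, spatial) in self.features.items():
            if spatial or name not in self.cache:
                self.cache[name] = fn(world)  # recompute only when needed
            total += self.cache[name]
        return total

# Usage over sampled environment models: the linguistic feature is computed once;
# the spatial feature is re-evaluated per sampled object position.
factor = IncrementalFactor({
    "word_match": (lambda w: 1.0, False),                     # language-only feature
    "object_left_of": (lambda w: float(w["x"] < 0.0), True),  # spatial feature
})
for x in (-0.2, 0.1, 0.4):
    print(factor.evaluate({"x": x}))
```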
{"title":"An Efficient Algorithm for Visualization and Interpretation of Grounded Language Models","authors":"Jacob Arkin, Siddharth Patki, J. Rosser, T. Howard","doi":"10.1109/RO-MAN53752.2022.9900835","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900835","url":null,"abstract":"Contemporary approaches to grounded language communication accept an utterance and current world representation as input and produce symbols representing the meaning as output. Since modern approaches to language understanding for human-robot interaction use techniques rooted in machine learning, the quality or sensitivity of the solution is often opaque relative to small changes in input. Although it is possible to sample and visualize solutions over a large space of inputs, naïve application of current techniques is often prohibitively expensive for real-time feedback. In this paper we address this problem by reformulating the inference process of Distributed Correspondence Graphs to only recompute subsets of spatially dependent constituent features over a space of sampled environment models. We quantitatively evaluate the speed of inference in physical experiments involving a tabletop robot manipulation scenario. We demonstrate the ability to visualize configurations of the environment where symbol grounding produces consistent solutions in real-time and illustrate how these techniques can be used to identify and repair gaps or inaccuracies in training data.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131929900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Action Unit Generation through Dimensional Emotion Recognition from Text
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900535
Benedetta Bucci, Alessandra Rossi, Silvia Rossi
Expressiveness is a critical feature for communication between humans and robots, and it helps humans better understand and accept a robot. Emotions can be expressed through a variety of modalities: kinesthetic (facial expressions, body posture, and gestures), auditory (the acoustic features of speech), and semantic (the content of what is said). One of the most effective modalities for communicating emotions is facial expression. Social robots often show facial expressions with pre-coded animations. However, a robot must be able to express emotional responses appropriate to its interaction with people. In this work, we consider verbal interactions between humans and robots and propose a system composed of two modules for the generation of facial emotions by recognising the arousal and valence values of a written sentence. The first module, based on Bidirectional Encoder Representations from Transformers (BERT), is deployed for emotion recognition in a sentence. The second, an Auxiliary Classifier Generative Adversarial Network (AC-GAN), generates the facial movements that express the recognised emotion in terms of valence and arousal.
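A minimal sketch of the two-stage pipeline described above, assuming PyTorch: a text-encoder head predicts (valence, arousal), and a conditional generator maps those values, plus noise, to action-unit intensities. Both networks are toy stand-ins, not the paper's BERT or AC-GAN models.

```python
# Sketch of the text -> (valence, arousal) -> action units pipeline; toy networks.
import torch
import torch.nn as nn

class ValenceArousalHead(nn.Module):
    def __init__(self, embed_dim: int = 32):
        super().__init__()
        self.head = nn.Linear(embed_dim, 2)  # -> (valence, arousal)

    def forward(self, sentence_embedding: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.head(sentence_embedding))  # values in [-1, 1]

class AUGenerator(nn.Module):
    """Generator conditioned on (valence, arousal) plus noise, AC-GAN style."""
    def __init__(self, noise_dim: int = 16, num_action_units: int = 17):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + 2, 64), nn.ReLU(),
            nn.Linear(64, num_action_units), nn.Sigmoid(),  # AU intensities in [0, 1]
        )

    def forward(self, noise, va):
        return self.net(torch.cat([noise, va], dim=-1))

embedding = torch.randn(1, 32)      # placeholder for a BERT sentence embedding
va = ValenceArousalHead()(embedding)
aus = AUGenerator()(torch.randn(1, 16), va)
print(va.shape, aus.shape)          # torch.Size([1, 2]) torch.Size([1, 17])
```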
{"title":"Action Unit Generation through Dimensional Emotion Recognition from Text","authors":"Benedetta Bucci, Alessandra Rossi, Silvia Rossi","doi":"10.1109/RO-MAN53752.2022.9900535","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900535","url":null,"abstract":"Expressiveness is a critical feature for the communication between humans and robots, and it helps humans to better understand and accept a robot. Emotions can be expressed through a variety of modalities: kinesthetic (via facial expression), body posture and gestures, auditory, thus the acoustic features of speech, and semantic, thus the content of what is said. One of the most effective modalities to communicate emotions is through facial expressions. Social robots often show facial expressions with coded animations. However, the robot must be able to express appropriate emotional responses according to the interaction with people. In this work, we consider verbal interactions between humans and robots and propose a system composed of two modules for the generation of facial emotions by recognising the arousal and valence values of a written sentence. The first module, based on Bidirectional Encoder Representations from Transformers, is deployed for emotion recognition in a sentence. The second, an Auxiliary Classifier Generative Adversarial Network, is proposed for the generation of facial movements for expressing the recognised emotion in terms of valence and arousal.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"379 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122175907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Modular Interface for Controlling Interactive Behaviors of a Humanoid Robot for Socio-Emotional Skills Training
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900704
J. Sessner, A. Porstmann, S. Kirst, N. Merz, I. Dziobek, J. Franke
The use of social robots in psychotherapy has gained interest across various applications. In the context of therapy for children with socio-emotional impairments, for example autism spectrum conditions, first approaches have already been evaluated successfully in research. In this context, the robot can be seen as a tool for therapists to foster interaction with the children. To ensure a successful integration of social robots into therapy sessions, an intuitive and comprehensive interface for the therapist is needed to guarantee safe and appropriate human-robot interaction. This publication addresses the development of a graphical user interface for robot-assisted therapy to train socio-emotional skills in children on the autism spectrum. The software follows a generic and modular approach: a robotic middleware is used to control the robot, and the user interface is based on a local web application. During therapy sessions, the therapist interface is used to control the robot’s reactions and provides additional information from emotion and arousal recognition software. The approach is implemented with the humanoid robot Pepper (SoftBank Robotics). A pilot study was carried out with four experts from a child and youth psychiatry clinic to evaluate the feasibility and user experience of the therapist interface. In sum, the user experience and usefulness were rated positively.
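A minimal sketch of the described architecture, assuming Flask for the local web application: a therapist-facing endpoint forwards a chosen reaction to a stubbed middleware publisher. The route, payload, and publish function are hypothetical placeholders, not the paper's actual interface or middleware API.

```python
# Sketch: local web app endpoint -> middleware stub -> robot. Names are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

def publish_robot_action(action: str) -> None:
    """Stand-in for a middleware publish (e.g., a ROS topic or NAOqi call)."""
    print(f"[middleware] sending action: {action}")

@app.route("/robot/action", methods=["POST"])
def trigger_action():
    action = request.get_json(force=True).get("action", "idle")
    publish_robot_action(action)  # the therapist stays in control of each reaction
    return jsonify({"status": "ok", "action": action})

if __name__ == "__main__":
    app.run(port=5000)  # local-only web application, as in the described setup
```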
{"title":"A Modular Interface for Controlling Interactive Behaviors of a Humanoid Robot for Socio-Emotional Skills Training","authors":"J. Sessner, A. Porstmann, S. Kirst, N. Merz, I. Dziobek, J. Franke","doi":"10.1109/RO-MAN53752.2022.9900704","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900704","url":null,"abstract":"The usage of social robots in psychotherapy has gained interest in various applications. In the context of therapy for children with socio-emotional impairments, for example autism spectrum conditions, the first approaches have already been successfully evaluated in research. In this context, the robot can be seen as a tool for therapists to foster interaction with the children. To ensure a successful integration of social robots into therapy sessions, an intuitive and comprehensive interface for the therapist is needed to guarantee save and appropriate human-robot interaction. This publication addresses the development of a graphical user interface for robot-assisted therapy to train socio-emotional skills in children on the autism spectrum. The software follows a generic and modular approach. Furthermore, a robotic middleware is used to control the robot and the user interface is based on a local web application. During therapy sessions, the therapist interface is used to control the robot’s reactions and provides additional information from emotion and arousal recognition software. The approach is implemented with the humanoid robot Pepper (Softbank Robotics). A pilot study is carried out with four experts from a child and youth psychiatry to evaluate the feasibility and user experience of the therapist interface. In sum, the user experience and usefulness can be rated positively.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131687022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hot or not? Exploring User Perceptions of thermal Human-Robot Interaction*
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900785
Jacqueline Borgstedt, F. Pollick, S. Brewster
Haptics is an essential element of interaction between humans and socially assistive robots. However, it is often limited to movements or vibrations and misses key aspects such as temperature. This mixed-methods study explores the potential of enhancing human-robot interaction (HRI) through thermal stimulation to regulate affect during a stress-inducing task. Participants were exposed to thermal stimulation while completing the Mannheim Multicomponent Stress Test (MMST). Findings indicated that human-robot emotional touch may induce comfort and relaxation during exposure to acute stressors. User affect may be further enhanced through thermal stimulation, which was experienced as comforting and de-stressing and made participants perceive the robot as more life-like. Allowing participants to calibrate a temperature they perceived as calming provided novel insights into the temperature ranges suitable for interaction. While neutral temperatures were the most popular among participants, findings suggest that cool (4–29 °C), neutral (30–32 °C), and warm (33–36 °C) temperatures can all induce comforting effects during exposure to stress. The results highlight the potential of thermal HRI in general and, more specifically, the advantages of personalized temperature calibration.
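For reference, the reported temperature bands can be expressed as a small helper; the band boundaries come from the abstract, while the function itself is only an illustrative convenience, not part of the study's apparatus.

```python
# Temperature bands as reported in the abstract; purely illustrative.
def temperature_band(celsius: float) -> str:
    if 4 <= celsius <= 29:
        return "cool"
    if 30 <= celsius <= 32:
        return "neutral"
    if 33 <= celsius <= 36:
        return "warm"
    return "outside the ranges studied"

print(temperature_band(31))  # "neutral" -- the most popular band among participants
```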
{"title":"Hot or not? Exploring User Perceptions of thermal Human-Robot Interaction*","authors":"Jacqueline Borgstedt, F. Pollick, S. Brewster","doi":"10.1109/RO-MAN53752.2022.9900785","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900785","url":null,"abstract":"Haptics is an essential element of interaction between humans and socially assistive robots. However, it is often limited to movements or vibrations and misses key aspects such as temperature. This mixed-methods study explores the potential of enhancing human-robot interaction [HRI] through thermal stimulation to regulate affect during a stress-inducing task. Participants were exposed to thermal stimulation while completing the Mannheim-multicomponent-stress-task (MMST). Findings yielded that human-robot emotional touch may induce comfort and relaxation during the exposure to acute stressors. User affect may be further enhanced through thermal stimulation, which was experienced as comforting, de-stressing, and altered participants’ perception of the robot to be more life-like. Allowing participants to calibrate a temperature they perceived as calming provided novel insights into the temperature ranges suitable for interaction. While neutral temperatures were the most popular amongst participants, findings suggest that cool (4 – 29 ºC), neutral (30 – 32 ºC), and warm (33ºC -36 ºC) temperatures can all induce comforting effects during exposure to stress. The results highlight the potential of thermal HRI in general and, more specifically, the advantages of personalized temperature calibration.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133868828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human-robot co-manipulation of soft materials: enable a robot manual guidance using a depth map feedback
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900710
G. Nicola, E. Villagrossi, N. Pedrocchi
Human-robot co-manipulation of large but lightweight elements made of soft materials, such as fabrics, composites, or sheets of paper/cardboard, is a challenging operation with several relevant industrial applications. The primary constraint is that the force applied to the material must be unidirectional (i.e., the user can only pull the element), and its magnitude must be limited to avoid damaging the material itself. This paper proposes using a 3D camera to track the deformation of soft materials for human-robot co-manipulation. The acquired depth image is processed by a Convolutional Neural Network (CNN) to estimate the element’s deformation. The output of the CNN is the feedback for the robot controller, which tracks a given deformation set-point. The set-point tracking avoids excessive material deformation, enabling vision-based manual guidance of the robot.
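A minimal sketch of the feedback loop, assuming NumPy: a stand-in for the CNN estimates deformation from a depth frame, and a simple proportional law tracks the deformation set-point. The deformation proxy, gain, and sign convention are illustrative assumptions, not the paper's controller.

```python
# Sketch: depth frame -> deformation estimate -> set-point tracking command.
import numpy as np

def estimate_deformation(depth_image: np.ndarray) -> float:
    """Stand-in for the CNN; here, a crude proxy based on depth variance."""
    return float(np.var(depth_image))

def control_step(depth_image: np.ndarray, setpoint: float, kp: float = 0.5) -> float:
    """Return a velocity command along the pulling direction (illustrative P-law)."""
    error = setpoint - estimate_deformation(depth_image)
    return kp * error  # drives the estimated deformation toward the set-point

rng = np.random.default_rng(0)
frame = rng.normal(1.0, 0.05, size=(64, 64))  # fake depth frame in meters
print(control_step(frame, setpoint=0.01))
```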
{"title":"Human-robot co-manipulation of soft materials: enable a robot manual guidance using a depth map feedback","authors":"G. Nicola, E. Villagrossi, N. Pedrocchi","doi":"10.1109/RO-MAN53752.2022.9900710","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900710","url":null,"abstract":"Human-robot co-manipulation of large but lightweight elements made by soft materials, such as fabrics, composites, sheets of paper/cardboard, is a challenging operation that presents several relevant industrial applications. As the primary limit, the force applied on the material must be unidirectional (i.e., the user can only pull the element). Its magnitude needs to be limited to avoid damages to the material itself. This paper proposes using a 3D camera to track the deformation of soft materials for human-robot co-manipulation. Thanks to a Convolutional Neural Network (CNN), the acquired depth image is processed to estimate the element deformation. The output of the CNN is the feedback for the robot controller to track a given set-point of deformation. The set-point tracking will avoid excessive material deformation, enabling a vision-based robot manual guidance.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114941462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robots for Connection: A Co-Design Study with Adolescents
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900534
Patrícia Alves-Oliveira, Elin A. Björling, Patriya Wiesmann, Heba Dwikat, S. Bhatia, Kai Mihata, M. Cakmak
Adolescents isolated at home during the COVID-19 pandemic lockdown are more likely to feel lonely and in need of social connection. Social robots may provide much-needed social interaction without the risk of contracting an infection. In this paper, we detail the co-design process used to engage adolescents in the design of a social robot prototype intended to broadly support their mental health. Data gathered from our four-week design study of nine remote sessions and interviews with 16 adolescents suggested the following design requirements for a home robot: (1) be able to enact a set of roles including coach, companion, and confidant; (2) amplify human-to-human connection by supporting peer relationships; (3) account for data privacy and device ownership. Design materials are available open access, contributing to best practices for the field of Human-Robot Interaction.
{"title":"Robots for Connection: A Co-Design Study with Adolescents","authors":"Patrícia Alves-Oliveira, Elin A. Björling, Patriya Wiesmann, Heba Dwikat, S. Bhatia, Kai Mihata, M. Cakmak","doi":"10.1109/RO-MAN53752.2022.9900534","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900534","url":null,"abstract":"Adolescents isolated at home during the COVID19 pandemic lockdown are more likely to feel lonely and in need of social connection. Social robots may provide a much needed social interaction without the risk of contracting an infection. In this paper, we detail our co-design process used to engage adolescents in the design of a social robot prototype intended to broadly support their mental health. Data gathered from our four week design study of nine remote sessions and interviews with 16 adolescents suggested the following design requirements for a home robot: (1) be able to enact a set of roles including a coach, companion, and confidant; (2) amplify human-to-human connection by supporting peer relationships; (3) account for data privacy and device ownership. Design materials are available in open-access, contributing to best practices for the field of Human-Robot Interaction.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"380 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116058662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Moving away from robotic interactions: Evaluation of empathy, emotion and sentiment expressed and detected by computer systems
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900559
N. Gasteiger, Jongyoon Lim, Mehdi Hellou, Bruce A. MacDonald, H. Ahn
Social robots are often critiqued as being too ‘robotic’ and unemotional. For affective human-robot interaction (HRI), robots must detect sentiment and express emotion and empathy in return. We explored the extent to which people can detect emotions, empathy, and sentiment from speech expressed by a computer system, focusing on changes in prosody (pitch, tone, volume), and how people identify sentiment from written text compared to a sentiment analyzer. Eighty-nine participants identified empathy, emotion, and sentiment from audio and text embedded in a survey. Empathy and sentiment were best expressed in the audio, while emotions were the most difficult to detect (75%, 67%, and 42%, respectively). We found moderate agreement (70%) between the sentiment identified by the participants and by the analyzer. There is potential for computer systems to express affect by using changes in prosody, as well as to analyze text to identify sentiment. This may help to further develop affective capabilities and appropriate responses in social robots, in order to avoid ‘robotic’ interactions. Future research should explore how to better express negative sentiment and emotions, while leveraging multi-modal approaches to HRI.
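A minimal sketch of the human-analyzer agreement computation described above: participant sentiment labels are compared against an automatic analyzer, and percent agreement is reported. The toy lexicon-based analyzer is a stand-in for whatever analyzer the study actually used.

```python
# Sketch: percent agreement between human sentiment labels and a toy analyzer.
def analyze_sentiment(text: str) -> str:
    positive, negative = {"great", "happy", "love"}, {"sad", "awful", "hate"}
    words = set(text.lower().split())
    score = len(words & positive) - len(words & negative)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def percent_agreement(texts, human_labels) -> float:
    hits = sum(analyze_sentiment(t) == h for t, h in zip(texts, human_labels))
    return 100.0 * hits / len(texts)

texts = ["I love this robot", "This is awful", "It moved the cup"]
print(percent_agreement(texts, ["positive", "negative", "neutral"]))  # 100.0
```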
{"title":"Moving away from robotic interactions: Evaluation of empathy, emotion and sentiment expressed and detected by computer systems","authors":"N. Gasteiger, Jongyoon Lim, Mehdi Hellou, Bruce A. MacDonald, H. Ahn","doi":"10.1109/RO-MAN53752.2022.9900559","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900559","url":null,"abstract":"Social robots are often critiqued as being too ‘robotic’ and unemotional. For affective human-robot interaction (HRI), robots must detect sentiment and express emotion and empathy in return. We explored the extent to which people can detect emotions, empathy and sentiment from speech expressed by a computer system, with a focus on changes in prosody (pitch, tone, volume) and how people identify sentiment from written text, compared to a sentiment analyzer. 89 participants identified empathy, emotion and sentiment from audio and text embedded in a survey. Empathy and sentiment were best expressed in the audio, while emotions were the most difficult detect (75%, 67% and 42% respectively). We found moderate agreement (70%) between the sentiment identified by the participants and the analyzer. There is potential for computer systems to express affect by using changes in prosody, as well as analyzing text to identify sentiment. This may help to further develop affective capabilities and appropriate responses in social robots, in order to avoid ‘robotic’ interactions. Future research should explore how to better express negative sentiment and emotions, while leveraging multi-modal approaches to HRI.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116317217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leveraging Cognitive States in Human-Robot Teaming
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900794
Jack Kolb, H. Ravichandar, S. Chernova
Mixed human-robot teams (HRTs) have the potential to perform complex tasks by leveraging diverse and complementary capabilities within the team. However, assigning humans to operator roles in HRTs is challenging due to the significant variation in user capabilities. While much of the prior work in role assignment treats humans as interchangeable (either generally or within a category), we investigate the utility of personalized models of operator capabilities, grounded in relevant human factors, in an effort to improve overall team performance. We call this approach individualized role assignment (IRA) and provide a formal definition. A key challenge for IRA is that the factors affecting human performance are not static (e.g., one’s ability to track multiple objects can change during or between tasks). Instead of relying on time-consuming and highly intrusive measurements taken during the execution of tasks, we propose the use of short cognitive tests, taken before engaging in human-robot tasks, together with predictive models of individual performance. Results from a comprehensive user study conclusively demonstrate that IRA leads to significantly better team performance than a baseline method that assumes human operators are interchangeable, even when we control for the influence of the robots’ performance. Further, our results point to the possibility that the relative benefits of IRA will increase as the number of operators (i.e., choices) increases for a fixed number of tasks.
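A minimal sketch of how IRA could be realized, assuming SciPy: cognitive-test scores feed a hypothetical linear predictor of per-operator, per-role performance, and the Hungarian algorithm assigns roles to maximize the predicted total. With more operators than roles, the weakest-fit operators remain unassigned, consistent with the abstract's point that a larger operator pool offers more choices. The test battery and weights are illustrative, not the paper's model.

```python
# Sketch: cognitive-test scores -> predicted performance matrix -> optimal assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

test_scores = np.array([[0.9, 0.4],    # operator 0: (object tracking, reaction time)
                        [0.5, 0.8],
                        [0.7, 0.6]])
role_weights = np.array([[1.0, 0.2],   # role 0 leans on tracking ability
                         [0.3, 1.0]])  # role 1 leans on reaction time

predicted = test_scores @ role_weights.T        # (operators x roles) performance
rows, cols = linear_sum_assignment(-predicted)  # maximize total predicted performance
for op, role in zip(rows, cols):
    print(f"operator {op} -> role {role} (predicted {predicted[op, role]:.2f})")
```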
{"title":"Leveraging Cognitive States in Human-Robot Teaming","authors":"Jack Kolb, H. Ravichandar, S. Chernova","doi":"10.1109/RO-MAN53752.2022.9900794","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900794","url":null,"abstract":"Mixed human-robot teams (HRTs) have the potential to perform complex tasks by leveraging diverse and complementary capabilities within the team. However, assigning humans to operator roles in HRTs is challenging due to the significant variation in user capabilities. While much of prior work in role assignment treats humans as interchangeable (either generally or within a category), we investigate the utility of personalized models of operator capabilities based in relevant human factors in an effort to improve overall team performance. We call this approach individualized role assignment (IRA) and provide a formal definition. A key challenge for IRA is associated with the fact that factors that affect human performance are not static (e.g., one’s ability to track multiple objects can change during or between tasks). Instead of relying on time-consuming and highly-intrusive measurements taken during the execution of tasks, we propose the use of short cognitive tests, taken before engaging in human-robot tasks, and predictive models of individual performance to perform IRA. Results from a comprehensive user study conclusively demonstrate that IRA leads to significantly better team performance than a baseline method that assumes human operators are interchangeable, even when we control for the influence of the robots’ performance. Further, our results point to the possibility that such relative benefits of IRA will increase as the number of operators (i.e., choices) increase for a fixed number of tasks.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115146761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task Selection and Planning in Human-Robot Collaborative Processes: To be a Leader or a Follower?
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900770
Ali Noormohammadi-Asl, Ali Ayub, Stephen L. Smith, K. Dautenhahn
Recent advances in collaborative robots have provided an opportunity for close collaboration between humans and robots in a shared workspace. To exploit this collaboration, robots need to plan for optimal team performance while considering human presence and preference. This paper studies the problem of task selection and planning in a collaborative, simulated scenario. In contrast to existing approaches, which mainly involve assigning tasks to agents through a task allocation unit and informing them via a communication interface, we give the human and the robot the agency to act as either leader or follower. This allows them to select their own tasks or even assign tasks to each other. We propose a task selection and planning algorithm that enables the robot to consider the human’s preference to lead, as well as the team’s and the human’s performance, and to adapt accordingly by taking or giving the lead. The effectiveness of this algorithm has been validated through a simulation study with different combinations of human accuracy levels and preferences for leading.
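One way to picture the robot's adaptation is a simple decision rule like the sketch below: the robot takes the lead only when the human does not prefer to lead or when performance clearly suffers. The thresholds and scoring are illustrative assumptions, not the proposed algorithm.

```python
# Sketch: illustrative lead/follow decision rule; thresholds are assumptions.
def robot_should_lead(human_prefers_lead: bool,
                      human_accuracy: float,
                      team_performance: float,
                      perf_threshold: float = 0.7) -> bool:
    if not human_prefers_lead:
        return True  # human is happy to follow
    # Respect the human's preference unless performance clearly suffers.
    return team_performance < perf_threshold and human_accuracy < 0.6

print(robot_should_lead(True, 0.5, 0.6))  # True: performance and accuracy are low
print(robot_should_lead(True, 0.9, 0.9))  # False: defer to the human's preference
```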
{"title":"Task Selection and Planning in Human-Robot Collaborative Processes: To be a Leader or a Follower?","authors":"Ali Noormohammadi-Asl, Ali Ayub, Stephen L. Smith, K. Dautenhahn","doi":"10.1109/RO-MAN53752.2022.9900770","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900770","url":null,"abstract":"Recent advances in collaborative robots have provided an opportunity for the close collaboration of humans and robots in a shared workspace. To exploit this collaboration, robots need to plan for optimal team performance while considering human presence and preference. This paper studies the problem of task selection and planning in a collaborative, simulated scenario. In contrast to existing approaches, which mainly involve assigning tasks to agents by a task allocation unit and informing them through a communication interface, we give the human and robot the agency to be the leader or follower. This allows them to select their own tasks or even assign tasks to each other. We propose a task selection and planning algorithm that enables the robot to consider the human’s preference to lead, as well as the team and the human’s performance, and adapts itself accordingly by taking or giving the lead. The effectiveness of this algorithm has been validated through a simulation study with different combinations of human accuracy levels and preferences for leading.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"182 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116706677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing Online Multiplayer Games with Haptically and Virtually Linked Tangible Robots to Enhance Social Interaction in Therapy
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900684
A. Ozgur, Hala Khodr, Mehdi Akeddar, Michael Roust, P. Dillenbourg
The social aspects of therapy and training are important for helping patients avoid social isolation and must be considered when designing a platform, especially for home-based rehabilitation. We propose an online version of our previously proposed tangible Pacman game for upper-limb training with haptic-enabled tangible Cellulo robots. Our main objective is to enhance motivation and engagement through social integration and to form a gamified multiplayer rehabilitation experience at a distance. This allows relatives, children, and friends to connect and play with their loved ones while also helping them with their training from anywhere in the world, and it connects therapists to their patients through haptic linking capabilities. This is especially relevant when social distancing measures might isolate the elderly population, who make up a majority of all rehabilitation patients.
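A minimal sketch of a virtual haptic link between two remote tangible robots: each robot is pulled toward its partner's mirrored position by a spring-damper force. The gains and update scheme are illustrative assumptions, not the Cellulo API.

```python
# Sketch: spring-damper coupling between two remotely linked robots; gains assumed.
import numpy as np

def haptic_link_force(local_pos, remote_pos, local_vel, k=4.0, d=0.8):
    """Spring-damper force coupling the local robot to its remote partner."""
    return k * (np.asarray(remote_pos) - np.asarray(local_pos)) - d * np.asarray(local_vel)

pos_a, pos_b = np.array([0.0, 0.0]), np.array([0.1, 0.05])  # meters, game-board frame
vel_a = np.array([0.0, 0.0])
print(haptic_link_force(pos_a, pos_b, vel_a))  # force pushing A toward B's position
```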
{"title":"Designing Online Multiplayer Games with Haptically and Virtually Linked Tangible Robots to Enhance Social Interaction in Therapy","authors":"A. Ozgur, Hala Khodr, Mehdi Akeddar, Michael Roust, P. Dillenbourg","doi":"10.1109/RO-MAN53752.2022.9900684","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900684","url":null,"abstract":"The social aspects of therapy and training are important for patients to avoid social isolation and must be considered when designing a platform, especially for home-based rehabilitation. We proposed an online version of the previously proposed tangible Pacman game for upper limb training with haptic-enabled tangible Cellulo robots. Our main objective is to enhance motivation and engagement through social integration and also to form a gamified multiplayer rehabilitation at a distance. Thus, allowing relatives, children, and friends to connect and play with their loved ones while also helping them with their training from anywhere in the world. As well as connecting therapists to their patients through haptically linking capabilities. This is especially relevant when there are social distancing measures which might isolate the elderly population, a majority of all rehabilitation patients.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121378526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}