Continuous and Incremental Learning in physical Human-Robot Cooperation using Probabilistic Movement Primitives
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900547
Daniel Schäle, M. Stoelen, E. Kyrkjebø
For a successful deployment of physical Human-Robot Cooperation (pHRC), humans need to be able to teach robots new motor skills quickly. Probabilistic movement primitives (ProMPs) are a promising method to encode a robot’s motor skills learned from human demonstrations in pHRC settings. However, most algorithms to learn ProMPs from human demonstrations operate in batch mode, which is not ideal in pHRC when we want humans and robots to work together from even the first demonstration. In this paper, we propose a new learning algorithm to learn ProMPs incrementally and continuously in pHRC settings. Our algorithm incorporates new demonstrations sequentially as they arrive, allowing humans to observe the robot’s learning progress and incrementally shape the robot’s motor skill. A built-in forgetting factor allows for corrective demonstrations resulting from the human’s learning curve or changes in task constraints. We compare the performance of our algorithm to existing batch ProMP algorithms on reference data generated from a pick-and-place task at our lab. Furthermore, we demonstrate how the forgetting factor allows us to adapt to changes in the task. The incremental learning algorithm presented in this paper has the potential to lead to a more intuitive learning process and to establish a successful cooperation between human and robot faster than training in batch mode.
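The incremental update described above can be pictured as maintaining a running Gaussian over ProMP weight vectors, with a forgetting factor discounting older demonstrations. The sketch below is a simplified illustration under that reading, not the authors' published algorithm; the ridge-regression weight fitting, the forgetting factor `lam`, and the mean/covariance recursion are assumptions.

```python
import numpy as np

def fit_weights(y, Phi, reg=1e-6):
    """Ridge-regress a single demonstration y (T,) onto a basis matrix Phi (T, K)."""
    K = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(K), Phi.T @ y)

class IncrementalProMP:
    """Running Gaussian over ProMP weights with an exponential forgetting factor.

    Hypothetical sketch: `lam` and the update rule are illustrative, not the
    algorithm published in the paper.
    """
    def __init__(self, n_basis, lam=0.95):
        self.lam = lam                 # forgetting factor in (0, 1]
        self.n_eff = 0.0               # effective number of demonstrations
        self.mu = np.zeros(n_basis)    # mean weight vector
        self.cov = np.eye(n_basis)     # weight covariance

    def add_demo(self, w):
        # Discount the influence of older demonstrations, then fold in the new one.
        self.n_eff = self.lam * self.n_eff + 1.0
        eta = 1.0 / self.n_eff
        diff = w - self.mu
        self.mu = self.mu + eta * diff
        self.cov = (1.0 - eta) * (self.cov + eta * np.outer(diff, diff)) \
            + 1e-6 * np.eye(self.mu.size)

# Usage: w = fit_weights(y_demo, Phi); promp.add_demo(w) after every new demonstration.
```

Because the forgetting factor keeps the effective sample size bounded near 1/(1 - lam), a later corrective demonstration in this sketch can displace an earlier, now-invalid one within a few repetitions.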
{"title":"Continuous and Incremental Learning in physical Human-Robot Cooperation using Probabilistic Movement Primitives","authors":"Daniel Schäle, M. Stoelen, E. Kyrkjebø","doi":"10.1109/RO-MAN53752.2022.9900547","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900547","url":null,"abstract":"For a successful deployment of physical Human-Robot Cooperation (pHRC), humans need to be able to teach robots new motor skills quickly. Probabilistic movement primitives (ProMPs) are a promising method to encode a robot’s motor skills learned from human demonstrations in pHRC settings. However, most algorithms to learn ProMPs from human demonstrations operate in batch mode, which is not ideal in pHRC when we want humans and robots to work together from even the first demonstration. In this paper, we propose a new learning algorithm to learn ProMPs incre-mentally and continuously in pHRC settings. Our algorithm incorporates new demonstrations sequentially as they arrive, allowing humans to observe the robot’s learning progress and incrementally shape the robot’s motor skill. A built-in forgetting factor allows for corrective demonstrations resulting from the human’s learning curve or changes in task constraints. We compare the performance of our algorithm to existing batch ProMP algorithms on reference data generated from a pick-and-place task at our lab. Furthermore, we demonstrate how the forgetting factor allows us to adapt to changes in the task. The incremental learning algorithm presented in this paper has the potential to lead to a more intuitive learning progress and to establish a successful cooperation between human and robot faster than training in batch mode.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128850706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing Online Multiplayer Games with Haptically and Virtually Linked Tangible Robots to Enhance Social Interaction in Therapy
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900684
A. Ozgur, Hala Khodr, Mehdi Akeddar, Michael Roust, P. Dillenbourg
The social aspects of therapy and training are important for patients to avoid social isolation and must be considered when designing a platform, especially for home-based rehabilitation. We propose an online version of the previously introduced tangible Pacman game for upper limb training with haptic-enabled tangible Cellulo robots. Our main objective is to enhance motivation and engagement through social integration and to enable gamified multiplayer rehabilitation at a distance. This allows relatives, children, and friends to connect and play with their loved ones while also helping them with their training from anywhere in the world, and it connects therapists to their patients through haptic linking capabilities. This is especially relevant when social distancing measures might isolate the elderly population, who make up the majority of rehabilitation patients.
{"title":"Designing Online Multiplayer Games with Haptically and Virtually Linked Tangible Robots to Enhance Social Interaction in Therapy","authors":"A. Ozgur, Hala Khodr, Mehdi Akeddar, Michael Roust, P. Dillenbourg","doi":"10.1109/RO-MAN53752.2022.9900684","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900684","url":null,"abstract":"The social aspects of therapy and training are important for patients to avoid social isolation and must be considered when designing a platform, especially for home-based rehabilitation. We proposed an online version of the previously proposed tangible Pacman game for upper limb training with haptic-enabled tangible Cellulo robots. Our main objective is to enhance motivation and engagement through social integration and also to form a gamified multiplayer rehabilitation at a distance. Thus, allowing relatives, children, and friends to connect and play with their loved ones while also helping them with their training from anywhere in the world. As well as connecting therapists to their patients through haptically linking capabilities. This is especially relevant when there are social distancing measures which might isolate the elderly population, a majority of all rehabilitation patients.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121378526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Moving away from robotic interactions: Evaluation of empathy, emotion and sentiment expressed and detected by computer systems
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900559
N. Gasteiger, Jongyoon Lim, Mehdi Hellou, Bruce A. MacDonald, H. Ahn
Social robots are often critiqued as being too ‘robotic’ and unemotional. For affective human-robot interaction (HRI), robots must detect sentiment and express emotion and empathy in return. We explored the extent to which people can detect emotions, empathy and sentiment from speech expressed by a computer system, with a focus on changes in prosody (pitch, tone, volume), and how people identify sentiment from written text compared to a sentiment analyzer. 89 participants identified empathy, emotion and sentiment from audio and text embedded in a survey. Empathy and sentiment were best expressed in the audio, while emotions were the most difficult to detect (75%, 67% and 42%, respectively). We found moderate agreement (70%) between the sentiment identified by the participants and the analyzer. There is potential for computer systems to express affect by using changes in prosody, as well as analyzing text to identify sentiment. This may help to further develop affective capabilities and appropriate responses in social robots, in order to avoid ‘robotic’ interactions. Future research should explore how to better express negative sentiment and emotions, while leveraging multi-modal approaches to HRI.
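For readers curious how a human-versus-analyzer agreement figure like the 70% above can be computed, the snippet below sketches one way to do it. The abstract does not name the sentiment analyzer used in the study, so NLTK's VADER stands in here purely as an example, and the positive/negative thresholds are conventional defaults rather than values from the paper.

```python
# Hedged illustration only: VADER is an assumed stand-in for the unnamed analyzer.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

def label_sentiment(text, sia, pos_th=0.05, neg_th=-0.05):
    """Map VADER's compound score to the three classes used in such studies."""
    score = sia.polarity_scores(text)["compound"]
    if score >= pos_th:
        return "positive"
    if score <= neg_th:
        return "negative"
    return "neutral"

def percent_agreement(human_labels, texts):
    """Share of texts where the analyzer matches the human-assigned label."""
    sia = SentimentIntensityAnalyzer()
    hits = sum(label_sentiment(t, sia) == h for h, t in zip(human_labels, texts))
    return hits / len(texts)
```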
{"title":"Moving away from robotic interactions: Evaluation of empathy, emotion and sentiment expressed and detected by computer systems","authors":"N. Gasteiger, Jongyoon Lim, Mehdi Hellou, Bruce A. MacDonald, H. Ahn","doi":"10.1109/RO-MAN53752.2022.9900559","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900559","url":null,"abstract":"Social robots are often critiqued as being too ‘robotic’ and unemotional. For affective human-robot interaction (HRI), robots must detect sentiment and express emotion and empathy in return. We explored the extent to which people can detect emotions, empathy and sentiment from speech expressed by a computer system, with a focus on changes in prosody (pitch, tone, volume) and how people identify sentiment from written text, compared to a sentiment analyzer. 89 participants identified empathy, emotion and sentiment from audio and text embedded in a survey. Empathy and sentiment were best expressed in the audio, while emotions were the most difficult detect (75%, 67% and 42% respectively). We found moderate agreement (70%) between the sentiment identified by the participants and the analyzer. There is potential for computer systems to express affect by using changes in prosody, as well as analyzing text to identify sentiment. This may help to further develop affective capabilities and appropriate responses in social robots, in order to avoid ‘robotic’ interactions. Future research should explore how to better express negative sentiment and emotions, while leveraging multi-modal approaches to HRI.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116317217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
JAHRVIS, a Supervision System for Human-Robot Collaboration
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900665
Amandine Mayima, A. Clodic, R. Alami
The supervision component is the binder of a robotic architecture: without it, there is no task and no interaction. It conducts the other components of the architecture towards the achievement of a goal, which, in the context of collaboration with a human, means bringing about changes in the physical environment and updating the human partner’s mental state. However, little work focuses on this component in charge of the robot’s decision-making and control, even though it is the robot’s puppeteer. Most often, either tasks are simply scripted or the supervisor is built for a specific task. Thus, we propose JAHRVIS, a Joint Action-based Human-aware supeRVISor. It aims to be task-independent while implementing a set of key joint action and collaboration mechanisms. With this contribution, we intend to move the deployment of autonomous collaborative robots forward, and we accompany this paper with our open-source code.
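As a rough illustration of what a task-independent supervision loop can look like, the sketch below alternates between dispatching robot actions and waiting for (and verbally acknowledging) human actions so that the partner's mental state stays aligned with the task state. It is not JAHRVIS itself (see the authors' open-source release for the actual system); all class, method, and attribute names here are hypothetical.

```python
# Minimal, assumption-laden sketch of a joint-action supervision loop.
from dataclasses import dataclass, field

@dataclass
class JointGoal:
    name: str
    steps: list                      # ordered (actor, action) pairs
    done: set = field(default_factory=set)

class Supervisor:
    def __init__(self, robot, human_monitor):
        self.robot = robot                  # hypothetical robot action interface
        self.human_monitor = human_monitor  # hypothetical human observation interface

    def run(self, goal: JointGoal):
        for actor, action in goal.steps:
            if actor == "robot":
                self.robot.execute(action)
            else:
                # Signal expectations (speech, gaze, ...) and wait for the human.
                self.robot.communicate(f"Please {action}")
                self.human_monitor.wait_for(action)
            goal.done.add(action)
            # Keep the estimated human mental state aligned with what happened.
            self.robot.communicate(f"{action} is done")
```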
{"title":"JAHRVIS, a Supervision System for Human-Robot Collaboration","authors":"Amandine Mayima, A. Clodic, R. Alami","doi":"10.1109/RO-MAN53752.2022.9900665","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900665","url":null,"abstract":"The supervision component is the binder of a robotic architecture. Without it, there is no task, no interaction happening, it conducts the other components of the architecture towards the achievement of a goal, which means, in the context of a collaboration with a human, to bring changes in the physical environment and to update the human partner mental state. However, not so much work focus on this component in charge of the robot decision-making and control, whereas this is the robot puppeteer. Most often, either tasks are simply scripted, or the supervisor is built for a specific task. Thus, we propose JAHRVIS, a Joint Action-based Human-aware supeRVISor. It aims at being task-independent while implementing a set of key joint action and collaboration mechanisms. With this contribution, we intend to move the deployment of autonomous collaborative robots forward, accompanying this paper with our open-source code.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126708044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Sample Efficiency Improved Method via Hierarchical Reinforcement Learning Networks
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900738
Qinghua Chen, Evan Dallas, Pourya Shahverdi, Jessica Korneder, O. Rawashdeh, W. Louie
Learning from demonstration (LfD) approaches have garnered significant interest for teaching social robots a variety of tasks in healthcare, educational, and service domains after they have been deployed. These LfD approaches often require a significant number of demonstrations for a robot to learn a performant model. However, requiring non-experts to provide numerous demonstrations for a social robot to learn a task is impractical in real-world applications. In this paper, we propose a method to improve the sample efficiency of existing learning from demonstration approaches via data augmentation, dynamic experience replay sizes, and hierarchical Deep Q-Networks (DQN). After validating our methods on two different datasets, results suggest that our proposed hierarchical DQN is effective for improving sample efficiency when learning tasks from demonstration. In the future, such a sample-efficient approach has the potential to improve our ability to apply LfD approaches for social robots to learn tasks in domains where demonstration data is limited, sparse, and imbalanced.
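Two of the ingredients named above, a dynamically resized experience replay and a hierarchical Deep Q-Network, can be sketched compactly. The PyTorch snippet below is an illustrative reading of those ideas; the network sizes, the one-hot sub-task conditioning, and the resizing interface are assumptions rather than the paper's implementation.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicReplayBuffer:
    """Experience replay whose capacity can be resized as new data arrives."""
    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)

    def resize(self, capacity):
        # Keep only the most recent transitions when shrinking.
        self.buffer = deque(self.buffer, maxlen=capacity)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

def mlp(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_out))

class HierarchicalDQN(nn.Module):
    """High-level head scores sub-tasks; low-level head scores primitive actions
    conditioned on the selected sub-task (one-hot)."""
    def __init__(self, state_dim, n_subtasks, n_actions):
        super().__init__()
        self.high = mlp(state_dim, n_subtasks)
        self.low = mlp(state_dim + n_subtasks, n_actions)

    def forward(self, state):
        q_subtask = self.high(state)
        subtask = F.one_hot(q_subtask.argmax(dim=-1), q_subtask.shape[-1]).float()
        q_action = self.low(torch.cat([state, subtask], dim=-1))
        return q_subtask, q_action
```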
{"title":"A Sample Efficiency Improved Method via Hierarchical Reinforcement Learning Networks","authors":"Qinghua Chen, Evan Dallas, Pourya Shahverdi, Jessica Korneder, O. Rawashdeh, W. Louie","doi":"10.1109/RO-MAN53752.2022.9900738","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900738","url":null,"abstract":"Learning from demonstration (LfD) approaches have garnered significant interest for teaching social robots a variety of tasks in healthcare, educational, and service domains after they have been deployed. These LfD approaches often require a significant number of demonstrations for a robot to learn a performant model from task demonstrations. However, requiring non-experts to provide numerous demonstrations for a social robot to learn a task is impractical in real-world applications. In this paper, we propose a method to improve the sample efficiency of existing learning from demonstration approaches via data augmentation, dynamic experience replay sizes, and hierarchical Deep Q-Networks (DQN). After validating our methods on two different datasets, results suggest that our proposed hierarchical DQN is effective for improving sample efficiency when learning tasks from demonstration. In the future, such a sample-efficient approach has the potential to improve our ability to apply LfD approaches for social robots to learn tasks in domains where demonstration data is limited, sparse, and imbalanced.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127090460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Self Learning System for Emotion Awareness and Adaptation in Humanoid Robots
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900581
Sudhir Shenoy, Yusheng Jiang, Tyler Lynch, Lauren Isabelle Manuel, Afsaneh Doryab
Humanoid robots provide a unique opportunity for personalized interaction using emotion recognition. However, emotion recognition performed by humanoid robots in complex social interactions is limited in the flexibility of interaction as well as in the personalization and adaptation of responses. We designed an adaptive learning system for real-time emotion recognition that elicits its own ground-truth data and updates individualized models to improve performance over time. Convolutional Neural Networks based on off-the-shelf ResNet50 and Inception v3 are assembled into an ensemble model that is used for real-time emotion recognition from facial expressions. Two sets of robot behaviors, general and personalized, are developed to evoke different emotional responses. The personalized behaviors are adapted based on user preferences collected through a pre-test survey. The performance of the proposed system is verified through a two-stage user study and tested for the accuracy of the self-supervised retraining. We also evaluate the effectiveness of the personalized robot behaviors in evoking intended emotions between stages using trust, empathy, and engagement scales. The participants are divided into two groups based on their familiarity and previous interactions with the robot. The emotion recognition results indicate a 12% increase in the F1 score for 7 emotions in stage 2 compared to the pre-trained model. Higher mean scores for trust, engagement, and empathy are observed in both participant groups. The average similarity score for both stages was 82%, and the average success rate of eliciting the intended emotion increased by 8.28% between stages despite the groups’ differences in familiarity, thus offering a way to mitigate novelty-effect patterns in user interactions.
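The ensemble described above, ResNet50 and Inception v3 backbones whose class probabilities are averaged over seven emotions, can be assembled in a few lines of Keras. The snippet below is a hedged sketch: the input resolution, classification heads, and training configuration are assumptions, and per-backbone preprocessing is omitted for brevity.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50, InceptionV3

N_EMOTIONS = 7
inputs = layers.Input(shape=(224, 224, 3))  # assumed face-crop resolution

def head(backbone):
    # Small classification head on top of each backbone.
    x = layers.GlobalAveragePooling2D()(backbone.output)
    return layers.Dense(N_EMOTIONS, activation="softmax")(x)

# ImageNet weights are downloaded on first use.
resnet = ResNet50(weights="imagenet", include_top=False, input_tensor=inputs)
inception = InceptionV3(weights="imagenet", include_top=False, input_tensor=inputs)

# Average the two class-probability vectors to form the ensemble prediction.
outputs = layers.Average()([head(resnet), head(inception)])
ensemble = Model(inputs, outputs)
ensemble.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```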
{"title":"A Self Learning System for Emotion Awareness and Adaptation in Humanoid Robots","authors":"Sudhir Shenoy, Yusheng Jiang, Tyler Lynch, Lauren Isabelle Manuel, Afsaneh Doryab","doi":"10.1109/RO-MAN53752.2022.9900581","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900581","url":null,"abstract":"Humanoid robots provide a unique opportunity for personalized interaction using emotion recognition. However, emotion recognition performed by humanoid robots in complex social interactions is limited in the flexibility of interaction as well as personalization and adaptation in the responses. We designed an adaptive learning system for real-time emotion recognition that elicits its own ground-truth data and updates individualized models to improve performance over time. A Convolutional Neural Network based on off-the-shelf ResNet50 and Inception v3 are assembled to form an ensemble model which is used for real-time emotion recognition through facial expression. Two sets of robot behaviors, general and personalized, are developed to evoke different emotion responses. The personalized behaviors are adapted based on user preferences collected through a pre-test survey. The performance of the proposed system is verified through a 2-stage user study and tested for the accuracy of the self-supervised retraining. We also evaluate the effectiveness of the personalized behavior of the robot in evoking intended emotions between stages using trust, empathy and engagement scales. The participants are divided into two groups based on their familiarity and previous interactions with the robot. The results of emotion recognition indicate a 12% increase in the F1 score for 7 emotions in stage 2 compared to pre-trained model. Higher mean scores for trust, engagement, and empathy are observed in both participant groups. The average similarity score for both stages was 82% and the average success rate of eliciting the intended emotion increased by 8.28% between stages, despite their differences in familiarity thus offering a way to mitigate novelty effect patterns among user interactions.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126019013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human-robot co-manipulation of soft materials: enable a robot manual guidance using a depth map feedback
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900710
G. Nicola, E. Villagrossi, N. Pedrocchi
Human-robot co-manipulation of large but lightweight elements made of soft materials, such as fabrics, composites, and sheets of paper or cardboard, is a challenging operation with several relevant industrial applications. The primary constraint is that the force applied to the material must be unidirectional (i.e., the user can only pull the element), and its magnitude needs to be limited to avoid damaging the material itself. This paper proposes using a 3D camera to track the deformation of soft materials for human-robot co-manipulation. Using a Convolutional Neural Network (CNN), the acquired depth image is processed to estimate the element’s deformation. The output of the CNN is the feedback for the robot controller, which tracks a given deformation set-point. The set-point tracking avoids excessive material deformation, enabling vision-based robot manual guidance.
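The closed loop described above, a CNN that turns a depth image into a deformation estimate which a controller then drives toward a set-point, can be sketched as follows. The network architecture, the proportional gain, and the scalar deformation measure are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class DeformationNet(nn.Module):
    """Small CNN that maps a depth image to a scalar deformation estimate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, depth):          # depth: (B, 1, H, W)
        return self.head(self.features(depth))

def guidance_step(net, depth_image, setpoint, gain=0.05):
    """Proportional law: command a velocity along the pulling direction that
    drives the estimated deformation toward the set-point (sign is illustrative)."""
    with torch.no_grad():
        deformation = net(depth_image).item()
    error = setpoint - deformation
    return gain * error
```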
{"title":"Human-robot co-manipulation of soft materials: enable a robot manual guidance using a depth map feedback","authors":"G. Nicola, E. Villagrossi, N. Pedrocchi","doi":"10.1109/RO-MAN53752.2022.9900710","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900710","url":null,"abstract":"Human-robot co-manipulation of large but lightweight elements made by soft materials, such as fabrics, composites, sheets of paper/cardboard, is a challenging operation that presents several relevant industrial applications. As the primary limit, the force applied on the material must be unidirectional (i.e., the user can only pull the element). Its magnitude needs to be limited to avoid damages to the material itself. This paper proposes using a 3D camera to track the deformation of soft materials for human-robot co-manipulation. Thanks to a Convolutional Neural Network (CNN), the acquired depth image is processed to estimate the element deformation. The output of the CNN is the feedback for the robot controller to track a given set-point of deformation. The set-point tracking will avoid excessive material deformation, enabling a vision-based robot manual guidance.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114941462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task Selection and Planning in Human-Robot Collaborative Processes: To be a Leader or a Follower?
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900770
Ali Noormohammadi-Asl, Ali Ayub, Stephen L. Smith, K. Dautenhahn
Recent advances in collaborative robots have provided an opportunity for the close collaboration of humans and robots in a shared workspace. To exploit this collaboration, robots need to plan for optimal team performance while considering human presence and preference. This paper studies the problem of task selection and planning in a collaborative, simulated scenario. In contrast to existing approaches, which mainly involve assigning tasks to agents by a task allocation unit and informing them through a communication interface, we give the human and robot the agency to be the leader or follower. This allows them to select their own tasks or even assign tasks to each other. We propose a task selection and planning algorithm that enables the robot to consider the human’s preference to lead, as well as the team and the human’s performance, and adapts itself accordingly by taking or giving the lead. The effectiveness of this algorithm has been validated through a simulation study with different combinations of human accuracy levels and preferences for leading.
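A toy version of the take-or-give-the-lead adaptation might look like the sketch below: the robot keeps running estimates of the human's preference to lead and of team performance, and claims the lead only when the human prefers to follow or performance degrades. The thresholds and smoothing rule are assumptions, not the paper's algorithm.

```python
class LeadFollowPolicy:
    """Illustrative lead/follow adaptation; all thresholds are assumed values."""
    def __init__(self, alpha=0.2, perf_threshold=0.7, pref_threshold=0.5):
        self.alpha = alpha                    # smoothing for running estimates
        self.perf_threshold = perf_threshold
        self.pref_threshold = pref_threshold
        self.human_pref_lead = 0.5            # estimated preference to lead, in [0, 1]
        self.team_perf = 1.0                  # estimated team performance, in [0, 1]

    def update(self, human_took_initiative: bool, task_success_rate: float):
        # Exponentially smoothed estimates from the latest interaction round.
        self.human_pref_lead += self.alpha * (float(human_took_initiative) - self.human_pref_lead)
        self.team_perf += self.alpha * (task_success_rate - self.team_perf)

    def robot_should_lead(self) -> bool:
        # Take the lead if the human prefers to follow, or if performance drops
        # enough that stepping in is likely to help the team.
        return (self.human_pref_lead < self.pref_threshold
                or self.team_perf < self.perf_threshold)
```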
{"title":"Task Selection and Planning in Human-Robot Collaborative Processes: To be a Leader or a Follower?","authors":"Ali Noormohammadi-Asl, Ali Ayub, Stephen L. Smith, K. Dautenhahn","doi":"10.1109/RO-MAN53752.2022.9900770","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900770","url":null,"abstract":"Recent advances in collaborative robots have provided an opportunity for the close collaboration of humans and robots in a shared workspace. To exploit this collaboration, robots need to plan for optimal team performance while considering human presence and preference. This paper studies the problem of task selection and planning in a collaborative, simulated scenario. In contrast to existing approaches, which mainly involve assigning tasks to agents by a task allocation unit and informing them through a communication interface, we give the human and robot the agency to be the leader or follower. This allows them to select their own tasks or even assign tasks to each other. We propose a task selection and planning algorithm that enables the robot to consider the human’s preference to lead, as well as the team and the human’s performance, and adapts itself accordingly by taking or giving the lead. The effectiveness of this algorithm has been validated through a simulation study with different combinations of human accuracy levels and preferences for leading.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"182 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116706677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Modular Interface for Controlling Interactive Behaviors of a Humanoid Robot for Socio-Emotional Skills Training
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900704
J. Sessner, A. Porstmann, S. Kirst, N. Merz, I. Dziobek, J. Franke
The use of social robots in psychotherapy has gained interest in various applications. In the context of therapy for children with socio-emotional impairments, for example autism spectrum conditions, the first approaches have already been successfully evaluated in research. In this context, the robot can be seen as a tool for therapists to foster interaction with the children. To ensure a successful integration of social robots into therapy sessions, an intuitive and comprehensive interface for the therapist is needed to guarantee safe and appropriate human-robot interaction. This publication addresses the development of a graphical user interface for robot-assisted therapy to train socio-emotional skills in children on the autism spectrum. The software follows a generic and modular approach. Furthermore, a robotic middleware is used to control the robot, and the user interface is based on a local web application. During therapy sessions, the therapist interface is used to control the robot’s reactions and provides additional information from emotion and arousal recognition software. The approach is implemented with the humanoid robot Pepper (SoftBank Robotics). A pilot study is carried out with four experts from a child and youth psychiatry department to evaluate the feasibility and user experience of the therapist interface. In sum, the user experience and usefulness were rated positively.
{"title":"A Modular Interface for Controlling Interactive Behaviors of a Humanoid Robot for Socio-Emotional Skills Training","authors":"J. Sessner, A. Porstmann, S. Kirst, N. Merz, I. Dziobek, J. Franke","doi":"10.1109/RO-MAN53752.2022.9900704","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900704","url":null,"abstract":"The usage of social robots in psychotherapy has gained interest in various applications. In the context of therapy for children with socio-emotional impairments, for example autism spectrum conditions, the first approaches have already been successfully evaluated in research. In this context, the robot can be seen as a tool for therapists to foster interaction with the children. To ensure a successful integration of social robots into therapy sessions, an intuitive and comprehensive interface for the therapist is needed to guarantee save and appropriate human-robot interaction. This publication addresses the development of a graphical user interface for robot-assisted therapy to train socio-emotional skills in children on the autism spectrum. The software follows a generic and modular approach. Furthermore, a robotic middleware is used to control the robot and the user interface is based on a local web application. During therapy sessions, the therapist interface is used to control the robot’s reactions and provides additional information from emotion and arousal recognition software. The approach is implemented with the humanoid robot Pepper (Softbank Robotics). A pilot study is carried out with four experts from a child and youth psychiatry to evaluate the feasibility and user experience of the therapist interface. In sum, the user experience and usefulness can be rated positively.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131687022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hot or not? Exploring User Perceptions of thermal Human-Robot Interaction*
Pub Date: 2022-08-29 | DOI: 10.1109/RO-MAN53752.2022.9900785
Jacqueline Borgstedt, F. Pollick, S. Brewster
Haptics is an essential element of interaction between humans and socially assistive robots. However, it is often limited to movements or vibrations and misses key aspects such as temperature. This mixed-methods study explores the potential of enhancing human-robot interaction (HRI) through thermal stimulation to regulate affect during a stress-inducing task. Participants were exposed to thermal stimulation while completing the Mannheim multicomponent stress task (MMST). Findings indicated that human-robot emotional touch may induce comfort and relaxation during exposure to acute stressors. User affect may be further enhanced through thermal stimulation, which participants experienced as comforting and de-stressing, and which altered their perception of the robot to be more life-like. Allowing participants to calibrate a temperature they perceived as calming provided novel insights into the temperature ranges suitable for interaction. While neutral temperatures were the most popular amongst participants, findings suggest that cool (4 – 29 ºC), neutral (30 – 32 ºC), and warm (33 – 36 ºC) temperatures can all induce comforting effects during exposure to stress. The results highlight the potential of thermal HRI in general and, more specifically, the advantages of personalized temperature calibration.
{"title":"Hot or not? Exploring User Perceptions of thermal Human-Robot Interaction*","authors":"Jacqueline Borgstedt, F. Pollick, S. Brewster","doi":"10.1109/RO-MAN53752.2022.9900785","DOIUrl":"https://doi.org/10.1109/RO-MAN53752.2022.9900785","url":null,"abstract":"Haptics is an essential element of interaction between humans and socially assistive robots. However, it is often limited to movements or vibrations and misses key aspects such as temperature. This mixed-methods study explores the potential of enhancing human-robot interaction [HRI] through thermal stimulation to regulate affect during a stress-inducing task. Participants were exposed to thermal stimulation while completing the Mannheim-multicomponent-stress-task (MMST). Findings yielded that human-robot emotional touch may induce comfort and relaxation during the exposure to acute stressors. User affect may be further enhanced through thermal stimulation, which was experienced as comforting, de-stressing, and altered participants’ perception of the robot to be more life-like. Allowing participants to calibrate a temperature they perceived as calming provided novel insights into the temperature ranges suitable for interaction. While neutral temperatures were the most popular amongst participants, findings suggest that cool (4 – 29 ºC), neutral (30 – 32 ºC), and warm (33ºC -36 ºC) temperatures can all induce comforting effects during exposure to stress. The results highlight the potential of thermal HRI in general and, more specifically, the advantages of personalized temperature calibration.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133868828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}