
Latest publications from the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)

Continuous and Incremental Learning in physical Human-Robot Cooperation using Probabilistic Movement Primitives
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900547
Daniel Schäle, M. Stoelen, E. Kyrkjebø
For a successful deployment of physical Human-Robot Cooperation (pHRC), humans need to be able to teach robots new motor skills quickly. Probabilistic movement primitives (ProMPs) are a promising method to encode a robot’s motor skills learned from human demonstrations in pHRC settings. However, most algorithms to learn ProMPs from human demonstrations operate in batch mode, which is not ideal in pHRC when we want humans and robots to work together from even the first demonstration. In this paper, we propose a new learning algorithm to learn ProMPs incrementally and continuously in pHRC settings. Our algorithm incorporates new demonstrations sequentially as they arrive, allowing humans to observe the robot’s learning progress and incrementally shape the robot’s motor skill. A built-in forgetting factor allows for corrective demonstrations resulting from the human’s learning curve or changes in task constraints. We compare the performance of our algorithm to existing batch ProMP algorithms on reference data generated from a pick-and-place task at our lab. Furthermore, we demonstrate how the forgetting factor allows us to adapt to changes in the task. The incremental learning algorithm presented in this paper has the potential to lead to a more intuitive learning process and to establish a successful cooperation between human and robot faster than training in batch mode.
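A minimal sketch of what an incremental update with a forgetting factor could look like, assuming each demonstration has already been projected onto a basis-function weight vector `w` (e.g., via ridge regression on radial-basis features). This is an illustrative recursive mean/covariance estimator, not the authors' actual algorithm; all names are hypothetical:

```python
import numpy as np

class IncrementalProMP:
    """Toy incremental estimate of a ProMP weight distribution.

    forgetting=1.0 means no forgetting; values < 1 discount old
    demonstrations, which is what allows corrective demonstrations
    to override earlier, outdated ones.
    """

    def __init__(self, n_weights, forgetting=1.0):
        self.lam = forgetting
        self.n = 0.0                                # effective sample count
        self.mu = np.zeros(n_weights)               # running mean of weights
        self.S = np.zeros((n_weights, n_weights))   # running scatter matrix

    def add_demonstration(self, w):
        # discount old statistics, then fold in the new weight vector
        self.n = self.lam * self.n + 1.0
        delta = w - self.mu
        self.mu = self.mu + delta / self.n
        self.S = self.lam * self.S + np.outer(delta, w - self.mu)

    def distribution(self):
        # unbiased-style covariance; guard against division by zero
        cov = self.S / max(self.n - 1.0, 1.0)
        return self.mu, cov
```

With `forgetting=1.0` this reduces to an ordinary online mean/covariance update, so the estimate after N demonstrations matches what a batch fit over the same weight vectors would give.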
Citations: 1
Designing Online Multiplayer Games with Haptically and Virtually Linked Tangible Robots to Enhance Social Interaction in Therapy
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900684
A. Ozgur, Hala Khodr, Mehdi Akeddar, Michael Roust, P. Dillenbourg
The social aspects of therapy and training are important for patients to avoid social isolation and must be considered when designing a platform, especially for home-based rehabilitation. We propose an online version of our previously proposed tangible Pacman game for upper-limb training with haptic-enabled tangible Cellulo robots. Our main objective is to enhance motivation and engagement through social integration and to form a gamified multiplayer rehabilitation experience at a distance, allowing relatives, children, and friends to connect and play with their loved ones while also helping them with their training from anywhere in the world, and connecting therapists to their patients through haptic linking capabilities. This is especially relevant when there are social distancing measures which might isolate the elderly population, who make up a majority of all rehabilitation patients.
Citations: 2
Moving away from robotic interactions: Evaluation of empathy, emotion and sentiment expressed and detected by computer systems
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900559
N. Gasteiger, Jongyoon Lim, Mehdi Hellou, Bruce A. MacDonald, H. Ahn
Social robots are often critiqued as being too ‘robotic’ and unemotional. For affective human-robot interaction (HRI), robots must detect sentiment and express emotion and empathy in return. We explored the extent to which people can detect emotion, empathy and sentiment from speech expressed by a computer system, with a focus on changes in prosody (pitch, tone, volume), and how people identify sentiment from written text, compared to a sentiment analyzer. 89 participants identified empathy, emotion and sentiment from audio and text embedded in a survey. Empathy and sentiment were best expressed in the audio, while emotions were the most difficult to detect (75%, 67% and 42%, respectively). We found moderate agreement (70%) between the sentiment identified by the participants and that identified by the analyzer. There is potential for computer systems to express affect by using changes in prosody, as well as to analyze text to identify sentiment. This may help to further develop affective capabilities and appropriate responses in social robots, in order to avoid ‘robotic’ interactions. Future research should explore how to better express negative sentiment and emotions, while leveraging multi-modal approaches to HRI.
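The reported 70% human-analyzer agreement is a plain percent-agreement statistic; a minimal sketch of how it is computed (function name and label encoding are illustrative, not from the paper):

```python
def percent_agreement(human_labels, analyzer_labels):
    """Fraction of items where the human label matches the analyzer label."""
    if len(human_labels) != len(analyzer_labels):
        raise ValueError("label lists must be the same length")
    matches = sum(h == a for h, a in zip(human_labels, analyzer_labels))
    return matches / len(human_labels)
```

For chance-corrected agreement, Cohen's kappa would be the usual refinement over raw percent agreement.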
Citations: 2
JAHRVIS, a Supervision System for Human-Robot Collaboration
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900665
Amandine Mayima, A. Clodic, R. Alami
The supervision component is the binder of a robotic architecture. Without it, there is no task and no interaction: it conducts the other components of the architecture toward the achievement of a goal, which means, in the context of a collaboration with a human, bringing changes to the physical environment and updating the human partner’s mental state. However, not much work focuses on this component in charge of the robot’s decision-making and control, even though it is the robot’s puppeteer. Most often, either tasks are simply scripted, or the supervisor is built for a specific task. Thus, we propose JAHRVIS, a Joint Action-based Human-aware supeRVISor. It aims at being task-independent while implementing a set of key joint-action and collaboration mechanisms. With this contribution, we intend to move the deployment of autonomous collaborative robots forward, accompanying this paper with our open-source code.
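A task-independent supervisor in this spirit is, at its core, a loop that selects the next action toward a goal while keeping the human partner's state in view. A toy sketch under that reading; all names are illustrative and not the actual JAHRVIS API:

```python
class Goal:
    """Toy goal that is achieved after a fixed number of actions."""

    def __init__(self, steps):
        self.steps = steps

    def achieved(self):
        return self.steps <= 0


def supervise(goal, plan_next_action, execute, human_state, update_mental_state):
    """Run the supervision loop until the goal is achieved; return the action log."""
    log = []
    while not goal.achieved():
        action = plan_next_action(goal, human_state())  # human-aware decision
        execute(action)                                 # act on the environment
        update_mental_state(action)                     # keep the partner model current
        log.append(action)
    return log
```

The point of the sketch is the separation of concerns: planning, execution, and mental-state bookkeeping are injected, so the loop itself stays task-independent.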
Citations: 0
A Sample Efficiency Improved Method via Hierarchical Reinforcement Learning Networks
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900738
Qinghua Chen, Evan Dallas, Pourya Shahverdi, Jessica Korneder, O. Rawashdeh, W. Louie
Learning from demonstration (LfD) approaches have garnered significant interest for teaching social robots a variety of tasks in healthcare, educational, and service domains after they have been deployed. These LfD approaches often require a significant number of demonstrations for a robot to learn a performant model from task demonstrations. However, requiring non-experts to provide numerous demonstrations for a social robot to learn a task is impractical in real-world applications. In this paper, we propose a method to improve the sample efficiency of existing learning-from-demonstration approaches via data augmentation, dynamic experience replay sizes, and hierarchical Deep Q-Networks (DQN). After validating our methods on two different datasets, results suggest that our proposed hierarchical DQN is effective for improving sample efficiency when learning tasks from demonstration. In the future, such a sample-efficient approach has the potential to improve our ability to apply LfD approaches for social robots to learn tasks in domains where demonstration data is limited, sparse, and imbalanced.
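One of the three ingredients named above, a dynamically sized experience replay, can be sketched as a buffer whose capacity is adjusted during training. This is an assumed, generic implementation (the paper's exact resizing policy is not given here); names are illustrative:

```python
import random
from collections import deque

class DynamicReplayBuffer:
    """Experience replay whose capacity can be changed mid-training."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        # oldest transitions are evicted automatically once full
        self.buffer.append(transition)

    def resize(self, new_capacity):
        # rebuild the deque; deque(iterable, maxlen) keeps the most
        # recent new_capacity items of the iterable
        self.buffer = deque(self.buffer, maxlen=new_capacity)

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```

Shrinking the buffer biases sampling toward recent experience, which is one plausible way a dynamic replay size interacts with sample efficiency.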
Citations: 1
A Self Learning System for Emotion Awareness and Adaptation in Humanoid Robots
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900581
Sudhir Shenoy, Yusheng Jiang, Tyler Lynch, Lauren Isabelle Manuel, Afsaneh Doryab
Humanoid robots provide a unique opportunity for personalized interaction using emotion recognition. However, emotion recognition performed by humanoid robots in complex social interactions is limited in the flexibility of interaction as well as in the personalization and adaptation of responses. We designed an adaptive learning system for real-time emotion recognition that elicits its own ground-truth data and updates individualized models to improve performance over time. Two convolutional neural networks, based on off-the-shelf ResNet50 and Inception v3, are assembled into an ensemble model used for real-time emotion recognition from facial expressions. Two sets of robot behaviors, general and personalized, are developed to evoke different emotional responses. The personalized behaviors are adapted based on user preferences collected through a pre-test survey. The performance of the proposed system is verified through a 2-stage user study and tested for the accuracy of the self-supervised retraining. We also evaluate the effectiveness of the robot’s personalized behavior in evoking intended emotions between stages using trust, empathy and engagement scales. The participants are divided into two groups based on their familiarity and previous interactions with the robot. The results of emotion recognition indicate a 12% increase in the F1 score for 7 emotions in stage 2 compared to the pre-trained model. Higher mean scores for trust, engagement, and empathy are observed in both participant groups. The average similarity score for both stages was 82%, and the average success rate of eliciting the intended emotion increased by 8.28% between stages despite the groups’ differences in familiarity, thus offering a way to mitigate novelty-effect patterns in user interactions.
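A common way to ensemble two classifiers such as the ResNet50/Inception v3 pair is to average their softmax outputs and take the arg-max; the abstract does not state the fusion rule, so this averaging scheme is an assumption, with illustrative names:

```python
import numpy as np

def ensemble_predict(probs_resnet, probs_inception, labels):
    """Average two per-class probability vectors and return the winning label."""
    avg = (np.asarray(probs_resnet, dtype=float)
           + np.asarray(probs_inception, dtype=float)) / 2.0
    return labels[int(np.argmax(avg))]
```

Averaging probabilities (rather than hard votes) lets a confident model outweigh an uncertain one, which tends to help when the two backbones make different kinds of errors.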
Citations: 0
Human-robot co-manipulation of soft materials: enable a robot manual guidance using a depth map feedback
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900710
G. Nicola, E. Villagrossi, N. Pedrocchi
Human-robot co-manipulation of large but lightweight elements made of soft materials, such as fabrics, composites, and sheets of paper/cardboard, is a challenging operation with several relevant industrial applications. As the primary constraint, the force applied on the material must be unidirectional (i.e., the user can only pull the element), and its magnitude needs to be limited to avoid damage to the material itself. This paper proposes using a 3D camera to track the deformation of soft materials for human-robot co-manipulation. The acquired depth image is processed by a Convolutional Neural Network (CNN) to estimate the element’s deformation. The output of the CNN is the feedback for the robot controller to track a given deformation set-point. The set-point tracking avoids excessive material deformation, enabling vision-based robot manual guidance.
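The feedback loop described above can be sketched as a simple proportional controller on the deformation error, with the CNN inference step abstracted behind a callable. The control law and all names are assumptions for illustration, not the paper's actual controller:

```python
def deformation_controller(depth_image, setpoint, gain, estimate_deformation):
    """One control step: command velocity proportional to deformation error.

    estimate_deformation stands in for the CNN that maps a depth image
    to a scalar deformation estimate; a negative return value means the
    robot should move toward the human to slacken the material.
    """
    error = setpoint - estimate_deformation(depth_image)
    return gain * error
```

In practice the command would be saturated and filtered before being sent to the robot, so that noisy single-frame CNN estimates do not produce jerky motion.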
Citations: 4
Task Selection and Planning in Human-Robot Collaborative Processes: To be a Leader or a Follower?
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900770
Ali Noormohammadi-Asl, Ali Ayub, Stephen L. Smith, K. Dautenhahn
Recent advances in collaborative robots have provided an opportunity for the close collaboration of humans and robots in a shared workspace. To exploit this collaboration, robots need to plan for optimal team performance while considering human presence and preference. This paper studies the problem of task selection and planning in a collaborative, simulated scenario. In contrast to existing approaches, which mainly involve assigning tasks to agents by a task allocation unit and informing them through a communication interface, we give the human and robot the agency to be the leader or follower. This allows them to select their own tasks or even assign tasks to each other. We propose a task selection and planning algorithm that enables the robot to consider the human’s preference to lead, as well as the team and the human’s performance, and adapts itself accordingly by taking or giving the lead. The effectiveness of this algorithm has been validated through a simulation study with different combinations of human accuracy levels and preferences for leading.
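A toy sketch of the kind of lead-taking decision described above, combining the human's preference to lead with recent performance. The weighting scheme and every name here are illustrative assumptions, not the authors' algorithm:

```python
def choose_role(human_prefers_lead, human_success_rate, robot_success_rate,
                preference_weight=0.6):
    """Return the role the robot should take: 'leader' or 'follower'."""
    # vote for yielding the lead when the human is performing at least as well
    performance_vote = 1.0 if human_success_rate >= robot_success_rate else 0.0
    score = (preference_weight * (1.0 if human_prefers_lead else 0.0)
             + (1.0 - preference_weight) * performance_vote)
    # high score: defer to the human; low score: the robot takes the lead
    return "follower" if score >= 0.5 else "leader"
```

Weighting preference above performance (0.6 vs. 0.4 here) encodes the idea that the robot should respect the human's wish to lead unless performance strongly argues otherwise.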
Citations: 2
A Modular Interface for Controlling Interactive Behaviors of a Humanoid Robot for Socio-Emotional Skills Training
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900704
J. Sessner, A. Porstmann, S. Kirst, N. Merz, I. Dziobek, J. Franke
The usage of social robots in psychotherapy has gained interest in various applications. In the context of therapy for children with socio-emotional impairments, for example autism spectrum conditions, the first approaches have already been successfully evaluated in research. In this context, the robot can be seen as a tool for therapists to foster interaction with the children. To ensure a successful integration of social robots into therapy sessions, an intuitive and comprehensive interface for the therapist is needed to guarantee safe and appropriate human-robot interaction. This publication addresses the development of a graphical user interface for robot-assisted therapy to train socio-emotional skills in children on the autism spectrum. The software follows a generic and modular approach. Furthermore, a robotic middleware is used to control the robot, and the user interface is based on a local web application. During therapy sessions, the therapist interface is used to control the robot’s reactions and provides additional information from emotion and arousal recognition software. The approach is implemented with the humanoid robot Pepper (Softbank Robotics). A pilot study is carried out with four experts from a child and youth psychiatry department to evaluate the feasibility and user experience of the therapist interface. In sum, the user experience and usefulness can be rated positively.
Citations: 0
Hot or not? Exploring User Perceptions of thermal Human-Robot Interaction*
Pub Date : 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900785
Jacqueline Borgstedt, F. Pollick, S. Brewster
Haptics is an essential element of interaction between humans and socially assistive robots. However, it is often limited to movements or vibrations and misses key aspects such as temperature. This mixed-methods study explores the potential of enhancing human-robot interaction (HRI) through thermal stimulation to regulate affect during a stress-inducing task. Participants were exposed to thermal stimulation while completing the Mannheim Multicomponent Stress Test (MMST). The findings indicate that human-robot emotional touch may induce comfort and relaxation during exposure to acute stressors. User affect may be further enhanced through thermal stimulation, which was experienced as comforting and de-stressing, and which altered participants’ perception of the robot to be more life-like. Allowing participants to calibrate a temperature they perceived as calming provided novel insights into the temperature ranges suitable for interaction. While neutral temperatures were the most popular amongst participants, findings suggest that cool (4 – 29 ºC), neutral (30 – 32 ºC), and warm (33 – 36 ºC) temperatures can all induce comforting effects during exposure to stress. The results highlight the potential of thermal HRI in general and, more specifically, the advantages of personalized temperature calibration.
Citations: 2
Journal
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)