
Latest publications from the 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)

Domestic Social Robots as Companions or Assistants? The Effects of the Robot Positioning on the Consumer Purchase Intentions*
Pub Date: 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900844
Jun San Kim, Dahyun Kang, Jongsuk Choi, Sonya S. Kwak
This study explores the effects of the positioning strategy of domestic social robots on the purchase intention of consumers. Specifically, the authors investigate the effects of positioning the robot as a companion, as an assistant, and as an appliance. The results showed that participants preferred domestic social robots positioned as assistants rather than as companions. Moreover, male participants also preferred domestic social robots positioned as appliances over those positioned as companions. The results further showed that the effect of positioning on purchase intention was mediated by the participants' perception of the robot's usefulness.
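The mediated effect reported in the last sentence is the kind of relationship a standard product-of-coefficients analysis can estimate. The sketch below runs that analysis on synthetic stand-in data; the variable names, coding, and effect sizes are assumptions for illustration, not the authors' data or method.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data (all names and effects assumed):
# X = positioning (0 = companion, 1 = assistant),
# M = perceived usefulness, Y = purchase intention.
rng = np.random.default_rng(1)
n = 200
X = rng.integers(0, 2, n)
M = 0.8 * X + rng.normal(0, 1, n)
Y = 0.6 * M + 0.1 * X + rng.normal(0, 1, n)
df = pd.DataFrame({"X": X, "M": M, "Y": Y})

a = smf.ols("M ~ X", df).fit().params["X"]      # path a: positioning -> usefulness
b = smf.ols("Y ~ M + X", df).fit().params["M"]  # path b: usefulness -> intention
print("indirect (mediated) effect a*b =", a * b)
```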
Citations: 0
Listen and tell me who the user is talking to: Automatic detection of the interlocutor’s type during a conversation
Pub Date: 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900632
Youssef Hmamouche, M. Ochs, T. Chaminade, Laurent Prévot
In the well-known Turing test, humans must judge whether they are writing to another human or to a chatbot. In this article, we propose a reversed Turing test adapted to live conversations: based on the speech of the human, we have developed a model that automatically detects whether she/he is speaking to an artificial agent or to a human. We propose a prediction methodology that combines a behaviour-specific feature-extraction step with a deep learning model based on recurrent neural networks. The prediction results show that our approach, and in particular the chosen features, significantly improves predictions compared to the traditional approach in automatic speech recognition, which relies on spectral features such as Mel-frequency cepstral coefficients (MFCCs). Our approach makes it possible to automatically determine the type of conversational agent, human or artificial, solely from the speech of the human interlocutor. Most importantly, this model provides a novel and very promising way to weigh the importance of the behavioural cues used to correctly recognize the nature of the interlocutor; in other words, which aspects of human behaviour adapt to the nature of the interlocutor.
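As a point of reference, the MFCC baseline the abstract compares against can be sketched in a few lines: per-frame spectral features fed to a recurrent classifier. This is a minimal sketch on synthetic audio, assuming a 16 kHz signal and a generic GRU; it is not the authors' model, whose behavioural features and architecture are not specified here.

```python
import numpy as np
import librosa
import torch
import torch.nn as nn

# One second of synthetic audio stands in for a real utterance.
sr = 16000
y = np.random.default_rng(0).standard_normal(sr).astype(np.float32)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, n_frames)
features = torch.tensor(mfcc.T).unsqueeze(0)         # (1, n_frames, 13)

class InterlocutorClassifier(nn.Module):
    """GRU over per-frame spectral features -> human vs. artificial agent."""
    def __init__(self, n_features=13, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)             # two classes

    def forward(self, x):                            # x: (batch, frames, feats)
        _, h = self.rnn(x)                           # h: (1, batch, hidden)
        return self.head(h[-1])                      # logits: (batch, 2)

model = InterlocutorClassifier()
print(model(features).argmax(dim=-1))                # 0 = human, 1 = agent
```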
Citations: 0
Continuous and Incremental Learning in physical Human-Robot Cooperation using Probabilistic Movement Primitives
Pub Date: 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900547
Daniel Schäle, M. Stoelen, E. Kyrkjebø
For a successful deployment of physical Human-Robot Cooperation (pHRC), humans need to be able to teach robots new motor skills quickly. Probabilistic movement primitives (ProMPs) are a promising method to encode a robot’s motor skills learned from human demonstrations in pHRC settings. However, most algorithms that learn ProMPs from human demonstrations operate in batch mode, which is not ideal in pHRC when we want humans and robots to work together from the very first demonstration. In this paper, we propose a new learning algorithm that learns ProMPs incrementally and continuously in pHRC settings. Our algorithm incorporates new demonstrations sequentially as they arrive, allowing humans to observe the robot’s learning progress and incrementally shape the robot’s motor skill. A built-in forgetting factor allows for corrective demonstrations resulting from the human’s learning curve or changes in task constraints. We compare the performance of our algorithm to existing batch ProMP algorithms on reference data generated from a pick-and-place task at our lab. Furthermore, we demonstrate how the forgetting factor allows us to adapt to changes in the task. The incremental learning algorithm presented in this paper has the potential to lead to a more intuitive learning process and to establish successful cooperation between human and robot faster than training in batch mode.
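The forgetting-factor idea can be illustrated with exponentially weighted running statistics over ProMP weight vectors: each demonstration is reduced to basis-function weights by ridge regression, and the weight distribution is updated online so newer demonstrations gradually outweigh older ones. The following is a minimal one-dimensional sketch under assumed basis and hyperparameter choices, not the authors' algorithm.

```python
import numpy as np

class IncrementalProMP:
    """Running Gaussian over ProMP weight vectors with a forgetting factor."""

    def __init__(self, n_basis=15, n_t=100, lam=0.95, ridge=1e-6):
        self.lam = lam                                   # forgetting factor
        z = np.linspace(0, 1, n_t)                       # phase variable
        c = np.linspace(0, 1, n_basis)                   # basis centres
        self.Phi = np.exp(-0.5 * ((z[:, None] - c[None, :]) / 0.05) ** 2)
        self.Phi /= self.Phi.sum(axis=1, keepdims=True)  # (n_t, n_basis)
        self.ridge = ridge
        self.n_eff = 0.0                                 # effective demo count
        self.mu = np.zeros(n_basis)                      # weight mean
        self.Sigma = np.eye(n_basis)                     # prior, replaced by data

    def add_demo(self, traj):
        """Fold one demonstration (array of n_t positions) into the model."""
        A = self.Phi.T @ self.Phi + self.ridge * np.eye(self.Phi.shape[1])
        w = np.linalg.solve(A, self.Phi.T @ traj)        # ridge-regressed weights
        self.n_eff = self.lam * self.n_eff + 1.0         # decay old evidence
        alpha = 1.0 / self.n_eff
        diff = w - self.mu
        self.mu = self.mu + alpha * diff                 # exponentially weighted mean
        self.Sigma = (1 - alpha) * (self.Sigma + alpha * np.outer(diff, diff))

    def mean_trajectory(self):
        return self.Phi @ self.mu

promp = IncrementalProMP()
t = np.linspace(0, 1, 100)
for _ in range(5):                                       # five noisy demonstrations
    promp.add_demo(np.sin(np.pi * t) + 0.01 * np.random.randn(100))
print(promp.mean_trajectory()[:3])
```

With lam = 1 this reduces to ordinary running statistics over all demonstrations; lam < 1 makes corrective demonstrations progressively override earlier ones.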
Citations: 1
The LMA12-O Framework for Emotional Robot Eye Gestures
Pub Date: 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900752
Kerl Galindo, Deborah Szapiro, R. Gomez
The eyes play a significant role in how robots are perceived socially by humans, owing to the eyes’ centrality in human communication. To date, there has been no consistent or reliable system for designing and transferring affective, emotional eye gestures to anthropomorphized social robots. Combining research findings from Oculesics, Laban Movement Analysis, and the Twelve Principles of Animation, this paper discusses the design and evaluation of the prototype LMA12-O framework, whose purpose is to maximise the emotive communication potential of eye gestures in anthropomorphized social robots. Results of initial user testing showed LMA12-O to be effective for designing affective, emotional eye gestures on the test robot and yielded important considerations for future iterations of the framework.
Citations: 1
A Sample Efficiency Improved Method via Hierarchical Reinforcement Learning Networks
Pub Date: 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900738
Qinghua Chen, Evan Dallas, Pourya Shahverdi, Jessica Korneder, O. Rawashdeh, W. Louie
Learning from demonstration (LfD) approaches have garnered significant interest for teaching social robots a variety of tasks in healthcare, educational, and service domains after they have been deployed. These LfD approaches often require a significant number of task demonstrations for a robot to learn a performant model. However, requiring non-experts to provide numerous demonstrations for a social robot to learn a task is impractical in real-world applications. In this paper, we propose a method to improve the sample efficiency of existing learning-from-demonstration approaches via data augmentation, dynamic experience replay sizes, and hierarchical Deep Q-Networks (DQN). After validating our methods on two different datasets, the results suggest that our proposed hierarchical DQN is effective for improving sample efficiency when learning tasks from demonstration. In the future, such a sample-efficient approach has the potential to improve our ability to apply LfD approaches for social robots to learn tasks in domains where demonstration data is limited, sparse, and imbalanced.
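One way to realize a dynamic experience replay size is a buffer whose capacity can be resized on the fly, so scarce demonstration data is not evicted early in training. A minimal sketch follows; the resizing policy is an assumption for illustration, as the paper's schedule is not given here.

```python
import random
from collections import deque

class DynamicReplayBuffer:
    """Experience replay whose capacity can be resized during training.

    A larger capacity early on keeps scarce demonstration data from being
    evicted; shrinking it later biases sampling toward recent experience.
    """
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def resize(self, new_capacity):
        # Rebuilding the deque keeps the newest items if the buffer shrinks.
        self.buffer = deque(self.buffer, maxlen=new_capacity)

    def sample(self, batch_size):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

    def __len__(self):
        return len(self.buffer)

buf = DynamicReplayBuffer(capacity=1000)
buf.push(0, 1, 0.5, 1, False)
buf.resize(5000)   # e.g. grow when new demonstration data arrives
print(len(buf))
```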
Citations: 1
Nothing About Us Without Us: a participatory design for an Inclusive Signing Tiago Robot
Pub Date: 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900538
Emanuele Antonioni, Cristiana Sanalitro, O. Capirci, Alessio Di Renzo, Maria Beatrice D'Aversa, D. Bloisi, Lun Wang, Ermanno Bartoli, Lorenzo Diaco, V. Presutti, D. Nardi
The success of the interaction between the robotics community and the users of its services is of considerable importance when drafting the development plan of any technology. This becomes even more relevant when dealing with sensitive services and issues, such as those involving interaction with specific subgroups of a population. Over the years, there have been few successes in integrating and proposing technologies related to deafness and sign language. In this paper, by contrast, we report a successful interaction between a signing robot and the Italian deaf community, which occurred during the Smart City Robotics Challenge (SciRoc) 2021 competition. Thanks to the use of participatory design and the involvement of experts from the deaf community from the early stages of the project, it was possible to create a technology that achieved significant results in terms of acceptance by the community itself and that could lead to significant results in technology development as well.
Citations: 3
Motivational Gestures in Robot-Assisted Language Learning: A Study of Cognitive Engagement using EEG Brain Activity
Pub Date: 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900508
M. Alimardani, Jishnu Harinandansingh, Lindsey Ravin, M. Haas
Social robots have been shown to be effective in pedagogical settings due to their embodiment and social behavior, which can improve a learner’s motivation and engagement. In this study, the impact of a social robot’s motivational gestures in robot-assisted language learning (RALL) was investigated. Twenty-five university students participated in a language learning task tutored by a NAO robot under two conditions (within-subjects design): in one condition the robot provided positive and negative feedback on the participant’s performance using both verbal and non-verbal behavior (Gesture condition); in the other condition the robot employed only verbal feedback (No-Gesture condition). To assess cognitive engagement and learning in each condition, we collected EEG brain activity from the participants during the interaction and evaluated their word knowledge in an immediate and a delayed post-test. No significant difference was found in cognitive engagement as quantified by the EEG Engagement Index during the practice phase. Similarly, the word test results indicated overall high performance in both conditions, suggesting similar learning gains regardless of the robot’s gestures. These findings do not provide evidence in favor of a robot’s motivational gestures during language learning tasks, but they do indicate challenges in designing effective social behavior for pedagogical robots.
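The EEG Engagement Index referenced above is commonly computed, following Pope and colleagues, as beta band power divided by the sum of alpha and theta band power. Below is a minimal single-channel sketch assuming a 256 Hz sampling rate, Welch power spectra, and conventional band edges; the authors' exact preprocessing may differ.

```python
import numpy as np
from scipy.signal import welch

def bandpower(x, fs, lo, hi):
    """Power of signal x in the [lo, hi] Hz band, from a Welch PSD."""
    freqs, psd = welch(x, fs=fs, nperseg=2 * fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])   # rectangle-rule integral

def engagement_index(eeg, fs=256):
    """Engagement index beta / (alpha + theta) for a single channel."""
    theta = bandpower(eeg, fs, 4, 8)
    alpha = bandpower(eeg, fs, 8, 13)
    beta = bandpower(eeg, fs, 13, 30)
    return beta / (alpha + theta)

# Ten seconds of synthetic one-channel EEG stands in for real data.
rng = np.random.default_rng(0)
print(engagement_index(rng.standard_normal(10 * 256)))
```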
Citations: 3
A Self Learning System for Emotion Awareness and Adaptation in Humanoid Robots
Pub Date: 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900581
Sudhir Shenoy, Yusheng Jiang, Tyler Lynch, Lauren Isabelle Manuel, Afsaneh Doryab
Humanoid robots provide a unique opportunity for personalized interaction using emotion recognition. However, emotion recognition performed by humanoid robots in complex social interactions is limited in the flexibility of the interaction as well as in the personalization and adaptation of the responses. We designed an adaptive learning system for real-time emotion recognition that elicits its own ground-truth data and updates individualized models to improve performance over time. Convolutional neural networks based on off-the-shelf ResNet50 and Inception v3 are assembled into an ensemble model used for real-time emotion recognition from facial expressions. Two sets of robot behaviors, general and personalized, were developed to evoke different emotional responses. The personalized behaviors are adapted to user preferences collected through a pre-test survey. The performance of the proposed system is verified through a two-stage user study and tested for the accuracy of the self-supervised retraining. We also evaluate the effectiveness of the robot’s personalized behavior in evoking intended emotions between stages using trust, empathy, and engagement scales. Participants were divided into two groups based on their familiarity and previous interactions with the robot. The emotion recognition results indicate a 12% increase in the F1 score for 7 emotions in stage 2 compared to the pre-trained model. Higher mean scores for trust, engagement, and empathy were observed in both participant groups. The average similarity score for both stages was 82%, and the average success rate of eliciting the intended emotion increased by 8.28% between stages despite differences in familiarity, offering a way to mitigate novelty-effect patterns in user interactions.
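The ensemble described above could be realized by averaging the softmax outputs of the two backbones. A minimal Keras sketch follows; the classification heads, input sizes, and averaging rule are assumptions, not the authors' exact architecture, and real use would add each backbone's preprocess_input and fine-tuning on face data.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import ResNet50, InceptionV3

NUM_EMOTIONS = 7  # the paper reports F1 over seven emotions

def build_branch(backbone, input_shape):
    """ImageNet backbone with a small softmax head (assumed design)."""
    base = backbone(weights="imagenet", include_top=False,
                    pooling="avg", input_shape=input_shape)
    inp = tf.keras.Input(shape=input_shape)
    out = tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax")(base(inp))
    return tf.keras.Model(inp, out)

resnet_branch = build_branch(ResNet50, (224, 224, 3))
inception_branch = build_branch(InceptionV3, (299, 299, 3))

def ensemble_predict(face_224, face_299):
    """Average the two softmax distributions and take the arg-max emotion."""
    p1 = resnet_branch.predict(face_224[None], verbose=0)
    p2 = inception_branch.predict(face_299[None], verbose=0)
    return ((p1 + p2) / 2).argmax(axis=-1)

face = np.random.rand(224, 224, 3).astype("float32")      # stand-in face crops
face_large = np.random.rand(299, 299, 3).astype("float32")
print(ensemble_predict(face, face_large))
```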
Citations: 0
JAHRVIS, a Supervision System for Human-Robot Collaboration
Pub Date: 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900665
Amandine Mayima, A. Clodic, R. Alami
The supervision component is the binder of a robotic architecture. Without it there is no task and no interaction: it conducts the other components of the architecture toward the achievement of a goal, which, in the context of collaboration with a human, means bringing about changes in the physical environment and updating the human partner’s mental state. However, not much work focuses on this component in charge of the robot’s decision-making and control, even though it is the robot’s puppeteer. Most often, either tasks are simply scripted or the supervisor is built for a specific task. Thus, we propose JAHRVIS, a Joint Action-based Human-aware supeRVISor. It aims at being task-independent while implementing a set of key joint-action and collaboration mechanisms. With this contribution, we intend to move the deployment of autonomous collaborative robots forward, accompanying this paper with our open-source code.
Citations: 0
Spatio-Temporal Action Order Representation for Mobile Manipulation Planning*
Pub Date: 2022-08-29 DOI: 10.1109/RO-MAN53752.2022.9900643
Yosuke Kawasaki, Masaki Takahashi
Social robots are used to perform mobile manipulation tasks, such as tidying up and carrying, based on instructions provided by humans. A mobile manipulation planner that exploits the robot’s functions requires a good understanding of the actions that are feasible in real space, given the robot’s subsystem configuration and the placement of objects in the environment. This study aims to realize a mobile manipulation planner that takes into account the world state, which includes the robot state (the subsystem configuration and its status) required to exploit the robot’s functions. The paper proposes a novel environmental representation called a world state-dependent action graph (WDAG). The WDAG represents the spatial and temporal order of feasible actions based on the world state, adopting a knowledge representation built on scene graphs and a recursive multilayered graph structure. The study also proposes a mobile manipulation planning method using the WDAG. The planner can derive many effective action sequences for accomplishing a given task based on an exhaustive understanding of the spatial and temporal connections between actions. The effectiveness of the proposed method is evaluated through physical robot experiments, and the results demonstrate that it facilitates effective utilization of the robot’s functions.
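The flavour of a state-dependent action graph can be conveyed with a toy directed graph whose edges carry world-state conditions: a successor action is feasible only if its condition holds in the current state. This sketch uses networkx with invented actions and state keys; it does not reproduce the WDAG's scene-graph layers or recursive structure.

```python
import networkx as nx

# A toy state-dependent action graph (structure assumed, not the paper's):
# nodes are actions, edges mean "can follow", and each edge carries the
# world-state condition under which the successor action is feasible.
G = nx.DiGraph()
G.add_edge("navigate_to_shelf", "grasp_cup",
           condition=lambda s: s["gripper_free"])
G.add_edge("grasp_cup", "navigate_to_table",
           condition=lambda s: s["holding"] == "cup")
G.add_edge("navigate_to_table", "place_cup",
           condition=lambda s: s["at"] == "table")

def feasible_successors(G, action, state):
    """Actions that may follow `action` given the current world state."""
    return [v for _, v, d in G.out_edges(action, data=True)
            if d["condition"](state)]

state = {"gripper_free": True, "holding": None, "at": "shelf"}
print(feasible_successors(G, "navigate_to_shelf", state))
```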
Citations: 0