
2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Latest Publications

Toward a Wearable Affective Robot That Detects Human Emotions from Brain Signals by Using Deep Multi-Spectrogram Convolutional Neural Networks (Deep MS-CNN)
Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956382
Ker-Jiun Wang, C. Zheng
A wearable robot that constantly monitors, adapts, and reacts to human needs is a promising technology for alleviating stress and supporting mental health. Current means of supporting mental health include counseling, medication, and relaxation techniques such as meditation or breathing exercises. The finding that human touch causes the body to release the hormone oxytocin, effectively alleviating anxiety, suggests a potential complement to these existing methods. Wearable robots that generate affective touch have the potential to improve social bonds and regulate emotion and cognitive functions. In this study, we used a wearable robotic tactile stimulation device, AffectNodes2, to mimic human affective touch. The touch-stimulated brain waves were captured from 4 EEG electrodes placed over the parietal, prefrontal, and left and right temporal lobe regions of the brain. A novel Deep MS-CNN with an emotion polling structure was developed to classify Affective touch, Non-affective touch, and Relaxation stimuli with over 95% accuracy, allowing the robot to grasp the current human affective status. This sensing and decoding structure is our first step toward developing a self-adaptive robot that adjusts its touch stimulation patterns to help regulate affective status.
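As a rough illustration of the multi-spectrogram representation the abstract describes, the sketch below builds one log-spectrogram per EEG electrode and stacks them into the image-like input volume a CNN would consume. The sampling rate (250 Hz) and STFT parameters are assumptions for illustration; the abstract does not state them.

```python
import numpy as np

def log_spectrogram(sig, nperseg=128, hop=64):
    """Minimal STFT magnitude spectrogram with log compression."""
    window = np.hanning(nperseg)
    frames = [sig[i:i + nperseg] * window
              for i in range(0, len(sig) - nperseg + 1, hop)]
    mag = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log1p(mag).T  # (freq_bins, time_frames)

# 10 s of synthetic 4-channel EEG at an assumed 250 Hz sampling rate.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, 250 * 10))

# One spectrogram per electrode; stacking them yields the multi-spectrogram
# volume a CNN classifier would take as input (like a multi-channel image).
x = np.stack([log_spectrogram(ch) for ch in eeg])
print(x.shape)  # (channels, freq_bins, time_frames)
```

Each of the 4 electrodes contributes one channel of the stacked volume, so the CNN can learn joint spectro-temporal patterns across brain regions.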
Citations: 8
A Brief Review of the Electronics, Control System Architecture, and Human Interface for Commercial Lower Limb Medical Exoskeletons Stabilized by Aid of Crutches
Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956311
Nahla Tabti, Mohamad Kardofaki, S. Alfayad, Y. Chitour, F. Ouezdou, Eric Dychus
Research in the field of powered orthoses, or exoskeletons, has expanded tremendously over the past years. Lower limb exoskeletons are widely used in robotic rehabilitation and are showing benefits for patients' quality of life. Many engineering reviews have been published about these devices, addressing general aspects. To the best of our knowledge, however, no review has discussed in detail the control of the most commonly used devices, particularly the algorithms used to define the functional state of the exoskeleton (walking, sit-to-stand, etc.). In this contribution, the control hardware and software, as well as the integrated sensors used for feedback, are thoroughly analyzed. We also discuss the importance of user-specific state definition and customized control architecture. Although many prototypes have been developed, we chose to target medical lower limb exoskeletons that use crutches to keep balance and are minimally actuated, as these are the most common systems now being commercialized and used worldwide. The outcome of this review therefore offers practical insight into the mechatronics design, system architecture, and control technology of such devices.
Citations: 1
Fatigue Estimation using Facial Expression features and Remote-PPG Signal
Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956411
Masaki Hasegawa, Kotaro Hayashi, J. Miura
Currently, research and development of robots that support daily life is being actively conducted, and healthcare is one such robot function. In this research, we develop a fatigue estimation system using a camera that can easily be mounted on a robot. Measurements taken in a real environment must account for noise caused by changes in lighting and by the subject's movement, so the fatigue estimation system is based on a robust feature extraction method. As an indicator of fatigue, the LF/HF ratio was calculated from the power spectrum of the RR interval in the electrocardiogram or of the blood volume pulse (BVP). The BVP can be detected from the fingertip by photoplethysmography (PPG). In this study, we used a contactless variant, remote PPG (rPPG), detected from luminance changes in the face image. Some studies show that facial expression features extracted from facial video are also useful for fatigue estimation, but the dimensionality reduction (LLE) used in a previous method discarded information carried by the high-dimensional features. We therefore developed a camera-based fatigue estimation method for healthcare robots that uses facial landmark points, the line-of-sight vector, and the size of ellipses fitted to the eye and mouth landmark points; that is, the proposed method simply uses time-varying facial shape information such as eye size and gaze direction. We verified the performance of the proposed features through fatigue state classification using a Support Vector Machine (SVM).
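The LF/HF computation mentioned above can be sketched as follows: resample the beat-to-beat (RR) interval series onto a uniform time grid, take a periodogram, and compare power in the low-frequency and high-frequency bands. This is not the authors' implementation; the 4 Hz resampling rate and the band edges (LF 0.04-0.15 Hz, HF 0.15-0.40 Hz) are standard HRV conventions assumed here.

```python
import numpy as np

def lf_hf_ratio(rr_ms, fs=4.0):
    """LF/HF ratio from an RR-interval series in milliseconds.
    Standard HRV bands are assumed: LF 0.04-0.15 Hz, HF 0.15-0.40 Hz."""
    t = np.cumsum(rr_ms) / 1000.0                 # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)       # uniform time grid
    rr_even = np.interp(grid, t, rr_ms)           # evenly resampled series
    rr_even = rr_even - rr_even.mean()
    spec = np.abs(np.fft.rfft(rr_even)) ** 2      # raw periodogram
    freqs = np.fft.rfftfreq(len(rr_even), d=1.0 / fs)
    lf = spec[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = spec[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return lf / hf

# Synthetic RR series with a 0.1 Hz (LF-band) oscillation: LF should dominate.
rr, t_beat = [], 0.0
for _ in range(300):
    val = 800 + 50 * np.sin(2 * np.pi * 0.1 * t_beat)
    rr.append(val)
    t_beat += val / 1000.0
print(lf_hf_ratio(np.array(rr)))
```

A higher LF/HF ratio is commonly read as a shift toward sympathetic dominance, which is why it serves as a fatigue/stress indicator here.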
Citations: 1
On the Role of Trust in Child-Robot Interaction*
Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956400
Paulina Zguda, B. Sniezynski, B. Indurkhya, Anna Kolota, Mateusz Jarosz, Filip Sondej, Takamune Izui, Maria Dziok, A. Belowska, Wojciech Jędras, G. Venture
In child-robot interaction, the element of trust towards the robot is critical. This is particularly important the first time the child meets the robot, as the trust gained during this interaction can play a decisive role in future interactions. We present an in-the-wild study in which Polish kindergartners interacted with a Pepper robot. The videos from this study were analyzed for issues of trust, anthropomorphization, and reaction to malfunction, on the assumption that the last two factors influence the children's trust towards Pepper. Our results reveal children's interest in the robot performing tasks specific to humans, highlight the importance of the conversation scenario and of an extended library of robot answers about its abilities and origin, and show how children tend to provoke the robot.
Citations: 8
Social and Entertainment Gratifications of Videogame Play Comparing Robot, AI, and Human Partners
Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956256
N. Bowman, J. Banks
As social robots' and AI agents' roles become more diverse, these machines increasingly function as sociable partners. This trend raises the question of whether the social gaming gratifications known to emerge in human-human co-play may (or may not) also manifest in human-machine co-play. In the present study, we examined the social outcomes of playing a videogame with a human partner as compared to an ostensible social robot or AI (i.e., computer-controlled player) partner. Participants (N = 103) were randomly assigned to three experimental conditions in which they played a cooperative video game with either a human, an embodied robot, or a non-embodied AI. Results indicated that few statistically significant or meaningful differences existed between the partner types in perceived closeness with the partner, relatedness need satisfaction, or entertainment outcomes. However, qualitative data suggested that human and robot partners were both seen as more sociable, while AI partners were seen as more functional.
Citations: 5
Establishing Human-Robot Trust through Music-Driven Robotic Emotion Prosody and Gesture
Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956386
Richard J. Savery, R. Rose, Gil Weinberg
As human-robot collaboration opportunities continue to expand, trust becomes ever more important for the full engagement and utilization of robots. Affective trust, built on emotional relationships and interpersonal bonds, is particularly critical, as it is more resilient to mistakes and increases the willingness to collaborate. In this paper we present a novel model built on music-driven emotional prosody and gestures that encourages the perception of a robotic identity designed to avoid the uncanny valley. Symbolic musical phrases were generated and tagged with emotional information by human musicians. These phrases controlled a synthesis engine that played back pre-rendered audio samples generated by interpolating phonemes and electronic instruments. Gestures were also driven by the symbolic phrases, encoding the emotion of each musical phrase into low degree-of-freedom movements. Through a user study, we showed that our system was able to accurately portray a range of emotions to the user. We also showed, with a statistically significant result, that our non-linguistic audio generation achieved an 8% higher mean trust rating than a state-of-the-art text-to-speech system.
Citations: 27
Ontologenius: A long-term semantic memory for robotic agents
Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956305
Guillaume Sarthou, A. Clodic, R. Alami
In this paper we present Ontologenius, a semantic knowledge storage and reasoning framework for autonomous robots. More than classic ontology software for querying a knowledge base with a first-order internal logic, as is done for the semantic web, Ontologenius offers features adapted to robotic use, including human-robot interaction. We introduce the ability to modify the knowledge base during execution, whether through dialogue or geometric reasoning, and to keep these changes even after the robot is powered off. Since Ontologenius was developed for robots that interact with humans, we have endowed the system with the ability to generalize attributes and properties, to model and estimate the semantic memory of a human partner, and to implement theory-of-mind processes. This paper presents the architecture and main features of Ontologenius, as well as examples of its use in robotics applications.
Citations: 14
Surprise! Predicting Infant Visual Attention in a Socially Assistive Robot Contingent Learning Paradigm
Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956385
Lauren Klein, L. Itti, Beth A. Smith, Marcelo R. Rosales, S. Nikolaidis, M. Matarić
Early intervention to address developmental disability in infants has the potential to promote improved outcomes in neurodevelopmental structure and function [1]. Researchers are starting to explore Socially Assistive Robotics (SAR) as a tool for delivering early interventions that are synergistic with, and enhance, human-administered therapy. For SAR to be effective, the robot must be able to consistently attract the attention of the infant in order to engage the infant in a desired activity. This work presents an analysis of eye gaze tracking data from five 6-8-month-old infants interacting with a Nao robot that kicked its leg as a contingent reward for infant leg movement. We evaluate a Bayesian model of low-level surprise on video data from the infants' head-mounted cameras and on the timing of robot behaviors as a predictor of infant visual attention. The results demonstrate that over 67% of infant gaze locations were in areas the model evaluated as more surprising than average. We also present an initial exploration using surprise to predict the extent to which the robot attracts infant visual attention during specific intervals in the study. This work is the first to validate the surprise model on infants; our results indicate the potential for using surprise to inform robot behaviors that attract infant attention during SAR interactions.
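Bayesian surprise in the sense used above (Itti & Baldi) is the KL divergence between an observer's posterior and prior beliefs after an observation. The sketch below is a deliberately simplified proxy, not the paper's model: beliefs are held as Dirichlet pseudo-counts over discrete features, and surprise is computed as the KL divergence between the posterior-mean and prior-mean categorical distributions.

```python
import numpy as np

def surprise(counts, obs):
    """Proxy for Bayesian surprise: KL(posterior || prior) between the
    posterior-mean and prior-mean categorical distributions of a belief
    held as Dirichlet pseudo-counts (a simplification of the full
    Dirichlet-to-Dirichlet KL used in the surprise literature)."""
    prior = counts / counts.sum()
    post_counts = counts.copy()
    post_counts[obs] += 1.0                    # conjugate count update
    post = post_counts / post_counts.sum()
    return float(np.sum(post * np.log(post / prior)))

belief = np.array([100.0, 1.0])  # feature 0 seen often, feature 1 rarely
print(surprise(belief, 1), surprise(belief, 0))
```

Observing the rare feature moves the belief further, so it yields a larger surprise value; applied per image region, such values form the surprise map against which gaze locations can be scored.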
Citations: 4
Verbal Explanations for Deep Reinforcement Learning Neural Networks with Attention on Extracted Features
Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956301
Xinzhi Wang, Shengcheng Yuan, Hui Zhang, M. Lewis, K. Sycara
In recent years, there has been increasing interest in transparency for Deep Neural Networks. Most work on transparency has addressed image classification. In this paper, we report on transparency work in Deep Reinforcement Learning Networks (DRLNs), which have been extremely successful in learning action control in Atari games. We focus on generating verbal (natural language) descriptions and explanations of deep reinforcement learning policies. Successful generation of verbal explanations would give people (e.g., users, debuggers) a better understanding of the inner workings of DRLNs, which could ultimately increase trust in these systems. We present a generation model that consists of three parts: an encoder for feature extraction, an attention structure for selecting features from the encoder's output, and a decoder for generating the explanation in natural language. Four variants of the attention structure - full attention, global attention, adaptive attention, and object attention - are designed and compared. The adaptive attention structure performs best among all the variants, even though the object attention structure is given additional information about object locations. Additionally, our experimental results showed that the proposed encoder outperforms two baseline encoders (ResNet and VGG) in its capability to distinguish game state images.
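The feature-selection step such an attention structure performs can be illustrated with a generic dot-product soft attention (a sketch, not the paper's exact formulation or any of its four variants): score each extracted feature vector against the decoder state, softmax the scores, and feed the weighted context vector to the language decoder.

```python
import numpy as np

def soft_attention(features, query):
    """Generic dot-product soft attention: score each extracted feature
    vector against the decoder query, softmax the scores, and return the
    weighted context vector passed to the language decoder."""
    scores = features @ query                  # (n_features,)
    w = np.exp(scores - scores.max())          # numerically stable softmax
    w = w / w.sum()
    context = w @ features                     # convex combination of features
    return context, w

rng = np.random.default_rng(1)
features = rng.standard_normal((5, 8))         # 5 extracted feature vectors
query = rng.standard_normal(8)                 # decoder state at this word
context, w = soft_attention(features, query)
print(w.sum(), context.shape)
```

The attention weights `w` are also what makes the explanation inspectable: they indicate which extracted features the decoder attended to when emitting each word.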
Citations: 14
Designing a Socially Assistive Robot for Long-Term In-Home Use for Children with Autism Spectrum Disorders
Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956468
Roxanna Pakkar, Caitlyn E. Clabaugh, Rhianna Lee, Eric Deng, M. Matarić
Socially assistive robotics (SAR) research has shown great potential for supplementing and augmenting therapy for children with autism spectrum disorders (ASD). However, the vast majority of SAR research has been limited to short-term studies in highly controlled environments. The design and development of a SAR system capable of interacting autonomously in situ for long periods of time involves many engineering and computing challenges. This paper presents the design of a fully autonomous SAR system for long-term, in-home use with children with ASD. We address design decisions based on robustness and adaptability needs, discuss the development of the robot’s character and interactions, and provide insights from the month-long, in-home data collections with children with ASD. This work contributes to a larger research program that is exploring how SAR can be used for enhancing the social and cognitive development of children with ASD.
Citations: 15
Journal: 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)