
IEEE Transactions on Autonomous Mental Development: Latest Publications

The Fourth IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob) 2014: Conference Summary and Report
Pub Date : 2014-12-12 DOI: 10.1109/TAMD.2014.2377335
G. Metta, L. Natale
IEEE Transactions on Autonomous Mental Development, p. 243.
Citations: 0
Editorial Renewal for the IEEE Transactions on Autonomous Mental Development
Pub Date : 2014-12-01 DOI: 10.1109/TAMD.2014.2377274
Zhengyou Zhang
IEEE Transactions on Autonomous Mental Development, pp. 241-242.
Citations: 9
What Strikes the Strings of Your Heart?–Multi-Label Dimensionality Reduction for Music Emotion Analysis via Brain Imaging
Pub Date : 2014-11-03 DOI: 10.1145/2647868.2655068
Yang Liu, Yan Liu, Yu Zhao, K. Hua
After 20 years of extensive study in psychology, some musical factors have been identified that can evoke certain kinds of emotions. However, the underlying mechanism of the relationship between music and emotion remains unexplained. This paper aims to find the genuine correlates of music emotion by exploring a systematic and quantitative framework. The task is formulated as a dimensionality reduction problem, which seeks a complete and compact feature set with intrinsic correlates for the given objectives. Since a song generally elicits more than one emotion, we explore dimensionality reduction techniques for multi-label classification. One challenging problem is that a hard label cannot represent the extent of an emotion, and it is also difficult to ask subjects to quantize their feelings. This work utilizes the electroencephalography (EEG) signal to address this challenge. A learning scheme called EEG-based emotion smoothing (E2S) and a bilinear multi-emotion similarity preserving embedding (BME-SPE) algorithm are proposed. We validate the effectiveness of the proposed framework on the standard dataset CAL-500. Several influential correlates have been identified, and classification via those correlates achieves good performance. We build a Chinese music dataset according to the identified correlates and find that music from different cultures may share similar emotions.
IEEE Transactions on Autonomous Mental Development, pp. 176-188.
Citations: 14
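The pipeline described in the abstract above (multi-label dimensionality reduction followed by classification via the embedded correlates) can be sketched in a few lines. This is a hedged illustration only: it uses synthetic data, plain PCA in place of the paper's BME-SPE embedding, and a simple multi-label k-NN classifier; none of these stand-ins come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's setting (hypothetical data, not CAL-500):
# X -- acoustic feature vectors, Y -- multi-hot emotion label vectors.
X = rng.normal(size=(100, 20))
Y = (rng.random(size=(100, 3)) < 0.3).astype(int)

# Dimensionality reduction: project onto the top-k principal directions.
# (BME-SPE additionally preserves label similarity; plain PCA is used here
# only to illustrate the shape of the pipeline.)
k = 5
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T  # embedded features, shape (100, k)

# Multi-label k-NN: average the neighbours' label vectors, threshold at 0.5.
def predict(z, Z_train, Y_train, n_neighbors=5):
    d = np.linalg.norm(Z_train - z, axis=1)
    nn = np.argsort(d)[:n_neighbors]
    return (Y_train[nn].mean(axis=0) >= 0.5).astype(int)

pred = predict(Z[0], Z[1:], Y[1:])
print(Z.shape, pred.shape)
```

A held-out song is thus mapped into the low-dimensional space and assigned every emotion label that a majority of its neighbours carry, which is what makes the reduction "multi-label" rather than single-class.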
Optimal Rewards for Cooperative Agents
Pub Date : 2014-10-13 DOI: 10.1109/TAMD.2014.2362682
B. Liu, Satinder Singh, Richard L. Lewis, S. Qin
Following work on designing optimal rewards for single agents, we define a multiagent optimal rewards problem (ORP) in cooperative (specifically, common-payoff or team) settings. This new problem solves for individual agent reward functions that guide agents to better overall team performance than teams in which all agents guide their behavior with the same given team-reward function. We present a multiagent architecture in which each agent learns good reward functions from experience using a gradient-based algorithm, in addition to performing the usual task of planning good policies (except in this case with respect to the learned rather than the given reward function). Multiagency introduces the challenge of nonstationarity: because the agents learn simultaneously, each agent's reward-learning problem is nonstationary and interdependent with the other agents' evolving reward functions. We demonstrate on two simple domains that the proposed architecture outperforms the conventional approach in which all the agents use the same given team-reward function (even when accounting for the resource overhead of reward learning); that the learning algorithm performs stably despite the nonstationarity; and that learning individual reward functions can lead to better specialization of roles than is possible with a shared reward, whether learned or given.
IEEE Transactions on Autonomous Mental Development, pp. 286-297.
Citations: 14
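The architecture summarized above couples ordinary policy optimization with gradient-based learning of each agent's internal reward parameters, judged against the shared team payoff. A minimal sketch, assuming a toy common-payoff game and a finite-difference gradient estimate in place of the paper's algorithm (the game, parameterization, and step sizes are all illustrative inventions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical common-payoff game: the team earns 1 only when the two
# agents pick *different* actions -- a role-specialization task.
def team_payoff(a0, a1):
    return 1.0 if a0 != a1 else 0.0

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Each agent samples an action from a softmax over its own internal
# reward parameters theta_i (one entry per action).
def act(theta):
    return rng.choice(2, p=softmax(theta))

def expected_payoff(thetas, n=200):
    return np.mean([team_payoff(act(thetas[0]), act(thetas[1]))
                    for _ in range(n)])

# Gradient-based reward learning, approximated here by finite differences
# on each agent's internal reward parameters. Both agents update at once,
# so each faces a nonstationary learning problem, as the abstract notes.
thetas = [np.zeros(2), np.zeros(2)]
eps, lr = 0.5, 1.0
for _ in range(30):
    for i in range(2):
        for j in range(2):
            plus = [t.copy() for t in thetas]
            minus = [t.copy() for t in thetas]
            plus[i][j] += eps
            minus[i][j] -= eps
            g = (expected_payoff(plus) - expected_payoff(minus)) / (2 * eps)
            thetas[i][j] += lr * g

print(round(expected_payoff(thetas, n=1000), 2))
```

When the two agents' internal rewards drift apart, the roles specialize and the team payoff rises above what a single shared reward at the symmetric point yields; the Monte Carlo noise is what breaks the initial symmetry.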
Learning from Demonstration in Robots using the Shared Circuits Model
Pub Date : 2014-10-01 DOI: 10.1109/TAMD.2014.2359912
Khawaja M. U. Suleman, M. Awais
Learning from demonstration offers an alternative method for programming robots with different nontrivial behaviors. Various techniques that address learning from demonstration in robots have been proposed, but they do not scale up well, so novel solutions to this problem are needed. Given that the basic idea for such learning comes from nature, in the form of imitation in a few animals, it makes sense to take advantage of the rigorous study of imitative learning available in the relevant natural sciences. In this work, a solution for robot learning is sought in a relatively recent theory from the natural sciences called the Shared Circuits Model. The Shared Circuits Model is a comprehensive, multidisciplinary theory: a modern synthesis that brings together different theories, originating from various sciences, that explain imitation and other related social functions. This paper attempts to import the Shared Circuits Model into robotics for learning from demonstration. Specifically, it: (1) expresses the Shared Circuits Model in software design nomenclature; (2) heuristically extends the basic specification of the Shared Circuits Model to implement a working imitative learning system; (3) applies the extended model to mobile robot navigation in a simulated indoor environment; and (4) attempts to validate the Shared Circuits Model theory in the context of imitative learning. Results show that an extremely simple implementation of a theoretically sound theory, the Shared Circuits Model, offers a realistic solution for robot learning from demonstration of nontrivial tasks.
IEEE Transactions on Autonomous Mental Development, pp. 244-258.
Citations: 2
A Hierarchical System for a Distributed Representation of the Peripersonal Space of a Humanoid Robot
Pub Date : 2014-06-26 DOI: 10.1109/TAMD.2014.2332875
Marco Antonelli, A. Gibaldi, Frederik Beuth, A. J. Duran, A. Canessa, Manuela Chessa, F. Solari, A. P. Pobil, F. Hamker, E. Chinellato, S. Sabatini
Reaching a target object in an unknown and unstructured environment is easily performed by human beings. However, designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target in the visual field, estimating its spatial location, and precisely driving the motors of the arm to reach it. While research usually tackles the development of such abilities in isolation, in this work we integrate a number of computational models into a unified framework, and demonstrate in a humanoid torso the feasibility of an integrated working representation of its peripersonal space. To achieve this goal, we propose a cognitive architecture that connects several models inspired by neural circuits of the visual, frontal and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by interacting with the environment directly, through a mutual adaptation of perception and action. The robot is eventually capable of executing a set of tasks, such as recognizing, gazing at and reaching target objects, which can work separately or cooperate to support more structured and effective behaviors.
IEEE Transactions on Autonomous Mental Development, pp. 259-273.
Citations: 26
A Wearable Camera Detects Gaze Peculiarities during Social Interactions in Young Children with Pervasive Developmental Disorders
Pub Date : 2014-06-03 DOI: 10.1109/TAMD.2014.2327812
Silvia Magrelli, Basilio Noris, Patrick Jermann, F. Ansermet, F. Hentsch, J. Nadel, A. Billard
We report on a study of gaze, conducted on children with pervasive developmental disorders (PDD), using a novel head-mounted eye-tracking device called the WearCam. Due to the portable nature of the WearCam, we are able to monitor naturalistic interactions between the children and adults. The study involved a group of 3- to 11-year-old children (n=13) with PDD compared to a group of typically developing (TD) children (n=13) between 2 and 6 years old. We found significant differences between the two groups in terms of the proportion and frequency of episodes of looking directly at faces across the whole set of experiments. We also conducted a differentiated analysis, in two social conditions, of the gaze patterns directed at an adult's face when the adult addressed the child either verbally or through facial expressions of emotion. We observe that children with PDD show a marked tendency to look more at the adult's face when she makes facial expressions than when she speaks.
IEEE Transactions on Autonomous Mental Development, pp. 274-285.
Citations: 6
The MEI Robot: Towards Using Motherese to Develop Multimodal Emotional Intelligence
Pub Date : 2014-06-01 DOI: 10.1109/TAMD.2014.2317513
Angelica Lim, Hiroshi G. Okuno
We introduce the first steps in a developmental robot called MEI (multimodal emotional intelligence), a robot that can understand and express emotions in voice, gesture and gait using a controller trained only on voice. Whereas it is known that humans can perceive affect in voice, movement, music and even stimuli as minimal as point-light displays, it is not clear how humans develop this skill. Is it innate? If not, how does this emotional intelligence develop in infants? The MEI robot develops these skills through vocal input and perceptual mapping of vocal features to other modalities. We base MEI's development on the idea that motherese is used as a way to associate dynamic vocal contours with facial emotion from an early age. MEI uses these dynamic contours to both understand and express multimodal emotions using a unified model called SIRE (Speed, Intensity, irRegularity, and Extent). Offline experiments with MEI support its cross-modal generalization ability: a model trained with voice data can recognize happiness, sadness, and fear in a completely different modality, human gait. User evaluations of the MEI robot speaking, gesturing and walking show that it can reliably express multimodal happiness and sadness using only the voice-trained model as a basis.
IEEE Transactions on Autonomous Mental Development, pp. 126-138.
Citations: 45
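The SIRE model named in the abstract encodes an emotion as a modality-independent vector of Speed, Intensity, irRegularity, and Extent, which is what lets a voice-trained model transfer to gait. A hedged sketch of such a mapping, with illustrative stand-in feature definitions (not the paper's exact ones) applied to two synthetic vocal contours:

```python
import numpy as np

# Illustrative SIRE extractor: each feature is a simple statistic of a
# 1-D contour (e.g. pitch over time, or a joint angle over time). The
# exact definitions here are stand-ins, not those used by the MEI robot.
def sire(signal, dt=0.01):
    v = np.diff(signal) / dt                 # first derivative of the contour
    speed = np.mean(np.abs(v))               # how fast the contour moves
    intensity = np.sqrt(np.mean(signal**2))  # overall energy (RMS)
    irregularity = np.std(np.diff(v))        # jerkiness of the motion
    extent = signal.max() - signal.min()     # range covered by the contour
    return np.array([speed, intensity, irregularity, extent])

t = np.linspace(0, 1, 101)
happy_voice = 1.0 * np.sin(2 * np.pi * 6 * t)  # fast, wide contour
sad_voice = 0.3 * np.sin(2 * np.pi * 1 * t)    # slow, narrow contour
print(sire(happy_voice) > sire(sad_voice))     # happy exceeds sad on all four
```

Because the same four statistics can be computed from a gait trajectory, a classifier fit on voice-derived SIRE vectors can, in principle, be applied unchanged to gait-derived ones.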
Guest Editorial Behavior Understanding and Developmental Robotics
Pub Date : 2014-06-01 DOI: 10.1109/TAMD.2014.2328731
A. A. Salah, Pierre-Yves Oudeyer, Çetin Meriçli, Javier Ruiz-del-Solar
The scientific, technological, and application challenges that arise from the mutual interaction of developmental robotics and computational human behavior understanding give rise to two different perspectives. Robots need to be capable of learning, dynamically and incrementally, how to interpret and thus understand multimodal human behavior; that is, behavior analysis can be performed for developmental robotics. On the other hand, behavior analysis can also be performed through developmental robotics, since developmental social robots offer stimulating opportunities for improving the scientific understanding of human behavior, and especially for deeper analysis of the semantics and structure of human behavior. The contributions to this Special Issue explore these two perspectives.
IEEE Transactions on Autonomous Mental Development, pp. 77-79.
Citations: 3
Corrections to “An Approach to Subjective Computing: A Robot That Learns From Interaction With Humans”
Pub Date : 2014-06-01 DOI: 10.1109/TAMD.2014.2328774
P. Gruneberg, Kenji Suzuki
IEEE Transactions on Autonomous Mental Development, p. 168.
Citations: 0