
IEEE Transactions on Autonomous Mental Development: Latest Publications

Learning Through Imitation: a Biological Approach to Robotics
Pub Date : 2012-09-01 DOI: 10.1109/TAMD.2012.2200250
Fabian Chersi
Humans are very efficient at learning new skills through imitation and social interaction with other individuals. Recent experimental findings on the functioning of the mirror neuron system in humans and animals, and on the coding of intentions, have led to the development of more realistic and powerful models of action understanding and imitation. This paper describes the implementation on a humanoid robot of a spiking neuron model of the mirror system. The proposed architecture is validated in an imitation task in which the robot has to observe and understand manipulative action sequences executed by a human demonstrator and reproduce them on demand using its own motor repertoire. To instruct the robot what to observe and learn, and when to imitate, the demonstrator uses a simple form of sign language. Two basic principles underlie the functioning of the system: 1) imitation is primarily directed toward reproducing the goals of observed actions rather than the exact hand trajectories; and 2) the capacity to understand the motor intentions of another individual is based on the resonance of the same neural populations that are active during action execution. Experimental findings show that even a very simple form of gesture-based communication makes it possible to develop robotic architectures that are efficient, simple, and user-friendly.
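As an illustration of principle 1), a goal-level (rather than trajectory-level) imitation loop can be sketched as follows; the action names, the goal-extraction rule, and the motor primitives are hypothetical, not taken from the paper's model:

```python
# Hypothetical sketch of principle 1): imitate the goal of an observed
# action sequence rather than the demonstrator's exact trajectory.
demo = [("reach", "cup"), ("grasp", "cup"), ("move", "shelf"), ("release", "cup")]

def extract_goal(action_seq):
    # take the goal to be "object ends up at destination": the grasped
    # object paired with the target of the subsequent move
    obj = next(target for act, target in action_seq if act == "grasp")
    dest = next(target for act, target in action_seq if act == "move")
    return obj, dest

def reproduce(goal, primitives):
    # replan toward the same goal using the robot's own motor repertoire
    obj, dest = goal
    return [p(obj, dest) for p in primitives]

# the robot's repertoire differs from the demonstrator's action sequence
own_repertoire = [lambda o, d: f"pick({o})", lambda o, d: f"place({o},{d})"]
assert reproduce(extract_goal(demo), own_repertoire) == ["pick(cup)", "place(cup,shelf)"]
```

The point of the sketch is that the reproduced sequence need not match the demonstrated one step for step, only its outcome.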
{"title":"Learning Through Imitation: a Biological Approach to Robotics","authors":"Fabian Chersi","doi":"10.1109/TAMD.2012.2200250","DOIUrl":"https://doi.org/10.1109/TAMD.2012.2200250","url":null,"abstract":"Humans are very efficient in learning new skills through imitation and social interaction with other individuals. Recent experimental findings on the functioning of the mirror neuron system in humans and animals and on the coding of intentions, have led to the development of more realistic and powerful models of action understanding and imitation. This paper describes the implementation on a humanoid robot of a spiking neuron model of the mirror system. The proposed architecture is validated in an imitation task where the robot has to observe and understand manipulative action sequences executed by a human demonstrator and reproduce them on demand utilizing its own motor repertoire. To instruct the robot what to observe and to learn, and when to imitate, the demonstrator utilizes a simple form of sign language. Two basic principles underlie the functioning of the system: 1) imitation is primarily directed toward reproducing the goals of observed actions rather than the exact hand trajectories; and 2) the capacity to understand the motor intentions of another individual is based on the resonance of the same neural populations that are active during action execution. 
Experimental findings show that the use of even a very simple form of gesture-based communication allows to develop robotic architectures that are efficient, simple and user friendly.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"4 1","pages":"204-214"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2012.2200250","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62760946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
Long Summer Days: Grounded Learning of Words for the Uneven Cycles of Real World Events
Pub Date : 2012-09-01 DOI: 10.1109/TAMD.2012.2207455
Scott Heath, R. Schulz, David Ball, Janet Wiles
Time and space are fundamental to human language and embodied cognition. In our early work we investigated how Lingodroids, robots with the ability to build their own maps, could evolve their own geopersonal spatial language. In subsequent studies we extended the framework developed for learning spatial concepts and words to learning temporal intervals. This paper considers a new aspect of time, the naming of concepts like morning, afternoon, dawn, and dusk, which are events that are part of day-night cycles, but are not defined by specific time points on a clock. Grounding of such terms refers to events and features of the diurnal cycle, such as light levels. We studied event-based time in which robots experienced day-night cycles that varied with the seasons throughout a year. Then we used meet-at tasks to demonstrate that the words learned were grounded, where the times to meet were morning and afternoon, rather than specific clock times. The studies show how words and concepts for a novel aspect of cyclic time can be grounded through experience with events rather than by times as measured by clocks or calendars.
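The paper's idea of grounding temporal words in diurnal events rather than clock times can be illustrated with a toy sketch; the sunrise/sunset values and the daylight-midpoint rule are assumptions for illustration, not the Lingodroids' learning mechanism:

```python
# Toy sketch: ground "morning"/"afternoon" in diurnal events (sunrise,
# solar noon, sunset) rather than clock time, so the same label tracks
# seasonal changes in day length. Times are in hours; values are assumed.
def label_time(t, sunrise, sunset):
    noon = (sunrise + sunset) / 2.0  # event-based midpoint of daylight
    if sunrise <= t < noon:
        return "morning"
    if noon <= t < sunset:
        return "afternoon"
    return "night"

# a long summer day vs. a short winter day
assert label_time(8.0, sunrise=5.0, sunset=21.0) == "morning"
assert label_time(14.0, sunrise=7.5, sunset=16.5) == "afternoon"
assert label_time(17.0, sunrise=5.0, sunset=21.0) == "afternoon"
assert label_time(17.0, sunrise=7.5, sunset=16.5) == "night"  # already dark in winter
```

Note that 17:00 is "afternoon" in summer but "night" in winter: the word is anchored to events, not to a clock reading.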
{"title":"Long Summer Days: Grounded Learning of Words for the Uneven Cycles of Real World Events","authors":"Scott Heath, R. Schulz, David Ball, Janet Wiles","doi":"10.1109/TAMD.2012.2207455","DOIUrl":"https://doi.org/10.1109/TAMD.2012.2207455","url":null,"abstract":"Time and space are fundamental to human language and embodied cognition. In our early work we investigated how Lingodroids, robots with the ability to build their own maps, could evolve their own geopersonal spatial language. In subsequent studies we extended the framework developed for learning spatial concepts and words to learning temporal intervals. This paper considers a new aspect of time, the naming of concepts like morning, afternoon, dawn, and dusk, which are events that are part of day-night cycles, but are not defined by specific time points on a clock. Grounding of such terms refers to events and features of the diurnal cycle, such as light levels. We studied event-based time in which robots experienced day-night cycles that varied with the seasons throughout a year. Then we used meet-at tasks to demonstrate that the words learned were grounded, where the times to meet were morning and afternoon, rather than specific clock times. 
The studies show how words and concepts for a novel aspect of cyclic time can be grounded through experience with events rather than by times as measured by clocks or calendars.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"10 1","pages":"192-203"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2012.2207455","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62760729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Towards a Platform-Independent Cooperative Human Robot Interaction System: III An Architecture for Learning and Executing Actions and Shared Plans
Pub Date : 2012-09-01 DOI: 10.1109/TAMD.2012.2199754
S. Lallée, U. Pattacini, Séverin Lemaignan, A. Lenz, C. Melhuish, L. Natale, Sergey Skachek, Katharina Hamann, Jasmin Steinwender, E. A. Sisbot, G. Metta, J. Guitton, R. Alami, Matthieu Warnier, A. Pipe, Felix Warneken, Peter Ford Dominey
Robots should be capable of interacting in a cooperative and adaptive manner with their human counterparts in open-ended tasks that can change in real time. An important aspect of robot behavior will be the ability to acquire new knowledge of cooperative tasks by observing and interacting with humans. The current research addresses this challenge. We present results from a cooperative human-robot interaction system that has been specifically developed for portability between different humanoid platforms, through abstraction layers at the perceptual and motor interfaces. In the perceptual domain, the resulting system is demonstrated to learn to recognize objects and to recognize actions as sequences of perceptual primitives, and to transfer this learning and recognition between different robotic platforms. For execution, composite actions and plans are shown to be learnt on one robot and executed successfully on a different one. Most importantly, the system provides the ability to link actions into shared plans that form the basis of human-robot cooperation, applying principles from human cognitive development to the domain of robot cognitive systems.
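The portability idea, writing behaviors against abstract perceptual and motor interfaces so that a learned plan transfers across platforms, can be sketched as follows; the interface and platform names are hypothetical, not the authors' API:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the portability idea: shared plans are written
# against an abstract motor interface, so the same plan runs unchanged
# on any platform that implements the interface.
class MotorInterface(ABC):
    @abstractmethod
    def execute(self, primitive: str) -> str: ...

class ICubLike(MotorInterface):
    def execute(self, primitive):
        return f"icub:{primitive}"

class NaoLike(MotorInterface):
    def execute(self, primitive):
        return f"nao:{primitive}"

def run_plan(plan, motor: MotorInterface):
    # the plan itself is platform-independent; only the binding changes
    return [motor.execute(p) for p in plan]

plan = ["reach(box)", "grasp(box)", "give(box)"]
assert run_plan(plan, ICubLike()) == ["icub:reach(box)", "icub:grasp(box)", "icub:give(box)"]
assert run_plan(plan, NaoLike())[0] == "nao:reach(box)"
```

The same abstraction applies on the perceptual side, where recognizers consume platform-independent perceptual primitives.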
{"title":"Towards a Platform-Independent Cooperative Human Robot Interaction System: III An Architecture for Learning and Executing Actions and Shared Plans","authors":"S. Lallée, U. Pattacini, Séverin Lemaignan, A. Lenz, C. Melhuish, L. Natale, Sergey Skachek, Katharina Hamann, Jasmin Steinwender, E. A. Sisbot, G. Metta, J. Guitton, R. Alami, Matthieu Warnier, A. Pipe, Felix Warneken, Peter Ford Dominey","doi":"10.1109/TAMD.2012.2199754","DOIUrl":"https://doi.org/10.1109/TAMD.2012.2199754","url":null,"abstract":"Robots should be capable of interacting in a cooperative and adaptive manner with their human counterparts in open-ended tasks that can change in real-time. An important aspect of the robot behavior will be the ability to acquire new knowledge of the cooperative tasks by observing and interacting with humans. The current research addresses this challenge. We present results from a cooperative human-robot interaction system that has been specifically developed for portability between different humanoid platforms, by abstraction layers at the perceptual and motor interfaces. In the perceptual domain, the resulting system is demonstrated to learn to recognize objects and to recognize actions as sequences of perceptual primitives, and to transfer this learning, and recognition, between different robotic platforms. For execution, composite actions and plans are shown to be learnt on one robot and executed successfully on a different one. 
Most importantly, the system provides the ability to link actions into shared plans, that form the basis of human-robot cooperation, applying principles from human cognitive development to the domain of robot cognitive systems.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"4 1","pages":"239-253"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2012.2199754","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62760777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 55
Guest Editorial: Biologically Inspired Human-Robot Interactions - Developing More Natural Ways to Communicate with our Machines
Pub Date : 2012-09-01 DOI: 10.1109/TAMD.2012.2216703
F. Harris, J. Krichmar, H. Siegelmann, H. Wagatsuma
The five articles in this special issue focus on human-robot interactions. The papers bring together fields of study such as cognitive architectures, computational neuroscience, developmental psychology, machine psychology, and socially affective robots.
{"title":"Guest Editorial: Biologically Inspired Human-Robot Interactions - Developing More Natural Ways to Communicate with our Machines","authors":"F. Harris, J. Krichmar, H. Siegelmann, H. Wagatsuma","doi":"10.1109/TAMD.2012.2216703","DOIUrl":"https://doi.org/10.1109/TAMD.2012.2216703","url":null,"abstract":"The five articles in this special issue focus on human robot interactions. The papers bring together fields of study, such as cognitive architectures, computational neuroscience, developmental psychology, machine psychology, and sociall affective robots.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"18 1","pages":"190-191"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74644225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The “Interaction Engine”: A Common Pragmatic Competence Across Linguistic and Nonlinguistic Interactions
Pub Date : 2012-06-01 DOI: 10.1109/TAMD.2011.2166261
G. Pezzulo
Recent research in cognitive psychology, neuroscience, and robotics has widely explored the tight relations between language and action systems in primates. However, the link between the pragmatics of linguistic and nonlinguistic interactions has received less attention up to now. In this paper, we argue that cognitive agents exploit the same cognitive processes and neural substrate, a general pragmatic competence, across linguistic and nonlinguistic interactive contexts. Elaborating on Levinson's idea of an “interaction engine” that permits conveying and recognizing communicative intentions in both linguistic and nonlinguistic interactions, we offer a computationally guided analysis of pragmatic competence, suggesting that the core abilities required for successful linguistic interactions could derive from more primitive architectures for action control, nonlinguistic interactions, and joint actions. Furthermore, we make the case for a novel, embodied approach to human-robot interaction and communication, in which the ability to carry on face-to-face communication develops in coordination with the pragmatic competence required for joint action.
{"title":"The “Interaction Engine”: A Common Pragmatic Competence Across Linguistic and Nonlinguistic Interactions","authors":"G. Pezzulo","doi":"10.1109/TAMD.2011.2166261","DOIUrl":"https://doi.org/10.1109/TAMD.2011.2166261","url":null,"abstract":"Recent research in cognitive psychology, neuro- science, and robotics has widely explored the tight relations between language and action systems in primates. However, the link between the pragmatics of linguistic and nonlinguistic inter- actions has received less attention up to now. In this paper, we argue that cognitive agents exploit the same cognitive processes and neural substrate-a general pragmatic competence-across linguistic and nonlinguistic interactive contexts. Elaborating on Levinson's idea of an “interaction engine” that permits to convey and recognize communicative intentions in both linguistic and nonlinguistic interactions, we offer a computationally guided analysis of pragmatic competence, suggesting that the core abilities required for successful linguistic interactions could derive from more primitive architectures for action control, nonlinguistic interactions, and joint actions. 
Furthermore, we make the case for a novel, embodied approach to human-robot interaction and communication, in which the ability to carry on face-to-face communication develops in coordination with the pragmatic competence required for joint action.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"4 1","pages":"105-123"},"PeriodicalIF":0.0,"publicationDate":"2012-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2011.2166261","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62760297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
Brain-Like Emergent Spatial Processing
Pub Date : 2012-06-01 DOI: 10.1109/TAMD.2011.2174636
J. Weng, M. Luciw
This is a theoretical, modeling, and algorithmic paper about the spatial aspect of brain-like information processing, modeled by the developmental network (DN) model. The new brain architecture allows the external environment (including teachers) to interact with the sensory ends and the motor ends of the skull-closed brain through development. It does not allow the human programmer to hand-pick extra-body concepts or to handcraft the concept boundaries inside the brain. Mathematically, the brain spatial processing performs real-time mapping from to , through network updates, where the contents of all emerge from experience. Using its limited resources, the brain performs increasingly better through experience. A new principle is that the effector ends serve as hubs for concept learning and abstraction. The effector ends also serve as input, and the sensory ends also serve as output. As DN embodiments, the Where-What Networks (WWNs) present three major functional novelties: new concept abstraction, concepts as emergent goals, and goal-directed perception. The WWN series appears to be the first general-purpose emergent system for detecting and recognizing multiple objects in complex backgrounds. Among others, the most significant new mechanism is general-purpose top-down attention.
{"title":"Brain-Like Emergent Spatial Processing","authors":"J. Weng, M. Luciw","doi":"10.1109/TAMD.2011.2174636","DOIUrl":"https://doi.org/10.1109/TAMD.2011.2174636","url":null,"abstract":"This is a theoretical, modeling, and algorithmic paper about the spatial aspect of brain-like information processing, modeled by the developmental network (DN) model. The new brain architecture allows the external environment (including teachers) to interact with the sensory ends and the motor ends of the skull-closed brain through development. It does not allow the human programmer to hand-pick extra-body concepts or to handcraft the concept boundaries inside the brain . Mathematically, the brain spatial processing performs real-time mapping from to , through network updates, where the contents of all emerge from experience. Using its limited resource, the brain does increasingly better through experience. A new principle is that the effector ends serve as hubs for concept learning and abstraction. The effector ends serve also as input and the sensory ends serve also as output. As DN embodiments, the Where-What Networks (WWNs) present three major function novelties-new concept abstraction, concept as emergent goals, and goal-directed perception. The WWN series appears to be the first general purpose emergent systems for detecting and recognizing multiple objects in complex backgrounds. 
Among others, the most significant new mechanism is general-purpose top-down attention.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"4 1","pages":"161-185"},"PeriodicalIF":0.0,"publicationDate":"2012-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2011.2174636","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62760420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Are Robots Appropriate for Troublesome and Communicative Tasks in a City Environment?
Pub Date : 2012-06-01 DOI: 10.1109/TAMD.2011.2178846
Kotaro Hayashi, M. Shiomi, T. Kanda, N. Hagita
We studied people's acceptance of robots that perform tasks in a city. Three different beings (a human, a human wearing a mascot costume, and a robot) performed tasks in three different scenarios: endless guidance, responding to irrational complaints, and removing an accidentally discarded key from the trash. All of these tasks involved beings interacting with visitors in troublesome situations: dull, stressful, and dirty. For this paper, 30 participants watched nine videos (three tasks performed by three beings) and evaluated each being's appropriateness for the task and its human-likeness. The results indicate that people prefer that a robot rather than a human perform these troublesome tasks, even though they require much interaction with people. In addition, comparisons with the costumed human suggest that people's belief that a being deserves human rights, rather than its human-like appearance, behavior, or cognitive capability, is one explanation for their judgments about appropriateness.
{"title":"Are Robots Appropriate for Troublesome and Communicative Tasks in a City Environment?","authors":"Kotaro Hayashi, M. Shiomi, T. Kanda, N. Hagita","doi":"10.1109/TAMD.2011.2178846","DOIUrl":"https://doi.org/10.1109/TAMD.2011.2178846","url":null,"abstract":"We studied people's acceptance of robots that per- form tasks in a city. Three different beings (a human, a human wearing a mascot costume, and a robot) performed tasks in three different scenarios: endless guidance, responding to irrational complaints, and removing an accidentally discarded key from the trash. All of these tasks involved beings interacting with visitors in troublesome situations: dull, stressful, and dirty. For this paper, 30 participants watched nine videos (three tasks performed by three beings) and evaluated each being's appropriateness for the task and its human-likeness. The results indicate that people prefer that a robot rather than a human perform these troublesome tasks, even though they require much interaction with people. In addition, comparisons with the costumed-human suggest that people's beliefs that a being deserves human rights rather than having a human-like appearance and behavior or cognitive capability is one explanation for their judgments about appropriateness.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"4 1","pages":"150-160"},"PeriodicalIF":0.0,"publicationDate":"2012-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2011.2178846","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62760548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Tool–Body Assimilation of Humanoid Robot Using a Neurodynamical System
Pub Date : 2012-06-01 DOI: 10.1109/TAMD.2011.2177660
S. Nishide, J. Tani, Toru Takahashi, HIroshi G. Okuno, T. Ogata
Research in brain science has uncovered the human capability to use tools as if they were part of the body (known as tool-body assimilation) through trial and experience. This paper presents a method that applies a robot's active sensing experience to create a tool-body assimilation model. The model is composed of a feature extraction module, a dynamics learning module, and a tool-body assimilation module. A self-organizing map (SOM) is used in the feature extraction module to extract object features from raw images. A multiple timescales recurrent neural network (MTRNN) is used as the dynamics learning module. Parametric bias (PB) nodes are attached to the weights of the MTRNN as a second-order network to modulate the behavior of the MTRNN based on the properties of the tool. The generalization capability of neural networks provides the model with the ability to deal with unknown tools. Experiments were conducted with the humanoid robot HRP-2 using no tool and using I-shaped, T-shaped, and L-shaped tools. The distribution of PB values has shown that the model learned that the robot's dynamic properties change when holding a tool. Motion generation experiments show that the tool-body assimilation model can be applied to unknown tools to generate goal-oriented motions.
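A minimal sketch of the SOM feature-extraction stage, here reduced to winner-take-all prototype learning with the neighborhood function of a full SOM omitted; the grid size, learning rate, and toy data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal sketch of an SOM-style feature extractor (neighborhood update
# omitted for brevity): map input vectors onto a small set of prototype
# units; the index of the best-matching unit is a compact feature.
grid = rng.random((4, 3))  # 4 prototype units over 3-D input vectors

def best_matching_unit(x):
    return int(np.argmin(((grid - x) ** 2).sum(axis=1)))

def train(data, epochs=50, lr=0.2):
    for _ in range(epochs):
        for x in data:
            bmu = best_matching_unit(x)
            grid[bmu] += lr * (x - grid[bmu])  # pull the winner toward x

def feature(x):
    return best_matching_unit(x)

data = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], float)
train(data)
assert 0 <= feature(data[0]) < len(grid)  # features index the unit grid
```

In the paper's pipeline, such discrete features would then feed the MTRNN dynamics-learning module.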
{"title":"Tool–Body Assimilation of Humanoid Robot Using a Neurodynamical System","authors":"S. Nishide, J. Tani, Toru Takahashi, HIroshi G. Okuno, T. Ogata","doi":"10.1109/TAMD.2011.2177660","DOIUrl":"https://doi.org/10.1109/TAMD.2011.2177660","url":null,"abstract":"Researches in the brain science field have uncovered the human capability to use tools as if they are part of the human bodies (known as tool-body assimilation) through trial and experience. This paper presents a method to apply a robot's active sensing experience to create the tool-body assimilation model. The model is composed of a feature extraction module, dynamics learning module, and a tool-body assimilation module. Self-organizing map (SOM) is used for the feature extraction module to extract object features from raw images. Multiple time-scales recurrent neural network (MTRNN) is used as the dynamics learning module. Parametric bias (PB) nodes are attached to the weights of MTRNN as second-order network to modulate the behavior of MTRNN based on the properties of the tool. The generalization capability of neural networks provide the model the ability to deal with unknown tools. Experiments were conducted with the humanoid robot HRP-2 using no tool, I-shaped, T-shaped, and L-shaped tools. The distribution of PB values have shown that the model has learned that the robot's dynamic properties change when holding a tool. 
Motion generation experiments show that the tool-body assimilation model is capable of applying to unknown tools to generate goal-oriented motions.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"31 1","pages":"139-149"},"PeriodicalIF":0.0,"publicationDate":"2012-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2011.2177660","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62760485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
Interactive Learning in Continuous Multimodal Space: A Bayesian Approach to Action-Based Soft Partitioning and Learning
Pub Date : 2012-06-01 DOI: 10.1109/TAMD.2011.2170213
H. Firouzi, M. N. Ahmadabadi, Babak Nadjar Araabi, S. Amizadeh, M. Mirian, R. Siegwart
A probabilistic framework for interactive learning in continuous and multimodal perceptual spaces is proposed. In this framework, the agent learns the task along with an adaptive partitioning of its multimodal perceptual space. The learning process is formulated in a Bayesian reinforcement learning setting to facilitate the adaptive partitioning. The partitioning is done gradually and softly using Gaussian distributions. The parameters of the distributions are adapted based on the agent's estimate of its actions' expected values. The probabilistic nature of the method results in experience generalization in addition to robustness against uncertainty and noise. To benefit from the diversity of experience generalization in different perceptual subspaces, the learning is performed in multiple perceptual subspaces, including the original space, in parallel. In every learning step, the policies learned in the subspaces are fused to select the final action. This concurrent learning in multiple spaces and the decision fusion result in faster learning; the possibility of adding and/or removing sensors, i.e., gradual expansion or contraction of the perceptual space; and appropriate robustness against probable failure of, or ambiguity in, sensor data. Results of two sets of simulations, in addition to some experiments, are reported to demonstrate the key properties of the framework.
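A toy sketch of the core mechanism, soft Gaussian partitioning of a one-dimensional perceptual space with responsibility-weighted value updates and fusion at action-selection time; all parameters and the bandit-style task are assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch: soft-partition a 1-D perceptual space with Gaussian
# regions. Each region holds a value per action; a percept's normalized
# Gaussian likelihoods ("responsibilities") weight both the fused value
# estimate and the learning update.
centers = np.array([0.2, 0.5, 0.8])  # region means (assumed fixed here)
sigma = 0.15                         # shared std dev (assumed)
n_actions = 2
Q = np.zeros((len(centers), n_actions))

def responsibilities(x):
    like = np.exp(-0.5 * ((x - centers) / sigma) ** 2)
    return like / like.sum()

def q_values(x):
    return responsibilities(x) @ Q   # fused value under soft assignment

def update(x, a, reward, alpha=0.1):
    td = reward - q_values(x)[a]
    Q[:, a] += alpha * responsibilities(x) * td  # each region learns in proportion

# toy task: action 0 pays off on the left half, action 1 on the right
for _ in range(2000):
    x = rng.random()
    a = rng.integers(n_actions)
    update(x, a, 1.0 if (x < 0.5) == (a == 0) else 0.0)

assert q_values(0.1)[0] > q_values(0.1)[1]
assert q_values(0.9)[1] > q_values(0.9)[0]
```

The same responsibility weighting gives the generalization the abstract mentions: an update at one percept also adjusts nearby regions in proportion to their overlap.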
{"title":"Interactive Learning in Continuous Multimodal Space: A Bayesian Approach to Action-Based Soft Partitioning and Learning","authors":"H. Firouzi, M. N. Ahmadabadi, Babak Nadjar Araabi, S. Amizadeh, M. Mirian, R. Siegwart","doi":"10.1109/TAMD.2011.2170213","DOIUrl":"https://doi.org/10.1109/TAMD.2011.2170213","url":null,"abstract":"A probabilistic framework for interactive learning in continuous and multimodal perceptual spaces is proposed. In this framework, the agent learns the task along with adaptive partitioning of its multimodal perceptual space. The learning process is formulated in a Bayesian reinforcement learning setting to facilitate the adaptive partitioning. The partitioning is gradually and softly done using Gaussian distributions. The parameters of distributions are adapted based on the agent's estimate of its actions' expected values. The probabilistic nature of the method results in experience generalization in addition to robustness against uncertainty and noise. To benefit from experience generalization diversity in different perceptual subspaces, the learning is performed in multiple perceptual subspaces-including the original space-in parallel. In every learning step, the policies learned in the subspaces are fused to select the final action. This concurrent learning in multiple spaces and the decision fusion result in faster learning, possibility of adding and/or removing sensors-i.e., gradual expansion or contraction of the perceptual space-, and appropriate robustness against probable failure of or ambiguity in the data of sensors. 
Results of two sets of simulations in addition to some experiments are reported to demonstrate the key properties of the framework.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"4 1","pages":"124-138"},"PeriodicalIF":0.0,"publicationDate":"2012-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2011.2170213","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62760407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 13
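The mechanism this abstract describes (Gaussian soft partitions whose per-action value estimates are updated in proportion to each partition's responsibility for an observation, with the policies of parallel subspaces fused at decision time) can be illustrated with a small sketch. Everything below is an illustrative assumption, not the authors' implementation: the class name, the fixed partition centers, the simple value-tracking update, and the toy one-dimensional task.

```python
import numpy as np

class SoftPartitionLearner:
    """One perceptual subspace: Gaussian soft partitions, each holding an
    estimate of every action's expected value (a toy stand-in for the
    paper's Bayesian treatment)."""

    def __init__(self, centers, sigma, n_actions, lr=0.3):
        self.centers = np.asarray(centers, dtype=float)  # (K, D) partition centers
        self.sigma = sigma
        self.q = np.zeros((len(self.centers), n_actions))
        self.lr = lr

    def memberships(self, x):
        # Soft assignment of observation x to each Gaussian partition.
        d2 = ((self.centers - np.asarray(x, dtype=float)) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))
        return w / w.sum()

    def action_values(self, x):
        # Membership-weighted mixture of the partitions' value estimates.
        return self.memberships(x) @ self.q

    def update(self, x, action, reward):
        # Each partition adapts in proportion to its responsibility for x.
        w = self.memberships(x)
        self.q[:, action] += self.lr * w * (reward - self.q[:, action])

def fused_action(learners, views):
    """Decision fusion: sum the value estimates from all subspaces and
    pick the jointly best action."""
    totals = sum(l.action_values(v) for l, v in zip(learners, views))
    return int(np.argmax(totals))

# Toy task: for state x in [0, 1], action 0 is correct when x < 0.5,
# action 1 otherwise. Two subspaces (here, two views of the same scalar
# with different partitionings) are trained and fused.
rng = np.random.default_rng(0)
learners = [
    SoftPartitionLearner([[0.1], [0.3], [0.5], [0.7], [0.9]], sigma=0.15, n_actions=2),
    SoftPartitionLearner([[0.25], [0.75]], sigma=0.3, n_actions=2),
]
for _ in range(400):
    x = rng.random()
    correct = int(x >= 0.5)
    for a in (0, 1):                      # evaluate both actions on this state
        r = 1.0 if a == correct else 0.0
        for learner in learners:
            learner.update([x], a, r)
policy = [fused_action(learners, [[x], [x]]) for x in (0.1, 0.9)]
```

In this toy both subspaces see the same signal, so fusion mostly reinforces the same choice; in the paper's setting the subspaces observe different sensor modalities, which is where fusion buys robustness against a noisy or missing view.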
A Goal-Directed Visual Perception System Using Object-Based Top–Down Attention 基于对象的自上而下注意的目标导向视觉感知系统
Pub Date : 2012-03-01 DOI: 10.1109/TAMD.2011.2163513
Yuanlong Yu, G. Mann, R. Gosine
The selective attention mechanism is employed by humans and primates to realize a truly intelligent perception system, one with the cognitive capability of learning and thinking about how to perceive the environment autonomously. The attention mechanism involves top-down and bottom-up pathways that correspond to goal-directed and automatic perceptual behaviors, respectively. Rather than addressing automatic perception, this paper presents an artificial goal-directed visual perception system that uses the object-based top-down visual attention mechanism. This cognitive system can guide perception to an object of interest according to the current task, context, and learned knowledge. It consists of three successive stages: preattentive processing, top-down attentional selection, and post-attentive perception. The preattentive processing stage divides the input scene into homogeneous proto-objects, one of which is then selected by top-down attention and finally sent to the post-attentive perception stage for high-level analysis. Experimental results of target detection in cluttered environments are shown to validate this system.
人类和灵长类动物利用选择性注意机制来实现真正的智能感知系统,即具有学习和思考如何自主感知环境的认知能力。注意机制包括自顶向下和自底向上两种方式,分别对应于目标导向和自动感知行为。本文在不考虑自动感知的基础上,利用基于对象的自上而下视觉注意机制,提出了一种目标导向视觉感知的人工系统。这个认知系统可以根据当前的任务、语境和所学的知识,引导感知到感兴趣的对象。它包括三个连续的阶段:前注意加工、自上而下的注意选择和后注意感知。前注意加工阶段将输入场景划分为同质的原型对象,由自上而下的注意选择一个原型对象,最后送到后注意感知阶段进行高层次分析。实验结果验证了该系统的有效性。
{"title":"A Goal-Directed Visual Perception System Using Object-Based Top–Down Attention","authors":"Yuanlong Yu, G. Mann, R. Gosine","doi":"10.1109/TAMD.2011.2163513","DOIUrl":"https://doi.org/10.1109/TAMD.2011.2163513","url":null,"abstract":"The selective attention mechanism is employed by humans and primates to realize a truly intelligent perception system, which has the cognitive capability of learning and thinking about how to perceive the environment autonomously. The attention mechanism involves the top-down and bottom-up ways that correspond to the goal-directed and automatic perceptual behaviors, respectively. Rather than considering the automatic perception, this paper presents an artificial system of the goal-directed visual perception by using the object-based top-down visual attention mechanism. This cognitive system can guide the perception to an object of interest according to the current task, context and learned knowledge. It consists of three successive stages: preattentive processing, top-down attentional selection and post-attentive perception. The preattentive processing stage divides the input scene into homogeneous proto-objects, one of which is then selected by the top-down attention and finally sent to the post-attentive perception stage for high-level analysis. 
Experimental results of target detection in the cluttered environments are shown to validate this system.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"21 1","pages":"87-103"},"PeriodicalIF":0.0,"publicationDate":"2012-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2011.2163513","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62760273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 23
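The three-stage pipeline in this abstract (preattentive grouping of the scene into homogeneous proto-objects, top-down selection of the task-relevant one, and post-attentive analysis of only the attended object) can be sketched on a toy intensity grid. The flood-fill grouping, the "intensity closest to a target value" relevance test, and all function names below are illustrative assumptions standing in for the paper's image-based processing.

```python
import numpy as np

def _flood(image, labels, start, label):
    # Grow a 4-connected region of identical intensity from `start`.
    h, w = image.shape
    value, pixels, stack = image[start], [], [start]
    while stack:
        r, c = stack.pop()
        if 0 <= r < h and 0 <= c < w and labels[r, c] < 0 and image[r, c] == value:
            labels[r, c] = label
            pixels.append((r, c))
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return pixels

def preattentive(image):
    """Stage 1: divide the scene into homogeneous proto-objects."""
    labels = -np.ones(image.shape, dtype=int)
    objects = []
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            if labels[r, c] < 0:
                pixels = _flood(image, labels, (r, c), len(objects))
                objects.append({"value": int(image[r, c]), "pixels": pixels})
    return objects

def top_down_select(objects, target_value):
    """Stage 2: task-driven selection; attend the proto-object whose
    feature best matches the current goal."""
    return min(objects, key=lambda o: abs(o["value"] - target_value))

def post_attentive(obj):
    """Stage 3: high-level analysis of the attended object only
    (here just a bounding box and a size)."""
    rows = [r for r, _ in obj["pixels"]]
    cols = [c for _, c in obj["pixels"]]
    return {"bbox": (min(rows), min(cols), max(rows), max(cols)),
            "size": len(obj["pixels"])}

scene = np.array([[0, 0, 5, 5],
                  [0, 0, 5, 5],
                  [9, 9, 9, 9]])
objects = preattentive(scene)                    # three proto-objects
attended = top_down_select(objects, target_value=5)
report = post_attentive(attended)
```

The key design point the sketch preserves is that only one proto-object reaches the expensive third stage: the task (the target value) decides which one, which is what distinguishes the goal-directed pathway from purely bottom-up saliency.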