
Proceedings of the 4th International Conference on Development and Learning, 2005 — Latest Publications

Statistical Characteristics of Velocity of Movements of Limbs in Young Infants during the Conjugate Reinforcement Mobile Task
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490984
R. Saji, H. Watanabe, G. Taga
In this paper, we present a statistical characterization of the time series of limb-movement velocity in young infants during the conjugate reinforcement mobile task. We estimate the mean square velocity and the probability density function (PDF) of the time rate of change of velocity. We found that the PDF is universally symmetric, with a sharp peak at the origin and exponential tails. This result suggests that the PDF is a useful measure that reflects motor pattern generation and memory formation during the mobile task.
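The estimate described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not code from the paper: the Laplace-driven synthetic signal is an assumption chosen only to reproduce the reported sharp central peak and exponential tails, and the function names are hypothetical.

```python
import numpy as np

def mean_square_velocity(v):
    """Mean square velocity of a recorded velocity time series."""
    return float(np.mean(np.square(v)))

def increment_pdf(v, bins=41, span=5.0):
    """Histogram estimate of the PDF of velocity increments
    dv = v[t+1] - v[t], normalized to unit standard deviation."""
    dv = np.diff(v)
    dv = dv / dv.std()
    density, edges = np.histogram(dv, bins=bins, range=(-span, span),
                                  density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, density

# Synthetic stand-in for a limb-velocity recording: Laplace-distributed
# increments give a sharp peak at the origin and exponential tails,
# the shape the abstract reports for infant limb movements.
rng = np.random.default_rng(0)
v = np.cumsum(rng.laplace(size=100_000))
x, p = increment_pdf(v)
```

With real recordings, `v` would be the measured limb velocity, and comparing the estimated PDF across sessions is what makes it usable as a measure of motor pattern change.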
Citations: 0
Inhibition in Cognitive Development: Contextual Impairments in Autism
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490978
P. Bjorne, B. Johansson, C. Balkenius
Persons with autism, probably due to early sensory impairments, attend to and select stimuli in an uncommon way. Inhibition of some features of a stimulus, such as location and shape, might be intact, while other features, such as color, are not as readily inhibited. Stimuli irrelevant to the task might be attended to. This results in a learning process in which irrelevant stimuli are erroneously activated and maintained. Therefore, we propose that the developmental pathway and behavior of persons with autism need to be understood in a framework that includes inhibitory processes as well as context learning and maintenance. We believe that this provides a fruitful framework for understanding the causes of the seemingly diverse and complex cognitive difficulties seen in autism.
Citations: 0
Self-Other Motion Equivalence Learning for Head Movement Imitation
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490958
Y. Nagai
Summary form only given. This paper presents a learning model for head movement imitation using motion equivalence between the actions of the self and the actions of another person. Human infants can imitate head and facial movements presented by adults. An open question regarding the imitation ability of infants is what equivalence between themselves and others infants utilize to imitate actions presented by adults (Meltzoff and Moore, 1997). A self-produced head movement or facial movement cannot be perceived in the same modality in which the action of another is perceived. Some researchers have developed robotic models that imitate human head movement. However, their models used human posture data that cannot be detected by robots, and/or the relationships between the actions of humans and robots were fully defined by the designers. The model presented here enables a robot to learn self-other equivalence for imitating human head movement using only self-detected sensor information. On the basis of the evidence that infants imitate actions more readily when they observe them with movement than without, my model utilizes motion information about actions. The motion of a self-produced action, detected by the robot's somatic sensors, is represented as angular displacement vectors of the robot's head. The motion of a human action is detected as optical flow in the robot's visual perception when the robot gazes at a human face. Using these representations, a robot learns self-other motion equivalence for head movement imitation through the experience of visually tracking a human face. In face-to-face interactions, the robot first looks at the person's face as an interesting target and detects optical flow in its camera image when the person turns her head to one side; the article also shows the optical flow detected when the person turned her head from the center to the robot's left. The ability to visually track a human face then enables the robot to turn its head in the same direction as the person, because the position of the person's face moves in the camera image. The robot's movement vectors detected when it turned its head to the left by tracking the person's face are also shown, in which the lines in the circles denote the angular displacement vectors in the eight motion directions. As a result, the robot finds that the self-movement vectors are activated in the same motion directions as the optical flow of the human head movement. This self-other motion equivalence is acquired through Hebbian learning. Experiments with the robot verified that the model enabled it to acquire the motion equivalence between itself and a human within a few minutes of online learning. The robot was able to imitate human head movement using the acquired sensorimotor mapping. This imitation ability could lead to the development of joint visual attention by using an object as a target to be attended (Nagai, 2005).
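The Hebbian association the summary describes, between optical-flow directions and self-motion directions in eight discretized directions, can be illustrated with a toy sketch. This is not the paper's implementation: the learning-rate value, noise model, and function names are assumptions, and real flow/motor activations would come from the robot's camera and somatic sensors.

```python
import numpy as np

N_DIR = 8  # the eight motion directions mentioned in the summary

def hebbian_update(W, flow, motor, lr=0.1):
    """Hebbian rule: strengthen links between co-active visual-flow
    units (observed motion) and motor units (self-produced motion)."""
    W += lr * np.outer(flow, motor)
    return W

def imitate(W, flow):
    """Choose the self-motion direction most strongly associated
    with the observed optical-flow pattern."""
    return int(np.argmax(flow @ W))

rng = np.random.default_rng(1)
W = np.zeros((N_DIR, N_DIR))

# Face-tracking experience: when the person's head moves in direction d,
# tracking the face makes the robot's own head move in direction d too,
# so the flow unit and motor unit for d are co-active (plus noise).
for _ in range(500):
    d = int(rng.integers(N_DIR))
    flow = 0.1 * rng.random(N_DIR)
    flow[d] += 1.0
    motor = 0.1 * rng.random(N_DIR)
    motor[d] += 1.0
    W = hebbian_update(W, flow, motor)
```

After learning, presenting a pure optical-flow direction to `imitate` returns the matching self-motion direction, which is the self-other equivalence the model acquires.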
Citations: 2
Learning the Correspondence between Continuous Speeches and Motions
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490983
O. Natsuki, N. Arata, I. Yoshiaki
Summary form only given. Roy (1999) developed a computational model of early lexical learning to address three questions: First, how do infants discover linguistic units? Second, how do they learn perceptually grounded semantic categories? And third, how do they learn to associate linguistic units with appropriate semantic categories? His model coupled speech recordings with static images of objects and acquired a lexicon of shape names. Kaplan et al. (2001) presented a model for teaching names of actions to an enhanced version of AIBO, which had built-in speech recognition facilities and behaviors. In this paper, we try to build a system that learns the correspondence between continuous speech and continuous motion without a built-in speech recognizer or built-in behaviors. We teach a RobotPHONE to respond to voices properly by taking its hands; for example, one says 'bye-bye' to the RobotPHONE while holding its hand and waving. From continuous input, the system must segment speech and discover acoustic units that correspond to words. The segmentation is based on recurrent patterns found by incremental reference interval-free continuous DP (IRIFCDP), following Kiyama et al. (1996) and Utsunomiya et al. (2004), and we accelerate the IRIFCDP using ShiftCDP (Itoh and Tanaka, 2004). The system also segments motion with the accelerated IRIFCDP, and it memorizes co-occurring speech and motion patterns. It can then respond properly to taught words by detecting them in speech input with ShiftCDP. We gave a demonstration with a RobotPHONE at the conference. We expect that the system can learn words in any language because it has no built-in facilities specific to a particular language.
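The word-spotting step, detecting a taught pattern inside a continuous stream, can be illustrated with a much simpler stand-in for ShiftCDP: sliding-window dynamic time warping over a 1-D feature sequence. This is an assumption-laden sketch, not the IRIFCDP/ShiftCDP algorithms themselves; the threshold, toy signal, and function names are invented for illustration.

```python
import numpy as np

def dtw_distance(a, b):
    """Plain dynamic-time-warping distance between two 1-D sequences,
    normalized by total length (a simplified stand-in for continuous DP)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)

def spot_pattern(stream, template, threshold=0.1):
    """Slide the taught template over the continuous stream and report
    window start positions where the warped distance is below threshold."""
    w = len(template)
    return [s for s in range(len(stream) - w + 1)
            if dtw_distance(stream[s:s + w], template) < threshold]

# Toy "speech" stream: a taught word embedded in low-level noise.
rng = np.random.default_rng(2)
word = np.sin(np.linspace(0, 3 * np.pi, 20))
stream = np.concatenate([rng.normal(0, 0.05, 30),
                         word,
                         rng.normal(0, 0.05, 30)])
hits = spot_pattern(stream, word)
```

In the actual system the stream would be a sequence of acoustic (or joint-angle) feature vectors rather than scalars, and recurrent patterns are discovered rather than given, but the matching idea is the same.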
Citations: 0
Towards Robot Soccer Team Behaviours Through Approximate Simulation
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490968
S. R. Young, S. Chalup
Robot soccer is now recognized as one of the most popular and efficient testbeds for intelligent robotics. It poses many challenges in computation, mechanics, control, software engineering, machine learning, and other fields. The international RoboCup initiative supports research into robot soccer and provides an excellent environment for investigating machine learning for robotics in simulation and in the real world.
Citations: 0
Color Tone Perception and Naming: Development in Acquisition of Color Modifiers
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490954
D. R. Wanasinghe, Charith N. W. Giragama, N. Bianchi-Berthouze
Color is one of the most obvious attributes by which children usually start to classify the objects they see. The purpose of this study was to investigate the development of children's ability to discriminate and name colors that varied in saturation and intensity (value) for a given hue (i.e., color tones). Perceptual and naming behaviors were assessed in 221 participants, aged between 8 and 24, grouped into three categories: elementary school, junior high school, and university students. Color tone perception was observed through an odd-one-out task, and naming responses were obtained in terms of the modifiers vivid, strong, dark, bright, dull, and pale. Results revealed that the discrimination of subtle variations of color tones in the two younger age groups was similar to that of the university students. In addition, it was found that elementary school children reliably start interpreting their experience of such variations with just three modifier terms: bright, strong, and dark. Knowledge of color modifier terms varied with age. When the naming task was constrained, a developmental order in the acquisition of such terms was observed. Salient dimensions underlying the judgments of color modifier terms were identified, and the importance of each dimension varied with age. At the elementary level, the semantic classification of color tones was based almost solely on intensity. At the junior high school level, saturation emerged as an important dimension in assigning modifiers.
Citations: 2
Transient Synchrony and Dynamical Representation of Behavioral Goals of the Prefrontal Cortex
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490985
K. Sakamoto, H. Mushiake, N. Saito, J. Tanji
Summary form only given. Behavioral planning requires organizing actions by integrating perceived or memorized information to achieve goals. Studies have suggested that the underlying neural mechanisms involve updating the representation of goals for action in associative cortices such as the prefrontal cortex (Saito et al., 2005). Although these mechanisms are still unknown, we assume that functional linking of neurons contributes to this transformation of behavioral goals. We therefore investigated the relation of synchronous firing of neurons to the transformation of goal representation by recording neurons from the dorsolateral prefrontal cortex (DLPFC) while monkeys performed a path-planning task (Mushiake et al., 2001) that required them to plan immediate goals of actions to achieve final goals. Two monkeys were trained to perform a path-planning task that required them to move a cursor to a goal in a lattice-like display. After the cursor appeared in the center of the lattice (start display), a goal was presented in a corner (final goal display). The delay 1 period was followed by the delay 2 period, in which part of the path in the lattice was blocked, preventing the cursor from moving along that path. A go signal was then provided to allow the monkey to move the cursor by one check of the lattice. To dissociate arm movements from cursor movements, the monkeys performed with three different arm-cursor assignments, which were changed every 48 trials. Neuronal pairs recorded simultaneously during more than two arm-cursor assignment blocks (> 96 trials) were included in the dataset. The analysis of task-related modulation of synchronous firing was based on the time-resolved cross-correlation method (Baker et al., 2001). This method can estimate neuronal synchrony well because it excludes the influence of firing-rate changes within and among trials by using the instantaneous firing rate (IFR) as the predictor. In an example, weak and strong increases in the co-firing rate of a neuronal pair are seen at the final goal display and the delay 2 period, respectively, while synchronized firing can be recognized at the delay 1 period without an accompanying increase in co-firing rate. We selected DLPFC neurons showing significant synchrony and goal-related activity with a gradual shift of representation from final to immediate goals before initiation of the action. Many of the DLPFC neurons showed transient enhancement of synchrony without firing-rate increases. Furthermore, such enhancement was nearly coincident with the timing of the shift in their goal representations. These results suggest that transient synchrony plays an important role in the process of transforming goal representations during behavioral planning.
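The core idea, distinguishing true spike synchrony from coincidences expected from firing rates alone, can be illustrated with a toy sketch. This is a simplified stand-in for the time-resolved cross-correlation of Baker et al. (2001), not a reimplementation: the smoothing window, bin model, and synthetic spike trains are assumptions for illustration only.

```python
import numpy as np

def excess_synchrony(spikes_a, spikes_b, smooth=51):
    """Count coincident spikes in two binned trains and compare with the
    number predicted from smoothed instantaneous firing rates (IFR) alone.
    A large observed/predicted ratio indicates synchrony beyond rate."""
    kernel = np.ones(smooth) / smooth
    rate_a = np.convolve(spikes_a, kernel, mode="same")  # IFR estimate
    rate_b = np.convolve(spikes_b, kernel, mode="same")
    observed = float(np.sum(spikes_a * spikes_b))   # actual coincidences
    predicted = float(np.sum(rate_a * rate_b))      # rate-only prediction
    return observed, predicted

rng = np.random.default_rng(3)
T, p = 20_000, 0.05
# Independent trains: coincidences should match the rate prediction.
a = (rng.random(T) < p).astype(float)
b = (rng.random(T) < p).astype(float)
# Synchronized pair: b2 copies a's spikes in half the bins, so it has the
# same mean rate as b but many exactly coincident spikes with a.
shared = rng.random(T) < 0.5
b2 = np.where(shared, a, b)

obs_ind, pred_ind = excess_synchrony(a, b)
obs_syn, pred_syn = excess_synchrony(a, b2)
```

The synchronized pair yields far more coincidences than the rate prediction while the independent pair does not, which mirrors the paper's point that synchrony can be enhanced without any firing-rate increase.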
Citations: 0
Emotional elicitation by dynamic facial expressions
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490973
W. Sato, S. Yoshikawa
In the present study, we investigated the emotional effect of the dynamic presentation of facial expressions. Dynamic presentation of facial expressions was implemented using a computer-morphing technique. We presented dynamic and static expressions of fear and happiness, as well as other dynamic and static mosaic images, to 17 subjects. Subjects rated the valence and arousal of their emotional response to the images. Results indicated higher reported arousal in response to dynamic presentations than to static facial expressions (for both emotions) and to mosaic images. These results suggest that the specific effect of the dynamic presentation of emotional facial expressions is that it enhances the overall emotional experience without a corresponding qualitative change in that experience, and that this effect is not restricted to facial images
{"title":"Emotional elicitation by dynamic facial expressions","authors":"W. Sato, S. Yoshikawa","doi":"10.1109/DEVLRN.2005.1490973","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490973","url":null,"abstract":"In the present study, we investigated the emotional effect of the dynamic presentation of facial expressions. Dynamic presentation of facial expressions was implemented using a computer-morphing technique. We presented dynamic and static expressions of fear and happiness, as well as other dynamic and static mosaic images, to 17 subjects. Subjects rated the valence and arousal of their emotional response to the images. Results indicated higher reported arousal in response to dynamic presentations than to static facial expressions (for both emotions) and to mosaic images. These results suggest that the specific effect of the dynamic presentation of emotional facial expressions is that it enhances the overall emotional experience without a corresponding qualitative change in that experience, and that this effect is not restricted to facial images","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131285255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Prototype-specific learning for children's vocabulary
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490982
S. Hidaka, J. Saiki
Several studies have suggested that knowledge about the relationship between vocabulary and perceptual objects works as a constraint enabling children to generalize novel words quickly. Children's bias in novel word generalization is considered to reflect their prior knowledge and has been investigated in various contexts. In particular, children are biased to attend to the shape similarity of solid objects and the material similarity of nonsolid substances in novel word acquisition (Imai and Gentner, 1997). A few studies reported that a model based on a Boltzmann machine could explain categorization bias among shape, material and solidity by learning an artificial vocabulary environment (Colunga and Smith, 2000; Samuelson, 2002). The model has few constraints within its internal structure, but bias emerges through learning artificial vocabulary using simple statistical properties of entities' shape, solidity and count/mass syntactic class (Samuelson and Smith, 1999). We proposed a model (prototype-specific attention learning; PSAL) that learns optimal feature attention for specific prototypes of vocabulary. The Boltzmann machine model learns vocabulary in a uniform feature space; PSAL, by contrast, learns it in a feature space with a different metric specific to each proximal prototype. Real children show categorization bias robustly across various learning environments, so a model should be robust to various environments. Therefore, we investigated how the two models behave in a few typical vocabulary environments and discuss how prototype-specific learning influences categorization bias
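The contrast the abstract draws — one uniform feature space versus a metric attached to each prototype — can be sketched as follows. The two-feature setup (shape vs. material), the particular weights, and the function name are illustrative assumptions, not the authors' PSAL implementation:

```python
import numpy as np

def nearest_prototype(x, prototypes, attention):
    """Classify x by the nearest prototype, where each prototype row
    carries its own attention weights (its own metric), in contrast to
    a single uniform feature space shared by the whole vocabulary."""
    # Weighted squared distance to each prototype under its own metric.
    d2 = (attention * (x - prototypes) ** 2).sum(axis=1)
    return int(np.argmin(d2))

# Features: [shape match to category, material match to category].
prototypes = np.array([[1.0, 0.0],   # a solid-object word
                       [0.0, 1.0]])  # a nonsolid-substance word
attention = np.array([[1.0, 0.1],    # solids: attend mostly to shape
                      [0.1, 1.0]])   # nonsolids: attend mostly to material
novel = np.array([0.9, 0.8])  # novel item, shape-similar to the solid prototype
label = nearest_prototype(novel, prototypes, attention)
```

Under the prototype-specific weights the distances come out 0.074 vs. 0.121, a clear shape-based choice; under a uniform metric (all weights 1) they would be 0.65 vs. 0.85, so the per-prototype attention is what sharpens the shape bias for solid-like items.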
{"title":"Prototype-specific learning for children's vocabulary","authors":"S. Hidaka, J. Saiki","doi":"10.1109/DEVLRN.2005.1490982","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490982","url":null,"abstract":"Several studies suggested that knowledge about the relationship between vocabulary and perceptual objects work as a constraint to enable children to generalize novel words quickly. Children's bias in novel word generalization is considered to reflect their prior knowledge and is investigated in various contexts. In particular, children have a bias to attend to shape similarity of solid objects and material similarity of nonsolid substance in novel word acquisition (Imai and Gentner, 1997). A few studies reported that a model based on Boltzmann machine could explain categorization bias among shape, material and solidity by learning an artificial vocabulary environment (Colunga and Smith, 2000 and Samuelson, 2002). The model has few constraints within its internal structure, but bias emerges through learning artificial vocabulary using simple statistical property about entities' shape, solidity and count/mass syntactical class (Samuelson and Smith, 1999). We proposed a model (prototype-specific attention learning; PSAL) that could learn optimal feature attention for specific prototype of vocabulary. The Boltzmann machine model learns vocabulary in uniform feature space. On the other hand, PSAL learns it in feature space with different metric specific to proximal prototypes. Real children show categorization bias robustly in various learning environment, thus a model should have robustness to various environments. Therefore, we investigated how the two models behave in a few typical vocabulary environments and discuss how prototype-specific learning influence categorization bias","PeriodicalId":297121,"journal":{"name":"Proceedings. 
The 4nd International Conference on Development and Learning, 2005.","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121286781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Computational Model which Learns to Selectively Attend in Category Learning
Pub Date : 2005-07-19 DOI: 10.1109/DEVLRN.2005.1490981
Lingyun Zhang, G. Cottrell
Shepard et al. (1961) made empirical and theoretical investigations of the difficulty of different kinds of classifications using both learning and memory tasks. As the difficulty ranking mirrors the number of feature dimensions relevant to the category, later researchers took it as evidence that category learning includes learning how to selectively attend to only useful features, i.e. learning to optimally allocate attention to the dimensions relevant to the category (Rosch and Mervis, 1975). We built a recurrent neural network model that sequentially attended to individual features. Only one feature is explicitly available at a time (as in Rehder and Hoffman's eye-tracking setting (Rehder and Hoffman, 2003)), and previous information is represented implicitly in the network. The probabilities of eye movement from one feature to the next are kept as a fixation transition table. Fixations started randomly, without much bias toward any particular feature or movement. The network learned the relevant feature(s) and performed the classification by sequentially attending to these features. The rank order of learning times qualitatively matched the difficulty of the categories
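The fixation transition table described above behaves like a small Markov chain over features. A minimal sketch follows; the uniform initialization and the reward-style reinforcement of one column are illustrative assumptions about how such a table could come to favor the relevant feature, not the paper's training procedure:

```python
import numpy as np

def sample_fixations(table, n_steps, rng, start=None):
    """Sample a fixation sequence; table[i] = P(next feature | feature i)."""
    n = table.shape[0]
    f = int(rng.integers(n)) if start is None else start
    seq = [f]
    for _ in range(n_steps - 1):
        f = int(rng.choice(n, p=table[f]))
        seq.append(f)
    return seq

n_features = 3
# Unbiased start: every transition equally likely, as in the model's
# random initial fixations.
table = np.full((n_features, n_features), 1.0 / n_features)

# Suppose learning discovers that feature 2 alone predicts the category:
# transitions into it are reinforced, then each row is renormalized.
table[:, 2] += 5.0
table /= table.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
seq = sample_fixations(table, 20, rng, start=0)
```

After the update, most sampled fixations land on the relevant feature, mimicking how the learned table concentrates gaze on the dimensions that determine category membership.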
{"title":"A Computational Model which Learns to Selectively Attend in Category Learning","authors":"Lingyun Zhang, G. Cottrell","doi":"10.1109/DEVLRN.2005.1490981","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490981","url":null,"abstract":"Shepard et al. (1961) made empirical and theoretical investigation of the difficulties of different kinds of classifications using both learning and memory tasks. As the difficulty rank mirrors the number of feature dimensions relevant to the category, later researchers took it as evidence that category learning includes learning how to selectively attend to only useful features, i.e. learning to optimally allocate the attention to those dimensions relative to the category (Rosch and Mervis, 1975). We built a recurrent neural network model that sequentially attended to individual features. Only one feature is explicitly available at one time (as in Rehder and Hoffman's eye tracking settings (Render and Hoffman, 2003)) and previous information is represented implicitly in the network. The probabilities of eye movement from one feature to the next is kept as a fixation transition table. The fixations started randomly without much bias on any particular feature or any movement. The network learned the relevant feature(s) and did the classification by sequentially attending to these features. The rank of the learning time qualitatively matched the difficulty of the categories","PeriodicalId":297121,"journal":{"name":"Proceedings. 
The 4nd International Conference on Development and Learning, 2005.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121108190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Journal
Proceedings. The 4nd International Conference on Development and Learning, 2005.