Title: Statistical Characteristics of Velocity of Movements of Limbs in Young Infants during the Conjugate Reinforcement Mobile Task
Authors: R. Saji, H. Watanabe, G. Taga
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490984
In this paper, we present a statistical characterization of the time series of limb movement velocity in young infants during the conjugate reinforcement mobile task. The mean square velocity and the probability density function (PDF) of the time rate of change of velocity are estimated. We found that the PDF is universally symmetric, with a sharpened peak at the origin and exponential tails. The result suggests that the PDF is a useful measure that reflects motor pattern generation and memory formation during the mobile task.
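The abstract does not specify the estimation procedure; the following is a minimal sketch, assuming a uniformly sampled velocity signal, of how the mean square velocity and a histogram-based PDF of the time rate of change of velocity could be estimated. Function and variable names are illustrative, not from the paper.

```python
import numpy as np

def velocity_statistics(v, dt, n_bins=51):
    """Estimate summary statistics of a limb velocity time series.

    v  : 1-D array of velocity samples (uniformly sampled)
    dt : sampling interval in seconds
    Returns the mean square velocity and a histogram-based estimate of the
    PDF of the time rate of change of velocity.
    """
    v = np.asarray(v, dtype=float)
    mean_square_velocity = np.mean(v ** 2)

    # Time rate of change of velocity (finite-difference acceleration).
    dv = np.diff(v) / dt

    # Histogram-based PDF estimate, normalized so it integrates to 1.
    density, edges = np.histogram(dv, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return mean_square_velocity, centers, density

# Synthetic example: Laplace-distributed increments give the symmetric,
# sharply peaked, exponential-tailed PDF shape described in the abstract.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dt = 0.01
    v = np.cumsum(rng.laplace(scale=0.05, size=5000))
    msv, centers, pdf = velocity_statistics(v, dt)
    print(msv, pdf.max())
```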
{"title":"Statistical Characteristics of Velocity of Movements of Limbs in Young Infants during the Conjugate Reinforcement Mobile Task","authors":"R. Saji, H. Watanabe, G. Taga","doi":"10.1109/DEVLRN.2005.1490984","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490984","url":null,"abstract":"In this paper, we demonstrate statistical identifications for the time series of velocity of the movements of limbs in young infants during the conjugate reinforcement mobile task. The mean square velocity and the probability density function (PDF) of the time rate change of velocity are estimated. We found that the PDF is universally symmetric with a sharpened peak at the origin and exponential-tails. The result suggests that the PDF is a useful measure that reflects the motor pattern generation and memory formation during the mobile task","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134520066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Inhibition in Cognitive Development: Contextual Impairments in Autism
Authors: P. Bjorne, B. Johansson, C. Balkenius
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490978
Persons with autism, probably due to early sensory impairments, attend to and select stimuli in an uncommon way. Inhibition of some features of a stimulus, such as location and shape, might be intact, while other features, for example color, are not as readily inhibited. Stimuli irrelevant to the task might be attended to. This results in a learning process in which irrelevant stimuli are erroneously activated and maintained. Therefore, we propose that the developmental pathway and behavior of persons with autism need to be understood within a framework that includes inhibitory processes as well as context learning and maintenance. We believe that this provides a fruitful framework for understanding the causes of the seemingly diverse and complex cognitive difficulties seen in autism.
{"title":"Inhibition in Cognitive Development: Contextual Impairments in Autism","authors":"P. Bjorne, B. Johansson, C. Balkenius","doi":"10.1109/DEVLRN.2005.1490978","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490978","url":null,"abstract":"Persons with autism, probably due to early sensory impairments, attend to and select for stimuli in an uncommon way. Inhibition of some features of a stimulus, such as location and shape, might be intact, while other features are not as readily inhibited, for example color. Stimuli irrelevant to the task might be attended to. This results in a learning process where irrelevant stimuli are erroneously activated and maintained. Therefore, we propose that the developmental pathway and behavior of persons with autism needs to be understood in a framework including discussions of inhibitory processes and context learning and maintenance. We believe that this provides a fruitful framework for understanding the causes of the seemingly diverse and complex cognitive difficulties seen in autism","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128023064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Self-Other Motion Equivalence Learning for Head Movement Imitation
Authors: Y. Nagai
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490958
Summary form only given. This paper presents a learning model for head movement imitation using motion equivalence between the actions of the self and the actions of another person. Human infants can imitate head and facial movements presented by adults. An open question regarding this imitation ability is what equivalence between themselves and others infants utilize to imitate actions presented by adults (Meltzoff and Moore, 1997): a self-produced head or facial movement cannot be perceived in the same modality in which the action of another is perceived. Some researchers have developed robotic models that imitate human head movement. However, their models used human posture data that cannot be detected by robots, and/or the relationships between the actions of humans and robots were fully defined by the designers. The model presented here enables a robot to learn self-other equivalence for imitating human head movement using only self-detected sensor information. On the basis of the evidence that infants imitate actions more readily when they observe the actions with movement rather than without movement, the model utilizes motion information about actions. The motion of a self-produced action, detected by the robot's somatic sensors, is represented as angular displacement vectors of the robot's head. The motion of a human action is detected as optical flow in the robot's visual perception when the robot gazes at a human face. Using these representations, a robot learns self-other motion equivalence for head movement imitation through the experience of visually tracking a human face. In face-to-face interactions, the robot first looks at the person's face as an interesting target and detects optical flow in its camera image when the person turns her head to one side; the article shows the optical flow detected when the person turned her head from the center to the robot's left. The ability to visually track a human face then leads the robot to turn its head in the same direction as the person, because the position of the person's face moves in the camera image. The article also shows the robot's movement vectors detected when it turned its head to the left side by tracking the person's face, in which the lines in the circles denote the angular displacement vectors in the eight motion directions. As a result, the robot finds that the self-movement vectors are activated in the same motion directions as the optical flow of the human head movement. This self-other motion equivalence is acquired through Hebbian learning. Experiments with the robot verified that the model enabled it to acquire the motion equivalence between itself and a human within a few minutes of online learning. The robot was able to imitate human head movement by using the acquired sensorimotor mapping. This imitation ability could lead to the development of joint visual attention by using an object as a target to be attended (Nagai, 200
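The summary describes the self-other mapping only at a high level; a minimal sketch of the Hebbian association it implies might look like the following, assuming both the observed optical flow and the robot's own head motion are summarized as activations over eight motion directions. The array shapes, learning rate, and names are illustrative assumptions, not the published model.

```python
import numpy as np

N_DIRECTIONS = 8  # eight coarse motion directions, as in the summary

class MotionEquivalenceLearner:
    """Hebbian association between observed optical-flow directions and
    self-produced head-motion directions (a sketch, not the published model)."""

    def __init__(self, lr=0.1):
        self.lr = lr
        # weights[i, j]: strength linking observed direction i to own direction j
        self.weights = np.zeros((N_DIRECTIONS, N_DIRECTIONS))

    def update(self, flow_activation, self_motion_activation):
        """Hebbian update: co-active observed/self directions are strengthened."""
        flow = np.asarray(flow_activation, dtype=float)
        own = np.asarray(self_motion_activation, dtype=float)
        self.weights += self.lr * np.outer(flow, own)

    def imitate(self, flow_activation):
        """Map an observed flow pattern to the self-motion direction to produce."""
        response = np.asarray(flow_activation, dtype=float) @ self.weights
        return int(np.argmax(response))

# While tracking a face, the robot's own head motion mirrors the observed flow,
# so the diagonal of the weight matrix grows and imitation becomes possible.
learner = MotionEquivalenceLearner()
rng = np.random.default_rng(0)
for _ in range(100):
    d = rng.integers(N_DIRECTIONS)
    observed = np.eye(N_DIRECTIONS)[d]
    produced = np.eye(N_DIRECTIONS)[d]   # tracking yields the same direction
    learner.update(observed, produced)
print(learner.imitate(np.eye(N_DIRECTIONS)[3]))  # -> 3
```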
{"title":"Self-Other Motion Equivalence Learning for Head Movement Imitation","authors":"Y. Nagai","doi":"10.1109/DEVLRN.2005.1490958","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490958","url":null,"abstract":"Summary form only given. This paper presents a learning model for head movement imitation using motion equivalence between the actions of the self and the actions of another person. Human infants can imitate head and facial movements presented by adults. An open question regarding the imitation ability of infants is what equivalence between themselves and other infants utilize to imitate actions presented by adults (Meltzolf and Moore, 1997). A self-produced head movement or facial movement cannot be perceived in the same modality that the action of another is perceived. Some researchers have developed robotic models to imitate human head movement. However, their models used human posture data that cannot be detected by robots, and/or the relationships between the actions of humans and robots were fully defined by the designers. The model presented here enables a robot to learn self-other equivalence to imitate human head movement by using only self-detected sensor information. On the basis of the evidence that infants more imitate actions when they observed the actions with movement rather than without movement my model utilizes motion information about actions. The motion of a self-produced action, which is detected by the robot's somatic sensors, is represented as angular displacement vectors of the robot's head. The motion of a human action is detected as optical flow in the robot's visual perception when the robot gazes at a human face. By using these representations, a robot learns self-other motion equivalence for head movement imitation through the experiences of visually tracking a human face. In face-to-face interactions as shown, the robot first looks at the person's face as an interesting target and detects optical flow in its camera image when the person turns her head to one side. The article also shows the optical flow detected when the person turned her head from the center to the robot's left. Then, the ability visually to track a human face enables the robot to turn its head into the same direction as the person because the position of the person's lace moves in the camera image. This also shows the robot's movement vectors detected when it turned its head to the left side by tracking the person's face, in which the lines in the circles denote the angular displacement vectors in the eight motion directions. As a result, the robot finds that the self-movement vectors are activated in the same motion directions as the optical flow of the human head movement. This self-other motion equivalence is acquired through Hebbian learning. Experiments using the robot shown verified that the model enabled the robot to acquire the motion equivalence between itself and a human within a few minutes of online learning. The robot was able to imitate human head movement by using the acquired sensorimotor mapping. This imitation ability could lead to the development of joint visual attention by using an object as a target to be attended (Nagai, 200","PeriodicalId":297121,"journal":{"name":"Proceedings. 
The 4nd International Conference on Development and Learning, 2005.","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122630714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Learning the Correspondence between Continuous Speeches and Motions
Authors: O. Natsuki, N. Arata, I. Yoshiaki
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490983
Summary form only given. Roy (1999) developed a computational model of early lexical learning to address three questions: first, how do infants discover linguistic units? Second, how do they learn perceptually grounded semantic categories? And third, how do they learn to associate linguistic units with appropriate semantic categories? His model coupled speech recordings with static images of objects and acquired a lexicon of shape names. Kaplan et al. (2001) presented a model for teaching names of actions to an enhanced version of AIBO, which had built-in speech recognition facilities and behaviors. In this paper, we try to build a system that learns the correspondence between continuous speech and continuous motion without a built-in speech recognizer or built-in behaviors. We teach a RobotPHONE to respond to voices properly by taking its hands; for example, one says 'bye-bye' to the RobotPHONE while holding its hand and waving it. From continuous input, the system must segment speech and discover acoustic units that correspond to words. The segmentation is based on recurrent patterns found by incremental reference interval-free continuous DP (IRIFCDP) (Kiyama et al., 1996; Utsunomiya et al., 2004), and we accelerate the IRIFCDP using ShiftCDP (Itoh and Tanaka, 2004). The system also segments motion with the accelerated IRIFCDP, and it memorizes co-occurring speech and motion patterns. It can then respond to taught words properly by detecting them in speech input with ShiftCDP. We gave a demonstration with a RobotPHONE at the conference. We expect that the system can learn words in any language because it has no built-in facilities specific to any particular language.
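ShiftCDP and IRIFCDP are specific continuous dynamic programming algorithms whose details are not given in this summary. As a rough illustration of the word-spotting step (detecting a taught word inside continuous input), a much simpler sliding-window dynamic-time-warping match over feature frames could look like this; the feature representation, threshold, and function names are assumptions, not the authors' method.

```python
import numpy as np

def dtw_distance(a, b):
    """Plain dynamic time warping distance between two feature sequences
    (frames x dims). A simplified stand-in for continuous DP matching."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)

def spot_word(stream, template, hop=5, threshold=1.0):
    """Slide a window the length of the taught template over the continuous
    feature stream and report where the match distance falls below threshold."""
    hits = []
    win = len(template)
    for start in range(0, len(stream) - win + 1, hop):
        d = dtw_distance(stream[start:start + win], template)
        if d < threshold:
            hits.append((start, d))
    return hits

# Toy usage: 10-dimensional feature frames (e.g. spectral features); the taught
# word template is embedded in the middle of a longer stream of noise.
rng = np.random.default_rng(1)
template = rng.normal(size=(20, 10))
stream = np.vstack([rng.normal(size=(40, 10)), template, rng.normal(size=(40, 10))])
print(spot_word(stream, template, hop=5, threshold=0.8))
```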
{"title":"Learning the Correspondence between Continuous Speeches and Motions","authors":"O. Natsuki, N. Arata, I. Yoshiaki","doi":"10.1109/DEVLRN.2005.1490983","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490983","url":null,"abstract":"Summary form only given. Roy (1999) developed a computational model of early lexical learning to address three questions: First, how do infants discover linguistic units? Second, how do they learn perceptually-grounded semantic categories? And third, how do they learn to associate linguistic units with appropriate semantic categories? His model coupled speech recordings with static images of objects, and acquired a lexicon of shape names. Kaplan et al. (2001) presented a model for teaching names of actions to an enhanced version of AIBO. The AIBO had built-in speech recognition facilities and behaviors. In this paper, we try to build a system that learns the correspondence between continuous speeches and continuous motions without a built-in speech recognizer nor built-in behaviors. We teach RobotPHONE to respond to voices properly by taking its hands. For example, one says 'bye-bye' to the RobotPHONE holding its hand and waving. From continuous input, the system must segment speech and discover acoustic units which correspond to words. The segmentation is done based on recurrent patterns which was found by incremental reference interval-free continuous DP (IRIFCDP) by Kiyama et al. (1996) and Utsunomiya et al. (2004), and we accelerate the IRIFCDP using ShiftCDP (Itoh and Tanaka, 2004). The system also segments motion by the accelerated IRIFCDP, and it memorizes co-occurring speech and motion patterns. Then, it can respond to taught words properly by detecting taught words in speech input by ShiftCDP. We gave a demonstration with a RobotPHONE at the conference. We expect that it can learn words in any languages because it has no built-in facilities specific to any language","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124116493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Towards Robot Soccer Team Behaviours Through Approximate Simulation
Authors: S. R. Young, S. Chalup
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490968
Robot soccer is now recognized as one of the most popular and efficient testbeds for intelligent robotics. It involves many challenges in computation, mechanics, control, software engineering, machine learning, and other fields. The international RoboCup initiative supports research into robot soccer and provides an excellent environment to investigate machine learning for robotics in simulation and in the real world.
{"title":"Towards Robot Soccer Team Behaviours Through Approximate Simulation","authors":"S. R. Young, S. Chalup","doi":"10.1109/DEVLRN.2005.1490968","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490968","url":null,"abstract":"Robot soccer is now recognized as one of the most popular and efficient testbeds for intelligent robotics. It involves many challenges for computation, mechanics, control, software engineering, machine learning, and other fields. The international RoboCup initiative supports research into robot soccer and provides an excellent environment to investigate machine learning for robotics in simulation and the real world","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125552929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Color Tone Perception and Naming: Development in Acquisition of Color Modifiers
Authors: D. R. Wanasinghe, Charith N. W. Giragama, N. Bianchi-Beithouze
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490954
Color is one of the most obvious attributes with which children usually start to classify the objects they see. The purpose of this study was to investigate the development of children's ability to discriminate and name colors that varied in saturation and intensity (value) for a given hue (i.e., color tones). Perceptual and naming behaviors were assessed in 221 participants, aged between 8 and 24 years, grouped into three categories: elementary school, junior high school, and university students. Color tone perception was observed through an odd-one-out task, and naming responses were obtained in terms of the modifiers vivid, strong, dark, bright, dull, and pale. Results revealed that the discrimination of subtle variations of color tones in the two younger age groups was similar to that of the university students. In addition, it was found that elementary school children reliably start interpreting their experience of such variations with just three modifier terms: bright, strong, and dark. Knowledge of color modifier terms varied with age. When the naming task was constrained, a developmental order in the acquisition of such terms was observed. Salient dimensions underlying the judgments of color modifier terms were identified, and the importance of each dimension varied with age. At the elementary school level, the semantic classification of color tones was strongly based on intensity alone; at the junior high school level, saturation emerged as an important dimension in assigning modifiers.
{"title":"Color Tone Perception and Naming: Development in Acquisition of Color Modifiers","authors":"D. R. Wanasinghe, Charith N. W. Giragama, N. Bianchi-Beithouze","doi":"10.1109/DEVLRN.2005.1490954","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490954","url":null,"abstract":"Color is one of the most obvious attributes with which children usually start to classify objects they see. The purpose of this study was to investigate the development of children's ability to discriminate and name colors that varied in saturation and intensity (value) for a given hue (i.e., color tones). Perceptual and naming behaviors were assessed in 221 children, aged between 8 and 24, grouped in three categories, elementary, junior high school and university students. Color tone perception was observed through odd-one-out task and naming responses were obtained in terms of modifiers: vivid, strong, dark, bright, dull, and pale. Results revealed that the discrimination of subtle variations of color tones in two younger age groups was similar to that of the university students. In addition, it was found that elementary school children reliably start interpreting their experience of such variations with just three modifier terms: bright, strong, and dark. The knowledge of color modifier terms varied with age. When the naming task was constrained, a developmental order in the acquisition of such terms was observed. Salient dimensions underlying the judgments of color modifier terms were identified. The importance of each dimension varied with age. At the level of elementary, the semantic classification of color tones was strongly based only on intensity. At the junior high school level, it was found that saturation emerged as an important dimension in assigning modifiers","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129267321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Transient Synchrony and Dynamical Representation of Behavioral Goals of the Prefrontal Cortex
Authors: K. Sakamoto, H. Mushiake, N. Saito, J. Tanji
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490985
Summary form only given. Behavioral planning requires organizing actions by integrating perceived or memorized information to achieve goals. Studies have suggested that the underlying neural mechanisms involve updating the representation of goals for action in associative cortices such as the prefrontal cortex (Saito et al., 2005). Although the underlying neural mechanisms are still unknown, we assume that functional linking of neurons contributes to this transformation of behavioral goals. We therefore investigated the relation of synchronous firing of neurons to the transformation of goal representation by recording neurons from the dorsolateral prefrontal cortex (DLPFC) while monkeys performed a path-planning task (Mushiake et al., 2001) that requires them to plan immediate goals of actions in order to achieve final goals. Two monkeys were trained to move a cursor to a goal in a lattice-like display. After the cursor appeared in the center of the lattice (start display), a goal was presented in a corner (final goal display). The delay 1 period was followed by the delay 2 period, in which part of a path in the lattice was blocked so that the cursor could not move through it. A go signal was then provided to allow the monkey to move the cursor by one square of the lattice. To dissociate arm movements from cursor movements, the monkeys performed the task with three different arm-cursor assignments, which were changed every 48 trials. Neuronal pairs recorded simultaneously during more than two arm-cursor assignment blocks (> 96 trials) were included in the dataset. The analysis of task-related modulation of synchronous firing was based on the time-resolved cross-correlation method (Baker et al., 2001). This method estimates neuronal synchrony well because it can exclude the influence of firing rate changes within and among trials by using the instantaneous firing rate (IFR) as the predictor. In an example pair, weak and strong increases in co-firing rate are seen at the final goal display and during the delay 2 period, respectively, while synchronized firing can be recognized during the delay 1 period without an accompanying increase in co-firing rate. We selected DLPFC neurons showing significant synchrony and goal-related activity with a gradual shift of representation from final to immediate goals before initiation of the action. Many of these DLPFC neurons showed transient enhancement of synchrony without firing-rate increases. Furthermore, such enhancement was nearly coincident with the timing of the shift in their goal representations. These results suggest that transient synchrony plays an important role in the process of transforming goal representations during behavioral planning.
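Baker et al. (2001) define the time-resolved cross-correlation method precisely; as a rough sketch of the idea the summary describes (comparing observed coincident spikes against a predictor built from instantaneous firing rates, so that rate changes are not mistaken for synchrony), one might write something like the following. The bin size, smoothing, and names are illustrative assumptions, not the published analysis.

```python
import numpy as np

def synchrony_excess(spikes_a, spikes_b, rate_a, rate_b, bin_s=0.001):
    """Rate-corrected co-firing, per time bin, summed across trials.

    spikes_a, spikes_b : (n_trials, n_bins) binary spike arrays for two neurons
    rate_a, rate_b     : (n_trials, n_bins) instantaneous firing rates (spikes/s),
                         e.g. obtained by kernel-smoothing each trial's spikes
    Returns the observed coincidence count minus the count predicted from the
    instantaneous rates alone.
    """
    observed = (spikes_a * spikes_b).sum(axis=0)                 # coincidences per bin
    predicted = (rate_a * bin_s * rate_b * bin_s).sum(axis=0)    # rate-based predictor
    return observed - predicted

# Toy usage: two neurons whose rates are equal but whose spikes are independent
# should give an excess near zero; genuinely synchronized spikes would give a
# positive excess even without a rate increase.
rng = np.random.default_rng(0)
n_trials, n_bins, bin_s = 200, 1000, 0.001
rate = np.full((n_trials, n_bins), 20.0)        # 20 spikes/s, flat for simplicity
a = rng.random((n_trials, n_bins)) < rate * bin_s
b = rng.random((n_trials, n_bins)) < rate * bin_s
print(synchrony_excess(a, b, rate, rate).mean())  # ~0 for independent firing
```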
{"title":"Transient Synchrony and Dynamical Representation of Behavioral Goals of the Prefrontal Cortex","authors":"K. Sakamoto, H. Mushiake, N. Saito, J. Tanji","doi":"10.1109/DEVLRN.2005.1490985","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490985","url":null,"abstract":"Summary form only given. Behavioral planning requires organizing actions by integrating perceived or memorized information to achieve goals. Studies have suggested that the underlying neural mechanisms involve updating representation of goals for action in associative cortices such as the prefrontal cortex (Saito et al., 2005). Although the underlying neural mechanisms are still unknown, we assume that functional linking of neurons would contribute to this transformation of behavioral goals. Thus, we investigated the relation of synchronous firing of neurons to the transformation of goal representation by recording neurons from the dorsolateral prefrontal cortex (DLPFC), while the monkeys performed a path-planning task (Mushiake et al., 2001) that requires them to plan immediate goals of actions to achieve final goals. Two monkeys were trained to perform a path-planning task that required them to move a cursor to a goal in a lattice-like display. After the cursor emerged in the center of the lattice (start display), a goal was presented in a corner (final goal display). The delay 1 period was followed by the delay 2 period, in which a part of the path in the lattice was blocked that disabled the cursor to move through the path. Then, a go signal was provided to allow the monkey to move the cursor for one check of the lattice. To dissociate arm movements and cursor movements, the monkeys to perform with three different arm-cursor assignments, which were changed every 48 trials. Neuronal pairs that were recorded simultaneously during more than two arm-cursor assignment blocks (> 96 trials) were included in the dataset. The analysis for task-related modulation of synchronous firing was based on the time-resolved cross-correlation method (Baker et al., 2001). This method can estimate neuronal synchrony well, because it can exclude the influence of firing rate change in and among trials by using instantaneous firing rate (IFK) for the predictor. In an example, weak and strong increase in co-firing rate of the neuronal pair is seen at final goal display and delay 2 period respectively, while synchronized firing can be recognized at delay 1 period without accompanying co-firing rate increase. We selected DLPFC neurons showing significant synchrony and goal-related activity with gradual shift of representation from final to immediate goals before initiation of the action. Many of the DLPFC neurons were found to show transient enhancement of synchrony without firing-rate increases. Furthermore, such enhancement was nearly coincident with the timing of shift in their goal representations. These results suggest that transient synchrony plays an important role in the transforming process of goal representations during behavioral planning","PeriodicalId":297121,"journal":{"name":"Proceedings. 
The 4nd International Conference on Development and Learning, 2005.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133377556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Emotional elicitation by dynamic facial expressions
Authors: W. Sato, S. Yoshikawa
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490973
In the present study, we investigated the emotional effect of the dynamic presentation of facial expressions. Dynamic presentation of facial expressions was implemented using a computer-morphing technique. We presented dynamic and static expressions of fear and happiness, as well as dynamic and static mosaic images, to 17 subjects. Subjects rated the valence and arousal of their emotional response to the images. Results indicated higher reported arousal in response to dynamic presentations than to static facial expressions (for both emotions) and to mosaic images. These results suggest that the specific effect of the dynamic presentation of emotional facial expressions is to enhance the overall emotional experience without a corresponding qualitative change in that experience, and that this effect is not restricted to facial images.
{"title":"Emotional elicitation by dynamic facial expressions","authors":"W. Sato, S. Yoshikawa","doi":"10.1109/DEVLRN.2005.1490973","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490973","url":null,"abstract":"In the present study, we investigated the emotional effect of the dynamic presentation of facial expressions. Dynamic presentation of facial expressions was implemented using a computer-morphing technique. We presented dynamic and static expressions of fear and happiness, as well as other dynamic and static mosaic images, to 17 subjects. Subjects rated the valence and arousal of their emotional response to the images. Results indicated higher reported arousal in response to dynamic presentations than to static facial expressions (for both emotions) and to mosaic images. These results suggest that the specific effect of the dynamic presentation of emotional facial expressions is that it enhances the overall emotional experience without a corresponding qualitative change in that experience, and that this effect is not restricted to facial images","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131285255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Prototype-specific learning for children's vocabulary
Authors: S. Hidaka, J. Saiki
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490982
Several studies have suggested that knowledge about the relationship between vocabulary and perceptual objects works as a constraint that enables children to generalize novel words quickly. Children's bias in novel word generalization is considered to reflect their prior knowledge and has been investigated in various contexts. In particular, in novel word acquisition children are biased to attend to shape similarity for solid objects and to material similarity for nonsolid substances (Imai and Gentner, 1997). A few studies have reported that a model based on a Boltzmann machine can explain categorization bias among shape, material, and solidity by learning an artificial vocabulary environment (Colunga and Smith, 2000; Samuelson, 2002). The model has few constraints within its internal structure, but bias emerges through learning an artificial vocabulary using simple statistical properties of entities' shape, solidity, and count/mass syntactic class (Samuelson and Smith, 1999). We propose a model (prototype-specific attention learning; PSAL) that learns optimal feature attention for specific prototypes of the vocabulary. The Boltzmann machine model learns vocabulary in a uniform feature space, whereas PSAL learns it in a feature space whose metric differs for each proximal prototype. Real children show categorization bias robustly in various learning environments, so a model should be robust to such variation. We therefore investigated how the two models behave in a few typical vocabulary environments and discuss how prototype-specific learning influences categorization bias.
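The summary does not give PSAL's equations; as a minimal sketch of the general idea (each prototype keeps its own attention weighting over feature dimensions, so similarity is measured in a prototype-specific metric), something like the following could serve as an illustration. The update rule and all names are assumptions, not the authors' model.

```python
import numpy as np

class PrototypeSpecificAttention:
    """Sketch of prototype-specific attention: each vocabulary prototype keeps
    its own attention weights over feature dimensions (e.g. shape vs. material),
    so distance is measured in a metric local to that prototype."""

    def __init__(self, prototypes):
        self.prototypes = np.asarray(prototypes, dtype=float)   # (k, d)
        k, d = self.prototypes.shape
        self.attention = np.ones((k, d)) / d                    # per-prototype weights

    def distances(self, x):
        diff2 = (self.prototypes - np.asarray(x, dtype=float)) ** 2
        return (self.attention * diff2).sum(axis=1)             # weighted distance per prototype

    def classify(self, x):
        return int(np.argmin(self.distances(x)))

    def learn(self, x, label, lr=0.05):
        """Shift the labeled prototype's attention toward dimensions on which the
        example is close to it (an assumed, illustrative update rule)."""
        diff2 = (self.prototypes[label] - np.asarray(x, dtype=float)) ** 2
        self.attention[label] += lr * (1.0 / (1.0 + diff2))     # small error -> more attention
        self.attention[label] /= self.attention[label].sum()    # keep weights normalized

# Toy usage: prototype 0 behaves like a "shape word" (dimension 0 is stable),
# prototype 1 like a "material word" (dimension 1 is stable); the two prototypes
# end up attending to different dimensions.
model = PrototypeSpecificAttention(prototypes=[[1.0, 0.0], [0.0, 1.0]])
rng = np.random.default_rng(0)
for _ in range(200):
    model.learn([1.0, rng.random()], label=0)   # same shape, varying material
    model.learn([rng.random(), 1.0], label=1)   # varying shape, same material
print(model.attention)
```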
{"title":"Prototype-specific learning for children's vocabulary","authors":"S. Hidaka, J. Saiki","doi":"10.1109/DEVLRN.2005.1490982","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490982","url":null,"abstract":"Several studies suggested that knowledge about the relationship between vocabulary and perceptual objects work as a constraint to enable children to generalize novel words quickly. Children's bias in novel word generalization is considered to reflect their prior knowledge and is investigated in various contexts. In particular, children have a bias to attend to shape similarity of solid objects and material similarity of nonsolid substance in novel word acquisition (Imai and Gentner, 1997). A few studies reported that a model based on Boltzmann machine could explain categorization bias among shape, material and solidity by learning an artificial vocabulary environment (Colunga and Smith, 2000 and Samuelson, 2002). The model has few constraints within its internal structure, but bias emerges through learning artificial vocabulary using simple statistical property about entities' shape, solidity and count/mass syntactical class (Samuelson and Smith, 1999). We proposed a model (prototype-specific attention learning; PSAL) that could learn optimal feature attention for specific prototype of vocabulary. The Boltzmann machine model learns vocabulary in uniform feature space. On the other hand, PSAL learns it in feature space with different metric specific to proximal prototypes. Real children show categorization bias robustly in various learning environment, thus a model should have robustness to various environments. Therefore, we investigated how the two models behave in a few typical vocabulary environments and discuss how prototype-specific learning influence categorization bias","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121286781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A Computational Model which Learns to Selectively Attend in Category Learning
Authors: Lingyun Zhang, G. Cottrell
Pub Date: 2005-07-19 | DOI: 10.1109/DEVLRN.2005.1490981
Shepard et al. (1961) made an empirical and theoretical investigation of the difficulty of different kinds of classifications using both learning and memory tasks. Because the difficulty ranking mirrors the number of feature dimensions relevant to the category, later researchers took it as evidence that category learning includes learning how to selectively attend to only the useful features, i.e., learning to optimally allocate attention to the dimensions relevant to the category (Rosch and Mervis, 1975). We built a recurrent neural network model that sequentially attends to individual features. Only one feature is explicitly available at a time (as in Rehder and Hoffman's eye-tracking settings (Rehder and Hoffman, 2003)), and previous information is represented implicitly in the network. The probabilities of eye movement from one feature to the next are kept in a fixation transition table. Fixations started randomly, without much bias toward any particular feature or movement. The network learned the relevant feature(s) and performed the classification by sequentially attending to these features. The rank order of learning times qualitatively matched the difficulty of the categories.
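The summary names only the components (a fixation transition table feeding a recurrent classifier); as a small sketch of just the fixation-sampling part, a row-stochastic transition table over features could be used as below. The reinforcement rule toward informative features is an assumed, illustrative update, not the paper's learning procedure.

```python
import numpy as np

class FixationTable:
    """Sketch: probabilities of moving the eye from one feature to the next,
    stored as a row-stochastic transition table over feature dimensions."""

    def __init__(self, n_features, rng=None):
        self.rng = rng or np.random.default_rng()
        # start unbiased: every transition equally likely
        self.table = np.full((n_features, n_features), 1.0 / n_features)

    def sample_sequence(self, length, start=None):
        """Sample a sequence of fixated feature indices from the table."""
        n = self.table.shape[0]
        current = int(self.rng.integers(n)) if start is None else start
        sequence = [current]
        for _ in range(length - 1):
            current = int(self.rng.choice(n, p=self.table[current]))
            sequence.append(current)
        return sequence

    def reinforce(self, prev, nxt, lr=0.1):
        """Make the transition prev -> nxt more likely (illustrative update)."""
        self.table[prev, nxt] += lr
        self.table[prev] /= self.table[prev].sum()

# Toy usage with three binary features where only feature 0 is category-relevant
# (Shepard et al.'s type I problem): reinforcing fixations that land on feature 0
# concentrates the transition probabilities on that dimension.
table = FixationTable(n_features=3, rng=np.random.default_rng(0))
for _ in range(300):
    seq = table.sample_sequence(length=3)
    for prev, nxt in zip(seq, seq[1:]):
        if nxt == 0:
            table.reinforce(prev, nxt)
print(np.round(table.table, 2))
```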
{"title":"A Computational Model which Learns to Selectively Attend in Category Learning","authors":"Lingyun Zhang, G. Cottrell","doi":"10.1109/DEVLRN.2005.1490981","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490981","url":null,"abstract":"Shepard et al. (1961) made empirical and theoretical investigation of the difficulties of different kinds of classifications using both learning and memory tasks. As the difficulty rank mirrors the number of feature dimensions relevant to the category, later researchers took it as evidence that category learning includes learning how to selectively attend to only useful features, i.e. learning to optimally allocate the attention to those dimensions relative to the category (Rosch and Mervis, 1975). We built a recurrent neural network model that sequentially attended to individual features. Only one feature is explicitly available at one time (as in Rehder and Hoffman's eye tracking settings (Render and Hoffman, 2003)) and previous information is represented implicitly in the network. The probabilities of eye movement from one feature to the next is kept as a fixation transition table. The fixations started randomly without much bias on any particular feature or any movement. The network learned the relevant feature(s) and did the classification by sequentially attending to these features. The rank of the learning time qualitatively matched the difficulty of the categories","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121108190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}