
Latest publications from the 2019 Conference on Cognitive Computational Neuroscience

Narratives as Networks: Predicting Memory from the Structure of Naturalistic Events
Pub Date : 2021-04-24 DOI: 10.32470/ccn.2019.1170-0
Hongmi Lee, Janice Chen
Human life consists of a multitude of diverse and interconnected events. However, extant research has focused on how humans segment and remember discrete events from continuous input, with far less attention given to how the structure of connections between events impacts memory. We conducted an fMRI study in which subjects watched and recalled a series of realistic audiovisual narratives. By transforming narratives into networks of events, we found that more central events—those with stronger semantic or causal connections to other events—were better remembered. During encoding, central events evoked larger hippocampal event boundary responses associated with memory consolidation. During recall, high centrality predicted stronger activation in cortical areas involved in episodic recollection, and more similar neural representations across individuals. Together, these results suggest that when humans encode and retrieve complex real-world experiences, the reliability and accessibility of memory representations is shaped by their location within a network of events.
Cited by: 8
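The event-network idea in the abstract above can be sketched in toy form: events are nodes, semantic or causal links are weighted edges, and a simple strength centrality ranks events. The event names and edge weights below are invented for illustration, and weighted degree is only one of several centrality measures the authors could have used.

```python
def centrality(edges):
    """Weighted degree (strength): sum of edge weights touching each event."""
    scores = {}
    for (a, b), w in edges.items():
        scores[a] = scores.get(a, 0.0) + w
        scores[b] = scores.get(b, 0.0) + w
    return scores

# Hypothetical semantic/causal links between narrative events.
edges = {
    ("robbery", "chase"): 0.9,   # strong causal link
    ("robbery", "arrest"): 0.7,
    ("chase", "arrest"): 0.8,
    ("picnic", "chase"): 0.2,    # weak semantic link
}

scores = centrality(edges)
# The paper's prediction: the most central event is remembered best.
most_central = max(scores, key=scores.get)
```

On this toy graph, "chase" has the highest strength (0.9 + 0.8 + 0.2 = 1.9) and would be the event predicted to be best recalled.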
Do LSTMs know about Principle C?
Pub Date : 2019-09-14 DOI: 10.32470/ccn.2019.1241-0
Jeff Mitchell, N. Kazanina, Conor J. Houghton, J. Bowers
We investigate whether a recurrent network trained on raw text can learn an important syntactic constraint on coreference. A Long Short-Term Memory (LSTM) network that is sensitive to some other syntactic constraints was tested on psycholinguistic materials from two published experiments on coreference. Whereas the participants were sensitive to the Principle C constraint on coreference, the LSTM network was not. Our results suggest that, whether as cognitive models of linguistic processes or as engineering solutions in practical applications, recurrent networks may need to be augmented with additional inductive biases to be able to learn models and representations that fully capture the structures of language underlying comprehension.
Cited by: 3
Subtractive gating improves generalization in working memory tasks
Pub Date : 2019-09-14 DOI: 10.32470/ccn.2019.1352-0
M. L. Montero, Gaurav Malhotra, J. Bowers, R. P. Costa
It is largely unclear how the brain learns to generalize to new situations. Although deep learning models offer great promise as potential models of the brain, they break down when tested on novel conditions not present in their training datasets. One of the most successful model classes in machine learning is the gated recurrent neural network. Because of their working-memory properties, we refer to these networks here as working memory networks (WMNs). We compare WMNs with a biologically motivated variant of these networks. In contrast to the multiplicative gating used by WMNs, this new variant operates via subtractive gating (subWMN). We tested these two models in a range of working memory tasks: orientation recall with distractors, orientation recall with update/addition and distractors, and a more challenging task: sequence recognition based on the machine learning handwritten digits dataset. We evaluated the generalization properties of these two networks in working memory tasks by measuring how well they coped with three working memory loads: memory maintenance over time, making memories distractor-resistant, and memory updating. Across these tests subWMNs perform better and more robustly than WMNs. These results suggest that the brain may rely on subtractive gating for improved generalization in working memory tasks.
Cited by: 0
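The multiplicative-versus-subtractive contrast in the abstract above can be made concrete on a toy memory vector. The subtractive rule below is one plausible reading of subWMN, not the authors' exact equations; the gate and memory values are invented.

```python
def mult_gate(c, cand, f, i):
    """Multiplicative (LSTM-style) gating: the forget gate f scales the
    memory c, and the input gate i scales the candidate update."""
    return [fj * cj + ij * gj for cj, gj, fj, ij in zip(c, cand, f, i)]

def sub_gate(c, cand, f, i):
    """Subtractive gating: the forget gate subtracts from the memory
    instead of scaling it."""
    return [cj - fj + ij * gj for cj, gj, fj, ij in zip(c, cand, f, i)]

c, cand = [1.0, 0.5], [0.2, -0.4]   # memory contents and candidate update
f, i = [0.5, 0.9], [1.0, 0.0]       # forget and input gate activations

m_out = mult_gate(c, cand, f, i)
s_out = sub_gate(c, cand, f, i)
```

Note how the multiplicative rule can only shrink the memory toward zero, while the subtractive rule shifts it, which is one intuition for why the two variants generalize differently.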
Adversarial Training of Neural Encoding Models on Population Spike Trains
Pub Date : 2019-09-13 DOI: 10.32470/ccn.2019.1263-0
Poornima Ramesh, Mohamad Atayi, J. Macke
Neural population responses to sensory stimuli can exhibit both nonlinear stimulus-dependence and richly structured shared variability. Here, we show how adversarial training can be used to optimize neural encoding models to capture both the deterministic and stochastic components of neural population data. To account for the discrete nature of neural spike trains, we use and compare gradient estimators for adversarial optimization of neural encoding models. We illustrate our approach on population recordings from primary visual cortex. We show that adding latent noise-sources to a convolutional neural network yields a model which captures both the stimulus-dependence and noise correlations of the population activity.
Cited by: 8
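One standard gradient estimator for discrete samples, of the kind the abstract above alludes to, is the score-function (REINFORCE) estimator. The sketch below uses a single Bernoulli "spike" with rate sigmoid(theta) as a toy stand-in for a population spike train; it is not the authors' implementation.

```python
import math
import random

def score_function_grad(theta, f, n=50000, seed=0):
    """Monte-Carlo score-function estimate of d/dtheta E[f(s)] for
    s ~ Bernoulli(p), p = sigmoid(theta). Uses the identity
    d/dtheta log p(s; theta) = s - p for this parameterization."""
    rng = random.Random(seed)
    p = 1.0 / (1.0 + math.exp(-theta))
    total = 0.0
    for _ in range(n):
        s = 1.0 if rng.random() < p else 0.0
        total += f(s) * (s - p)   # f(sample) times the score
    return total / n

# Sanity check: with f(s) = s, the true gradient is p * (1 - p),
# which equals 0.25 at theta = 0.
g = score_function_grad(0.0, lambda s: s)
```

The appeal of this estimator is that it never differentiates through the discrete sample itself, which is what makes adversarial optimization of spiking models possible; its main cost is Monte-Carlo variance.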
Unfolding of multisensory inference in the brain and behavior
Pub Date : 2019-09-13 DOI: 10.32470/ccn.2019.1219-0
Yinan Cao, Hame Park, Bruno L. Giordano, C. Kayser, C. Spence, C. Summerfield
Yinan Cao (yinan.cao@psy.ox.ac.uk) University of Oxford, Walton Street, Oxford OX2 6AE, United Kingdom Hame Park Bielefeld University, 33615 Bielefeld, Germany Bruno L. Giordano* Centre National de la Recherche Scientifique and Aix-Marseille Université, Marseille, France Christoph Kayser* Bielefeld University, 33615 Bielefeld, Germany Charles Spence* University of Oxford, Walton Street, Oxford OX2 6GG, United Kingdom Christopher Summerfield* University of Oxford, Walton Street, Oxford OX2 6AE, United Kingdom [* Equal contributions]
Cited by: 0
Evolving the Olfactory System
Pub Date : 2019-09-11 DOI: 10.32470/ccn.2019.1355-0
G. R. Yang, Peter Y. Wang, Yi Sun, Ashok Litwin-Kumar, R. Axel, L. Abbott
Flies and mice are species separated by 600 million years of evolution, yet have evolved olfactory systems that share many similarities in their anatomic and functional organization. What functions do these shared anatomical and functional features serve, and are they optimal for odor sensing? In this study, we address the optimality of evolutionary design in olfactory circuits by studying artificial neural networks trained to sense odors. We found that artificial neural networks quantitatively recapitulate structures inherent in the olfactory system, including the formation of glomeruli onto a compression layer and sparse and random connectivity onto an expansion layer. Finally, we offer theoretical justifications for each result. Our work offers a framework to explain the evolutionary convergence of olfactory circuits, and gives insight and logic into the anatomic and functional structure of the olfactory system.
Cited by: 2
How do people learn how to plan?
Pub Date : 2019-09-01 DOI: 10.32470/ccn.2019.1313-0
Y. Jain, Sanit Gupta, V. Rakesh, P. Dayan, Frederick Callaway, Falk Lieder
How does the brain learn how to plan? We reverse-engineer people's underlying learning mechanisms by combining rational process models of cognitive plasticity with recently developed empirical methods that allow us to trace the temporal evolution of people's planning strategies. We find that our Learned Value of Computation model (LVOC) accurately captures people's average learning curve. However, there were also substantial individual differences in metacognitive learning that are best understood in terms of multiple different learning mechanisms, including strategy selection learning. Furthermore, we observed that LVOC could not fully capture people's ability to adaptively decide when to stop planning. We successfully extended the LVOC model to address these discrepancies. Our models broadly capture people's ability to improve their decision mechanisms and represent a significant step towards reverse-engineering how the brain learns increasingly effective cognitive strategies through its interaction with the environment.
Cited by: 8
Modeling the development of decision making in volatile environments using strategies, reinforcement learning, and Bayesian inference
Pub Date : 2019-09-01 DOI: 10.32470/ccn.2019.1409-0
Maria K. Eckstein, Sarah L. Master, R. Dahl, L. Wilbrecht, A. Collins
Continuously adjusting behavior in changing environments is a crucial skill for intelligent creatures, but we know little about how this ability develops in humans. Here, we investigate this question in a large sample using behavioral analyses and computational modeling. We assessed over 200 participants (ages 8-30) on a probabilistic, volatile reinforcement learning task, and measured pubertal development status and salivary testosterone. We used three classes of models to analyze behavior on the task: fixed strategies, incremental reinforcement learning, and Bayesian inference. All model classes provided converging evidence for a decrease in decision noise or exploration with age. Individual models also provided insight into unique aspects of decision making, such as changes in estimated reward probabilities, and sex-specific changes in the sensitivity to positive versus negative outcomes. Our results show that the combination of models can provide detailed insight into the development of decision making, and into complex cognition more generally.
Cited by: 1
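The "incremental reinforcement learning" model class named in the abstract above is typically a Rescorla-Wagner value update paired with a softmax choice rule; the sketch below is that generic form, with parameter names of my choosing rather than the paper's.

```python
import math

def rw_update(q, choice, reward, alpha):
    """Move the chosen option's value toward the reward at learning rate alpha."""
    q = list(q)
    q[choice] += alpha * (reward - q[choice])
    return q

def softmax(q, beta):
    """Choice probabilities; a higher inverse temperature beta means less
    decision noise, the quantity reported to decrease with age."""
    exps = [math.exp(beta * v) for v in q]
    z = sum(exps)
    return [e / z for e in exps]

# One trial: option 0 is chosen and rewarded, then values feed the policy.
q = rw_update([0.0, 0.0], choice=0, reward=1.0, alpha=0.5)   # [0.5, 0.0]
p = softmax(q, beta=2.0)
```

Fitting alpha and beta per participant, and comparing the fit against fixed-strategy and Bayesian alternatives, is the usual route to the kind of model-comparison results the abstract reports.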
Reading times and temporo-parietal BOLD activity encode the semantic hierarchy of language prediction
Pub Date : 2019-09-01 DOI: 10.32470/ccn.2019.1333-0
L. Schmitt, J. Erb, Sarah Tune, A. Rysop, G. Hartwigsen, J. Obleser
When poor acoustics challenge speech comprehension, listeners are thought to increasingly draw on semantic context to predict upcoming speech. However, previous research focused mostly on speech material with short timescales of context (e.g., isolated sentences). In an fMRI experiment, 30 participants listened to a one-hour narrative incorporating a multitude of timescales while confronted with competing resynthesized natural sounds. We modeled semantic predictability at five timescales of increasing context length by computing the similarity between word embeddings. An encoding model revealed that short informative timescales are coupled to increased activity in the posterior portion of superior temporal gyrus, whereas long informative timescales are coupled to increased activity in parietal regions like the angular gyrus. In a second experiment, we probed the behavioral relevance of semantic timescales in language prediction: 11 participants performed a self-paced reading task on a text version of the narrative. Reading times sped up for the shortest informative timescale, but also tended to speed up for the longest informative timescales. Our results suggest that short-term dependencies as well as the gist of a story drive behavioral processing fluency and engage a temporo-parietal processing hierarchy.
Cited by: 0
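The predictability measure described in the abstract above, similarity between a word's embedding and its preceding context at a given timescale, can be sketched as follows. The 2-d vectors are toy stand-ins for real word embeddings, and averaging the last k context words is one simple reading of "timescales of increasing context length".

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def predictability(word_vec, context_vecs, k):
    """Similarity between a word and the mean embedding of the last k words."""
    ctx = context_vecs[-k:]
    mean = [sum(v[d] for v in ctx) / len(ctx) for d in range(len(word_vec))]
    return cosine(word_vec, mean)

context = [[0.0, 1.0], [1.0, 0.0]]
short = predictability([1.0, 0.0], context, k=1)   # only the last word
long_ = predictability([1.0, 0.0], context, k=2)   # the whole context
```

Computing this score at several values of k yields one regressor per timescale, which is the structure the encoding model in the abstract exploits.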
Tracking Naturalistic Linguistic Predictions with Deep Neural Language Models
Pub Date : 2019-09-01 DOI: 10.32470/CCN.2019.1096-0
Micha Heilbron, Benedikt V. Ehinger, P. Hagoort, F. P. Lange
Prediction in language has traditionally been studied using simple designs in which neural responses to expected and unexpected words are compared in a categorical fashion. However, these designs have been contested as being 'prediction encouraging', potentially exaggerating the importance of prediction in language understanding. A few recent studies have begun to address these worries by using model-based approaches to probe the effects of linguistic predictability in naturalistic stimuli (e.g. continuous narrative). However, these studies so far only looked at very local forms of prediction, using models that take no more than the prior two words into account when computing a word's predictability. Here, we extend this approach using a state-of-the-art neural language model that can take roughly 500 times longer linguistic contexts into account. Predictability estimates from the neural network offer a much better fit to EEG data from subjects listening to naturalistic narrative than simpler models, and reveal strong surprise responses akin to the P200 and N400.
Citations: 14
Journal: 2019 Conference on Cognitive Computational Neuroscience