Flexible multitask computation in recurrent networks utilizes shared dynamical motifs

IF 21.2 · Tier 1 (Medicine) · Q1 NEUROSCIENCES · Nature Neuroscience · Pub Date: 2024-07-09 · DOI: 10.1038/s41593-024-01668-6
Laura N. Driscoll, Krishna Shenoy, David Sussillo
{"title":"利用共享动态图案在递归网络中进行灵活的多任务计算","authors":"Laura N. Driscoll, Krishna Shenoy, David Sussillo","doi":"10.1038/s41593-024-01668-6","DOIUrl":null,"url":null,"abstract":"Flexible computation is a hallmark of intelligent behavior. However, little is known about how neural networks contextually reconfigure for different computations. In the present work, we identified an algorithmic neural substrate for modular computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses revealed learned computational strategies mirroring the modular subtask structure of the training task set. Dynamical motifs, which are recurring patterns of neural activity that implement specific computations through dynamics, such as attractors, decision boundaries and rotations, were reused across tasks. For example, tasks requiring memory of a continuous circular variable repurposed the same ring attractor. We showed that dynamical motifs were implemented by clusters of units when the unit activation function was restricted to be positive. Cluster lesions caused modular performance deficits. Motifs were reconfigured for fast transfer learning after an initial phase of learning. This work establishes dynamical motifs as a fundamental unit of compositional computation, intermediate between neuron and network. As whole-brain studies simultaneously record activity from multiple specialized systems, the dynamical motif framework will guide questions about specialization and generalization. The authors identify reusable ‘dynamical motifs’ in artificial neural networks. These motifs enable flexible recombination of previously learned capabilities, promoting modular, compositional computation and rapid transfer learning. This discovery sheds light on the fundamental building blocks of intelligent behavior.","PeriodicalId":19076,"journal":{"name":"Nature neuroscience","volume":null,"pages":null},"PeriodicalIF":21.2000,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.nature.com/articles/s41593-024-01668-6.pdf","citationCount":"0","resultStr":"{\"title\":\"Flexible multitask computation in recurrent networks utilizes shared dynamical motifs\",\"authors\":\"Laura N. Driscoll, Krishna Shenoy, David Sussillo\",\"doi\":\"10.1038/s41593-024-01668-6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Flexible computation is a hallmark of intelligent behavior. However, little is known about how neural networks contextually reconfigure for different computations. In the present work, we identified an algorithmic neural substrate for modular computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses revealed learned computational strategies mirroring the modular subtask structure of the training task set. Dynamical motifs, which are recurring patterns of neural activity that implement specific computations through dynamics, such as attractors, decision boundaries and rotations, were reused across tasks. For example, tasks requiring memory of a continuous circular variable repurposed the same ring attractor. We showed that dynamical motifs were implemented by clusters of units when the unit activation function was restricted to be positive. Cluster lesions caused modular performance deficits. Motifs were reconfigured for fast transfer learning after an initial phase of learning. 
This work establishes dynamical motifs as a fundamental unit of compositional computation, intermediate between neuron and network. As whole-brain studies simultaneously record activity from multiple specialized systems, the dynamical motif framework will guide questions about specialization and generalization. The authors identify reusable ‘dynamical motifs’ in artificial neural networks. These motifs enable flexible recombination of previously learned capabilities, promoting modular, compositional computation and rapid transfer learning. This discovery sheds light on the fundamental building blocks of intelligent behavior.\",\"PeriodicalId\":19076,\"journal\":{\"name\":\"Nature neuroscience\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":21.2000,\"publicationDate\":\"2024-07-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.nature.com/articles/s41593-024-01668-6.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Nature neuroscience\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://www.nature.com/articles/s41593-024-01668-6\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"NEUROSCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nature neuroscience","FirstCategoryId":"3","ListUrlMain":"https://www.nature.com/articles/s41593-024-01668-6","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
Citations: 0

Abstract

Flexible computation is a hallmark of intelligent behavior. However, little is known about how neural networks contextually reconfigure for different computations. In the present work, we identified an algorithmic neural substrate for modular computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses revealed learned computational strategies mirroring the modular subtask structure of the training task set. Dynamical motifs, which are recurring patterns of neural activity that implement specific computations through dynamics, such as attractors, decision boundaries and rotations, were reused across tasks. For example, tasks requiring memory of a continuous circular variable repurposed the same ring attractor. We showed that dynamical motifs were implemented by clusters of units when the unit activation function was restricted to be positive. Cluster lesions caused modular performance deficits. Motifs were reconfigured for fast transfer learning after an initial phase of learning. This work establishes dynamical motifs as a fundamental unit of compositional computation, intermediate between neuron and network. As whole-brain studies simultaneously record activity from multiple specialized systems, the dynamical motif framework will guide questions about specialization and generalization.

The authors identify reusable 'dynamical motifs' in artificial neural networks. These motifs enable flexible recombination of previously learned capabilities, promoting modular, compositional computation and rapid transfer learning. This discovery sheds light on the fundamental building blocks of intelligent behavior.
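To make the setup concrete, below is a minimal sketch, assuming a PyTorch implementation, of a multitask recurrent network in the spirit of the paper: the network receives stimulus channels plus a one-hot rule input identifying the current task, and its activation function is restricted to be nonnegative (softplus here). All layer sizes, task counts and variable names are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch (not the authors' code) of a multitask RNN: stimulus inputs
# plus a one-hot "rule" vector select the task, and the unit activation is
# restricted to be nonnegative via softplus. Sizes below are illustrative.
import torch
import torch.nn as nn

class MultitaskRNN(nn.Module):
    def __init__(self, n_stim=4, n_tasks=15, n_hidden=256, n_out=3):
        super().__init__()
        n_in = n_stim + n_tasks            # stimulus channels + one-hot task rule
        self.w_in = nn.Linear(n_in, n_hidden)
        self.w_rec = nn.Linear(n_hidden, n_hidden)
        self.w_out = nn.Linear(n_hidden, n_out)
        self.act = nn.Softplus()           # nonnegative ("positive") activations
        self.alpha = 0.2                   # discretized leak: dt / tau

    def forward(self, x):
        # x: (time, batch, n_stim + n_tasks)
        T, B, _ = x.shape
        h = x.new_zeros(B, self.w_rec.out_features)
        outputs = []
        for t in range(T):
            # leaky firing-rate dynamics: h <- (1 - a) h + a * f(W_rec h + W_in x)
            h = (1 - self.alpha) * h + self.alpha * self.act(
                self.w_rec(h) + self.w_in(x[t]))
            outputs.append(self.w_out(h))
        return torch.stack(outputs)        # (time, batch, n_out)

# Tiny usage example with random data standing in for task trials.
model = MultitaskRNN()
x = torch.randn(100, 8, 4 + 15)            # 100 time steps, batch of 8
y = model(x)
print(y.shape)                             # torch.Size([100, 8, 3])
```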

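The dynamical systems analyses mentioned in the abstract typically involve locating approximate fixed points of the trained network under a fixed task context, which is where structures such as ring attractors and decision boundaries become visible. The sketch below assumes the MultitaskRNN defined above and follows the general recipe of minimizing the state speed from many candidate initial states; the optimizer and hyperparameters are illustrative assumptions.

```python
# A hedged sketch of fixed-point finding: from many candidate hidden states,
# minimize the squared one-step speed ||h_next - h||^2 with the input
# (stimulus + task rule) held constant. Low-speed solutions approximate the
# fixed and slow points that underlie dynamical motifs.
import torch

def find_fixed_points(model, x_const, h_inits, n_steps=2000, lr=0.01):
    # x_const: (1, n_in) constant input defining the task context.
    # h_inits: (n_points, n_hidden) candidate states, e.g. sampled from trials.
    h = h_inits.clone().requires_grad_(True)
    opt = torch.optim.Adam([h], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        # One step of the same leaky dynamics used in MultitaskRNN.forward.
        h_next = (1 - model.alpha) * h + model.alpha * model.act(
            model.w_rec(h) + model.w_in(x_const))
        speed = ((h_next - h) ** 2).sum(dim=1).mean()   # mean squared speed
        speed.backward()
        opt.step()
    return h.detach()   # low-speed states approximate fixed/slow points
```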
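Finally, a hedged sketch of the cluster-lesion idea described in the abstract: group units by how strongly their activity varies in each task, then silence one cluster at a time and re-evaluate every task. The k-means grouping, the per-unit normalization and the weight-zeroing scheme are illustrative assumptions layered on the MultitaskRNN above, not the authors' exact procedure.

```python
# A hedged sketch of cluster lesions: cluster units by per-task activity
# variance, then zero the weights that read from one cluster so its activity
# no longer influences recurrence or readout. Choices here are illustrative.
import numpy as np
import torch
from sklearn.cluster import KMeans

def cluster_units(task_variance, n_clusters=10, seed=0):
    # task_variance: (n_units, n_tasks) variance of each unit's activity per
    # task; normalize per unit so clustering reflects relative selectivity.
    norm = task_variance / (task_variance.max(axis=1, keepdims=True) + 1e-9)
    return KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(norm)

def lesion_cluster(model, labels, cluster_id):
    # Silence a cluster by zeroing the columns of the recurrent and output
    # weights that read from its units.
    mask = torch.as_tensor(labels == cluster_id)
    with torch.no_grad():
        model.w_rec.weight[:, mask] = 0.0
        model.w_out.weight[:, mask] = 0.0
```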
Source journal

Nature Neuroscience (Medicine / Neuroscience)
CiteScore: 38.60
Self-citation rate: 1.20%
Articles published: 212
Review time: 1 month
Journal description: Nature Neuroscience, a multidisciplinary journal, publishes papers of the utmost quality and significance across all realms of neuroscience. The editors welcome contributions spanning molecular, cellular, systems, and cognitive neuroscience, along with psychophysics, computational modeling, and nervous system disorders. While no area is off-limits, studies offering fundamental insights into nervous system function receive priority. The journal offers high visibility to both readers and authors, fostering interdisciplinary communication and accessibility to a broad audience. It maintains high standards of copy editing and production, rigorous peer review, rapid publication, and operates independently from academic societies and other vested interests. In addition to primary research, Nature Neuroscience features news and views, reviews, editorials, commentaries, perspectives, book reviews, and correspondence, aiming to serve as the voice of the global neuroscience community.
Latest articles in this journal

Deep RNA sequencing of human dorsal root ganglion neurons reveals somatosensory mechanisms
Mapping out multiple sclerosis with spatial transcriptomics
Cell type mapping reveals tissue niches and interactions in subcortical multiple sclerosis lesions
Spatially resolved gene signatures of white matter lesion progression in multiple sclerosis
Smelling a concept