Back to the Continuous Attractor

Ábel Ságodi, Guillermo Martín-Sánchez, Piotr Sokół, Il Memming Park
arXiv:2408.00109 · arXiv - QuanBio - Neurons and Cognition · Published 2024-07-31
Citations: 0

Abstract

Continuous attractors offer a unique class of solutions for storing continuous-valued variables in recurrent system states for indefinitely long time intervals. Unfortunately, continuous attractors suffer from severe structural instability in general: they are destroyed by most infinitesimal changes of the dynamical law that defines them. This fragility limits their utility, especially in biological systems, as their recurrent dynamics are subject to constant perturbations. We observe that the bifurcations from continuous attractors in theoretical neuroscience models display various structurally stable forms. Although their asymptotic behaviors to maintain memory are categorically distinct, their finite-time behaviors are similar. We build on the persistent manifold theory to explain the commonalities between bifurcations from and approximations of continuous attractors. Fast-slow decomposition analysis uncovers the persistent manifold that survives the seemingly destructive bifurcation. Moreover, recurrent neural networks trained on analog memory tasks display approximate continuous attractors with predicted slow manifold structures. Therefore, continuous attractors are functionally robust and remain useful as a universal analogy for understanding analog memory.
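The fragility-versus-functional-robustness tension described in the abstract can be illustrated with a minimal sketch (not taken from the paper): a one-dimensional line attractor, where every state is a fixed point, compared against an infinitesimally perturbed version of the same dynamics. The perturbation destroys the continuum of fixed points, yet the state still drifts only slowly, which is the finite-time behavior the persistent-manifold argument captures.

```python
# Minimal illustrative sketch (an assumption of this document, not the
# paper's model): a 1D line attractor dx/dt = -x + w*x. At w = 1 every
# point x is a fixed point, so a stored analog value is held forever.
# Any perturbation of w destroys the continuum of fixed points, but for
# small perturbations the state drifts only slowly along the surviving
# slow manifold -- memory degrades gradually rather than collapsing.

def simulate(w, x0, T=50.0, dt=0.01):
    """Euler-integrate dx/dt = -x + w*x from x0 for total time T."""
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (-x + w * x)
    return x

x0 = 1.0
exact = simulate(1.0, x0)        # ideal line attractor: value held exactly
perturbed = simulate(0.999, x0)  # slightly leaky: slow drift, not collapse

print(exact)      # stays at 1.0
print(perturbed)  # decays only slightly over T=50 despite the broken attractor
```

Here the perturbed system's solution is x(T) = x0 · exp((w−1)·T), so a weight error of 0.001 costs only about 5% of the stored value over 50 time units, which is the sense in which the attractor is "functionally robust" on finite timescales.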