Initialization and self-organized optimization of recurrent neural network connectivity.

HFSP Journal · Pub Date: 2009-10-01 · Epub Date: 2009-10-26 · DOI: 10.2976/1.3240502
Joschka Boedecker, Oliver Obst, N Michael Mayer, Minoru Asada
{"title":"递归神经网络连通性的初始化与自组织优化。","authors":"Joschka Boedecker,&nbsp;Oliver Obst,&nbsp;N Michael Mayer,&nbsp;Minoru Asada","doi":"10.2976/1.3240502","DOIUrl":null,"url":null,"abstract":"<p><p>Reservoir computing (RC) is a recent paradigm in the field of recurrent neural networks. Networks in RC have a sparsely and randomly connected fixed hidden layer, and only output connections are trained. RC networks have recently received increased attention as a mathematical model for generic neural microcircuits to investigate and explain computations in neocortical columns. Applied to specific tasks, their fixed random connectivity, however, leads to significant variation in performance. Few problem-specific optimization procedures are known, which would be important for engineering applications, but also in order to understand how networks in biology are shaped to be optimally adapted to requirements of their environment. We study a general network initialization method using permutation matrices and derive a new unsupervised learning rule based on intrinsic plasticity (IP). The IP-based learning uses only local learning, and its aim is to improve network performance in a self-organized way. Using three different benchmarks, we show that networks with permutation matrices for the reservoir connectivity have much more persistent memory than the other methods but are also able to perform highly nonlinear mappings. We also show that IP-based on sigmoid transfer functions is limited concerning the output distributions that can be achieved.</p>","PeriodicalId":55056,"journal":{"name":"Hfsp Journal","volume":"3 5","pages":"340-9"},"PeriodicalIF":0.0000,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2976/1.3240502","citationCount":"40","resultStr":"{\"title\":\"Initialization and self-organized optimization of recurrent neural network connectivity.\",\"authors\":\"Joschka Boedecker,&nbsp;Oliver Obst,&nbsp;N Michael Mayer,&nbsp;Minoru Asada\",\"doi\":\"10.2976/1.3240502\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Reservoir computing (RC) is a recent paradigm in the field of recurrent neural networks. Networks in RC have a sparsely and randomly connected fixed hidden layer, and only output connections are trained. RC networks have recently received increased attention as a mathematical model for generic neural microcircuits to investigate and explain computations in neocortical columns. Applied to specific tasks, their fixed random connectivity, however, leads to significant variation in performance. Few problem-specific optimization procedures are known, which would be important for engineering applications, but also in order to understand how networks in biology are shaped to be optimally adapted to requirements of their environment. We study a general network initialization method using permutation matrices and derive a new unsupervised learning rule based on intrinsic plasticity (IP). The IP-based learning uses only local learning, and its aim is to improve network performance in a self-organized way. Using three different benchmarks, we show that networks with permutation matrices for the reservoir connectivity have much more persistent memory than the other methods but are also able to perform highly nonlinear mappings. 
We also show that IP-based on sigmoid transfer functions is limited concerning the output distributions that can be achieved.</p>\",\"PeriodicalId\":55056,\"journal\":{\"name\":\"Hfsp Journal\",\"volume\":\"3 5\",\"pages\":\"340-9\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2009-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.2976/1.3240502\",\"citationCount\":\"40\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Hfsp Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2976/1.3240502\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2009/10/26 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Hfsp Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2976/1.3240502","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2009/10/26 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 40

Abstract

Reservoir computing (RC) is a recent paradigm in the field of recurrent neural networks. Networks in RC have a sparsely and randomly connected fixed hidden layer, and only the output connections are trained. RC networks have recently received increased attention as a mathematical model for generic neural microcircuits, used to investigate and explain computations in neocortical columns. When applied to specific tasks, however, their fixed random connectivity leads to significant variation in performance. Few problem-specific optimization procedures are known; such procedures would be important for engineering applications, but also for understanding how networks in biology are shaped to be optimally adapted to the requirements of their environment. We study a general network initialization method using permutation matrices and derive a new unsupervised learning rule based on intrinsic plasticity (IP). The IP-based learning uses only local information, and its aim is to improve network performance in a self-organized way. Using three different benchmarks, we show that networks whose reservoir connectivity is given by permutation matrices have much more persistent memory than networks initialized by the other methods, while still being able to perform highly nonlinear mappings. We also show that IP based on sigmoid transfer functions is limited in the output distributions it can achieve.
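
To make the setup concrete, below is a minimal echo state network sketch of the RC scheme the abstract describes: a fixed, sparsely and randomly connected reservoir in which only the output weights are trained. The network sizes, sparsity, spectral radius, ridge parameter, and the toy delayed-recall task are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Fixed, sparse, random reservoir; only the readout W_out is trained later.
mask = rng.random((n_res, n_res)) < 0.1          # ~10% connectivity (assumed)
W = rng.standard_normal((n_res, n_res)) * mask
W *= 0.95 / np.abs(np.linalg.eigvals(W)).max()   # spectral radius 0.95 (assumed)
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

def run_reservoir(u):
    """Drive the fixed reservoir with an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + W_in @ u_t)          # reservoir weights stay fixed
        states[t] = x
    return states

# Toy task: reproduce the input from 3 steps earlier (a short-memory task).
u = rng.uniform(-1, 1, (500, n_in))
target = np.roll(u, 3, axis=0)
X = run_reservoir(u)

# Train only the linear readout, here with ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ target).T
prediction = X @ W_out.T                          # shape (T, n_in)
```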
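The initialization method the abstract refers to replaces the random reservoir matrix with a permutation matrix. A sketch of that construction follows; the scaling factor of 0.95 is an assumption, and the paper's exact construction may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res = 100

perm = rng.permutation(n_res)           # random permutation of the unit indices
W_perm = 0.95 * np.eye(n_res)[perm]     # scaled permutation matrix as reservoir

# Exactly one nonzero entry per row and per column: every unit feeds exactly
# one other unit, so the reservoir decomposes into disjoint delay-line-like
# cycles -- consistent with the persistent memory the abstract reports.
assert ((W_perm != 0).sum(axis=0) == 1).all()
assert ((W_perm != 0).sum(axis=1) == 1).all()
```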

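As for the intrinsic plasticity rule, the paper derives its own variant; purely as a point of reference, the sketch below implements Triesch's earlier IP rule for sigmoid units, which adapts each unit's gain and bias online, from locally available quantities only, toward an exponential output distribution with mean mu. The learning rate and target mean are illustrative, and this is not necessarily the paper's rule; the abstract's remark that sigmoid-based IP is limited in its achievable output distributions refers to rules of this kind.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ip_step(a, b, x, mu=0.2, eta=0.001):
    """One online IP update for a single sigmoid unit with output
    y = sigmoid(a * x + b). Uses only quantities local to the unit."""
    y = sigmoid(a * x + b)
    db = eta * (1.0 - (2.0 + 1.0 / mu) * y + (y * y) / mu)  # bias update
    da = eta / a + db * x                                    # gain update
    return a + da, b + db

# Driving the unit with random input gradually shapes its output statistics
# toward the exponential target distribution with mean mu.
rng = np.random.default_rng(2)
a, b = 1.0, 0.0
for x in rng.standard_normal(20_000):
    a, b = ip_step(a, b, x)
```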
