Adaptive control of recurrent neural networks using conceptors

Guillaume Pourcel, Mirko Goldmann, Ingo Fischer, Miguel C. Soriano
arXiv - PHYS - Adaptation and Self-Organizing Systems. Published 2024-05-12. DOI: arxiv-2405.07236.
Citations: 0

Abstract

Recurrent Neural Networks excel at predicting and generating complex high-dimensional temporal patterns. Due to their inherent nonlinear dynamics and memory, they can learn unbounded temporal dependencies from data. In a Machine Learning setting, the network's parameters are adapted during a training phase to match the requirements of a given task/problem increasing its computational capabilities. After the training, the network parameters are kept fixed to exploit the learned computations. The static parameters thereby render the network unadaptive to changing conditions, such as external or internal perturbation. In this manuscript, we demonstrate how keeping parts of the network adaptive even after the training enhances its functionality and robustness. Here, we utilize the conceptor framework and conceptualize an adaptive control loop analyzing the network's behavior continuously and adjusting its time-varying internal representation to follow a desired target. We demonstrate how the added adaptivity of the network supports the computational functionality in three distinct tasks: interpolation of temporal patterns, stabilization against partial network degradation, and robustness against input distortion. Our results highlight the potential of adaptive networks in machine learning beyond training, enabling them to not only learn complex patterns but also dynamically adjust to changing environments, ultimately broadening their applicability.
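The conceptor framework referenced in the abstract can be illustrated with a minimal sketch. A conceptor is a soft projection matrix computed from the correlation of reservoir states collected while the network is driven by a pattern; re-inserting it into the state update constrains the dynamics to that pattern's subspace. The snippet below follows the standard conceptor definition C = R(R + a⁻²I)⁻¹ with aperture a; the reservoir size, input pattern, and aperture value are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small random reservoir driven by a sine input (illustrative parameters,
# not the architecture used in the paper).
N = 50
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
w_in = rng.normal(0, 1, N)

# Collect reservoir states while driving the network with the pattern.
T = 500
x = np.zeros(N)
states = []
for t in range(T):
    u = np.sin(2 * np.pi * t / 20)
    x = np.tanh(W @ x + w_in * u)
    states.append(x)
X = np.array(states).T  # shape (N, T)

# Conceptor: C = R (R + a^-2 I)^-1, with R the state correlation
# matrix and a the aperture controlling how tightly C constrains states.
R = X @ X.T / T
aperture = 10.0
C = R @ np.linalg.inv(R + aperture ** -2 * np.eye(N))

# C acts as a soft projector: its singular values lie in [0, 1).
s = np.linalg.svd(C, compute_uv=False)
print(s.min() >= 0, s.max() < 1)

# Inserting the conceptor into the update loop,
#   x <- C tanh(W x + w_in u),
# filters the state through the learned pattern's subspace, which is the
# time-varying internal representation the adaptive control loop adjusts.
x = C @ np.tanh(W @ x + w_in * u)
```

The adaptive control loop described in the abstract then monitors the network's behavior online and adjusts such a conceptor continuously, rather than keeping it fixed after training.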