Prototype Analysis in Hopfield Networks with Hebbian Learning

Hayden McAlister, Anthony Robins, Lech Szymanski
{"title":"采用 Hebbian 学习的 Hopfield 网络中的原型分析","authors":"Hayden McAlister, Anthony Robins, Lech Szymanski","doi":"arxiv-2407.03342","DOIUrl":null,"url":null,"abstract":"We discuss prototype formation in the Hopfield network. Typically, Hebbian\nlearning with highly correlated states leads to degraded memory performance. We\nshow this type of learning can lead to prototype formation, where unlearned\nstates emerge as representatives of large correlated subsets of states,\nalleviating capacity woes. This process has similarities to prototype learning\nin human cognition. We provide a substantial literature review of prototype\nlearning in associative memories, covering contributions from psychology,\nstatistical physics, and computer science. We analyze prototype formation from\na theoretical perspective and derive a stability condition for these states\nbased on the number of examples of the prototype presented for learning, the\nnoise in those examples, and the number of non-example states presented. The\nstability condition is used to construct a probability of stability for a\nprototype state as the factors of stability change. We also note similarities\nto traditional network analysis, allowing us to find a prototype capacity. We\ncorroborate these expectations of prototype formation with experiments using a\nsimple Hopfield network with standard Hebbian learning. We extend our\nexperiments to a Hopfield network trained on data with multiple prototypes and\nfind the network is capable of stabilizing multiple prototypes concurrently. We\nmeasure the basins of attraction of the multiple prototype states, finding\nattractor strength grows with the number of examples and the agreement of\nexamples. We link the stability and dominance of prototype states to the energy\nprofile of these states, particularly when comparing the profile shape to\ntarget states or other spurious states.","PeriodicalId":501066,"journal":{"name":"arXiv - PHYS - Disordered Systems and Neural Networks","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Prototype Analysis in Hopfield Networks with Hebbian Learning\",\"authors\":\"Hayden McAlister, Anthony Robins, Lech Szymanski\",\"doi\":\"arxiv-2407.03342\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We discuss prototype formation in the Hopfield network. Typically, Hebbian\\nlearning with highly correlated states leads to degraded memory performance. We\\nshow this type of learning can lead to prototype formation, where unlearned\\nstates emerge as representatives of large correlated subsets of states,\\nalleviating capacity woes. This process has similarities to prototype learning\\nin human cognition. We provide a substantial literature review of prototype\\nlearning in associative memories, covering contributions from psychology,\\nstatistical physics, and computer science. We analyze prototype formation from\\na theoretical perspective and derive a stability condition for these states\\nbased on the number of examples of the prototype presented for learning, the\\nnoise in those examples, and the number of non-example states presented. The\\nstability condition is used to construct a probability of stability for a\\nprototype state as the factors of stability change. We also note similarities\\nto traditional network analysis, allowing us to find a prototype capacity. 
We\\ncorroborate these expectations of prototype formation with experiments using a\\nsimple Hopfield network with standard Hebbian learning. We extend our\\nexperiments to a Hopfield network trained on data with multiple prototypes and\\nfind the network is capable of stabilizing multiple prototypes concurrently. We\\nmeasure the basins of attraction of the multiple prototype states, finding\\nattractor strength grows with the number of examples and the agreement of\\nexamples. We link the stability and dominance of prototype states to the energy\\nprofile of these states, particularly when comparing the profile shape to\\ntarget states or other spurious states.\",\"PeriodicalId\":501066,\"journal\":{\"name\":\"arXiv - PHYS - Disordered Systems and Neural Networks\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - PHYS - Disordered Systems and Neural Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2407.03342\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - Disordered Systems and Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.03342","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

We discuss prototype formation in the Hopfield network. Typically, Hebbian learning with highly correlated states leads to degraded memory performance. We show this type of learning can lead to prototype formation, where unlearned states emerge as representatives of large correlated subsets of states, alleviating capacity woes. This process has similarities to prototype learning in human cognition. We provide a substantial literature review of prototype learning in associative memories, covering contributions from psychology, statistical physics, and computer science. We analyze prototype formation from a theoretical perspective and derive a stability condition for these states based on the number of examples of the prototype presented for learning, the noise in those examples, and the number of non-example states presented. The stability condition is used to construct a probability of stability for a prototype state as the factors of stability change. We also note similarities to traditional network analysis, allowing us to find a prototype capacity. We corroborate these expectations of prototype formation with experiments using a simple Hopfield network with standard Hebbian learning. We extend our experiments to a Hopfield network trained on data with multiple prototypes and find the network is capable of stabilizing multiple prototypes concurrently. We measure the basins of attraction of the multiple prototype states, finding attractor strength grows with the number of examples and the agreement of examples. We link the stability and dominance of prototype states to the energy profile of these states, particularly when comparing the profile shape to target states or other spurious states.
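For orientation, the textbook quantities the abstract builds on are worth recalling. The block below states the standard Hebbian rule, the Hopfield energy, and the classical per-unit stability criterion for bipolar states; the paper's own stability condition and prototype capacity refine these and are derived in the full text, so this is background, not the paper's result.

```latex
% Standard Hebbian weights over Z learned states \xi^1, \dots, \xi^Z
% (self-connections set to zero):
W_{ij} = \frac{1}{N} \sum_{\mu=1}^{Z} \xi^{\mu}_i \xi^{\mu}_j , \qquad W_{ii} = 0

% Hopfield energy of a state s \in \{-1, +1\}^N:
E(s) = -\frac{1}{2} \sum_{i \neq j} W_{ij} \, s_i s_j

% Classical stability criterion: s is a fixed point of the update
% dynamics when every unit's local field agrees with its sign:
s_i \sum_{j} W_{ij} s_j > 0 \quad \text{for all } i .
```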
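To make the abstract's central claim concrete, here is a minimal, self-contained sketch of prototype formation under standard Hebbian learning on noisy bipolar examples. The network size, example count, noise level, and probe parameters are illustrative assumptions, not the paper's experimental settings, and the basin probe is a crude stand-in for the measurements the paper reports.

```python
# Illustrative sketch: prototype formation in a Hopfield network with
# Hebbian learning. N, n_examples, and flip_prob are assumed values
# chosen for demonstration only.
import numpy as np

rng = np.random.default_rng(0)

N = 200          # number of neurons (assumed)
n_examples = 20  # noisy examples of the prototype (assumed)
flip_prob = 0.15 # per-bit noise in each example (assumed)

# Prototype: a random bipolar state. It is never stored directly.
prototype = rng.choice([-1, 1], size=N)

# Examples: the prototype with independent bit flips.
flips = rng.random((n_examples, N)) < flip_prob
examples = np.where(flips, -prototype, prototype)

# Standard Hebbian rule over the examples, zero diagonal.
W = examples.T @ examples / N
np.fill_diagonal(W, 0.0)

def is_stable(state, W):
    """A state is stable if every unit's local field agrees in sign."""
    return np.all(state * (W @ state) > 0)

def energy(state, W):
    """Hopfield energy E = -1/2 s^T W s."""
    return -0.5 * state @ W @ state

# The unlearned prototype can be a fixed point even though only
# noisy examples were presented for learning.
print("prototype stable:", is_stable(prototype, W))
print("prototype energy:", energy(prototype, W))
print("example energies:", [round(energy(e, W), 2) for e in examples[:3]])

# Crude basin-of-attraction probe: flip k bits, run asynchronous
# updates, and check whether the dynamics return to the prototype.
def recovers(k, steps=5):
    state = prototype.copy()
    state[rng.choice(N, size=k, replace=False)] *= -1
    for _ in range(steps):
        for i in rng.permutation(N):  # asynchronous unit updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    return np.array_equal(state, prototype)

for k in (10, 40, 80):
    rate = np.mean([recovers(k) for _ in range(20)])
    print(f"recovery rate with {k} flipped bits: {rate:.2f}")
```

By the usual signal-to-noise reasoning, one would expect the never-stored prototype to become a fixed point once enough reasonably clean examples are summed into W, and recovery to fall off as more bits are flipped: the qualitative trend in example count and example agreement that the abstract describes.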