Can a Hebbian-like learning rule be avoiding the curse of dimensionality in sparse distributed data?

Biological Cybernetics · IF 1.7 · CAS Region 4 (Engineering & Technology) · JCR Q3, Computer Science, Cybernetics · Pub Date: 2024-09-09 · DOI: 10.1007/s00422-024-00995-y
Maria Osório, Luis Sa-Couto, Andreas Wichert

Abstract

It is generally assumed that the brain uses something akin to sparse distributed representations. These representations, however, are high-dimensional and consequently they affect classification performance of traditional Machine Learning models due to the "curse of dimensionality". In tasks for which there is a vast amount of labeled data, Deep Networks seem to solve this issue with many layers and a non-Hebbian backpropagation algorithm. The brain, however, seems to be able to solve the problem with few layers. In this work, we hypothesize that this happens by using Hebbian learning. Actually, the Hebbian-like learning rule of Restricted Boltzmann Machines learns the input patterns asymmetrically. It exclusively learns the correlation between non-zero values and ignores the zeros, which represent the vast majority of the input dimensionality. By ignoring the zeros the "curse of dimensionality" problem can be avoided. To test our hypothesis, we generated several sparse datasets and compared the performance of a Restricted Boltzmann Machine classifier with some Backprop-trained networks. The experiments using these codes confirm our initial intuition as the Restricted Boltzmann Machine shows a good generalization performance, while the Neural Networks trained with the backpropagation algorithm overfit the training data.
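The asymmetry described in the abstract can be sketched with a minimal binary Restricted Boltzmann Machine trained by one step of contrastive divergence (CD-1). This is an illustrative sketch under standard RBM assumptions, not the authors' implementation; biases are omitted for brevity. The positive-phase term of the update is a Hebbian outer product, so visible units that are zero contribute nothing to it, which is the property the paper connects to sparse codes:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, v0, lr=0.1):
    """One contrastive-divergence (CD-1) step for a binary RBM.

    The weight update is a difference of Hebbian outer products.
    In the positive phase, rows of v0.T @ h0 corresponding to
    zero-valued visible units vanish, so learning concentrates
    on the few active dimensions of a sparse code.
    """
    h0 = sigmoid(v0 @ W)                      # positive-phase hidden probabilities
    h_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h_sample @ W.T)              # one-step reconstruction
    h1 = sigmoid(v1 @ W)                      # negative-phase hidden probabilities
    # Hebbian-like update: <v0 h0> - <v1 h1>
    return W + lr * (v0.T @ h0 - v1.T @ h1)

# Sparse binary input: only 3 of 100 dimensions are active.
v = np.zeros((1, 100))
v[0, [3, 17, 42]] = 1.0
W = 0.01 * rng.standard_normal((100, 20))
W_new = cd1_update(W, v)
```

The positive-phase gradient `v.T @ sigmoid(v @ W)` has all-zero rows wherever the input is zero, so in this sketch only the weights attached to the active visible units receive a Hebbian positive-phase signal.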


Source Journal

Biological Cybernetics (Engineering & Technology — Computer Science: Cybernetics)
CiteScore: 3.50 · Self-citation rate: 5.30% · Articles per year: 38 · Review time: 6-12 weeks

About the journal: Biological Cybernetics is an interdisciplinary medium for theoretical and application-oriented aspects of information processing in organisms, including sensory, motor, cognitive, and ecological phenomena. Topics covered include: mathematical modeling of biological systems; computational, theoretical or engineering studies with relevance for understanding biological information processing; and artificial implementation of biological information processing and self-organizing principles. Under the main aspects of performance and function of systems, emphasis is laid on communication between life sciences and technical/theoretical disciplines.