Scaling the HTM Spatial Pooler

Damir Dobric, Andreas Pech, B. Ghita, T. Wennekers
{"title":"Scaling the HTM Spatial Pooler","authors":"Damir Dobric, Andreas Pech, B. Ghita, T. Wennekers","doi":"10.5121/ijaia.2020.11407","DOIUrl":null,"url":null,"abstract":"The Hierarchical Temporal Memory Cortical Learning Algorithm (HTM CLA) is a theory and machine learning technology that aims to capture cortical algorithm of the neocortex. Inspired by the biological functioning of the neocortex, it provides a theoretical framework, which helps to better understand how the cortical algorithm inside of the brain might work. It organizes populations of neurons in column-like units, crossing several layers such that the units are connected into structures called regions (areas). Areas and columns are hierarchically organized and can further be connected into more complex networks, which implement higher cognitive capabilities like invariant representations. Columns inside of layers are specialized on learning of spatial patterns and sequences. This work targets specifically spatial pattern learning algorithm called Spatial Pooler. A complex topology and high number of neurons used in this algorithm, require more computing power than even a single machine with multiple cores or a GPUs could provide. This work aims to improve the HTM CLA Spatial Pooler by enabling it to run in the distributed environment on multiple physical machines by using the Actor Programming Model. The proposed model is based on a mathematical theory and computation model, which targets massive concurrency. Using this model drives different reasoning about concurrent execution and enables flexible distribution of parallel cortical computation logic across multiple physical nodes. This work is the first one about the parallel HTM Spatial Pooler on multiple physical nodes with named computational model. With the increasing popularity of cloud computing and server less architectures, it is the first step towards proposing interconnected independent HTM CLA units in an elastic cognitive network. Thereby it can provide an alternative to deep neuronal networks, with theoretically unlimited scale in a distributed cloud environment.","PeriodicalId":93188,"journal":{"name":"International journal of artificial intelligence & applications","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2020-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of artificial intelligence & applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5121/ijaia.2020.11407","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

The Hierarchical Temporal Memory Cortical Learning Algorithm (HTM CLA) is a theory and machine learning technology that aims to capture the cortical algorithm of the neocortex. Inspired by the biological functioning of the neocortex, it provides a theoretical framework that helps to better understand how the cortical algorithm inside the brain might work. It organizes populations of neurons into column-like units that cross several layers, and connects these units into structures called regions (areas). Areas and columns are hierarchically organized and can further be connected into more complex networks, which implement higher cognitive capabilities such as invariant representations. Columns inside layers are specialized in learning spatial patterns and sequences. This work specifically targets the spatial pattern learning algorithm called the Spatial Pooler. The complex topology and the large number of neurons used in this algorithm require more computing power than even a single machine with multiple cores or GPUs can provide. This work aims to improve the HTM CLA Spatial Pooler by enabling it to run in a distributed environment on multiple physical machines using the Actor Programming Model. The proposed model is based on a mathematical theory and computation model that targets massive concurrency. Using this model drives different reasoning about concurrent execution and enables flexible distribution of parallel cortical computation logic across multiple physical nodes. This work is the first on a parallel HTM Spatial Pooler running on multiple physical nodes with the named computational model. With the increasing popularity of cloud computing and serverless architectures, it is a first step towards interconnected, independent HTM CLA units in an elastic cognitive network. Thereby it can provide an alternative to deep neural networks, with theoretically unlimited scale in a distributed cloud environment.
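The abstract describes partitioning the Spatial Pooler's columnar computation across actors hosted on separate physical nodes. The following minimal Python sketch is illustrative only; the names, sizes, and the in-process "partitions" are assumptions and do not reflect the paper's actual API or its Actor-model implementation. It shows the underlying idea: each partition independently computes overlap scores for the columns it owns (the work an actor on a remote node would perform), and a coordinator gathers the results and applies global inhibition to select the active columns.

```python
# Hypothetical sketch of column partitioning for a distributed Spatial Pooler.
# In the real system each partition would be an actor on a different machine
# receiving the input vector as a message; here they are plain lists.

import random

NUM_COLUMNS = 1024        # total mini-columns in the Spatial Pooler (assumed)
NUM_INPUTS = 256          # size of the binary input vector (assumed)
NUM_PARTITIONS = 4        # stand-ins for actors on separate physical nodes
POTENTIAL_PCT = 0.5       # fraction of inputs each column can connect to
ACTIVE_COLUMNS = 40       # winners kept after global inhibition (~4% sparsity)

random.seed(42)

# Each column owns a set of potential synapses into the input space.
columns = [
    {"index": c,
     "synapses": random.sample(range(NUM_INPUTS), int(NUM_INPUTS * POTENTIAL_PCT))}
    for c in range(NUM_COLUMNS)
]

# Split the columns into partitions; each partition is the unit of work that
# would be shipped to one actor / physical node.
partitions = [columns[i::NUM_PARTITIONS] for i in range(NUM_PARTITIONS)]

def compute_overlaps(partition, input_vector):
    """Work done locally by one partition: overlap score per owned column."""
    return [(col["index"], sum(input_vector[s] for s in col["synapses"]))
            for col in partition]

def spatial_pooler_step(input_vector):
    """Coordinator: fan out to partitions, gather, apply global inhibition."""
    all_overlaps = []
    for partition in partitions:          # conceptually: parallel actor calls
        all_overlaps.extend(compute_overlaps(partition, input_vector))
    all_overlaps.sort(key=lambda pair: pair[1], reverse=True)
    return sorted(idx for idx, _ in all_overlaps[:ACTIVE_COLUMNS])

# Example usage with a random sparse binary input.
input_vector = [1 if random.random() < 0.1 else 0 for _ in range(NUM_INPUTS)]
active = spatial_pooler_step(input_vector)
print(f"{len(active)} active columns, e.g. {active[:10]}")
```

Because each partition only needs the input vector and its own synapse state, the overlap phase parallelizes naturally; only the small list of (column, overlap) pairs has to travel back to the coordinator for inhibition.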