Scaling neural simulations in STACS

Felix Wang, Shruti Kulkarni, Bradley H. Theilman, Fredrick Rothganger, C. Schuman, Seung-Hwan Lim, J. Aimone
{"title":"Scaling neural simulations in STACS","authors":"Felix Wang, Shruti Kulkarni, Bradley H. Theilman, Fredrick Rothganger, C. Schuman, Seung-Hwan Lim, J. Aimone","doi":"10.1088/2634-4386/ad3be7","DOIUrl":null,"url":null,"abstract":"\n As modern neuroscience tools acquire more details about the brain, the need to move towards biological-scale neural simulations continues to grow. However, effective simulations at scale remain a challenge. Beyond just the tooling required to enable parallel execution, there is also the unique structure of the synaptic interconnectivity, which is globally sparse but has relatively high connection density and non-local interactions per neuron. There are also various practicalities to consider in high performance computing applications, such as the need for serializing neural networks to support potentially long-running simulations that require checkpoint-restart. Although acceleration on neuromorphic hardware is also a possibility, development in this space can be difficult as hardware support tends to vary between platforms and software support for larger scale models also tends to be limited. In this paper, we focus our attention on STACS (Simulation Tool for Asynchronous Cortical Streams), a spiking neural network simulator that leverages the Charm++ parallel programming framework, with the goal of supporting biological-scale simulations as well as interoperability between platforms. Central to these goals is the implementation of scalable data structures suitable for efficiently distributing a network across parallel partitions. Here, we discuss a straightforward extension of a parallel data format with a history of use in graph partitioners, which also serves as a portable intermediate representation for different neuromorphic backends. We perform scaling studies on the Summit supercomputer, examining the capabilities of STACS in terms of network build and storage, partitioning, and execution. We highlight how a suitably partitioned, spatially dependent synaptic structure introduces a communication workload well-suited to the multicast communication supported by Charm++. We evaluate the strong and weak scaling behavior for networks on the order of millions of neurons and billions of synapses, and show that STACS achieves competitive levels of parallel efficiency.","PeriodicalId":198030,"journal":{"name":"Neuromorphic Computing and Engineering","volume":"45 5","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neuromorphic Computing and Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/2634-4386/ad3be7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

As modern neuroscience tools acquire more details about the brain, the need to move towards biological-scale neural simulations continues to grow. However, effective simulations at scale remain a challenge. Beyond just the tooling required to enable parallel execution, there is also the unique structure of the synaptic interconnectivity, which is globally sparse but has relatively high connection density and non-local interactions per neuron. There are also various practicalities to consider in high performance computing applications, such as the need for serializing neural networks to support potentially long-running simulations that require checkpoint-restart. Although acceleration on neuromorphic hardware is also a possibility, development in this space can be difficult as hardware support tends to vary between platforms and software support for larger scale models also tends to be limited. In this paper, we focus our attention on STACS (Simulation Tool for Asynchronous Cortical Streams), a spiking neural network simulator that leverages the Charm++ parallel programming framework, with the goal of supporting biological-scale simulations as well as interoperability between platforms. Central to these goals is the implementation of scalable data structures suitable for efficiently distributing a network across parallel partitions. Here, we discuss a straightforward extension of a parallel data format with a history of use in graph partitioners, which also serves as a portable intermediate representation for different neuromorphic backends. We perform scaling studies on the Summit supercomputer, examining the capabilities of STACS in terms of network build and storage, partitioning, and execution. We highlight how a suitably partitioned, spatially dependent synaptic structure introduces a communication workload well-suited to the multicast communication supported by Charm++. We evaluate the strong and weak scaling behavior for networks on the order of millions of neurons and billions of synapses, and show that STACS achieves competitive levels of parallel efficiency.
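The abstract mentions extending "a parallel data format with a history of use in graph partitioners." As a rough illustration only (this is not STACS's actual serialization, and `partition_csr` is an invented helper), the sketch below builds a ParMETIS-style distributed CSR layout: a `vtxdist` array records which contiguous block of neuron indices each partition owns, and each partition keeps row pointers (`xadj`) and global column indices (`adjncy`) for its local neurons only.

```python
# Illustrative sketch of a distributed-CSR network layout (not the STACS file format).
import numpy as np

def partition_csr(num_neurons, edges, num_parts):
    """Split a global synapse list into per-partition CSR blocks.

    edges: iterable of (pre, post) synapse pairs with global neuron indices.
    Returns vtxdist (partition boundaries over neuron indices) and, per
    partition, (xadj, adjncy): local row pointers and global column indices.
    """
    # vtxdist[p]..vtxdist[p+1] is the contiguous range of neurons owned by partition p.
    vtxdist = np.linspace(0, num_neurons, num_parts + 1, dtype=np.int64)

    # Bucket outgoing synapses by their presynaptic (row) neuron.
    targets = [[] for _ in range(num_neurons)]
    for pre, post in edges:
        targets[pre].append(post)

    parts = []
    for p in range(num_parts):
        lo, hi = vtxdist[p], vtxdist[p + 1]
        xadj = [0]
        adjncy = []
        for n in range(lo, hi):          # local rows only
            adjncy.extend(targets[n])    # column indices stay global
            xadj.append(len(adjncy))
        parts.append((np.array(xadj), np.array(adjncy)))
    return vtxdist, parts

# Tiny example: 6 neurons, 6 synapses, 2 partitions.
vtxdist, parts = partition_csr(
    6, [(0, 3), (0, 4), (1, 5), (3, 0), (4, 1), (5, 2)], 2)
for p, (xadj, adjncy) in enumerate(parts):
    print(f"part {p}: neurons {vtxdist[p]}..{vtxdist[p+1]-1}, "
          f"xadj={xadj.tolist()}, adjncy={adjncy.tolist()}")
```

One attraction of such a layout is that each partition's arrays are self-contained and proportional in size to its local neurons and synapses, so partitions can be written and read independently; this is the kind of property that makes a format convenient both for checkpoint-restart of long-running simulations and as a portable intermediate representation for other backends.

The abstract also reports strong and weak scaling results without giving the efficiency formulas; for reference, the standard definitions of parallel efficiency, with $T_p$ the wall-clock runtime on $p$ partitions, are

$$
E_{\text{strong}}(p) = \frac{T_1}{p\,T_p}, \qquad
E_{\text{weak}}(p) = \frac{T_1}{T_p} \quad \text{(per-partition problem size held fixed)}.
$$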