Benjamin J Arthur, Christopher M Kim, Susu Chen, Stephan Preibisch, Ran Darshan
{"title":"用于训练尖峰神经网络的递归最小二乘算法的可扩展实现。","authors":"Benjamin J Arthur, Christopher M Kim, Susu Chen, Stephan Preibisch, Ran Darshan","doi":"10.3389/fninf.2023.1099510","DOIUrl":null,"url":null,"abstract":"<p><p>Training spiking recurrent neural networks on neuronal recordings or behavioral tasks has become a popular way to study computations performed by the nervous system. As the size and complexity of neural recordings increase, there is a need for efficient algorithms that can train models in a short period of time using minimal resources. We present optimized CPU and GPU implementations of the recursive least-squares algorithm in spiking neural networks. The GPU implementation can train networks of one million neurons, with 100 million plastic synapses and a billion static synapses, about 1,000 times faster than an unoptimized reference CPU implementation. We demonstrate the code's utility by training a network, in less than an hour, to reproduce the activity of > 66, 000 recorded neurons of a mouse performing a decision-making task. The fast implementation enables a more interactive <i>in-silico</i> study of the dynamics and connectivity underlying multi-area computations. It also admits the possibility to train models as <i>in-vivo</i> experiments are being conducted, thus closing the loop between modeling and experiments.</p>","PeriodicalId":12462,"journal":{"name":"Frontiers in Neuroinformatics","volume":"17 ","pages":"1099510"},"PeriodicalIF":2.5000,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10333503/pdf/","citationCount":"0","resultStr":"{\"title\":\"A scalable implementation of the recursive least-squares algorithm for training spiking neural networks.\",\"authors\":\"Benjamin J Arthur, Christopher M Kim, Susu Chen, Stephan Preibisch, Ran Darshan\",\"doi\":\"10.3389/fninf.2023.1099510\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Training spiking recurrent neural networks on neuronal recordings or behavioral tasks has become a popular way to study computations performed by the nervous system. As the size and complexity of neural recordings increase, there is a need for efficient algorithms that can train models in a short period of time using minimal resources. We present optimized CPU and GPU implementations of the recursive least-squares algorithm in spiking neural networks. The GPU implementation can train networks of one million neurons, with 100 million plastic synapses and a billion static synapses, about 1,000 times faster than an unoptimized reference CPU implementation. We demonstrate the code's utility by training a network, in less than an hour, to reproduce the activity of > 66, 000 recorded neurons of a mouse performing a decision-making task. The fast implementation enables a more interactive <i>in-silico</i> study of the dynamics and connectivity underlying multi-area computations. 
It also admits the possibility to train models as <i>in-vivo</i> experiments are being conducted, thus closing the loop between modeling and experiments.</p>\",\"PeriodicalId\":12462,\"journal\":{\"name\":\"Frontiers in Neuroinformatics\",\"volume\":\"17 \",\"pages\":\"1099510\"},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2023-06-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10333503/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in Neuroinformatics\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.3389/fninf.2023.1099510\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICAL & COMPUTATIONAL BIOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Neuroinformatics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3389/fninf.2023.1099510","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"MATHEMATICAL & COMPUTATIONAL BIOLOGY","Score":null,"Total":0}
A scalable implementation of the recursive least-squares algorithm for training spiking neural networks.
Abstract
Training spiking recurrent neural networks on neuronal recordings or behavioral tasks has become a popular way to study computations performed by the nervous system. As the size and complexity of neural recordings increase, there is a need for efficient algorithms that can train models in a short period of time using minimal resources. We present optimized CPU and GPU implementations of the recursive least-squares algorithm in spiking neural networks. The GPU implementation can train networks of one million neurons, with 100 million plastic synapses and a billion static synapses, about 1,000 times faster than an unoptimized reference CPU implementation. We demonstrate the code's utility by training a network, in less than an hour, to reproduce the activity of >66,000 recorded neurons of a mouse performing a decision-making task. The fast implementation enables a more interactive in-silico study of the dynamics and connectivity underlying multi-area computations. It also opens the possibility of training models while in-vivo experiments are being conducted, thus closing the loop between modeling and experiments.
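To make the training rule concrete, below is a minimal sketch of a single recursive least-squares update of the FORCE-training kind the abstract refers to, written with NumPy for one scalar target. The function name `rls_step`, the variable names, and the unit regularization constant are illustrative assumptions rather than the paper's API; the paper's optimized CPU and GPU implementations batch this update across many neurons and plastic synapses, which this toy version does not attempt.

```python
import numpy as np

def rls_step(P, w, r, target):
    """One recursive least-squares (FORCE-style) update.

    P      : (N, N) running estimate of the inverse correlation matrix
    w      : (N,)   plastic weights being trained
    r      : (N,)   presynaptic activity (e.g., filtered spike trains)
    target : float  desired output at this time step
    """
    Pr = P @ r                   # O(N^2) matrix-vector product
    k = Pr / (1.0 + r @ Pr)      # gain vector
    P -= np.outer(k, Pr)         # rank-1 update keeps P recursive
    err = w @ r - target         # error before the weight change
    w -= err * k                 # weight correction
    return P, w, err
```

In this sketch, P is typically initialized to the identity scaled by an inverse regularization constant, and the rank-1 update is what makes the algorithm "recursive": the inverse correlation matrix is refined online at O(N^2) cost per step instead of being re-solved from scratch.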
About the journal:
Frontiers in Neuroinformatics publishes rigorously peer-reviewed research on the development and implementation of numerical/computational models and analytical tools used to share, integrate and analyze experimental data and advance theories of the nervous system functions. Specialty Chief Editors Jan G. Bjaalie at the University of Oslo and Sean L. Hill at the École Polytechnique Fédérale de Lausanne are supported by an outstanding Editorial Board of international experts. This multidisciplinary open-access journal is at the forefront of disseminating and communicating scientific knowledge and impactful discoveries to researchers, academics and the public worldwide.
Neuroscience is being propelled into the information age as the volume of information explodes, demanding organization and synthesis. Novel synthesis approaches are opening up a new dimension for the exploration of the components of brain elements and systems and the vast number of variables that underlie their functions. Neural data is highly heterogeneous with complex inter-relations across multiple levels, driving the need for innovative organizing and synthesizing approaches from genes to cognition, and covering a range of species and disease states.
Frontiers in Neuroinformatics therefore welcomes submissions on existing neuroscience databases, development of data and knowledge bases for all levels of neuroscience, applications and technologies that can facilitate data sharing (interoperability, formats, terminologies, and ontologies), and novel tools for data acquisition, analyses, visualization, and dissemination of nervous system data. Our journal welcomes submissions on new tools (software and hardware) that support brain modeling, and the merging of neuroscience databases with brain models used for simulation and visualization.