
Neuromorphic Computing and Engineering: Latest Publications

Hands-on reservoir computing: a tutorial for practical implementation
Pub Date : 2022-07-01 DOI: 10.1088/2634-4386/ac7db7
Matteo Cucchi, Steven Abreu, G. Ciccone, D. Brunner, H. Kleemann
This manuscript serves a specific purpose: to give readers from fields such as materials science, chemistry, or electronics an overview of how to implement a reservoir computing (RC) experiment with their own material system. Introductory literature on the topic is rare, and the vast majority of reviews present the basics of RC while taking for granted concepts that may be nontrivial to someone unfamiliar with machine learning (see for example Lukoševičius (2012 Neural Networks: Tricks of the Trade (Berlin: Springer) pp 659–686)). This is unfortunate considering the large pool of material systems that show nonlinear behavior and short-term memory and that may be harnessed to design novel computational paradigms. RC offers a framework for computing with material systems that circumvents typical requirements that arise when implementing traditional, fully fledged feedforward neural networks in hardware, such as minimal device-to-device variability and control over each unit/neuron and connection. Instead, one can use a random, untrained reservoir in which only the output layer is optimized, for example with linear regression. In the following, we highlight the potential of RC for hardware-based neural networks, its advantages over more traditional approaches, and the obstacles that must be overcome for its implementation. Preparing a high-dimensional nonlinear system as a well-performing reservoir for a specific task is not as easy as it seems at first sight. We hope this tutorial will lower the barrier for scientists attempting to exploit their nonlinear systems for computational tasks typically carried out in the fields of machine learning and artificial intelligence. A simulation tool to accompany this paper is available online at https://github.com/stevenabreu7/handson_reservoir.
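The workflow the tutorial describes (drive a fixed, random nonlinear system with an input signal and train only a linear readout) can be sketched in a few lines of NumPy. The reservoir size, spectral-radius scaling, and the toy sine-prediction task below are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumed, not from the paper).
n_in, n_res, n_steps = 1, 50, 300

# Random, untrained reservoir: input and recurrent weights stay fixed.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1 for fading memory

def run_reservoir(u):
    """Collect reservoir states driven by the input sequence u."""
    x = np.zeros(n_res)
    states = []
    for step in range(len(u)):
        x = np.tanh(W_in @ u[step] + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict sin(t) one step ahead from its current value.
t = np.linspace(0.0, 8.0 * np.pi, n_steps + 1)
u = np.sin(t[:-1])[:, None]
y = np.sin(t[1:])

X = run_reservoir(u)[50:]  # discard a washout period
Y = y[50:]

# Only the output layer is trained, here with ridge (regularized linear) regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
mse = np.mean((X @ W_out - Y) ** 2)
```

Because only `W_out` is learned, training reduces to a single regularized least-squares solve; this is the property that makes RC attractive for physical substrates whose internal parameters cannot be trained.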
Citations: 31
Advantages of binary stochastic synapses for hardware spiking neural networks with realistic memristors
Pub Date : 2022-06-28 DOI: 10.1088/2634-4386/ac7c89
K. Sulinskas, M. Borg
Hardware implementing spiking neural networks (SNNs) has the potential to provide transformative gains in energy efficiency and throughput for energy-restricted machine-learning tasks, enabled by large arrays of memristive synapse devices that can be realized with various emerging memory technologies. In practice, however, the performance of such hardware is limited by non-ideal features of the memristor devices, such as nonlinear and asymmetric state updates, limited bit-resolution, limited cycling endurance, and device noise. Here we investigate how stochastic switching in binary synapses can provide advantages over realistic analog memristors when training SNNs without supervision via spike-timing-dependent plasticity. We find that the performance of binary stochastic SNNs is similar to or even better than that of analog deterministic SNNs when one considers memristors with realistic bit-resolution, as well as in situations with considerable cycle-to-cycle noise. Furthermore, binary stochastic SNNs require far fewer weight updates to train, leading to superior utilization of the limited endurance of realistic memristive devices.
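As a rough illustration of the idea (not the paper's model), a binary stochastic synapse replaces an analog plasticity increment with a probabilistic switch between two conductance states. The switching probabilities and array size below are assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed parameters for illustration only.
n_syn = 1000
p_pot = 0.1   # probability a potentiation event sets a binary synapse to 1
p_dep = 0.05  # probability a depression event resets it to 0

w = rng.integers(0, 2, n_syn)  # binary weights in {0, 1}

def stochastic_update(w, potentiate_mask, depress_mask):
    """Binary stochastic plasticity: instead of an analog increment,
    each eligible synapse switches state with a fixed probability."""
    w = w.copy()
    flip_up = potentiate_mask & (rng.random(n_syn) < p_pot)
    flip_dn = depress_mask & (rng.random(n_syn) < p_dep)
    w[flip_up] = 1
    w[flip_dn] = 0
    return w

# Eligibility masks stand in for pre/post spike-timing relations.
pot = rng.random(n_syn) < 0.5   # synapses with causal pre-before-post timing
dep = ~pot
w2 = stochastic_update(w, pot, dep)
```

Averaged over many events, the expected weight change matches an analog update of size `p_pot`/`p_dep`, while each individual device only ever needs two reliably distinguishable states.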
Citations: 1
Simple and complex spiking neurons: perspectives and analysis in a simple STDP scenario
Pub Date : 2022-06-28 DOI: 10.1088/2634-4386/ac999b
Davide L. Manna, A. Sola, Paul Kirkland, Trevor J. Bihl, G. D. Caterina
Spiking neural networks (SNNs) are largely inspired by biology and neuroscience and leverage their ideas and theories to create fast and efficient learning systems. Spiking neuron models are adopted as core processing units in neuromorphic systems because they enable event-based processing. Among the many neuron models, the integrate-and-fire (I&F) models are often adopted, with the simple leaky I&F (LIF) being the most used. The reason for adopting such models is their efficiency and/or biological plausibility. Nevertheless, a rigorous justification for adopting the LIF over other neuron models in artificial learning systems has not yet been studied. This work considers various neuron models in the literature and then selects computational neuron models that are single-variable, efficient, and display different types of complexity. From this selection, we make a comparative study of three simple I&F neuron models, namely the LIF, the quadratic I&F (QIF) and the exponential I&F (EIF), to understand whether the use of more complex models increases the performance of the system and whether the choice of a neuron model can be directed by the task to be completed. The neuron models are tested within an SNN trained with spike-timing-dependent plasticity (STDP) on classification tasks on the N-MNIST and DVS Gestures datasets. Experimental results reveal that more complex neurons manifest the same ability as simpler ones to achieve high levels of accuracy on a simple dataset (N-MNIST), albeit requiring comparatively more hyperparameter tuning. However, when the data possess richer spatio-temporal features, the QIF and EIF neuron models steadily achieve better results. This suggests that accurately selecting the model based on the richness of the feature spectrum of the data could improve the whole system's performance. Finally, the code implementing the spiking neurons in the SpykeTorch framework is made publicly available.
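The three single-variable models compared here differ only in the subthreshold term of dv/dt. A minimal forward-Euler sketch with illustrative parameter values (not the paper's settings):

```python
import numpy as np

# Forward-Euler integration of the three single-variable I&F models.
# All parameter values below are assumed for illustration.
dt, T = 0.1, 200.0
v_rest, v_reset, v_thresh = 0.0, 0.0, 1.0
tau = 10.0
delta_T, v_T = 0.2, 0.8  # EIF sharpness and soft threshold (assumed)

def dv_lif(v, i):
    # leaky I&F: linear leak toward rest
    return (-(v - v_rest) + i) / tau

def dv_qif(v, i):
    # quadratic I&F: quadratic nonlinearity between rest and threshold
    return ((v - v_rest) * (v - v_thresh) + i) / tau

def dv_eif(v, i):
    # exponential I&F: exponential spike-initiation term added to the leak
    return (-(v - v_rest) + delta_T * np.exp((v - v_T) / delta_T) + i) / tau

def count_spikes(dv, i=1.5):
    """Drive one neuron with a constant current i; reset on threshold crossing."""
    v, spikes = v_rest, 0
    for _ in range(int(T / dt)):
        v += dt * dv(v, i)
        if v >= v_thresh:
            v = v_reset
            spikes += 1
    return spikes
```

All three fire tonically under this constant drive; they differ in how sharply the membrane accelerates toward threshold, which is one source of the richer responses of the QIF and EIF to fast spatio-temporal input.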
Citations: 7
Fluctuation-driven initialization for spiking neural network training
Pub Date : 2022-06-21 DOI: 10.1088/2634-4386/ac97bb
Julian Rossbroich, Julia Gygax, F T Zenke
Spiking neural networks (SNNs) underlie low-power, fault-tolerant information processing in the brain and could constitute a power-efficient alternative to conventional deep neural networks when implemented on suitable neuromorphic hardware accelerators. However, instantiating SNNs that solve complex computational tasks in-silico remains a significant challenge. Surrogate gradient (SG) techniques have emerged as a standard solution for training SNNs end-to-end. Still, their success depends on synaptic weight initialization, similar to conventional artificial neural networks (ANNs). Yet, unlike in the case of ANNs, it remains elusive what constitutes a good initial state for an SNN. Here, we develop a general initialization strategy for SNNs inspired by the fluctuation-driven regime commonly observed in the brain. Specifically, we derive practical solutions for data-dependent weight initialization that ensure fluctuation-driven firing in the widely used leaky integrate-and-fire neurons. We empirically show that SNNs initialized following our strategy exhibit superior learning performance when trained with SGs. These findings generalize across several datasets and SNN architectures, including fully connected, deep convolutional, recurrent, and more biologically plausible SNNs obeying Dale’s law. Thus fluctuation-driven initialization provides a practical, versatile, and easy-to-implement strategy for improving SNN training performance on diverse tasks in neuromorphic engineering and computational neuroscience.
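A minimal sketch of the idea, assuming Bernoulli spike trains and zero-mean Gaussian weights: choose the weight scale so the summed synaptic current has a target standard deviation, keeping membrane potentials fluctuating below threshold. The parameter values are illustrative, not those derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed sizes and input statistics for illustration.
n_pre, n_post = 800, 100
p_spike = 0.05        # presynaptic firing probability per time bin (data-dependent)
sigma_target = 1.0    # desired std of the summed input current per neuron

# With zero-mean weights of std s, n_pre Bernoulli(p) inputs give a current
# variance of n_pre * s**2 * p * (1 - p); solve for s to hit sigma_target.
s = sigma_target / np.sqrt(n_pre * p_spike * (1.0 - p_spike))
W = rng.normal(0.0, s, (n_post, n_pre))

# Empirical check on simulated spike trains: fluctuations match the target.
spikes = (rng.random((n_pre, 2000)) < p_spike).astype(float)
current = W @ (spikes - p_spike)  # mean-centered input current
emp_std = current.std()
```

The data dependence enters through `p_spike`: estimating the input firing statistics from the training set and solving for the weight scale is what keeps the neurons in the fluctuation-driven regime regardless of the dataset.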
Citations: 6
2022 roadmap on neuromorphic devices and applications research in China
Pub Date : 2022-06-20 DOI: 10.1088/2634-4386/ac7a5a
Qing Wan, C. Wan, Huaqiang Wu, Yuchao Yang, Xiaohe Huang, Pengcheng Zhou, Lin Chen, Tian-Yu Wang, Yi Li, Kanhao Xue, Yuhui He, Xiangshui Miao, Xi Li, Chenchen Xie, Houpeng Chen, Z. Song, Hong Wang, Yue Hao, Junyao Zhang, Jia Huang, Zheng Yu Ren, L. Zhu, Jian‐yu Du, Chengqiang Ge, Yang Liu, Guanglong Ding, Ye Zhou, Su‐Ting Han, Guosheng Wang, Xiao Yu, Bing Chen, Zhufei Chu, Lun Wang, Yinshui Xia, Chen Mu, F. Lin, Chixiao Chen, Bo Cheng, Y. Xing, W. Zeng, Hong Chen, Lei Yu, G. Indiveri, Ning Qiao
The data throughput of von Neumann architecture-based computing systems is limited by their separation of processing and memory and by the mismatched speeds of the two units. As a result, it is difficult to improve the energy efficiency of conventional computing systems, especially when dealing with unstructured data. Meanwhile, artificial intelligence and robotics still perform poorly in autonomy, creativity, and sociality, a shortfall attributed to the enormous computational requirements of sensorimotor skills. These two predicaments have motivated the imitation and replication of biological systems in terms of computing, sensing, and even motor control. Hence the so-called neuromorphic system, which aims to address the aforementioned needs by mimicking the neural system, has drawn worldwide attention over the past decade. Recent developments in emerging memory devices, nanotechnology, and materials science have provided an unprecedented opportunity toward this aim.
Citations: 3
Quantization, training, parasitic resistance correction, and programming techniques of memristor-crossbar neural networks for edge intelligence
Pub Date : 2022-06-13 DOI: 10.1088/2634-4386/ac781a
T. Nguyen, Jiyong An, Seokjin Oh, S. N. Truong, K. Min
In the internet-of-things era, edge intelligence is critical for overcoming the communication and computing energy crisis that is unavoidable if cloud computing is used exclusively. Memristor crossbars with in-memory computing may be suitable for realizing edge intelligence hardware. They can perform both memory and computing functions, allowing for the development of low-power computing architectures that go beyond the von Neumann computer. For implementing edge-intelligence hardware with memristor crossbars, in this paper we review techniques including quantization, training, parasitic resistance correction, and low-power crossbar programming. In particular, memristor crossbars can be used to realize quantized neural networks with binary and ternary synapses. To prevent memristor defects from degrading edge intelligence performance, chip-in-the-loop training can be useful when training memristor crossbars. Another undesirable effect in memristor crossbars is parasitic resistance, such as source, line, and neuron resistance, which worsens as crossbar size increases; various circuit and software techniques can compensate for it. Finally, we discuss an energy-efficient programming method for updating synaptic weights in memristor crossbars, which is needed for learning on edge devices.
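As one concrete instance of the quantization step, a threshold-based ternarization (a common heuristic, not necessarily the scheme used in the paper) maps each analog weight to {-1, 0, +1}; in a crossbar, the two polarities would be realized by paired positive and negative conductance columns:

```python
import numpy as np

rng = np.random.default_rng(3)

def ternarize(w):
    """Map analog weights to {-1, 0, +1} with a magnitude threshold.
    The 0.7 * mean|w| threshold is an assumed heuristic choice."""
    t = 0.7 * np.abs(w).mean()
    q = np.zeros_like(w)
    q[w > t] = 1.0
    q[w < -t] = -1.0
    return q

W = rng.normal(0.0, 1.0, (64, 32))  # analog weights of one toy layer
Q = ternarize(W)
```

Ternary synapses need only two reliably programmable conductance levels per device (plus "off"), which relaxes the bit-resolution requirement that limits realistic memristors.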
Citations: 3
Self-organized nanoscale networks: are neuromorphic properties conserved in realistic device geometries?
Pub Date : 2022-05-31 DOI: 10.1088/2634-4386/ac74da
Z. Heywood, J. Mallinson, E. Galli, S. Acharya, S. Bose, Matthew Arnold, P. Bones, S. Brown
Self-organised nanoscale networks are currently under investigation because of their potential to be used as novel neuromorphic computing systems. In these systems, electrical input and output signals will necessarily couple to the recurrent electrical signals within the network that provide brain-like functionality. This raises important questions as to whether practical electrode configurations and network geometries might influence the brain-like dynamics. We use the concept of criticality (itself a key characteristic of brain-like processing) to quantify the neuromorphic potential of the devices, and find that in most cases criticality, and therefore optimal information-processing capability, is maintained. In particular, we find that devices with multiple electrodes remain critical despite the concentration of current near the electrodes. Broad network activity is maintained because current still flows through the entire network. We also develop a formalism allowing a detailed analysis of the number of dominant paths through the network. For rectangular systems, we show that the number of pathways decreases as the system size increases, which consequently causes a reduction in network activity.
引用次数: 3
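The abstract's path-counting formalism is not spelled out here, but the underlying graph question is easy to demonstrate. The sketch below is not the authors' method — a real analysis would weight paths by conductance and keep only the dominant ones — it simply counts, by exhaustive depth-first search, the self-avoiding conduction paths between two corner "electrode" nodes of a small rectangular lattice:

```python
from itertools import product

def grid_graph(w, h):
    """Adjacency list for a w x h rectangular lattice of nodes."""
    adj = {(x, y): [] for x, y in product(range(w), range(h))}
    for (x, y) in adj:
        for nb in ((x + 1, y), (x, y + 1)):   # add each undirected edge once
            if nb in adj:
                adj[(x, y)].append(nb)
                adj[nb].append((x, y))
    return adj

def count_simple_paths(adj, src, dst):
    """Exhaustive DFS count of self-avoiding paths from src to dst."""
    def dfs(node, visited):
        if node == dst:
            return 1
        return sum(dfs(nb, visited | {nb})
                   for nb in adj[node] if nb not in visited)
    return dfs(src, {src})

# Corner-to-corner path counts on 2x2 and 3x3 lattices of nodes.
counts = [count_simple_paths(grid_graph(n, n), (0, 0), (n - 1, n - 1))
          for n in (2, 3)]
```

Note that the raw combinatorial count grows with lattice size (2 paths for 2x2, 12 for 3x3); the decrease reported in the paper refers to dominant, current-carrying pathways once device conductances are taken into account.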
Memristive devices based hardware for unlabeled data processing
Pub Date : 2022-05-25 DOI: 10.1088/2634-4386/ac734a
Zhuojian Xiao, Bonan Yan, Teng Zhang, Ru Huang, Yuchao Yang
Unlabeled data processing is of great significance for artificial intelligence (AI), since well-structured labeled data are scarce in most practical applications owing to the high cost of human annotation. Autonomous analysis of unlabeled datasets is therefore important, and the algorithms that process unlabeled data, such as k-means clustering, restricted Boltzmann machines, and locally competitive algorithms, play a critical role in the development of AI techniques. Memristive devices offer potential for power- and time-efficient implementation of unlabeled data processing thanks to their unique properties in neuromorphic and in-memory computing. This review provides an overview of the design principles and applications of memristive devices for various unlabeled data processing and cognitive AI tasks.
{"title":"Memristive devices based hardware for unlabeled data processing","authors":"Zhuojian Xiao, Bonan Yan, Teng Zhang, Ru Huang, Yuchao Yang","doi":"10.1088/2634-4386/ac734a","DOIUrl":"https://doi.org/10.1088/2634-4386/ac734a","url":null,"abstract":"Unlabeled data processing is of great significance for artificial intelligence (AI), since well-structured labeled data are scarce in a majority of practical applications due to the high cost of human annotation of labeling data. Therefore, automatous analysis of unlabeled datasets is important, and relevant algorithms for processing unlabeled data, such as k-means clustering, restricted Boltzmann machine and locally competitive algorithms etc, play a critical role in the development of AI techniques. Memristive devices offer potential for power and time efficient implementation of unlabeled data processing due to their unique properties in neuromorphic and in-memory computing. This review provides an overview of the design principles and applications of memristive devices for various unlabeled data processing and cognitive AI tasks.","PeriodicalId":198030,"journal":{"name":"Neuromorphic Computing and Engineering","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129300198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
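Of the algorithms named in the abstract, k-means clustering is the easiest to sketch. The minimal pure-Python version below uses a deterministic first-k initialization so the demo is reproducible (practical implementations use randomized seeding such as k-means++), and groups unlabeled points by alternating nearest-centroid assignment with centroid updates:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: alternate assignment and centroid-update steps."""
    centroids = list(points[:k])   # deterministic init for the demo only
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl
                     else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids, clusters

# Two well-separated blobs are recovered without any labels.
pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
       (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
centroids, clusters = kmeans(pts, 2)
```

After convergence the two centroids sit at the blob means, (0.1, 0.1) and roughly (5.0, 5.03), with three points in each cluster.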
Physics-based compact modelling of the analog dynamics of HfOx resistive memories
Pub Date : 2022-05-25 DOI: 10.1088/2634-4386/ac7327
F. Vaccaro, S. Brivio, S. Perotto, A. G. Mauri, S. Spiga
Resistive random access memories (RRAMs) constitute a class of memristive devices particularly appealing for bio-inspired computing schemes. In particular, the possibility of achieving analog control of the electrical conductivity of RRAM devices can be exploited to mimic the behaviour of biological synapses in neuromorphic systems. With a view to neuromorphic computing applications, it is crucial to guarantee several features: a detailed device characterization, a mathematical model covering all the key features of the device in both quasi-static and dynamic conditions, and a description of the variability due to the inherent stochasticity of the processes involved in the switching transitions. In this paper, starting from experimental data, we provide a modelling and simulation framework that reproduces the operative analog behaviour of HfOx-based RRAM devices under trains of programming pulses in both the analog and the binary operation mode. To this aim, we have calibrated the model using a single set of parameters for the quasi-static current–voltage characteristics as well as the switching kinetics and device dynamics. The physics-based compact model developed here captures the difference between the SET and the RESET processes in the I–V characteristics, as well as the device memory window for both strong and weak programming conditions. Moreover, the model reproduces the correct slopes of the highly non-linear kinetics curves over several orders of magnitude in time, and the dynamic device response, including the inherent device variability.
Citations: 3
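The calibrated HfOx model itself is not reproduced in the abstract. As a generic stand-in, the classic linear ion-drift compact model with a Joglekar-type window — all parameters below are illustrative, not fitted to HfOx data — shows the kind of analog, pulse-by-pulse conductance update such compact models describe:

```python
def memristor_step(w, v, dt, mu=1e-14, r_on=100.0, r_off=16e3, d=10e-9):
    """One Euler step of the linear ion-drift memristor model.

    w is the doped-region width (state variable, 0 <= w <= d); a
    Joglekar p=1 window suppresses ion drift near the boundaries.
    """
    x = w / d
    r = r_on * x + r_off * (1.0 - x)     # series resistance of the two regions
    i = v / r                            # current through the device
    window = 1.0 - (2.0 * x - 1.0) ** 2
    w = w + dt * (mu * r_on / d) * i * window
    return min(max(w, 0.0), d), r

# A train of five identical +1 V SET pulses (100 us each) gradually
# lowers the resistance -- the analog update exploited for synapses.
w, dt = 1e-9, 1e-6
resistances = []
for _ in range(5):
    for _ in range(100):
        w, r = memristor_step(w, 1.0, dt)
    resistances.append(r)
```

Each pulse nudges the state variable a little further, so the read-out resistance decreases monotonically across the pulse train — the multilevel behaviour that the paper's calibrated model captures for real HfOx devices, including kinetics and variability that this toy model omits.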
Computational properties of multi-compartment LIF neurons with passive dendrites
Pub Date : 2022-05-23 DOI: 10.1088/2634-4386/ac724c
Andreas Stöckel, C. Eliasmith
Mixed-signal neuromorphic computers often emulate some variant of the LIF neuron model. While, in theory, two-layer networks of these neurons are universal function approximators, single-layer networks consisting of slightly more complex neurons can, at the cost of universality, be more efficient. In this paper, we discuss a family of LIF neurons with passive dendrites. We provide rules that describe how input channels targeting different dendritic compartments interact, and test to what extent these interactions can be harnessed in a spiking neural network context. We find that a single layer of two-compartment neurons approximates some functions with smaller errors than similarly sized hidden-layer networks. Single-layer networks with three-compartment neurons can approximate functions such as XOR and four-quadrant multiplication well; adding more compartments offers only small improvements in accuracy. From the perspective of mixed-signal neuromorphic systems, our results suggest that only small modifications to the neuron circuit are necessary to construct more computationally powerful and energy-efficient systems that move more computation into the dendritic, analogue domain.
{"title":"Computational properties of multi-compartment LIF neurons with passive dendrites","authors":"Andreas Stöckel, C. Eliasmith","doi":"10.1088/2634-4386/ac724c","DOIUrl":"https://doi.org/10.1088/2634-4386/ac724c","url":null,"abstract":"Mixed-signal neuromorphic computers often emulate some variant of the LIF neuron model. While, in theory, two-layer networks of these neurons are universal function approximators, single-layer networks consisting of slightly more complex neurons can, at the cost of universality, be more efficient. In this paper, we discuss a family of LIF neurons with passive dendrites. We provide rules that describe how input channels targeting different dendritic compartments interact, and test in how far these interactions can be harnessed in a spiking neural network context. We find that a single layer of two-compartment neurons approximates some functions at smaller errors than similarly sized hidden-layer networks. Single-layer networks with with three compartment neurons can approximate functions such as XOR and four-quadrant multiplication well; adding more compartments only offers small improvements in accuracy. 
From the perspective of mixed-signal neuromorphic systems, our results suggest that only small modifications to the neuron circuit are necessary to construct more computationally powerful and energy efficient systems that move more computation into the dendritic, analogue domain.","PeriodicalId":198030,"journal":{"name":"Neuromorphic Computing and Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115605102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
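A two-compartment neuron of the kind discussed above can be sketched as a somatic LIF unit conductance-coupled to a passive dendritic compartment. The dimensionless parameters below are illustrative, not taken from the paper:

```python
def two_compartment_lif(i_dend, t_total=200.0, dt=0.1):
    """Spike count of a LIF soma coupled to a passive dendrite.

    The input current i_dend is injected into the dendrite only; it
    reaches the soma through the coupling conductance g_c.
    """
    tau, g_c, v_th, v_reset = 20.0, 0.1, 1.0, 0.0   # ms, coupling, thresholds
    v_soma = v_dend = 0.0
    spikes = 0
    for _ in range(int(t_total / dt)):              # forward-Euler integration
        dv_dend = (-v_dend + i_dend + g_c * (v_soma - v_dend)) / tau
        dv_soma = (-v_soma + g_c * (v_dend - v_soma)) / tau
        v_dend += dt * dv_dend
        v_soma += dt * dv_soma
        if v_soma >= v_th:        # spike-and-reset at the soma; dendrite is passive
            spikes += 1
            v_soma = v_reset
    return spikes
```

In this toy model the steady-state somatic voltage is i_dend · g_c / (1 + 2 g_c), so with g_c = 0.1 the soma only starts firing once i_dend exceeds about 12 — a simple illustration of how dendritic attenuation shapes the input–output interactions the paper analyses.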