An RRAM retention prediction framework using a convolutional neural network based on relaxation behavior
Yibei Zhang, Qingtian Zhang, Qi Qin, Wenbin Zhang, Yue Xi, Zhixing Jiang, Jianshi Tang, B. Gao, H. Qian, Huaqiang Wu
Neuromorphic Computing and Engineering. Pub Date: 2023-02-06. DOI: 10.1088/2634-4386/acb965
The long-time retention issue of resistive random access memory (RRAM) poses a great challenge to maintaining the performance of large-scale RRAM-based computation-in-memory (CIM) systems. Periodic updating is a feasible way to compensate for the accuracy loss caused by retention degradation, especially in demanding high-accuracy applications. In this paper, we propose a selective refresh strategy that reduces the updating cost by predicting the devices’ retention behavior. A convolutional neural network-based retention prediction framework is developed. The framework determines, from a device’s short-time relaxation behavior, whether that RRAM device has poor retention and needs to be updated. By reprogramming these few selected devices, the method effectively recovers the accuracy of the RRAM-based CIM system. This work provides a valuable retention coping strategy with low time and energy costs, as well as new insights for analyzing the physical connection between the relaxation and retention behavior of RRAM devices.
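The selective-refresh idea above can be sketched in a few lines: flag devices whose short-time relaxation trace predicts poor retention, and reprogram only those. This is a toy illustration, not the paper's method: the exponential relaxation model, the drop-based classifier standing in for the CNN, and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relaxation_trace(tau, n_steps=50, g0=1.0, noise=0.02):
    # Programmed conductance relaxing toward zero; small tau = fast decay
    # (poor retention). Toy model, not fitted device data.
    t = np.arange(n_steps)
    return g0 * np.exp(-t / tau) + noise * rng.standard_normal(n_steps)

def predict_poor_retention(trace, threshold=0.3):
    # Stand-in for the paper's CNN classifier: flag devices whose
    # short-time relaxation already shows a large conductance drop.
    return (trace[0] - trace[-1]) > threshold

# Selective refresh: only the flagged (fast-relaxing) devices get reprogrammed.
taus = rng.uniform(50, 5000, size=100)   # device-to-device variability
flags = [predict_poor_retention(relaxation_trace(tau)) for tau in taus]
refresh_fraction = float(np.mean(flags))
```

With these assumed parameters only a small fraction of devices is flagged, which is what makes the selective update cheaper than refreshing the whole array.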
Hardware optimization for photonic time-delay reservoir computer dynamics
Meng Zhang, Zhizhuo Liang, Z. R. Huang
Neuromorphic Computing and Engineering. Pub Date: 2023-02-03. DOI: 10.1088/2634-4386/acb8d7
Reservoir computing (RC) is a kind of neuromorphic computing mainly applied to processing sequential data such as time-dependent signals. In this paper, the bifurcation diagram of a photonic time-delay RC system is thoroughly studied, and a method of bifurcation-dynamics-guided hardware hyperparameter optimization is presented. The time-evolution equation expressed in terms of the photonic hardware parameters is established, and the intrinsic dynamics of the photonic RC system is quantitatively studied. Bifurcation-dynamics-based hyperparameter optimization offers a simple yet effective approach to hardware-setting optimization that reduces the complexity and time of hardware adjustment. Three benchmark tasks, nonlinear channel equalization (NCE), nonlinear autoregressive moving average with 10th-order time lag (NARMA10), and Santa Fe laser time-series prediction, are implemented on the photonic delay-line RC using bifurcation-dynamics-guided hardware optimization. The experimental results of these benchmark tasks show overall good agreement with the simulated bifurcation-dynamics modeling results.
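A time-delay reservoir multiplexes many "virtual nodes" along one delay line driven by a single nonlinear element; the feedback gain is one of the hardware hyperparameters whose bifurcation behavior the paper maps. A minimal software sketch, assuming a tanh nonlinearity and made-up gains (the real system's photonic dynamics are richer):

```python
import numpy as np

rng = np.random.default_rng(1)

N_VIRTUAL = 50   # virtual nodes along the delay line
BETA = 0.8       # feedback gain; sweeping it traces the bifurcation diagram
GAMMA = 0.5      # input scaling
mask = rng.uniform(-1, 1, N_VIRTUAL)   # fixed random input mask

def run_reservoir(u):
    # Each virtual node mixes its own delayed value with the masked input.
    states = np.zeros((len(u), N_VIRTUAL))
    x = np.zeros(N_VIRTUAL)
    for t, ut in enumerate(u):
        for i in range(N_VIRTUAL):
            x[i] = np.tanh(BETA * x[i] + GAMMA * mask[i] * ut)
        states[t] = x
    return states

u = rng.uniform(0, 0.5, 200)   # input sequence, e.g. a NARMA10 drive signal
X = run_reservoir(u)           # reservoir states for a linear readout
```

A linear (e.g. ridge-regression) readout trained on `X` would complete the benchmark pipeline; keeping `BETA` in a stable regime of the autonomous dynamics is the kind of choice the bifurcation analysis guides.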
Pre-synaptic DC bias controls the plasticity and dynamics of three-terminal neuromorphic electrolyte-gated organic transistors
Federico Rondelli, A. D. Salvo, Gioacchino Calandra Sebastianella, M. Murgia, L. Fadiga, F. Biscarini, M. D. Lauro
Neuromorphic Computing and Engineering. Pub Date: 2023-01-16. DOI: 10.1088/2634-4386/acb37f
The role of pre-synaptic DC bias is investigated in three-terminal organic neuromorphic architectures based on electrolyte-gated organic transistors (EGOTs). By means of a pre-synaptic offset it is possible to finely control the number of discrete conductance states in short-term plasticity experiments, to obtain at will both depressive and facilitating responses in the same neuromorphic device, and to set the ratio between two subsequent pulses in paired-pulse experiments. The charge dynamics underlying these features are discussed in relation to macroscopic device figures of merit such as conductivity and transconductance, establishing a novel key enabling parameter for devising the operation of neuromorphic organic electronics.
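The depressing-versus-facilitating behavior and the paired-pulse ratio can be illustrated with a toy short-term-plasticity model in which a single offset parameter sets whether successive pulse responses grow or shrink. The multiplicative update and the 0.1 sensitivity are invented for illustration; they are not the EGOT device physics.

```python
def synaptic_response(n_pulses, offset):
    # Toy short-term plasticity: the pre-synaptic offset sets whether
    # successive pulse amplitudes grow (facilitation) or shrink (depression).
    amp, amps = 1.0, []
    for _ in range(n_pulses):
        amps.append(amp)
        amp *= 1.0 + 0.1 * offset
    return amps

facilitating = synaptic_response(5, +1.0)   # amplitudes grow pulse to pulse
depressing = synaptic_response(5, -1.0)     # amplitudes shrink pulse to pulse
paired_pulse_ratio = facilitating[1] / facilitating[0]   # > 1: facilitation
```

In this sketch the same device model yields both response types by flipping the sign of the offset, mirroring the at-will switching reported in the abstract.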
Simulation and implementation of two-layer oscillatory neural networks for image edge detection: bidirectional and feedforward architectures
Madeleine Abernot, Aida Todri-Sanial
Neuromorphic Computing and Engineering. Pub Date: 2023-01-13. DOI: 10.1088/2634-4386/acb2ef
The growing number of edge devices in everyday life generates a considerable amount of data that current AI algorithms, like artificial neural networks, cannot handle inside edge devices with limited bandwidth, memory, and energy. Neuromorphic computing with low-power oscillatory neural networks (ONNs) is an attractive alternative for solving complex problems at the edge. However, ONNs are currently limited by their fully-connected recurrent architecture to solving auto-associative memory problems. In this work, we use an alternative two-layer bidirectional ONN architecture, and we introduce a two-layer feedforward ONN architecture, to perform image edge detection, using the ONN in place of convolutional filters to scan the image. Using an HNN Matlab emulator and digital ONN design simulations, we report efficient image edge detection from both architectures using various filter sizes (3 × 3, 5 × 5, and 7 × 7) on black-and-white images; the feedforward architecture can also perform image edge detection on grayscale images. With the digital ONN design, we also assess latency and find that the bidirectional architecture with a 3 × 3 filter can perform image edge detection in real time (camera flow of 25 to 30 images per second) on images of up to 128 × 128 pixels, while the feedforward architecture with the same 3 × 3 filter can handle 170 × 170 pixels, thanks to its faster computation.
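The scanning operation that the ONN performs in hardware is, functionally, a small edge-detection filter slid over the image. A conventional 3 × 3 Laplacian-style scan sketches that computation on a toy black-and-white image (illustrative only; the kernel choice is an assumption, and the ONN computes its result through oscillator phase dynamics rather than multiply-accumulate):

```python
import numpy as np

# Zero-sum 3x3 kernel: uniform regions give 0, intensity transitions do not.
KERNEL = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])

def edge_detect(img, kernel=KERNEL):
    # Valid-mode 2D correlation followed by binarization (edge / no edge).
    h, w = img.shape
    k = kernel.shape[0]
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + k, j:j + k] * kernel)
    return (out > 0).astype(int)

# 8x8 black-and-white image with a vertical edge down the middle.
img = np.zeros((8, 8), dtype=int)
img[:, 4:] = 1
edges = edge_detect(img)   # marks one column, right at the transition
```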
Neuromorphic control of a simulated 7-DOF arm using Loihi
Travis DeWolf, Kinjal Patel, Pawel Jaworski, Roxana Leontie, Joe Hays, C. Eliasmith
Neuromorphic Computing and Engineering. Pub Date: 2023-01-12. DOI: 10.1088/2634-4386/acb286
In this paper, we present a fully spiking neural network running on Intel’s Loihi chip for operational-space control of a simulated 7-DOF arm. Our approach uniquely combines neural engineering and deep learning methods to implement position and orientation control of the end effector. The development process involved four stages: (1) designing a node-based network architecture implementing an analytical solution; (2) developing rate-neuron networks to replace the nodes; (3) retraining the network to handle spiking neurons and temporal dynamics; and (4) adapting the network to the specific hardware constraints of the Loihi. We benchmark the controller on a center-out reaching task, using the deviation of the end effector from the ideal trajectory as our evaluation metric. The RMSE of the final neuromorphic controller running on Loihi is only slightly worse than that of the analytical solution, with 4.13% more deviation from the ideal trajectory, and it uses two orders of magnitude less energy per inference than standard hardware solutions. While qualitative discrepancies remain, these results support both our approach and the potential of neuromorphic controllers. To the best of our knowledge, this work represents the most advanced neuromorphic implementation of neurorobotics developed to date.
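The evaluation metric, RMSE of the end-effector deviation from the ideal center-out trajectory, is straightforward to compute from logged paths. A sketch with invented toy trajectories (the reach target, sample count, and sinusoidal perturbation are assumptions, not the paper's data):

```python
import numpy as np

def trajectory_rmse(actual, ideal):
    # Root-mean-square Euclidean deviation between two (T, 3) paths.
    return float(np.sqrt(np.mean(np.sum((actual - ideal) ** 2, axis=1))))

t = np.linspace(0.0, 1.0, 50)[:, None]
ideal = t * np.array([[0.3, 0.0, 0.2]])   # straight-line reach to a target
# Executed path bows slightly sideways mid-reach (toy controller error).
actual = ideal + 0.01 * np.sin(np.pi * t) * np.array([[0.0, 1.0, 0.0]])
rmse = trajectory_rmse(actual, ideal)
```

Comparing such an RMSE for the neuromorphic controller against the analytical solution's RMSE yields a relative-deviation figure like the 4.13% reported above.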
Unsupervised and efficient learning in sparsely activated convolutional spiking neural networks enabled by voltage-dependent synaptic plasticity
Gaspard Goupy, A. Juneau-Fecteau, Nikhil Garg, Ismael Balafrej, F. Alibart, L. Fréchette, Dominique Drouin, Y. Beilliard
Neuromorphic Computing and Engineering. Pub Date: 2022-12-21. DOI: 10.1088/2634-4386/acad98
Spiking neural networks (SNNs) are gaining attention for their energy-efficient computing, making them well suited to implementation on low-power neuromorphic hardware. Their biological plausibility allows them to benefit from unsupervised learning with bio-inspired plasticity rules, such as spike-timing-dependent plasticity (STDP). However, standard STDP has limitations that make it challenging to implement in hardware. In this paper, we propose a convolutional SNN (CSNN) integrating single-spike integrate-and-fire (SSIF) neurons and trained for the first time with voltage-dependent synaptic plasticity (VDSP), a novel unsupervised and local plasticity rule developed for implementing STDP on memristor-based neuromorphic hardware. We evaluated the CSNN on the TIDIGITS dataset, where, aided by our sound preprocessing pipeline, we obtained performance better than the state of the art, with a mean accuracy of 99.43%. Moreover, the use of SSIF neurons, coupled with time-to-first-spike (TTFS) encoding, results in a sparsely activated model: we recorded a mean of 5036 spikes per input over the 172 580 neurons of the network. This makes the proposed CSNN promising for developing extremely energy-efficient models. We also demonstrate the efficiency of VDSP on the MNIST dataset, where we obtained results comparable to the state of the art, with an accuracy of 98.56%. Our adaptation of VDSP for SSIF neurons introduces a depression factor that is very effective at reducing the number of training samples needed, and hence training time, by a factor of two or more, with similar performance.
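Time-to-first-spike encoding, one source of the sparsity above, maps each input intensity to at most one spike whose latency decreases with intensity. A minimal sketch, assuming a linear intensity-to-latency mapping and a made-up time window (the paper's exact encoder may differ):

```python
import numpy as np

def ttfs_encode(intensities, t_max=100):
    # Time-to-first-spike: stronger inputs fire earlier; each input emits
    # at most one spike (a time of t_max means "no spike"), which is what
    # keeps the network's activity sparse.
    x = np.clip(np.asarray(intensities, dtype=float), 0.0, 1.0)
    return np.round((1.0 - x) * t_max).astype(int)

# Full, half, and zero intensity -> immediate, mid-window, and no spike.
spike_times = ttfs_encode([1.0, 0.5, 0.0])
```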
Constraints on the design of neuromorphic circuits set by the properties of neural population codes
S. Panzeri, Ella Janotte, Alejandro Pequeño-Zurro, Jacopo Bonato, C. Bartolozzi
Neuromorphic Computing and Engineering. Pub Date: 2022-12-08. DOI: 10.1088/2634-4386/acaf9c
In the brain, information is encoded, transmitted, and used to inform behaviour at the level of the timing of action potentials distributed over populations of neurons. To implement neural-like systems in silico, to emulate neural function, and to interface successfully with the brain, neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain. To facilitate the cross-talk between neuromorphic engineering and neuroscience, in this review we first critically examine and summarize emerging recent findings about how populations of neurons encode and transmit information. We examine the effects on the encoding and readout of information of different features of neural population activity, namely the sparseness of neural representations, the heterogeneity of neural properties, the correlations among neurons, and the timescales (from short to long) over which neurons encode information and maintain it consistently over time. Finally, we critically elaborate on how these facts constrain the design of information coding in neuromorphic circuits. We focus primarily on the implications for designing neuromorphic circuits that communicate with the brain, as in this case it is essential that artificial and biological neurons use compatible neural codes. However, we also discuss implications for the design of neuromorphic systems for the implementation or emulation of neural computation.
High-density analog image storage in an analog-valued non-volatile memory array
Xin Zheng, Ryan Zarcone, Akash Levy, W. Khwa, Priyanka Raina, B. Olshausen, H. P. Wong
Neuromorphic Computing and Engineering. Pub Date: 2022-12-06. DOI: 10.1088/2634-4386/aca92c
Data stored in the cloud or on mobile devices reside in physical memory systems of finite size. Today, huge amounts of analog data, e.g. images and videos, are first digitized, and compression algorithms (e.g. the JPEG standard) are then employed to minimize the physical storage required. Emerging non-volatile memory technologies (e.g. phase-change memory (PCM) and resistive RAM (RRAM)) make it possible to store analog information in a compressed format directly in analog memory systems. Here, we demonstrate with hardware experiments an image storage and compression scheme (joint source-channel coding) using analog-valued PCM and RRAM arrays. This scheme stores information in a distributed fashion and shows resilience to PCM and RRAM device non-idealities, including defective cells, device variability, resistance drift, and relaxation.
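The core trade-off, quantization error in the digital route versus device noise in the analog route, can be sketched numerically. This toy comparison is not the paper's learned joint source-channel code: the 2-bit budget, the Gaussian write/read-noise model, and its 0.05 magnitude are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
signal = rng.uniform(0, 1, 1000)   # analog source, e.g. pixel values

# Digital route: quantize to 2 bits per cell (4 levels) before storage.
levels = 4
digital_readback = np.round(signal * (levels - 1)) / (levels - 1)

# Analog route: store each value directly as a cell conductance; readback
# is corrupted by write/read noise instead of quantization error.
write_noise = 0.05
analog_readback = signal + write_noise * rng.standard_normal(signal.size)

mse_digital = float(np.mean((signal - digital_readback) ** 2))
mse_analog = float(np.mean((signal - analog_readback) ** 2))
```

Under these assumed numbers the direct analog storage reconstructs the source more faithfully per cell than the coarse digital code; the paper's distributed coding additionally protects against defective cells, drift, and relaxation.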
Brain-inspired methods for achieving robust computation in heterogeneous mixed-signal neuromorphic processing systems
D. Zendrikov, Sergio Solinas, G. Indiveri
Neuromorphic Computing and Engineering. Pub Date: 2022-10-27. DOI: 10.1088/2634-4386/ace64c
Neuromorphic processing systems that implement spiking neural networks with mixed-signal analog/digital electronic circuits and/or memristive devices are a promising technology for edge-computing applications that require low power and low latency and that cannot connect to the cloud for off-line processing, whether for lack of connectivity or for privacy concerns. However, these circuits are typically noisy and imprecise, because they are affected by device-to-device variability and operate with extremely small currents. Achieving reliable computation and high accuracy with this approach therefore remains an open challenge that has hampered progress and limited widespread adoption of this technology. By construction, these hardware processing systems have many constraints that are biologically plausible, such as heterogeneity and non-negativity of parameters. Growing evidence shows that applying such constraints to artificial neural networks, including those used in artificial intelligence, promotes robustness in learning and improves their reliability. Here we delve further into neuroscience and present network-level brain-inspired strategies that further improve the reliability and robustness of these neuromorphic systems: we quantify, with chip measurements, the extent to which population averaging reduces variability in neural responses; we demonstrate experimentally how the neural coding strategies of cortical models allow silicon neurons to produce reliable signal representations; and we show how to robustly implement essential computational primitives, such as selective amplification, signal restoration, working memory, and relational networks, by exploiting such strategies. We argue that these strategies can guide the design of robust and reliable ultra-low-power electronic neural processing systems implemented using noisy and imprecise computing substrates such as subthreshold neuromorphic circuits and emerging memory technologies.
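The population-averaging strategy rests on a simple statistical fact: averaging the responses of N neurons with independent noise shrinks the trial-to-trial variability roughly as 1/sqrt(N). A numerical sketch, with invented signal and noise levels (the paper quantifies this with chip measurements, not this model):

```python
import numpy as np

rng = np.random.default_rng(3)

def averaged_response_std(n_neurons, n_trials=2000, signal=1.0, noise=0.5):
    # Trial-to-trial std of the population-averaged response of n_neurons
    # neurons with independent Gaussian noise; expected ~ noise/sqrt(n_neurons).
    responses = signal + noise * rng.standard_normal((n_trials, n_neurons))
    return float(np.std(responses.mean(axis=1)))

std_single = averaged_response_std(1)     # ~0.5: one noisy neuron
std_pop = averaged_response_std(100)      # ~0.05: 100-neuron average
```

Correlated noise, which the review above also discusses, breaks the independence assumption and caps how much averaging can help.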
Pub Date : 2022-10-21 DOI: 10.1088/2634-4386/ac9c8a
Daniel Felder, Katerina Muche, J. Linkhorst, Matthias Wessling
Organic neuromorphic device networks can accelerate neural network algorithms and directly integrate with microfluidic systems or living tissues. Proposed devices based on the bio-compatible conductive polymer PEDOT:PSS have shown high switching speeds and low energy demand. However, as electrochemical systems, they are prone to self-discharge through parasitic electrochemical reactions. Therefore, the network’s synapses forget their trained conductance states over time. This work integrates single-device high-resolution charge transport models to simulate entire neuromorphic device networks and analyze the impact of self-discharge on network performance. Simulation of a single-layer nine-pixel image classification network commonly used in experimental demonstrations reveals no significant impact of self-discharge on training efficiency. And, even though the network’s weights drift significantly during self-discharge, its predictions remain 100% accurate for over ten hours. On the other hand, a multi-layer network for the approximation of the circle function is shown to degrade significantly over twenty minutes with a final mean-squared-error loss of 0.4. We propose to counter the effect by periodically reminding the network based on a map between a synapse’s current state, the time since the last reminder, and the weight drift. We show that this method with a map obtained through validated simulations can reduce the effective loss to below 0.1 even with worst-case assumptions. Finally, while the training of this network is affected by self-discharge, a good classification is still obtained. Electrochemical organic neuromorphic devices have not been integrated into larger device networks. This work predicts their behavior under nonideal conditions, mitigates the worst-case effects of parasitic self-discharge, and opens the path toward implementing fast and efficient neural networks on organic neuromorphic hardware.
{"title":"Reminding forgetful organic neuromorphic device networks","authors":"Daniel Felder, Katerina Muche, J. Linkhorst, Matthias Wessling","doi":"10.1088/2634-4386/ac9c8a","DOIUrl":"https://doi.org/10.1088/2634-4386/ac9c8a","url":null,"abstract":"Organic neuromorphic device networks can accelerate neural network algorithms and directly integrate with microfluidic systems or living tissues. Proposed devices based on the bio-compatible conductive polymer PEDOT:PSS have shown high switching speeds and low energy demand. However, as electrochemical systems, they are prone to self-discharge through parasitic electrochemical reactions. Therefore, the network’s synapses forget their trained conductance states over time. This work integrates single-device high-resolution charge transport models to simulate entire neuromorphic device networks and analyze the impact of self-discharge on network performance. Simulation of a single-layer nine-pixel image classification network commonly used in experimental demonstrations reveals no significant impact of self-discharge on training efficiency. And, even though the network’s weights drift significantly during self-discharge, its predictions remain 100% accurate for over ten hours. On the other hand, a multi-layer network for the approximation of the circle function is shown to degrade significantly over twenty minutes with a final mean-squared-error loss of 0.4. We propose to counter the effect by periodically reminding the network based on a map between a synapse’s current state, the time since the last reminder, and the weight drift. We show that this method with a map obtained through validated simulations can reduce the effective loss to below 0.1 even with worst-case assumptions. Finally, while the training of this network is affected by self-discharge, a good classification is still obtained. Electrochemical organic neuromorphic devices have not been integrated into larger device networks. 
This work predicts their behavior under nonideal conditions, mitigates the worst-case effects of parasitic self-discharge, and opens the path toward implementing fast and efficient neural networks on organic neuromorphic hardware.","PeriodicalId":198030,"journal":{"name":"Neuromorphic Computing and Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128994724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}