Pub Date: 2023-05-01 | DOI: 10.1088/2634-4386/acf684
D. Winge, Magnus Borgström, E. Lind, A. Mikkelsen
Neurons with internal memory have been proposed for biological and bio-inspired neural networks, adding important functionality. We introduce an internal, time-limited, charge-based memory into a III–V nanowire (NW)-based optoelectronic neural node circuit designed for handling optical signals in a neural network. The new circuit can receive inhibitory and excitatory light signals, store them, perform a non-linear evaluation, and emit a light signal. Using experimental values from the performance of individual III–V NWs, we create a realistic computational model of the complete artificial neural node circuit. We then create a flexible neural network simulation that uses these circuits as neuronal nodes and light for communication between the nodes. This model can simulate combinations of nodes with different hardware-derived memory properties and variable interconnects. Using the full model, we simulate the hardware implementation of two types of neural networks. First, we show that intentional variations in the memory decay time of the nodes can significantly improve the performance of a reservoir network. Second, we simulate the implementation in an anatomically constrained, functioning model of the central complex network of the insect brain and find that it resolves an important functionality of the network even with significant variations in node performance. Our work demonstrates the advantages of an internal memory in a concrete nanophotonic neural node. The use of variable memory time constants in neural nodes is a general hardware-derived feature and could be used in a broad range of implementations.
Title: Artificial nanophotonic neuron with internal memory for biologically inspired and reservoir network computing (Neuromorphic Computing and Engineering, Journal Article)
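The reservoir-network result above — that intentional node-to-node variation in memory decay time can help — can be illustrated with a generic leaky echo-state reservoir in which each node gets its own decay time constant. This is a minimal sketch, not the paper's nanowire-circuit model; the network size, decay-time range, weight scaling, and input signal are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50

# Hypothetical per-node memory decay times, mimicking intentional
# hardware variation; the range is an illustrative assumption.
tau = rng.uniform(2.0, 20.0, N)            # decay constants (time steps)
alpha = 1.0 - np.exp(-1.0 / tau)           # per-node leak rates

W_in = rng.normal(0.0, 0.5, N)             # input weights
W = rng.normal(0.0, 1.0, (N, N))           # recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

x = np.zeros(N)
states = []
u = np.sin(0.1 * np.arange(200))           # toy input signal
for t in range(200):
    pre = W_in * u[t] + W @ x
    x = (1.0 - alpha) * x + alpha * np.tanh(pre)  # leaky integration
    states.append(x.copy())
states = np.array(states)                  # (time, nodes) reservoir trace
```

Nodes with long tau retain input history over many steps while short-tau nodes track fast changes, which is the mechanism by which heterogeneous decay times enrich the reservoir's temporal feature set.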
Pub Date: 2023-04-24 | DOI: 10.1088/2634-4386/accfbb
Pankaj Sharma, J. Seidel
Mimicking and replicating the function of biological synapses with engineered materials is a challenge for the 21st century. The field of neuromorphic computing has recently seen significant developments, and new concepts are being explored. One of these approaches uses topological defects, such as domain walls in ferroic materials, especially ferroelectrics, that can naturally be addressed by electric fields to alter and tailor their intrinsic or extrinsic properties and functionality. Here, we review concepts of neuromorphic functionality found in ferroelectric domain walls and give a perspective on future developments and applications in low-energy, agile, brain-inspired electronics and computing.
Title: Neuromorphic functionality of ferroelectric domain walls
Pub Date: 2023-04-20 | DOI: 10.1088/2634-4386/accec4
E. Hernández‐Balaguera, Laura Muñoz-Díaz, A. Bou, B. Romero, B. Ilyassov, Antonio Guerrero, J. Bisquert
Perovskite memristors have emerged as leading contenders in brain-inspired neuromorphic electronics. Although these devices have been shown to accurately reproduce synaptic dynamics, they pose challenges for an in-depth understanding of the underlying nonlinear phenomena. Potentiation effects on the electrical conductance of memristive devices have attracted increasing attention from the emerging neuromorphic community and demand adequate interpretation. Here, we propose a detailed interpretation of the temporal dynamics of potentiation based on nonlinear electrical circuits that can be validated by impedance spectroscopy. The fundamental observation is that the current in a capacitor decreases with time; conversely, for an inductor, it increases with time. There is no electromagnetic effect in a halide perovskite memristor, but ionic-electronic coupling creates a chemical inductor effect that lies behind the potentiation property. We therefore show that, beyond negative transients, the accumulation of mobile ions and their eventual penetration into the charge-transport layers constitute a bioelectrical memory feature that is the key to long-term synaptic enhancement. A quantitative dynamical electrical model formed by nonlinear differential equations links the memory-based ionic effects to the inductive phenomena associated with the slow, delayed currents that remain invisible during the ‘off mode’ of the presynaptic spike-based stimuli. Our work opens a new pathway for the rational development of material mimesis of neural communication across synapses, particularly the learning and memory functions of the human brain, through a Hodgkin–Huxley-style biophysical model.
Title: Long-term potentiation mechanism of biological postsynaptic activity in neuro-inspired halide perovskite memristors
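The capacitive-versus-inductive contrast at the heart of this abstract can be made concrete with the textbook step responses of RC and RL branches: under a voltage step, the capacitor branch's current decays while the inductor branch's current grows, which is the signature the authors attribute to the chemical inductor. This is a generic circuit illustration, not a fitted model of a perovskite device; all element values are assumptions.

```python
import numpy as np

# Step response of the two canonical branches named in the abstract.
# Element values are illustrative, not fitted to any device.
t = np.linspace(0.0, 1.0, 1000)        # time axis (s)
V, R, C, L = 1.0, 10.0, 5e-3, 0.5      # step voltage, resistance, C, L

# RC branch: charging current decays exponentially with time constant R*C.
i_cap = (V / R) * np.exp(-t / (R * C))

# RL branch: current rises toward V/R with time constant L/R —
# the "potentiation-like" growing current of a (chemical) inductor.
i_ind = (V / R) * (1.0 - np.exp(-R * t / L))
```

The growing `i_ind` trace is the behavioral analog of conductance potentiation: the response keeps increasing after the stimulus is applied rather than relaxing away.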
Pub Date: 2023-04-19 | DOI: 10.1088/2634-4386/acce61
A. Razumnaya, Y. Tikhonov, V. Vinokur, I. Luk’yanchuk
Multilevel devices demonstrating switchable polarization enable us to efficiently realize neuromorphic functionalities, including synaptic plasticity and neuronal activity. Here we propose a ferroelectric logic unit comprising multiple nanodots disposed between two electrodes and coated with a dielectric material. We devise the integration of this ferroelectric logic unit, which provides topologically configurable non-binary logic, into the gate stack of a field-effect transistor as an analog-like device with resistive states. By controlling the charge of the gate, we demonstrate various routes of topological switching between different polarization configurations in the ferroelectric nanodots. Switching routes between different logic levels are characterized by hysteresis loops with multiple branches that realize specific interconnectivity regimes. Switching between different types of hysteresis loops is achieved by varying external fields and temperature. The devised ferroelectric multilevel devices provide a pathway toward a novel, topologically controlled implementation of discrete synaptic states in neuromorphic computing.
Title: Ferroelectric topologically configurable multilevel logic unit
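The multilevel behavior described above can be sketched with a toy model in which each nanodot is a bistable unit with its own coercive field, so the summed polarization steps through discrete levels and traces multi-branch hysteresis loops as the field sweeps. The coercive fields and field sequence below are illustrative assumptions, not device parameters.

```python
import numpy as np

# Toy multilevel hysteresis: three bistable "nanodots" with distinct
# coercive fields (illustrative values, arbitrary units).
coercive = np.array([0.2, 0.5, 0.8])
state = -np.ones_like(coercive)        # all dots start polarized "down"

def apply_field(E, state, coercive):
    """Switch every dot whose coercive field the applied field E exceeds."""
    state = state.copy()
    state[E > coercive] = 1.0          # switch up past +Ec
    state[E < -coercive] = -1.0        # switch down past -Ec
    return state

levels = []
for E in [0.3, 0.6, 0.9, -0.3, -0.6]:  # one illustrative field sweep
    state = apply_field(E, state, coercive)
    levels.append(state.sum())          # net polarization = logic level
# levels steps through discrete values: -1, 1, 3, 1, -1
```

Because each dot switches at a different field, intermediate field amplitudes select intermediate net-polarization levels — the analog-like resistive states the abstract describes.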
Pub Date: 2023-04-18 | DOI: 10.1088/2634-4386/ac7ee0
S. Stroobants, Julien Dupeyroux, G. de Croon
Compelling evidence has been given for the high energy efficiency and update rates of neuromorphic processors, with performance beyond what standard von Neumann architectures can achieve. Such promising features could be advantageous in critical embedded systems, especially in robotics. To date, the constraints inherent in robots (e.g., size and weight, battery autonomy, available sensors, computing resources, processing time, etc.), and particularly in aerial vehicles, severely hamper the performance of fully autonomous on-board control, including sensor processing and state estimation. In this work, we propose a spiking neural network capable of estimating the pitch and roll angles of a quadrotor in highly dynamic movements from six-degree-of-freedom inertial measurement unit (IMU) data. With only 150 neurons and a limited training dataset obtained using a quadrotor in a real-world setup, the network shows competitive results compared to state-of-the-art, non-neuromorphic attitude estimators. The proposed architecture was successfully tested on the Loihi neuromorphic processor on board a quadrotor to estimate the attitude in flight. Our results show the robustness of neuromorphic attitude estimation and pave the way toward energy-efficient, fully autonomous control of quadrotors with dedicated neuromorphic computing systems.
Title: Neuromorphic computing for attitude estimation onboard quadrotors
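For context, pitch and roll from 6-DOF IMU data are conventionally estimated with a complementary filter: gyroscope integration gives good short-term accuracy and is corrected by the gravity direction inferred from the accelerometer. The sketch below is that generic, non-spiking baseline (the kind of estimator the paper's spiking network is compared against), not the paper's 150-neuron architecture; the gain `k` and sample time `dt` are assumptions.

```python
import numpy as np

def complementary_filter(gyro, accel, dt=0.01, k=0.98):
    """Estimate pitch/roll (rad) from 6-DOF IMU samples.

    gyro:  (T, 3) angular rates [rad/s], body frame (x, y, z).
    accel: (T, 3) accelerations [m/s^2], body frame.
    Generic baseline; gains are illustrative assumptions.
    """
    pitch, roll = 0.0, 0.0
    out = []
    for w, a in zip(gyro, accel):
        # Gyro integration: accurate short-term, drifts long-term.
        pitch_g = pitch + w[1] * dt
        roll_g = roll + w[0] * dt
        # Accelerometer gravity direction: noisy short-term, stable long-term.
        pitch_a = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        roll_a = np.arctan2(a[1], a[2])
        # Blend the two estimates.
        pitch = k * pitch_g + (1.0 - k) * pitch_a
        roll = k * roll_g + (1.0 - k) * roll_a
        out.append((pitch, roll))
    return np.array(out)
```

With zero angular rate and a gravity vector tilted by a constant roll angle, the estimate converges geometrically to that angle, which is the steady-state behavior a learned estimator must at least match.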
Pub Date: 2023-04-17 | DOI: 10.1088/2634-4386/accd90
Daniel Felder, J. Linkhorst, Matthias Wessling
Organic neuromorphic devices can accelerate neural networks and integrate with biological systems. Devices based on the biocompatible and conductive polymer PEDOT:PSS are fast, require low amounts of energy and perform well in crossbar simulations. However, parasitic electrochemical reactions lead to self-discharge and the fading of the learned conductance states over time. This limits a neural network’s operating time and requires complex compensation mechanisms. Spiking neural networks (SNNs) take inspiration from biology to implement local and always-on learning. We show that these SNNs can function on organic neuromorphic hardware and compensate for self-discharge by continuously relearning and reinforcing forgotten states. In this work, we use a high-resolution charge transport model to describe the behavior of organic neuromorphic devices and create a computationally efficient surrogate model. By integrating the surrogate model into a Brian 2 simulation, we can describe the behavior of SNNs on organic neuromorphic hardware. A biologically plausible two-layer network for recognizing 28×28 pixel MNIST images is trained and observed during self-discharge. The network achieves, for its size, competitive recognition results of up to 82.5%. Building a network with forgetful devices yields superior accuracy during training with 84.5% compared to ideal devices. However, trained networks without active spike-timing-dependent plasticity quickly lose their predictive performance. We show that online learning can keep the performance at a steady level close to the initial accuracy, even for idle rates of up to 90%. This performance is maintained when the output neuron’s labels are not revalidated for up to 24 h. These findings reconfirm the potential of organic neuromorphic devices for brain-inspired computing. Their biocompatibility and the demonstrated adaptability to SNNs open the path towards close integration with multi-electrode arrays, drug-delivery devices, and other bio-interfacing systems as either fully organic or hybrid organic-inorganic systems.
Title: Spiking neural networks compensate for weight drift in organic neuromorphic device networks
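The compensation mechanism described above — learned conductances drift toward zero through self-discharge while always-on plasticity keeps re-potentiating the weights that are actually used — can be reduced to a two-weight toy model. The decay and relearning rates below are illustrative assumptions, not PEDOT:PSS device parameters.

```python
# Toy model: one weight is never reinforced, one is reinforced by
# ongoing spiking activity (an STDP-like push toward a target).
decay = 0.01       # fractional conductance loss per step (assumed)
relearn = 0.05     # reinforcement rate per co-active event (assumed)
target = 0.8       # conductance the learning rule pushes toward

w_idle = 0.8       # learned weight left idle: only self-discharge
w_active = 0.8     # learned weight in an actively spiking pathway
for _ in range(500):
    w_idle *= 1.0 - decay               # self-discharge only
    w_active *= 1.0 - decay             # self-discharge ...
    w_active += relearn * (target - w_active)  # ... plus relearning
```

The idle weight decays to near zero, while the relearned weight settles at a steady value near its target — the "steady level close to the initial accuracy" effect at the single-synapse scale.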
Pub Date: 2023-04-12 | DOI: 10.1088/2634-4386/accc51
Taehwan Moon, Hyun-Yong Lee, S. Nam, H. Bae, Duk-Hyun Choe, Sanghyun Jo, Yunseong Lee, Yoon-Ho Park, J. Yang, J. Heo
We propose a novel synaptic design for more efficient neuromorphic edge computing with substantially improved linearity and extremely low variability. Specifically, a parallel arrangement of ferroelectric tunnel junctions (FTJs) with an incremental pulsing scheme greatly improves the linearity of synaptic weight updates by averaging the weight-update rates of multiple devices. To enable such a design with FTJ building blocks, we have demonstrated the lowest reported variability: σ/μ = 0.036 cycle-to-cycle and σ/μ = 0.032 device-to-device among six dies across an 8-inch wafer. With such devices, we further show improved synaptic performance and pattern-recognition accuracy through experiments combined with simulations.
Title: Parallel synaptic design of ferroelectric tunnel junctions for neuromorphic computing
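The linearization idea — distribute the programming pulses over several parallel devices so that each stays in its less-saturated, more-linear regime — can be demonstrated numerically with a hypothetical saturating update rule (not a measured FTJ characteristic; the update constants and device count are assumptions).

```python
import numpy as np

def pulse(g, alpha=0.1, beta=3.0):
    # Hypothetical saturating update: steps shrink as conductance grows.
    return g + alpha * np.exp(-beta * g)

def nonlinearity(curve):
    # RMS deviation from the straight line joining the curve's endpoints,
    # normalized by the curve's total range.
    ideal = np.linspace(curve[0], curve[-1], len(curve))
    return np.sqrt(np.mean((curve - ideal) ** 2)) / (curve[-1] - curve[0])

n_pulses = 60

# One device receiving every pulse: deep into saturation, strongly nonlinear.
g, single = 0.0, []
for _ in range(n_pulses):
    g = pulse(g)
    single.append(g)

# Four devices pulsed round-robin; the synaptic weight is their sum.
gs, parallel = np.zeros(4), []
for n in range(n_pulses):
    gs[n % 4] = pulse(gs[n % 4])
    parallel.append(gs.sum())

single, parallel = np.array(single), np.array(parallel)
```

Each of the four devices sees only a quarter of the pulses, so the summed weight-versus-pulse curve bows far less than the single-device curve — the averaging effect the abstract credits for the improved update linearity.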
Pub Date: 2023-04-04 | DOI: 10.1088/2634-4386/acca45
Sebastian Siegel, Younes Bouhadjar, T. Tetzlaff, R. Waser, R. Dittmann, D. Wouters
Machine learning models for sequence learning and processing often suffer from high energy consumption and require large amounts of training data. The brain presents more efficient solutions for how these types of tasks can be solved, which has inspired the conception of novel brain-inspired algorithms. Their realizations, however, remain constrained to conventional von Neumann machines, whose inherent memory bottleneck prevents the algorithms’ potential power efficiency from being exploited. We therefore present in this paper a dedicated hardware implementation of a biologically plausible version of the Temporal Memory component of the Hierarchical Temporal Memory concept. Our implementation is built on a memristive crossbar array and is the result of a hardware-algorithm co-design process. Rather than using the memristive devices solely for data storage, our approach leverages their specific switching dynamics in the formulation of the peripheral circuitry, resulting in a more efficient design. By combining a brain-like algorithm with emerging non-volatile memristive device technology, we strive for maximum energy efficiency. We present simulation results on the training of complex high-order sequences and discuss how the system is able to predict in a context-dependent manner. Finally, we investigate the energy consumption during training and conclude with a discussion of scaling prospects.
Title: System model of neuromorphic sequence learning on a memristive crossbar array
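For background, the core operation a memristive crossbar contributes to such an implementation is an analog vector-matrix multiply: applied row voltages and device conductances produce column currents by Ohm's law, summed along each column by Kirchhoff's current law. A minimal sketch with illustrative conductance and voltage values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Device conductances (siemens) on an 8-row x 4-column crossbar and the
# read voltages applied to the rows; values are illustrative assumptions.
G = rng.uniform(1e-6, 1e-4, (8, 4))
V = rng.uniform(0.0, 0.2, 8)

# Each column current is the dot product of the row voltages with that
# column's conductances — the whole multiply happens in one analog step.
I = V @ G
```

Avoiding the von Neumann shuttling of weights between memory and processor for this multiply is exactly the bottleneck the abstract says conventional machines cannot escape.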
Pub Date: 2023-03-23 | DOI: 10.1088/2634-4386/acc6e8
Xuan Hu, Can Cui, Samuel Liu, F. García-Sánchez, Wesley H. Brigner, Benjamin W. Walker, Alexander J. Edwards, T. Xiao, C. Bennett, Naimul Hassan, M. Frank, Jean Anne C Incorvia, J. Friedman
Topological solitons are exciting candidates for the physical implementation of next-generation computing systems. As these solitons are nanoscale and can be controlled with minimal energy consumption, they are ideal to fulfill emerging needs for computing in the era of big data processing and storage. Magnetic domain walls (DWs) and magnetic skyrmions are two types of topological solitons that are particularly exciting for next-generation computing systems in light of their non-volatility, scalability, rich physical interactions, and ability to exhibit non-linear behaviors. Here we summarize the development of computing systems based on magnetic topological solitons, highlighting logical and neuromorphic computing with magnetic DWs and skyrmions.
Title: Magnetic skyrmions and domain walls for logical and neuromorphic computing
Pub Date: 2023-03-22. DOI: 10.1088/2634-4386/acc683
Ugo Bruno, Anna Mariano, Daniela Rana, T. Gemmeke, Simon Musall, F. Santoro
The computation of the brain relies on the highly efficient communication among billions of neurons. Such efficiency derives from the brain's plastic and reconfigurable nature, enabling complex computations and the maintenance of vital functions with a remarkably low power consumption of only ∼20 W. First efforts to leverage brain-inspired computational principles have led to the introduction of artificial neural networks that revolutionized information processing and daily life. The relentless pursuit of the definitive computing platform is now pushing researchers towards the investigation of novel solutions that emulate specific brain features (such as synaptic plasticity) and allow local, energy-efficient computations. The development of such devices may also be pivotal in addressing major challenges of a continuously aging world, including the treatment of neurodegenerative diseases. To date, the neuroelectronics field has been instrumental in deepening the understanding of how neurons communicate, owing to the rapid development of silicon-based platforms for neural recording and stimulation. However, this approach still does not allow for in loco processing of biological signals. In fact, despite the success of silicon-based devices in electronic applications, they are ill-suited for directly interfacing with biological tissue. A cornucopia of solutions has therefore been proposed in recent years to obtain neuromorphic materials that create effective biointerfaces and enable reliable bidirectional communication with neurons. Organic conductive materials in particular are not only highly biocompatible and able to electrochemically transduce biological signals, but also promise to include neuromorphic features, such as neurotransmitter-mediated plasticity and learning capabilities.
Furthermore, organic electronics, relying on a mixed electronic/ionic conduction mechanism, can be efficiently coupled with biological neural networks while still communicating successfully with silicon-based electronics. Here, we envision neurohybrid systems that integrate silicon-based and organic-electronics-based neuromorphic technologies to create active artificial interfaces with biological tissues. We believe that this approach may pave the way towards functional bidirectional communication between biological and artificial ‘brains’, offering new potential therapeutic applications and allowing for novel approaches in prosthetics.
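To make the plasticity idea concrete, here is a minimal sketch (my own illustration, not from the paper; `synapse_response` and all time constants and amplitudes are assumed placeholders) of short-term plasticity in an organic electrochemical synapse: each presynaptic pulse drives ions into the polymer channel and raises its conductance, which then relaxes as the ions drift back out.

```python
import math

# Toy model of short-term plasticity in an organic electrochemical synapse:
# each presynaptic pulse injects ions into the channel (conductance step),
# and the conductance then decays exponentially back toward its rest value.
# Amplitudes and time constants are illustrative placeholders.

def synapse_response(pulse_times, dt=1e-3, t_end=0.1,
                     delta_g=0.2, tau=0.02, g0=1.0):
    """Return (times, conductances) for a presynaptic pulse train."""
    times, g_trace = [], []
    t, g = 0.0, g0
    pulses = sorted(pulse_times)          # pending pulse times, earliest first
    while t < t_end:
        while pulses and pulses[0] <= t:  # apply any pulse that is due
            g += delta_g                  # ion injection: potentiation step
            pulses.pop(0)
        g = g0 + (g - g0) * math.exp(-dt / tau)  # relaxation toward rest g0
        times.append(t)
        g_trace.append(g)
        t += dt
    return times, g_trace
```

A second pulse arriving before the first has fully decayed yields a higher peak conductance than an isolated pulse, a simple analogue of paired-pulse facilitation; the conductance then returns to rest, mimicking the short-term (volatile) character of this kind of plasticity.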
{"title":"From neuromorphic to neurohybrid: transition from the emulation to the integration of neuronal networks","authors":"Ugo Bruno, Anna Mariano, Daniela Rana, T. Gemmeke, Simon Musall, F. Santoro","doi":"10.1088/2634-4386/acc683","DOIUrl":"https://doi.org/10.1088/2634-4386/acc683","url":null,"abstract":"The computation of the brain relies on the highly efficient communication among billions of neurons. Such efficiency derives from the brain’s plastic and reconfigurable nature, enabling complex computations and maintenance of vital functions with a remarkably low power consumption of only ∼20 W. First efforts to leverage brain-inspired computational principles have led to the introduction of artificial neural networks that revolutionized information processing and daily life. The relentless pursuit of the definitive computing platform is now pushing researchers towards investigation of novel solutions to emulate specific brain features (such as synaptic plasticity) to allow local and energy efficient computations. The development of such devices may also be pivotal in addressing major challenges of a continuously aging world, including the treatment of neurodegenerative diseases. To date, the neuroelectronics field has been instrumental in deepening the understanding of how neurons communicate, owing to the rapid development of silicon-based platforms for neural recordings and stimulation. However, this approach still does not allow for in loco processing of biological signals. In fact, despite the success of silicon-based devices in electronic applications, they are ill-suited for directly interfacing with biological tissue. A cornucopia of solutions has therefore been proposed in the last years to obtain neuromorphic materials to create effective biointerfaces and enable reliable bidirectional communication with neurons. 
Organic conductive materials in particular are not only highly biocompatible and able to electrochemically transduce biological signals, but also promise to include neuromorphic features, such as neuro-transmitter mediated plasticity and learning capabilities. Furthermore, organic electronics, relying on mixed electronic/ionic conduction mechanism, can be efficiently coupled with biological neural networks, while still successfully communicating with silicon-based electronics. Here, we envision neurohybrid systems that integrate silicon-based and organic electronics-based neuromorphic technologies to create active artificial interfaces with biological tissues. We believe that this approach may pave the way towards the development of a functional bidirectional communication between biological and artificial ‘brains’, offering new potential therapeutic applications and allowing for novel approaches in prosthetics.","PeriodicalId":198030,"journal":{"name":"Neuromorphic Computing and Engineering","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125413185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}