Hybrid quantum-classical photonic neural networks
Pub Date: 2025-01-01 | Epub Date: 2025-12-01 | DOI: 10.1038/s44335-025-00045-1
Tristan Austin, Simon Bilodeau, Andrew Hayman, Nir Rotenberg, Bhavin J Shastri
Neuromorphic (brain-inspired) photonics accelerates AI [1] with high-speed, energy-efficient solutions for RF communication [2], image processing [3,4], and fast matrix multiplication [5,6]. However, integrated neuromorphic photonic hardware faces size constraints that limit network complexity. Recent advances in photonic quantum hardware [7] and performant trainable quantum circuits [8] offer a path to more scalable photonic neural networks. Here, we show that a combination of classical network layers with trainable continuous variable quantum circuits yields hybrid networks with improved trainability and accuracy. On a classification task, these hybrid networks match the performance of classical networks nearly twice their size. These performance benefits remain even when evaluated at state-of-the-art bit precisions for classical and quantum hardware. Finally, we outline available hardware and a roadmap to hybrid architectures. These hybrid quantum-classical networks demonstrate a unique route to enhance the computational capacity of integrated photonic neural networks without increasing the network size.
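As a rough illustration of the layering described above, the sketch below wires a small classical dense layer into a classically simulated single-mode Gaussian (continuous-variable) circuit whose homodyne expectation serves as the layer output. The parameter roles, the hbar = 1 quadrature convention, and the choice to track only first moments are assumptions made for the sketch; the authors' photonic architecture, gate set (which would also include non-Gaussian gates), and training procedure are not reproduced here.

```python
# Hedged sketch: classical dense layer -> simulated continuous-variable mode per output.
import numpy as np

def cv_layer(x_in, theta, r):
    """Single-mode Gaussian circuit tracked at the level of first moments only:
    encode x_in as a displacement of the vacuum, rotate by theta, squeeze by r,
    and read out the homodyne expectation <x> (hbar = 1 convention assumed)."""
    mean = np.array([np.sqrt(2.0) * x_in, 0.0])            # displacement encoding
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])       # phase-space rotation
    sq = np.diag([np.exp(-r), np.exp(r)])                   # x-quadrature squeezing
    return (sq @ rot @ mean)[0]

def hybrid_forward(x, W, b, thetas, rs):
    """Classical tanh layer feeding one simulated CV mode per output feature."""
    pre = np.tanh(W @ x + b)                                # classical front end
    return np.array([cv_layer(p, t, r) for p, t, r in zip(pre, thetas, rs)])

rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 8)), np.zeros(4)
thetas, rs = rng.uniform(0, np.pi, 4), rng.uniform(-0.5, 0.5, 4)
print(hybrid_forward(rng.normal(size=8), W, b, thetas, rs))
```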
A self-training spiking superconducting neuromorphic architecture
Pub Date: 2025-01-01 | Epub Date: 2025-03-04 | DOI: 10.1038/s44335-025-00021-9
M L Schneider, E M Jué, M R Pufall, K Segall, C W Anderson
Neuromorphic computing takes biological inspiration to the device level, aiming to improve computational efficiency and capabilities. One of the major issues that arises is the training of neuromorphic hardware systems. Typically, training algorithms require global information and are thus inefficient to implement directly in hardware. In this paper, we describe a set of reinforcement-learning-based local weight update rules and their implementation in superconducting hardware. Using SPICE circuit simulations, we implement a small-scale neural network with a learning time on the order of one nanosecond per update. This network can be trained to learn new functions simply by changing the target output for a given set of inputs, without the need for any external adjustments to the network. Further, this architecture does not require programming explicit weight values in the network, alleviating a critical challenge with analog hardware implementations of neural networks.
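The sketch below conveys the flavor of such a rule in plain software: a reward-modulated node-perturbation update in which each synapse uses only its presynaptic activity, the locally injected perturbation, and a broadcast scalar reward. The specific rule, constants, and target function are illustrative assumptions, not the authors' superconducting circuit; as in the abstract, a new function can be learned simply by editing the target vector.

```python
# Hedged sketch: local, reward-modulated weight updates for a single sigmoid neuron.
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(1)
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
target = np.array([0, 1, 1, 1], dtype=float)     # OR; change this vector to train a new function

w, b = rng.normal(scale=0.5, size=2), 0.0
eta, sigma = 0.3, 0.2

for step in range(20000):
    i = rng.integers(len(inputs))
    x, t = inputs[i], target[i]
    u = w @ x + b
    xi = sigma * rng.normal()                    # exploratory perturbation of the neuron
    reward = -(sigmoid(u + xi) - t) ** 2         # broadcast scalar reward
    baseline = -(sigmoid(u) - t) ** 2            # unperturbed outcome as a baseline
    # local three-factor update: (reward - baseline) x perturbation x presynaptic activity
    w += eta * (reward - baseline) * xi * x / sigma ** 2
    b += eta * (reward - baseline) * xi / sigma ** 2

print(np.round(sigmoid(inputs @ w + b), 2))      # approaches [0, 1, 1, 1]
```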
A neuromorphic multi-scale approach for real-time heart rate and state detection
Pub Date: 2025-01-01 | Epub Date: 2025-04-02 | DOI: 10.1038/s44335-025-00024-6
Chiara De Luca, Mirco Tincani, Giacomo Indiveri, Elisa Donati
With the advent of novel sensor and machine learning technologies, it is becoming possible to develop wearable systems that perform continuous recording and processing of biosignals for health or body state assessment. For example, modern smartwatches can already track physiological functions, including heart rate and its anomalies, with high precision. However, stringent constraints on size and energy consumption pose significant challenges for always-on operation to detect trends across multiple time scales for extended periods of time. To address these challenges, we propose an alternative solution that exploits the ultra-low power consumption features of mixed-signal neuromorphic technologies. We present a biosignal processing architecture that integrates multimodal sensory inputs and processes them using the principles of neural computation to reliably detect trends in heart rate and physiological states. We validate this architecture on a mixed-signal neuromorphic processor and demonstrate its robust operation despite the inherent variability of the analog circuits present in the system. In addition, we demonstrate how the system can process multi-scale signals, namely instantaneous heart rate and its long-term states discretized into distinct zones, effectively detecting monotonic changes over extended periods that indicate pathological conditions such as agitation. This approach paves the way for a new generation of energy-efficient stand-alone wearable devices that are particularly suited for scenarios that require continuous health monitoring with minimal device maintenance.
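A purely software analogue of the multi-scale idea is sketched below: instantaneous heart rate is discretized into coarse zones, and a slow observer flags a sustained, monotonic drift of the zone index over a long window. The zone boundaries, window lengths, and synthetic trace are assumptions for illustration and do not model the mixed-signal neuromorphic implementation.

```python
# Hedged sketch: heart-rate zone discretization plus slow trend detection.
import numpy as np

ZONES = [(0, 60), (60, 100), (100, 140), (140, 250)]       # bpm bands (illustrative)

def zone_of(hr):
    return next(i for i, (lo, hi) in enumerate(ZONES) if lo <= hr < hi)

def sustained_rise(zones, window=400, blocks=4, min_step=1.0):
    """Compare coarse block averages of the zone index over `window` samples:
    a non-decreasing block sequence with a net rise of at least `min_step`
    zones is treated as a sustained (agitation-like) drift."""
    recent = np.asarray(zones[-window:], dtype=float)
    if len(recent) < window:
        return False
    means = recent.reshape(blocks, -1).mean(axis=1)
    return bool(np.all(np.diff(means) >= 0) and means[-1] - means[0] >= min_step)

rng = np.random.default_rng(2)
hr_trace = np.concatenate([
    70 + 3 * rng.standard_normal(300),                         # resting segment
    np.linspace(75, 150, 300) + 3 * rng.standard_normal(300),  # slow agitation-like drift
])
zones = [zone_of(h) for h in hr_trace]
print("sustained rise detected:", sustained_rise(zones))
```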
Computing with oscillators from theoretical underpinnings to applications and demonstrators
Pub Date: 2024-12-04 | DOI: 10.1038/s44335-024-00015-z
Aida Todri-Sanial, Corentin Delacour, Madeleine Abernot, Filip Sabo
Networks of coupled oscillators have far-reaching implications across various fields, providing insights into a plethora of dynamics. This review offers an in-depth overview of computing with oscillators, covering computational capability, synchronization occurrence, and mathematical formalism. We discuss numerous circuit design implementations, technology choices, and applications ranging from pattern retrieval and combinatorial optimization problems to machine learning algorithms. We also outline perspectives to broaden the applications and mathematical understanding of coupled oscillator dynamics.
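As a minimal numerical entry point to the dynamics the review surveys, the sketch below integrates N all-to-all coupled Kuramoto phase oscillators and reports the order parameter r, which approaches 1 when the population synchronizes. Parameter values are arbitrary illustrative choices.

```python
# Hedged sketch: mean-field Kuramoto model and its synchronization order parameter.
import numpy as np

def kuramoto(n=50, K=1.5, dt=0.01, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, n)           # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)      # initial phases
    for _ in range(steps):
        # dtheta_i/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + (K / n) * coupling)
    return abs(np.exp(1j * theta).mean())     # order parameter: 1 = fully synchronized

print(f"order parameter r = {kuramoto():.2f}")   # well above the incoherent level for K above threshold
```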
Adiabatic leaky integrate and fire neurons with refractory period for ultra low energy neuromorphic computing
Pub Date: 2024-12-04 | DOI: 10.1038/s44335-024-00013-1
Marco Massarotto, Stefano Saggini, Mirko Loghi, David Esseni
In recent years, in-memory computing in the charge domain has gained significant interest as a promising solution to further enhance the energy efficiency of neuromorphic hardware. In this work, we explore the synergy between brain-inspired computation and the adiabatic paradigm by presenting an adiabatic Leaky Integrate-and-Fire neuron in 180 nm CMOS technology that is able to emulate the most important primitives for a valuable neuromorphic computation, such as the accumulation of the incoming input spikes, an exponential leakage of the membrane potential, and a tunable refractory period. Unlike previous contributions in the literature, our design can exploit both the charging and recovery phases of the adiabatic operation to ensure seamless and continuous computation, all while exchanging energy with the power supply with an efficiency higher than 90% over a wide range of resonance frequencies, even surpassing 99% at the lowest frequencies. Our simulations unveil a minimum energy per synaptic operation of 470 fJ at a 500 kHz resonance frequency, which yields a 9x energy saving with respect to non-adiabatic operation.
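A behavioral (not circuit-level) sketch of the listed neuron primitives is given below: accumulation of weighted input spikes, exponential leak of the membrane potential, and a tunable refractory period. The time constant, weight, and threshold are arbitrary software values, not the 180 nm adiabatic implementation.

```python
# Hedged sketch: leaky integrate-and-fire neuron with exponential leak and refractory period.
import numpy as np

def lif(spikes_in, dt=1e-6, tau=20e-6, w=0.3, v_th=1.0, t_ref=5e-6):
    v, refractory, out = 0.0, 0.0, []
    for s in spikes_in:
        if refractory > 0:                      # neuron is silent after a spike
            refractory -= dt
            out.append(0)
            continue
        v *= np.exp(-dt / tau)                  # exponential leak of the membrane potential
        v += w * s                              # accumulate weighted input spikes
        if v >= v_th:                           # fire and reset
            out.append(1)
            v, refractory = 0.0, t_ref
        else:
            out.append(0)
    return out

rng = np.random.default_rng(3)
spikes_in = (rng.random(200) < 0.3).astype(int)    # Bernoulli input spike train
print(sum(lif(spikes_in)), "output spikes from", spikes_in.sum(), "input spikes")
```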
Thermodynamic linear algebra
Pub Date: 2024-11-05 | DOI: 10.1038/s44335-024-00014-0
Maxwell Aifer, Kaelan Donatella, Max Hunter Gordon, Samuel Duffield, Thomas Ahle, Daniel Simpson, Gavin Crooks, Patrick J. Coles
Linear algebra is central to many algorithms in engineering, science, and machine learning; hence, accelerating it would have tremendous economic impact. Quantum computing has been proposed for this purpose, although the resource requirements are far beyond current technological capabilities. We consider an alternative physics-based computing paradigm based on classical thermodynamics, to provide a near-term approach to accelerating linear algebra. At first sight, thermodynamics and linear algebra seem to be unrelated fields. Here, we connect solving linear algebra problems to sampling from the thermodynamic equilibrium distribution of a system of coupled harmonic oscillators. We present simple thermodynamic algorithms for solving linear systems of equations, computing matrix inverses, and computing matrix determinants. Under reasonable assumptions, we rigorously establish asymptotic speedups for our algorithms, relative to digital methods, that scale linearly in matrix dimension. Our algorithms exploit thermodynamic principles like ergodicity, entropy, and equilibration, highlighting the deep connection between these two seemingly distinct fields, and opening up algebraic applications for thermodynamic computers.
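A toy digital emulation of the core connection is sketched below: for a symmetric positive definite A, the overdamped Langevin dynamics dx = -(A x - b) dt + sqrt(2) dW has the Gaussian stationary distribution N(A^{-1} b, A^{-1}), so the long-time sample mean approximates the solution of A x = b. Step size, burn-in, and run length are illustrative choices; the paper's algorithms target analog thermodynamic hardware, not this digital loop.

```python
# Hedged sketch: solving A x = b from the time average of a simulated Langevin/OU process.
import numpy as np

rng = np.random.default_rng(4)
d = 5
M = rng.normal(size=(d, d))
A = M @ M.T + d * np.eye(d)                   # random symmetric positive definite matrix
b = rng.normal(size=d)

dt, burn_in, steps = 1e-3, 20_000, 200_000
x = np.zeros(d)
running_sum, count = np.zeros(d), 0
for t in range(steps):
    # Euler-Maruyama step of dx = -(A x - b) dt + sqrt(2) dW
    x += -dt * (A @ x - b) + np.sqrt(2 * dt) * rng.normal(size=d)
    if t >= burn_in:
        running_sum += x
        count += 1

x_thermo = running_sum / count                # sample mean ~ A^{-1} b at equilibrium
print("Langevin estimate error:", np.linalg.norm(x_thermo - np.linalg.solve(A, b)))
```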
Efficient generation of grids and traversal graphs in compositional spaces towards exploration and path planning
Pub Date: 2024-11-05 | DOI: 10.1038/s44335-024-00012-2
Adam M. Krajewski, Allison M. Beese, Wesley F. Reinhart, Zi-Kui Liu
Diverse disciplines across science and engineering deal with problems related to compositions, which exist in non-Euclidean simplex spaces, rendering many standard tools inaccurate or inefficient. This work explores such spaces conceptually in the context of materials discovery, quantifies their computational feasibility, and implements several essential methods specific to simplex spaces through a new high-performance open-source library nimplex. Most significantly, we derive and implement an algorithm for constructing a novel n-dimensional simplex graph data structure, containing all discretized compositions and possible neighbor-to-neighbor transitions. Critically, no distance or neighborhood calculations are performed, instead leveraging pure combinatorics and order in procedurally generated simplex grids, keeping the algorithm $\mathcal{O}(N)$, with minimal memory, enabling rapid construction of graphs with billions of transitions in seconds. Additionally, we demonstrate how such graph representations can be combined to homogeneously express complex path-planning problems, while facilitating efficient deployment of existing high-performance gradient descent, graph traversal, and other optimization algorithms.
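The objects involved can be made concrete with the naive construction below: enumerate all integer compositions on a discretized simplex and define transitions as single-unit exchanges between two components. Unlike nimplex, which obtains adjacency procedurally from combinatorics without any neighborhood calculations, this sketch computes neighbors explicitly and is meant only as an illustration of the grid and graph.

```python
# Hedged sketch: simplex grid points and neighbor-to-neighbor transitions, computed naively.
from itertools import combinations

def simplex_grid(dim, ndiv):
    """All integer compositions x_1 + ... + x_dim = ndiv (fractions are x / ndiv)."""
    if dim == 1:
        return [(ndiv,)]
    return [(i,) + rest for i in range(ndiv + 1) for rest in simplex_grid(dim - 1, ndiv - i)]

def neighbors(node):
    """Transitions that move one grid unit from component i to component j."""
    out = []
    for i, j in combinations(range(len(node)), 2):
        for src, dst in ((i, j), (j, i)):
            if node[src] > 0:
                nxt = list(node)
                nxt[src] -= 1
                nxt[dst] += 1
                out.append(tuple(nxt))
    return out

grid = simplex_grid(dim=3, ndiv=12)            # ternary composition space at 12 divisions
print(len(grid), "grid points;", len(neighbors(grid[0])), "transitions from", grid[0])
```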
Demonstration of 4-quadrant analog in-memory matrix multiplication in a single modulation
Pub Date: 2024-10-03 | DOI: 10.1038/s44335-024-00010-4
Manuel Le Gallo, Oscar Hrynkevych, Benedikt Kersting, Geethan Karunaratne, Athanasios Vasilopoulos, Riduan Khaddam-Aljameh, Ghazi Sarwat Syed, Abu Sebastian
Analog in-memory computing (AIMC) leverages the inherent physical characteristics of resistive memory devices to execute computational operations, notably matrix-vector multiplications (MVMs). However, executing MVMs using a single-phase reading scheme to reduce latency necessitates the simultaneous application of both positive and negative voltages across resistive memory devices. This degrades the accuracy of the computation due to the dependence of the device conductance on the voltage polarity. Here, we demonstrate the realization of a 4-quadrant MVM in a single modulation by developing analog and digital calibration procedures to mitigate the conductance polarity dependence, fully implemented on a multi-core AIMC chip based on phase-change memory. With this approach, we experimentally demonstrate accurate neural network inference and similarity search tasks using one or multiple cores of the chip, at 4 times higher MVM throughput and energy efficiency than the conventional four-phase reading scheme.
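The sketch below shows numerically why a signed ("4-quadrant") MVM conventionally decomposes into four unipolar read phases: weights map onto differential conductance pairs, inputs split by polarity, and the four partial products are recombined. The single-modulation scheme demonstrated in the paper instead applies both voltage polarities at once and calibrates away the conductance polarity dependence; that calibration is not modeled here.

```python
# Hedged sketch: four-phase decomposition of a signed matrix-vector multiplication.
import numpy as np

rng = np.random.default_rng(5)
W = rng.uniform(-1, 1, size=(8, 16))          # signed weights
x = rng.uniform(-1, 1, size=16)               # signed inputs

G_pos, G_neg = np.clip(W, 0, None), np.clip(-W, 0, None)    # differential conductance pair
x_pos, x_neg = np.clip(x, 0, None), np.clip(-x, 0, None)    # input polarity split

# four unipolar phases, each realizable with a single voltage polarity, then digital recombination
y = (G_pos @ x_pos) - (G_neg @ x_pos) - (G_pos @ x_neg) + (G_neg @ x_neg)

print(np.allclose(y, W @ x))                  # True: the recombination recovers the signed MVM
```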
In-memory search with learning to hash based on resistive memory for recommendation acceleration
Pub Date: 2024-10-01 | DOI: 10.1038/s44335-024-00009-x
Fei Wang, Woyu Zhang, Zhi Li, Ning Lin, Rui Bao, Xiaoxin Xu, Chunmeng Dou, Zhongrui Wang, Dashan Shang
Similarity search is essential in current artificial intelligence applications and widely utilized in various fields, such as recommender systems. However, the exponential growth of data poses significant challenges in search time and energy consumption on traditional digital hardware. Here, we propose a software-hardware co-optimization to address these challenges. On the software side, we employ a learning-to-hash method for vector encoding and achieve an approximate nearest neighbor search by calculating Hamming distance, thereby reducing computational complexity. On the hardware side, we leverage the resistance random-access memory crossbar array to implement the hash encoding process and the content-addressable memory with an in-memory computing paradigm to lower the energy consumption during searches. Simulations on the MovieLens dataset demonstrate that the implementation achieves comparable accuracy to software and reduces energy consumption by 30-fold compared to traditional digital systems. These results provide insight into the development of energy-efficient in-memory search systems for edge computing.
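The shape of the search pipeline can be sketched in software as below: vectors are binarized by hyperplane hashing and candidates are ranked by Hamming distance. A random projection stands in for the learned hash purely for illustration; the paper learns the hash functions and executes both encoding and Hamming matching in resistive-memory hardware.

```python
# Hedged sketch: binary hashing plus Hamming-distance approximate nearest-neighbor search.
import numpy as np

rng = np.random.default_rng(6)
dim, n_items, n_bits = 64, 10_000, 32
items = rng.normal(size=(n_items, dim))
query = items[123] + 0.05 * rng.normal(size=dim)       # a noisy copy of item 123

H = rng.normal(size=(dim, n_bits))                     # stand-in for the learned hash functions
codes = items @ H > 0                                  # n_items x n_bits binary codes
q_code = query @ H > 0

hamming = (codes != q_code).sum(axis=1)                # content-addressable-memory-style matching
candidates = np.argsort(hamming)[:5]                   # approximate nearest neighbors
print(candidates, hamming[candidates])                 # item 123 should rank first
```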
A perfect storm and a new dawn for unconventional computing technologies
Pub Date: 2024-09-12 | DOI: 10.1038/s44335-024-00011-3
Wei D. Lu, Christof Teuscher, Stephen A. Sarles, Yuchao Yang, Aida Todri-Sanial, Xiao-Bo Zhu