Authors: Mona Ezzadeen, Atreya Majumdar, Olivier Valorge, Niccolo Castellani, Valentin Gherman, Guillaume Regis, Bastien Giraud, Jean-Philippe Noel, Valentina Meli, Marc Bocquet, Francois Andrieu, Damien Querlioz, Jean-Michel Portal
Journal: Communications engineering, pp. 1-15
Published: 2024-06-18
DOI: 10.1038/s44172-024-00226-z
Open-access PDF: https://www.nature.com/articles/s44172-024-00226-z.pdf
Implementation of binarized neural networks immune to device variation and voltage drop employing resistive random access memory bridges and capacitive neurons
Resistive Random Access Memory (ReRAM) arrays provide a promising foundation for deploying neural network accelerators based on near- or in-memory computing. However, most popular accelerators rely on Ohm's and Kirchhoff's laws to perform multiply-and-accumulate operations, which makes them prone to ReRAM variability and voltage drop in the memory array and requires sophisticated readout circuits. Here we propose a robust binarized neural network based on fully differential capacitive neurons and ReRAM synapses used in a resistive-bridge fashion. We fabricated a network layer with up to 23 inputs and extrapolated to large numbers of inputs through simulation. By defining proper programming and reading conditions, we demonstrate the high resilience of this solution, with a minimal accuracy drop compared to a software baseline on image classification tasks. Moreover, when projected to a 22-nanometre technology, our solution can achieve a peak energy efficiency comparable with the state of the art. Mona Ezzadeen and co-authors demonstrate a compute-in-memory cell with low power consumed per operation. A silicon implementation with 23 inputs is successfully used to solve benchmark digit-recognition tasks.
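The two ideas in the abstract — binarized multiply-and-accumulate and ratio-based (bridge) readout — can be illustrated with a minimal sketch. This is not the authors' circuit or code: the function names and resistance values are hypothetical, and the bridge model is an idealized two-resistor voltage divider. It shows why a binarized neuron reduces to an XNOR-popcount majority vote, and why a complementary ReRAM divider yields a binary decision that depends only on which device is in the low-resistance state, not on absolute resistance values.

```python
def bnn_neuron(inputs, weights):
    """Binarized neuron: inputs and weights are +1/-1.

    The product of two binary (+1/-1) values is their XNOR, so the
    multiply-and-accumulate collapses to a signed popcount followed
    by a sign (majority-vote) activation.
    """
    acc = sum(x * w for x, w in zip(inputs, weights))
    return 1 if acc >= 0 else -1


def bridge_output(r_top, r_bottom, vdd=1.0):
    """Idealized ReRAM bridge: two devices forming a voltage divider.

    With the two devices programmed complementarily (one low-resistance,
    one high-resistance), the midpoint voltage sits well above or below
    vdd/2 depending only on which device is which, so the binary readout
    tolerates large absolute resistance variation.
    """
    return vdd * r_bottom / (r_top + r_bottom)


if __name__ == "__main__":
    # Majority vote over three synapses.
    print(bnn_neuron([1, -1, 1], [1, 1, 1]))    # two matches vs one

    # Bridge decision survives a 3x global shift in resistance
    # (illustrative HRS/LRS values, not measured data).
    for scale in (1.0, 3.0):
        v = bridge_output(1e3 * scale, 1e5 * scale)
        print(v > 0.5)  # same binary decision at every scale
```

Because the divider output depends only on the resistance ratio, any common-mode drift of both devices cancels out — this is the intuition behind the variation immunity claimed in the title, though the fabricated design uses fully differential capacitive neurons rather than this simple resistive divider.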