Memristor Based Liquid State Machine With Method for In-Situ Training
Alex Henderson; Chris Yakopcic; Cory Merkel; Hananel Hazan; Steven Harbour; Tarek M. Taha
IEEE Transactions on Nanotechnology, vol. 23, pp. 376-385, published 22 March 2024. DOI: 10.1109/TNANO.2024.3381008
Abstract:
Spiking neural network (SNN) hardware has gained significant interest due to its ability to process complex data in size-, weight-, and power-constrained (SWaP) environments. Memristors, in particular, offer the potential to enhance SNN algorithms by providing analog-domain acceleration with exceptional energy and throughput efficiency. Among current SNN architectures, the Liquid State Machine (LSM), a form of Reservoir Computing (RC), stands out for its low resource utilization and straightforward training process. In this paper, we present a custom memristor-based LSM circuit design with an online learning methodology. The proposed LSM circuit is designed in SPICE to ensure device-level accuracy. Furthermore, we explore liquid connectivity tuning to facilitate a real-time and efficient design process. To assess the performance of our system, we evaluate it on multiple datasets, including MNIST, TI-46 spoken digits, acoustic drone recordings, and musical MIDI files. Our results demonstrate comparable accuracy while achieving significant power and energy savings relative to existing LSM accelerators. Moreover, our design exhibits resilience in the presence of noise and neuron misfires. These findings highlight the potential of a memristor-based LSM architecture to rival purely CMOS-based LSM implementations, offering robust and energy-efficient neuromorphic computing capabilities with memristive SNNs.
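As a conceptual illustration only (not the authors' memristor/SPICE circuit), the sketch below shows the general LSM pattern the abstract describes: a fixed, randomly and sparsely connected spiking "liquid" whose state is summarized and fed to a single readout layer that is trained online. All sizes, constants, and the delta-rule readout update are illustrative assumptions.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) liquid with an online-trained readout.
# Illustrative sketch only: network sizes, time constants, and the delta-rule
# update are assumptions, not the paper's memristor-based implementation.

rng = np.random.default_rng(0)

N_IN, N_LIQ, N_OUT = 64, 200, 10                      # input, liquid, readout sizes (assumed)
W_in  = rng.normal(0, 0.5, (N_LIQ, N_IN))             # fixed input weights
W_liq = rng.normal(0, 0.1, (N_LIQ, N_LIQ))            # fixed recurrent "liquid" weights
W_liq *= rng.random((N_LIQ, N_LIQ)) < 0.1             # sparse connectivity (~10%)
W_out = np.zeros((N_OUT, N_LIQ))                      # only the readout is trained

def run_liquid(spike_frames, tau=0.9, v_th=1.0):
    """Drive the fixed reservoir with binary input spike frames; return its mean firing state."""
    v = np.zeros(N_LIQ)
    spikes = np.zeros(N_LIQ)
    state_sum = np.zeros(N_LIQ)
    for x in spike_frames:                            # x: binary vector of length N_IN
        v = tau * v + W_in @ x + W_liq @ spikes       # leaky integration of input + recurrence
        spikes = (v >= v_th).astype(float)            # threshold crossing -> spike
        v[spikes > 0] = 0.0                           # reset neurons that fired
        state_sum += spikes
    return state_sum / len(spike_frames)              # liquid state seen by the readout

def train_readout_online(spike_frames, label, lr=0.05):
    """Delta-rule update of the readout weights from one labeled example (online learning)."""
    global W_out
    s = run_liquid(spike_frames)
    y = W_out @ s
    target = np.eye(N_OUT)[label]
    W_out += lr * np.outer(target - y, s)             # in-situ-style weight update

def predict(spike_frames):
    return int(np.argmax(W_out @ run_liquid(spike_frames)))
```

Because only W_out changes during training, the liquid's connectivity can be fixed (or tuned once, as in the paper's liquid connectivity exploration) while learning reduces to a simple per-example readout update.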
Journal Introduction:
The IEEE Transactions on Nanotechnology is devoted to the publication of manuscripts of archival value in the general area of nanotechnology, which is rapidly emerging as one of the fastest growing and most promising new technological developments for the next generation and beyond.