{"title":"A Neuromorphic Design Using Chaotic Mott Memristor with Relaxation Oscillation","authors":"Bonan Yan, Xiong Cao, Hai Li","doi":"10.1145/3195970.3195977","DOIUrl":null,"url":null,"abstract":"The recent proposed nanoscale Mott memristor features negative differential resistance and chaotic dynamics. This work proposes a novel neuromorphic computing system that utilizes Mott memristors to simplify peripheral circuitry. According to the analytic description of chaotic dynamics and relaxation oscillation, we carefully tune the working point of Mott memristors to balance the chaotic behavior weighing testing accuracy and training efficiency. Compared with conventional designs, the proposed design accelerates the training by 1.893× averagely and saves 27.68% and 43.32% power consumption with 36.67% and 26.75% less area for single-layer and two-layer perceptrons, respectively.","PeriodicalId":6491,"journal":{"name":"2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC)","volume":"55 1","pages":"1-6"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3195970.3195977","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
The recently proposed nanoscale Mott memristor features negative differential resistance and chaotic dynamics. This work proposes a novel neuromorphic computing system that utilizes Mott memristors to simplify the peripheral circuitry. Based on the analytic description of the chaotic dynamics and relaxation oscillation, we carefully tune the working point of the Mott memristors to balance the chaotic behavior, weighing testing accuracy against training efficiency. Compared with conventional designs, the proposed design accelerates training by 1.893× on average and saves 27.68% and 43.32% in power consumption with 36.67% and 26.75% less area for single-layer and two-layer perceptrons, respectively.
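The relaxation oscillation mentioned above can be illustrated with a minimal sketch that is not the paper's model: it assumes a generic threshold-switching Mott device (NbO2-like) in parallel with a capacitor, charged through a series resistor from a DC source. When the capacitor voltage exceeds a threshold V_TH the device undergoes an insulator-to-metal transition and rapidly discharges the capacitor; below a hold voltage V_HOLD it returns to the insulating state, and the cycle repeats. All parameter values here are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical circuit parameters (illustrative only, not from the paper)
V_DC, R_S, C = 1.2, 10e3, 1e-9   # DC supply (V), series resistor (ohm), capacitor (F)
R_OFF, R_ON = 100e3, 1e3         # insulating / metallic device resistance (ohm)
V_TH, V_HOLD = 0.8, 0.3          # switch-on and hold-off thresholds (V)
DT, STEPS = 1e-8, 20000          # Euler time step (s) and number of steps

def simulate():
    """Forward-Euler simulation of a threshold-switching relaxation oscillator."""
    v, metallic = 0.0, False
    trace = []
    for _ in range(STEPS):
        r_dev = R_ON if metallic else R_OFF
        # KCL at the capacitor node: charging current minus device current
        dv = ((V_DC - v) / R_S - v / r_dev) / C
        v += dv * DT
        if not metallic and v >= V_TH:
            metallic = True       # insulator-to-metal transition: fast discharge
        elif metallic and v <= V_HOLD:
            metallic = False      # metal-to-insulator transition: slow recharge
        trace.append(v)
    return np.array(trace)

trace = simulate()
print(f"oscillation range: {trace.min():.3f} V to {trace.max():.3f} V")
```

The oscillation exists because both resting points are unstable: in the insulating state the capacitor settles toward V_DC·R_OFF/(R_S+R_OFF) ≈ 1.09 V, above V_TH, while in the metallic state it settles toward ≈ 0.11 V, below V_HOLD, so the device can never stay in either state.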