Title: Computation of Backpropagation Learning Algorithm Using Neuron Machine Architecture
Author: J. B. Ahn
DOI: 10.1109/CIMSIM.2013.13
Published: 2013-09-24, 2013 Fifth International Conference on Computational Intelligence, Modelling and Simulation
Citations: 5
Abstract
The neuron machine (NM) is a hardware architecture that can be used to design efficient neural network simulation systems. However, owing to its intrinsic unidirectional nature, the NM architecture does not support backpropagation (BP) learning algorithms. This paper proposes novel schemes for the NM architecture to support BP algorithms. Reverse-mapping memories, a synapse placement algorithm, and a memory structure called triple rotate memory can be used to share synaptic weights in both the feed-forward and error-BP stages without degrading computational performance. An NM system supporting a BP training algorithm was implemented on a field-programmable gate array board and successfully trained a neural network that classifies MNIST handwritten digits. The implemented system outperformed most chip-level or board-level systems based on other hardware architectures.
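The core difficulty the abstract alludes to can be seen in plain software: in BP, the same synaptic weight matrix is read in the forward direction during the feed-forward stage and in the transposed direction during the error-backpropagation stage. The sketch below is not the paper's hardware design; it is a minimal NumPy illustration (on a toy XOR task, with hypothetical names like `train_step`) of that dual access pattern, which the paper's reverse-mapping memories and triple rotate memory are designed to support in a unidirectional pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(W1, W2, x, t, lr=0.5):
    # Feed-forward stage: weights read in the forward direction.
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)
    # Error-backpropagation stage: the SAME weights (W2) are read
    # transposed -- the shared-weight access the NM schemes enable.
    delta_y = (y - t) * y * (1.0 - y)
    delta_h = (W2.T @ delta_y) * h * (1.0 - h)
    # Weight update from the backpropagated errors.
    W2 -= lr * np.outer(delta_y, h)
    W1 -= lr * np.outer(delta_h, x)
    return W1, W2, float(np.sum((y - t) ** 2))

# Toy XOR problem as a small stand-in for MNIST-scale training.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1 = rng.normal(0.0, 1.0, (4, 2))   # 4 hidden units
W2 = rng.normal(0.0, 1.0, (1, 4))

losses = []
for epoch in range(2000):
    total = 0.0
    for x, t in zip(X, T):
        W1, W2, loss = train_step(W1, W2, x, t)
        total += loss
    losses.append(total)
```

Because `W2` must be traversed both row-wise (forward) and column-wise (backward), a naive unidirectional memory layout would require duplicating the weights; the paper's contribution is avoiding that duplication without a performance penalty.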