Multilayer perceptron network with integrated training algorithm in FPGA
Alvaro Narciso Perez-Garcia, Gerardo Marcos Tornez-Xavier, L. M. Flores-Nava, F. Gómez-Castañeda, J. Moreno-Cadenas
2014 11th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), pp. 1-6, published 2014-12-08
DOI: 10.1109/ICEEE.2014.6978300
Citations: 7
Abstract
In this manuscript we present an FPGA (Field-Programmable Gate Array) implementation of a Multilayer Perceptron (MLP) artificial neural network, including an integrated Back-Propagation training method based on gradient descent. The network has two reconfigurable hidden layers, adjustable parameters (number of epochs and learning rate), and batch learning. The proposed architecture aims to reduce the number of logic elements used, so serial processing is employed. To test the performance of the trained network, a nonlinear function was approximated, with satisfactory results.
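The abstract describes batch gradient-descent back-propagation for an MLP with two hidden layers. As a point of reference only, the sketch below is a minimal software model of that training scheme; it is not the authors' serial FPGA architecture, and the sigmoid activations, layer widths, target function, and hyperparameter values are all assumptions made for illustration.

```python
# Minimal software sketch of the training scheme the abstract describes:
# a two-hidden-layer MLP trained with batch (full-dataset) gradient-descent
# back-propagation to approximate a nonlinear function. Illustrative only;
# activations, layer sizes, and the target function are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Nonlinear target function to approximate (assumed for the example),
# scaled into (0, 1) to match the sigmoid output range.
X = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
Y = 0.5 * np.sin(np.pi * X) + 0.5

sizes = [1, 8, 8, 1]  # input, two hidden layers, output
W = [rng.normal(0.0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros((1, n)) for n in sizes[1:]]

lr, epochs = 0.5, 5000  # adjustable parameters, as in the paper
for _ in range(epochs):
    # Forward pass, keeping all activations for back-propagation.
    acts = [X]
    for Wl, bl in zip(W, b):
        acts.append(sigmoid(acts[-1] @ Wl + bl))

    # Backward pass: batch gradient of the mean-squared error.
    delta = (acts[-1] - Y) * acts[-1] * (1.0 - acts[-1])
    for l in range(len(W) - 1, -1, -1):
        gW = acts[l].T @ delta / len(X)
        gb = delta.mean(axis=0, keepdims=True)
        if l > 0:
            # Propagate the error before this layer's weights are updated.
            delta = (delta @ W[l].T) * acts[l] * (1.0 - acts[l])
        W[l] -= lr * gW
        b[l] -= lr * gb

# Evaluate the trained network on the training inputs.
out = X
for Wl, bl in zip(W, b):
    out = sigmoid(out @ Wl + bl)
print("final MSE:", float(np.mean((out - Y) ** 2)))
```

Note that in batch learning the weights are updated once per epoch from gradients accumulated over the whole training set, which fits naturally with the serial, resource-saving processing the proposed architecture targets.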