Implementing the on-chip backpropagation learning algorithm on FPGA architecture
H. Vo
2017 International Conference on System Science and Engineering (ICSSE), July 2017
DOI: 10.1109/ICSSE.2017.8030932
Scaling of CMOS integrated-circuit technology lowers chip cost and raises processing performance in complex, reconfigurable applications, making VLSI architectures promising candidates for implementing neural network models. The backpropagation algorithm trains a multilayer perceptron with a high degree of parallelism, and such parallel computation is best suited to FPGA or ASIC implementation. An on-chip backpropagation learning design is proposed to implement a 2×2×1 neural network architecture on an FPGA. Simulation results show that the backpropagation learning algorithm converges in 3 epochs with an error target as small as 0.05, and the weights learned on the FPGA differ from those learned in Matlab by less than 2%. These results open the way to larger neural networks that communicate with other hardware architectures.
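The FPGA design itself is register-transfer-level hardware, but the learning rule it implements can be sketched in software. Below is a minimal Python sketch of gradient-descent backpropagation for the same 2×2×1 topology (two inputs, two hidden sigmoid neurons, one sigmoid output). The training set, learning rate, and epoch count here are illustrative assumptions, not values taken from the paper.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_2_2_1(data, epochs=2000, lr=0.5, seed=1):
    """Train a 2-2-1 MLP by online backpropagation.

    data: list of ((x1, x2), target) pairs with targets in [0, 1].
    Returns the hidden-layer weights and output-neuron weights.
    """
    rng = random.Random(seed)
    # Each neuron's weights are stored as [bias, w1, w2].
    w_h = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    w_o = [rng.uniform(-1, 1) for _ in range(3)]
    for _ in range(epochs):
        for (x1, x2), t in data:
            # Forward pass through hidden layer and output neuron.
            h = [sigmoid(w[0] + w[1] * x1 + w[2] * x2) for w in w_h]
            y = sigmoid(w_o[0] + w_o[1] * h[0] + w_o[2] * h[1])
            # Backward pass: output delta, then hidden deltas
            # (sigmoid derivative is y * (1 - y)).
            d_o = (t - y) * y * (1 - y)
            d_h = [d_o * w_o[i + 1] * h[i] * (1 - h[i]) for i in range(2)]
            # Weight updates (gradient ascent on -error, i.e. descent).
            w_o[0] += lr * d_o
            for i in range(2):
                w_o[i + 1] += lr * d_o * h[i]
                w_h[i][0] += lr * d_h[i]
                w_h[i][1] += lr * d_h[i] * x1
                w_h[i][2] += lr * d_h[i] * x2
    return w_h, w_o

def predict(w_h, w_o, x1, x2):
    h = [sigmoid(w[0] + w[1] * x1 + w[2] * x2) for w in w_h]
    return sigmoid(w_o[0] + w_o[1] * h[0] + w_o[2] * h[1])
```

A hardware version replaces the floating-point sigmoid and multipliers with fixed-point approximations, which is where the reported sub-2% deviation from Matlab training would arise.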