A New Hardware Architecture for FPGA Implementation of Feed Forward Neural Networks

V.A Sumayyabeevi, Jaimy James Poovely, N. Aswathy, S. Chinnu

2021 2nd International Conference on Advances in Computing, Communication, Embedded and Secure Systems (ACCESS), 2 September 2021. DOI: 10.1109/ACCESS51619.2021.9563342
Artificial neural networks are among the most popular and fastest-growing machine learning algorithms today, and there are many ways to realize an ANN in hardware; broadly, the two main techniques are neuromorphic programming and conventional neural-network implementations. This paper presents an overview of such methods. Machine learning chips with highly parallel designs are now available, but deep neural networks require a flexible, efficient hardware structure that suits any type of network. A variety of hardware topologies also exist for FPGA implementation; this paper explains those architectural variations and proposes a new topology. The proposed architecture adopts a systolic structure and applies to any feed-forward neural network, such as the Multi-Layer Perceptron (MLP), Auto Encoder (AE), and Logistic Regression (LR). Unlike other hardware neural network structures, it implements only a single activation-function block and hardware for the largest layer, which is then reused across layers. The paper also implements a feed-forward network for digit recognition (0 to 9) on a Zynq-7000 board using the MNIST dataset. Different activation functions, and different parameter settings for each, are evaluated, and the results are compared in terms of accuracy, operating frequency, and resource usage. The logistic sigmoid function achieves better accuracy and performance than the alternatives.
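The abstract describes the resource-sharing idea only in prose. As a rough behavioral sketch of that idea, the C++ model below provisions one MAC engine sized for the largest layer plus a single shared sigmoid block, and reuses both for every layer of an MLP. This is a minimal illustration under stated assumptions, not the paper's design: the actual architecture is systolic (and presumably fixed-point), whereas this model runs sequentially in floating point, and all names here (MAX_WIDTH, layer_forward, mlp_forward) are illustrative.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hardware is provisioned for the widest layer only and reused for the
// rest (assumed bound for illustration; the paper would size this from
// the actual network).
constexpr std::size_t MAX_WIDTH = 128;

// The single shared activation block: logistic sigmoid, the variant the
// paper reports as the most accurate.
float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

// One reusable layer engine: out = sigmoid(W * in + b) for any layer that
// fits in MAX_WIDTH. In a systolic realization each i-iteration would map
// to one processing element; here they run sequentially for clarity.
void layer_forward(const std::vector<std::vector<float>>& W,
                   const std::vector<float>& b,
                   const std::vector<float>& in,
                   std::vector<float>& out) {
    const std::size_t n_out = W.size();
    const std::size_t n_in = in.size();
    assert(n_out <= MAX_WIDTH && n_in <= MAX_WIDTH);
    out.assign(n_out, 0.0f);
    for (std::size_t i = 0; i < n_out; ++i) {
        float acc = b[i];
        for (std::size_t j = 0; j < n_in; ++j)
            acc += W[i][j] * in[j];  // MAC: one multiply-accumulate per weight
        out[i] = sigmoid(acc);       // every neuron routes through the one shared block
    }
}

// A full MLP forward pass invokes the same engine once per layer, so only
// one physical layer's worth of MAC resources is ever instantiated.
std::vector<float> mlp_forward(
    const std::vector<std::vector<std::vector<float>>>& weights,
    const std::vector<std::vector<float>>& biases,
    std::vector<float> x) {
    std::vector<float> y;
    for (std::size_t l = 0; l < weights.size(); ++l) {
        layer_forward(weights[l], biases[l], x, y);
        x = y;  // layer l's output feeds layer l+1 through the same hardware
    }
    return x;
}
```

For the MNIST experiment the paper describes, the input layer would take the 784 pixels of a 28x28 image and the final layer would have 10 outputs (digits 0 to 9); swapping sigmoid for another function in the shared block is how the activation-function comparison would be reproduced in this model.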