A New Hardware Architecture for FPGA Implementation of Feed Forward Neural Networks

V.A Sumayyabeevi, Jaimy James Poovely, N. Aswathy, S. Chinnu
{"title":"一种用于FPGA实现前馈神经网络的新硬件架构","authors":"V.A Sumayyabeevi, Jaimy James Poovely, N. Aswathy, S. Chinnu","doi":"10.1109/ACCESS51619.2021.9563342","DOIUrl":null,"url":null,"abstract":"Artificial neural networks are very popular and fast-growing machine learning algorithms today. There exist a large number of ways for implementing ANN into reality. Generally, the main two techniques are neuromorphic programming and neural networks. This paper presents an overview of such methods. Nowadays machine learning chips are available with a high level of parallel designs, but deep neural network requires flexible and efficient hardware structure that can be perfect for any type of neural networks. Also, varieties of hardware topologies are available for FPGA implementation. This paper explains those architectural variations and suggests a new topology. The proposed architecture adopts the systolic structure and applies to any feed forward neural networks such as Multi-Layer Perceptron (MLP), Auto Encoder (AE) and, Logic Regression (LR). Unlike other hardware neural network structures, this architecture implements a single activation function block and the largest layer only. This paper also includes the implementation of a feed-forward neural network for digit recognition (0 to 9) in the Zynq-7000 board with MNIST as the dataset. Different activation functions and different parameters of each activation function are used for the network. Changes and improvements are mentioned in this paper based on Accuracy, Operating frequency and, Resource usage. Logistic Sigmoidal functions can achieve more accuracy and performance as compared with others.","PeriodicalId":409648,"journal":{"name":"2021 2nd International Conference on Advances in Computing, Communication, Embedded and Secure Systems (ACCESS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A New Hardware Architecture for FPGA Implementation of Feed Forward Neural Networks\",\"authors\":\"V.A Sumayyabeevi, Jaimy James Poovely, N. Aswathy, S. Chinnu\",\"doi\":\"10.1109/ACCESS51619.2021.9563342\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial neural networks are very popular and fast-growing machine learning algorithms today. There exist a large number of ways for implementing ANN into reality. Generally, the main two techniques are neuromorphic programming and neural networks. This paper presents an overview of such methods. Nowadays machine learning chips are available with a high level of parallel designs, but deep neural network requires flexible and efficient hardware structure that can be perfect for any type of neural networks. Also, varieties of hardware topologies are available for FPGA implementation. This paper explains those architectural variations and suggests a new topology. The proposed architecture adopts the systolic structure and applies to any feed forward neural networks such as Multi-Layer Perceptron (MLP), Auto Encoder (AE) and, Logic Regression (LR). Unlike other hardware neural network structures, this architecture implements a single activation function block and the largest layer only. This paper also includes the implementation of a feed-forward neural network for digit recognition (0 to 9) in the Zynq-7000 board with MNIST as the dataset. 
Different activation functions and different parameters of each activation function are used for the network. Changes and improvements are mentioned in this paper based on Accuracy, Operating frequency and, Resource usage. Logistic Sigmoidal functions can achieve more accuracy and performance as compared with others.\",\"PeriodicalId\":409648,\"journal\":{\"name\":\"2021 2nd International Conference on Advances in Computing, Communication, Embedded and Secure Systems (ACCESS)\",\"volume\":\"10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-09-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 2nd International Conference on Advances in Computing, Communication, Embedded and Secure Systems (ACCESS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ACCESS51619.2021.9563342\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 2nd International Conference on Advances in Computing, Communication, Embedded and Secure Systems (ACCESS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ACCESS51619.2021.9563342","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Artificial neural networks are among the most popular and fastest-growing machine learning algorithms today, and there are many ways to implement an ANN in practice. Broadly, the two main techniques are neuromorphic programming and neural networks, and this paper presents an overview of such methods. Machine-learning chips with highly parallel designs are available today, but deep neural networks require a flexible and efficient hardware structure that suits any type of network. A variety of hardware topologies is also available for FPGA implementation. This paper explains those architectural variations and suggests a new topology. The proposed architecture adopts a systolic structure and applies to any feed-forward neural network, such as the Multi-Layer Perceptron (MLP), Autoencoder (AE), and Logistic Regression (LR). Unlike other hardware neural network structures, this architecture implements only a single activation function block and the hardware of the largest layer. The paper also implements a feed-forward neural network for digit recognition (0 to 9) on a Zynq-7000 board, using MNIST as the dataset. Different activation functions, each with different parameters, are evaluated for the network, and results are compared in terms of accuracy, operating frequency, and resource usage. The logistic sigmoid function achieves higher accuracy and better performance than the alternatives.
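The abstract's key idea, as we read it, is that one physical compute stage sized to the largest layer, together with one shared activation block, can be time-multiplexed across all layers of a feed-forward network. The sketch below is a minimal behavioral model of that reuse idea in NumPy, not the authors' RTL: the 784-32-10 topology, the random weights, and all function names are our own illustrative assumptions, not taken from the paper.

```python
# Behavioral sketch of the layer-reuse idea described in the abstract:
# one compute stage, sized to the largest layer, is reused for every
# layer, and a single activation block serves the whole network.
import numpy as np

def sigmoid(x):
    # Single shared activation block; the paper reports the logistic
    # sigmoid giving the best accuracy among the functions it tried.
    return 1.0 / (1.0 + np.exp(-x))

def infer(x, weights, biases, act=sigmoid):
    """Run inference by passing each layer through the same stage.

    weights/biases hold per-layer parameters that would be streamed
    into the one physical MAC array on successive passes (our
    hypothetical framing of the architecture).
    """
    a = x
    for W, b in zip(weights, biases):
        z = W @ a + b   # one matrix-vector pass, modeling the systolic array
        a = act(z)      # the shared activation block
    return a

# Illustrative MNIST-style topology (784-32-10); sizes are assumptions.
rng = np.random.default_rng(0)
ws = [rng.standard_normal((32, 784)) * 0.1,
      rng.standard_normal((10, 32)) * 0.1]
bs = [np.zeros(32), np.zeros(10)]

# In hardware, the MAC array would be sized to the widest layer output.
mac_units = max(W.shape[0] for W in ws)
digit = int(np.argmax(infer(rng.random(784), ws, bs)))
print(f"MAC array width: {mac_units}, predicted digit: {digit}")
```

On an actual FPGA, the matrix-vector product would map to a systolic MAC array and the sigmoid is commonly realized as a lookup table or piecewise-linear approximation in fixed point; the abstract does not say which of these the authors chose, so this sketch only models the dataflow.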