A Survey of Neural Network Hardware Accelerators in Machine Learning

F. Jasem, Manar AlSaraf
{"title":"A Survey of Neural Network Hardware Accelerators in Machine Learning","authors":"F. Jasem, Manar AlSaraf","doi":"10.5121/mlaij.2021.8402","DOIUrl":null,"url":null,"abstract":"The use of Machine Learning in Artificial Intelligence is the inspiration that shaped technology as it is today. Machine Learning has the power to greatly simplify our lives. Improvement in speech recognition and language understanding help the community interact more naturally with technology. The popularity of machine learning opens up the opportunities for optimizing the design of computing platforms using welldefined hardware accelerators. In the upcoming few years, cameras will be utilised as sensors for several applications. For ease of use and privacy restrictions, the requested image processing should be limited to a local embedded computer platform and with a high accuracy. Furthermore, less energy should be consumed. Dedicated acceleration of Convolutional Neural Networks can achieve these targets with high flexibility to perform multiple vision tasks. However, due to the exponential growth in technology constraints (especially in terms of energy) which could lead to heterogeneous multicores, and increasing number of defects, the strategy of defect-tolerant accelerators for heterogeneous multi-cores may become a main micro-architecture research issue. The up to date accelerators used still face some performance issues such as memory limitations, bandwidth, speed etc. 
This literature summarizes (in terms of a survey) recent work of accelerators including their advantages and disadvantages to make it easier for developers with neural network interests to further improve what has already been established.","PeriodicalId":347528,"journal":{"name":"Machine Learning and Applications: An International Journal","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine Learning and Applications: An International Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5121/mlaij.2021.8402","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The use of Machine Learning in Artificial Intelligence is the inspiration that shaped technology as it is today. Machine Learning has the power to greatly simplify our lives. Improvements in speech recognition and language understanding help people interact more naturally with technology. The popularity of machine learning opens up opportunities for optimizing the design of computing platforms using well-defined hardware accelerators. In the coming years, cameras will be utilised as sensors for several applications. For ease of use and to satisfy privacy restrictions, the requested image processing should be limited to a local embedded computing platform and should achieve high accuracy. Furthermore, it should consume less energy. Dedicated acceleration of Convolutional Neural Networks can achieve these targets with high flexibility to perform multiple vision tasks. However, due to the exponential growth in technology constraints (especially in terms of energy), which could lead to heterogeneous multicores, and the increasing number of defects, defect-tolerant accelerators for heterogeneous multi-cores may become a main micro-architecture research issue. State-of-the-art accelerators still face performance issues such as memory limitations, bandwidth, and speed. This survey summarizes recent work on accelerators, including their advantages and disadvantages, to make it easier for developers with an interest in neural networks to further improve what has already been established.