{"title":"Deep neural networks accelerators with focus on tensor processors","authors":"Hamidreza Bolhasani , Mohammad Marandinejad","doi":"10.1016/j.micpro.2023.105005","DOIUrl":null,"url":null,"abstract":"<div><p><span>The massive amount of data and the problem of processing them is one of the main challenges of the digital age, and the development of artificial intelligence and </span>machine learning<span> can be useful in solving this problem. Using deep neural networks<span> to improve the efficiency of these two areas is a good solution. So far, several architectures have been introduced for data processing with the benefit of deep neural networks, whose accuracy, efficiency, and computing power are different from each other. This article tries to review these architectures, their features, and their functions in a systematic way. According to the current research style, 24 articles (conference and research articles related to this topic) have been evaluated in the period of 2014–2022. In fact, the significant aspects of the selected articles are compared and at the end, the upcoming challenges and topics for future research are presented. The results show that the main parameters for proposing a new tensor processor include increasing speed and accuracy and reducing data processing time, reducing on-chip storage space, reducing DRAM access, reducing energy consumption, and achieving high efficiency.</span></span></p></div>","PeriodicalId":49815,"journal":{"name":"Microprocessors and Microsystems","volume":null,"pages":null},"PeriodicalIF":1.9000,"publicationDate":"2023-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Microprocessors and Microsystems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0141933123002508","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Abstract
The massive volume of data and the difficulty of processing it are among the main challenges of the digital age, and advances in artificial intelligence and machine learning can help address them. Deep neural networks are an effective way to improve efficiency in both areas. To date, several architectures with differing accuracy, efficiency, and computing power have been introduced for data processing with deep neural networks. This article reviews these architectures, their features, and their functions in a systematic way. Following the adopted review methodology, 24 articles (conference and journal papers related to this topic) published between 2014 and 2022 were evaluated. The significant aspects of the selected articles are compared, and the remaining challenges and topics for future research are presented at the end. The results show that the main objectives when proposing a new tensor processor include increasing speed and accuracy, reducing data processing time, reducing on-chip storage requirements, reducing DRAM accesses, reducing energy consumption, and achieving high efficiency.
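As a point of reference for the design objectives listed above (fewer DRAM accesses and less on-chip storage while keeping multiply-accumulate throughput high), the short Python sketch below shows how a tiled matrix multiply, the core kernel that tensor processors accelerate, reuses operand blocks held in an on-chip buffer so that each block is fetched from off-chip memory only a bounded number of times. This is a minimal illustrative sketch and not material from the reviewed paper; the tile size, the DRAM-load counter, and the function name tiled_matmul are assumptions made for demonstration.

```python
# Illustrative sketch only (not from the paper): a tiled matrix multiply in
# NumPy that mimics how a tensor processor keeps operand blocks in on-chip
# buffers to cut DRAM traffic. The tile size and the access counter are
# hypothetical choices for demonstration purposes.
import numpy as np


def tiled_matmul(a: np.ndarray, b: np.ndarray, tile: int = 32):
    """Compute a @ b block by block, counting simulated DRAM block transfers."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((m, n), dtype=a.dtype)
    dram_transfers = 0
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            # The output tile stays in the (simulated) on-chip accumulator.
            acc = np.zeros((min(tile, m - i), min(tile, n - j)), dtype=a.dtype)
            for p in range(0, k, tile):
                # Each operand tile is fetched from DRAM once per use.
                a_tile = a[i:i + tile, p:p + tile]
                b_tile = b[p:p + tile, j:j + tile]
                dram_transfers += 2
                acc += a_tile @ b_tile  # multiply-accumulate work done on-chip
            c[i:i + tile, j:j + tile] = acc
            dram_transfers += 1  # write the finished output tile back once
    return c, dram_transfers


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.standard_normal((128, 128), dtype=np.float32)
    b = rng.standard_normal((128, 128), dtype=np.float32)
    c, transfers = tiled_matmul(a, b, tile=32)
    assert np.allclose(c, a @ b, atol=1e-3)
    print(f"simulated DRAM block transfers: {transfers}")
```

Larger tiles increase data reuse per fetch but require more on-chip buffer space, which is exactly the speed/storage/energy trade-off the surveyed tensor-processor designs navigate.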
Journal Introduction
Microprocessors and Microsystems: Embedded Hardware Design (MICPRO) is a journal covering all design and architectural aspects related to embedded systems hardware. This includes different embedded system hardware platforms, ranging from custom hardware via reconfigurable systems and application-specific processors to general-purpose embedded processors. Special emphasis is put on novel complex embedded architectures, such as systems on chip (SoC), systems on a programmable/reconfigurable chip (SoPC), and multi-processor systems on a chip (MPSoC), as well as their memory and communication methods and structures, such as networks-on-chip (NoC).
Design automation of such systems, including methodologies, techniques, flows, and tools for their design, as well as novel designs of hardware components, falls within the scope of this journal. Novel cyber-physical applications that use embedded systems are also central to this journal. While software is not the main focus of this journal, methods of hardware/software co-design, as well as application restructuring and mapping to embedded hardware platforms, that consider the interplay between software and hardware components with an emphasis on hardware, are also within the journal's scope.