A Hardware Accelerator for the Inference of a Convolutional Neural Network
Edwin González, Walter D. Villamizar Luna, Carlos Augusto Fajardo Ariza
Ciencia e Ingeniería Neogranadina, 2019. DOI: 10.18359/rcin.4194
Abstract
Convolutional Neural Networks (CNNs) are becoming increasingly popular in deep learning applications, e.g., image classification, speech recognition, and medicine, to name a few. However, CNN inference is computationally intensive and demands a large amount of memory. In this work, a CNN inference hardware accelerator is proposed and implemented in a co-processing scheme. The aim is to reduce hardware resource usage while achieving the best possible throughput. The design was implemented on the Digilent Arty Z7-20 development board, which is based on the Xilinx Zynq-7000 System on Chip (SoC). Our implementation achieved an accuracy of … on the MNIST database using only a 12-bit fixed-point format. The results show that the co-processing scheme, operating at a conservative clock of 100 MHz, can classify around 441 images per second, which is about 17% faster than a 650 MHz software implementation. A direct comparison of our results against other implementations based on Field-Programmable Gate Arrays (FPGAs) is difficult, because those implementations differ from ours in architecture and scope. However, comparisons of logic resource usage and accuracy suggest that our design compares favorably with previous work.
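As a rough illustration of the 12-bit fixed-point format the accelerator relies on, the sketch below quantizes a floating-point CNN weight into a signed 12-bit value and back. The Q4.8 split (1 sign bit, 3 integer bits, 8 fractional bits) is an assumption for illustration only; the abstract does not specify how the 12 bits are partitioned. The final lines also sanity-check the reported speedup: 441 images per second at roughly 1.17× the software baseline implies the 650 MHz software implementation classified about 377 images per second.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical Q4.8 split of the paper's 12-bit fixed-point format:
 * 1 sign bit + 3 integer bits + 8 fractional bits. */
#define FRAC_BITS 8
#define Q_MIN (-(1 << 11))      /* -2048: smallest signed 12-bit value */
#define Q_MAX ((1 << 11) - 1)   /*  2047: largest signed 12-bit value  */

/* Quantize a float to a signed 12-bit fixed-point value, with
 * round-to-nearest and saturation at the 12-bit range limits. */
static int16_t to_fixed12(float x) {
    long v = (long)(x * (1 << FRAC_BITS) + (x >= 0.0f ? 0.5f : -0.5f));
    if (v < Q_MIN) v = Q_MIN;
    if (v > Q_MAX) v = Q_MAX;
    return (int16_t)v;
}

/* Recover the approximate real value from its 12-bit representation. */
static float from_fixed12(int16_t q) {
    return (float)q / (1 << FRAC_BITS);
}

int main(void) {
    float w = 0.3271f;  /* example CNN weight, chosen arbitrarily */
    int16_t q = to_fixed12(w);
    printf("weight %.4f -> q = %d -> %.4f\n", w, q, from_fixed12(q));

    /* Sanity check of the reported throughput: the accelerator classifies
     * 441 images/s and is about 17% faster than the software baseline,
     * so the baseline ran at roughly 441 / 1.17 ~ 377 images/s. */
    printf("implied software rate: %.0f images/s\n", 441.0 / 1.17);
    return 0;
}
```

Narrow fixed-point arithmetic of this kind is the usual reason such accelerators save FPGA logic and memory relative to floating point: each multiply-accumulate becomes a small integer operation that maps directly onto the Zynq-7000 fabric.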