{"title":"Variable precision, mixed fixed/floating point MAC unit for DNN accelerators","authors":"Ali Talebi, Morteza Mousazadeh","doi":"10.1109/IICM57986.2022.10152326","DOIUrl":null,"url":null,"abstract":"Artificial intelligence has become more popular than ever in past few years. Deep neural networks are being used in various use cases e.g., image processing, data analysis and etc. MAC operations are the core of DNNs thus making MAC units a very crucial element of any DNN accelerator. This paper presents a Variable precision, mixed fixed/floating point MAC unit capable of performing single precision floating-point MAC. Also, proposed MAC unit features additional modes for performing one 32-bit fixed-point MAC or two concurrent 16-bit fixed-point MACs or four concurrent 8-bit fixed-point MACs. Aside from high flexibility in number precision, proposed MAC unit uses recurring Karatsuba algorithm to implement higher bit-count multiplication only by using 8-bit multiplier and 8-bit adders. Proposed MAC unit has achieved 44.64 MOPS in 32-bit floating-point, 44.64 MOPS in 32-bit fixed-point, 89.29 MOPS in 16-bit fixed-point and 178.57 MOPS in 8-bit fixed-point on FPGA board ‘NEXYS 4 DDR’ featuring XILINX ‘xc7al00tcsg324-1’ FPGA chip.","PeriodicalId":131546,"journal":{"name":"2022 Iranian International Conference on Microelectronics (IICM)","volume":"172 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Iranian International Conference on Microelectronics (IICM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IICM57986.2022.10152326","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Artificial intelligence has become more popular than ever in the past few years. Deep neural networks (DNNs) are used in a wide range of applications, e.g., image processing and data analysis. Multiply-accumulate (MAC) operations are the computational core of DNNs, making MAC units a crucial element of any DNN accelerator. This paper presents a variable-precision, mixed fixed/floating-point MAC unit capable of performing single-precision floating-point MAC. The proposed MAC unit also features additional modes for performing one 32-bit fixed-point MAC, two concurrent 16-bit fixed-point MACs, or four concurrent 8-bit fixed-point MACs. Besides this flexibility in number precision, the proposed MAC unit uses the recursive Karatsuba algorithm to implement higher-bit-count multiplications using only an 8-bit multiplier and 8-bit adders. On the 'NEXYS 4 DDR' FPGA board, which features a Xilinx 'xc7a100tcsg324-1' FPGA chip, the proposed MAC unit achieves 44.64 MOPS in 32-bit floating-point mode, 44.64 MOPS in 32-bit fixed-point mode, 89.29 MOPS in 16-bit fixed-point mode, and 178.57 MOPS in 8-bit fixed-point mode.
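The recursive Karatsuba scheme mentioned in the abstract — building wider multiplications out of a single 8-bit multiplier and 8-bit adders — can be illustrated with a minimal software sketch. The paper itself gives no code, so the function below is an assumption-laden model, not the authors' implementation: it shows only the arithmetic decomposition (each `width`-bit product reduced to three `width/2`-bit products until the 8-bit base case), while the hardware concerns of the actual design (carry handling for the half-sum operands, pipelining, operand packing for the concurrent 16-bit and 8-bit modes) are not modeled.

```python
def karatsuba(x, y, width=32):
    """Multiply two unsigned `width`-bit integers using only 8-bit base
    multiplications, via the recursive Karatsuba decomposition."""
    if width <= 8:
        # Base case: in hardware this would map to the single 8-bit multiplier.
        return x * y
    half = width // 2
    mask = (1 << half) - 1
    x_lo, x_hi = x & mask, x >> half
    y_lo, y_hi = y & mask, y >> half
    z0 = karatsuba(x_lo, y_lo, half)   # low  half x low  half
    z2 = karatsuba(x_hi, y_hi, half)   # high half x high half
    # Third product replaces the two cross terms: z1 = x_lo*y_hi + x_hi*y_lo.
    # Note (x_lo + x_hi) can overflow `half` bits by one carry bit; Python's
    # arbitrary-precision ints absorb this, but real hardware needs extra carry logic.
    z1 = karatsuba(x_lo + x_hi, y_lo + y_hi, half) - z0 - z2
    return (z2 << width) + (z1 << half) + z0

# Usage: a 32-bit product assembled from 8-bit base multiplications.
product = karatsuba(0xDEADBEEF, 0x12345678, 32)
```

The design choice Karatsuba embodies is trading multiplications for additions: a naive split needs four half-width products per level, while Karatsuba needs three, at the cost of extra adders — which matches the paper's goal of reusing one small multiplier plus 8-bit adders.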