Comparative Study of Hardware Accelerated Convolution Neural Network on PYNQ Board
Alaa M. Salman, Ahmed S. Tulan, Rana Y. Mohamed, Michael H. Zakhari, H. Mostafa
2020 2nd Novel Intelligent and Leading Emerging Sciences Conference (NILES), published 2020-10-24
DOI: 10.1109/NILES50944.2020.9257899
Citations: 1
Abstract
In recent years, convolutional neural networks (CNNs) have seen widespread use and lie at the heart of many intelligent systems. Advances in electronic design automation (EDA) tools and in hardware development boards such as Python Productivity for Zynq (PYNQ) have significantly shortened CNN development time. However, this short time-to-market comes at the cost of implementation area, performance, and power consumption, and the energy demands of CNNs have risen sharply. In this work, the authors conduct a comprehensive study of the power consumption of a hardware-accelerated CNN implemented both with a modern High-Level Synthesis (HLS) EDA flow and at the lower-level design abstraction of Register Transfer Level (RTL). Both implementations target a modern Xilinx development board (the PYNQ). The results show that a modern HLS flow is not the best environment for low power consumption: the HLS implementation consumes six times the power of the RTL one. It is concluded that the newer EDA flow falls short of delivering highly efficient CNNs, but it can deliver adequate results within a very short development time.
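The comparison centers on how the same CNN convolution layer is expressed for an HLS flow versus hand-written RTL. As a rough illustration of the HLS side, the sketch below shows the kind of plain C++ convolution kernel such a flow consumes; the layer dimensions, function name, and pragma placement are illustrative assumptions and are not taken from the paper.

```cpp
#include <cassert>

// Illustrative feature-map and kernel sizes (assumed, not from the paper).
constexpr int IN  = 6;           // input feature-map side length
constexpr int K   = 3;           // convolution kernel side length
constexpr int OUT = IN - K + 1;  // "valid" convolution output side length

// A single-channel 2D convolution written in the synthesizable C++ subset
// that HLS tools accept. In a real HLS project, a directive such as
// "#pragma HLS PIPELINE" would typically be placed on the inner loops.
void conv2d(const float in[IN][IN], const float w[K][K],
            float out[OUT][OUT]) {
    for (int r = 0; r < OUT; ++r) {
        for (int c = 0; c < OUT; ++c) {
            float acc = 0.0f;
            // Multiply-accumulate over the K x K window.
            for (int i = 0; i < K; ++i)
                for (int j = 0; j < K; ++j)
                    acc += in[r + i][c + j] * w[i][j];
            out[r][c] = acc;
        }
    }
}
```

The HLS tool maps loops like these onto pipelined hardware automatically, which is what makes the flow fast to develop but, per the paper's measurements, less power-efficient than an equivalent hand-crafted RTL design.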