An FPGA-SoC based Hardware Acceleration of Convolutional Neural Networks

Soulef Bouaafia, Seifeddine Messaoud, Randa Khemiri, F. Sayadi
{"title":"基于FPGA-SoC的卷积神经网络硬件加速","authors":"Soulef Bouaafia, Seifeddine Messaoud, Randa Khemiri, F. Sayadi","doi":"10.1109/SETIT54465.2022.9875430","DOIUrl":null,"url":null,"abstract":"Deep learning has evolved as a discipline that has demonstrated its capacity and usefulness in tackling complicated learning issues as a result of recent improvements in digital technology and the availability of authentic data. Convolutional neural networks (CNNs) in particular have demonstrated their usefulness in image processing and computer vision applications. They do, however, need heavy CPU operations and memory bandwidth, which prevents general-purpose CPUs from attaining desirable performance levels. To boost CNN throughput, hardware accelerators such as application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), and graphics processing units (GPUs) have been deployed. FPGAs, in particular, have lately been used to accelerate the development of deep learning networks due to their ability to optimize parallelism and power efficiency. Based on hardware-software architecture, this research provides a CNN acceleration model for video compression applications. Vivado High Level Synthesis is used to accelerate the CNN model in order to develop Intellectual Property (IP) cores.","PeriodicalId":126155,"journal":{"name":"2022 IEEE 9th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications (SETIT)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An FPGA-SoC based Hardware Acceleration of Convolutional Neural Networks\",\"authors\":\"Soulef Bouaafia, Seifeddine Messaoud, Randa Khemiri, F. Sayadi\",\"doi\":\"10.1109/SETIT54465.2022.9875430\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning has evolved as a discipline that has demonstrated its capacity and usefulness in tackling complicated learning issues as a result of recent improvements in digital technology and the availability of authentic data. Convolutional neural networks (CNNs) in particular have demonstrated their usefulness in image processing and computer vision applications. They do, however, need heavy CPU operations and memory bandwidth, which prevents general-purpose CPUs from attaining desirable performance levels. To boost CNN throughput, hardware accelerators such as application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), and graphics processing units (GPUs) have been deployed. FPGAs, in particular, have lately been used to accelerate the development of deep learning networks due to their ability to optimize parallelism and power efficiency. Based on hardware-software architecture, this research provides a CNN acceleration model for video compression applications. 
Vivado High Level Synthesis is used to accelerate the CNN model in order to develop Intellectual Property (IP) cores.\",\"PeriodicalId\":126155,\"journal\":{\"name\":\"2022 IEEE 9th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications (SETIT)\",\"volume\":\"34 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 9th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications (SETIT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SETIT54465.2022.9875430\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 9th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications (SETIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SETIT54465.2022.9875430","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
Driven by recent advances in digital technology and the availability of authentic data, deep learning has demonstrated its capacity and usefulness in tackling complicated learning problems. Convolutional neural networks (CNNs) in particular have proven useful in image processing and computer vision applications. However, they demand heavy computation and memory bandwidth, which prevents general-purpose CPUs from reaching the desired performance levels. To boost CNN throughput, hardware accelerators such as application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and graphics processing units (GPUs) have been deployed. FPGAs in particular have recently been used to accelerate deep learning networks because of their ability to exploit parallelism and power efficiency. Based on a hardware-software architecture, this research presents a CNN acceleration model for video compression applications. Vivado High Level Synthesis is used to accelerate the CNN model and to generate Intellectual Property (IP) cores.
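
As a rough illustration of the flow the abstract describes, the following is a minimal Vivado HLS C++ sketch of a single 2-D convolution kernel that the tool could synthesize into an IP core. It is not the authors' actual design; the dimensions, port names, and pragma choices are illustrative assumptions only.

// Minimal Vivado HLS C++ sketch of a 2-D convolution layer (illustrative only).
// All sizes, port names, and pragmas are assumptions, not the paper's IP core.

#define IN_H  32            // input feature-map height (assumed)
#define IN_W  32            // input feature-map width (assumed)
#define K     3             // convolution kernel size (assumed)
#define OUT_H (IN_H - K + 1)
#define OUT_W (IN_W - K + 1)

// Top-level function: Vivado HLS synthesizes this into an IP core whose AXI
// interfaces can be driven by the ARM processing system on a Zynq FPGA-SoC.
void conv2d(const float in[IN_H][IN_W],
            const float weights[K][K],
            float out[OUT_H][OUT_W]) {
#pragma HLS INTERFACE m_axi     port=in      offset=slave bundle=gmem
#pragma HLS INTERFACE m_axi     port=weights offset=slave bundle=gmem
#pragma HLS INTERFACE m_axi     port=out     offset=slave bundle=gmem
#pragma HLS INTERFACE s_axilite port=return

    // Copy the small kernel into registers so all taps can be read in parallel.
    float w_local[K][K];
#pragma HLS ARRAY_PARTITION variable=w_local complete dim=0
    for (int i = 0; i < K; i++)
        for (int j = 0; j < K; j++)
            w_local[i][j] = weights[i][j];

ROW:
    for (int r = 0; r < OUT_H; r++) {
    COL:
        for (int c = 0; c < OUT_W; c++) {
#pragma HLS PIPELINE II=1
            float acc = 0.0f;
            // The inner multiply-accumulate loops are unrolled under the
            // PIPELINE pragma, exposing the fine-grained parallelism of the
            // FPGA fabric.
            for (int i = 0; i < K; i++)
                for (int j = 0; j < K; j++)
                    acc += in[r + i][c + j] * w_local[i][j];
            out[r][c] = acc;
        }
    }
}

In a complete hardware-software design, a core like this would be exported from Vivado HLS, connected to the processing system over AXI in a block design, and fed by software running on the embedded ARM cores that handles data movement and control.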