{"title":"Lightweight FPGA acceleration framework for structurally tailored multi-version MobileNetV1","authors":"XuMing Lu, JiaWei Zhang, LuoJie Zhu, XianYang Tan","doi":"10.1016/j.vlsi.2025.102383","DOIUrl":null,"url":null,"abstract":"<div><div>Convolutional neural networks (CNNs) have significantly enhanced image recognition performance through effective feature extraction and weight sharing, establishing themselves as a pivotal research area in computer vision. Despite these advances, CNNs demand substantial computational resources, posing challenges for deployment on resource-constrained embedded devices. Consequently, lightweight CNN models, such as MobileNet, have been developed to optimize computational efficiency. However, these models still necessitate accelerators to achieve optimal performance. Field-programmable gate arrays (FPGAs) present a viable solution for accelerating lightweight CNN models, thanks to their inherent capabilities for high parallelism, superior energy efficiency compared to traditional CPUs or GPUs, and reconfigurability, which adapts well to evolving network architectures. Nevertheless, compact FPGAs are limited by their on-chip logic resources. This limitation, coupled with the requirement to support multiple pruned versions of MobileNet networks due to advancements in model structure pruning, complicates the FPGA design process and escalates the resource allocation and associated costs. To address this issue, we propose a master-slave architecture for the MobileNetV1 computing framework, where the master module manages task scheduling and resource allocation, while slave modules execute the actual convolutional computations. This framework employs a dynamic configuration method, programming execution parameters for each network layer into the FPGA, allowing adaptability and optimization of resource usage. The proposed design was validated on the Altera De2-115 FPGA evaluation board using the MobileNet-V1-0.5-160 model. 
Experimental results demonstrated that, when implemented on the Altera De2-115 FPGA board, the recognition speed of our optimized MobileNetV1 model could reach 68.9 frames per second (FPS) with an 8-bit data width and a clock speed of 25 MHz, utilizing only 38K logic units—an efficient performance benchmark compared to previous FPGA implementations.</div></div>","PeriodicalId":54973,"journal":{"name":"Integration-The Vlsi Journal","volume":"103 ","pages":"Article 102383"},"PeriodicalIF":2.2000,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Integration-The Vlsi Journal","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167926025000409","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Abstract
Convolutional neural networks (CNNs) have significantly enhanced image recognition performance through effective feature extraction and weight sharing, establishing themselves as a pivotal research area in computer vision. Despite these advances, CNNs demand substantial computational resources, posing challenges for deployment on resource-constrained embedded devices. Consequently, lightweight CNN models, such as MobileNet, have been developed to improve computational efficiency. However, these models still require hardware accelerators to achieve optimal performance. Field-programmable gate arrays (FPGAs) present a viable platform for accelerating lightweight CNN models, owing to their high parallelism, superior energy efficiency relative to CPUs and GPUs, and reconfigurability, which adapts well to evolving network architectures. Nevertheless, compact FPGAs are limited by their on-chip logic resources. This limitation, coupled with the need to support multiple pruned versions of MobileNet arising from advances in model structure pruning, complicates the FPGA design process and increases resource requirements and associated costs. To address this issue, we propose a master-slave architecture for the MobileNetV1 computing framework, in which the master module manages task scheduling and resource allocation while slave modules execute the actual convolutional computations. The framework employs a dynamic configuration method that programs the execution parameters of each network layer into the FPGA, enabling adaptability and optimized resource usage. The proposed design was validated on the Altera DE2-115 FPGA evaluation board using the MobileNet-V1-0.5-160 model.
Experimental results demonstrated that, when implemented on the Altera DE2-115 FPGA board, the recognition speed of our optimized MobileNetV1 model reached 68.9 frames per second (FPS) with an 8-bit data width at a 25 MHz clock, utilizing only 38K logic units, an efficient result compared with previous FPGA implementations.
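The per-layer dynamic configuration described in the abstract can be illustrated with a sketch of the layer schedule that such a framework would program into the FPGA. The field names (`layer_type`, `in_channels`, etc.) and the helper below are hypothetical; the paper's actual register layout is not specified here. The topology follows the standard MobileNetV1 structure (one standard convolution followed by 13 depthwise-separable blocks), with the width multiplier and input resolution of the MobileNet-V1-0.5-160 variant used in the paper:

```python
from dataclasses import dataclass

@dataclass
class LayerConfig:
    """Hypothetical per-layer execution parameters programmed into the FPGA.
    Field names are illustrative; the paper's real configuration format may differ."""
    layer_type: str      # "standard", "depthwise", or "pointwise"
    in_channels: int
    out_channels: int
    feature_size: int    # input feature-map width/height for this layer
    stride: int

def mobilenet_v1_schedule(alpha: float = 0.5, resolution: int = 160) -> list:
    """Build the layer schedule for MobileNet-V1-<alpha>-<resolution>,
    mirroring the standard MobileNetV1 topology."""
    def c(ch):
        # Apply the width multiplier to a base channel count
        return max(1, int(ch * alpha))

    configs = [LayerConfig("standard", 3, c(32), resolution, 2)]
    size = resolution // 2
    # (base output channels, stride) for the 13 depthwise-separable blocks
    blocks = [(64, 1), (128, 2), (128, 1), (256, 2), (256, 1), (512, 2),
              (512, 1), (512, 1), (512, 1), (512, 1), (512, 1),
              (1024, 2), (1024, 1)]
    in_ch = c(32)
    for out_base, stride in blocks:
        out_ch = c(out_base)
        configs.append(LayerConfig("depthwise", in_ch, in_ch, size, stride))
        size //= stride
        configs.append(LayerConfig("pointwise", in_ch, out_ch, size, 1))
        in_ch = out_ch
    return configs
```

For alpha = 0.5 and 160x160 input this yields 27 convolution layers ending at a 5x5 feature map with 512 channels, which is what a master module would iterate over when dispatching work to the slave compute units.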
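The abstract reports an 8-bit data width but does not specify the quantization scheme. As one plausible illustration (an assumption, not the paper's method), a symmetric per-tensor int8 quantizer looks like this:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization (illustrative scheme only;
    the paper does not state its exact quantization method)."""
    scale = float(np.max(np.abs(w))) / 127.0 if np.any(w) else 1.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

w = np.array([0.5, -1.0, 0.25, 0.9])
q, s = quantize_int8(w)
# Dequantized values q * s approximate w to within one quantization step
```

Reducing weights and activations to 8 bits is what makes a 38K-logic-unit budget plausible, since narrow multipliers cost far less FPGA logic than floating-point units.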
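As a rough consistency check on the reported figures (my arithmetic, not from the paper), a 25 MHz clock at 68.9 FPS leaves a budget of roughly 363 thousand cycles per frame:

```python
def cycles_per_frame(clock_hz, fps):
    """Cycle budget available per frame at a given clock frequency and frame rate."""
    return clock_hz / fps

budget = cycles_per_frame(25e6, 68.9)
# 25e6 / 68.9 is approximately 362,845 cycles per frame
```

This per-frame budget is the quantity an accelerator's scheduling (here, the master module) must fit all 27 layers' computation into.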
About the journal:
Integration's aim is to cover every aspect of the VLSI area, with an emphasis on cross-fertilization between various fields of science, and the design, verification, test and applications of integrated circuits and systems, as well as closely related topics in process and device technologies. Individual issues will feature peer-reviewed tutorials and articles as well as reviews of recent publications. The intended coverage of the journal can be assessed by examining the following (non-exclusive) list of topics:
Specification methods and languages; Analog/Digital Integrated Circuits and Systems; VLSI architectures; Algorithms, methods and tools for modeling, simulation, synthesis and verification of integrated circuits and systems of any complexity; Embedded systems; High-level synthesis for VLSI systems; Logic synthesis and finite automata; Testing, design-for-test and test generation algorithms; Physical design; Formal verification; Algorithms implemented in VLSI systems; Systems engineering; Heterogeneous systems.