Deep Neural Networks (DNNs) have achieved remarkable success in diverse applications such as image classification, signal processing, and video analysis. Despite their effectiveness, these models demand substantial computational resources, making FPGA-based hardware acceleration a critical enabler for real-time deployment. However, current methods for mapping DNNs to hardware have seen limited adoption, largely because software developers lack the specialized hardware expertise needed for efficient implementation. High-Level Synthesis (HLS) tools were introduced to bridge this gap, but they typically confine designs to fixed platforms and simple network structures. Most existing tools support only standard architectures such as VGG or ResNet with predefined parameters, offering little flexibility for customization and restricting deployment to specific FPGA devices. To address these limitations, we introduce Py2C, an automated framework that converts AI models from Python to C. Py2C supports a wide range of DNN architectures, from basic convolutional and pooling layers with variable window sizes to advanced models such as VGG, ResNet, InceptionNet, ShuffleNet, NambaNet, and YOLO. Integrated with Xilinx's Vitis HLS, Py2C forms the Py2RTL flow, which generates register-transfer level (RTL) designs with custom-precision arithmetic and cross-platform verification. Validated on multiple networks, Py2C demonstrates improved hardware efficiency and reduced power consumption, most notably in QRS detection for ECG signals. By streamlining the AI-to-RTL conversion process, Py2C makes FPGA-based AI deployment both high-performance and accessible.
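
For illustration only, the sketch below shows the kind of plain-C convolution kernel that a Python-to-C flow such as Py2C could hand to an HLS tool. The layer dimensions, the Q8.8 fixed-point format, and every identifier (conv2d, fix_t, acc_t) are assumptions made for this example, not Py2C's actual output.

/* Hypothetical sketch of C code a Python-to-C flow might emit for one
 * convolutional layer; sizes, the Q8.8 format, and all names below are
 * illustrative assumptions, not the actual Py2C output. */
#include <stdint.h>
#include <stdio.h>

#define IN_H   8                 /* input feature-map height          */
#define IN_W   8                 /* input feature-map width           */
#define K      3                 /* convolution window size           */
#define OUT_H  (IN_H - K + 1)
#define OUT_W  (IN_W - K + 1)
#define FRAC   8                 /* fractional bits of the Q8.8 format */

typedef int16_t fix_t;           /* Q8.8 custom-precision value        */
typedef int32_t acc_t;           /* wider accumulator to avoid overflow */

/* Plain nested-loop convolution: a loop structure an HLS tool can
 * pipeline and unroll once pragmas or directives are attached. */
void conv2d(const fix_t in[IN_H][IN_W],
            const fix_t w[K][K],
            fix_t out[OUT_H][OUT_W])
{
    for (int r = 0; r < OUT_H; r++) {
        for (int c = 0; c < OUT_W; c++) {
            acc_t acc = 0;
            for (int i = 0; i < K; i++)
                for (int j = 0; j < K; j++)
                    acc += (acc_t)in[r + i][c + j] * (acc_t)w[i][j];
            /* product of two Q8.8 values has 16 fractional bits;
             * shift back down to Q8.8 before storing              */
            out[r][c] = (fix_t)(acc >> FRAC);
        }
    }
}

int main(void)
{
    fix_t in[IN_H][IN_W], w[K][K], out[OUT_H][OUT_W];

    /* fill the input with 1.0 and the kernel with 0.5 in Q8.8 */
    for (int r = 0; r < IN_H; r++)
        for (int c = 0; c < IN_W; c++)
            in[r][c] = 1 << FRAC;
    for (int i = 0; i < K; i++)
        for (int j = 0; j < K; j++)
            w[i][j] = 1 << (FRAC - 1);

    conv2d(in, w, out);

    /* each output should be 9 * 1.0 * 0.5 = 4.5 */
    printf("out[0][0] = %.2f\n", out[0][0] / (double)(1 << FRAC));
    return 0;
}

In such a sketch, custom-precision arithmetic is modeled with integer types and shifts so the same source compiles both on a host CPU for cross-platform verification and through an HLS flow for RTL generation.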
