Abstract
This paper presents a model-based design of an AI accelerator following the Vitis TRD flow, implemented on the AMD Kria KV260 Vision AI Starter Kit. A ResNet-18 model, developed in PyTorch, was quantized, compiled with Vitis AI, and deployed to the FPGA via PYNQ. We analyzed different DPU configurations and clock frequencies in depth, focusing on resource utilization, power, FPS, and energy efficiency. Results show that resource utilization remains constant across frequencies, but lower frequencies increase energy consumption per inference. For optimal performance and energy efficiency, high-MAC DPU configurations should be run at higher frequencies. To our knowledge, no prior research has fully detailed the Vitis TRD flow within Vitis AI; most prior works rely on the Vivado TRD flow, which requires PetaLinux. This work offers a comprehensive guide for deploying AI models on FPGAs using Ubuntu, eliminating the need for PetaLinux expertise.
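The abstract's finding that lower DPU clock frequencies increase energy consumption follows from a simple model: energy per frame equals power divided by FPS, and while dynamic power falls with frequency, static power does not, so longer per-frame runtimes accumulate more static energy. A minimal sketch of this reasoning, using hypothetical power and throughput coefficients (not figures from the paper):

```python
# Hypothetical illustration of why lower DPU clock frequencies can cost
# more energy per frame: static power is paid for the entire runtime.
# All coefficients below are made up for illustration, not from the paper.

def energy_per_frame(freq_mhz, static_w=2.0, dyn_w_per_mhz=0.01,
                     fps_per_mhz=0.5):
    """Energy (joules) per inference under a simple linear model.

    power = static + dynamic, where dynamic power scales with frequency
    fps   = throughput, assumed proportional to frequency
    """
    power_w = static_w + dyn_w_per_mhz * freq_mhz
    fps = fps_per_mhz * freq_mhz
    return power_w / fps

low = energy_per_frame(150)   # lower DPU clock
high = energy_per_frame(300)  # higher DPU clock
# The higher clock spends less energy per frame because the fixed
# static power is amortized over more frames per second.
print(f"150 MHz: {low:.4f} J/frame, 300 MHz: {high:.4f} J/frame")
```

Under this model the higher-frequency configuration always wins on energy per frame whenever static power is nonzero, which matches the paper's recommendation to pair high-MAC DPU configurations with higher clocks.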
| Original language | English |
| --- | --- |
| Title of host publication | 2024 32nd Telecommunications Forum, TELFOR 2024 - Proceedings of Papers |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| ISBN (Electronic) | 9798350391053 |
| DOIs | |
| Publication status | Published - 2024 |
| Event | 32nd Telecommunications Forum, TELFOR 2024 - Belgrade, Serbia. Duration: 26 Nov 2024 → 27 Nov 2024 |
Publication series
| Name | 2024 32nd Telecommunications Forum, TELFOR 2024 - Proceedings of Papers |
| --- | --- |
Conference
| Conference | 32nd Telecommunications Forum, TELFOR 2024 |
| --- | --- |
| Country/Territory | Serbia |
| City | Belgrade |
| Period | 26/11/24 → 27/11/24 |
Bibliographical note
Publisher Copyright: © 2024 IEEE.
Keywords
- AI accelerator
- DPU
- FPGA
- Kria KV260
- MPSoC
- PYNQ
- Vitis AI
- Vitis TRD