Deep-Learning Processor Unit - 2.0 English

Vitis AI User Guide (UG1414)


The deep-learning processor unit (DPU) is a programmable engine optimized for deep neural networks. It is a group of parameterizable IP cores pre-implemented on the hardware with no place and route required. It is designed to accelerate the computing workloads of deep learning inference algorithms widely adopted in various computer vision applications, such as image/video classification, semantic segmentation, and object detection/tracking. The DPU is released with the Vitis AI specialized instruction set, thus facilitating the efficient implementation of deep learning networks.

An efficient tensor-level instruction set is designed to support and accelerate various popular convolutional neural networks, such as VGG, ResNet, GoogLeNet, YOLO, SSD, and MobileNet, among others. The DPU is scalable to fit various Xilinx Zynq UltraScale+ MPSoCs, Kria KV260, Versal cards, and Alveo boards, from Edge to Cloud, to meet the requirements of many diverse applications.

A configuration file, arch.json, is generated during the Vitis flow and is used by the Vitis AI compiler for model compilation. Whenever the DPU configuration is modified, a new arch.json must be generated, and the models must be recompiled against it. In the DPU-TRD, the arch.json file is located at $TRD_HOME/prj/Vitis/binary_container_1/link/vivado/vpl/prj/prj.gen/sources_1/bd/xilinx_zcu102_base/ip/xilinx_zcu102_base_DPUCZDX8G_1_0/arch.json.
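For illustration, the sketch below shows what an arch.json file typically contains and how it is passed to the Vitis AI compiler. The fingerprint value, file paths, and network name are placeholders, and the exact compiler front end (vai_c_tensorflow, vai_c_caffe, or vai_c_xmodel) and its flags depend on the model framework and the Vitis AI release:

```shell
# arch.json identifies the DPU configuration via a fingerprint
# (placeholder value shown; the real file is generated by the Vitis flow):
#   { "fingerprint": "0x1000020F6014406" }

# Recompile a quantized TensorFlow model against the new DPU configuration.
# All paths and the net name below are illustrative.
vai_c_tensorflow \
    --frozen_pb  quantize_results/deploy_model.pb \
    --arch       /path/to/arch.json \
    --output_dir compiled_model \
    --net_name   my_network
```

If the fingerprint in arch.json does not match the DPU on the target board, the runtime rejects the compiled model, which is why recompilation is required after any DPU configuration change.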

Vitis AI offers a series of different DPUs for both embedded devices, such as the Xilinx Zynq® UltraScale+™ MPSoC, Kria KV260, and Versal cards, and Alveo cards, such as the U50LV, U200, U250, and U55C, enabling unique differentiation and flexibility in terms of throughput, latency, scalability, and power.

Figure 1. DPU Options