The Vitis AI Library is a set of high-level libraries and APIs built for efficient AI inference with the Deep-Learning Processor Unit (DPU). It is built on the Vitis AI runtime with unified APIs and supports XRT 2022.2.
The Vitis AI Library provides an easy-to-use, unified interface by encapsulating many efficient, high-quality neural networks. This simplifies the deployment of deep-learning networks, even for users without deep-learning or FPGA expertise, and lets you focus on developing your applications rather than on the underlying hardware.
For the intended audience for the Vitis AI Library, refer to the About this Document section.
The Vitis AI Library consists of four parts, as shown in the following block diagram.
- Base libraries
- The base libraries provide the basic programming interface to the DPU and the post-processing modules available for each model.
- dpu_task is the interface library for DPU operations.
- cpu_task is the interface library for operations that are assigned to the CPU.
- xnnpp is the post-processing library for each model, with built-in optimization and acceleration.
- Model libraries
- The model libraries implement the deployment of most open-source neural networks, covering common network types such as classification, detection, and segmentation. They provide a fast, easy-to-use development path through a unified interface that applies to both Xilinx models and custom models.
- Library samples
- The library test samples are used to quickly test and evaluate the model libraries.
- Application demos
- The application demos show you how to use the Vitis AI Library to develop applications.
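To make the unified model-library interface concrete, the sketch below shows the typical usage pattern: create a model object by name, then call `run()` on an input image, with pre- and post-processing handled inside the library. This is a minimal sketch, assuming the `vitis::ai::Classification` model library and a "resnet50" model installed on the target; the image path is illustrative. It requires a board with a DPU and the Vitis AI Library installed, so it will not build or run on a plain host machine.

```cpp
#include <iostream>
#include <opencv2/imgcodecs.hpp>
// Header from the Vitis AI Library model libraries (assumed installed on the target).
#include <vitis/ai/classification.hpp>

int main() {
  // Load an input image with OpenCV; the file name is illustrative.
  cv::Mat image = cv::imread("sample.jpg");
  if (image.empty()) {
    std::cerr << "failed to read image" << std::endl;
    return 1;
  }

  // Create the model by name; "resnet50" assumes that model package
  // is installed on the target board.
  auto model = vitis::ai::Classification::create("resnet50");

  // Run inference; the library performs pre-processing, DPU execution,
  // and post-processing internally.
  auto result = model->run(image);

  // Print the top classification scores returned by the post-processor.
  for (const auto& s : result.scores) {
    std::cout << "index=" << s.index << " score=" << s.score << std::endl;
  }
  return 0;
}
```

Other model libraries (detection, segmentation, and so on) follow the same create/run pattern, differing mainly in the result structure they return.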