Integrating the Application Using the Vitis Tools Flow - 2022.2 English

AI Engine Tools and Flows User Guide (UG1076)


While developing an AI Engine design graph, many design iterations are typically performed using the AI Engine compiler or AI Engine simulator tools. This method provides quick design iterations when focused on developing the AI Engine application. When ready, the AI Engine design can be integrated into a larger system design using the flow described in this chapter.

The Vitis™ tools flow simplifies hardware design and integration with a software-like compilation and linking flow, integrating the three domains of the Versal® device: the AI Engine array, the programmable logic (PL) region, and the processing system (PS). The Vitis compiler flow lets you integrate your compiled AI Engine design graph (libadf.a) with additional kernels implemented in the PL region of the device, including HLS and RTL kernels, and link them for use on a target platform. You can call these compiled hardware functions from a host program running on the Arm® processor in the Versal device. The Vitis compiler provides abstract directives for accessing system memory, CPU control, and streaming I/O, so it is often possible to develop AI Engine graphs and kernels on a standard development platform and quickly re-target the AI Engine code to a custom platform developed for your specific application.

The following figure shows the high-level steps required to use the Vitis tools flow to integrate your application. The command-line process to run this flow is described here.
Note: You can also use this flow from within the Vitis IDE as explained in Using the Vitis IDE.
Figure 1. Vitis Tools Flow

Important: Using the Vitis tools and AI Engine tools requires the setup described in Setting Up the Vitis Tool Environment.

The following steps can be adapted to any AI Engine design in a Versal device.

  1. As described in Compiling an AI Engine Graph Application, the first step is to create and compile the AI Engine graph into a libadf.a file using the AI Engine compiler. You can iterate between the AI Engine compiler and the AI Engine simulator to develop the graph until you are ready to proceed.
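As a sketch, the graph compilation and simulation iteration might look like the following. The platform variable, graph source file name, and work directory are placeholders for your own design, not fixed requirements:

```shell
# Point at the target platform (.xpfm) for your board; path is an example.
export PLATFORM=$PLATFORM_REPO_PATHS/xilinx_vck190_base_202220_1/xilinx_vck190_base_202220_1.xpfm

# Compile the AI Engine graph for hardware. Outputs the Work/ directory
# and the libadf.a archive consumed later by the v++ link step.
aiecompiler --target=hw \
    --platform=$PLATFORM \
    --workdir=./Work \
    graph.cpp

# Iterate: run the AI Engine simulator against the compiled work directory.
aiesimulator --pkg-dir=./Work
```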
  2. Compiling PL Kernels: PL kernels are compiled for implementation in the PL region of the target platform using the v++ --compile command. These kernels can be C/C++ kernels or RTL kernels, in compiled Xilinx object (xo) form.
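A minimal compile command for one C/C++ PL kernel might look like this; the kernel name (mm2s) and source file are hypothetical examples of a simple data mover:

```shell
# Compile a C/C++ kernel to a compiled Xilinx object (.xo) for the
# hardware target. Repeat per kernel; RTL kernels arrive as .xo directly.
v++ --compile --target hw \
    --platform $PLATFORM \
    -k mm2s \
    mm2s.cpp -o mm2s.xo
```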
  3. Linking the System: Link the compiled AI Engine graph with the C/C++ kernels and RTL kernels onto a target platform. The process creates an XSA file to package the system.
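A sketch of the link step follows, assuming the hypothetical mm2s/s2mm kernels from the compile step and AI Engine graph ports named DataIn1/DataOut1; your kernel names, port names, and connectivity will differ:

```shell
# Describe how PL kernels connect to the AI Engine array in a config file.
# nk= instantiates kernels; sc= makes AXI4-Stream connections.
cat > system.cfg <<'EOF'
[connectivity]
nk=mm2s:1:mm2s
nk=s2mm:1:s2mm
sc=mm2s.s:ai_engine_0.DataIn1
sc=ai_engine_0.DataOut1:s2mm.s
EOF

# Link the compiled graph and PL kernels into an XSA for the platform.
v++ --link --target hw \
    --platform $PLATFORM \
    libadf.a mm2s.xo s2mm.xo \
    --config system.cfg \
    -o my_system.xsa
```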
  4. Compiling the Embedded Application for the Cortex-A72 Processor: Optionally compile a host application to run on the Cortex®-A72 processor using the GNU Arm cross-compiler to create an ELF file. The host program interacts with the AI Engine kernels and the kernels in the PL region. This compilation step is optional because there are several ways to deploy and interact with the AI Engine kernels, and a host program running in the PS is only one of them.
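A hedged sketch of the cross-compilation step is shown below; it assumes a Linux host application (host.cpp) using the XRT native API and a sysroot from the Versal common images, with SDKTARGETSYSROOT set by the cross-compilation environment setup script:

```shell
# Cross-compile the PS host application to an Arm ELF with the GNU Arm
# toolchain. File names and the use of XRT are illustrative assumptions.
aarch64-linux-gnu-g++ -std=c++17 \
    --sysroot=$SDKTARGETSYSROOT \
    -I$SDKTARGETSYSROOT/usr/include/xrt \
    -L$SDKTARGETSYSROOT/usr/lib \
    host.cpp -o host.exe \
    -lxrt_coreutil
```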
  5. Packaging the System for Hardware: Use the v++ --package process to gather the files required to configure and boot the system and to load and run the application, including the AI Engine graph and PL kernels. This builds the package needed to run emulation and debug, or to run your application on hardware.
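The packaging step can be sketched as follows for a Linux boot from an SD card; the XSA name, Linux image, and rootfs paths are placeholders for the artifacts of your own build and common-image installation:

```shell
# Gather boot images, the linked XSA, and the AI Engine graph into a
# bootable SD card directory. Paths below are examples, not requirements.
v++ --package --target hw \
    --platform $PLATFORM \
    my_system.xsa libadf.a \
    --package.out_dir ./package \
    --package.boot_mode sd \
    --package.kernel_image ./Image \
    --package.rootfs ./rootfs.ext4 \
    --package.sd_file host.exe
```

For hardware emulation, the same command with --target hw_emu (and an XSA linked for hw_emu) produces a package whose launch_hw_emu.sh script boots the emulated system.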