Versal adaptive SoCs combine Scalar Engines, Adaptable Engines, and Intelligent Engines with leading-edge memory and interfacing technologies to deliver powerful heterogeneous computing for any application. Most importantly, Versal adaptive SoC hardware and software are targeted for programming and optimization by data scientists, and software and hardware developers.
Some Versal adaptive SoC devices incorporate an array of very-long instruction word (VLIW) processors with single instruction multiple data (SIMD) vector units, called AI Engines, that are highly optimized for compute-intensive applications such as 5G wireless and artificial intelligence (AI). Embedded system designs using Versal devices can include AI Engine adaptive data flow (ADF) graph applications developed and tested using Vitis tools.
As described in AI Engine Kernel and Graph Programming Guide (UG1079), the ADF graph application consists of nodes and edges, where nodes represent compute kernel functions and edges represent data connections. The ADF graph is a static dataflow graph with the kernels operating in parallel. An AI Engine kernel is a C/C++ program written using specialized intrinsic calls that target the VLIW SIMD vector processor. Kernels operate on data streams and are the fundamental building blocks of an ADF graph specification. These kernels consume input blocks of data and produce output blocks of data.
The following figure shows the development flow for AI Engine graph applications, in which individual kernel code is compiled and combined into the graph application. The AI Engine graph and kernel code is compiled using the AI Engine compiler (the v++ --mode aie command) to produce an executable file that is run on the AI Engine processors.
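As a sketch, the compile step can be invoked from the command line as follows. The platform path and graph source name are placeholders, and the command is only echoed here rather than executed, since v++ requires an installed Vitis toolchain:

```shell
# Illustrative only: build up and print the AI Engine compile command.
PLATFORM=/path/to/platform.xpfm   # assumed extensible platform file
GRAPH_SRC=graph.cpp               # assumed top-level ADF graph source

AIE_COMPILE="v++ -c --mode aie --platform ${PLATFORM} ${GRAPH_SRC}"
echo "${AIE_COMPILE}"
```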
An AI Engine component targeting the aiengine domain of a Versal device can be built from the command line as described in Building and Running the System, or in the Vitis unified IDE as described in Using the Vitis Unified IDE. The process for building and analyzing an AI Engine graph application is described in brief in this document, and detailed in the AI Engine Tools and Flows User Guide (UG1076).
By default, the AI Engine compiler writes all outputs to a directory called ./Work and creates a file called libadf.a. Work is a sub-directory of the current directory where the tool was launched, and libadf.a is written to the directory from which the AI Engine compiler was launched. The libadf.a file is used for linking with PL kernels and the extensible platform using the v++ --link command, as explained in the next section.
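The linking step that consumes libadf.a can be sketched in the same dry-run style. The platform path, PL kernel object names, and output file name are placeholders, and the command is echoed rather than executed:

```shell
# Illustrative only: print the link command that combines the compiled
# AI Engine graph (libadf.a) with PL kernels and the extensible platform.
PLATFORM=/path/to/platform.xpfm     # assumed extensible platform file
PL_KERNELS="kernel1.xo kernel2.xo"  # assumed compiled PL kernel objects

LINK_CMD="v++ --link --platform ${PLATFORM} libadf.a ${PL_KERNELS} -o system.xsa"
echo "${LINK_CMD}"
```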