Iterative AI Engine Application Compilation - 2022.2 English

AI Engine Tools and Flows User Guide (UG1076)

Document ID
UG1076
Release Date
2022-10-19
Version
2022.2 English

AI Engine application development can start early in the system development stage. Gradually, the AI Engine development team and the hardware development team converge on an interface between the programmable logic (PL) and the AI Engine array. At some point this interface is fixed and should not be changed, but the AI Engine application can continue to evolve as long as the interface remains unchanged.

As long as the interface is unchanged, the hardware and AI Engine development teams can work independently. After an AI Engine application compilation, the compiler generates many files in the Work directory that record the decisions made during the entire process. One of these files, Work/temp/graph_aie_routed.aiecst, contains, among other information, the complete interface specification between the AI Engine array and the PL and PS, in JSON format. The NodeConstraints section describes all the interfaces you defined in your graph, with their column locations and channel selections.

  1. You can extract this data and store it in another file in JSON format:
    {
      "NodeConstraints": {
        "DataIn1": {
          "shim": {
            "column": 24,
            "channel": 0
          }
        },
        "clip_in": {
          "shim": {
            "column": 24,
            "channel": 0
          }
        },
        "clip_out": {
          "shim": {
            "column": 25,
            "channel": 0
          }
        },
        "DataOut1": {
          "shim": {
            "column": 25,
            "channel": 0
          }
        }
      }
    }
  2. The AI Engine application can be modified and recompiled by providing the previously extracted interface constraints file to the compiler:
    aiecompiler $(AIE_FLAGS) --workdir=./Work2 --constraints=interface.aiecst graph.cpp
    
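The extraction in step 1 can be scripted. The following is a minimal sketch, assuming the routed file is valid JSON with a top-level "NodeConstraints" key, as in the excerpt above; it inlines a small sample in place of the real Work/temp/graph_aie_routed.aiecst for illustration.

```python
import json

# Sample standing in for Work/temp/graph_aie_routed.aiecst, which is
# assumed here to be valid JSON with a top-level "NodeConstraints" key.
routed = {
    "NodeConstraints": {
        "DataIn1": {"shim": {"column": 24, "channel": 0}},
        "DataOut1": {"shim": {"column": 25, "channel": 0}},
    },
    # The real file holds more than the interface constraints; everything
    # else is dropped during extraction.
    "OtherInfo": {"ignored": True},
}

# Keep only the interface constraints and store them in a separate file
# that can later be passed to the compiler via --constraints.
constraints = {"NodeConstraints": routed["NodeConstraints"]}
with open("interface.aiecst", "w") as f:
    json.dump(constraints, f, indent=2)
```

In a real flow you would replace the inlined sample with `json.load()` on the generated file in your Work directory.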

A new libadf.a is created (unless you specify another name). It can be packaged directly with the XSA previously generated by the link stage and with the host executable.

You can also manually change the constraints file without waiting for a new system link, and afterwards provide your libadf.a file to the hardware team.
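A minimal sketch of such a manual edit, assuming the constraints file has the JSON shape shown earlier; the node names and the new column value are illustrative only, and any edited placement must still be one the hardware team can honor.

```python
import json

# Illustrative constraints, as extracted earlier into interface.aiecst.
constraints = {
    "NodeConstraints": {
        "DataIn1": {"shim": {"column": 24, "channel": 0}},
        "DataOut1": {"shim": {"column": 25, "channel": 0}},
    }
}

# Hand-edit: move the DataOut1 interface to shim column 26 (a hypothetical
# value chosen for illustration).
constraints["NodeConstraints"]["DataOut1"]["shim"]["column"] = 26

# Write the edited constraints back for the next aiecompiler run.
with open("interface.aiecst", "w") as f:
    json.dump(constraints, f, indent=2)
```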

A tutorial including all the steps to perform this task is available at: https://github.com/Xilinx/Vitis-Tutorials/tree/2022.1/AI_Engine_Development/Feature_Tutorials/15-post-link-recompile

Developing the AI Engine Software using a Fixed Hardware Platform

After building the hardware design, the AI Engine/programmable logic (PL) interface is fixed, but you can recompile AI Engine graphs and kernels against this fixed hardware as often as desired. In fact, the kernels and graphs can differ considerably across such AI Engine software-only compiles, as long as the AI Engine/PL interface remains fixed. The AI Engine compiler issues an error when you attempt a software-only AI Engine compile that cannot conform to the fixed AI Engine/PL interface. After the system link stage, Vitis™ generates a new platform with an .xsa extension. Besides the bitstream, this file contains a lot of information, in particular the AI Engine-PL interface constraints. The AI Engine developer can develop software based on this hardware platform. The file can be used by the aiecompiler to integrate all the constraints automatically:
aiecompiler -include ... -platform=LinkedPlatform.xsa  newgraph.cpp

The host code can also be adapted to the new graph, then compiled and packaged with the new libadf.a to generate an image.

Note: The Vitis v++ linker and platform interfaces abstract CPU control, DDR memory, and streaming I/O, so it can be advantageous to build an application-specific hardware test harness targeting a standard development board, allowing you to develop, compile, and run AI Engine code at speed in hardware. Retargeting the AI Engine application code to a custom application-specific platform then consists simply of reapplying the v++ linker directives for the AI Engine when targeting the custom platform.