Compiling and Running the Graph from the Command Line - 2022.2 English

AI Engine Kernel and Graph Programming Guide (UG1079)

Document ID: UG1079
Release Date: 2022-10-19
Version: 2022.2 English
  1. To compile your graph, execute the following command (see Compiling an AI Engine Graph Application in AI Engine Tools and Flows User Guide (UG1076) for more details).
    aiecompiler project.cpp

    The graph application source file is project.cpp; a minimal sketch of such a file is shown after this list. The AI Engine compiler reads the specified input graph, compiles it to the AI Engine array, produces various reports, and generates output files in the Work directory.

  2. After parsing the C++ input into a graphical intermediate form expressed in JavaScript Object Notation (JSON), the AI Engine compiler performs resource mapping and scheduling analysis, mapping kernel nodes in the graph to processing cores in the AI Engine array and data windows to memory banks. The JSON representation is augmented with this mapping information. Each AI Engine also requires a schedule of all the kernels mapped to it.

    The input graph is first partitioned into groups of kernels to be mapped to the same core.

    The output of the mapper can also be viewed as a tabular report in the file project_mapping_analysis_report.txt. This report shows the mapping of nodes to processing cores and of data windows to memory banks. Inter-processor communication buffers are double-banked as ping-pong buffers where appropriate.

  3. The AI Engine compiler allocates the necessary locks, memory buffers, and DMA channels and descriptors, and generates routing information for mapping the graph onto the AI Engine array. It synthesizes a main program for each core that schedules all the kernels on that core, and implements the necessary locking mechanisms and data copies between buffers. The C program for each core is compiled using the Synopsys Single Core Compiler to produce loadable ELF files. The AI Engine compiler also generates control APIs to manage graph initialization, execution, and termination from the main application, as well as a simulator configuration script, scsim_config.json. These are all stored within the Work directory under various sub-folders (see Compiling an AI Engine Graph Application in AI Engine Tools and Flows User Guide (UG1076) for more details).
  4. After compiling the AI Engine graph, the AI Engine compiler writes a summary of compilation results called <graph-file-name>.aiecompile_summary that can be viewed in Vitis Analyzer. The summary contains a collection of reports and diagrams reflecting the state of the AI Engine application implemented in the compiled build. The summary is written to the working directory of the AI Engine compiler, as specified by the --workdir option, which defaults to ./Work.

    To open the AI Engine compiler summary, use the following command:

    vitis_analyzer ./Work/graph.aiecompile_summary
  5. To run the graph, execute the following command (see Simulating an AI Engine Graph Application in AI Engine Tools and Flows User Guide (UG1076) for more details).
    aiesimulator --pkg-dir=./Work

    This starts the SystemC-based simulator, with the control program as the main application. The graph APIs used in the control program (a sketch of such a program appears after this list) configure the AI Engine array, including setting up static routing, programming the DMAs, and loading the ELF files onto the individual cores, and then initiate AI Engine array execution.

    At the end of the simulation, the output data is written to the aiesimulator_output directory and should match the reference data.

    The graph can be loaded at device boot time in hardware or through the host application. Deploying the graph in hardware and the associated flow are described in Integrating the Application Using the Vitis Tools Flow in AI Engine Tools and Flows User Guide (UG1076).
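
The following is a minimal, hypothetical sketch of what a graph source file such as project.cpp might contain, assuming a single window-based kernel named my_kernel defined in my_kernel.cc, file-backed PLIOs, and 128-byte window connections. The graph class name, PLIO names, data file paths, and iteration count are illustrative only and are not taken from this guide; the init(), run(), and end() calls in main() are the control APIs referred to in steps 3 and 5.

    // project.cpp -- hypothetical minimal graph application (sketch only)
    #include <adf.h>

    using namespace adf;

    // The kernel function is declared here; a sketch of its body follows below.
    void my_kernel(input_window<int32> *in, output_window<int32> *out);

    class simple_graph : public graph {
    private:
        kernel k;
    public:
        input_plio  in;
        output_plio out;

        simple_graph() {
            // File-backed PLIOs for simulation (names and paths are assumptions)
            in  = input_plio::create("DataIn",  plio_32_bits, "data/input.txt");
            out = output_plio::create("DataOut", plio_32_bits, "data/output.txt");

            k = kernel::create(my_kernel);
            source(k) = "my_kernel.cc";   // kernel source compiled for its core
            runtime<ratio>(k) = 0.9;      // allow the kernel up to 90% of one core

            // 128-byte window connections between the PLIOs and the kernel
            connect<window<128>>(in.out[0], k.in[0]);
            connect<window<128>>(k.out[0], out.in[0]);
        }
    };

    simple_graph mygraph;

    // Control program: drives graph initialization, execution, and termination
    // when compiled for the AI Engine simulator or the x86 simulator.
    #if defined(__AIESIM__) || defined(__X86SIM__)
    int main(void) {
        mygraph.init();   // configure routing and DMAs, load the ELF files
        mygraph.run(4);   // run the graph for four iterations
        mygraph.end();    // wait for completion
        return 0;
    }
    #endif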
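
A correspondingly minimal, hypothetical kernel body might look as follows; the loop count of 32 matches the 128-byte int32 windows assumed in the graph sketch above.

    // my_kernel.cc -- hypothetical window-based kernel (sketch only)
    #include <adf.h>

    // Copies one 128-byte window (32 int32 samples) from input to output per invocation.
    void my_kernel(input_window<int32> *in, output_window<int32> *out) {
        for (int i = 0; i < 32; i++) {
            int32 value = window_readincr(in);   // read one sample, advance the window pointer
            window_writeincr(out, value);        // write one sample, advance the window pointer
        }
    }

Compiling this pair of files with the aiecompiler command from step 1 produces the Work directory contents, mapping report, and compile summary described above.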

Note: In subsequent compilations of the AI Engine graph, only AI Engine kernels that have been modified are recompiled; unmodified kernels are not recompiled.