Simulating an AI Engine Graph Application - 2021.1 English

Versal ACAP AI Engine Programming Environment User Guide (UG1076)

Document ID: UG1076
Release Date: 2021-07-19
Version: 2021.1 English

This chapter describes the various execution targets available to simulate AI Engine applications at different levels of abstraction, accuracy, and speed. AI Engine graphs can be simulated in three different simulation environments.

The x86 simulator is a functional simulator, as described in x86 Functional Simulator. It can be used to functionally simulate your AI Engine graph and is very useful for early iterations in the kernel and graph development cycle. However, it does not provide timing, resource, or performance information.
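A typical x86 functional simulation run looks like the following sketch. The source file name and work directory are illustrative placeholders, not taken from this guide:

```shell
# Compile the graph for the x86 functional simulation target.
# graph.cpp is a placeholder for your own top-level graph source.
aiecompiler --target=x86sim graph.cpp

# Run the functional simulation against the generated Work directory.
x86simulator --pkg-dir=./Work
```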

The AI Engine simulator (aiesimulator) models the timing and resources of the AI Engine array, while using transaction-level, timed SystemC models for the NoC, DDR memory, PL, and PS. This allows for faster performance analysis of your AI Engine applications and accurate estimation of AI Engine resource utilization, with cycle-approximate timing information.
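The cycle-approximate flow uses the same compiler with the hardware target, then runs aiesimulator. Again, the file name is a placeholder:

```shell
# Compile the graph for the AI Engine (SystemC) simulation target.
aiecompiler --target=hw graph.cpp

# Run the cycle-approximate simulation; --profile collects additional
# performance data for later analysis.
aiesimulator --pkg-dir=./Work --profile
```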

Finally, when you are ready to simulate the entire AI Engine graph targeting a specific board and platform, along with PL kernels and your host application, you can use the Vitis™ hardware emulation flow. This flow includes the SystemC model of the AI Engine array and transaction-level SystemC models for the NoC, DDR memory, PL, and PS. You can also include RTL simulation models of your PL kernels and IP. The options provided to this flow are described in this chapter.
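At a high level, the hardware emulation flow links the compiled AI Engine graph (libadf.a) with PL kernels for a target platform using the v++ linker. The platform and kernel names below are placeholders for your own; the emulation itself is launched with the launch script generated by the packaging step:

```shell
# Link the AI Engine graph archive with a PL kernel for hardware
# emulation. <versal_platform> and pl_kernel.xo are placeholders.
v++ --link --target hw_emu --platform <versal_platform> \
    libadf.a pl_kernel.xo -o system.xclbin
```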

As shown in Integrating the Application Using the Vitis Tools Flow and Using the Vitis IDE, the Vitis™ compiler builds the system-level project to run the simulator from the IDE. Alternatively, the options can be specified on a command line or in a script.