Setting up the Tool to Generate an Image File for Hardware Validation Flow - 2022.1 English

Vitis Model Composer User Guide (UG1483)

Document ID: UG1483
Release Date: 2022-05-26
Version: 2022.1 English
  1. Choose a platform. In the Vitis Model Composer Hub block, select the Target tab on the left and open the hardware selector menu as shown in the following figure.

  2. Select the Platform tab from the top and browse to select either a platform that ships with the product or a custom platform. As mentioned above, supported topologies must include an AI Engine, so only platforms with AI Engine arrays are valid for this selection.

  3. Select the AI Engine tab, specify the AI Engine subsystem, and then select Settings under the AI Engine tab. Select the Create Testbench check box.

  4. Select the Hardware Flow tab on the left. From here, choose between baremetal and Linux. For each selection, you can choose between hardware and hardware emulation using the Target drop-down menu.

    Note: For Linux applications only, additional information is required (see the following figure). This is a one-time action. Use the following steps to obtain the necessary information.

    1. Download the Versal common image. Unzip the download and specify the extracted directory in the Common SW Dir field.
    2. Switch to a bash shell and source sdk.sh from the common image directory. You are prompted for a target directory path for the SDK; extracting the SDK takes about ten minutes. Specify that SDK directory in the Target SDK Dir field (see the shell sketch after this procedure).
  5. In the Generate tab, select the AI Engine blockset row and select the Generate hardware image check box. The hardware image is generated under the run_hw directory inside the Code Directory specified for AI Engines. For designs that also contain HDL blocks, also select the HDL blockset row and make sure the Code Directory name is specified as netlist.
  6. Click Generate. Depending on your settings and the complexity of your design, generation can take up to one hour. Subsequent generations are much faster if design changes do not affect the PL. For example, if you only increase the simulation time in Simulink (to collect more data), change the data source, or modify an AI Engine kernel, subsequent image generations are faster.
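
The one-time Linux setup from step 4 can be summarized with the following shell sketch. The archive name and directory paths shown are examples only; substitute the locations used in your installation and release.

    # Extract the downloaded Versal common image (archive name is an example)
    mkdir -p ~/common_sw
    tar -xzf xilinx-versal-common-v2022.1.tar.gz -C ~/common_sw
    # Point the Common SW Dir field at the extracted directory, for example:
    #   ~/common_sw/xilinx-versal-common-v2022.1

    # From a bash shell, source sdk.sh in the common image directory.
    # You are prompted for a target directory; extraction takes about ten minutes.
    cd ~/common_sw/xilinx-versal-common-v2022.1
    source ./sdk.sh
    # Point the Target SDK Dir field at the directory you provided to sdk.sh.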