For Cloud (Alveo U50/U50LV/U280 Cards) - 1.4.1 English

Vitis AI Library User Guide (UG1354)
Document ID: UG1354
Release Date: 2021-12-11
Version: 1.4.1 English

Set up the host on the Cloud by running the docker image.
  1. Clone the Vitis AI repository.
    git clone --recurse-submodules https://github.com/Xilinx/Vitis-AI
    cd Vitis-AI
  2. Run Docker container according to the instructions in the docker installation guide.
    ./docker_run.sh -X xilinx/vitis-ai-cpu:<x.y.z>
    Note: A workspace folder is created by the docker runtime system and mounted at /workspace inside the docker container.
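    For example, if the CPU image on your system is tagged latest (you can list the available tags with docker images | grep vitis-ai), the command looks like the following; the tag is only an illustration and may differ on your machine:
    ./docker_run.sh -X xilinx/vitis-ai-cpu:latest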
  3. Place the programs, data, and other files you are developing in the workspace folder. After the docker container starts, you can find them under /workspace.

    Do not put files in any other path of the docker system; they are erased after you exit the docker container.
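    For example, assuming docker_run.sh was launched from the Vitis-AI clone, a file copied into that folder on the host (my_test.jpg is only a placeholder name) is visible under /workspace inside the container:
    # On the host
    cp ~/my_test.jpg ~/Vitis-AI/
    # Inside the container
    ls /workspace/my_test.jpg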

  4. Select the model for your platform. The download links for the latest models are provided in each model's yaml file under Vitis-AI/models/AI-Model-Zoo.
    • If the /usr/share/vitis_ai_library/models folder does not exist, create it first.
      sudo mkdir -p /usr/share/vitis_ai_library/models
    • For DPUCAHX8H of the Alveo U50 card, take resnet_v1_50_tf as an example.
      wget https://www.xilinx.com/bin/public/openDownload?filename=resnet_v1_50_tf-u50-u50lv-u280-DPUCAHX8H-r1.4.0.tar.gz -O resnet_v1_50_tf-u50-u50lv-u280-DPUCAHX8H-r1.4.0.tar.gz
      tar -xzvf resnet_v1_50_tf-u50-u50lv-u280-DPUCAHX8H-r1.4.0.tar.gz
      sudo cp resnet_v1_50_tf /usr/share/vitis_ai_library/models -r
    • For DPUCAHX8L of the Alveo U50LV card, take resnet_v1_50_tf as an example.
      wget https://www.xilinx.com/bin/public/openDownload?filename=resnet_v1_50_tf-u50-u50lv-u280-DPUCAHX8L-r1.4.0.tar.gz -O resnet_v1_50_tf-u50-u50lv-u280-DPUCAHX8L-r1.4.0.tar.gz
      tar -xzvf resnet_v1_50_tf-u50-u50lv-u280-DPUCAHX8L-r1.4.0.tar.gz
      sudo cp resnet_v1_50_tf /usr/share/vitis_ai_library/models -r
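    As a quick check that a model is installed correctly, list its folder; a model directory normally contains a compiled .xmodel file and a .prototxt configuration file (the exact file names vary from model to model):
    ls /usr/share/vitis_ai_library/models/resnet_v1_50_tf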
  5. Download the cloud xclbin package from here. Untar it, select the Alveo card, and install it. For DPUCAHX8H, take U50 as an example.
    tar -xzvf xclbin-1.4.0.tar.gz
    sudo cp DPU_DPUCAHX8H/dpu_DPUCAHX8H_6E300_xilinx_u50_gen3x4_xdma_base_2.xclbin /opt/xilinx/overlaybins/dpu.xclbin
    export XLNX_VART_FIRMWARE=/opt/xilinx/overlaybins/dpu.xclbin
    For DPUCAHX8L, take the U50LV as an example.
    tar -xzvf xclbin-1.4.0.tar.gz
    sudo cp DPU_DPUCAHX8L/dpu_DPUCAHX8L_1E250_xilinx_u50lv_gen3x4_xdma_base_2.xclbin /opt/xilinx/overlaybins/dpu.xclbin
    export XLNX_VART_FIRMWARE=/opt/xilinx/overlaybins/dpu.xclbin
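    The export applies only to the current shell. As a quick sanity check, and optionally to make the setting persistent (appending it to ~/.bashrc is just one way to do this), you can run:
    ls -l "$XLNX_VART_FIRMWARE"
    echo 'export XLNX_VART_FIRMWARE=/opt/xilinx/overlaybins/dpu.xclbin' >> ~/.bashrc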
  6. If more than one card is installed on the server and you want the program to run on specific cards, set XLNX_ENABLE_DEVICES. The following shows how XLNX_ENABLE_DEVICES is used:
    • export XLNX_ENABLE_DEVICES=0 to use only device 0 for the DPU
    • export XLNX_ENABLE_DEVICES=0,1,2 to use device 0, device 1, and device 2 for the DPU
    • If this environment variable is not set, all devices are used for the DPU by default.
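    To find the device indices of the installed cards before setting the variable, you can list them with xbutil. The scan subcommand below belongs to the legacy xbutil shipped with this XRT generation; newer XRT releases report the same information through xbutil examine:
    /opt/xilinx/xrt/bin/xbutil scan
    # For example, restrict the DPU to the first two listed devices
    export XLNX_ENABLE_DEVICES=0,1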
  7. To compile a library sample in the Vitis AI Library, taking classification as an example, execute the following commands:
    cd /workspace/demo/Vitis-AI-Library/samples/classification
    bash -x build.sh

    The executable program is now produced.
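    Once built, the sample can be run against the model installed in step 4. The executable name test_jpeg_classification and the image name sample_classification.jpg below are typical for the classification sample but are only an example; use the executable that build.sh actually produced and any JPEG image you have available:
    ./test_jpeg_classification resnet_v1_50_tf sample_classification.jpg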

  8. To modify the library source code, view and edit the files under /workspace/tools/Vitis-AI-Library.

    Before compiling the AI libraries, confirm the build output path. The default output path is $HOME/build.

    To change the default output path, modify build_dir_default in cmake.sh. For example, change build_dir_default=$HOME/build/build.${target_info}/${project_name} to build_dir_default=/workspace/build/build.${target_info}/${project_name}.

    Note: When modifying build_dir_default, change only the $HOME portion of the path.

    Execute the following commands to build all the libraries at once:

    cd /workspace/tools/Vitis-AI-Library
    ./cmake.sh --clean

    After compiling, you can find the generated AI libraries under build_dir_default. To change the compilation rules, check and modify the cmake.sh in the library's directory.
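    As a quick check after the build finishes, you can list the generated shared libraries under the output directory; the path below assumes the default $HOME/build location and the unchanged build.${target_info}/${project_name} layout:

    find "$HOME/build" -name "*.so*" | head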

Scaling Down the Frequency of the DPU

Due to the power limit of the card, not all CNN models can run at the highest frequency on every Alveo card. In some cases, a frequency scaling-down operation is necessary.

The DPU core clock is generated from an internal DCM module driven by the platform Clock_1, which defaults to 100 MHz, and the core clock is always linearly proportional to Clock_1. For example, in the U50LV-10E275M overlay, the 275 MHz core clock is driven by the 100 MHz clock source. So, to set the core clock of this overlay to 220 MHz, set the frequency of Clock_1 to (220/275)*100 = 80 MHz.
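The same calculation can be scripted for any target frequency. The values below match the U50LV-10E275M example above (275 MHz nominal core clock, 100 MHz Clock_1 default); adjust them for other overlays:

# Desired DPU core clock in MHz
TARGET_CORE=220
# Overlay's nominal core clock and the platform Clock_1 default, in MHz
DEFAULT_CORE=275
DEFAULT_CLK1=100
# Clock_1 to request = (target / nominal) * default Clock_1
echo "scale=1; $TARGET_CORE / $DEFAULT_CORE * $DEFAULT_CLK1" | bc   # prints 80.0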

You can use the XRT xbutil tool to scale down the running frequency of the DPU overlay before you run the VART/Library examples. Before the frequency scaling-down operation, the overlay must first be programmed into the FPGA. Refer to the following example commands to program the FPGA and scale down the frequency. These commands set Clock_1 to 80 MHz and can be run on the host or inside the docker container.

/opt/xilinx/xrt/bin/xbutil reset -d 0
/opt/xilinx/xrt/bin/xbutil program -p /usr/lib/dpu.xclbin
/opt/xilinx/xrt/bin/xbutil clock -d0 -g 80

Here, 0 in -d 0 is the Alveo card device number. For more information about the xbutil tool, see the XRT documentation.
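To confirm that the new clock setting took effect, the legacy xbutil query command reports the current clock frequencies of the selected device (newer XRT releases expose the same information through xbutil examine):

/opt/xilinx/xrt/bin/xbutil query -d 0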