For Edge - 3.5 English

Vitis AI Library User Guide (UG1354)

Document ID: UG1354
Release Date: 2023-06-29
Version: 3.5 English
The Vitis AI runtime packages, Vitis AI Library samples, and models are built into the pre-built Vitis AI board images, so you can run the examples directly. If you have a new program, compile it on the host side and copy the executable to the target.
  1. Copy vitis_ai_library_r3.5.0_images.tar.gz and vitis_ai_library_r3.5.0_video.tar.gz from the host to the target using the scp command, as shown below:
    [Host]$ scp vitis_ai_library_r3.5.0_images.tar.gz root@IP_OF_BOARD:~/
    [Host]$ scp vitis_ai_library_r3.5.0_video.tar.gz root@IP_OF_BOARD:~/
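    If you do not know the board's IP address, you can query it from the board's serial console before copying; a minimal check (the interface name may differ on your image) is:
    ip addr show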
  2. Untar the image and video packages on the target.
    cd ~
    tar -xzvf vitis_ai_library_r3.5*_images.tar.gz -C Vitis-AI/examples/vai_library
    tar -xzvf vitis_ai_library_r3.5*_video.tar.gz -C Vitis-AI/examples/vai_library
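    You can optionally confirm that the archives extracted where the samples expect them; for example (the exact layout depends on the release):
    ls ~/Vitis-AI/examples/vai_library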
  3. On the target board, enter the directory of the example you want to run and compile it, as shown below. Classification is used as the example here.
    cd ~/Vitis-AI/examples/vai_library/samples/classification
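    Each sample directory ships with a build script; assuming the standard sample layout, compile with:
    ./build.sh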
  4. Run the example.
    ./test_jpeg_classification resnet50_pt sample_classification.jpg
    Note: The example supports batch mode. If the DPU batch size is greater than 1, you can also run the following command:
    ./test_jpeg_classification resnet50_pt <img1_url> [<img2_url> ...]
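    For example, on a DPU configured with a batch size of 3, you might pass three images in one run (the file names here are placeholders):
    ./test_jpeg_classification resnet50_pt img_0.jpg img_1.jpg img_2.jpg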
  5. View the running results.

    There are two ways to view the results: inspect the information printed to the console, or download the annotated output image 0_sample_classification_result.jpg and view it on the host.
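    For example, assuming you ran the example from the classification sample directory, you can copy the result image back to the host with:
    [Host]$ scp root@IP_OF_BOARD:~/Vitis-AI/examples/vai_library/samples/classification/0_sample_classification_result.jpg .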

  6. To run the video example, run the following command:
    ./test_video_classification resnet50_pt video_input.webm -t 8

    where video_input.webm is the name of the input video file and -t is the number of threads. You must provide the video file yourself.

    Note:
    • Pre-built Vitis AI board images only support video file input in the webm or raw format. If you want to use a video file in a format that is not natively supported, install the relevant packages, such as ffmpeg, on the target, or convert the file on the host as shown after this note.
    • When a display is used as a sink for the post-processed video, performance is limited to the maximum frame rate supported by the display interface on the target. This might not reflect the maximum inference performance, which is particularly important when you have enabled multi-threading to benchmark maximum frame rates. You can instead test the maximum inference performance of the Vitis AI Library by issuing the following command:
      env DISPLAY=:0.0 DEBUG_DEMO=1 ./test_video_classification \
      resnet50_pt 'multifilesrc location=~/video_input.webm \
      ! decodebin  !  videoconvert ! appsink sync=false' -t 2
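    To convert a video on the host instead of installing extra packages on the target, a minimal sketch using ffmpeg (VP8 video, audio dropped; the input file name is a placeholder) is:
    [Host]$ ffmpeg -i video_input.mp4 -c:v libvpx -b:v 1M -an video_input.webm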
  7. To test the program with a USB camera as input, run the following command:
    ./test_video_classification resnet50_pt 0 -t 4

    Here, 0 is the device node index of the first USB camera (/dev/video0). If you have multiple USB cameras, the value is 1, 2, 3, and so on, and -t is again the number of threads. To check which device nodes are present, see the example after this step.

    Important: The test_video examples require an X window system to display their output. When logging in to the board over an SSH terminal, enable X11 forwarding with the following command (assuming the host machine IP address is 192.168.0.10):
    export DISPLAY=192.168.0.10:0.0
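    If you are not sure which device nodes your cameras are mapped to, list the video devices on the target; for example:
    ls /dev/video*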
  8. To test the performance of the model, run the following command:
    ./test_performance_classification resnet50_pt test_performance_classification.list -t 8 -s 60 

    Here, -t is the number of threads and -s is the duration of the test in seconds.

    To view a complete listing of command-line options for an executable, run it with the -h switch.
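    The performance test reads image paths from the list file. Assuming the usual one-path-per-line format, a minimal list might look like this (the file names are placeholders):
    images/img_0.jpg
    images/img_1.jpg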

  9. To run the demo, refer to Application Demos.