For Edge

Vitis AI Library User Guide (UG1354)

Document ID: UG1354
Release Date: 2022-01-20
Version: 2.0 English
The Vitis AI runtime packages, Vitis AI Library samples, and models are built into the board image, so you can run the examples directly. If you have a new program, compile it on the host side and copy the executable to the target.
  1. Copy vitis_ai_library_r2.0.x_images.tar.gz and vitis_ai_library_r2.0.x_video.tar.gz from host to the target using the scp command as shown below:
    [Host]$scp vitis_ai_library_r2.0.x_images.tar.gz root@IP_OF_BOARD:~/
    [Host]$scp vitis_ai_library_r2.0.x_video.tar.gz root@IP_OF_BOARD:~/
  2. Untar the image and video packages on the target.
    cd ~
    tar -xzvf vitis_ai_library_r2.0*_images.tar.gz -C Vitis-AI/demo/Vitis-AI-Library
    tar -xzvf vitis_ai_library_r2.0*_video.tar.gz -C Vitis-AI/demo/Vitis-AI-Library
  3. On the target board, enter the directory of the extracted example and compile it. Take facedetect as an example.
    cd ~/Vitis-AI/demo/Vitis-AI-Library/samples/facedetect
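    Each sample directory in the Vitis AI repository provides a build.sh script; assuming facedetect follows this convention, run it to compile the example:
    bash -x build.sh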
  4. Run the example.
    ./test_jpeg_facedetect densebox_320_320 sample_facedetect.jpg
    Note: Batch mode is supported. If the DPU batch number is greater than 1, you can also run the following command:
    ./test_jpeg_facedetect densebox_320_320 <img1_url> [<img2_url> ...]
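
    For example, assuming a DPU batch number of 2, two images can be passed in a single run (the shipped sample image is simply reused here for illustration):
    ./test_jpeg_facedetect densebox_320_320 sample_facedetect.jpg sample_facedetect.jpg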
  5. View the running results.

    There are two ways to view the results: one is to read the detection information printed to the console, and the other is to view the output image, sample_facedetect_result.jpg, written by the program.
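
    For example, assuming the result image is written to the sample directory on the target, you can download it to the host with scp, mirroring the command from step 1:
    [Host]$scp root@IP_OF_BOARD:~/Vitis-AI/demo/Vitis-AI-Library/samples/facedetect/sample_facedetect_result.jpg .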
  6. To run the video example, run the following command:
    ./test_video_facedetect densebox_320_320 video_input.webm -t 8

    where video_input.webm is the name of the input video file and -t is the number of threads. You must prepare the video file yourself.

    Note:
    • The official system image supports video file input only in the webm or raw format. To use a video file in another format, first install the relevant packages, such as ffmpeg, on the system (see the conversion sketch after this note).
    • Due to the limitations of video playback and display in the base platform system, the video is displayed only at the frame rate of the display standard, which does not reflect the real processing performance. However, you can check the actual video processing performance, especially with multithreading, using the following command:
      env DISPLAY=:0.0 DEBUG_DEMO=1 ./test_video_facedetect \
      densebox_320_320 'multifilesrc location=~/video_input.webm \
      ! decodebin ! videoconvert ! appsink sync=false' -t 2
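
    As a sketch of the format conversion mentioned above, assuming ffmpeg with libvpx support is installed on the host, a video in another format can be converted to webm before copying it to the board (input.mp4 and the encoding settings are illustrative):
    [Host]$ffmpeg -i input.mp4 -c:v libvpx -b:v 1M -an video_input.webm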
  7. To test the program with a USB camera as input, run the following command:
    ./test_video_facedetect densebox_320_320 0 -t 8

    Here, 0 is the device node of the first USB camera. If you have multiple USB cameras, use 1, 2, 3, and so on. -t is the number of threads.
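
    To check which camera device nodes are present, you can simply list them on the target (a quick sanity check; V4L2 tooling, if installed, gives more detail):
    ls /dev/video*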

    Important: Because the video examples require a Linux window system to display properly, enable X11 forwarding when logging in to the board from an SSH terminal. Assuming the host machine IP address is 192.168.0.10, run the following command on the board:
    export DISPLAY=192.168.0.10:0.0
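
    This points the board's DISPLAY at the host's X server over the network; depending on the host's X server configuration, you may also have to allow the connection on the host side, for example with xhost (IP_OF_BOARD is a placeholder for the board's address, and the X server must accept TCP connections):
    [Host]$xhost +IP_OF_BOARD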
  8. To test the performance of the model, run the following command:
    ./test_performance_facedetect densebox_320_320 test_performance_facedetect.list -t 8 -s 60 

    Here, -t is the number of threads and -s is the duration of the test in seconds.

    To view more parameter information, run the command with the -h option.
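
    The list file enumerates the test images. As a minimal sketch, assuming the file holds one image path per line (the file name my_facedetect.list and the reuse of the sample image are illustrative), you can build and run your own list:
    printf 'sample_facedetect.jpg\nsample_facedetect.jpg\n' > my_facedetect.list
    ./test_performance_facedetect densebox_320_320 my_facedetect.list -t 8 -s 60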

  9. To run the demo, refer to Application Demos.