For Edge

Vitis AI Library User Guide (UG1354)
Document ID: UG1354
Release Date: 2021-07-22
Version: 1.4 English

The Vitis AI Runtime packages, Vitis AI Library samples, and models are built into the board image, so you can run the examples directly. If you have a new program, compile it on the host side and copy the executable program to the target.
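For example, a minimal sketch of building a new program on the host and copying it to the target. The SDK install path, the environment-setup script name, and my_program.cpp are placeholders that depend on your cross-compilation SDK installation:
    # On the host: source the cross-compilation SDK environment so that $CXX
    # points at the aarch64 cross-compiler (install path and script name vary
    # with the SDK version; this path is a placeholder).
    source ~/petalinux_sdk/environment-setup-cortexa72-cortexa53-xilinx-linux
    # Build your own sources with the SDK compiler; add the Vitis AI Library
    # components your program uses to the link line.
    $CXX -std=c++17 -o my_program my_program.cpp
    # Copy the resulting executable to the target board.
    scp my_program root@IP_OF_BOARD:~/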

  1. Copy vitis_ai_library_r1.4.x_images.tar.gz and vitis_ai_library_r1.4.x_video.tar.gz from the host to the target using scp with the following commands:
    [Host]$scp vitis_ai_library_r1.4.x_images.tar.gz root@IP_OF_BOARD:~/
    [Host]$scp vitis_ai_library_r1.4.x_video.tar.gz root@IP_OF_BOARD:~/
  2. Untar the image and video packages on the target.
    cd ~
    tar -xzvf vitis_ai_library_r1.4*_images.tar.gz -C Vitis-AI/demo/Vitis-AI-Library
    tar -xzvf vitis_ai_library_r1.4*_video.tar.gz -C Vitis-AI/demo/Vitis-AI-Library
  3. Enter the directory of the example on the target board and then compile it. Take facedetect as an example.
    cd ~/Vitis-AI/demo/Vitis-AI-Library/samples/facedetect
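    A minimal sketch of the compile step, assuming the sample directory provides the usual build.sh script (as the Vitis AI Library samples typically do):
    # Build all the executables in this sample directory on the target board.
    bash -x build.sh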
  4. Run the example.
    ./test_jpeg_facedetect densebox_320_320 sample_facedetect.jpg
  5. View the running results.

    There are two ways to view the results. One is the information printed to the console. The other is to download the result image, sample_facedetect_result.jpg, from the board and view it.
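    For example, a minimal sketch of pulling the result image back to the host with scp for viewing (the path assumes the example was run from the facedetect sample directory as in the previous steps):
    [Host]$scp root@IP_OF_BOARD:~/Vitis-AI/demo/Vitis-AI-Library/samples/facedetect/sample_facedetect_result.jpg .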



  6. To run the video example, run the following command:
    ./test_video_facedetect densebox_320_320 video_input.webm -t 8

    where video_input.webm is the name of the input video file and -t is the <num_of_threads>. You must prepare the video file yourself.

    Note:
    • The official system image only supports video file input in webm or raw format. If you want to use a video file in another format as the input, you must install the relevant packages, such as the ffmpeg package, on the system (see the conversion sketch after this note).
    • Due to the limitations of video playback and display in the base platform system, the video can only be displayed at the frame rate of the display standard, which does not reflect the real processing performance. However, you can check the actual video processing performance, especially with multithreading, with the following command:
      env DISPLAY=:0.0 DEBUG_DEMO=1 ./test_video_facedetect \
      densebox_320_320 'multifilesrc location=~/video_input.webm \
      ! decodebin  !  videoconvert ! appsink sync=false' -t 2
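    For example, a minimal sketch of converting an existing video to webm on the host before copying it to the board, assuming ffmpeg with the libvpx-vp9 encoder is installed on the host (video_input.mp4 is a placeholder for your own source file):
      # Transcode the source video to VP9/webm so the board can decode it
      # without extra packages, then copy it to the target.
      [Host]$ffmpeg -i video_input.mp4 -c:v libvpx-vp9 video_input.webm
      [Host]$scp video_input.webm root@IP_OF_BOARD:~/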
  7. To test the program with a USB camera as input, run the following command:
    ./test_video_facedetect densebox_320_320 0 -t 8

    0: The first USB camera device node. If you have multiple USB cameras, the value might be 1, 2, 3, and so on. -t: <num_of_threads>

    Important: All the video examples require a Linux window system to work properly. When logging in to the board using an SSH terminal, enable X11 forwarding with the following command (suppose in this example that the host machine IP address is 192.168.0.10):
    export DISPLAY=192.168.0.10:0.0
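    To check which camera device nodes are present before choosing the index, you can list them on the board (this assumes standard V4L2 device naming, where index 0 corresponds to /dev/video0):
    ls /dev/video*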
  8. To test the performance of model, run the following command:
    ./test_performance_facedetect densebox_320_320 test_performance_facedetect.list -t 8 -s 60 

    -t: <num_of_threads>

    -s: <num_of_seconds>

    For more parameter information, run the command with the -h option.
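    The .list file is expected to contain the input image paths, one per line. A minimal sketch of building your own list and running the performance test against it (my_facedetect.list and the image paths are placeholders):
    # Collect image paths into a list file, one per line, then pass it to the test.
    ls ~/Vitis-AI/demo/Vitis-AI-Library/samples/facedetect/*.jpg > my_facedetect.list
    ./test_performance_facedetect densebox_320_320 my_facedetect.list -t 8 -s 60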

  9. To run the demo, refer to Application Demos.