For Edge

The Vitis AI Runtime packages, Vitis-AI-Library samples, and models are built into the board image, so you can run the examples directly. If you have a new program, compile it on the host side and copy the executable to the target, as sketched below.
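
A minimal cross-compilation sketch for that workflow is shown below. The SDK environment-setup script path, the source file name, and the output name are placeholders based on a typical aarch64 PetaLinux cross-compilation setup, not values from this guide; add whatever -l link flags your program needs.

  [Host]$ source <sdk_install_path>/environment-setup-aarch64-xilinx-linux   # sets $CXX and the target sysroot
  [Host]$ $CXX -std=c++17 -O2 my_app.cpp -o my_app                           # cross-compile for the board
  [Host]$ scp my_app root@IP_OF_BOARD:~/                                     # copy the executable to the target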

  1. Copy vitis_ai_library_r1.2.x_images.tar.gz and vitis_ai_library_r1.2.x_video.tar.gz from the host to the target using scp with the following commands.
    [Host]$scp vitis_ai_library_r1.2.x_images.tar.gz root@IP_OF_BOARD:~/
    [Host]$scp vitis_ai_library_r1.2.x_video.tar.gz root@IP_OF_BOARD:~/
  2. Untar the image and video packages on the target.
    #cd ~
    #tar -xzvf vitis_ai_library_r1.2*_images.tar.gz -C Vitis-AI/vitis-ai-library
    #tar -xzvf vitis_ai_library_r1.2*_video.tar.gz -C Vitis-AI/vitis-ai-library
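    Optionally, confirm the extraction by listing the samples directory on the target; it should contain a subdirectory for each sample, including facedetect, which is used in the next step.
    #ls ~/Vitis-AI/vitis-ai-library/samples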
  3. Enter the directory of the example on the target board and then compile the example. Take facedetect as an example.
    #cd ~/Vitis-AI/vitis-ai-library/samples/facedetect
  4. Run the example.
    #./test_jpeg_facedetect densebox_320_320 sample_facedetect.jpg
  5. View the running results.

    There are two ways to view the results: one is the information printed to the console, and the other is the sample_facedetect_result.jpg image, which you can download and view, as shown in the following figure.

    Figure 1. Face Detection Example
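
    For example, assuming the result image is written to the sample directory used in step 3, you can copy it back to the host for viewing with scp:
    [Host]$scp root@IP_OF_BOARD:~/Vitis-AI/vitis-ai-library/samples/facedetect/sample_facedetect_result.jpg .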

  6. To run the video example, run the following command:
    #./test_video_facedetect densebox_320_320 video_input.webm -t 8
    video_input.webm: The name of the input video file. You must provide the video file yourself.
    -t: <num_of_threads>
    Note:
    • The official system image only supports video file input in webm or raw format. If you want to use a video file in another format as the input, you must install the relevant packages, such as the ffmpeg package, on the system.
    • Due to the limitation of video playback and display in the base platform system, the video can only be displayed at the frame rate of the display standard, which does not reflect the real processing performance. You can still check the actual video processing performance, especially with multithreading, using the following command.
      env DISPLAY=:0.0 DEBUG_DEMO=1 ./test_video_facedetect \
      densebox_320_320 'multifilesrc location=~/video_input.webm \
      ! decodebin  !  videoconvert ! appsink sync=false' -t 2
  7. To test the program with a USB camera as input, run the following command:
    #./test_video_facedetect densebox_320_320 0 -t 8

    0: The device node of the first USB camera. If you have multiple USB cameras, the value might be 1, 2, 3, and so on; see the device-listing commands at the end of this step.

    -t: <num_of_threads>

    Important: Because all the video examples require a Linux window system to work properly, enable X11 forwarding with the following command when logging in to the board from an SSH terminal (this example assumes the host machine IP address is 192.168.0.10).
    #export DISPLAY=192.168.0.10:0.0
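    If you are not sure which device node corresponds to your camera, you can list the video devices on the target first. The v4l2-ctl tool comes from the v4l-utils package and might not be present in the board image; ls works in any case.
    #ls /dev/video*
    #v4l2-ctl --list-devices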
  8. To test the performance of the model, run the following command:
    #./test_performance_facedetect densebox_320_320 test_performance_facedetect.list -t 8 -s 60 

    -t: <num_of_threads>

    -s: <num_of_seconds>

    For more parameter information, run the command with the -h option. The following figure shows the result of the performance test with 8 threads. A sketch for building your own image list follows the figure.

    Figure 2. Face Detection Performance Test Result
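
    The sample list file appears to be a plain text list of input image paths, one per line. Assuming that format, you can build a list from your own images and rerun the test on it; the image directory and list name below are placeholders.
    #ls ~/my_test_images/*.jpg > my_facedetect.list
    #./test_performance_facedetect densebox_320_320 my_facedetect.list -t 8 -s 60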
  9. To run the demo, refer to Application Demos.

For Cloud