The Vitis AI runtime packages, Vitis AI Library samples, and models are compiled into the pre-built Vitis AI board images, so you can run the examples directly. If you have a new program, compile it on the host side and copy the executable program to the target.
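As a minimal sketch of that host-side flow, assuming the PetaLinux cross-compilation SDK is installed on the host (the SDK path, the environment-setup script name, and my_app are placeholders):
[Host]$ source ~/petalinux_sdk/environment-setup-cortexa72-cortexa53-xilinx-linux
[Host]$ $CXX -O2 -o my_app my_app.cpp
[Host]$ scp my_app root@IP_OF_BOARD:~/
The environment-setup script exports $CXX with the cross-compiler and sysroot flags, so the binary is built for the board's Arm architecture rather than for the host.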
- Copy vitis_ai_library_r3.0.0_images.tar.gz and vitis_ai_library_r3.0.0_video.tar.gz from the host to the target using the scp command as shown below:
[Host]$ scp vitis_ai_library_r3.0.0_images.tar.gz root@IP_OF_BOARD:~/
[Host]$ scp vitis_ai_library_r3.0.0_video.tar.gz root@IP_OF_BOARD:~/
- Untar the image and video packages on the target.
cd ~
tar -xzvf vitis_ai_library_r3.0*_images.tar.gz -C Vitis-AI/examples/vai_library
tar -xzvf vitis_ai_library_r3.0*_video.tar.gz -C Vitis-AI/examples/vai_library
- Enter the extracted directory of the example on the target board and then compile the example. Take facedetect as an example.
cd ~/Vitis-AI/examples/vai_library/samples/facedetect
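Each sample directory in the Vitis AI Library release ships with a build script; assuming that layout, compile the sample in place:
./build.sh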
- Run the example.
./test_jpeg_facedetect densebox_320_320 sample_facedetect.jpg
Note: Batch mode is supported. If the DPU batch number is greater than 1, you can also run the following command:
./test_jpeg_facedetect densebox_320_320 <img1_url> [<img2_url> ...]
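For example, on a DPU configured with a batch size of 2, a call with two hypothetical image names might look like this:
./test_jpeg_facedetect densebox_320_320 face_01.jpg face_02.jpg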
- View the running results.
There are two ways to view the results. One is to read the information printed to the console. The other is to download the output image, sample_facedetect_result.jpg, and view it on the host.
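Assuming you ran the test from the facedetect sample directory, one way to pull the result image back to the host for viewing is scp:
[Host]$ scp root@IP_OF_BOARD:~/Vitis-AI/examples/vai_library/samples/facedetect/sample_facedetect_result.jpg .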
- To run the video example, run the following command:
./test_video_facedetect densebox_320_320 video_input.webm -t 8
where video_input.webm is the name of the input video file and -t is the number of threads. You must prepare the video file yourself.
Note:
- Pre-built Vitis AI board images only support video file input in the webm or raw format. If you want to use a video file in a format that is not natively supported, you have to install the relevant packages, such as the ffmpeg package, on the target (see the conversion sketch after this note).
- When a display is used as a sink for the post-processed video, the performance is limited to the maximum frame rate supported by the display interface on the target. This may not reflect the maximum performance, a fact that is particularly important when you have enabled multi-threading in order to benchmark maximum frame rates. However, you can test the maximum inference performance of the Vitis AI Libraries by issuing the following command:
env DISPLAY=:0.0 DEBUG_DEMO=1 ./test_video_facedetect \
densebox_320_320 'multifilesrc location=~/video_input.webm \
! decodebin ! videoconvert ! appsink sync=false' -t 2
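As referenced in the note above, an unsupported clip can be converted to webm on the host before copying it to the board. This is a sketch; video_input.mp4 is a hypothetical input name, and the codec flags assume an ffmpeg build that includes the libvpx encoder:
[Host]$ ffmpeg -i video_input.mp4 -c:v libvpx -b:v 1M -an video_input.webm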
- To test the program with a USB camera as input, run the following command:
./test_video_facedetect densebox_320_320 0 -t 8
Here, 0 is the device node of the first USB camera. If you have multiple USB cameras, the value is 1, 2, 3, and so on, and -t is the number of threads.
Important: When logging in to the board using an SSH terminal, enable X11 forwarding with the following command (suppose in this example that the host machine IP address is 192.168.0.10), because the test_video examples require a Linux windowing system to work properly.
export DISPLAY=192.168.0.10:0.0
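If you are unsure which device node a camera maps to, you can list the video devices on the target first. The v4l2-ctl utility comes from the v4l-utils package and may not be pre-installed on the board image:
ls /dev/video*
v4l2-ctl --list-devices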
- To test the performance of the model, run the following command:
./test_performance_facedetect densebox_320_320 test_performance_facedetect.list -t 8 -s 60
Here, -t is the number of threads and -s is the number of seconds. To view a complete listing of command line options for the executable, run the command with the -h switch. A sketch of the image-list file format follows this list.
- To run the demo, refer to Application Demos.
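The .list file passed to the performance test is a plain text file containing input image paths, one per line. The shipped samples include a ready-made test_performance_facedetect.list; a minimal hypothetical equivalent looks like this (the image names are placeholders):
sample_facedetect.jpg
images/face_01.jpg
images/face_02.jpg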