Legacy DNNDK Examples - 1.2 English

Vitis AI User Guide (UG1414)

Document ID
UG1414
Release Date
2020-07-21
Version
1.2 English

To maintain backward compatibility, Vitis AI still supports using DNNDK for deep learning application development on the edge DPUCZDX8G. The legacy DNNDK C++/Python examples for ZCU102 and ZCU104 are available at https://github.com/Xilinx/Vitis-AI/tree/master/mpsoc/vitis_ai_dnndk_samples. You can follow the guidelines in https://github.com/Xilinx/Vitis-AI/blob/master/mpsoc/README.md to set up the environment and evaluate these samples.

After you clone the Vitis AI repository (https://github.com/Xilinx/Vitis-AI.git) and start the Docker container, the DNNDK samples can be found in the directory /workspace/mpsoc/vitis_ai_dnndk_samples/. These examples can be built with the Arm GCC cross-compilation toolchain.

Follow these steps to set up the host cross compilation environment for DNNDK examples:

  1. Download sdk-2020.1.0.0.sh from https://www.xilinx.com/bin/public/openDownload?filename=sdk-2020.1.0.0.sh.
  2. Run the command below to install the Arm GCC cross-compilation toolchain environment.
    ./sdk-2020.1.0.0.sh
  3. Run the following command to set up the environment.
    source /opt/petalinux/2020.1/environment-setup-aarch64-xilinx-linux

Download the DNNDK runtime package vitis-ai_v1.2_dnndk.tar.gz and copy it to the ZCU102 or ZCU104 board, then follow the steps below to set up the environment on the board:

tar -xzvf vitis-ai_v1.2_dnndk.tar.gz
cd vitis-ai_v1.2_dnndk
./install.sh
Note: The DNNDK runtime loads the DPU overlay bin from the default directory /usr/lib/. Make sure that dpu.xclbin exists under /usr/lib/ before running the DNNDK examples. For the downloaded ZCU102 or ZCU104 system images, dpu.xclbin is copied to /usr/lib/ by default. For a customized image, you must copy dpu.xclbin there manually.
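The check described in the note can also be scripted before launching any example. The following is a minimal sketch; the helper name and the parameterized directory are illustrative (on the board the directory is /usr/lib):

```python
import os
import sys

def check_dpu_xclbin(directory="/usr/lib"):
    """Return True if the DPU overlay bin is where the DNNDK runtime expects it."""
    path = os.path.join(directory, "dpu.xclbin")
    if os.path.isfile(path):
        print("Found DPU overlay: %s" % path)
        return True
    print("Missing %s -- copy dpu.xclbin there before running DNNDK examples" % path,
          file=sys.stderr)
    return False
```

Running this once after installing the runtime package avoids a confusing failure later when an example cannot find its overlay.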
The following table briefly describes all the available DNNDK examples.
Table 1. DNNDK Examples
Example Name Models Framework Notes
resnet50 ResNet50 Caffe Image classification with Vitis AI advanced C++ APIs.
resnet50_mt ResNet50 Caffe Multi-threading image classification with Vitis AI advanced C++ APIs.
tf_resnet50 ResNet50 TensorFlow Image classification with Vitis AI advanced Python APIs.
mini_resnet_py Mini-ResNet TensorFlow Image classification with Vitis AI advanced Python APIs.
inception_v1 Inception-v1 Caffe Image classification with Vitis AI advanced C++ APIs.
inception_v1_mt Inception-v1 Caffe Multi-threading image classification with Vitis AI advanced C++ APIs.
inception_v1_mt_py Inception-v1 Caffe Multi-threading image classification with Vitis AI advanced Python APIs.
mobilenet MobileNet Caffe Image classification with Vitis AI advanced C++ APIs.
mobilenet_mt MobileNet Caffe Multi-threading image classification with Vitis AI advanced C++ APIs.
face_detection DenseBox Caffe Face detection with Vitis AI advanced C++ APIs.
pose_detection SSD, Pose detection Caffe Pose detection with Vitis AI advanced C++ APIs.
video_analysis SSD Caffe Traffic detection with Vitis AI advanced C++ APIs.
adas_detection YOLO-v3 Caffe ADAS detection with Vitis AI advanced C++ APIs.
segmentation FPN Caffe Semantic segmentation with Vitis AI advanced C++ APIs.
split_io SSD TensorFlow DPU split IO memory model programming with Vitis AI advanced C++ APIs.
debugging Inception-v1 TensorFlow DPU debugging with Vitis AI advanced C++ APIs.
tf_yolov3_voc_py YOLO-v3 TensorFlow Object detection with Vitis AI advanced Python APIs.

You must follow the descriptions in the following table to prepare several images before running the samples on the evaluation boards.

Table 2. Image Preparation for DNNDK Samples
Image Directory Note
vitis_ai_dnndk_samples/dataset/image500_640_480/ Download several images from the ImageNet dataset and scale them to the same resolution, 640*480.
vitis_ai_dnndk_samples/image_224_224/ Download one image from the ImageNet dataset and scale it to resolution 224*224.
vitis_ai_dnndk_samples/image_32_32/ Download several images from the CIFAR-10 dataset https://www.cs.toronto.edu/~kriz/cifar.html.
vitis_ai_dnndk_samples/resnet50_mt/image/ Download one image from the ImageNet dataset.
vitis_ai_dnndk_samples/mobilenet_mt/image/ Download one image from the ImageNet dataset.
vitis_ai_dnndk_samples/inception_v1_mt/image/ Download one image from the ImageNet dataset.
vitis_ai_dnndk_samples/debugging/decent_golden/dataset/images/ Download one image from the ImageNet dataset and save it as cropped_224x224.jpg.
vitis_ai_dnndk_samples/tf_yolov3_voc_py/image/ Download one image from the VOC dataset http://host.robots.ox.ac.uk/pascal/VOC/ and save it as input.jpg.

This section illustrates how to run the DNNDK examples, again using the ZCU102 board as the reference. The samples are in the directory /workspace/mpsoc/vitis_ai_dnndk_samples. After all the samples are built with the Arm GCC cross-compilation toolchain by running the script ./build.sh zcu102 in each sample's folder, it is recommended to copy the whole directory /workspace/mpsoc/vitis_ai_dnndk_samples to the ZCU102 board directory /home/root/. Alternatively, you can copy a single DPU hybrid executable from the Docker container to the evaluation board. In that case, the dependent image folder dataset or video folder video must be copied along with it, and the folder structure must be preserved.

Note: For the ZCU104 board, run ./build.sh zcu104 for each DNNDK sample.
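The per-sample build step described above can be automated. The following is a sketch under stated assumptions: the samples root and board target are parameters, and invoking each sample's build.sh through sh mirrors how the scripts are run in the text; the helper name is illustrative:

```python
import os
import subprocess

def build_all(samples_root, target="zcu102"):
    """Run ./build.sh <target> in every sample folder that contains one."""
    built = []
    for name in sorted(os.listdir(samples_root)):
        script = os.path.join(samples_root, name, "build.sh")
        if os.path.isfile(script):
            # Each sample's build.sh expects the board name as its argument.
            subprocess.run(["sh", script, target],
                           cwd=os.path.dirname(script), check=True)
            built.append(name)
    return built
```

After building, the whole tree can be copied to the board as recommended above.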

For the sake of simplicity, the directory /home/root/vitis_ai_dnndk_samples/ is referred to as $dnndk_sample_base in the following descriptions.

ResNet-50

$dnndk_sample_base/resnet50 contains an example of image classification using the Caffe ResNet-50 model. It reads the images under the $dnndk_sample_base/dataset/image500_640_480 directory and outputs the classification result for each input image. Launch it with the ./resnet50 command.

Video Analytics

An object detection example is located under the $dnndk_sample_base/video_analysis directory. It reads image frames from a video file and annotates detected vehicles and pedestrians in real time. Launch it with the ./video_analysis video/structure.mp4 command (where video/structure.mp4 is the input video file).

ADAS Detection

An example of object detection for Advanced Driver Assistance Systems (ADAS) applications using the YOLO-v3 network model is located under the directory $dnndk_sample_base/adas_detection. It reads image frames from a video file and annotates detected objects in real time. Launch it with the ./adas_detection video/adas.avi command (where video/adas.avi is the input video file).

Semantic Segmentation

An example of semantic segmentation is located in the $dnndk_sample_base/segmentation directory. It reads image frames from a video file and annotates the segmented regions in real time. Launch it with the ./segmentation video/traffic.mp4 command (where video/traffic.mp4 is the input video file).

Inception-v1 with Python

$dnndk_sample_base/inception_v1_mt_py contains a multithreaded image classification example of the Inception-v1 network developed with the advanced Python APIs. With the command python3 inception_v1_mt.py 4, it runs with four threads. The throughput (in fps) is reported after it completes.
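The multi-threaded throughput measurement pattern the sample follows can be sketched as below. The DPU inference call is replaced by a placeholder (do_inference), because the real n2cube API only exists on the board; the harness itself is plain Python:

```python
import threading
import time

def do_inference(n_frames):
    # Placeholder for the per-thread DPU task loop
    # (the real sample runs DPU inference here).
    for _ in range(n_frames):
        pass

def run_threads(thread_num, frames_per_thread=100):
    """Run the workload on thread_num threads and report aggregate fps."""
    threads = [threading.Thread(target=do_inference, args=(frames_per_thread,))
               for _ in range(thread_num)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    total = thread_num * frames_per_thread
    fps = total / elapsed if elapsed > 0 else float("inf")
    print("%d threads, %d frames, %.2f fps" % (thread_num, total, fps))
    return fps

if __name__ == "__main__":
    run_threads(4)  # mirrors: python3 inception_v1_mt.py 4
```

Total frames divided by wall-clock time across all threads is the fps figure the sample reports.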

The Inception-v1 model is first compiled into a DPU ELF file and then transformed into the DPU shared library libdpumodelinception_v1.so with the following command on the evaluation board. dpu_inception_v1_*.elf matches all the DPU ELF files generated by the VAI_C compiler. Refer to the DPU Shared Library section for more details.

aarch64-xilinx-linux-gcc -fPIC -shared \
 dpu_inception_v1_*.elf -o libdpumodelinception_v1.so

Within the Vitis AI cross compilation environment on the host, use the following command instead.

source /opt/petalinux/2020.1/environment-setup-aarch64-xilinx-linux

$CC -fPIC -shared dpu_inception_v1_*.elf -o libdpumodelinception_v1.so
Note: The thread number yielding the best throughput for the multithreaded Inception-v1 example varies across evaluation boards because DPU computation power and core count differ. Use dexplorer -w to view the DPU signature information for each evaluation board.

miniResNet with Python

$dnndk_sample_base/mini_resnet_py contains the image classification example of the TensorFlow miniResNet network developed with Vitis AI advanced Python APIs. With the command python3 mini_resnet.py, the top-5 labels and corresponding probabilities are displayed. miniResNet is described in the second book, Practitioner Bundle, of the Deep Learning for Computer Vision with Python series. It is a customization of the original ResNet-50 model and is also well explained in the third book, ImageNet Bundle, of the same series.
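The top-5 report is the standard softmax-and-sort step over the network's raw output. The following is a minimal sketch of that step; the scores and label names are illustrative placeholders, not taken from the miniResNet label file:

```python
import math

def top5(logits, labels):
    """Softmax the raw scores and return the five highest (label, prob) pairs."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(zip(labels, probs), key=lambda p: p[1], reverse=True)
    return ranked[:5]

# Illustrative scores and labels only:
scores = [2.0, 0.5, 1.2, 3.1, 0.1, 1.8]
names = ["cat", "dog", "ship", "truck", "frog", "bird"]
for name, prob in top5(scores, names):
    print("%-6s %.4f" % (name, prob))
```

The probabilities sum to 1, so the printed top-5 values can be read directly as confidence scores.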

YOLO-v3 with Python

$dnndk_sample_base/tf_yolov3_voc_py contains the object detection example of the TensorFlow YOLO-v3 network developed with Vitis AI advanced Python APIs. With the command python3 tf_yolov3_voc.py, the resulting image after object detection is displayed.
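YOLO-style post-processing ends with non-maximum suppression (NMS) to discard overlapping boxes. The following is a generic sketch of that step, not the sample's exact implementation; boxes are (x1, y1, x2, y2, score) tuples and the 0.45 threshold is illustrative:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2, score) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    if inter <= 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, thresh=0.45):
    """Keep the highest-scoring boxes, dropping any that overlap a kept box."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    keep = []
    for box in boxes:
        if all(iou(box, k) < thresh for k in keep):
            keep.append(box)
    return keep
```

Greedy NMS like this is what turns a network's dense per-cell predictions into the small set of annotated boxes shown on the output image.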