To maintain forward compatibility, Vitis AI still supports the use of DNNDK for developing deep learning applications on the edge DPUCZDX8G. The legacy DNNDK C++/Python examples for the ZCU102 and ZCU104 are available at https://github.com/Xilinx/Vitis-AI/tree/master/demo/DNNDK. Follow the guidelines at https://github.com/Xilinx/Vitis-AI/tree/master/demo/DNNDK/README.md to set up the environment and evaluate these samples.
Example Name | Models | Framework | Notes |
---|---|---|---|
resnet50 | ResNet50 | Caffe | Image classification with Vitis AI advanced C++ APIs. |
resnet50_mt | ResNet50 | Caffe | Multi-threading image classification with Vitis AI advanced C++ APIs. |
tf_resnet50 | ResNet50 | TensorFlow | Image classification with Vitis AI advanced Python APIs. |
mini_resnet_py | Mini-ResNet | TensorFlow | Image classification with Vitis AI advanced Python APIs. |
inception_v1 | Inception-v1 | Caffe | Image classification with Vitis AI advanced C++ APIs. |
inception_v1_mt | Inception-v1 | Caffe | Multi-threading image classification with Vitis AI advanced C++ APIs. |
inception_v1_mt_py | Inception-v1 | Caffe | Multi-threading image classification with Vitis AI advanced Python APIs. |
mobilenet | MobileNet | Caffe | Image classification with Vitis AI advanced C++ APIs. |
mobilenet_mt | MobileNet | Caffe | Multi-threading image classification with Vitis AI advanced C++ APIs. |
face_detection | DenseBox | Caffe | Face detection with Vitis AI advanced C++ APIs. |
pose_detection | SSD, Pose detection | Caffe | Pose detection with Vitis AI advanced C++ APIs. |
video_analysis | SSD | Caffe | Traffic detection with Vitis AI advanced C++ APIs. |
adas_detection | YOLO-v3 | Caffe | ADAS detection with Vitis AI advanced C++ APIs. |
segmentation | FPN | Caffe | Semantic segmentation with Vitis AI advanced C++ APIs. |
split_io | SSD | TensorFlow | DPU split I/O memory model programming with Vitis AI advanced C++ APIs. |
debugging | Inception-v1 | TensorFlow | DPU debugging with Vitis AI advanced C++ APIs. |
tf_yolov3_voc_py | YOLO-v3 | TensorFlow | Object detection with Vitis AI advanced Python APIs. |
Before running the samples on the evaluation boards, prepare the images as described in the following table.
Image Directory | Note |
---|---|
vitis_ai_dnndk_samples/dataset/image500_640_480/ | Download several images from the ImageNet dataset and scale them to the same 640x480 resolution. |
vitis_ai_dnndk_samples/image_224_224/ | Download one image from the ImageNet dataset and scale it to 224x224 resolution. |
vitis_ai_dnndk_samples/image_32_32/ | Download several images from the CIFAR-10 dataset at https://www.cs.toronto.edu/~kriz/cifar.html. |
vitis_ai_dnndk_samples/resnet50_mt/image/ | Download one image from the ImageNet dataset. |
vitis_ai_dnndk_samples/mobilenet_mt/image/ | Download one image from the ImageNet dataset. |
vitis_ai_dnndk_samples/inception_v1_mt/image/ | Download one image from the ImageNet dataset. |
vitis_ai_dnndk_samples/debugging/decent_golden/dataset/images/ | Download one image from the ImageNet dataset and save it as cropped_224x224.jpg. |
vitis_ai_dnndk_samples/tf_yolov3_voc_py/image/ | Download one image from the VOC dataset at http://host.robots.ox.ac.uk/pascal/VOC/ and save it as input.jpg. |
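As a convenience, the directory layout from the table above can be pre-created with a short shell sketch before the images are copied in. This is only a sketch of the expected structure: downloading ImageNet/CIFAR-10/VOC images remains a manual step, and the ImageMagick `convert` call in the comment is just one possible way to scale them.

```shell
# Create the image directories the samples expect (run from the directory
# that holds vitis_ai_dnndk_samples, e.g. /home/root on the board).
base=vitis_ai_dnndk_samples
mkdir -p \
  "$base/dataset/image500_640_480" \
  "$base/image_224_224" \
  "$base/image_32_32" \
  "$base/resnet50_mt/image" \
  "$base/mobilenet_mt/image" \
  "$base/inception_v1_mt/image" \
  "$base/debugging/decent_golden/dataset/images" \
  "$base/tf_yolov3_voc_py/image"
# One way to scale a downloaded image (requires ImageMagick):
# convert input.jpg -resize 640x480! "$base/dataset/image500_640_480/input.jpg"
```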
The following section illustrates how to run the DNNDK examples using the ZCU102 board as the reference. Suppose the samples are located in the /workspace/mpsoc/vitis_ai_dnndk_samples directory. After all the samples are built with the Arm GCC cross-compilation toolchain by running the ./build.sh zcu102 script in each sample's folder, it is recommended to copy the whole /workspace/mpsoc/vitis_ai_dnndk_samples directory to the ZCU102 board directory /home/root/. Alternatively, you can copy a single DPU hybrid executable from the Docker container to the evaluation board; in that case, make sure the dependent image folder (dataset) or video folder (video) is copied along with it, and that the folder structure is kept as expected. For the ZCU104 board, run ./build.sh zcu104 for each DNNDK sample instead. For the sake of simplicity, the directory /home/root/vitis_ai_dnndk_samples/ is referred to as $dnndk_sample_base in the following descriptions.
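The build step above can be sketched as a small helper that invokes each sample's build.sh for the chosen board target. `build_all` is an illustrative name, not part of the samples, and the `<board-ip>` in the copy command is a placeholder for your board's address.

```shell
# Sketch: build every sample for a given board by running its build.sh.
# Run this from the samples root, e.g. /workspace/mpsoc/vitis_ai_dnndk_samples.
build_all() {
  board="$1"                       # zcu102 or zcu104
  for d in */; do
    if [ -x "$d/build.sh" ]; then
      ( cd "$d" && ./build.sh "$board" )
    fi
  done
}
# Afterwards, copy everything to the evaluation board, for example:
#   scp -r /workspace/mpsoc/vitis_ai_dnndk_samples root@<board-ip>:/home/root/
```

Running `build_all zcu102` rebuilds every sample in one pass; substitute zcu104 for the ZCU104 board.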
ResNet-50
$dnndk_sample_base/resnet50 contains an image classification example using the Caffe ResNet-50 model. It reads the images under the $dnndk_sample_base/dataset/image500_640_480 directory and outputs the classification result for each input image. Launch it with the ./resnet50 command.
Video Analytics
An object detection example is located under the $dnndk_sample_base/video_analysis directory. It reads image frames from a video file and annotates detected vehicles and pedestrians in real time. Launch it with the command ./video_analysis video/structure.mp4 (where video/structure.mp4 is the input video file).
ADAS Detection
An object detection example for Advanced Driver Assistance Systems (ADAS) applications using the YOLO-v3 network model is located in the $dnndk_sample_base/adas_detection directory. It reads image frames from a video file and annotates them in real time. Launch it with the ./adas_detection video/adas.avi command (where video/adas.avi is the input video file).
Semantic Segmentation
A semantic segmentation example is located in the $dnndk_sample_base/segmentation directory. It reads image frames from a video file and annotates them in real time. Launch it with the ./segmentation video/traffic.mp4 command (where video/traffic.mp4 is the input video file).
Inception-v1 with Python
$dnndk_sample_base/inception_v1_mt_py contains a multithreaded image classification example for the Inception-v1 network developed with the Vitis AI advanced Python APIs. Run it with the command python3 inception_v1_mt.py 4 to use four threads. The throughput (in fps) is reported after it completes.
The Inception-v1 model is first compiled to a DPU xmodel file and then transformed into the DPU shared library libdpumodelinception_v1.so with the following command on the evaluation board. The pattern dpu_inception_v1_*.xmodel matches all DPU xmodel files generated by the VAI_C compiler.
aarch64-xilinx-linux-gcc -fPIC -shared \
dpu_inception_v1_*.xmodel -o libdpumodelinception_v1.so
Within the Vitis AI cross compilation environment on the host, use the following command instead.
source /opt/petalinux/2020.2/environment-setup-aarch64-xilinx-linux
CC -fPIC -shared dpu_inception_v1_*.elf -o libdpumodelinception_v1.so
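Since the sample reports throughput for whichever thread count it is given, a natural follow-up is to repeat the run over several values and compare. The `sweep_threads` helper below is an illustrative sketch, not part of the samples; it assumes inception_v1_mt.py is in the current directory on the board.

```shell
# Illustrative sketch: run the multithreaded sample with 1, 2, 4, and 8
# threads so its reported throughput (fps) can be compared across runs.
sweep_threads() {
  for n in 1 2 4 8; do
    echo "--- $n thread(s) ---"
    python3 inception_v1_mt.py "$n"
  done
}
```

Run `sweep_threads` from $dnndk_sample_base/inception_v1_mt_py and compare the fps values reported for each thread count.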
miniResNet with Python
$dnndk_sample_base/mini_resnet_py contains an image classification example for the TensorFlow miniResNet network developed with the Vitis AI advanced Python APIs. Run it with the command python3 mini_resnet.py; the top-5 labels and their corresponding probabilities are displayed. miniResNet is described in the Practitioner Bundle, the second book of the Deep Learning for Computer Vision with Python series. It is a customization of the original ResNet-50 model and is also well explained in the ImageNet Bundle, the third book of the same series.
YOLO-v3 with Python
$dnndk_sample_base/tf_yolov3_voc_py contains an object detection example for the TensorFlow YOLO-v3 network developed with the Vitis AI advanced Python APIs. Run it with the command python3 tf_yolov3_voc.py; the resulting image after object detection is displayed.