TensorFlow Workflow - 1.4.1 English

Vitis AI User Guide (UG1414)
Document ID: UG1414
Release Date: 2021-12-13
Version: 1.4.1 English

To generate the quantized inference model and reference result, follow these steps:

  1. Generate the quantized inference model by running the following command.
    The quantized model, quantize_eval_model.pb, is generated in the quantize_model folder.
    vai_q_tensorflow quantize                                    \
        --input_frozen_graph ./float/resnet_v1_50_inference.pb   \
        --input_fn input_fn.calib_input                          \
        --output_dir quantize_model                              \
        --input_nodes input                                      \
        --output_nodes resnet_v1_50/predictions/Reshape_1        \
        --input_shapes ?,224,224,3                               \
        --calib_iter 100
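
    The --input_fn argument names a Python function, here calib_input in a module input_fn.py, that vai_q_tensorflow calls once per calibration iteration; it must return a dict mapping input node names to numpy arrays. The following is a minimal sketch of such a module; the image list file, batch size, and preprocessing are illustrative and not part of this guide.

    # input_fn.py -- illustrative calibration input function for vai_q_tensorflow.
    import cv2
    import numpy as np

    CALIB_BATCH_SIZE = 10
    CALIB_IMAGE_LIST = "calib_list.txt"  # hypothetical file: one image path per line

    def calib_input(iter):
        # Called once per --calib_iter iteration; dict keys must match --input_nodes.
        with open(CALIB_IMAGE_LIST) as f:
            lines = f.readlines()
        images = []
        for i in range(CALIB_BATCH_SIZE):
            path = lines[iter * CALIB_BATCH_SIZE + i].strip()
            img = cv2.imread(path)
            img = cv2.resize(img, (224, 224)).astype(np.float32)
            images.append(img)  # a real model would also apply its own preprocessing here
        return {"input": np.array(images)}
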
  2. Generate the reference result by running the following command.
    vai_q_tensorflow dump                                          \
        --input_frozen_graph quantize_model/quantize_eval_model.pb \
        --input_fn input_fn.dump_input                             \
        --output_dir dump_gpu

    The following figure shows part of the reference data.
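
    Like calib_input, the dump_input function returns a dict keyed by the input node name. For dumping, a single fixed image is typical so that the GPU reference and the later DPU run consume identical data. A minimal sketch, with an illustrative file name:

    # input_fn.py (continued) -- illustrative dump input function.
    def dump_input(iter):
        img = cv2.imread("dump_image.jpg")  # hypothetical image file
        img = cv2.resize(img, (224, 224)).astype(np.float32)
        return {"input": img[np.newaxis, ...]}  # shape (1, 224, 224, 3)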

  3. Generate the DPU xmodel by running the following command.
    The compiled model is written to the compile_model folder and named after --net_name, so this command produces compile_model/resnet_v1_50_tf.xmodel, which step 4 uses.
    vai_c_tensorflow --frozen_pb quantize_model/quantize_eval_model.pb \
      --arch /opt/vitis_ai/compiler/arch/DPUCAHX8H/U50/arch.json       \
      --output_dir compile_model                                       \
      --net_name resnet_v1_50_tf
  4. Generate the DPU inference result and automatically compare it with the reference data by running the following command.
    env XLNX_ENABLE_DUMP=1 XLNX_ENABLE_DEBUG_MODE=1 XLNX_GOLDEN_DIR=./dump_gpu/dump_results_0 \
       xdputil run ./compile_model/resnet_v1_50_tf.xmodel            \
       ./dump_gpu/dump_results_0/input_aquant.bin                    \
       2>result.log 1>&2

    For more information about xdputil usage, run the xdputil --help command.

    After the command completes, the DPU inference result and the comparison log, result.log, are generated. The DPU inference results are located in the dump folder.

  5. Crosscheck the reference result and the DPU inference result.
    1. View comparison results for all layers.
      grep --color=always 'XLNX_GOLDEN_DIR.*layer_name' result.log
    2. View only the failed layers.
      grep --color=always 'XLNX_GOLDEN_DIR.*fail ! layer_name' result.log
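      To count the failing layers, grep's standard -c option applies:
      grep -c 'XLNX_GOLDEN_DIR.*fail ! layer_name' result.log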

    If the crosscheck fails, use the following methods to determine at which layer the crosscheck starts to fail.

    1. Check the inputs of the DPU and the GPU to make sure they use the same input data, as sketched below.
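      For example, the two raw input dumps can be compared directly. A minimal sketch, assuming the dump files are raw int8 tensor bytes (the DPU-side path is a placeholder to be filled in from your dump folder):

      # compare_inputs.py -- illustrative crosscheck of two raw dump files.
      import numpy as np

      def load_int8(path):
          # Dump files are read as raw int8 bytes here; adjust dtype if your dump differs.
          return np.fromfile(path, dtype=np.int8)

      gpu = load_int8("dump_gpu/dump_results_0/input_aquant.bin")
      dpu = load_int8("dump/<subgraph_dir>/input_aquant.bin")  # placeholder path under the DPU dump folder
      print("identical:", gpu.size == dpu.size and np.array_equal(gpu, dpu))
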
    2. Use the xdputil tool to generate a picture displaying the network's structure.
      Usage: xdputil xmodel <xmodel> -s <svg>
      Note: In the Vitis AI docker environment, execute the following command to install the required library.
      sudo apt-get install graphviz
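
      For example, to render the model compiled in step 3 (the SVG file name is illustrative):
      xdputil xmodel ./compile_model/resnet_v1_50_tf.xmodel -s resnet_v1_50_tf.svg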

      When you open the generated picture, you can see many small boxes around the ops. Each box represents a layer on the DPU. Use the last op's name in a box to find the corresponding layer in the GPU dump results. The following figure shows part of the structure.

    3. Submit the files to Xilinx.

      If a certain layer proves to be wrong on the DPU, package the quantized model, such as quantize_eval_model.pb, together with a detailed description, and send it to Xilinx for further analysis.
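
      For example, a package can be assembled with a standard tar command (the archive and description file names are illustrative):
      tar -czvf quantized_model_package.tar.gz quantize_model/quantize_eval_model.pb description.txt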