After the vai_q_tensorflow command completes successfully, two files are generated in ${output_dir}:
- quantize_eval_model.pb is used for evaluation on CPU/GPU and can be used to simulate the results on hardware. Because tensorflow.contrib is lazily loaded, run `import tensorflow.contrib.decent_q` explicitly to register the custom quantize operations.
- deploy_model.pb is used to compile code for the DPU and deploy to it. It serves as the input file for the Vitis AI compiler.
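A minimal evaluation sketch, assuming a TensorFlow 1.x environment (such as the Vitis AI docker); the model path and the `input`/`output` node names are placeholders for your graph's actual names:

```python
import tensorflow as tf
# Explicit import registers the custom quantize ops; tensorflow.contrib
# is lazily loaded, so this line is required before loading the graph.
import tensorflow.contrib.decent_q

# Load the quantized evaluation graph (path is a placeholder).
with tf.io.gfile.GFile("quantize_results/quantize_eval_model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.compat.v1.Session() as sess:
    tf.import_graph_def(graph_def, name="")
    out = sess.graph.get_tensor_by_name("output:0")
    # Feed preprocessed input batches and compare predictions to labels:
    # preds = sess.run(out, feed_dict={"input:0": batch})
```

This runs the quantized graph on CPU/GPU so accuracy can be checked against the float model before compiling for the DPU.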
| No. | Name | Description |
|---|---|---|
| 1 | deploy_model.pb | Quantized model for the VAI compiler (extended TensorFlow format), targeting DPUCZDX8G implementations. |
| 2 | quantize_eval_model.pb | Quantized model for evaluation (also the VAI compiler input for most other DPU architectures, such as DPUCAHX8H, DPUCAHX8L, and DPUCADF8H). |
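As a sketch, the generated model is passed to the Vitis AI TensorFlow compiler; the arch.json path, output directory, and network name below are placeholders and depend on your Vitis AI installation and target board:

```shell
# Compile the quantized model for a DPUCZDX8G target (paths are placeholders).
vai_c_tensorflow \
    --frozen_pb ${output_dir}/deploy_model.pb \
    --arch /opt/vitis_ai/compiler/arch/DPUCZDX8G/ZCU102/arch.json \
    --output_dir compile_output \
    --net_name my_network
```

For DPU architectures such as DPUCAHX8H, DPUCAHX8L, and DPUCADF8H, pass quantize_eval_model.pb to `--frozen_pb` instead, per the table above.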