Follow these steps to run the docker image, quantize and compile the model, and run inference on the board.
- Run the docker image.
  ./docker_run_rnn.sh -g xilinx/vitis-ai-rnn:latest
- The docker image has two conda environments, vitis-ai-rnn-pytorch and
  vitis-ai-rnn-tensorflow, in which the RNN quantizers for PyTorch and
  TensorFlow are installed. Activate the appropriate conda environment.
  - For PyTorch:
    conda activate vitis-ai-rnn-pytorch
  - For TensorFlow:
    conda activate vitis-ai-rnn-tensorflow
- Copy the example to the docker environment and run the relevant steps below.
  - For PyTorch:
    - Copy example/lstm_quant_pytorch to the docker environment. The contents
      of the working directory are as follows:
        pretrained.pth: pretrained model for sentiment detection
        quantize_lstm.py: python script that runs quantization of the model
        run_quantizer.sh: test script that runs the python script
    - Run the test script.
        cd example/lstm_quant_pytorch
        sh run_quantizer.sh
      Two files and one subdirectory are generated in the output directory
      ./quantize_result:
        Lstm_StandardLstmCell_layer_0_forward.py: converted format model
        quant_info.json: quantization steps of tensors
        xmodel: subdirectory that contains the deployed model
  - For TensorFlow:
    - Copy example/lstm_quant_tensorflow to the docker environment. The
      contents of the working directory are as follows:
        pretrained.h5: pretrained model for sentiment detection
        quantize_lstm.py: python script that runs quantization of the model
        run_quantizer.sh: test script that runs the python script
    - Run the test script.
        cd example/lstm_quant_tensorflow
        sh run_quantizer.sh
      Two files and one subdirectory are generated in the output directory
      ./quantize_result:
        rnn_cell_0.py: converted format model
        quant_info.json: quantization steps of tensors
        xmodel: subdirectory that contains the deployed model
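After either flow, it can be useful to sanity-check the output directory before moving on to compilation. The snippet below is a small sketch: the check_quant_result helper and the mock /tmp layout are illustrative additions, not part of the toolchain; only the artifact names (the converted model script, quant_info.json, and the xmodel subdirectory) come from the steps above. For TensorFlow, pass rnn_cell_0.py as the model script name instead.

```shell
#!/bin/sh
# check_quant_result DIR MODEL_PY -- verify the quantizer artifacts exist.
check_quant_result() {
  dir="$1"; model_py="$2"
  for f in "$model_py" quant_info.json; do
    [ -f "$dir/$f" ] && echo "ok: $f" || echo "missing: $f"
  done
  [ -d "$dir/xmodel" ] && echo "ok: xmodel/" || echo "missing: xmodel/"
}

# Mock ./quantize_result layout, for illustration only.
mkdir -p /tmp/qr_demo/quantize_result/xmodel
touch /tmp/qr_demo/quantize_result/Lstm_StandardLstmCell_layer_0_forward.py
touch /tmp/qr_demo/quantize_result/quant_info.json

check_quant_result /tmp/qr_demo/quantize_result \
  Lstm_StandardLstmCell_layer_0_forward.py
```

If anything prints as "missing", re-run run_quantizer.sh before compiling.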
- For PyTorch:
- Compile the xmodel.
Compile the xmodel for DPURADR16L(U25) using batch size = 1. It generates the instructions in the output file (compiled_batch_1.xmodel).
vai_c_rnn -x xmodel/ -t DPURADR16L -b 1 -o output
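If you compile repeatedly, the invocation above can be wrapped in a small helper. This sketch only assembles and echoes the vai_c_rnn command line shown above (the compile_cmd function is an illustrative addition, and the compiler itself is only available inside the docker image, so nothing is executed here):

```shell
# Build the vai_c_rnn command line for a given batch size (echo only;
# vai_c_rnn itself lives inside the vitis-ai-rnn docker image).
compile_cmd() {
  batch="$1"
  echo "vai_c_rnn -x xmodel/ -t DPURADR16L -b $batch -o output"
}

compile_cmd 1
```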
- Activate the PyTorch runtime environment.
  conda activate rnn-pytorch-1.7.1
- Run the model on DPURADR16L(U25). For more information, see VART.
  Note: Xilinx provides three prebuilt examples: Customer Satisfaction, IMDB
  Sentiment Analysis, and OpenIE. The steps in VART run the prebuilt xmodels.
  To run the xmodel you compiled, replace data/compiled_batch_1.xmodel with
  your output xmodel file after executing the
  "TARGET_DEVICE=U25 source ./setup.sh" command.
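Concretely, swapping in your own model amounts to overwriting the prebuilt file with the compiler output. The snippet below sketches this; the /tmp/vart_demo sandbox is created purely for illustration, while the output/ and data/ paths follow the compile step (-o output) and the VART example layout described above:

```shell
#!/bin/sh
# Demo sandbox standing in for the real VART example tree.
mkdir -p /tmp/vart_demo/output /tmp/vart_demo/data
echo model > /tmp/vart_demo/output/compiled_batch_1.xmodel

# After "TARGET_DEVICE=U25 source ./setup.sh", replace the prebuilt xmodel
# with the one produced by vai_c_rnn.
cd /tmp/vart_demo
cp output/compiled_batch_1.xmodel data/compiled_batch_1.xmodel
ls data
```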