vai_q_caffe Quantize Finetuning - 1.1 English

Vitis AI User Guide (UG1414)

Document ID
UG1414
Release Date
2020-03-23
Version
1.1 English
Generally, there is a small accuracy loss after quantization, but for some networks, such as MobileNet, the loss can be large. In such cases, quantize finetuning can be used to further improve the accuracy of the quantized model.

Finetuning is almost the same as model training: it requires the original training dataset and a solver.prototxt file. Follow the steps below to start finetuning with the fix_train_test.prototxt file and the quantized caffemodel.

  1. Assign the training dataset to the input layer of fix_train_test.prototxt.
  2. Create a solver.prototxt file for finetuning. An example of a solver.prototxt file is provided below. You can adjust the hyperparameters to get good results. The most important parameter is base_lr, which is usually much smaller than the learning rate used in the original training.
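    A minimal solver.prototxt sketch is shown below. The net path, iteration counts, and learning-rate value are illustrative placeholders; adapt them to your dataset and training schedule.

    ```
    net: "fix_train_test.prototxt"
    test_iter: 100                     # batches per accuracy test
    test_interval: 1000                # test every 1000 iterations
    base_lr: 1e-5                      # much smaller than the original training learning rate
    lr_policy: "fixed"
    display: 100
    max_iter: 10000
    momentum: 0.9
    weight_decay: 0.0005
    snapshot: 1000
    snapshot_prefix: "finetune/finetuned"   # finetuned models are saved under this prefix
    solver_mode: GPU
    ```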

  3. Run the following command to start finetuning:
    ./vai_q_caffe finetune -solver solver.prototxt -weights quantize_results/quantize_train_test.caffemodel -gpu all
  4. Deploy the finetuned model. It is saved to the location specified by the snapshot_prefix setting in the solver.prototxt file, for example ${snapshot_prefix}/finetuned_iter10000.caffemodel. You can use the test command to evaluate its accuracy.
  5. Finally, you can use the deploy command to generate the deploy model (prototxt and caffemodel) for the Vitis AI compiler.
    ./vai_q_caffe deploy -model quantize_results/quantize_train_test.prototxt -weights finetuned_iter10000.caffemodel -gpu 0 -output_dir deploy_output
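As a sketch, the accuracy check in step 4 can be run with the test command before deployment; the -test_iter value below is illustrative and should match your validation set size divided by the batch size.

```shell
./vai_q_caffe test -model quantize_results/quantize_train_test.prototxt -weights ${snapshot_prefix}/finetuned_iter10000.caffemodel -gpu 0 -test_iter 50
```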