The following is a list of suggestions for achieving better pruning results: a higher pruning rate with a smaller accuracy loss.
- Use as much data as possible for model analysis. Ideally, use the entire validation dataset, although this can be time consuming. A subset of the validation set is also acceptable, but make sure at least half of the dataset is used.
- During the finetuning stage, experiment with a few hyperparameters, such as the initial learning rate and the learning rate decay policy. Use the best result as the input to the next round of pruning.
- The data used for finetuning should be the same as the data used to train the baseline model.
- If the accuracy does not recover sufficiently after several finetuning experiments, reduce the pruning rate and then re-run pruning and finetuning.
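The workflow above can be sketched as a loop: prune at a given rate, finetune with several learning rates, and back off the pruning rate whenever the recovered accuracy falls too far below the baseline. This is only an illustrative sketch; `prune` and `finetune` are hypothetical stand-ins for your framework's actual pruning and training APIs, and the accuracy model inside `finetune` is simulated.

```python
def prune(model, rate):
    """Stand-in: return a pruned copy of the model at the given rate."""
    return {"weights": model["weights"], "pruned": rate}

def finetune(model, lr):
    """Stand-in: finetune and return the validation accuracy.

    Simulated behavior: a higher pruning rate costs accuracy,
    and one of the candidate learning rates recovers a bit more.
    """
    return 0.95 - 0.3 * model["pruned"] + 0.02 * (lr == 0.01)

def prune_and_finetune(model, baseline_acc, rate=0.5,
                       lrs=(0.1, 0.01, 0.001), max_drop=0.02, step=0.1):
    """Lower the pruning rate until the finetuned accuracy is within
    max_drop of the baseline, trying several initial learning rates
    in each round and keeping the best result."""
    while rate > 0:
        pruned = prune(model, rate)
        best_acc = max(finetune(pruned, lr) for lr in lrs)
        if baseline_acc - best_acc <= max_drop:
            return rate, best_acc
        rate -= step  # accuracy gap too large: retry with gentler pruning
    return 0.0, baseline_acc

model = {"weights": "baseline", "pruned": 0.0}
rate, acc = prune_and_finetune(model, baseline_acc=0.95)
```

In practice, the best model from each finetuning round (not the simulated accuracy here) becomes the input to the next round of pruning, as described above.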