The Vitis AI RNN quantizer performs fixed-point int16 quantization of model parameters and activations. Quantization reduces computational complexity with minimal loss of accuracy. The quantized model requires less memory bandwidth and delivers faster inference and higher power efficiency than the floating-point model.
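To illustrate what int16 quantization means, here is a minimal sketch of symmetric fixed-point quantization in NumPy. The function names (`quantize_int16`, `dequantize`) and the scale-selection rule are hypothetical illustrations, not the Vitis AI quantizer API.

```python
import numpy as np

def quantize_int16(x: np.ndarray):
    # Symmetric quantization: choose a scale so the largest magnitude
    # maps near the int16 limit, then round each value to an integer.
    # (Hypothetical helper for illustration; not the Vitis AI API.)
    scale = np.max(np.abs(x)) / 32767.0
    q = np.clip(np.round(x / scale), -32768, 32767).astype(np.int16)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximation of the original floats.
    return q.astype(np.float32) * scale

weights = np.array([-0.5, 0.1, 0.25, 0.49], dtype=np.float32)
q, scale = quantize_int16(weights)
restored = dequantize(q, scale)
```

Storing `q` (int16) instead of `weights` (float32) halves the memory footprint, and the rounding error per element is bounded by the scale, which is why a well-calibrated int16 model loses little accuracy.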
Figure 1. Vitis AI RNN Quantizer