AI Quantizer

Vitis AI User Guide (UG1414)
Document ID: UG1414
Release Date: 2020-07-21
Version: 1.2 English

By converting 32-bit floating-point weights and activations to a fixed-point representation such as INT8, the AI Quantizer reduces computing complexity without losing prediction accuracy. The fixed-point network model requires less memory bandwidth, providing higher speed and better power efficiency than the floating-point model.

Figure 1. AI Quantizer
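
To illustrate the general idea behind mapping FP32 values to INT8, the following is a minimal NumPy sketch, not the AI Quantizer's actual algorithm. It assumes a simple symmetric, per-tensor scale derived from the maximum absolute value; the function names and scale heuristic are hypothetical and chosen only for illustration.

```python
# Illustrative sketch of symmetric INT8 quantization (not the AI Quantizer
# implementation): a single per-tensor scale maps FP32 values into [-128, 127].
import numpy as np

def quantize_int8(x: np.ndarray):
    """Quantize FP32 values to INT8 using a max-abs per-tensor scale (assumed heuristic)."""
    scale = np.abs(x).max() / 127.0                      # largest magnitude maps to 127
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32 approximation of the original values."""
    return q.astype(np.float32) * scale

weights = np.random.randn(64, 64).astype(np.float32)    # example FP32 weights
q_weights, scale = quantize_int8(weights)
recovered = dequantize(q_weights, scale)
print("max quantization error:", np.abs(weights - recovered).max())
```

The INT8 tensor occupies a quarter of the memory of the FP32 original, which is the source of the bandwidth and efficiency gains described above; the dequantized values differ from the originals only by a small rounding error bounded by half the scale.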