AI Quantizer

Vitis AI User Guide (UG1414), Version 1.1 English, Release Date 2020-03-23

By converting the 32-bit floating-point weights and activations to a fixed-point format such as INT8, the AI Quantizer reduces computing complexity without losing prediction accuracy. The fixed-point network model requires less memory bandwidth, and therefore delivers higher speed and better power efficiency than the floating-point model.
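To illustrate the basic idea behind this conversion, the following is a minimal sketch of symmetric linear quantization from FP32 to INT8. It is not the Vitis AI Quantizer's actual algorithm, and the function names (quantize_int8, dequantize) are hypothetical; it only shows how a scale factor maps floating-point values onto the signed 8-bit range and back.

import numpy as np

def quantize_int8(x: np.ndarray):
    # Symmetric per-tensor quantization: choose a scale so the largest
    # magnitude in the tensor maps to the INT8 range [-127, 127].
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Map INT8 codes back to approximate FP32 values.
    return q.astype(np.float32) * scale

# Example: quantize a small weight tensor and inspect the rounding error.
weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(weights)
error = np.abs(weights - dequantize(q, scale)).max()
print(f"scale={scale:.6f}, max abs error={error:.6f}")

In practice, the AI Quantizer determines such scaling parameters for weights and activations from calibration data so that the quantized network preserves the original model's prediction accuracy.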

Figure 1. AI Quantizer