The RNN quantizer supports multiple RNN network types, including vanilla RNN, GRU, LSTM, and bi-directional LSTM. It performs fixed-point int16 quantization of both model parameters and activations. The compiler then generates instructions based on the quantized model and the target hardware architecture.
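The fixed-point int16 scheme above can be illustrated with a minimal sketch. This is not the actual quantizer implementation; it assumes a symmetric power-of-two scale (a common choice for fixed-point hardware), and the function names are hypothetical:

```python
import numpy as np

def quantize_int16(x: np.ndarray, frac_bits: int) -> np.ndarray:
    """Illustrative symmetric fixed-point int16 quantization.

    Values are scaled by 2**frac_bits, rounded, and saturated to the
    int16 range instead of wrapping on overflow.
    """
    scale = 2 ** frac_bits
    q = np.round(x * scale)
    q = np.clip(q, -32768, 32767)
    return q.astype(np.int16)

def dequantize_int16(q: np.ndarray, frac_bits: int) -> np.ndarray:
    """Recover an approximate float value from the fixed-point code."""
    return q.astype(np.float32) / (2 ** frac_bits)

# Example: 12 fractional bits gives a resolution of 1/4096.
weights = np.array([0.5, -1.25, 0.001], dtype=np.float32)
q = quantize_int16(weights, frac_bits=12)
approx = dequantize_int16(q, frac_bits=12)
```

Parameters and activations may use different fractional bit widths in practice, chosen to balance range against precision for each tensor.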
Note: Currently, the RNN compiler supports only standard LSTM, GRU, and OpenIE models, and the generated instructions can be deployed only on Alveo™ U25 and U50LV cards.