Vitis™ AI is a Xilinx® development kit for AI inference on Xilinx hardware platforms. Inference in machine learning is computation-intensive and requires high memory bandwidth to meet the low-latency and high-throughput requirements of various applications.
The Vitis AI optimizer provides the ability to optimize neural network models. Currently, the Vitis AI optimizer includes only one tool, called the pruner. The Vitis AI pruner prunes redundant kernels in neural networks, thereby reducing the overall computational cost of inference. The pruned models produced by the Vitis AI pruner are then quantized by the Vitis AI quantizer and deployed to a Xilinx FPGA, SoC, or ACAP device. For more information on the Vitis AI quantizer and deployment, see the Vitis AI User Guide (UG1414).
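To illustrate the idea behind kernel pruning (independent of the vai_p_* tools, whose APIs are not shown here), the following is a minimal sketch of magnitude-based filter pruning: convolution filters are ranked by the L1 norm of their weights, and the lowest-ranked fraction is removed. The function names and the kernel-height x kernel-width x in-channels x out-channels weight layout are assumptions for illustration, not the Vitis AI implementation.

```python
import numpy as np

def rank_filters_l1(weights):
    """Rank output filters by L1 norm, ascending (smaller norm = more prunable).

    Assumes a (kh, kw, cin, cout) weight layout, as illustration only.
    """
    norms = np.abs(weights).sum(axis=(0, 1, 2))
    return np.argsort(norms)

def prune_filters(weights, ratio):
    """Remove the `ratio` fraction of output filters with the smallest L1 norms."""
    order = rank_filters_l1(weights)
    n_keep = weights.shape[-1] - int(weights.shape[-1] * ratio)
    if n_keep == 0:
        return weights[..., np.array([], dtype=int)]
    # Keep the highest-norm filters, preserving their original order.
    keep = np.sort(order[-n_keep:])
    return weights[..., keep]
```

After pruning, the layer has fewer output channels, so the next layer's input channels must be shrunk to match; production pruners handle this cross-layer bookkeeping automatically, and the pruned model is then fine-tuned to recover accuracy.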
The Vitis AI pruner supports four deep learning frameworks. The frameworks and their corresponding tool names (_p_ denotes pruning) are listed in the following table:
| Framework | Tool Name |
| --- | --- |
| TensorFlow | vai_p_tensorflow (TF1.15), vai_p_tensorflow2 (TF2.x) |
The Vitis AI optimizer requires a commercial license. Contact firstname.lastname@example.org to obtain access to the Vitis AI optimizer installation package and license.