Apache TVM is an open-source deep learning compiler stack focused on building efficient implementations for a wide variety of hardware architectures. It can parse models from TensorFlow, TensorFlow Lite (TFLite), Keras, PyTorch, MXNet, ONNX, Darknet, and other frameworks; through the TVM integration, Vitis AI can run models from any of these frameworks. The TVM flow consists of two phases. The first is a model compilation/quantization phase, which produces the CPU/FPGA binary for your desired target CPU and DPU. In the second, the TVM Runtime is installed on your cloud or edge device, and the TVM APIs in Python or C++ are called to execute the model.
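The runtime phase can be sketched in Python roughly as follows. This is a minimal sketch, not a definitive recipe: the module filename `model.so`, the input name `"data"`, and the input shape are all illustrative placeholders, and it assumes a TVM Runtime installation plus a deployable module already produced by the compilation phase.

```python
import numpy as np
import tvm
from tvm.contrib import graph_executor

# Load the compiled module produced in the compilation phase.
# "model.so" is an assumed filename for the deployable artifact.
lib = tvm.runtime.load_module("model.so")

# Create a graph executor on the target device (CPU shown here).
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))

# "data" and the (1, 3, 224, 224) shape are placeholders for the
# model's actual input name and shape.
input_data = np.random.uniform(size=(1, 3, 224, 224)).astype("float32")
module.set_input("data", input_data)

# Run inference and fetch the first output tensor.
module.run()
output = module.get_output(0).numpy()
```

The same flow is available through the TVM C++ runtime API for deployments where Python is not present on the target device.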
To read more about Apache TVM, see https://tvm.apache.org.
Tutorials and installation guides for the Vitis AI and TVM integration are available in the Vitis AI GitHub repository: https://github.com/Xilinx/Vitis-AI/tree/master/tvm.