Supported Operators and DPU Limitations

Vitis AI User Guide (UG1414)
Document ID: UG1414
Release Date: 2023-09-28
Version: 3.5 English

AMD is continuously improving the DPU IP and the compiler to support more operators with better performance. The following table lists typical operations and the configurations, such as kernel size and stride, that the DPU supports. If an operation's configuration exceeds these limits, the compiler assigns the operator to the CPU. Which operators the DPU supports also depends on the DPU type, ISA version, and configuration.

You can configure the DPUs to suit your requirements: choose engines, adjust intrinsic parameters, and create your own DPU IP with the DPU reference design projects. However, the limitations can vary between configurations. Refer to the following product guides for configuration information, or compile the model with your DPU configuration; the compiler reports which operators are assigned to the CPU (see the sketch after the list). The table shows a specific configuration of each DPU architecture.

  • DPUCZDX8G for Zynq UltraScale+ MPSoCs Product Guide (PG338)
  • DPUCAHX8H for Convolutional Neural Networks Product Guide (PG367)
  • DPUCVDX8G for Versal Adaptive SoCs Product Guide (PG389)
  • DPUCVDX8H for Convolutional Neural Networks v1.0 LogiCORE IP Product Guide (PG403)
  • DPUCV2DX8G for Versal Adaptive SoCs Product Guide (PG425)
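As a minimal sketch of the compile-and-check workflow, the following Python snippet invokes the vai_c_xir compiler through subprocess. The file paths, model name, and arch.json location are placeholders; the arch.json must describe your actual DPU configuration.

```python
import subprocess

# Compile a quantized XIR model for a specific DPU target. The compiler log
# lists the generated subgraphs and any operators assigned to the CPU.
subprocess.run(
    [
        "vai_c_xir",
        "-x", "quantized_model.xmodel",  # quantizer output (placeholder path)
        "-a", "arch.json",               # configuration file for your DPU target
        "-o", "compiled_model",          # output directory
        "-n", "my_network",              # name for the compiled model (placeholder)
    ],
    check=True,
)
```

Operators that the compiler cannot map to the DPU under the given configuration appear in the log as CPU subgraphs, which is a quick way to verify partitioning before deployment.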

The following operators are natively defined in the different deep learning frameworks. The compiler automatically parses these operators, transforms them into the XIR format, and dispatches them to the DPU or CPU. Operators that are only partially supported by the tools are also listed. You can use Inspecting the Float Model to check the operators in your models.
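For example, here is a minimal sketch of inspecting a float PyTorch model, assuming the Vitis AI PyTorch environment (pytorch_nndct) is installed. The target name and the resnet18 model are illustrative only; substitute the name, fingerprint, or arch.json of the DPU you deploy to.

```python
import torch
from torchvision.models import resnet18
from pytorch_nndct.apis import Inspector

# Example target name; replace with the DPU target you deploy to.
inspector = Inspector("DPUCZDX8G_ISA1_B4096")

model = resnet18().eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Writes a per-operator report to output_dir showing whether each operator
# maps to the DPU or falls back to the CPU, and the reason for each
# CPU assignment.
inspector.inspect(
    model,
    (dummy_input,),
    device=torch.device("cpu"),
    output_dir="inspect_results",
    image_format="png",
)
```

Running the inspector before quantization lets you identify unsupported operators early and adjust the model architecture, rather than discovering CPU fallbacks at compile time.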