AI Engine Architecture Details - 2023.2 English

Vitis Tutorials: AI Engine (XD100)

Document ID: XD100
Release Date: 2024-03-05
Version: 2023.2 English

Versal adaptive SoCs combine Scalar Engines, Adaptable Engines, and Intelligent Engines with leading-edge memory and interfacing technologies to deliver powerful heterogeneous acceleration for any application. The Intelligent Engines are SIMD VLIW AI Engines for adaptive inference and advanced signal processing compute, and DSP Engines for fixed-point, floating-point, and complex MAC operations.


The Intelligent Engines are organized as an array of AI Engines connected through AXI-Stream interconnect blocks:

[Figure: AI Engine array]

As shown in the figure above, each AI Engine is connected to four memory modules, one in each of the four cardinal directions. Both the AI Engine and the memory modules are also connected to the AXI-Stream interconnect.
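To make the array structure concrete, here is a minimal sketch of an ADF graph, assuming a hypothetical two-kernel design; the kernel functions, source files, and data files below are placeholders, not taken from this tutorial. PLIO ports cross into the array over the AXI-Stream interconnect, while the buffer between the two kernels can be placed in a memory module shared by neighboring tiles.

```cpp
// Hypothetical two-kernel graph sketch; names and files are placeholders.
#include <adf.h>

using namespace adf;

// Placeholder buffer-based kernel declarations.
void producer_kernel(input_buffer<int32>&, output_buffer<int32>&);
void consumer_kernel(input_buffer<int32>&, output_buffer<int32>&);

class SimpleGraph : public graph {
public:
    input_plio  in;
    output_plio out;
    kernel k0, k1;

    SimpleGraph() {
        // PLIO ports enter and leave the array through the AXI-Stream interconnect.
        in  = input_plio::create("DataIn",  plio_32_bits, "data/input.txt");
        out = output_plio::create("DataOut", plio_32_bits, "data/output.txt");

        k0 = kernel::create(producer_kernel);
        k1 = kernel::create(consumer_kernel);
        source(k0) = "producer_kernel.cc";
        source(k1) = "consumer_kernel.cc";
        runtime<ratio>(k0) = 0.8;
        runtime<ratio>(k1) = 0.8;

        // The k0 -> k1 buffer can be mapped to a memory module adjacent to
        // both tiles, so that transfer never has to cross the stream interconnect.
        connect(in.out[0],  k0.in[0]);
        connect(k0.out[0],  k1.in[0]);
        connect(k1.out[0],  out.in[0]);
    }
};
```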

The AI Engine is a VLIW (7-way) processor that contains:

  • Instruction Fetch and Decode Unit

  • A Scalar Unit

  • A Vector Unit (SIMD)

  • Three Address Generator Units

  • Memory and Stream Interface

[Figure: AI Engine Module]
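As an illustration of how the vector unit and the post-incrementing address generators are exercised from C++, below is a minimal sketch of a buffer-based kernel written with the AIE API; the function name, buffer sizes, and loop count are assumptions for the example, not taken from this tutorial.

```cpp
// Minimal fixed-point kernel sketch: assumes 256 int16 samples per buffer
// (16 iterations of 16 lanes); names and sizes are placeholders.
#include <adf.h>
#include <aie_api/aie.hpp>

void vector_mul(adf::input_buffer<int16>& a,
                adf::input_buffer<int16>& b,
                adf::output_buffer<int16>& out)
{
    // Vector iterators: each dereference moves a full vector, and the
    // post-increment addressing maps onto the address generator units.
    auto pa = aie::begin_vector<16>(a);
    auto pb = aie::begin_vector<16>(b);
    auto po = aie::begin_vector<16>(out);

    for (unsigned i = 0; i < 16; ++i) {
        aie::vector<int16, 16> va = *pa++;             // 16-lane vector load
        aie::vector<int16, 16> vb = *pb++;
        aie::accum<acc48, 16> acc = aie::mul(va, vb);  // SIMD multiply on the vector unit
        *po++ = acc.to_vector<int16>(0);               // shift-round-saturate back to int16
    }
}
```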

Take a look at the fixed-point unit pipeline, as well as the floating-point unit pipeline, within the vector unit.
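For comparison with the fixed-point sketch above, a floating-point kernel targets the floating-point vector pipeline instead. The sketch below is likewise hypothetical (names and buffer sizes are assumptions); it accumulates an element-wise product using multiply-accumulate.

```cpp
// Minimal floating-point kernel sketch: assumes 64 float samples per input
// buffer and an 8-float output buffer; names and sizes are placeholders.
#include <adf.h>
#include <aie_api/aie.hpp>

void float_mac(adf::input_buffer<float>& a,
               adf::input_buffer<float>& b,
               adf::output_buffer<float>& out)
{
    auto pa = aie::begin_vector<8>(a);
    auto pb = aie::begin_vector<8>(b);
    auto po = aie::begin_vector<8>(out);

    // First product initializes the accumulator.
    aie::vector<float, 8> va = *pa++;
    aie::vector<float, 8> vb = *pb++;
    aie::accum<accfloat, 8> acc = aie::mul(va, vb);   // floating-point vector pipeline

    // Remaining iterations use multiply-accumulate.
    for (unsigned i = 1; i < 8; ++i) {
        va = *pa++;
        vb = *pb++;
        acc = aie::mac(acc, va, vb);
    }

    *po = acc.to_vector<float>();   // one 8-lane partial-sum vector
}
```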