Versal adaptive SoCs (originally marketed as adaptive compute acceleration platforms, or ACAPs) combine Scalar Engines, Adaptable Engines, and Intelligent Engines with leading-edge memory and interfacing technologies to deliver powerful heterogeneous acceleration for any application. The Intelligent Engines comprise the AI Engines, SIMD VLIW processors for adaptive inference and advanced signal-processing compute, and the DSP Engines, for fixed-point, floating-point, and complex MAC operations.
The AI Engines are arranged as an array of tiles connected together through AXI-Stream interconnect blocks:
AI Engine array
As seen in the image above, each AI Engine can access four memory modules, one in each of the four cardinal directions. Both the AI Engines and the memory modules are connected to the AXI-Stream interconnect.
The AI Engine is a 7-way VLIW processor that contains:
An Instruction Fetch and Decode Unit
A Scalar Unit
A Vector Unit (SIMD)
Three Address Generator Units
A Memory and Stream Interface
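To build intuition for what the vector unit does, here is a minimal plain-C++ sketch of an 8-lane SIMD multiply-accumulate, the core operation the vector unit performs across all lanes in a single cycle. This is a conceptual scalar model only: the lane count, data types, and the function name `simd_mac` are illustrative assumptions, not AI Engine intrinsics.

```cpp
#include <array>
#include <cstdint>

// Conceptual model of one SIMD multiply-accumulate operation:
// every lane computes acc[lane] += a[lane] * b[lane] "at once".
// LANES, the 16-bit inputs, and the 32-bit accumulators are
// illustrative choices, not the AI Engine's actual vector widths.
constexpr int LANES = 8;

std::array<int32_t, LANES> simd_mac(std::array<int32_t, LANES> acc,
                                    const std::array<int16_t, LANES>& a,
                                    const std::array<int16_t, LANES>& b) {
    for (int lane = 0; lane < LANES; ++lane) {
        // Widen the 16x16-bit product to 32 bits before accumulating.
        acc[lane] += static_cast<int32_t>(a[lane]) * b[lane];
    }
    return acc;
}
```

In hardware all lanes update in parallel; the loop here only serializes what the vector unit does in one instruction slot of the VLIW bundle.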
AI Engine Module
Have a look at the fixed-point unit pipeline, as well as the floating-point unit pipeline, within the vector unit.
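The key practical difference between the two pipelines can be sketched in plain C++: fixed-point MACs accumulate into a register wider than the product so that long accumulation chains do not overflow, while the floating-point pipeline accumulates in single precision. The widths below (int64_t standing in for a wide accumulator) are an assumption for illustration, not the engine's exact register sizes.

```cpp
#include <cstdint>

// Fixed-point pipeline sketch: a 16x16-bit multiply produces a 32-bit
// product, which is accumulated into a wider register (modeled here
// with int64_t) so repeated MACs do not overflow.
int64_t fixed_mac(int64_t acc, int16_t a, int16_t b) {
    return acc + static_cast<int32_t>(a) * b;
}

// Floating-point pipeline sketch: a single-precision multiply-add.
// Overflow is traded for rounding error instead of wraparound.
float float_mac(float acc, float a, float b) {
    return acc + a * b;
}
```

The consequence for kernel writers is that fixed-point results usually need an explicit shift/saturate step when narrowing the accumulator back to the output type, whereas floating-point results are stored directly.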