PID Controller Design with Model Composer for Versal ACAPs (XAPP1376)

Document ID: XAPP1376
Release Date:
Version: 1.0 English

The Versal AI Core DSP58s provide SPFP operations more efficiently than the earlier 16 nm devices, as shown in the following figure and table (see Versal AI Core Series Data Sheet: DC and AC Switching Characteristics (DS957)).

Figure 1. DSP58 Single Precision Floating Point Support
Table 1. Direct Instantiation of Floating Point Functions
Symbol     Description                 Performance as a Function of Speed Grade and Operating Voltage (VCCINT)    Units
                                       0.88V (H)         0.80V (M)         0.70V (L)
                                       -3       -2       -2       -1       -2       -1
Floating Point Arithmetic
FMAX_FP    Floating-point operations   805      805      750      700      532      476     MHz

Prior DSP48-based devices built single, double, or custom precision floating point operators (FPO) from the DSP48 integer unit combined with PL logic. The new DSP58 is backward compatible with the DSP48 but adds a hard macro: an SPFP (also known as FP32) multiplier and an adder. The FP32 multiplier and adder have the following features:

  • Support for cascading
  • A, B, C, and D inputs 32-bit SPFP
  • Adder and multiplier output 32 bits
  • Both outputs available simultaneously

The SPFP multiplier and adder are IEEE-754 and OpenCL™ compliant and have the following inherent characteristics:

  • Multiply-add, multiply-subtract, and multiply-accumulate
  • Multiplication
  • Addition
  • Subtraction
  • Round-to-nearest-even rounding
  • Overflow, underflow, and invalid operation flags
  • Use through direct instantiation or the Floating-Point Operator (FPO) IP core
  • An SPFP implementation four times more efficient than the 16 nm DSP48, with no additional PL support logic required
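To make the multiply-add behavior concrete, the following is a plain C++ sketch of the FP32 data path described above: one SPFP multiply and one SPFP add, with both outputs available simultaneously. This is an illustrative software model only, not the hardware macro; the struct and function names are assumptions for this sketch. Single-precision `float` arithmetic in C++ uses the same IEEE-754 round-to-nearest-even behavior noted above.

```cpp
// Illustrative software model of the DSP58 FP32 multiplier and adder
// (names are hypothetical; this is not the hardware macro itself).
struct Fp32Result {
    float mul;  // multiplier output: A * B
    float add;  // adder output: A * B + C (the multiply-add case)
};

inline Fp32Result fp32_mul_add(float a, float b, float c) {
    Fp32Result r;
    r.mul = a * b;      // SPFP multiplication, round-to-nearest-even
    r.add = r.mul + c;  // SPFP addition of the product and C
    return r;           // both outputs available simultaneously
}
```

Swapping the sign of `c` models the multiply-subtract case, and feeding the adder output back as `c` models multiply-accumulate.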

Further, the addition of AI Engines gives the design a software-programmable, deterministic, and dedicated SPFP processing data path, as demonstrated in the following figure. The vector processor has a dedicated floating point data path with the following capabilities:

  • Single precision
  • Eight multiply-accumulate operations per cycle
  • Per-lane sign change (FPSGN)

Figure 2. AI Engine Single Precision Floating Point Support Unit
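A scalar C++ sketch can illustrate what the AI Engine vector unit performs in a single cycle: eight single-precision multiply-accumulate lanes with an optional per-lane sign change (the FPSGN capability). This is a behavioral illustration under assumed names, not AI Engine intrinsic or API code.

```cpp
#include <array>

// Behavioral sketch of one AI Engine SPFP vector cycle: eight float
// multiply-accumulate lanes with per-lane sign change (FPSGN).
// All names here are illustrative, not AI Engine API identifiers.
using Vec8 = std::array<float, 8>;

inline Vec8 fp_mac8(Vec8 acc, const Vec8& x, const Vec8& y,
                    const std::array<bool, 8>& negate) {
    for (int lane = 0; lane < 8; ++lane) {
        float p = x[lane] * y[lane];
        acc[lane] += negate[lane] ? -p : p;  // per-lane sign change
    }
    return acc;
}
```

On the actual hardware the eight lanes execute in parallel in one cycle; the loop here only expresses the per-lane arithmetic.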

Whether combined or used independently, the DSP58 and AI Engine high-performance compute engines enhance any DSP-centric design. However, implementing, debugging, and validating a Versal ACAP design can be challenging given the multiple design entry approaches, such as RTL, C/C++ for the PL and AI Engine, and intrinsics or AI Engine APIs for the AI Engine. Vitis Model Composer (VMC) simplifies AI Core DSP development by assisting with the following tasks:

  • DSP test bench development through pre-built Simulink toolboxes or MATLAB® source code
  • DSP verification and validation using the many Simulink visualization and debug methodologies
  • Node-by-node in situ comparison of a golden reference model to the algorithm under development
  • Co-simulation and development of mixed-language designs using C++ for the PL or AI Engines, LogiCORE™ IP, RTL, and intrinsics or AI Engine APIs for the AI Engines
  • Functional debugging for reduced development cycles, plus cycle-approximate AI Engine simulations
  • Evaluation and export of a C++ design and test bench as a Vitis HLS project for resource and timing optimization
  • Automated creation of an adaptive data flow graph for AI Engine designs
  • AI Engine hardware validation targeting a VCK190

To demonstrate DSP development using VMC, this application note provides three example methodologies that implement the same SPFP PID algorithm. All models discussed in this application note use fast, bit-accurate C++ models for functional verification and debug of the Versal AI Core designs, and each provides a node-for-node comparison to a Simulink golden reference model.
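For reference, a minimal single-precision PID step in C++ is sketched below. This is a generic textbook-form PID written in `float`, offered only as a plausible shape for a golden reference model; the application note's actual Simulink model, gains, and state layout are not reproduced here, and all names in this sketch are assumptions.

```cpp
// Generic single-precision PID step (illustrative sketch only; not the
// application note's Simulink golden reference model).
struct PidState {
    float kp, ki, kd;   // proportional, integral, derivative gains
    float integral;     // accumulated error
    float prev_error;   // error from the previous sample
};

inline float pid_step(PidState& s, float setpoint, float measured, float dt) {
    float error = setpoint - measured;
    s.integral += error * dt;                          // integral term state
    float derivative = (error - s.prev_error) / dt;    // backward difference
    s.prev_error = error;
    return s.kp * error + s.ki * s.integral + s.kd * derivative;
}
```

Every operation in this loop body is an SPFP multiply, add, or subtract, which is what makes the algorithm a natural fit for both the DSP58 FP32 data path and the AI Engine floating point vector unit described above.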