Version: Vitis 2023.1
The AMD Versal™ adaptive SoC AI Core Series is a family of heterogeneous devices containing many domains with compute capabilities. With respect to Digital Signal Processing (DSP), and particularly Finite Impulse Response (FIR) filters, the two domains of interest are:
The Programmable Logic (PL), which is the “classical” domain of AMD devices.
The AI Engine Processor Array, which is a new domain within AMD Versal adaptive SoC devices.
FIR filter architecture is a rich and fruitful electrical engineering domain, especially when the input sampling rate becomes higher than the clock rate of the device (Super Sampling Rate, or SSR). For the PL, a number of solutions are already available as turnkey IP (the FIR Compiler). The AI Engine array is a completely new processor and processor-array architecture with enormous compute capabilities, so an efficient filtering architecture has to be found that uses all the capabilities of the AI Engine array, as well as all the communication paths available to and from the PL.
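As a toy illustration of the polyphase idea behind SSR filtering (a pure-Python sketch with hypothetical tap values, not the tutorial's AI Engine implementation), an N-tap FIR can be split into P branches, each holding every P-th coefficient; the P branches can then run in parallel at 1/P of the sample rate while producing exactly the same output as the direct form:

```python
def fir(x, h):
    """Plain direct-form FIR: y[n] = sum_k h[k] * x[n-k]."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

def polyphase_fir(x, h, P):
    """Same filter, computed through P polyphase branches.

    Branch r holds taps h[r], h[r+P], h[r+2P], ... and only ever sees
    input samples whose index is congruent to (n - r) mod P, which is
    what allows the P branches to run concurrently in an SSR design.
    """
    phases = [h[r::P] for r in range(P)]  # coefficient split: h[k] with k = q*P + r
    y = []
    for n in range(len(x)):
        acc = 0
        for r, branch in enumerate(phases):
            for q, c in enumerate(branch):
                idx = n - r - q * P  # x index for tap h[q*P + r]
                if idx >= 0:
                    acc += c * x[idx]
        y.append(acc)
    return y

if __name__ == "__main__":
    x = list(range(20))          # hypothetical input ramp
    h = [1, -2, 3, 4, 5]         # hypothetical integer taps
    assert fir(x, h) == polyphase_fir(x, h, 4)
```

Integer taps are used here so the two computation orders compare exactly; with floating-point taps the results agree only up to rounding.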
The purpose of this tutorial is to provide a methodology to enable you to make appropriate choices depending on the filter characteristics, and to provide examples on how to implement Super Sampling Rate (SSR) FIR Filters on a Versal adaptive SoC AI Engine processor array.
Before You Begin
Before beginning this tutorial, you should be familiar with the Versal adaptive SoC architecture, and more specifically with the AI Engine array processor and interconnect architecture.
IMPORTANT: Before beginning the tutorial, make sure that you have installed the Vitis 2023.1 software. The AMD Vitis™ release includes all the embedded base platforms, including the VCK190 base platform that is used in this tutorial. In addition, ensure that you have downloaded the Common Images for Embedded Vitis Platforms from https://www.xilinx.com/support/download/index.html/content/xilinx/en/downloadNav/embedded-platforms/2023-1.html. The common image package contains a prebuilt Linux kernel and root file system that can be used with the Versal board for embedded design development using Vitis. Before starting this tutorial, complete the following steps:
Go to the directory where you have unzipped the Versal Common Image package.
In a Bash shell, run the `/Common Images Dir/xilinx-versal-common-v2023.1/environment-setup-cortexa72-cortexa53-xilinx-linux` script. This script sets up the `CXX` variable. If the script is not present, run `/Common Images Dir/xilinx-versal-common-v2023.1/sdk.sh`.
Set up your `IMAGE` to point to the `Image` file located in the `/Common Images Dir/xilinx-versal-common-v2023.1` directory.
Set up your `PLATFORM_REPO_PATHS` environment variable to `$XILINX_VITIS/lin64/Vitis/2023.1/base_platforms/xilinx_vck190_base_dfx_202310_1/xilinx_vck190_base_dfx_202310_1.xpfm`. This tutorial targets the VCK190 production board and the 2023.1 release.
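The steps above can be sketched as a shell session. The unpack location `/opt/common_images` is a hypothetical placeholder; substitute the directory where you actually unzipped the common image package:

```shell
#!/bin/bash
# Sketch of the environment setup, assuming the Versal common image was
# unpacked under /opt/common_images (substitute your own location).
COMMON_IMAGE_DIR=/opt/common_images/xilinx-versal-common-v2023.1
ENV_SCRIPT="$COMMON_IMAGE_DIR/environment-setup-cortexa72-cortexa53-xilinx-linux"

# Run the environment-setup script; fall back to sdk.sh if it is absent.
if [ -f "$ENV_SCRIPT" ]; then
    source "$ENV_SCRIPT"
elif [ -f "$COMMON_IMAGE_DIR/sdk.sh" ]; then
    sh "$COMMON_IMAGE_DIR/sdk.sh"
else
    echo "Common image not found under $COMMON_IMAGE_DIR" >&2
fi

# Point IMAGE at the prebuilt Linux kernel image.
export IMAGE="$COMMON_IMAGE_DIR/Image"

# Point PLATFORM_REPO_PATHS at the VCK190 DFX base platform.
export PLATFORM_REPO_PATHS="$XILINX_VITIS/lin64/Vitis/2023.1/base_platforms/xilinx_vck190_base_dfx_202310_1/xilinx_vck190_base_dfx_202310_1.xpfm"
```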
Data generation for this tutorial requires Python and the following packages: `math`, `shutil`, `functools`, `matplotlib`, `numpy`, `random`.
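A quick way to verify your Python environment before generating data is to probe each package with the standard-library `importlib` machinery (a convenience sketch, not part of the tutorial scripts):

```python
import importlib.util

# Packages used by the tutorial's data-generation scripts; math, shutil,
# functools, and random ship with Python, while matplotlib and numpy
# must be installed separately.
REQUIRED = ["math", "shutil", "functools", "matplotlib", "numpy", "random"]

def missing_packages(names=REQUIRED):
    """Return the subset of names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    gone = missing_packages()
    if gone:
        print("Missing packages:", ", ".join(gone))
    else:
        print("All data-generation packages are available.")
```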
Accessing the Tutorial Reference Files
To access the reference files, type the following into a terminal:
git clone https://github.com/Xilinx/Vitis-Tutorials.git
Navigate to the `Vitis-Tutorials/AI_Engine_Development/Design_Tutorials/02-super_sampling_rate_fir/` directory, and type `source addon_setup.sh` to update the path for the Python libraries and executables.
You can now start the tutorial.
SSR FIR Tutorial
This tutorial is decomposed into multiple steps:
Polyphase FIR (SSR)
Summary of AI Engine Architecture
You should have already read the AI Engine Detailed Architecture, so the purpose of this chapter is simply to highlight the features of the AI Engine that are useful for this tutorial.
Versal adaptive SoCs combine Scalar Engines, Adaptable Engines, and Intelligent Engines with leading-edge memory and interfacing technologies to deliver powerful heterogeneous acceleration for any application.
Intelligent Engines are SIMD VLIW AI Engines for adaptive inference and advanced signal processing compute.
DSP Engines are for fixed point, floating point, and complex MAC operations.
The SIMD VLIW AI Engines come as an array of interconnected processors using the AXI-Stream interconnect blocks as shown in the following figure: