Adding Hardware Interfaces - 2022.2 English

Vitis Unified Software Platform Documentation: Application Acceleration Development (UG1393)

Document ID
UG1393
Release Date
2022-12-07
Version
2022.2 English

The following table shows the possible Vitis inputs and the minimal requirements for an acceleration embedded platform.

Table 1. Available Interfaces for Vitis

Control Interfaces
  Types Vitis can use: AXI master interfaces from the PS, or from AXI Interconnect IP or SmartConnect IP
  Minimum requirement for AXI MM kernels: One AXI4-Lite master for kernel control
Memory Interfaces
  Types Vitis can use: AXI slave interfaces
  Minimum requirement for AXI MM kernels: One memory interface for data exchange
Streaming Interfaces
  Types Vitis can use: AXI4-Stream interfaces
  Minimum requirement for AXI MM kernels: Not required
Clock
  Types Vitis can use: Multiple clock signals
  Minimum requirement for AXI MM kernels: One clock
Interrupt
  Types Vitis can use: Multiple interrupt signals
  Minimum requirement for AXI MM kernels: One interrupt

General Requirements

Important: The source files for all elements of the Vivado project must be local to the project prior to exporting it as an XSA, or an error can be returned when using the platform in the Vitis tool.
  • Every IP used in the platform design that is not part of the standard Vivado IP catalog must be local to the Vivado Design Suite project. References to IP repository paths external to the project are not supported when creating an extensible XSA.
  • Any platform interface, used for linking to kernels by the Vitis compiler, must be an AXI4, AXI4-Lite, AXI4-Stream, interrupt, clock, or reset type of interface.
  • Any platform IP that has an AXI interface for linking to kernels by the Vitis compiler must also have associated clock pins to enable v++ to correctly infer and insert clock domain crossing logic when needed.
  • Custom bus type and hardware interfaces on the platform or on kernels are not supported through v++ linker --connectivity.sp and --connectivity.sc directives. If a data bus with a custom bus type needs to be connected to kernels by the Vitis compiler, it must be converted to an AXI4, AXI4-Lite, or AXI4-Stream interface.

Project Type

To create a new extensible platform project, select RTL Project as the Vivado project type and enable the Project is an extensible Vitis platform check box.

Figure 1. Project Type
Tip: You will see these settings in the New Project wizard.

When creating a new project, select Project is an extensible Vitis platform.

To change an existing Vivado project into an extensible Vitis platform project, select Project Manager > Settings in the Flow Navigator and enable Project is an extensible Vitis platform. Alternatively, run the following Tcl command:

set_property platform.extensible true [current_project]

Adding Platform Interfaces

If a component in the block design has a PFM property, the v++ linker can recognize it and make it available for use by acceleration kernels.

In the Vivado IDE, the platform interface (PFM) properties can be set in the Platform Setup window if the project was created as an extensible platform project. Click Window > Platform Setup to open the settings.

Figure 2. Platform Setup
Tip: Platform interfaces can be defined manually in the Tcl Console, or by a Tcl script as well.

The four Platform Interface Tcl APIs include:

  • AXI memory-mapped interfaces:
    set_property PFM.AXI_PORT { <port_name> {parameters} <port2> {parameters} ...} [get_bd_cells <cell_name>]
  • AXI4-Stream interfaces:
    set_property PFM.AXIS_PORT { <port_name> {parameters} <port2> {parameters} ...} [get_bd_cells <cell_name>]
  • Clocks and resets:
    set_property PFM.CLOCK { <port_name> {parameters} <port2> {parameters} ...} [get_bd_cells <cell_name>]
  • Interrupts:
    set_property PFM.IRQ {pin_name {id id_number range irq_count}} [get_bd_cells <cell_name>]

The requirements for the PFM Properties are:

  • The value of the PFM interface properties must be specified as a Tcl dictionary, a list of name/"value" pairs.
    Important: The "value" must be quoted, and both the name and value are case-sensitive.
  • A bd_cell can have multiple PFM interface definitions. However, for each type of PFM interface, all ports are required to be set in a single set_property Tcl command.
  • For each PFM interface property, the name specified for the port object must match the name of an external port or interface on a bd_cell. Each external port or interface object can only have one PFM interface definition.
  • Each different type of PFM interface can have different parameters.
  • Setting the PFM property with a NULL ("") string will delete previously defined PFM interfaces.
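For example, the following minimal sketch clears any previously defined AXI interfaces on a cell; the /smartconnect_0 instance name is a placeholder:

# Delete previously defined PFM.AXI_PORT interfaces on this cell
set_property PFM.AXI_PORT {} [get_bd_cells /smartconnect_0]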

Adding AXI Interfaces

To support AXI memory-mapped kernels, the platform needs to declare at least one AXI control interface with an AXI memory-mapped master port (M_AXI) and one memory interface with an AXI slave port (S_AXI). These can be exported from the PS block directly or through a connected interconnect IP. If the platform does not work with AXI memory-mapped kernels, these interfaces are not required.

For security reasons, a DFX Decoupler IP is required to control the kernel in a DFX platform. The DFX Decoupler turns off the channels during reconfiguration, preventing unexpected requests from the static region from leaving the AXI interfaces in an invalid state, and preventing random toggles generated by reconfiguration of the reconfigurable partition (RP) from causing unexpected side effects in the static region. XRT turns on the DFX Decoupler before the reconfiguration process begins and turns it off after reconfiguration completes.

In a DFX platform, a NoC stub is required in the Vitis region to export the memory interface for the platform. The v++ linker can then connect the PL kernel memory interfaces to the NoC stub to access memory.

The following is the Tcl command syntax:

set_property PFM.AXI_PORT { <port_name> {parameters} <port2> {parameters} ...} [get_bd_cells <cell_name>]

The AXI control interfaces and AXI memory interfaces share the same PFM.AXI_PORT property. They have different memport types.

Memport
An AXI control interface is defined as M_AXI_GP. Memory interfaces use the other types: S_AXI_HP, S_AXI_ACP, S_AXI_HPC, or MIG.
SP Tag ID
(Optional) A user-defined ID that should start with an alphabetic character. The ID is case-sensitive. The system port tag (sptag) is a symbolic identifier that represents a class of platform port connections, such as S_AXI_HP, S_AXI_ACP,... Multiple block design platform ports can share the same sptag. For more information on how sptags are used, see Mapping Kernel Ports to Memory.
Tip: The sptag property is not supported for M_AXI_GP ports.
Memory
(Optional) Specify the associated MIG IP instance and address_segment. The memory tag is a unique identifier that combines the Cell Name and Base Name columns in the IP integrator Address Editor. This tag will be associated with connections to the Memory Subsystem HIP, where multiple block design platform ports can share the same memory tag.

Exporting AXI interconnect master and slave ports involves the following requirements:

  • All ports on the interconnect used within the platform must precede in index order any declared platform interfaces.
  • There can be no gaps in the port indexing.
  • The maximum number of master IDs for the S_AXI_ACP port is 8, so on a connected AXI interconnect, the available ports to declare must be among {S00_AXI, S01_AXI, ..., S07_AXI}. Do not declare any ports that are used within the platform itself. Declaring as many ports as possible will allow v++ to avoid cascaded axi_interconnects.
  • The maximum number of master IDs for an S_AXI_HP or MIG port is 16, so on a connected AXI interconnect, the available ports to declare must be among {S00_AXI, S01_AXI, ..., S15_AXI}. Do not declare any ports that are used within the platform itself. Declaring as many ports as possible will allow v++ to avoid cascaded axi_interconnects in generated user systems.
  • The maximum number of master ports declared on an interconnect connected to an M_AXI_GP port is 64, so on a connected AXI interconnect, the available ports to declare must be among {M00_AXI, M01_AXI, ..., M63_AXI}. Do not declare any ports that are used within the platform itself. Declaring as many ports as possible will allow v++ to avoid cascaded axi_interconnects in generated user systems.

The following shows an example of defining AXI master ports on an AXI Interconnect IP:

# Declare master ports M02_AXI through M63_AXI on axi_interconnect_0
# as AXI control (M_AXI_GP) interfaces
set parVal {}
for {set i 2} {$i < 64} {incr i} {
    lappend parVal M[format %02d $i]_AXI \
        {memport "M_AXI_GP"}
}
set_property PFM.AXI_PORT $parVal [get_bd_cells /axi_interconnect_0]

The following shows an example of defining AXI memory ports with MIG on a SmartConnect IP:

# Declare slave ports S01_AXI through S15_AXI on smartconnect_0 as MIG
# memory interfaces sharing the sptag "Bank0"
set parVal {}
for {set i 1} {$i < 16} {incr i} {
    lappend parVal S[format %02d $i]_AXI \
        {memport "MIG" sptag "Bank0"}
}
set_property PFM.AXI_PORT $parVal [get_bd_cells /smartconnect_0]

The following is an example of the PFM.AXI_PORT setting for the control interface and memory interfaces:

set_property PFM.AXI_PORT {
M_AXI_HPM1_FPD {memport "M_AXI_GP"} 
S_AXI_HPC0_FPD {memport "S_AXI_HPC" sptag "HPC0" memory "zynq_ultra_ps_e_0 HPC0_DDR_LOW"}  
S_AXI_HPC1_FPD {memport "S_AXI_HPC" sptag "HPC1" memory "zynq_ultra_ps_e_0 HPC1_DDR_LOW"}  
S_AXI_HP0_FPD {memport "S_AXI_HP" sptag "HP0" memory "zynq_ultra_ps_e_0 HP0_DDR_LOW"}  
S_AXI_HP1_FPD {memport "S_AXI_HP" sptag "HP1" memory "zynq_ultra_ps_e_0 HP1_DDR_LOW"}  
S_AXI_HP2_FPD {memport "S_AXI_HP" sptag "HP2" memory "zynq_ultra_ps_e_0 HP2_DDR_LOW"}
} [get_bd_cells /ps_e]
Tip: In the examples above, zynq_ultra_ps_e_0 is the instance name of the Zynq UltraScale+ MPSoC module, and HPC0_DDR_LOW is the address range name.
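After the platform defines sptags such as HP0, kernel arguments can be mapped to those platform ports at link time with the v++ --connectivity.sp option. The following is a minimal sketch; the kernel instance name krnl_vadd_1 and its argument in1 are hypothetical (see Mapping Kernel Ports to Memory for details):

# Map kernel argument in1 of instance krnl_vadd_1 to the platform port tagged HP0
v++ -l --platform <platform>.xpfm --connectivity.sp krnl_vadd_1.in1:HP0 ...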

Adding AXI4-Stream Interfaces

To support AXI4-Stream kernels, the platform needs to declare the corresponding master or slave AXI4-Stream interfaces.

AXI4-Stream kernel interfaces are specified with the PFM.AXIS_PORT sptag interface property and a matching --connectivity.sc argument to the v++ linker.

The following is the Tcl command syntax:

set_property PFM.AXIS_PORT { <port_name> {parameters} <port2> {parameters} ...} [get_bd_cells <cell_name>]

Argument Description

Port_name
AXI4-Stream port name.
Parameters
type value: Streaming interface port type. Valid values for type include:
  • M_AXIS: A general-purpose AXI master port
  • S_AXIS: A high-performance AXI slave port

Example

set_property PFM.AXIS_PORT {AXIS_P0 {type "S_AXIS"}} [get_bd_cells /zynq_ultra_ps_e_0]
Note: For more information on linking AXI4-Stream interfaces between kernels and platforms, see Specifying Streaming Connections.
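At link time, a kernel stream port can then be connected to this platform interface through the v++ --connectivity.sc option. The following is a hedged sketch only; the kernel instance packet_filter_1 and its stream output out are hypothetical, and the authoritative syntax is described in Specifying Streaming Connections:

# Connect the stream output "out" of the hypothetical kernel instance
# packet_filter_1 to the platform AXI4-Stream port AXIS_P0
v++ -l --platform <platform>.xpfm --connectivity.sc packet_filter_1.out:AXIS_P0 ...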

Adding Clock and Resets

In a DFX platform, the static region and the dynamic region can have their own clock and reset signals. A Clocking Wizard in the static region is required so that the device tree generator (DTG) can generate a correct device tree describing this clock topology.

Figure 3. Platform Setup – Clock

You can export any clock source with the platform, but for each clock you must also export synchronized reset signals using a Processor System Reset IP block in the platform. For details of defining clocks and resets, see the Vitis-Tutorials/Vitis_Platform_Creation. The PFM.CLOCK property can be set on a BD cell, external port, or external interface.

In the figure above you can see the details of the platform clocks. There must be at least one enabled clock for the platform and one clock must be specified as the default.

The following is the Tcl command for setting the PFM.CLOCK property:

set_property PFM.CLOCK { <port_name> {parameters} \
<port2> {parameters} ...} [get_bd_cells <cell_name>]
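
The following is a minimal sketch of declaring two PS fabric clocks; the instance names (/zynq_ultra_ps_e_0, /proc_sys_reset_0, /proc_sys_reset_1) are placeholders, and the id, is_default, proc_sys_reset, and status parameters follow the usage shown in the Vitis platform creation tutorials:

# pl_clk0 is the default kernel clock; each clock references the
# Processor System Reset block that provides its synchronized reset
set_property PFM.CLOCK {
    pl_clk0 {id "0" is_default "true" proc_sys_reset "/proc_sys_reset_0" status "fixed"}
    pl_clk1 {id "1" is_default "false" proc_sys_reset "/proc_sys_reset_1" status "fixed"}
} [get_bd_cells /zynq_ultra_ps_e_0]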

Adding Interrupts

In a DFX platform, a DFX Decoupler IP is required. It connects the interrupt controller in the static region to the Concat IP that exports the interrupt signals into the Vitis region. The interrupt controller output drives the CIPS IRQ.

Vitis provides a way to automatically connect the kernel output IRQ signal to an IRQ in the platform during the v++ link stage. The following shows the Tcl command syntax:

set_property PFM.IRQ {pin_name {id id_number}} bd_cell
set_property PFM.IRQ {port_name {id id_number range irq_count}} [get_bd_cell <cell_name>]

Argument Description

Port_name
IRQ port name of bd_cell.
id_number
Integer from 0 to 127 to specify the IRQ number or the starting number if range is specified.
irq_count
Specifies the number of IRQ inputs on the named interface. Use this for interfaces whose bus width is otherwise determined by parameter propagation (for example, the intr interface of an interrupt controller).

The following example shows how to enable 32 IRQ inputs on the intr port of axi_intc_0:

set_property PFM.IRQ {intr {id 0 range 32}} [get_bd_cells /axi_intc_0]

The following example shows how to enable 63 IRQs using a cascaded interrupt controller in the VCK190 base platform:

set_property PFM.IRQ {intr {id 0 range 32}}  [get_bd_cells /axi_intc_cascaded_1]
set_property PFM.IRQ {In0 {id 32} In1 {id 33} In2 {id 34} In3 {id 35} In4 {id 36} In5 {id 37} In6 {id 38} In7 {id 39} In8 {id 40} \
                               In9 {id 41} In10 {id 42} In11 {id 43} In12 {id 44} In13 {id 45} In14 {id 46} In15 {id 47} In16 {id 48} In17 {id 49} In18 {id 50} \
                               In19 {id 51} In20 {id 52} In21 {id 53} In22 {id 54} In23 {id 55} In24 {id 56} In25 {id 57} In26 {id 58} In27 {id 59} In28 {id 60} \
                               In29 {id 61} In30 {id 62}} [get_bd_cells /xlconcat_0]

Platform Settings for DFX Only

Address aperture setting
Setting the address range for the dynamic region is required so that the CIPS and the SmartConnect can access kernels in the dynamic region after the v++ linking stage.
Setting apertures on the DDR interfaces is required to export the accessible DDR range for the kernels.
Lock the aperture settings by entering the following commands in the Tcl console.
set_property HDL_ATTRIBUTE.LOCKED TRUE [get_bd_intf_pins /VitisRegion/PL_CTRL_S_AXI]
set_property HDL_ATTRIBUTE.LOCKED TRUE [get_bd_intf_pins /VitisRegion/DDR_0]
set_property HDL_ATTRIBUTE.LOCKED TRUE [get_bd_intf_pins /VitisRegion/DDR_1]
set_property HDL_ATTRIBUTE.LOCKED TRUE [get_bd_intf_pins /VitisRegion/DDR_2]
set_property HDL_ATTRIBUTE.LOCKED TRUE [get_bd_intf_pins /VitisRegion/DDR_3]
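For reference, the following is a sketch of setting an aperture on one of the boundary interface pins before locking it. The APERTURES property usage and the address value are assumptions based on the Vivado block design container DFX flow; adjust them to match your memory map:

# Assumed example: expose a 1G aperture on the VitisRegion DDR_0 boundary
# pin (the base address 0x0 and the 1G size are placeholders)
set_property APERTURES {{0x0 1G}} [get_bd_intf_pins /VitisRegion/DDR_0]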
Setup Block Design Container (BDC) for DFX
Update the VitisRegion BDC properties for DFX to freeze the boundary of the container and enable Dynamic Function eXchange on it.
Configure Dynamic Function eXchange Wizard and add a configuration.
Set up the DFX platform properties:

# Specify that this platform supports DFX
set_property platform.uses_pr true [current_project]
# Specify the dynamic region instance path for hardware runs
set_property platform.dr_inst_path {design_1_i/VitisRegion} [current_project]
# Specify the dynamic region instance path for emulation
set_property platform.emu.dr_bd_inst_path {design_1_wrapper_sim_wrapper/design_1_wrapper_i/design_1_i/VitisRegion} [current_project]

Exporting the Extensible Platforms

Hardware platforms are encapsulated in XSA file format. There are two kinds of XSA formats: fixed XSA for embedded software development and extensible XSA for Vitis application acceleration projects. To create an embedded platform for the Vitis application acceleration flow, you must use an extensible XSA.

When the Vivado project type is set to extensible Vitis platform, the Export Platform wizard is available from the File > Export > Export Platform menu command.

Important: The block design must have an HDL wrapper, and its output targets must be generated, before the platform XSA can be exported. Use Create HDL Wrapper from the right-click menu in the Sources window to create the wrapper, and Generate Block Design from the Flow Navigator in the Vivado IDE to generate the output targets.
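The equivalent Tcl steps are sketched below, assuming the block design file is named design_1.bd (a placeholder):

# Create and add the HDL wrapper for the block design, then generate
# its output targets (design_1.bd is a placeholder name)
set bd_file [get_files design_1.bd]
add_files -norecurse [make_wrapper -files $bd_file -top]
generate_target all $bd_file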
Figure 4. Export Hardware Platform Wizard

The Export Platform wizard contains five pages to help you export the extensible platform XSA:

Platform Type
Specifies whether the XSA supports hardware, hardware emulation, or both.
Platform State
Specifies whether the XSA contains the pre-synthesis design or the implemented platform.
Platform Properties
Defines platform properties and lets you specify Tcl scripts and XDC constraints for the Vitis compiler to use when building the system.
Output File
Specifies the output file name and location.
Summary
Reports the various settings that will be used during export.

In the Export Hardware Platform wizard, select the platform type. There are three types of platforms: platforms intended to run on hardware only, platforms intended for hardware emulation only, and platforms that can run both in hardware emulation and on hardware. The difference between these options is that if some modules in the design are not supported by emulation, you should create a separate emulation-specific design, export it as a hardware emulation platform, and then use the Combine XSAs option to combine the hardware XSA and the hardware emulation XSA into a single XSA capable of performing both jobs.

For basic platforms, use the following steps:

  1. Select Hardware and Hardware Emulation. Click Next.
  2. Select Pre-synthesis for Platform State. Post-implementation is only needed when creating DFX platforms. Click Next.
  3. Input Platform Properties. Click Next.
  4. Input the XSA file name and the export target directory. Click Next.
  5. Check summary and click Finish.

You can also perform this on the command line using the following commands:

set_property pfm_name {vendor:board:name:version} [get_files <bd_file>]
write_hw_platform -hw -force <XSA file>

The following commands can be used to export the XSA files for a DFX platform:

#emulation XSA
set_property platform.platform_state "pre_synth" [current_project] 
write_hw_platform -hw_emu -force -file vck190_custom_dfx_hw_emu.xsa
#hardware and RP XSA
set_property platform.platform_state "impl" [current_project]
write_hw_platform -force -fixed -static -file vck190_custom_dfx_static.xsa
write_hw_platform -force -rp design_1_i/VitisRegion vck190_custom_dfx_rp.xsa

To create and combine a hardware XSA and a hardware emulation XSA, use the following commands:

write_hw_platform -hw <hw_platform> 
write_hw_platform -hw_emu <hw_emu_platform>
combine_hw_platform -hw <hw_platform> -hw_emu <hw_emu_platform> -o <combined_platform>