Architecture

Versal ACAP System and Solution Planning Methodology Guide (UG1504)

Document ID: UG1504
Release Date: 2022-05-25
Version: 2022.1 English

The primary challenge to resolve as part of the system design architecture is power and performance optimization. Your choice of acceleration hardware, whether PL or AI Engines, depends on the type of algorithm and on the data ingress and egress paths. For streaming data to and from sensors (e.g., LiDAR, RADAR, or dual-camera vision systems), the data reaches the fabric through high-speed transceivers. It is aggregated from the external protocol interfaces onto AXI4-Stream buses and can then be distributed to the PL or AI Engines.
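To make the data path concrete, the following is a minimal sketch, assuming the Vitis HLS hls::stream and ap_axiu types, of a PL kernel that accepts one packet of aggregated sensor data on an AXI4-Stream interface and forwards it toward a downstream consumer such as the AI Engine array interface. The kernel name, 64-bit width, and port names are illustrative only.

```cpp
// Hypothetical PL streaming kernel: reads an AXI4-Stream packet and forwards
// it beat by beat. Any custom pre-processing (filtering, packing, routing)
// would be inserted between the read and the write.
#include "ap_axi_sdata.h"
#include "hls_stream.h"

using axis_word = ap_axiu<64, 0, 0, 0>; // 64-bit data beat with TLAST sideband

extern "C" void stream_forward(hls::stream<axis_word> &sensor_in,
                               hls::stream<axis_word> &to_aie) {
#pragma HLS INTERFACE axis port = sensor_in
#pragma HLS INTERFACE axis port = to_aie
#pragma HLS INTERFACE ap_ctrl_none port = return

    axis_word beat;
    do {
#pragma HLS PIPELINE II = 1
        beat = sensor_in.read(); // blocking read of one stream beat
        to_aie.write(beat);      // forward the beat unchanged
    } while (!beat.last);        // stop at the end of the packet (TLAST)
}
```

The same interface style applies whether the consumer is custom PL logic or an AI Engine stream port; only the downstream connectivity changes.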

The Scalar Engines (processor subsystem), Adaptable Engines (programmable logic), and Intelligent Engines (AI Engines) form a tightly integrated, heterogeneous compute platform. Scalar Engines provide complex software support. Adaptable Engines provide flexible custom compute and data movement. Given their high compute density, AI Engines are well suited to vector-based algorithms.
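For example, the following is a minimal sketch, assuming the AI Engine kernel API available through aie_api/aie.hpp, of the kind of vector-friendly inner loop that maps well onto the Intelligent Engines. Graph wiring and buffer connections are omitted, and the kernel name and sizes are illustrative.

```cpp
// Hypothetical AI Engine kernel body: element-wise multiply of two int16
// buffers using 16-lane vectors.
#include <aie_api/aie.hpp>

void vec_mul_kernel(const int16 *__restrict a,
                    const int16 *__restrict b,
                    int16 *__restrict out,
                    int num_samples) {
    for (int i = 0; i < num_samples; i += 16) {
        aie::vector<int16, 16> va = aie::load_v<16>(a + i); // 16 lanes per load
        aie::vector<int16, 16> vb = aie::load_v<16>(b + i);
        auto acc = aie::mul(va, vb);                        // lane-wise multiply into accumulators
        aie::store_v(out + i, acc.to_vector<int16>(0));     // shift back to int16 and store
    }
}
```

A scalar equivalent of this loop could equally run on the Scalar Engines or be implemented in the PL; the point of the mapping exercise is to place each algorithm where its compute pattern and data movement fit best.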

During this step, you develop a mapping of the core application and of each algorithm to the most appropriate architectural area (e.g., AI Engine, PS, PL, NoC, DDRMC) in the Versal ACAP. This consists of mapping all of the major blocks in the application and considering the bandwidth and availability requirements of each block. This application mapping and design partitioning step is manual.
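Although the step itself is manual, it can help to capture the resulting partition in a simple, reviewable form. The following is a purely hypothetical sketch of such a planning table; none of the block names, targets, or bandwidth figures come from this guide.

```cpp
// Hypothetical planning worksheet: each major application block is assigned a
// target region of the device along with its required sustained bandwidth.
#include <iostream>
#include <string>
#include <vector>

enum class Target { ScalarEngines, AdaptableEngines, AIEngines, NoC, DDRMC };

struct BlockMapping {
    std::string block;        // major application block
    Target      target;       // architectural area it is mapped to
    double      gbytes_per_s; // required sustained bandwidth
};

int main() {
    // Illustrative partition for a streaming vision pipeline.
    std::vector<BlockMapping> plan = {
        {"sensor capture / protocol bridge", Target::AdaptableEngines, 4.0},
        {"dense filtering / linear algebra", Target::AIEngines,        8.0},
        {"control and calibration software", Target::ScalarEngines,    0.1},
        {"frame buffering",                  Target::DDRMC,            8.0},
    };

    double total = 0.0;
    for (const auto &m : plan) total += m.gbytes_per_s;
    std::cout << "Total sustained bandwidth to budget: " << total << " GB/s\n";
    return 0;
}
```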

You can implement additional features, such as clock gating, for regions of the PL and AI Engines that are not used concurrently. You can handle traditional multi-clock domain fabric design and datapath clock domain crossing using the same approaches used with FPGA architectures.