Reference Design Overview - 2020.2 English

Versal ACAP VCK190 Base Targeted Reference Design (UG1442)

Document ID: UG1442
Release Date: 2021-01-08
Version: 2020.2 English
The TRD built on the Versal ACAP device provides a framework for building and customizing video platforms that consist of three pipeline stages.
  • Capture pipeline (input)
  • Acceleration pipeline (memory-to-memory)
  • Display pipeline (output)
The reference design consists of platforms and integrated accelerators. Each platform provides a capture pipeline for video in and a display pipeline for video out. This approach keeps the design lean and leaves the maximum amount of programmable logic (PL) available for accelerator/role development. The platforms supported in this reference design are:
  • Platform 1: MIPI single-sensor capture and HDMI TX display
  • Platform 2: MIPI quad-sensor capture and HDMI TX display
  • Platform 3: HDMI RX capture and HDMI TX display. Along with video, this platform also supports audio capture.

The platforms also include a virtual video device (vivid), a USB webcam, and a file as additional capture sources, and they support audio playback from a file as well.

The following types of acceleration kernels can be run on the platforms:
  • PS: Software kernels running directly on the PS (for example, OpenCV)
  • PL: HLS or RTL kernels running in the PL (for example, Vitis Vision libraries)
  • AIE+PL: Kernels running on AI Engines with data movers in the PL
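To make the PS kernel type concrete, the sketch below shows what a software kernel computing a 2D convolution does at the pixel level. This is a simplified, pure-Python stand-in; the reference design's PS implementation would use an optimized library such as OpenCV, and the function and variable names here are illustrative only.

```python
def conv2d(frame, kernel):
    """Naive 2D convolution of a grayscale frame with an odd-sized kernel.

    A simplified stand-in for a PS software kernel; borders are handled
    by clamping, and results are saturated to the 8-bit pixel range.
    """
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0
            for j in range(kh):
                for i in range(kw):
                    # Clamp coordinates at the frame borders.
                    yy = min(max(y + j - kh // 2, 0), h - 1)
                    xx = min(max(x + i - kw // 2, 0), w - 1)
                    acc += frame[yy][xx] * kernel[j][i]
            out[y][x] = min(max(acc, 0), 255)  # saturate to 0..255
    return out

# An identity kernel leaves the frame unchanged.
identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
frame = [[10, 20], [30, 40]]
print(conv2d(frame, identity))  # → [[10, 20], [30, 40]]
```

The same operation is what the PL and AIE+PL kernel variants accelerate in hardware; only the execution target changes, not the algorithm.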

The following figure shows the various platforms supported by the design.

Figure 1. Base TRD Block Diagram

The application processing unit (APU) in the Versal ACAP consists of two Arm Cortex-A72 cores and is configured in the reference design to run Linux in symmetric multi-processing (SMP) mode. The application running on Linux is responsible for configuring and controlling the audio/video pipelines and accelerators through Jupyter notebooks. It also collects system performance metrics and visualizes them in the notebooks.

The following figure shows the software state after the boot process has completed and the individual applications have been started on the APU. Details are described in Software Architecture.

Figure 2. Key Reference Design Components
The APU application controls the following video data paths implemented in a combination of the PS and PL:
  • Capture pipeline capturing video frames into DDR memory from
    • A file on a storage device such as an SD card
    • A USB webcam using the USB interface inside the PS
    • An image sensor on an FMC daughter card connected via MIPI CSI-2 Rx through the PL
    • A quad image sensor on an FMC daughter card connected via MIPI CSI-2 Rx through the PL
    • An HDMI source such as a laptop connected via the HDMI Rx subsystem through the PL. HDMI Rx also captures audio along with video.
  • Memory-to-memory (M2M) pipeline implementing typical video processing algorithms
    • A 2D convolution filter – in this reference design, this algorithm is implemented in the PS, PL, and AIE. Video frames are read from DDR memory, processed by the accelerator, and then written back to memory.
  • Display pipeline reading video frames from memory and sending them to a monitor through the HDMI TX subsystem in the PL. Along with video, the HDMI TX subsystem also forwards audio data to an HDMI speaker.
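The three data paths above form a capture → M2M → display flow through DDR memory. The generator-based sketch below models only that dataflow; all names are illustrative, and in the real design frames move through DDR buffers under Linux (for example, via V4L2 and DRM/KMS), not Python objects.

```python
def capture(num_frames):
    """Stand-in capture stage: produces frames into 'memory'."""
    for n in range(num_frames):
        yield [[n] * 4 for _ in range(4)]  # 4x4 dummy frame

def accelerate(frames):
    """Stand-in M2M stage: reads each frame, processes it, writes it back."""
    for frame in frames:
        # Trivial 'filter' in place of the 2D convolution accelerator.
        yield [[pixel + 1 for pixel in row] for row in frame]

def display(frames):
    """Stand-in display stage: consumes processed frames."""
    return sum(1 for _ in frames)

shown = display(accelerate(capture(3)))
print(shown)  # → 3
```

Because each stage only reads from and writes to shared memory, capture sources and accelerators can be swapped independently, which is what lets the platforms mix file, webcam, MIPI, and HDMI inputs with PS, PL, or AIE+PL kernels.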

The APU reads performance metrics from the AXI performance monitors (APM) and sends the data to the Jupyter notebook to be displayed.

The following figure shows an example end-to-end pipeline with a single image sensor as the video source, the 2D convolution filter as the accelerator, and an HDMI display as the video sink. The figure also shows the image processing blocks used in the capture path. The video format annotated at each block is that block's output format. Details are described in the Hardware Architecture chapter.
Figure 3. End-to-End Pipeline from Video In to Video Out
Note: Audio operates in pass-through mode from RX to TX; no processing is performed on the audio data.
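An end-to-end pipeline like the one in the figure can be exercised from Linux with a GStreamer command line. The sketch below is a generic capture-to-display pipeline, not the design's exact invocation: the device node, resolution, and pixel format are assumptions and depend on the board's configuration.

```shell
# Hypothetical capture -> display pipeline sketch.
# /dev/video0 and the caps below are placeholders; query the actual
# video device nodes and supported formats on the target first.
gst-launch-1.0 v4l2src device=/dev/video0 \
  ! video/x-raw,format=NV12,width=1280,height=720 \
  ! kmssink
```

Inserting an accelerator element between the source and sink would turn this into the full capture → M2M → display chain described above.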