Kernel Subsystems - 2020.2 English

Versal ACAP VCK190 Base Targeted Reference Design (UG1442)

Document ID: UG1442
Release Date: 2021-01-08
Version: 2020.2 English

To model and control video capture pipelines such as the ones used in this TRD on Linux systems, multiple kernel frameworks and APIs have to work in concert. For simplicity, we refer to the overall solution as Video4Linux (V4L2), although that framework provides only part of the required functionality. The individual components are discussed in the following sections.

Driver Architecture

The Video Capture Software Stack figure in the Capture section shows how the generic V4L2 driver model of a video pipeline is mapped to the single-sensor MIPI CSI-2 Rx capture pipelines. The video pipeline driver loads the necessary sub-device drivers and registers the device nodes it needs, based on the video pipeline configuration specified in the device tree. The framework exposes the following device node types to user space to control certain aspects of the pipeline:
  • Media device node: /dev/media*
  • Video device node: /dev/video*
  • V4L2 sub-device node: /dev/v4l-subdev*

Media Framework

The main goal of the media framework is to discover the device topology of a video pipeline and to configure it at run-time. To achieve this, pipelines are modeled as an oriented graph of building blocks called entities connected through pads.

An entity is a basic media hardware building block. It can correspond to a large variety of blocks such as physical hardware devices (e.g. image sensors), logical hardware devices (e.g. soft IP cores inside the PL), DMA channels or physical connectors. Physical or logical devices are modeled as sub-device nodes and DMA channels as video nodes.

A pad is a connection endpoint through which an entity can interact with other entities. Data produced by an entity flows from the entity's output to one or more entity inputs. A link is a point-to-point oriented connection between two pads, either on the same entity or on different entities. Data flows from a source pad to a sink pad.

A media device node is created that allows the user space application to configure the video pipeline and its sub-devices through the libmediactl and libv4l2subdev libraries. The media controller API provides the following functionality:
  • Enumerate entities, pads and links
  • Configure pads
    • Set media bus format
    • Set dimensions (width/height)
  • Configure links
    • Enable/disable
    • Validate formats
The following figures show the media graphs for MIPI CSI-2 Rx (single-sensor and quad-sensor) as well as the HDMI Rx video capture pipeline as generated by the media-ctl utility. The sub-devices are shown in green with their corresponding control interface base address and sub-device node in the center. The numbers on the edges are pads and the solid arrows represent active links. The yellow boxes are video nodes that correspond to DMA channels, in this case write channels (outputs).
Figure 1. Video Capture Media Pipeline: Single MIPI CSI-2 RX
Figure 2. Video Capture Media Pipeline: Quad MIPI CSI-2 Rx
Figure 3. Video Capture Media Pipeline: HDMI RX

V4L2 Framework

The V4L2 framework is responsible for capturing video frames at the video device node, typically representing a DMA channel, and making those video frames available to user space. The framework consists of multiple sub-components that provide certain functionality.

Before video frames can be captured, the buffer type and pixel format need to be set using the VIDIOC_S_FMT ioctl. On success, the driver can program the hardware, allocate resources, and generally prepare for data exchange. Optionally, you can set additional control parameters on V4L devices and sub-devices. The V4L2 control framework provides ioctls for many commonly used, standard controls such as brightness and contrast.

The videobuf2 API implements three basic buffer types, but only physically contiguous memory is supported in this driver because of the hardware capabilities of the Frame Buffer Write IP. Videobuf2 provides a kernel-internal API for buffer allocation and management as well as a user-space facing API. The VIDIOC_QUERYCAP ioctl reports the supported I/O modes, and the VIDIOC_REQBUFS ioctl negotiates the memory type and the number of buffers. In this design, the streaming I/O mode is used in combination with the DMABUF memory type.

DMABUF is dedicated to sharing DMA buffers between different devices, such as V4L devices or other video-related devices such as a DRM display device (see the GStreamer Pipeline Control section). In DMABUF, buffers are allocated by a driver on behalf of an application. These buffers are exported to the application as file descriptors.

For capture applications, it is customary to queue a number of empty buffers using the VIDIOC_QBUF ioctl. The application waits until a filled buffer can be de-queued with the VIDIOC_DQBUF ioctl and re-queues the buffer when the data is no longer needed. To start and stop capturing, the application uses the VIDIOC_STREAMON and VIDIOC_STREAMOFF ioctls.

The ioctls for buffer management, format, and stream control are implemented inside the GStreamer v4l2src plugin, so the application developer does not need to know the implementation details.

Video IP Drivers

Xilinx adopted the V4L2 framework for most of its video IP portfolio. The currently supported video IPs and corresponding drivers are listed under V4L2. Each V4L driver has a sub-page that lists driver-specific details and provides pointers to additional documentation. The following table provides a quick overview of the drivers used in this design.
Table 1. V4L2 Drivers Used in Capture Pipelines
Linux Driver | Function
Xilinx Video Pipeline (XVIPP)
  • Configures the video pipeline and registers media, video, and sub-device nodes.
  • Configures all entities in the pipeline and validates links.
  • Configures and controls DMA engines (Xilinx Video Framebuffer Write).
  • Starts/stops video stream.
Xilinx Video Processing Subsystem (Scaler Only configuration)
  • Sets media bus format and resolution on input pad.
  • Sets media bus format and resolution on output pad. (Output configuration can be different from the input configuration as the block enables color space conversion and scaling).
MIPI CSI-2 Rx
  • Sets media bus format and resolution on input pad.
  • Sets media bus format and resolution on output pad.
Xilinx Video Image Signal Processing (ISP)
  • Sets media bus format and resolution on input pad.
  • Sets media bus format and resolution on output pad.
Sony IMX274 Image Sensor
  • Sets media bus format and resolution on output pad.
  • Sets sensor control parameters: exposure, gain, test pattern, vertical flip.
OnSemi AR0231 Image Sensor
  • Sets media bus format and resolution on output pad.
  • Sets sensor control parameters: exposure, gain, test pattern, h/v flip, r/g/b balance.
MAX9286 GMSL Deserializer
  • Sets media bus format and resolution on input pad.
  • Sets media bus format and resolution on output pad.
AXI-Stream Switch
  • Sets media bus format and resolution on input pad.
  • Sets media bus format and resolution on output pad.
HDMI Rx Subsystem
  • Queries digital video (DV) timings on output pad.
  • Sets media bus format and resolution on output pad.