Example System with DPUCAHX8H - 1.2 English

DPUCAHX8H for Convolutional Neural Networks (PG367)

Document ID
PG367
Release Date
2022-06-15
Version
1.2 English

The following figure shows two example system block diagrams based on the Alveo U280 Data Center accelerator card, which includes an UltraScale+™ XCU280 FPGA and a PCIe® interface. The card is inserted into a PCIe slot of the host server. The first example reserves no space for user logic: the entire device is used for CNN inference acceleration. In the second example, one SLR is reserved for user logic; for instance, a developer might implement customized pre-/post-processing acceleration logic on the reserved SLR. The DPU cores are integrated into the system through AXI interfaces that connect to the HBM, and the whole system connects to the host server over PCIe. The resulting system can perform deep learning inference tasks such as image classification, object detection, and semantic segmentation.

Figure 1. Example System with Integrated DPU