After identifying devices and loading the program, the host application
should identify the kernels that execute on the device, and set up the kernel arguments.
All kernels the host application interacts with are defined within the loaded
.xclbin file, and so should be identified from there.
For XRT-managed kernels, the XRT API provides a kernel class (xrt::kernel) that is used to access the kernels contained within the .xclbin file. The kernel object identifies an XRT-managed kernel in the .xclbin loaded into the Xilinx device that can be run by the host application. For user-managed kernels, the XRT API provides an IP class (xrt::ip) to identify the user-managed kernels in the .xclbin file.
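As a sketch of the user-managed flow, an xrt::ip object can be opened by name and its registers accessed directly. The register offsets, xclbin path, and header location below are assumptions (the xrt::ip header has moved between XRT releases); consult your IP's register map for the actual addresses:

```cpp
#include <xrt/xrt_device.h>
#include <experimental/xrt_ip.h>  // header location varies with XRT version

int main() {
    auto device = xrt::device(0);                   // open the first device
    auto uuid = device.load_xclbin("vadd.xclbin");  // hypothetical xclbin path
    auto ip = xrt::ip(device, uuid, "vadd");        // user-managed IP, by name

    // With a user-managed kernel, the host drives the IP's control
    // protocol itself through direct register reads and writes.
    ip.write_register(0x10, 42);          // 0x10: hypothetical argument offset
    auto status = ip.read_register(0x0);  // 0x0: typical control/status offset
    (void)status;
    return 0;
}
```

Unlike xrt::kernel, XRT does not manage execution of an xrt::ip, so the host is responsible for the start/done handshake defined by the IP's control protocol.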
The use of the kernel and buffer objects requires the addition of the following include statements in your source code:

#include <xrt/xrt_kernel.h>
#include <xrt/xrt_bo.h>
The following code example identifies a kernel ("vadd") defined in the program (uuid) loaded onto the device:

auto krnl = xrt::kernel(device, uuid, "vadd");
Tip: You can use the xclbinutil command to examine the contents of an existing .xclbin file and determine the kernels it contains.
std::cout << "Allocate Buffer in Global Memory\n";
auto bo0 = xrt::bo(device, vector_size_bytes, krnl.group_id(0));
auto bo1 = xrt::bo(device, vector_size_bytes, krnl.group_id(1));
auto bo_out = xrt::bo(device, vector_size_bytes, krnl.group_id(2));
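After allocating the buffers, the host typically populates the inputs and synchronizes them to device memory before running the kernel. A minimal sketch, assuming the bo0, bo1, and vector_size_bytes variables from the code above, with int data and fill values chosen only for illustration:

```cpp
// Sketch: populate input buffers and sync them to device memory.
std::vector<int> input0(vector_size_bytes / sizeof(int), 1);
std::vector<int> input1(vector_size_bytes / sizeof(int), 2);

bo0.write(input0.data());            // copy host data into the buffer object
bo1.write(input1.data());
bo0.sync(XCL_BO_SYNC_BO_TO_DEVICE);  // make the data visible to the kernel
bo1.sync(XCL_BO_SYNC_BO_TO_DEVICE);
```

The bo.map() method is an alternative to write(): it returns a host pointer into the buffer that you can fill in place, followed by the same sync() call.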
The kernel object (xrt::kernel) includes a method, kernel.group_id(), that returns the memory bank associated with each kernel argument. You will assign a buffer object to each kernel buffer argument; buffer objects are not created for scalar arguments.
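Tying this together, the buffer objects are passed as the kernel's buffer arguments when the kernel is started, and scalar arguments are passed by value in the same call. A hedged sketch, assuming the krnl, bo0, bo1, and bo_out objects from the code above; the trailing scalar argument is an assumption about the vadd kernel's signature:

```cpp
// Sketch: start the kernel with buffer objects as its arguments and wait
// for completion. Argument order must match the kernel's declaration.
auto run = krnl(bo0, bo1, bo_out, DATA_SIZE);  // DATA_SIZE: hypothetical scalar arg
run.wait();                                    // block until the kernel finishes

bo_out.sync(XCL_BO_SYNC_BO_FROM_DEVICE);       // bring results back to the host
```

The call operator on xrt::kernel returns an xrt::run object, so the host can also launch the kernel asynchronously and overlap other work before calling wait().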