std::pair<uint32_t, int> execute_async( const std::vector<TensorBuffer*>& input, const std::vector<TensorBuffer*>& output);
- Submit input tensors for execution and output tensors to store results. The
host pointers are passed via the TensorBuffer objects. This function returns a
pair containing the job ID and the status of the submission.
int wait(int jobid, int timeout);
- The job ID returned by execute_async is passed to
wait() to block until the job is complete and the results are ready.
- Query the DpuRunner for the tensor format it expects. This returns
DpuRunner::TensorFormat::NCHW or DpuRunner::TensorFormat::NHWC.
- Query the DpuRunner for the shape and name of the input tensors it expects
for its loaded AI model.
- Query the DpuRunner for the shape and name of the output tensors it expects for its loaded AI model.
- To create a DpuRunner object, call:
DpuRunner::create_dpu_runner(const std::string& model_directory);
The input to create_dpu_runner is a model runtime directory generated by the AI compiler. The directory contains a meta.json file that identifies the Vitis Runner for that directory, along with the files the Runner needs at runtime.
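For illustration only, a meta.json might look like the following. The field names and values here are a hypothetical sketch, not a definitive schema; the exact contents depend on the target DPU and the Vitis AI release.

```json
{
    "target": "DPUv2",
    "lib": "libvart-dpu-runner.so",
    "filename": "model.elf",
    "kernel": ["model_kernel"]
}
```

Conceptually, the file tells the runtime which runner library to load and which compiled model artifact in the directory that runner should execute.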