Programming with Vitis AI Runtime (VART) - 1.2 English

Vitis AI User Guide (UG1414)

Document ID: UG1414
Release Date: 2020-07-21
Version: 1.2 English

Vitis AI provides a C++ DpuRunner class with the following interfaces:

  1. std::pair<uint32_t, int> execute_async(
         const std::vector<TensorBuffer*>& input,
         const std::vector<TensorBuffer*>& output);

     Note: For historical reasons, this function is actually a blocking call, not an asynchronous non-blocking one.

     Submit input tensors for execution, and output tensors to store results. The host pointer is passed via the TensorBuffer object. This function returns a job ID and the status of the function call. (An end-to-end sketch follows this list.)

  2. int wait(int jobid, int timeout);

     The job ID returned by execute_async is passed to wait() to block until the job is complete and the results are ready.

  3. TensorFormat get_tensor_format();

     Query the DpuRunner for the tensor format it expects. Returns DpuRunner::TensorFormat::NCHW or DpuRunner::TensorFormat::NHWC. (A short layout fragment also follows the list.)

  4. std::vector<Tensor*> get_input_tensors();
     std::vector<Tensor*> get_output_tensors();

     Query the DpuRunner for the shape and name of the input and output tensors it expects for its loaded AI model.

  5. To create a DpuRunner object, call the following:

     create_runner(const xir::Subgraph* subgraph, const std::string& mode = "");

     It returns the following:

     std::unique_ptr<Runner>

The input to create_runner is an XIR Subgraph generated by the AI compiler.
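
Taken together, these interfaces form the basic run sequence for one inference: create a runner from the compiled DPU subgraph, query its input and output tensors, wrap host memory in TensorBuffer objects, submit the job with execute_async, and block on the job ID with wait. The sketch below strings the calls together in that order. It is a minimal illustration only: the header paths, the vart::Runner spelling of the interfaces, the "device" attribute check in find_dpu_subgraph, and the HostTensorBuffer helper are modeled on the VART samples shipped with Vitis AI and should be treated as assumptions that can vary between releases.

    // Minimal sketch, assuming the vart::Runner spelling of the interfaces
    // listed above. find_dpu_subgraph() and HostTensorBuffer are helpers
    // modeled on the VART samples, not part of the API described here.
    #include <cstdint>
    #include <memory>
    #include <string>
    #include <utility>
    #include <vector>

    #include <vart/runner.hpp>
    #include <vart/tensor_buffer.hpp>
    #include <xir/graph/graph.hpp>

    // Concrete TensorBuffer wrapping a flat block of host memory, modeled on
    // the CpuFlatTensorBuffer helper used in the VART samples.
    class HostTensorBuffer : public vart::TensorBuffer {
     public:
      HostTensorBuffer(void* data, const xir::Tensor* tensor)
          : vart::TensorBuffer(tensor),
            data_(data),
            size_(static_cast<std::size_t>(tensor->get_data_size())) {}
      std::pair<std::uint64_t, std::size_t> data(
          const std::vector<std::int32_t> idx = {}) override {
        // Flat buffer: return the whole region regardless of the index.
        return {reinterpret_cast<std::uint64_t>(data_), size_};
      }

     private:
      void* data_;
      std::size_t size_;
    };

    // Walk the compiled graph and return the first subgraph mapped to the DPU;
    // the VART samples locate the runnable subgraph the same way.
    static const xir::Subgraph* find_dpu_subgraph(const xir::Graph* graph) {
      for (auto* child :
           graph->get_root_subgraph()->children_topological_sort()) {
        if (child->has_attr("device") &&
            child->get_attr<std::string>("device") == "DPU") {
          return child;
        }
      }
      return nullptr;
    }

    int main() {
      // Deserialize the compiled model and pick its DPU subgraph.
      auto graph = xir::Graph::deserialize("model.xmodel");
      const xir::Subgraph* dpu = find_dpu_subgraph(graph.get());

      // Interface 5: create the runner from the XIR subgraph.
      auto runner = vart::Runner::create_runner(dpu, "run");

      // Interface 4: query the input and output tensors of the loaded model.
      auto input_tensors = runner->get_input_tensors();
      auto output_tensors = runner->get_output_tensors();

      // Allocate byte buffers sized to the first input/output tensor and wrap
      // them in TensorBuffer objects; the host pointers travel inside these.
      std::vector<std::int8_t> in_data(input_tensors[0]->get_data_size());
      std::vector<std::int8_t> out_data(output_tensors[0]->get_data_size());
      HostTensorBuffer in_buf(in_data.data(), input_tensors[0]);
      HostTensorBuffer out_buf(out_data.data(), output_tensors[0]);

      // ... fill in_data with the pre-processed input here ...

      // Interface 1: submit the job; the return value is {job ID, status}.
      // Despite the name, this call currently blocks (see the note above).
      std::pair<std::uint32_t, int> job =
          runner->execute_async({&in_buf}, {&out_buf});

      // Interface 2: block on the job ID until the results are ready
      // (-1 means no timeout).
      runner->wait(static_cast<int>(job.first), -1);

      // out_data now holds the raw results, ready for post-processing.
      return job.second;
    }

Because execute_async currently blocks (see the note on interface 1), the wait call above effectively returns immediately in practice, but keeping it makes the code correct if a truly asynchronous implementation is supplied later.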
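
One detail the sketch above leaves out is the tensor layout from interface 3. Pre-processing code normally branches on the reported format before filling the input buffer. A minimal fragment, again assuming the vart::Runner spelling of the enum that the list above writes as DpuRunner::TensorFormat:

    // Hedged fragment: pick the pre-processing layout from the format the
    // runner reports. vart::Runner::TensorFormat is an assumption; the list
    // above spells the same values as DpuRunner::TensorFormat.
    #include <vart/runner.hpp>

    bool expects_nhwc(vart::Runner* runner) {
      // NHWC orders data as height x width x channels; NCHW puts channels first.
      return runner->get_tensor_format() == vart::Runner::TensorFormat::NHWC;
    }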