vitis::ai::OpenPose - 1.4.1 English

Vitis AI Library User Guide (UG1354)

Document ID: UG1354
Release Date: 2021-12-11
Version: 1.4.1 English
The openpose model has an input size of 368x368.

Base class for detecting poses of people.

Input is an image (cv::Mat).

Output is an OpenPoseResult.

Sample code:

 #include <cstdlib>
 #include <iostream>
 #include <vector>
 #include <opencv2/opencv.hpp>
 #include <vitis/ai/openpose.hpp>

 auto image = cv::imread("sample_openpose.jpg");
 if (image.empty()) {
   std::cerr << "cannot load image" << std::endl;
   abort();
 }
 auto det = vitis::ai::OpenPose::create("openpose_pruned_0_3");
 int width = det->getInputWidth();
 int height = det->getInputHeight();
 // Pairs of key-point indices that form the limbs to draw.
 std::vector<std::vector<int>> limbSeq = {{0,1}, {1,2}, {2,3}, {3,4}, {1,5}, {5,6},
     {6,7}, {1,8}, {8,9}, {9,10}, {1,11}, {11,12}, {12,13}};
 // Scale factors from the model input size back to the original image size.
 float scale_x = float(image.cols) / float(width);
 float scale_y = float(image.rows) / float(height);
 auto results = det->run(image);
 for (size_t k = 1; k < results.poses.size(); ++k) {
   // Draw each valid key point as a green circle.
   for (size_t i = 0; i < results.poses[k].size(); ++i) {
     if (results.poses[k][i].type == 1) {
       results.poses[k][i].point.x *= scale_x;
       results.poses[k][i].point.y *= scale_y;
       cv::circle(image, results.poses[k][i].point, 5, cv::Scalar(0, 255, 0), -1);
     }
   }
   // Connect valid key points with blue lines to draw the limbs.
   for (size_t i = 0; i < limbSeq.size(); ++i) {
     auto a = results.poses[k][limbSeq[i][0]];
     auto b = results.poses[k][limbSeq[i][1]];
     if (a.type == 1 && b.type == 1) {
       cv::line(image, a.point, b.point, cv::Scalar(255, 0, 0), 3, 4);
     }
   }
 }
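
To view the drawn key points and limbs, the annotated image can, for example, be written to a file with OpenCV; the output file name below is only an example:

 cv::imwrite("sample_openpose_result.jpg", image);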

Display of the openpose model results:

Figure 1. openpose result image (sample_openpose_result.jpg)

Quick Function Reference

The following table lists all the functions defined in the vitis::ai::OpenPose class:

Table 1. Quick Function Reference
Type                          | Name            | Arguments
std::unique_ptr< OpenPose >   | create          | const std::string & model_name, bool need_preprocess
OpenPoseResult                | run             | const cv::Mat & image
std::vector< OpenPoseResult > | run             | const std::vector< cv::Mat > & images
int                           | getInputWidth   | void
int                           | getInputHeight  | void
size_t                        | get_input_batch | void
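
For reference, the following is a minimal sketch of the batch interface listed above: get_input_batch reports how many images the loaded model accepts per call, and the std::vector<cv::Mat> overload of run returns one OpenPoseResult per input image. The input file name is only a placeholder.

 #include <iostream>
 #include <vector>
 #include <opencv2/opencv.hpp>
 #include <vitis/ai/openpose.hpp>

 int main() {
   auto det = vitis::ai::OpenPose::create("openpose_pruned_0_3");
   // Number of images the model processes in one call.
   size_t batch = det->get_input_batch();

   // Fill the batch; the file name is a placeholder for your own images.
   std::vector<cv::Mat> images;
   for (size_t i = 0; i < batch; ++i) {
     auto img = cv::imread("sample_openpose.jpg");
     if (!img.empty()) images.push_back(img);
   }

   // One OpenPoseResult is returned for each input image.
   auto results = det->run(images);
   for (size_t n = 0; n < results.size(); ++n) {
     std::cout << "image " << n << ": poses.size() = "
               << results[n].poses.size() << std::endl;
   }
   return 0;
 }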