PoseEstimation

class HumanPoseResult(poses, duration, input_dimension, image, **kwargs)

The results of pose estimation from PoseEstimation.

Parameters
  • poses (List[Pose]) – The poses from the inference.

  • duration (float) – Total time taken by the inference.

  • input_dimension (Tuple[int, int]) – The dimensions of the input image after padding.

  • image (ndarray) – The image that the inference was performed on.

property duration

The duration of the inference in seconds.

Return type

float

property poses

The poses found in the image.

Return type

List[Pose]

property image

The image the results were processed on.

Return type

ndarray

draw_poses_background(color)

Draw the poses found onto a solid background color.

Parameters

color (Tuple[int, int, int]) – The background color on which the poses will be drawn, in (B, G, R) format.

Return type

ndarray

Returns

image: numpy array of image in BGR format
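
The effect can be sketched without the library: fill a canvas with the background color, then mark the key points. The helper and key-point coordinates below are hypothetical; the real method uses the poses already stored in the results.

```python
import numpy as np

def draw_points_on_background(color, key_points, height=480, width=640):
    """Sketch: draw pose key points on a solid BGR background."""
    # Fill the canvas with the background color (B, G, R).
    canvas = np.full((height, width, 3), color, dtype=np.uint8)
    # Mark each (x, y) key point with a small white square.
    for x, y in key_points:
        canvas[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3] = (255, 255, 255)
    return canvas

# Hypothetical key points for one pose, drawn on a black background.
canvas = draw_points_on_background((0, 0, 0), [(100, 120), (150, 200)])
```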

draw_poses(image=None)

Draws poses found on image.

Parameters

image (Optional[ndarray]) – An image to draw the poses on. Defaults to the image stored in the results.

Return type

ndarray

Returns

image: numpy array of image in BGR format
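
Unlike draw_poses_background, this overlays the poses on an existing frame. A minimal sketch of that idea, using a hypothetical marker helper rather than the library call:

```python
import numpy as np

def overlay_points(image, key_points):
    """Sketch: mark key points on a copy so the caller's frame is untouched."""
    out = image.copy()
    for x, y in key_points:
        # Small green (B, G, R) marker at each hypothetical key point.
        out[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2] = (0, 255, 0)
    return out

frame = np.zeros((240, 320, 3), dtype=np.uint8)  # stand-in for a camera frame
marked = overlay_points(frame, [(50, 60)])
```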

draw_aliens()

Draw aliens on the poses found on the image.

Return type

ndarray

Returns

image: numpy array of image in BGR format

class PoseEstimation(model_id, model_config=None)

Find poses within an image.

Typical usage:

pose_estimator = edgeiq.PoseEstimation("alwaysai/human-pose")
pose_estimator.load(engine=edgeiq.Engine.DNN)

<get image>
results = pose_estimator.estimate(image)

for ind, pose in enumerate(results.poses):
    print('Person {}'.format(ind))
    print('-' * 10)
    print('Key Points:')
    for key_point in pose.key_points:
        print(str(key_point))

image = results.draw_poses(image)
Parameters
  • model_id (str) – The ID of the model you want to use for pose estimation.

  • model_config (Optional[ModelConfig]) – An optional configuration for the model.

estimate(image)

Estimate poses within the specified image.

Parameters

image (ndarray) – The image to analyze.

Return type

HumanPoseResult

publish_analytics(results, tag=None)

Publish pose estimation results to the alwaysAI Analytics Service.

Parameters
  • results (HumanPoseResult) – The results to publish.

  • tag (Optional[Any]) – Additional information to assist in querying and visualizations.
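
Since the tag is used for querying and visualization, a dictionary of descriptive fields is one natural choice. A sketch, assuming the service accepts JSON-serializable tags; the field names are made up for illustration:

```python
import json

# Hypothetical tag identifying the camera and site for later querying.
tag = {"camera_id": "entrance-01", "site": "hq", "shift": "morning"}

# A tag like this serializes cleanly for transport to the analytics service.
payload = json.dumps(tag)
```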

property accelerator

The accelerator being used.

Return type

Optional[Accelerator]

property colors

The auto-generated colors for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Note: To update, the new colors list must be the same length as the label list.

Return type

Optional[ndarray]

property engine

The engine being used.

Return type

Optional[Engine]

property labels

The labels for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Return type

Optional[List[str]]

load(engine=<Engine.DNN: 'DNN'>, accelerator=<Accelerator.DEFAULT: 'DEFAULT'>)

Load the model to an engine and accelerator.

Parameters
  • engine (Engine) – The engine to load the model on.

  • accelerator (Accelerator) – The accelerator to load the model on.

property model_config

The configuration of the model that was loaded.

Return type

ModelConfig

property model_id

The ID of the loaded model.

Return type

str

property model_purpose

The purpose of the model being used.

Return type

str