PoseEstimation¶

class Pose(key_points, score)¶

property key_points¶
Key Points corresponding to body parts.
Body Part Names
Nose
Neck
Right Shoulder
Right Elbow
Right Wrist
Left Shoulder
Left Elbow
Left Wrist
Right Hip
Right Knee
Right Ankle
Left Hip
Left Knee
Left Ankle
Right Eye
Left Eye
Right Ear
Left Ear
- Returns
dict of coordinate tuples (x, y) mapped to their corresponding body part names
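As a sketch of consuming such a dictionary (the body-part names match the list above, but the coordinates and the helper function are illustrative, not part of the edgeIQ API):

```python
# Hypothetical key_points dict: body-part names mapped to (x, y) tuples,
# as described in the Returns section above.
key_points = {
    "Right Shoulder": (120, 80),
    "Left Shoulder": (180, 82),
    "Nose": (150, 40),
}

def shoulder_midpoint(kp):
    """Average the two shoulder coordinates (assumes both were detected)."""
    rx, ry = kp["Right Shoulder"]
    lx, ly = kp["Left Shoulder"]
    return ((rx + lx) / 2, (ry + ly) / 2)

print(shoulder_midpoint(key_points))  # → (150.0, 81.0)
```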
property score¶
Confidence level associated with pose.
- Type
float in range [0.0, 1.0]
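A common use of the score is filtering out low-confidence poses before drawing. A minimal sketch, where `SimplePose` is a stand-in for the library's Pose class:

```python
from collections import namedtuple

# Stand-in for the Pose class: key_points plus a score in [0.0, 1.0].
SimplePose = namedtuple("SimplePose", ["key_points", "score"])

poses = [SimplePose({}, 0.92), SimplePose({}, 0.31), SimplePose({}, 0.75)]

# Keep only poses at or above a chosen confidence threshold.
confident = [p for p in poses if p.score >= 0.5]
print(len(confident))  # → 2
```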
class HumanPoseResult(poses, duration, input_dimension, image, **kwargs)¶
The results of pose estimation from PoseEstimation.
property duration¶
The duration of the inference in seconds.
- Type
float
property image¶
The image the results were processed on.
- Type
numpy array – The image in BGR format
draw_poses_background(color)¶
Draw the poses found on a solid background color.
- Parameters
color (tuple of B, G, R channel values) – The background color on which the poses will be drawn.
- Returns
image: numpy array of image in BGR format
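The background described above is a solid image built from the (B, G, R) tuple. A minimal sketch of constructing one with NumPy (the dimensions and helper name are illustrative, not from the library):

```python
import numpy as np

def make_background(height, width, color):
    """Build a solid BGR image filled with `color`, a (B, G, R) tuple."""
    return np.full((height, width, 3), color, dtype=np.uint8)

# Solid blue background in BGR order.
bg = make_background(480, 640, (255, 0, 0))
```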
draw_poses(image=None)¶
Draw the poses found on the image.
- Parameters
image (numpy array) – The image on which to draw the poses found.
- Returns
image: numpy array of image in BGR format
draw_aliens()¶
- Returns
image: numpy array of image in BGR format
class PoseEstimation(model_id)¶
Find poses within an image.
Typical usage:
pose_estimator = edgeiq.PoseEstimation("alwaysai/human-pose")
pose_estimator.load(engine=edgeiq.Engine.DNN)

<get image>

results = pose_estimator.estimate(image)
for ind, pose in enumerate(results.poses):
    print('Person {}'.format(ind))
    print('-' * 10)
    print('Key Points:')
    for key_point in pose.key_points:
        print(str(key_point))

image = results.draw_poses(image)
- Parameters
model_id (string) – The ID of the model you want to use for pose estimation.
estimate(image)¶
Estimate poses within the specified image.
- Parameters
image (numpy array of image) – The image to analyze.
- Returns
HumanPoseResult – The results of the pose estimation.

publish_analytics(results, tag=None)¶
Publish pose estimation results to the alwaysAI Analytics Service.
- Parameters
results (HumanPoseResult) – The results to publish.
tag (JSON-serializable object) – Additional information to assist in querying and visualizations.
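Since the tag must be JSON-serializable, one cheap way to check an object before publishing is to round-trip it through the standard json module. A sketch (the tag contents are illustrative, and this check is not part of the edgeIQ API):

```python
import json

# Illustrative tag: any JSON-serializable object is acceptable.
tag = {"camera": "entrance", "frame": 1024}

# json.dumps raises TypeError for non-serializable objects,
# so it doubles as a pre-publish sanity check.
serialized = json.dumps(tag)
```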
property accelerator¶
The accelerator being used.
- Return type
Optional[Accelerator]
property colors¶
The auto-generated colors for the loaded model.
Note: Initialized to None when the model doesn’t have any labels.
Note: To update, the new colors list must be the same length as the label list.
- Return type
List[Tuple[int, int, int]]
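The length constraint noted above can be sketched as a simple check: a replacement colors list must carry one color tuple per label. The names and the validation function below are illustrative, not edgeIQ API:

```python
# Illustrative labels and replacement colors for a one-label model.
labels = ["person"]
new_colors = [(0, 255, 0)]

def validate_colors(colors, labels):
    """Reject a colors list whose length differs from the label list."""
    if colors is not None and len(colors) != len(labels):
        raise ValueError("colors list must be the same length as the label list")
    return colors

validated = validate_colors(new_colors, labels)
```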
property labels¶
The labels for the loaded model.
Note: Initialized to None when the model doesn’t have any labels.
- Return type
List[str]
load(engine=<Engine.DNN: 'DNN'>, accelerator=<Accelerator.DEFAULT: 'DEFAULT'>)¶
Load the model to an engine and accelerator.
- Parameters
engine (Engine) – The engine to load the model to.
accelerator (Accelerator) – The accelerator to load the model to.
property model_config¶
The configuration of the model that was loaded.
- Return type
ModelConfig
property model_id¶
The ID of the loaded model.
- Return type
str
property model_purpose¶
The purpose of the model being used.
- Return type
str