ObjectDetection

class ObjectDetectionPrediction(box, confidence, label, index)

A single prediction from ObjectDetection.

Parameters
  • box (BoundingBox) – The bounding box around the detected object.

  • confidence (float) – The confidence of this prediction.

  • label (str) – The label describing this prediction result.

  • index (int) – The index of this result in the master label list.

property label

The label describing this prediction result.

Return type

str

property index

The index of this result in the master label list.

Return type

int

property box

The bounding box around the object.

Type

BoundingBox

property confidence

The confidence of this prediction.

Type

float
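
For illustration, a minimal sketch of reading these fields, assuming results is an ObjectDetectionResults instance returned by ObjectDetection.detect_objects() (described below):

for prediction in results.predictions:
    # Each prediction carries its label, its confidence, its index into
    # the master label list, and the bounding box around the object.
    print("{} ({:.2f}): index {}, box {}".format(
        prediction.label, prediction.confidence,
        prediction.index, prediction.box))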

class ObjectDetectionResults(predictions, duration, image, **kwargs)

All the results of object detection from ObjectDetection.

Predictions are stored in descending order of confidence.

Parameters
  • predictions (list) – The list of ObjectDetectionPrediction objects, one per detected object.

  • duration (float) – The duration of the inference.

  • image (ndarray) – The image that the inference was performed on.

property duration

The duration of the inference in seconds.

Return type

float

property predictions

The list of predictions.

Return type

list

property image

The image the results were processed on.

Image is not available when results are obtained from EyeCloud Cameras.

Return type

ndarray
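
A brief sketch of inspecting a results object, assuming obj_detect is a loaded ObjectDetection instance and image is an ndarray (the image check reflects the EyeCloud note above):

results = obj_detect.detect_objects(image, confidence_level=0.5)

print("Inference took {:.4f} s".format(results.duration))
print("Detected {} objects".format(len(results.predictions)))

# The input image may not be available (assumed None here) when the
# results come from an EyeCloud camera.
if results.image is not None:
    marked = edgeiq.markup_image(
        results.image, results.predictions, colors=obj_detect.colors)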

class ObjectDetection(model_id)

Analyze and discover objects within an image.

Typical usage:

import edgeiq

obj_detect = edgeiq.ObjectDetection(
    'alwaysai/ssd_mobilenet_v1_coco_2018_01_28')
obj_detect.load(engine=edgeiq.Engine.DNN)

# <get image> as a numpy ndarray, e.g. a frame from a camera stream
results = obj_detect.detect_objects(image, confidence_level=.5)
image = edgeiq.markup_image(
    image, results.predictions, colors=obj_detect.colors)

text = []
for prediction in results.predictions:
    text.append("{}: {:2.2f}%".format(
        prediction.label, prediction.confidence * 100))
Parameters

model_id (str) – The ID of the model you want to use for object detection.

property accelerator

The accelerator being used.

Return type

Optional[Accelerator]

property colors

The auto-generated colors for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Note: To update, the new colors list must be the same length as the label list.

Return type

List[Tuple[int, int, int]]
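
As a hedged sketch of the update note above, replacing the auto-generated colors with a custom list of the same length as the label list (obj_detect is assumed to be a loaded ObjectDetection instance):

if obj_detect.labels is not None:
    # One color tuple per label; the new list length must match the
    # length of obj_detect.labels.
    obj_detect.colors = [(0, 255, 0) for _ in obj_detect.labels]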

detect_objects(image, confidence_level=0.3, overlap_threshold=0.3)

Perform Object Detection on an image.

Parameters
  • image (ndarray) – The image to analyze.

  • confidence_level (float) – The minimum confidence level required to accept a detection.

  • overlap_threshold (float) – The IOU threshold above which overlapping detections are rejected by Non-Maximal Suppression (used with YOLO models). A higher value allows more overlapping bounding boxes to be returned.

Return type

ObjectDetectionResults
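
For example, a short sketch of tuning both thresholds, assuming obj_detect is a loaded ObjectDetection instance and image is an ndarray:

# Require at least 60% confidence and, for YOLO models, suppress more
# overlapping boxes by lowering the NMS IOU threshold.
results = obj_detect.detect_objects(
    image, confidence_level=0.6, overlap_threshold=0.2)

for prediction in results.predictions:
    print(prediction.label, prediction.confidence)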

property engine

The engine being used.

Return type

Optional[Engine]

property labels

The labels for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Return type

List[str]
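
A prediction’s index refers back into this list; a small sketch, assuming a loaded model and a results object from detect_objects():

for prediction in results.predictions:
    # prediction.index points into the master label list, so the lookup
    # is expected to match prediction.label.
    print(prediction.label, obj_detect.labels[prediction.index])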

load(engine=<Engine.DNN: 'DNN'>, accelerator=<Accelerator.DEFAULT: 'DEFAULT'>)

Load the model to an engine and accelerator.

Parameters
  • engine (Engine) – The engine to load the model to.

  • accelerator (Accelerator) – The accelerator to load the model to.
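
For example, a sketch of loading a model onto the default engine and accelerator and verifying what was selected; Engine and Accelerator members beyond those shown here depend on your installation:

obj_detect = edgeiq.ObjectDetection(
    'alwaysai/ssd_mobilenet_v1_coco_2018_01_28')

# Load onto the default DNN engine with the default accelerator.
obj_detect.load(
    engine=edgeiq.Engine.DNN,
    accelerator=edgeiq.Accelerator.DEFAULT)

print(obj_detect.engine, obj_detect.accelerator)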

property model_config

The configuration of the model that was loaded.

Return type

ModelConfig

property model_id

The ID of the loaded model.

Return type

str

property model_purpose

The purpose of the model being used.

Return type

str
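
These model properties can be inspected after load(); a brief sketch, assuming obj_detect is a loaded ObjectDetection instance:

print("model id:", obj_detect.model_id)
print("model purpose:", obj_detect.model_purpose)
print("model config:", obj_detect.model_config)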

publish_analytics(results, tag=None)

Publish Object Detection results to the alwaysAI Analytics Service.

Parameters
  • results (ObjectDetectionResults) – The results to publish.

  • tag (Optional[Any]) – Additional information to assist in querying and visualizations.
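
A minimal sketch of publishing results with a tag, assuming obj_detect is a loaded ObjectDetection instance, results came from detect_objects(), and the application is connected to the alwaysAI Analytics Service:

# The tag is free-form metadata used for querying and visualization;
# a dict is used here purely for illustration.
obj_detect.publish_analytics(
    results, tag={"camera": "entrance", "app_version": "1.0.0"})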