SemanticSegmentation

class SemanticSegmentationResults(class_map, duration, image)

The results of semantic segmentation from SemanticSegmentation.

property duration

The duration of the inference in seconds.

Type

float

property class_map

The class label with the highest probability for each (x, y)-coordinate in the image.

Type

numpy array
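
For instance, a short sketch of inspecting which class indices appear in a frame; the assumption that the class map holds one integer class index per pixel follows from build_object_map() below and is not stated explicitly here:

import numpy as np

# results is a SemanticSegmentationResults instance (see segment_image()).
present = np.unique(results.class_map)
print('Class indices present in this frame: {}'.format(present))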

property image

The image the results were processed on.

Type

numpy array

class SemanticSegmentation(model_id)

Classify every pixel in an image.

build_legend() is especially useful in combination with the Streamer.

Typical usage:

semantic_segmentation = edgeiq.SemanticSegmentation(
        'alwaysai/enet')
semantic_segmentation.load(engine=edgeiq.Engine.DNN)

with edgeiq.Streamer() as streamer:
    <get image>
    results = semantic_segmentation.segment_image(image)

    text = ['Inference time: {:1.3f} s'.format(results.duration)]
    text.append('Legend:')
    text.append(semantic_segmentation.build_legend())

    mask = semantic_segmentation.build_image_mask(results.class_map)
    blended = edgeiq.blend_images(image, mask, alpha=0.5)

    streamer.send_data(blended, text)
Parameters

model_id (string) – The ID of the model you want to use for semantic segmentation.

segment_image(image)

Classify every pixel within the specified image.

Parameters

image (numpy array) – The image to analyze.

Returns

SemanticSegmentationResults
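
A minimal sketch of a single inference; loading the frame with OpenCV's cv2.imread is an assumption used only for illustration, and any image held as a numpy array works:

import cv2
import edgeiq

semantic_segmentation = edgeiq.SemanticSegmentation('alwaysai/enet')
semantic_segmentation.load(engine=edgeiq.Engine.DNN)

# cv2.imread is just one way to obtain a numpy array image.
image = cv2.imread('frame.jpg')
results = semantic_segmentation.segment_image(image)

print('Inference time: {:1.3f} s'.format(results.duration))
print('Class map shape: {}'.format(results.class_map.shape))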

build_image_mask(class_map)

Create an image mask by mapping colors to the class map. Colors can be set by the colors attribute.

Parameters

class_map (numpy array) – The class label with the highest probability for each (x, y)-coordinate in the image.

Returns

numpy array – Class color visualization for each pixel in the original image.
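
Continuing the sketch above, the mask can be blended over the original frame for visualization (blend_images() and its alpha parameter appear in the typical usage block):

mask = semantic_segmentation.build_image_mask(results.class_map)
blended = edgeiq.blend_images(results.image, mask, alpha=0.5)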

build_legend()

Create a class legend that associates each class label with its color.

Returns

string – An HTML table with class labels and colors that can be used with the Streamer.
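
Because the legend is an HTML table, it is intended for the Streamer's text panel; a sketch, assuming streamer and blended from the examples above:

text = ['Legend:', semantic_segmentation.build_legend()]
streamer.send_data(blended, text)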

build_object_map(class_map, class_list)

Create an object map by isolating classes within the class map.

Parameters
  • class_map (numpy array of integers) – The class label with the highest probability for each (x, y)-coordinate in the image.

  • class_list (list of strings) – The list of labels to include in the object map.

Returns

numpy array of integers – The specific classes from the class list for each (x, y)-coordinate in the original image. Classes not in the specified class list are rendered as unlabeled or background.
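
A sketch that isolates a single class and highlights only those pixels. The 'Person' label is an assumption and must match an entry in the labels property of the loaded model; feeding the object map back into build_image_mask() assumes it uses the same integer encoding as the class map:

object_map = semantic_segmentation.build_object_map(
    results.class_map, ['Person'])
object_mask = semantic_segmentation.build_image_mask(object_map)
highlighted = edgeiq.blend_images(results.image, object_mask, alpha=0.5)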

property accelerator

The accelerator being used.

Return type

Optional[Accelerator]

property colors

The auto-generated colors for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Note: To update, the new colors list must be the same length as the label list.

Return type

List[Tuple[int, int, int]]
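
A sketch of overriding the auto-generated palette; per the note above, the replacement list must have one entry per label. That each color is a tuple of 0-255 integers matches the List[Tuple[int, int, int]] return type, but the channel order is not stated here and is assumed:

if semantic_segmentation.labels is not None:
    # One color tuple per label; here every class is drawn the same gray.
    semantic_segmentation.colors = [
        (128, 128, 128) for _ in semantic_segmentation.labels]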

property engine

The engine being used.

Return type

Optional[Engine]

property labels

The labels for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Return type

List[str]

load(engine=<Engine.DNN: 'DNN'>, accelerator=<Accelerator.DEFAULT: 'DEFAULT'>)

Load the model to an engine and accelerator.

Parameters
  • engine (Engine) – The engine to load the model to.

  • accelerator (Accelerator) – The accelerator to load the model to.
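
For example, spelling out the defaults explicitly (equivalent to calling load() with no arguments):

semantic_segmentation = edgeiq.SemanticSegmentation('alwaysai/enet')
semantic_segmentation.load(
    engine=edgeiq.Engine.DNN,
    accelerator=edgeiq.Accelerator.DEFAULT)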

property model_config

The configuration of the model that was loaded.

Return type

ModelConfig

property model_id

The ID of the loaded model.

Return type

str

property model_purpose

The purpose of the model being used.

Return type

str