Oak

class Sensor(value)

Bases: enum.Enum

Sensors supported by the Oak Camera.

res_1080 = <SensorResolution.THE_1080_P: 0>
res_4k = <SensorResolution.THE_4_K: 1>
res_12_mp = <SensorResolution.THE_12_MP: 2>
class VideoMode(value)

Bases: enum.Enum

Video modes supported by the Oak Camera.

VideoMode.preview - view from the preview feed, which is the image that is fed into the neural network.

VideoMode.video - view from the actual sensor to get a full-resolution image.

preview = 1
video = 2
class Oak(model_id=None, sensor=Sensor.res_1080, video_mode=VideoMode.video, capture_depth=False, usb2=False, device_info=None)

Class to be used with Oak cameras to capture frames and execute neural network models.

Typical usage:

import edgeiq
from edgeiq import oak

with oak.Oak('alwaysai/mobilenet_ssd_oak') as camera:
    while True:
        frame = camera.get_frame()
        result = camera.get_model_result(confidence_level=.75)
        if result:
            frame = edgeiq.markup_image(frame, result.predictions)
Parameters
  • model_id (string) – The ID of the model you want to initialize on the camera. Supported model purposes: Object Detection, Classification, Pose Estimation. Initializing the camera with no model_id or supplying None will still allow you to capture frames.

  • sensor (Sensor) – The sensor to use on the camera.

  • video_mode (VideoMode) – The video output mode for get_frame().

  • capture_depth (bool) – Set to capture the depth stream on OAK-D cameras. The depth stream can be retrieved by calling Oak.get_depth().

  • usb2 (bool) – Set to use the device on USB2.

  • device_info (depthai.DeviceInfo) – The device info for the target camera.
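For example, a sketch of constructing the camera with non-default options (this assumes Sensor and VideoMode are importable from the same edgeiq.oak module as Oak, and an OAK-D device for the depth stream):

from edgeiq import oak

# Capture 4K frames from the preview feed and also enable the depth stream
# (depth capture requires an OAK-D camera).
with oak.Oak(
        model_id='alwaysai/mobilenet_ssd_oak',
        sensor=oak.Sensor.res_4k,
        video_mode=oak.VideoMode.preview,
        capture_depth=True) as camera:
    frame = camera.get_frame()
    depth = camera.get_depth()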

property model_id

The ID of the model running on the camera.

Type

string

property model_purpose

The purpose of the model running on the camera.

Type

string

classmethod get_devices_in_use()

Get the list of devices in use.

Returns

list

static get_available_devices()

Return a list of detected Oak devices.

Returns

list [depthai.DeviceInfo]
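For example, a sketch of opening a specific camera when more than one is attached (error handling omitted):

from edgeiq import oak

# Enumerate the Oak devices detected on the host and open the first one.
devices = oak.Oak.get_available_devices()
if devices:
    with oak.Oak('alwaysai/mobilenet_ssd_oak', device_info=devices[0]) as camera:
        frame = camera.get_frame()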

get_frame()

Retrieve and process the latest image data.

Call Type: Blocking.

Returns

numpy array – The frame captured from the camera.
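Since the frame is a numpy array, it can be passed directly to OpenCV for display; a minimal sketch (the use of cv2 here is an illustrative assumption, not part of this API):

import cv2
from edgeiq import oak

with oak.Oak('alwaysai/mobilenet_ssd_oak') as camera:
    while True:
        frame = camera.get_frame()  # blocks until the next frame is available
        cv2.imshow('Oak', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
cv2.destroyAllWindows()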

get_model_result(block=True, **inference_args)

Retrieve and process the latest model output data.

The result objects are the same as those returned across the rest of our platform.

Parameters
  • block (bool) – The call type when retrieving data from the device. When set to False, returns None when new model data is not available.

  • inference_args – Named arguments that are fed into the post-processing functions. Parameters differ based on model purpose and are described below.

Object Detection

Parameters

confidence_level (float) – The minimum confidence level required to successfully accept an object detection.

Returns

ObjectDetectionResults

Classification

Parameters

confidence_level (float) – The minimum confidence level required to successfully accept a classification.

Returns

ClassificationResults

Pose Estimation

Returns

HumanPoseResult
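A sketch of non-blocking polling against an Object Detection model (using the model ID from the usage example above): get_frame() keeps the loop running while inference results are picked up only when a new one is ready.

import edgeiq
from edgeiq import oak

with oak.Oak('alwaysai/mobilenet_ssd_oak') as camera:
    while True:
        frame = camera.get_frame()
        # block=False returns None when no new model output is available yet
        result = camera.get_model_result(block=False, confidence_level=0.75)
        if result is not None:
            frame = edgeiq.markup_image(frame, result.predictions)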

get_model_data(block=True)

Retrieve the latest model output data.

Parameters

block (bool) – The call type when retrieving data from the device. When set to False, returns None when new model data is not available.

Returns

list – A list of the model's output layers in the format [(layer_1_name, layer_1_data), …, (layer_n_name, layer_n_data)].
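For example, a minimal sketch that inspects the raw output layers (the type of each layer's data is not specified here, so only the name and Python type are printed):

from edgeiq import oak

with oak.Oak('alwaysai/mobilenet_ssd_oak') as camera:
    # Wait for the next set of raw output layers from the device.
    layers = camera.get_model_data(block=True)
    for name, data in layers:
        print(name, type(data))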

get_depth(block=True)

Retrieve the latest depth data available from the camera.

Parameters

block (bool) – The call type when retrieving data from the device. When set to False, returns None when new depth data is not available.

Returns

numpy array – The depth data captured from the camera.
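Depth capture must be enabled at construction time with capture_depth=True on an OAK-D camera. A minimal sketch that scales the depth map for display (the use of cv2 and the normalization are illustrative assumptions, not part of this API):

import cv2
from edgeiq import oak

with oak.Oak('alwaysai/mobilenet_ssd_oak', capture_depth=True) as camera:
    while True:
        depth = camera.get_depth()  # blocks until new depth data is available
        # Scale raw depth values into 0-255 for visualization only.
        depth_view = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype('uint8')
        cv2.imshow('depth', depth_view)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break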

start()

Initialize the Oak camera.

Returns

self

stop()

Uninitialize the Oak camera.
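When the context-manager form shown above is not convenient, start() and stop() can be called explicitly. Since start() returns self, it can be chained with the constructor; a minimal sketch:

from edgeiq import oak

camera = oak.Oak('alwaysai/mobilenet_ssd_oak').start()
try:
    frame = camera.get_frame()
finally:
    # Release the device even if frame capture raises.
    camera.stop()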