alwaysAI Reference Applications
|Detector Tracker||Use Object Detection and Object Tracking to follow unique objects as
they move across the frame.
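Following unique objects across frames is commonly done by matching each new detection to the nearest previously-seen centroid. The sketch below is an illustrative, simplified tracker in plain Python; the `CentroidTracker` class and its `max_distance` parameter are our own minimal stand-ins, not the edgeIQ API.

```python
import math

def centroid(box):
    # box = (x1, y1, x2, y2); return its center point
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

class CentroidTracker:
    """Assign stable IDs to boxes by matching each detection to the
    nearest previously-seen centroid within max_distance pixels."""

    def __init__(self, max_distance=50.0):
        self.next_id = 0
        self.objects = {}          # id -> last known centroid
        self.max_distance = max_distance

    def update(self, boxes):
        assigned = {}
        unmatched_ids = set(self.objects)
        for box in boxes:
            c = centroid(box)
            # find the nearest not-yet-matched existing object
            best_id, best_dist = None, self.max_distance
            for oid in unmatched_ids:
                d = math.dist(c, self.objects[oid])
                if d < best_dist:
                    best_id, best_dist = oid, d
            if best_id is None:
                best_id = self.next_id     # new object enters the frame
                self.next_id += 1
            else:
                unmatched_ids.discard(best_id)
            self.objects[best_id] = c
            assigned[best_id] = box
        # drop objects that disappeared this frame
        for oid in unmatched_ids:
            del self.objects[oid]
        return assigned
```

A box that moves a little between frames keeps its ID; a detection far from every known centroid gets a fresh one.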
|Face Detector||Use Object Detection to detect human faces in a real-time camera stream.|
|Image Classifier||Use Image Classification to classify a batch of images. The
classification labels can be changed by selecting different models.
|Instance Segmentation||Use Instance Segmentation to detect, segment and classify individual
objects in a real-time camera stream or a video file stream. The types
of objects detected can be changed by selecting different models.
|Object Detector||Use Object Detection to detect objects in a real-time camera stream.
The types of objects detected can be changed by selecting different models.|
|Pose Estimator||Use Pose Estimation to determine human poses in real time. Pose
estimation returns a list of key points indicating joints that can be used for
applications such as activity recognition and augmented reality.|
|Semantic Segmentation VOC||Perform Semantic Segmentation on a series of images using a model
trained on the Pascal VOC dataset.
|Hailo Real-Time Object Detector||This alwaysAI app performs object detection in real time on Hailo devices.|
|CUDA Applications||This alwaysAI application set uses a CUDA (Compute Unified Device
Architecture) interface, which allows CNN models to execute on the NVIDIA
GPUs found on Jetson devices. CUDA APIs give applications executing
on the CPU direct access to the GPU's virtual instruction set and
parallel computational elements.|
|NVIDIA Real-Time Object Detector||This alwaysAI app performs object detection in real time on NVIDIA Jetson devices.|
|NVIDIA Semantic Segmentation||This alwaysAI app performs semantic segmentation on a video clip of a
driving scene on NVIDIA Jetson devices.|
|TensorRT Applications||This alwaysAI application set uses TensorRT binaries, found in the
alwaysAI model catalog, to do local inferencing on an NVIDIA Jetson
device. Model names start with TRT and end with the name of the Jetson
device they should be run on, for example nano. These binaries are the
most efficient way to do inferencing on NVIDIA Jetson devices.
Currently alwaysAI supports TensorRT binaries for Jetson Nano,
Xavier NX, and AGX Xavier.|
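Given that naming convention (a TRT prefix plus a device-name suffix), picking the right binaries for a device is a simple string filter. The helper and the model IDs in the example are illustrative assumptions, not actual catalog entries.

```python
def models_for_device(model_ids, device):
    """Pick TensorRT model IDs matching a given Jetson device.

    Assumes (per the catalog's described convention) that a TensorRT
    model name starts with 'trt' and ends with the device name,
    e.g. 'publisher/trt_<model>_nano'. Illustrative only."""
    matches = []
    for model_id in model_ids:
        name = model_id.split("/")[-1].lower()
        if name.startswith("trt") and name.endswith(device.lower()):
            matches.append(model_id)
    return matches
```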
|Eyecloud Starter App||A basic application to get you started working with Eyecloud Cameras.|
|oak-cameras-repo||The OAK-1 and OAK-D cameras have built-in chips for artificial
intelligence, eliminating the security, latency, and cost issues found
in cloud-based inferencing. The cameras can determine where objects
are located in the space around them, as well as their trajectory, in
real time, using built-in Intel Myriad X chips to run the machine
learning inferencing locally. This repo shows you how to use these
cameras with the alwaysAI platform, providing a powerful, easy-to-use
option for creating commercial products.|
|Oak Starter Apps||This repository contains a collection of starter apps for using OAK
cameras with the alwaysAI platform.
|RealSense Object Detector||Use Object Detection to detect objects and the RealSense camera to get
the distances in meters to those objects in real time. The types of
objects detected can be changed by selecting different models.|
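One simple way to combine a detection box with a depth frame is to take the median depth over the pixels inside the box, which is robust to holes and outliers in the depth data. The sketch below uses a plain 2D list as a stand-in for a RealSense depth frame; the function name and approach are illustrative, not the app's exact method.

```python
import statistics

def object_distance(depth_frame, box):
    """Estimate distance to a detected object as the median depth
    (meters) over the pixels inside its bounding box.

    depth_frame: 2D list of per-pixel depths in meters (a stand-in
    for a RealSense depth frame); box: (x1, y1, x2, y2) in pixels.
    A reading of 0 means the sensor had no depth there and is skipped."""
    x1, y1, x2, y2 = box
    samples = [depth_frame[y][x]
               for y in range(y1, y2)
               for x in range(x1, x2)
               if depth_frame[y][x] > 0]
    return statistics.median(samples) if samples else None
```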
|Dash Interactive Streamer||This application provides a base for an interactive streamer using
Plotly Dash (https://dash.plotly.com). This app includes a video
stream, which depicts object detection results, as well as a table
that updates every 5 seconds with object detection results. You can
create more complex applications by adding components.
|Interactive Image Capture||This alwaysAI app builds off of the app found here to provide a
customized data collection server that allows the user to take
snapshots or record videos and store them for later use.|
|Multiple Camera Streams||This alwaysAI application demonstrates how to incorporate video
streams from multiple cameras into a single app.
|Simple Video Recorder||This alwaysAI app records and saves video from a USB webcam or CSI camera.|
|Video Streamer||This alwaysAI app performs real-time object detection and streams video
and text data to a Flask-SocketIO server that can be located on another device.|
|Video Streamer Using Docker Compose||This alwaysAI app performs real-time object detection and streams video
and text data to a Flask-SocketIO server running in another container
managed by Docker Compose.|
|ZMQ Video Streamer||This repo has two parts: an alwaysAI computer vision app which
performs realtime object detection and streams video via ZeroMQ to a
Flask server, and the server to display the video stream in a browser.
|Age and Gender Classifier||Use two image classifiers to classify a batch of human face images by
age and gender.
|Twilio Computer Vision App||A simple computer vision app that detects a chair and sends
a text message letting the user know when a chair has been detected.|
|April Tag Detector||This repo contains a basic application that shows you how to detect AprilTags.|
|Barcode Detector||Use Barcode Detection to detect and decode barcode(s) in a real-time camera stream.|
|Capacity Detector||This application is meant to detect people from a live or recorded
video stream and determine if they are entering or exiting a location.
That information can then be sent via a POST call to a URL endpoint.|
|Classifier based Image Sorter||This app can use an image classifier to detect any number of target
labels, then sort those images into 1 of 2 folders.|
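The sorting step itself needs nothing beyond the standard library: once the classifier has produced labels for an image, the file is moved into one folder or the other. This is a minimal sketch under that assumption; the function and folder names are hypothetical, not the app's actual code.

```python
import shutil
from pathlib import Path

def sort_image(image_path, predicted_labels, target_labels,
               match_dir, no_match_dir):
    """Move an image into one of two folders depending on whether the
    classifier predicted any of the target labels for it.
    Returns the destination path of the moved file."""
    hit = set(predicted_labels) & set(target_labels)
    dest = Path(match_dir if hit else no_match_dir)
    dest.mkdir(parents=True, exist_ok=True)
    return shutil.move(str(image_path), str(dest / Path(image_path).name))
```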
|Contraband Detector App||This app uses object detection and employs a correlation tracker to
monitor a frame for items deemed 'contraband'. For working or learning
at home, these items may be headphones, cell phones, books, etc. We
use two object detection models in order to maximize the number of
objects we can detect given a set of freely available detection
models. You can add or remove models and alter labels to suit your needs.|
|Crosswalk Object Detector||This app performs object detection on a video of a crosswalk in
simulated real time. The app will automatically perform inference on
the CUDA GPU on Jetson devices.|
|Darts||Play darts with alwaysAI and the Intel RealSense depth camera!|
|Distance Detector||This app will display the distance between two hands in inches. This
app uses a web camera as the video stream and uses reference object
metrics to approximate distance between two objects in an image.
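The reference-object trick works by calibrating a pixels-per-inch ratio from an object of known physical size, then converting pixel distances between detected points into inches. A minimal sketch of that arithmetic (the function names are illustrative, not the app's API):

```python
def pixels_per_inch(ref_box, ref_width_inches):
    """Calibrate using a reference object of known physical width.
    ref_box is (x1, y1, x2, y2) in pixels."""
    x1, _, x2, _ = ref_box
    return (x2 - x1) / ref_width_inches

def distance_inches(point_a, point_b, ppi):
    """Euclidean distance between two image points, converted to inches
    using the calibrated pixels-per-inch ratio."""
    dx = point_a[0] - point_b[0]
    dy = point_a[1] - point_b[1]
    return (dx * dx + dy * dy) ** 0.5 / ppi
```

This assumes both hands lie roughly in the same plane as the reference object; depth differences will skew the estimate.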
|Face Counter||Use Object Detection and Correlation Tracking to count unique human
faces in the frame on a real-time camera stream.
|Face Swapper||Swaps the faces of people in a video stream when more than one person
is in the frame.
|Facial Landmarks with DLIB||Uses dlib to do face detection and find facial landmarks.|
|Gesture Controlled Home||This app is designed to let you use your own custom gesture model to
activate voice-activated technology, such as Alexa, Google Home, or
Siri, using computer vision. After training your own gesture-detection
model using hand signals of your choice, you can use these signals to
play audio files that give commands to smart tech.|
|Hacky Hour 08/06/2020||Source code for the August 6, 2020 Hacky Hour! Demonstrates how a
computer vision application can be used in manufacturing to confirm
that a worker has chosen the correct component for assembly of a
product. The CV application finds the ROI associated with an
illuminated LED on a tray shelf, then checks whether the worker's
hand has picked up the correct component, using a hand detection
model and an overlapping-rectangles check between the ROI and the
hand detection bounding box. The hand detector's bounding box turns
green when the correct component is chosen and remains red until then.|
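The overlapping-rectangles check reduces to an axis-aligned intersection test between the LED's ROI and the hand's bounding box. A minimal sketch (the `pick_color` helper and BGR color values are illustrative assumptions, not the app's actual code):

```python
def boxes_overlap(a, b):
    """True if two (x1, y1, x2, y2) axis-aligned boxes intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def pick_color(led_roi, hand_box):
    """Green (BGR) when the hand box reaches the illuminated LED's ROI,
    red otherwise -- mirroring the behavior described above."""
    return (0, 255, 0) if boxes_overlap(led_roi, hand_box) else (0, 0, 255)
```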
|Jumping Jacks Counter||This app counts jumping jacks for a single person on a real-time video
stream or a video file. By default the app uses a pose estimation
model suited to NVIDIA Jetson devices.|
|Kalman Tracking Hacky Hour||Source for the May 12th Hacky Hour! This app demonstrates how to use
the Kalman Tracker to track objects in motion.|
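A Kalman tracker smooths noisy detections by predicting where an object should be (assuming roughly constant velocity) and blending that prediction with each new measurement. The sketch below is a minimal one-dimensional constant-velocity filter in plain Python, an illustrative stand-in rather than the tracker the Hacky Hour app actually uses; in practice you would run one instance per tracked coordinate.

```python
class Kalman1D:
    """Minimal constant-velocity Kalman filter for one coordinate of a
    tracked object. q = process noise, r = measurement noise (both
    illustrative defaults)."""

    def __init__(self, x0, q=1e-3, r=0.25):
        self.x, self.v = x0, 0.0            # state: position, velocity
        self.p = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q, self.r = q, r

    def step(self, z, dt=1.0):
        # Predict: x' = x + v*dt, P' = F P F^T + Q
        x_pred = self.x + self.v * dt
        p = self.p
        p00 = p[0][0] + dt * (p[0][1] + p[1][0]) + dt * dt * p[1][1] + self.q
        p01 = p[0][1] + dt * p[1][1]
        p10 = p[1][0] + dt * p[1][1]
        p11 = p[1][1] + self.q
        # Update with the position measurement z (H = [1, 0])
        s = p00 + self.r
        k0, k1 = p00 / s, p10 / s
        y = z - x_pred                      # innovation
        self.x = x_pred + k0 * y
        self.v = self.v + k1 * y
        self.p = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x
```

Fed a steadily moving measurement, the filter's position estimate converges on the track and its velocity estimate approaches the true speed.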
|License Plate Tracker App||This app uses object detection and employs a centroid tracker to
monitor a frame for license plates and vehicles. Please note that the
model used in this app does not detect the content or read the
characters from a license plate. Pair this app with an OCR library to
read the plate characters.|
|License Plate OCR App||This app uses object detection and employs a centroid tracker to
monitor a frame for license plates and vehicles, and then uses an OCR
library to read the license plates. This app will work on development
machines, such as macOS and Windows, but not on ARM64 architectures
due to the dependency on PyTorch. Note that we have provided the model
file that EasyOCR provides at runtime to avoid model download issues,
and only the files necessary for the English option. If you would like
to explore other uses of the EasyOCR library, please see the source:
https://github.com/JaidedAI/EasyOCR/tree/393c2d966aa37160bcfdef5f55ba50daff994409.
There is a link to the JaidedAI Model Hub in the repository's README.|
|Classifier based Image Sorter||This app can use any number of image classifiers to detect any number
of target labels, then sort those images into 1 of 2 folders if the
target label is detected.|
|Multiple Object Detectors||This app utilizes two object detection models, with the option of
adding additional detection models. This may be helpful for combining
models that have very different label libraries. In this example, one
model detects numerous objects of small to medium size, and the other
has a more limited library but detects some larger objects, such as
airplanes, trains, and sofas. As there is some overlap between the
models (for instance, they both detect people), you can compare their
prediction confidences. Additionally, the output for each model
appears in a separate video frame.|
|Object Detection Server||This server app sets up an alwaysAI model to run on a device to which
images can be sent for inference. The server then returns the
marked-up image to the client, which stores it.|
|Object Detection & Classifier Sample App||This app enables two alwaysAI models to be used simultaneously: one to
detect facial objects, and a second to classify the detected faces in
terms of age ranges.|
|Package Detection System||This app is designed to let you use your own custom package detection
model to detect when packages arrive and when they are removed. You'll
need an alwaysAI account and to have alwaysAI installed.|
|Pedestrian Segmentation||This app expands on the semantic_segmentation_cityscape starter app
to build an app that segments pedestrians and bicyclists in each frame
of a video, and performs actions based on the results. The full
tutorial can be found on the alwaysAI blog.
|People Counter App||This app expands on the alwaysAI face counter starter app to both
count the number of non-unique people (same person will be counted
multiple times if they exit and re-enter the frame) detected and to
display basic time metrics on those appearances.
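Basic time metrics per appearance only need entry and exit timestamps for each tracked ID. The sketch below is an illustrative stdlib-only helper, not the People Counter app's actual code; the injectable `clock` parameter is an assumption added to make the logic testable.

```python
import time

class AppearanceLog:
    """Track how long each (non-unique) detected person stayed in frame.
    The same person re-entering the frame counts as a new appearance."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.entered = {}       # active person id -> entry time
        self.durations = []     # completed appearance lengths, seconds

    def enter(self, pid):
        self.entered[pid] = self.clock()

    def exit(self, pid):
        self.durations.append(self.clock() - self.entered.pop(pid))

    @property
    def total_count(self):
        # completed appearances plus people currently in frame
        return len(self.durations) + len(self.entered)

    def average_duration(self):
        if not self.durations:
            return 0.0
        return sum(self.durations) / len(self.durations)
```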
|Classify Poses with Pose Estimation and Support Vector Machines (SVM)||This repo contains an alwaysAI app and Jupyter Notebook to classify
poses using Pose Estimation from alwaysAI and machine learning tools
from scikit-learn. This tutorial will walk through training an SVM to
classify a few common yoga poses.|
|Posture Corrector with Pose Estimation Example||This app uses pose estimation to help users correct their posture,
alerting them with a sound when they are slouching, leaning, or
tilting their head down, along with a printed message with specific
suggestions for correcting posture. A scale variable is used to
adjust the keypoint measurements for different individuals,
accounting for greater or smaller natural distances between the
keypoints used to detect poor posture.|
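The scale idea can be sketched as a threshold check: compare a live keypoint distance against a calibrated upright baseline, scaled per person. The function, keypoint choice, and thresholds below are illustrative assumptions, not the app's actual rules.

```python
def is_slouching(neck, mid_hip, baseline_height, scale=1.0, tolerance=0.15):
    """Flag slouching when the vertical neck-to-hip distance shrinks
    noticeably relative to a calibrated upright baseline.

    Keypoints are (x, y) pixels with y increasing downward; `scale`
    adjusts the baseline for different body sizes, and `tolerance` is
    the allowed fraction of shrinkage before alerting. Illustrative
    stand-in for the app's keypoint checks."""
    height = mid_hip[1] - neck[1]
    return height < baseline_height * scale * (1.0 - tolerance)
```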
|QR Code Detector||Use QR Code Detection to detect and decode QR Code(s) in a real-time camera stream.|
|Object Detector and RTSP Server||Use Object Detection and an RTSP server to stream object detection
inference over RTSP. The types of objects detected can be changed by
selecting different models.|
|Semantic Segmentation Cityscapes||Use Semantic Segmentation to determine a class for each pixel of an
image. The classes of objects detected can be changed by selecting
different models. This particular starter application uses a model
trained on the Cityscapes Dataset, which focuses on semantic
understanding of urban street scenes and is a favorite dataset for
building autonomous driving machine learning models.|
|Simple Object Counter||Use Object Detection to count the number of each type of object in the
frame of a real-time video stream.
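Counting objects per label for one frame is a one-line tally over the model's predictions. The sketch assumes predictions arrive as (label, confidence) pairs, which is an illustrative shape rather than the SDK's actual result type.

```python
from collections import Counter

def count_objects(predictions):
    """Tally detections per label for one frame.

    predictions: iterable of (label, confidence) pairs, a stand-in
    for an object-detection model's per-frame output."""
    return Counter(label for label, _confidence in predictions)
```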
|Virtual Green Screen with Edge Smoothing Example||This app expands on an app that uses semantic segmentation to segment
a person out from background noise in a video stream and replace the
background with an image or blur it out, demonstrating how to
subsequently smooth segmentation boundaries. This app also
demonstrates how to separate your app configuration information into a
separate JSON file. For more details on this aspect of the app, please
see the original blog post.|
|Snapshot Security Camera||This app expands on the realtime_object_detector starter app to
build a simple security camera app that takes a picture of each new
person who enters the frame. The full tutorial can be found on the
alwaysAI blog.|
|Spatial AI Webinar Apps||Spatial AI is the blending of machine learning inferencing with
geometric data from sensors, enabling robots, drones or autonomous
vehicles to better understand the world around them.
|Temperature Tracker App||This app demonstrates how to use a temperature tracking utility class
for the Raspberry Pi.
|Thermal Imaging||This repo contains applications that demonstrate how to use the FLiR
Lepton thermal camera. The FLiR sensor uses a microbolometer array
whose resistance changes as it heats up. By measuring resistance, the
sensor can determine the temperature of detected objects. The
sensor's firmware creates a colored image that encodes the resistance
data into a heat map, which can be processed using the same techniques
as visible light.|
|Vaccine Monitoring Apps||A series of applications to help monitor a waiting room, log
vaccination events, monitor a post-vaccination room, and write out
event logs to help inform vaccination center logistics.|
|Virtual Green Screen||This app uses semantic segmentation to segment a person out from
background noise in a video stream and replace the background with an
image or blur it out. This app builds off of a methodology for
segmenting out areas of interest, which can be found here. This app
also demonstrates how to separate your app configuration information
into a separate JSON file. For more details on this aspect of the app,
please see the original blog.|
|YMCA APP||In this example we'll use a simple set of cases to determine whether
someone is making a Y, M, C, or A in the image, then display the
letter on the screen, creating a fun virtual experience.|
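A "simple set of cases" over pose keypoints might look like the sketch below: compare wrist and shoulder positions to decide which letter shape the arms form. The thresholds and rules are illustrative guesses, not the app's actual logic, and the C case (a sideways arc needing elbow keypoints) is omitted for brevity.

```python
def classify_letter(l_wrist, l_shoulder, r_shoulder, r_wrist):
    """Rough Y/M/A rules from wrist vs. shoulder keypoints.

    Coordinates are (x, y) pixels with y increasing downward. These
    thresholds are illustrative, not the app's actual case logic."""
    l_up = l_wrist[1] < l_shoulder[1]      # left hand above shoulder
    r_up = r_wrist[1] < r_shoulder[1]      # right hand above shoulder
    shoulder_span = abs(l_shoulder[0] - r_shoulder[0])
    wide = abs(l_wrist[0] - r_wrist[0]) > 1.5 * shoulder_span
    if l_up and r_up and wide:
        return "Y"          # both arms raised and spread apart
    if l_up and r_up:
        return "A"          # arms raised with hands close overhead
    if not l_up and not r_up:
        return "M"          # hands down near shoulder level
    return None             # no recognized letter
```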