Computer Vision on Edge Vs. Cloud

By Komal Devjani • May 12, 2020


Computer vision (CV) is a subset of artificial intelligence that will have an enormous impact on many industries such as retail, health care, manufacturing, agriculture, and smart cities. When implementing computer vision within a solution, technical and strategic stakeholders need to decide which computing framework to use: edge computing, or cloud computing. 

Edge provides several advantages over cloud, including greater reach, speed, and privacy, and, most dramatically, lower operational cost. An edge-based platform like alwaysAI enables easy and affordable deployment of CV on the edge.

What is the difference between edge computing and cloud computing? 

In cloud computing, devices send visual data to the cloud for analysis; the cloud then returns appropriate responses to the device for further action. This round trip can introduce latency in system response time. 

Edge computing, on the other hand, is a form of distributed computing in which computation occurs outside of the cloud, at the “edge” of the network rather than on a centralized server. It is supported by devices that capture visual data and perform computations locally, close to the source of the data, making that data immediately actionable. Common examples of edge devices are cameras, drones, robots, and sensors, as well as development boards like the Raspberry Pi or NVIDIA Jetson.
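To illustrate the difference between the two data paths, here is a minimal Python sketch that simulates them. Everything here is a hypothetical stand-in, not a real API or a measurement: `run_model` is a stub for a trained model, the frame is placeholder bytes, and the 100 ms network delay is an assumed round-trip cost for the cloud path.

```python
import time

def run_model(frame):
    """Inference stub; a real deployment would run a trained CV model here."""
    return {"label": "person", "confidence": 0.9}

def cloud_round_trip(frame, network_latency_s=0.10):
    """Cloud path: the frame travels to a data center and the result travels back."""
    time.sleep(network_latency_s)   # upload frame (simulated network delay)
    result = run_model(frame)       # inference happens in the cloud
    time.sleep(network_latency_s)   # download response (simulated network delay)
    return result

def edge_inference(frame):
    """Edge path: inference happens on the device, next to the data source."""
    return run_model(frame)

frame = b"\x00" * 100  # placeholder for raw image bytes from a camera

start = time.perf_counter()
cloud_round_trip(frame)
cloud_time = time.perf_counter() - start

start = time.perf_counter()
edge_inference(frame)
edge_time = time.perf_counter() - start

print(f"cloud: {cloud_time:.3f}s  edge: {edge_time:.3f}s")
```

The model call is identical in both paths; the difference is purely where it runs, which is why the edge path avoids the network delay entirely.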

Today, there are over 4 billion edge devices capable of performing real-time inferencing away from the cloud. This is a huge source of computational capacity, largely untapped, with distinct advantages over the cloud.  

What are the advantages of using edge computing? 

There are three major advantages that edge computing has over cloud computing for CV applications:

  1. Edge has greater reach and is faster. The edge has no dependency on a central data center or network bandwidth, which allows for faster response times and for computer vision operations in remote locations that lack the necessary network infrastructure. This enables the development of new IoT technologies in industries such as agriculture, waste management, and human search and rescue, and it supports time-sensitive use cases such as robotic surgery, autonomous driving, and human fall detection.
  2. Edge is less risky and more private. Edge computing distributes the risk of exposure across multiple devices, and it can perform all processing disconnected from a central server, a more secure and private architecture. For example, edge devices can be installed inside a person’s home and can process and act on real-time data without relying on a shared cloud service that could compromise the privacy of their day-to-day activity.
  3. Edge is significantly less expensive. Computer vision is a powerful technology, but it is complex and can become very expensive to build, deploy, and maintain. For example, cloud companies charge for inferencing per endpoint, per minute. This may suit organizations that want to pay on an ‘as needed’ basis, but it becomes enormously burdensome for organizations that demand large amounts of real-time processing, such as a smart city or a hospital with many cameras and sensors running 24 hours a day.
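To make the ‘per endpoint, per minute’ pricing in point 3 concrete, here is a back-of-the-envelope calculation for a 20-endpoint deployment running around the clock. The $0.01-per-endpoint-minute rate is purely hypothetical, chosen for illustration, and is not a quote from any provider.

```python
# Hypothetical cloud inferencing rate, for illustration only (not a real quote).
RATE_CENTS_PER_ENDPOINT_MINUTE = 1   # $0.01 per endpoint, per minute

endpoints = 20                       # e.g., cameras running 24 hours a day
minutes_per_year = 24 * 60 * 365     # continuous operation

annual_cost_dollars = (
    RATE_CENTS_PER_ENDPOINT_MINUTE * endpoints * minutes_per_year / 100
)
print(f"${annual_cost_dollars:,.0f} per year")  # → $105,120 per year
```

Even at a penny per endpoint-minute, continuous operation adds up to six figures annually, which is why usage-based cloud pricing becomes burdensome for always-on deployments.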

And this doesn’t even take into account the significant labor costs associated with developing and deploying a computer vision app. Top-quality CV engineers are in short supply and expensive; going to an outside development firm is equally costly (and keeps the application knowledge outside the hiring firm). All in all, developers need a platform that can easily and affordably help them develop a CV app, deploy it to edge devices, and make changes as needed over time.

The table below shows the difference in cost, for a typical computer vision implementation*, of building and deploying a CV application leveraging an outside consulting firm, developing in-house, or by using the alwaysAI platform:

Costs of Computer Vision Implementation Comparison Chart

* A commercial-ready CV deployment on 20 end-points (e.g., cameras) running 24 hours a day

As the table shows, using the alwaysAI platform to run a computer vision application on an edge device can generate up to 12X in annual cost savings over the cloud, along with speed and privacy benefits. This opens up the power of computer vision on the edge to a broad array of developers and companies, and we are proud to be part of this movement.

For a detailed look at the enormous cost advantages of edge over cloud, and the alwaysAI platform, download our white paper below.
