Covid Mask Detection

Covid19 Mask Detector on Live Video Stream

Igor Rendulic

TL;DR

This post is an adaptation of an article written by Adrian Rosebrock titled COVID-19: Face Mask Detector with OpenCV, Keras/TensorFlow, and Deep Learning. If you’re interested in how to train and build a COVID-19 mask detector, please refer to the aforementioned article. Here we simply adapt Adrian’s code to work on Chrysalis Cloud with any remote camera capable of RTMP streaming.

Mask Detection

About Covid19 Mask Detector

This post demonstrates one possible way to deploy such machine learning algorithms, which can potentially help ensure your safety and the safety of others, into the wild.

In the original tutorial Adrian also describes how to train the neural network. Here we focus only on “Phase 2”: loading the mask detector, performing face detection, and then classifying each face as with_mask or without_mask.

Prerequisites

Creating an endpoint on Chrysalis Cloud

To create an endpoint you’ll need a developer account on Chrysalis Cloud.

Once logged in, create your first RTMP stream. It should take about a minute or less. Chrysalis Cloud offers a free one-month trial for one camera.

Install OBS Studio

In this example we will use OBS Studio for development. You could also develop with OpenCV VideoCapture and later move onto Chrysalis Cloud. You will see from the code example how easily we can transition from a local solution to a cloud solution using the Chrysalis Cloud SDK.
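For local experiments, a minimal OpenCV VideoCapture loop (independent of the Chrysalis SDK; the webcam index and RTMP URL below are placeholders) might look like this:

import cv2

# read from a local webcam, or swap in a stream URL such as
# cv2.VideoCapture("rtmp://your-local-obs-endpoint/live/stream")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()           # grab the next frame from the local source
    if not ok:
        break                        # stream ended or could not be read
    cv2.imshow("preview", frame)     # display the raw frame
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break                        # press "q" to quit

cap.release()
cv2.destroyAllWindows()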

Read this if you need more information on how to stream live video to Chrysalis Cloud from OBS Studio.

Alternative streaming methods:

Install Anaconda

Since the Chrysalis Cloud SDK deals with somewhat complex dependencies, the easiest and recommended way is to use an Anaconda environment.

Find out how to install it here

Python Code

Clone our example repository:

git clone https://github.com/chryscloud/chryscloud-ai-examples.git

Create and activate the conda environment (run the commands from the directory that contains environment.yml; this may be the repository root or the mask-detection example’s subfolder):

cd chryscloud-ai-examples
conda env create -f environment.yml
conda activate chryscovid

Run

Get Chrysalis Cloud SDK streaming credentials for your connected camera (OBS Studio in this case) and export environment variables:

export chrys_port=1111
export chrys_host=url.at.chrysvideo.com
export chrys_password=mypassword
export chrys_cert=pathtomycertificate.cer

Run the example:

python mask_detection.py

Code Walk-through

Imports

In our case we’re using Keras/TensorFlow 2.1.0 to run the pre-trained model by Adrian. All the associated files are already in the GitHub repository and were downloaded along with the code when you cloned it.

Lines 13-16 are used when we don’t want to allocate the complete GPU memory of our physical device. We can remove those lines if we don’t encounter the error: could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED.
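For reference, a minimal sketch of the usual TensorFlow 2.x “memory growth” pattern, which is most likely what those lines do (check mask_detection.py for the exact code):

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    # allocate GPU memory on demand instead of grabbing it all at start-up
    tf.config.experimental.set_memory_growth(gpu, True)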

Another option is to experiment with limiting GPU memory to e.g. 1024 MB with TensorFlow 2.0:


import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # restrict the first GPU to a 1024 MB virtual device
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
    except RuntimeError as e:
        # virtual devices must be configured before GPUs are initialized
        print(e)

Main

The validate_param method simply verifies that all expected environment variables are set.
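A hypothetical sketch of such a check (the variable names match the exports above; the actual validate_param in the repository may differ):

import os
import sys

REQUIRED_VARS = ["chrys_port", "chrys_host", "chrys_password", "chrys_cert"]

def validate_param():
    # abort early if any of the expected environment variables is missing
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        sys.exit("Missing environment variables: " + ", ".join(missing))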

On line 16 we connect to our live camera stream.

Lines 19-23 load Adrian’s pre-trained model, after which we enter the infinite while loop. There we perform “mask”/“no mask” detection on a single frame resized down to 400 px in width, while preserving the aspect ratio.
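A rough sketch of the model loading and the 400 px resize, following Adrian’s original approach; the face-detector and mask-model file names are assumptions based on his article, and the repository bundles the actual files:

import cv2
import numpy as np
import imutils
from tensorflow.keras.models import load_model

# load the face detector (OpenCV DNN) and the Keras mask classifier
face_net = cv2.dnn.readNet("face_detector/deploy.prototxt",
                           "face_detector/res10_300x300_ssd_iter_140000.caffemodel")
mask_net = load_model("mask_detector.model")

# resize an incoming frame to 400 px width while preserving the aspect ratio
frame = np.zeros((720, 1280, 3), dtype=np.uint8)   # stand-in for a live frame
frame = imutils.resize(frame, width=400)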

The rest of the code is for display purposes only. We use the information returned from the model (bounding boxes and mask predictions) to draw the model’s results with OpenCV.
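A hedged sketch of that drawing step, in the spirit of Adrian’s article; the frame, boxes, and probabilities below are stand-in values for what the detection step returns:

import cv2
import numpy as np

frame = np.zeros((225, 400, 3), dtype=np.uint8)   # stand-in for the resized frame
locs = [(50, 50, 150, 150)]                        # stand-in bounding box (x1, y1, x2, y2)
preds = [(0.97, 0.03)]                             # stand-in (mask, without_mask) probabilities

# draw each detection: green for "Mask", red for "No Mask"
for (box, pred) in zip(locs, preds):
    (start_x, start_y, end_x, end_y) = box
    (mask, without_mask) = pred

    label = "Mask" if mask > without_mask else "No Mask"
    color = (0, 255, 0) if label == "Mask" else (0, 0, 255)
    text = "{}: {:.2f}%".format(label, max(mask, without_mask) * 100)

    cv2.putText(frame, text, (start_x, start_y - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.45, color, 2)
    cv2.rectangle(frame, (start_x, start_y), (end_x, end_y), color, 2)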

Possible improvements

It might be worthwhile mentioning that running GPU machines is expensive. To further reduce cost and increase the number of cameras, we can run the COVID-19 Mask Detector with Chrysalis Edge Proxy. Chrysalis Edge Proxy can easily collect video streams from multiple RTSP cameras on one machine (for instance a GPU-enabled machine).

I haven’t benchmarked this ML model on my graphics card, a GeForce RTX 2060, but it seems to perform well at 20 FPS. If we requested only I-frames, which usually arrive every 2 seconds on RTSP cameras, we could run our model on a single GPU machine for about 40 cameras simultaneously.
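A quick back-of-the-envelope check of that estimate:

# capacity estimate, assuming the model sustains ~20 FPS and each camera
# contributes one I-frame every 2 seconds
model_fps = 20          # frames per second the GPU can process
iframe_interval_s = 2   # seconds between I-frames on a typical RTSP camera

cameras = model_fps * iframe_interval_s
print(cameras)          # -> 40 cameras on a single GPU machine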

Resources
