Chrysalis Cloud facial landmarks using OpenCV

Facial Landmarks example with dlib, OpenCV in the Chrysalis Cloud

Igor Rendulic


In this post we show how to use the Chrysalis Cloud SDK by detecting facial landmarks. The goal is to bring facial landmark detection into the cloud. Building your own solution around OpenCV's VideoCapture() would require managing the video stream separately, usually within its own thread and a shared queue. You also need to keep in mind that we're dealing with H.264 streams. Read why H.264 is difficult to deal with here. We demonstrate that VideoCapture() can simply be replaced with Chrysalis' VideoLatestImage().
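To make the "thread and a shared queue" point concrete, here is a minimal sketch of the plumbing you would otherwise have to write around VideoCapture(). The frame source below is a stand-in counter so the sketch runs without a camera; with OpenCV you would call cap.read() inside the reader thread instead.

```python
# A reader thread pushes frames into a shared queue while the main thread
# consumes them -- the bookkeeping Chrysalis' VideoLatestImage() hides.
import queue
import threading

frame_queue = queue.Queue(maxsize=10)
STOP = object()  # sentinel to signal the end of the stream

def capture_loop(num_frames):
    # In a real pipeline: cap = cv2.VideoCapture(url); ok, frame = cap.read()
    for i in range(num_frames):
        frame_queue.put(f"frame-{i}")
    frame_queue.put(STOP)

def consume():
    frames = []
    while True:
        frame = frame_queue.get()
        if frame is STOP:
            break
        frames.append(frame)
    return frames

reader = threading.Thread(target=capture_loop, args=(5,), daemon=True)
reader.start()
received = consume()
reader.join()
print(received)
```

This is only the happy path; a production version would also handle reconnects and dropped frames, which is exactly the complexity the SDK takes off your hands.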

Facial Landmarks

About facial landmarks

Facial landmarks are usually the second step in facial recognition. They are used to scale and align faces for more accurate comparisons in facial recognition. They also have some standalone use cases, such as drowsiness detection, face swapping, or Snapchat-like filters.

This is not a detailed tutorial on how facial landmarks work, although we'll take a quick look at what they are. There are many resources out there if you're interested in the hows and whys of facial recognition and facial landmark detection (check the resources at the bottom).


The dlib pre-trained model essentially tries to localize and label the following facial regions, producing the estimated locations of 68 point coordinates:

  • The left eye is accessed with points [42, 47].
  • The mouth is accessed through points [48, 67].
  • The left eyebrow is accessed through points [22, 26].
  • The nose is accessed using points [27, 35].
  • The right eyebrow is accessed through points [17, 21].
  • The right eye is accessed using points [36, 41].
  • And the jaw is accessed via points [0, 16].
Facial landmarks, used to label and identify key facial attributes in an image
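The index ranges above can be collected into a small lookup table. This is a sketch using the standard 0-indexed layout of the 68-point predictor (note the nose spans points 27-35, so the seven regions together cover all 68 points exactly once); the dict and helper names are our own, not part of dlib.

```python
# Inclusive (start, end) index ranges for each facial region in the
# 68-point dlib shape predictor output.
LANDMARK_REGIONS = {
    "jaw": (0, 16),
    "right_eyebrow": (17, 21),
    "left_eyebrow": (22, 26),
    "nose": (27, 35),
    "right_eye": (36, 41),
    "left_eye": (42, 47),
    "mouth": (48, 67),
}

def region_points(landmarks, region):
    """Slice one facial region out of the full 68-point list."""
    start, end = LANDMARK_REGIONS[region]
    return landmarks[start:end + 1]

# Sanity check: the regions together cover all 68 points exactly once.
covered = sorted(p for s, e in LANDMARK_REGIONS.values() for p in range(s, e + 1))
print(len(covered))  # 68
```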



Creating an endpoint on Chrysalis Cloud

To create an endpoint you'll need a developer account on Chrysalis Cloud.

Once logged in, create your first RTMP stream. It should take about a minute or less. Chrysalis Cloud offers a free one-month trial for one camera.

Install OBS Studio

In this example we will use OBS Studio for development. You could also develop with OpenCV's VideoCapture and later move to Chrysalis Cloud. The code example shows how easily you can transition from a local solution to a cloud solution using the Chrysalis Cloud SDK.

Read this if you need more information on how to stream live video to Chrysalis Cloud from OBS Studio.


Install Anaconda

Since the Chrysalis Cloud SDK deals with somewhat complex dependencies, the easiest and recommended approach is to use an Anaconda environment. Anaconda also comes in handy later when we need something more complicated, such as GPU acceleration or different versions of TensorFlow.

Find installation instructions here.

Python Code

Clone the development repository:

git clone

Download the pre-trained shape predictor model into the 'data' folder and unzip it there:

cd facial-landmarks/data
bunzip2 shape_predictor_68_face_landmarks.dat.bz2

Create and activate conda environment:

cd ..
conda env create -f environment.yml
conda activate chrysface

Get the Chrysalis Cloud SDK streaming credentials for your connected camera and export them as environment variables:

export chrys_port=1111
export chrys_password=mypassword
export chrys_cert=pathtomycertificate.cer

Run the example code:


Code walk-through


We're importing some of the usual suspects for computer vision here. The library that stands out is perhaps imutils, which includes a few OpenCV convenience functions.

As we'll see later, we'll use it mainly for two conversions:

  1. shape_to_np (converts the 68 predicted shapes from dlib to a list of (x, y) coordinate tuples)
  2. rect_to_bb (converts a dlib rectangle to an (x, y, width, height) bounding box for display purposes with OpenCV)
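Roughly, those two helpers do the following. The tiny Point/Shape/Rect classes below are stand-ins for dlib's full_object_detection and rectangle types so the sketch runs without dlib installed; the real imutils functions operate on the dlib objects directly.

```python
from collections import namedtuple

import numpy as np

Point = namedtuple("Point", ["x", "y"])

class Shape:  # stand-in for dlib.full_object_detection
    def __init__(self, points):
        self.points = points
    def num_parts(self):
        return len(self.points)
    def part(self, i):
        return self.points[i]

class Rect:  # stand-in for dlib.rectangle
    def __init__(self, left, top, right, bottom):
        self._l, self._t, self._r, self._b = left, top, right, bottom
    def left(self): return self._l
    def top(self): return self._t
    def right(self): return self._r
    def bottom(self): return self._b

def shape_to_np(shape, dtype="int"):
    """Predicted shapes -> (N, 2) array of (x, y) coordinates."""
    coords = np.zeros((shape.num_parts(), 2), dtype=dtype)
    for i in range(shape.num_parts()):
        coords[i] = (shape.part(i).x, shape.part(i).y)
    return coords

def rect_to_bb(rect):
    """dlib-style rectangle -> (x, y, w, h) box for OpenCV drawing."""
    x, y = rect.left(), rect.top()
    return (x, y, rect.right() - x, rect.bottom() - y)

shape = Shape([Point(10, 20), Point(30, 40)])
print(shape_to_np(shape))              # [[10 20] [30 40]]
print(rect_to_bb(Rect(5, 5, 25, 45)))  # (5, 5, 20, 40)
```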

We also prepare dlib's HOG-based face detector to extract faces from the image, and load a pre-trained facial landmark predictor that returns 68 (x, y)-coordinates mapping to facial structures.


The validate_params method simply checks whether the environment variables have been set.
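A minimal version of such a check might look like the following; the repository's actual method may differ, and the three variable names are taken from the export step above.

```python
import os

REQUIRED_VARS = ("chrys_port", "chrys_password", "chrys_cert")

def validate_params():
    """Raise if any required Chrysalis environment variable is unset."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise EnvironmentError(
            "missing environment variables: " + ", ".join(missing))
    return True

# Placeholder values so the sketch runs; in practice you export real ones.
os.environ.setdefault("chrys_port", "1111")
os.environ.setdefault("chrys_password", "mypassword")
os.environ.setdefault("chrys_cert", "pathtomycertificate.cer")
print(validate_params())  # True
```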

The main method connects to Chrysalis Cloud using the preset environment variables and runs an infinite loop requesting an image/frame from the Chrysalis streaming server.

It's worth noting that the Chrysalis Cloud SDK may occasionally return a None object when the stream is stalled due to, e.g., network latency. Therefore a simple check is recommended.
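The defensive loop boils down to the pattern below. Here fake_video_latest_image is a stand-in for the SDK call, not the real API; it simulates a stream that occasionally stalls by yielding None.

```python
# Skip iterations where the (simulated) stream returned no frame.
def fake_video_latest_image(frames=iter(["f0", None, "f1", None, "f2"])):
    return next(frames, None)

processed = []
for _ in range(5):
    img = fake_video_latest_image()
    if img is None:  # stream stalled, e.g. due to network latency
        continue
    processed.append(img)

print(processed)  # ['f0', 'f1', 'f2']
```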

The returned image is a ChImage object. Check the ChImage object here.


Facial landmarks

This method is responsible for returning a single augmented frame with the detected faces and their landmarks.

After converting the image to grayscale, we run it through dlib's face detector. We then iterate over the detected faces and run each through dlib's pre-trained facial landmark predictor.
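For reference, the grayscale step is just a weighted per-pixel sum; this sketch approximates what OpenCV's BGR-to-gray conversion computes (Y = 0.299 R + 0.587 G + 0.114 B) using NumPy only.

```python
import numpy as np

def bgr_to_gray(frame):
    """Approximate cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)."""
    b, g, r = frame[..., 0], frame[..., 1], frame[..., 2]
    return (0.114 * b + 0.587 * g + 0.299 * r).round().astype(np.uint8)

# One-row test image: a white pixel and a black pixel (BGR order).
frame = np.array([[[255, 255, 255], [0, 0, 0]]], dtype=np.uint8)
print(bgr_to_gray(frame))  # [[255   0]]
```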

The rest of the code is for display purposes:

  1. Converting dlib's results to a face bounding box
  2. Displaying the face number
  3. Drawing small circles (68 of them) on the frame itself to indicate predicted facial landmarks


We've used the Chrysalis Cloud SDK to perform facial landmark detection on video streamed to the cloud with OBS Studio.

Our pipeline consists of:

  1. Creating a streaming server on the Chrysalis Cloud
  2. Simulating an RTMP camera from OBS Studio
  3. Ingesting individual video frames into dlib using Chrysalis Cloud SDK
  4. Displaying the results with OpenCV


