
Face Landmark Detection Demo Tutorial

Step 1: create a working directory. In a terminal, do the following:


mkdir ML
cd ML
mkdir CNNTraining
cd CNNTraining

Step 2: install dlib and the necessary packages. In a terminal, do the following:


pip install dlib
pip install ipython
pip install imutils
pip install opencv-python
pip install numpy

Step 3: download the training data

http://dlib.net/files/data/ibug_300W_large_face_landmark_dataset.tar.gz

The file is about 1.8 GB, so the download will take a few minutes. You can download it to
the ML folder and then extract it into your CNNTraining folder. The extracted directory
contains 4 sub-directories, each holding a training dataset, along with 3 files with the
.xml extension.
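
If you would rather script the download and extraction than do it by hand, a minimal
Python sketch is shown below (the script name download_data.py and the target folder are
assumptions; the URL is the one given above). Run it from the ML folder:

# download_data.py -- a sketch; downloads the archive and extracts it into CNNTraining/
import os
import tarfile
import urllib.request

URL = "http://dlib.net/files/data/ibug_300W_large_face_landmark_dataset.tar.gz"
ARCHIVE = "ibug_300W_large_face_landmark_dataset.tar.gz"

if not os.path.exists(ARCHIVE):
    print("[INFO] downloading (about 1.8 GB, this may take a while)...")
    urllib.request.urlretrieve(URL, ARCHIVE)

print("[INFO] extracting into CNNTraining/ ...")
with tarfile.open(ARCHIVE, "r:gz") as tar:
    tar.extractall("CNNTraining")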

Step 4: set the training options. In a terminal, do the following (>> marks the system output):
ipython
mod = dlib.shape_predictor_training_options()
type(mod)
>>dlib.shape_predictor_training_options
print("[INFO] setting shape predictor options...")
>>[INFO] setting shape predictor options...
mod = dlib.shape_predictor_training_options()
mod.tree_depth = 4
mod.nu = 0.1
mod.cascade_depth = 15
mod.feature_pool_size = 400
mod.num_test_splits = 50
mod.oversampling_amount = 5
mod.oversampling_translation_jitter = 0.1
mod.be_verbose = True
import multiprocessing
mod.num_threads = multiprocessing.cpu_count()
print("[INFO] shape predictor options:")
>>[INFO] shape predictor options:
print(mod)
>>shape_predictor_training_options(be_verbose=1, cascade_depth=15, tree_depth=4,
num_trees_per_cascade_level=500, nu=0.1, oversampling_amount=5,
oversampling_translation_jitter=0.1, feature_pool_size=400, lambda_param=0.1,
num_test_splits=50, feature_pool_region_padding=0, random_seed=, num_threads=0,
landmark_relative_padding_mode=1)

Step 5: training; it may take 10-30 minutes

dlib.train_shape_predictor("labels_ibug_300W_train.xml", "face-68.dat", mod)

>>Training with cascade depth: 15
>>Training with tree depth: 4
>>Training with 500 trees per cascade level.
>>Training with nu: 0.1
>>Training with random seed:
>>Training with oversampling amount: 5
>>Training with oversampling translation jitter: 0.1
>>Training with landmark_relative_padding_mode: 1
>>Training with feature pool size: 400
>>Training with feature pool region padding: 0
>>Training with 8 threads.
>>Training with lambda_param: 0.1
>>Training with 50 split tests.
>>Fitting trees...
>>Time remaining: 4.68 minutes.   (this value keeps changing as training progresses)
>>Training complete

>>Training complete, saved predictor to file face-68.dat

Exit ipython with:

exit()
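
If you prefer a standalone script over the ipython session, the Step 4 and Step 5
commands can be collected into one file. A minimal sketch follows (the name
train_predictor.py is just a suggestion, and it assumes labels_ibug_300W_train.xml is in
the working directory):

# train_predictor.py -- a sketch; same options as Step 4, run with: python train_predictor.py
import multiprocessing
import dlib

options = dlib.shape_predictor_training_options()
options.tree_depth = 4
options.nu = 0.1
options.cascade_depth = 15
options.feature_pool_size = 400
options.num_test_splits = 50
options.oversampling_amount = 5
options.oversampling_translation_jitter = 0.1
options.be_verbose = True
options.num_threads = multiprocessing.cpu_count()  # use all available cores

# trains on the annotated XML and writes the model to face-68.dat
dlib.train_shape_predictor("labels_ibug_300W_train.xml", "face-68.dat", options)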

Step 6: check the training result. In a terminal:

ls -l

The model file (face-68.dat) should be a bit less than 100 MB.

Step 7: evaluate the model with the inference functions. Start ipython again from a terminal and do the following:


import dlib
import cv2
from imutils.video import VideoStream as vs
detector=dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("./face-68.dat")
type(predictor)
>>dlib.shape_predictor
type(detector)
>>dlib.fhog_object_detector
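
For a quantitative check on top of loading the model, dlib also provides
dlib.test_shape_predictor, which reports the mean landmark error on an annotated XML
dataset. A short sketch, still inside ipython (it assumes the train/test XML files from
Step 3 are in the working directory):

# lower error means a better fit; the test-set error is the more meaningful number
train_err = dlib.test_shape_predictor("labels_ibug_300W_train.xml", "./face-68.dat")
test_err = dlib.test_shape_predictor("labels_ibug_300W_test.xml", "./face-68.dat")
print("[INFO] error on training set: {:.4f}".format(train_err))
print("[INFO] error on test set: {:.4f}".format(test_err))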

Step 8: run the demo. Copy the Python script 'predictor.py' into the working directory
CNNTraining, then open a terminal, cd to CNNTraining, and run the following:

python predictor.py -p face-68.dat

Your camera turns on and a window opens showing the detected facial landmarks.
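
Step 8 assumes you already have a copy of predictor.py. If you do not, a minimal sketch
of such a script is shown below (the -p flag matches the command above; everything else,
including the use of imutils VideoStream for the camera, is an assumption rather than the
original script):

# predictor.py -- a sketch: webcam loop that draws the 68 landmarks on each detected face
import argparse
import time
import cv2
import dlib
from imutils.video import VideoStream

ap = argparse.ArgumentParser()
ap.add_argument("-p", "--shape-predictor", required=True, help="path to the trained .dat model")
args = vars(ap.parse_args())

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])

vs = VideoStream(src=0).start()   # default camera
time.sleep(2.0)                   # give the camera a moment to warm up

while True:
    frame = vs.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for rect in detector(gray, 0):        # detect faces
        shape = predictor(gray, rect)     # predict the 68 landmark points
        for i in range(shape.num_parts):
            pt = shape.part(i)
            cv2.circle(frame, (pt.x, pt.y), 2, (0, 255, 0), -1)
    cv2.imshow("Face Landmarks", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cv2.destroyAllWindows()
vs.stop()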
