
Applied Mathematics, Modeling and Computer Simulation 737

C.-H. Chen et al. (Eds.)


© 2022 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/ATDE221091

Design and Implementation of Embedded Face Recognition System
Yuanzi HE1
Guangdong University of Science and Technology, Dongguan, Guangdong, China

Abstract. This paper presents the design of an embedded face recognition system and describes the MTCNN face detection algorithm and the FaceNet face recognition algorithm. A simple face recognition system is implemented on the RK-3399 embedded platform with the help of the traditional OpenCV computer vision library. Face detection tests were carried out under different lighting conditions, varied head poses and partial occlusion of the face. The results show that the ARM-based embedded face recognition system runs stably with a high recognition rate and has good application prospects.

Keywords. Face detection; face recognition; embedded.

1. Introduction

A face recognition system must process large and complex volumes of data, so it places high demands on the running speed, accuracy and stability of the hardware platform. At present, face recognition systems are mostly deployed on computer platforms, which are bulky and can no longer meet market needs; such systems need to run on portable, easy-to-use platforms. The rapid development of embedded technology makes it possible to build a portable and easy-to-use face recognition system. Combining the advantages of embedded technology and face recognition technology yields an embedded face recognition system [1].

2. System Design Scheme

A deep learning algorithm is used to implement the face recognition function on an ARM embedded platform. A suitable hardware platform is selected according to the required processing accuracy and speed: the Rockchip RK-3399 development board with Cortex-A72 cores, a CMOS camera connected over USB, and an electric lock as the actuator of the system. The development environment of the embedded hardware platform is then built: Ubuntu is chosen as the software platform of this work, the bootloader is compiled and ported as the system startup program, the Ubuntu kernel is customized and trimmed, the file system is added, and everything is ported to the RK-3399 embedded development board.

1 Corresponding Author: Yuanzi He, Guangdong University of Science & Technology, Dongguan, Guangdong, China; E-mail: 404250469@qq.com.

Finally, the OpenCV computer vision library and the TensorFlow artificial intelligence framework are ported to and installed on the RK-3399 development board [2].
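As a simple check that the ported environment is usable, a short script like the following could be run on the board. This is only an illustrative sketch; the camera device index 0 is an assumption that may need adjusting for the actual USB camera node.

import cv2
import tensorflow as tf

# Confirm the ported libraries are importable on the RK-3399 board
print("OpenCV version:", cv2.__version__)
print("TensorFlow version:", tf.__version__)

cap = cv2.VideoCapture(0)      # open the USB camera (device index is an assumption)
ok, frame = cap.read()         # grab a single frame
print("Camera frame captured:", ok, None if frame is None else frame.shape)
cap.release()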

3. Research on MTCNN and FaceNet Face Detection and Recognition Algorithm

3.1. MTCNN Face Detection Algorithm

MTCNN is a unified cascaded convolutional neural network (CNN) trained by multi-task learning; it combines face detection with facial landmark localization and is composed of three network stages. First, P-Net generates a large number of candidate windows with a shallow convolutional network; then R-Net further examines the candidate windows to exclude non-faces and refine the remaining ones; finally, O-Net refines the face windows once more and outputs the positions of five facial landmarks. A Gaussian image pyramid is used to rescale the input image so that it fits the inputs of the three MTCNN network stages, and MTCNN applies the NMS (non-maximum suppression) algorithm to discard overlapping candidate windows, so that the most accurate face location is found across the three-level network.
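The NMS step mentioned above can be illustrated with a short sketch. The following is a generic IoU-based non-maximum suppression in Python, not the exact routine inside the MTCNN implementation; it only shows the window-pruning idea.

import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Generic non-maximum suppression over [x1, y1, x2, y2] boxes.

    Keeps the highest-scoring box, discards any remaining box whose IoU
    with it exceeds iou_threshold, then repeats. This mirrors the
    window-pruning role NMS plays between the P-Net/R-Net/O-Net stages.
    """
    boxes = np.asarray(boxes, dtype=np.float32)
    order = np.argsort(scores)[::-1]              # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the best box with all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_threshold]   # drop overlapping boxes
    return keep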

3.2. FaceNet Face Recognition Algorithm

The FaceNet face recognition algorithm exploits the fact that embeddings of the same face are tightly clustered across photos taken from different angles and in different poses, while embeddings of different faces are far apart. It is realized with a CNN trained by triplet mining and reaches an accuracy of 95.12% on the YouTube Faces dataset. The FaceNet model learns a mapping from images to a Euclidean space and performs face recognition through this mapping. When the user inputs an image, the MTCNN algorithm is first called to resize it to 160*160 pixels. This image is then fed to the FaceNet network: an ordinary convolutional neural network followed by an embedding layer, so that the original feature space is mapped to a new feature space, which can be regarded as an embedding of the original one [3]. The mapping projects the features output by the final fully connected layer of the convolutional network onto a hypersphere, that is, the features are L2-normalized; triplet loss is then used as the supervision signal to obtain the loss and gradients of the network. After training, a 128-dimensional vector is obtained for an image. The Euclidean distance between this vector and the 128-dimensional feature vector computed for a face in the database is compared with a given threshold: a distance of less than 1 means the same person, and a distance greater than 1 means a different person.
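The triplet loss and the distance-threshold decision described above can be sketched in plain NumPy as follows. The margin value of 0.2 is illustrative, while the threshold of 1 matches the rule used in this paper.

import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss on L2-normalized 128-dimensional embeddings.

    Pulls the anchor towards the positive (same identity) and pushes it
    away from the negative (different identity) by at least `margin`.
    """
    pos_dist = np.sum(np.square(anchor - positive))
    neg_dist = np.sum(np.square(anchor - negative))
    return max(pos_dist - neg_dist + margin, 0.0)

def same_person(emb1, emb2, threshold=1.0):
    """Decision rule used by the system: a Euclidean distance below the
    threshold means the two embeddings belong to the same person."""
    dist = np.sqrt(np.sum(np.square(emb1 - emb2)))
    return dist < threshold, dist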

4. Implementation of Embedded Face Recognition System

4.1. Implementation of Embedded Face Detection Module

Face detection based on the MTCNN algorithm is realized on the RK-3399 embedded development board. To ensure detection accuracy and avoid interference from other environmental factors during face recognition, the OpenCV computer vision library is used to perform format conversion and grayscale conversion on the video stream collected from the USB camera, removing environmental interference and improving the recognition success rate [4]. When the RK-3399 embedded platform drives the USB camera to collect images and runs the MTCNN algorithm for face detection, the pre-trained model and its parameters must be loaded during initialization. The embedded Ubuntu system uses the TensorFlow framework to realize face detection. The implementation steps are:
1) Drive the USB camera to collect images, capturing a picture every 3 frames;
2) Load the pre-trained MTCNN model: pnet, rnet, onet = create_mtcnn(session, model_path);
3) Use OpenCV's cv2.cvtColor() function to convert the frame from YUV to RGB and then to grayscale;
4) Use the MTCNN algorithm to detect faces in the image: bounding_boxes, _ = align.detect_face.detect_face(gray, minsize, pnet, rnet, onet, threshold, factor);
5) Resize the face image: img2 = cv2.resize(frame, (image_size, image_size), interpolation=cv2.INTER_CUBIC);
6) Save the image: call cv2.imwrite() to store the resized face image (a consolidated sketch of these steps is given below, after the description of figure 1).
The embedded face detection module collects video images by calling the OpenCV computer vision library. In the program, every third frame of the video stream is fed to the MTCNN algorithm to determine whether the collected image contains a face region, which improves the recognition rate. If a face is detected, it is marked with a rectangular box, and the face region is cropped and saved for face recognition; otherwise the image is discarded and a new one is collected. The effect of MTCNN face detection is shown in figure 1:

Figure 1. MTCNN face detection results
The image on the left shows a frame collected with the USB camera through the OpenCV library, in which the face region detected by the MTCNN algorithm is marked with a rectangular box. The image on the right shows the standard-size face image cropped from the marked region. The 160*160-pixel face region is cut out and the redundant background is removed, which highlights the face, reduces the computation required by subsequent algorithms, and facilitates subsequent face recognition.
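Steps 1) to 6) above can be combined into a single capture-and-detect loop. The sketch below assumes TensorFlow 1.x and the align.detect_face module from the public FaceNet project, with typical MTCNN parameter values (minsize, three-stage thresholds, pyramid scale factor); it is a simplified illustration rather than the exact program used in the system.

import cv2
import tensorflow as tf
import align.detect_face  # MTCNN implementation from the public FaceNet project

image_size = 160                                   # face crop size expected by FaceNet
minsize = 20                                       # minimum face size in pixels (typical default)
threshold = [0.6, 0.7, 0.7]                        # P-Net / R-Net / O-Net score thresholds
factor = 0.709                                     # image pyramid scale factor

with tf.Graph().as_default():
    sess = tf.Session()
    # Step 2): load the pre-trained P-Net, R-Net and O-Net
    pnet, rnet, onet = align.detect_face.create_mtcnn(sess, None)

cap = cv2.VideoCapture(0)                          # step 1): drive the USB camera
frame_count = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_count += 1
    if frame_count % 3 != 0:                       # only process every third frame
        continue
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # step 3): colour-space conversion
    # Step 4): run the three-stage MTCNN detector on the frame
    bounding_boxes, _ = align.detect_face.detect_face(
        rgb, minsize, pnet, rnet, onet, threshold, factor)
    for box in bounding_boxes:
        x1, y1, x2, y2 = box[:4].astype(int)
        x1, y1 = max(x1, 0), max(y1, 0)
        face = frame[y1:y2, x1:x2]
        # Step 5): resize the crop to the standard FaceNet input size
        face = cv2.resize(face, (image_size, image_size),
                          interpolation=cv2.INTER_CUBIC)
        cv2.imwrite("face.jpg", face)              # step 6): save the cropped face
cap.release()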

4.2. Implementation of Face Comparison Algorithm on Embedded Platform

The embedded face recognition system requires real-time face detection. After a face is successfully detected, it is recognized by comparing it, under a distance threshold, with the faces stored in the library. The trained model parameters are loaded with the FaceNet network model, and the saved face image is fed into the network to obtain a 128-dimensional feature vector, which is then verified by comparing it with the 128-dimensional feature vectors of the faces in the library [5]. The steps to realize face matching on the RK-3399 embedded platform are as follows:
1) Load the TensorFlow configuration parameters
tf.Graph().as_default()
sess = tf.Session()
2) Load the trained model parameters
facenet.load_model('./20180402-114759/20180402-114759.pb')
3) Get the input and output tensors
images_placeholder = tf.get_default_graph().get_tensor_by_name("input:0")
embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0")
phase_train_placeholder = tf.get_default_graph().get_tensor_by_name("phase_train:0")
4) Read the picture saved in the previous section
image2 = facenet.prewhiten(image2)
5) Run the FaceNet network to obtain the 128-dimensional feature vector
emb_array1[0, :] = sess.run(embeddings, feed_dict={images_placeholder: scaled_reshape[0], phase_train_placeholder: False})[0]
6) Calculate the Euclidean distance to the feature vector of the face in the database
dist = np.sqrt(np.sum(np.square(emb_array1[0] - emb_array2[0])))
7) Validation
If dist is less than 1, the face is in the database; if dist is greater than 1, the face is not in the database.
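Putting steps 1) to 7) together, a face comparison routine might look like the following sketch. It assumes TensorFlow 1.x and the facenet helper module from the public FaceNet project; the model file name matches the one loaded in step 2), and the image paths are placeholders.

import cv2
import numpy as np
import tensorflow as tf
import facenet  # helper module from the public FaceNet project

def compare_faces(path1, path2, model="./20180402-114759/20180402-114759.pb"):
    """Return (same_person, distance) for two cropped 160*160 face images."""
    with tf.Graph().as_default():
        with tf.Session() as sess:
            facenet.load_model(model)                  # steps 1) and 2): graph, session, model
            g = tf.get_default_graph()
            images_placeholder = g.get_tensor_by_name("input:0")
            embeddings = g.get_tensor_by_name("embeddings:0")
            phase_train_placeholder = g.get_tensor_by_name("phase_train:0")
            # Step 4): read and prewhiten both face crops
            faces = []
            for path in (path1, path2):
                img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
                img = cv2.resize(img, (160, 160))
                faces.append(facenet.prewhiten(img))
            # Step 5): run the network once to get two 128-dimensional embeddings
            emb = sess.run(embeddings, feed_dict={images_placeholder: np.stack(faces),
                                                  phase_train_placeholder: False})
            # Steps 6) and 7): Euclidean distance and threshold decision
            dist = np.sqrt(np.sum(np.square(emb[0] - emb[1])))
            return dist < 1.0, dist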
In the face recognition stage, the face information of the person to be recognized, extracted by the MTCNN face detector, is compared with the 128-dimensional face features stored in the database. The Euclidean distance between the feature vectors is computed, and this distance determines whether the face exists in the face database: a distance below the threshold of 1 indicates the same person, while a distance greater than 1 indicates a different person. This completes the real-time identity authentication of the person to be recognized. A successful face identity authentication is shown in figure 2.

Figure 2. Face authentication successful

5. System Test and Analysis

Face recognition is accomplished in two steps: face detection and face recognition. The most important is the first step, face detection; only high-quality face detection results can be used for face recognition. Under normal circumstances, however, the quality of the faces in the pictures captured by the camera is not high, and the detection success rate can only be improved after the pictures are appropriately pre-processed. Even so, the detection and extraction of faces cannot be fully guaranteed.

5.1. Test Environment Preparation

An HDMI high-definition display with a resolution of 1024*768 is used. The display is connected to the RK-3399 development board through HDMI, and the camera is connected to the development board through USB [6].

5.2. Face Detection Test

Through a large number of real-time face detection experiments under varied lighting conditions, varied face poses and partial occlusion of the face, the range in which faces can be correctly recognized is determined, so that the detection success rate can be improved.
Table 1. Test results under different lighting conditions

Time point | Number of tests | Deep-learning successes | Traditional successes | Deep-learning success rate | Traditional success rate
8:00  | 100 | 98  | 96  | 98%  | 96%
12:00 | 100 | 100 | 100 | 100% | 100%
19:00 | 100 | 92  | 86  | 92%  | 86%
22:00 | 100 | 0   | 0   | 0%   | 0%

It can be seen from table 1 that the system cannot detect faces at night (when the light is dark) and needs a fill light. It achieves high recognition accuracy in the other time periods and reaches a 100% detection rate in the period of strongest light. As the light intensity weakens, face detection is affected to a certain extent.
Table 2. Attitude angle values

Posture    | Deep-learning angle | Accuracy rate | Traditional angle | Accuracy rate
Left face  | 45° | 0 | 30° | 100%
Right face | 45° | 0 | 30° | 100%
Upper face | 30° | 0 | 20° | 100%
Lower face | 30° | 0 | 20° | 100%

According to table 2, the correct rate is 0 when the face is turned left or right by more than 45 degrees and 100% when it is turned by less than 45 degrees; it is 0 when the face is tilted up or down by more than 30 degrees and 100% when it is tilted by less than 30 degrees. To ensure the accuracy of the embedded face recognition device, face images should therefore be collected within the above attitude angles.
Table 3. Face occlusion test

Occlusion | Number of tests | Deep-learning successes | Traditional successes | Deep-learning success rate | Traditional success rate
10% | 100 | 98 | 84 | 98% | 84%
15% | 100 | 94 | 40 | 94% | 40%
20% | 100 | 80 | 0  | 80% | 0%
25% | 100 | 2  | 0  | 2%  | 0%

As shown in table 3, 100% accuracy cannot be guaranteed when the main feature points of the face (eyebrows, eyes, nose) are occluded. A 94% success rate is still achieved with 15% occlusion and 80% with 20% occlusion, but faces can barely be detected once more than 20% of the face is occluded.

5.3. Face Recognition Test

The best range for face detection was obtained by testing faces in different environments in the face detection tests. Within this range, the five people in the database were tested in turn. Each tester is a person registered in the database, and the Euclidean distance between the tester and each stored identity was recorded.
Table 4. Face recognition test

Number | Tester 1 | Tester 2 | Tester 3 | Tester 4 | Tester 5
1 | 0.467 (Yes) | 1.553 (No) | 1.562 (No) | 1.865 (No) | 1.723 (No)
2 | 1.566 (No) | 0.763 (Yes) | 1.469 (No) | 1.325 (No) | 1.412 (No)
3 | 1.453 (No) | 1.481 (No) | 0.567 (Yes) | 1.121 (No) | 1.347 (No)
4 | 1.356 (No) | 1.582 (No) | 1.332 (No) | 0.520 (Yes) | 1.264 (No)
5 | 1.455 (No) | 1.753 (No) | 1.429 (No) | 1.731 (No) | 0.578 (Yes)

According to table 4, a face is judged to be the same person when the Euclidean distance in the recognition test is less than 1 and a different person when it is greater than 1. The test results show that, within the best face detection range, the system recognizes faces with 100% accuracy.

6. Conclusion

Starting from the premise of biometric recognition, this paper adopts the RK-3399 embedded platform. First, the face detection and face recognition algorithms are implemented on a PC using the deep learning framework TensorFlow, and model parameters with a good recognition rate are trained. Finally, the face recognition algorithm is deployed to the embedded development platform, completing the design of an ARM-based embedded face recognition system.

Acknowledgments

2019 Guangdong Higher Education Teaching Reform Project -- Exploration of embedded system curriculum reform under the CDIO mode. Supported by the Natural Science Project of Guangdong University of Science and Technology (GKY-2021KYZDK-6).

References

[1] Tian Zhiqiang. Design of embedded face recognition system based on ARM [D]. Inner Mongolia University of Science and Technology, 2019.
[2] Liu Jian. Design and implementation of embedded face recognition system based on ARM [D]. Xi'an University of Science and Technology, 2017.
[3] Yang Jian, Zhang Weiqiang, Wang Xiaoxing. Design of embedded face recognition system based on ARM [J]. Wireless Communication Technology, 2020, 29(04): 25-29.
[4] Li Changxiang, Bai Chuang. Design and implementation of embedded face recognition system [J]. Intelligent Computer and Applications, 2018, 8(03): 115-117+121.
[5] Wang Yimeng. Research on face detection and matching recognition algorithm [D]. Harbin Institute of Technology, 2018.
[6] Zhen Xu, Yinyan Jiang, Yichuan Wang, Yicong Zhou, Weifeng Li, Qingmin Liao. Local polynomial contrast binary patterns for face recognition [J]. Neurocomputing, 2018.
