Abstract—Haar-like features are a principal way to detect certain special features in images. To study Haar-like feature detection under different light environments, a set of machine-vision experiments was designed according to illumination condition. In the experiments, the face feature was chosen as the representative Haar-like feature on the objects. For low illumination conditions, the Taguchi method was used to optimize the light parameters; the influence of each design parameter on the objects was evaluated separately by analysis of variance (ANOVA), and a parameter setting with desirable performance was obtained. For high illumination conditions, average illumination, relative distance, and their interaction had a significant effect on face feature recognition. A threshold value was set to assess the effect of each variable on face feature recognition, and a relation curve between average illumination and relative distance was fitted from the threshold value. Guided by this curve, a good recognition effect was obtained. These results fill a gap in estimating face feature recognition under different light environments and will support research on Haar-like feature detection under different light environments.

Keywords—Haar-like feature; light; machine vision; face recognition

I. INTRODUCTION

During the last decades, research on detecting human identity by machine vision has attracted much attention, covering pedestrians[1], faces[2], ears[3] and so on. A large number of vision-based pedestrian detection systems have been proposed: several remarkable surveys have been presented[4, 5], and most of the work on human motion has been summarized in [6-8]. The face is one of the most important detection objects, and numerous algorithms have been proposed for it [9-11].

For feature detection there are many different methods, e.g., Haar-like feature recognition, support vector machines (SVM)[12-14], deep learning[15, 16] and so on[17]. Many face feature recognition methods are based on geometric and appearance approaches[18] and are likely to break down under extreme lighting conditions. Face feature recognition follows two design principles, based on pixels[19] and on features[20-23], respectively. Feature-based algorithms offer the possibility of real-time detection, owing to their small computation time[22] and high detection accuracy[24]. Haar-like features[20-22, 25-27] are rectangles with stripes that can represent an object's features in the gray-scale domain.

In recent years, multiple algorithms for face recognition under varying illumination conditions have been proposed, based on the Yale B, CMU PIE, and extended Yale B databases, among others [13, 17, 28]. In these databases, however, the variation in lighting consists only of the longitudinal and latitudinal angles of the light-source direction, and the images contain only human faces against simple backgrounds. According to [29], a face detection algorithm based on color space can accurately detect a frontal or slightly tilted face under strong light. A face recognition rate of 100% has been reached under varying illumination using Green's function[24]. However, the relationship between feature recognition and the light environment has not been determined.

As is well known, the images a camera captures are easily affected by light, and 3D information is measured with low accuracy. The light condition affects the contrast of face features: against a complex background, some features will not be recognized, or will be detected as wrong features, by the same detection algorithm.

In this paper, to investigate the relationship between Haar-like feature recognition and the light environment, a camera was used to photograph a 2D object under different light conditions. Four experimental parameters were chosen: average illumination, CCT, Ra, and relative distance. The 2D object is an image containing no fewer than 10 face features against a complex background. Experiments were designed according to the illumination conditions. For low-illumination light environments, 2D face feature recognition was designed and optimized using the Taguchi method [30-33], and the corresponding experiments were conducted. For high-illumination light environments, the influence of all parameters on face feature recognition was tested by ANOVA using SPSS software.

II. IMAGE PROCESSING METHOD
F(x) = Weight_all · Σ_{Pixel ∈ all} Pixel + Weight_black · Σ_{Pixel ∈ black} Pixel    (1)

Figure 2. The face feature recognition algorithm flow chart based on Haar-like feature.
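As a concrete illustration of (1), the sketch below evaluates a two-rectangle Haar-like feature on a small grayscale patch by direct summation. The weights (Weight_all = 1, Weight_black = −2) and the stripe layout are illustrative assumptions, not values taken from the paper:

```python
def haar_feature_value(patch, black_rows, w_all=1.0, w_black=-2.0):
    """Eq. (1): F = w_all * sum(all pixels) + w_black * sum(black pixels).

    `patch` is a 2D grayscale image (list of rows); `black_rows` is the
    half-open row range covered by the black stripe. With w_all = 1 and
    w_black = -2 (illustrative weights), F equals the white-minus-black
    contrast of a two-rectangle feature.
    """
    sum_all = sum(sum(row) for row in patch)
    sum_black = sum(sum(patch[r]) for r in range(*black_rows))
    return w_all * sum_all + w_black * sum_black

# A patch whose top half is bright (200) and bottom half is dark (50):
patch = [[200, 200], [200, 200], [50, 50], [50, 50]]
F = haar_feature_value(patch, black_rows=(2, 4))  # black stripe = bottom half
```

A strong bright/dark edge aligned with the stripe gives a large F (here 1000 − 2·200 = 600), while a uniform patch gives F = 0.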
The integral image at location (x, y) contains the sum of the pixels above and to the left of (x, y), inclusive:

ii(x, y) = Σ_{x′ ≤ x, y′ ≤ y} i(x′, y′)    (2)

where ii(x, y) is the integral image and i(x, y) is the original image (see Fig. 1). Using the following pair of recurrences:

s(x, y) = s(x, y − 1) + i(x, y)    (3)

ii(x, y) = ii(x − 1, y) + s(x, y)    (4)

(where s(x, y) is the cumulative row sum, s(x, −1) = 0, and ii(−1, y) = 0), the integral image can be computed in one pass over the original image[20]. After the integral image is constructed, the sum of the pixels of any rectangular region in the image can be obtained by (1)-(4).

III. EXPERIMENT

A. Experiment Setting

In the experiments, images were acquired by a camera under different light environments, and face recognition was performed with OpenCV3. After converting the image from chroma to gray scale and applying histogram equalization, face features were identified from the effective features by pixel comparison, and each detected face region was marked with a circle on the original image. The contrast between the effective parts and their surrounding features, and their degree of differentiation, changed with the light environment; in a specific recognition run, this change shows up in the recognition results.

TABLE I. PARAMETERS OF THE CAMERA

Parameters | Value
Main lens aperture | F2.0
Image resolution | 1080P
Photosensitive pixels | 10M
Actual pixels | 1M
Picture perspective | 130°
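The recurrences (2)-(4) can be sketched as follows; `rect_sum` then recovers the sum over any rectangular region from four corner lookups of the integral image. The code uses an equivalent row-major form of the recurrences (a sketch, not the OpenCV implementation):

```python
def integral_image(i):
    """One-pass integral image: ii(x, y) = sum of i(x', y') for x' <= x, y' <= y,
    built with a cumulative row sum as in recurrences (3) and (4)."""
    h, w = len(i), len(i[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):          # row index
        s = 0                   # cumulative row sum, zero at the row start
        for x in range(w):      # column index
            s += i[y][x]
            ii[y][x] = (ii[y - 1][x] if y > 0 else 0) + s
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the rectangle [x0..x1] x [y0..y1], recovered from
    four corner lookups (the 'any rectangular region' property)."""
    total = ii[y1][x1]
    if x0 > 0:
        total -= ii[y1][x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1][x0 - 1]
    return total

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
# The full image sums to 45; the bottom-right 2x2 block sums to 5+6+8+9 = 28.
```

Each rectangle sum costs four lookups regardless of rectangle size, which is what makes Haar-like features cheap to evaluate densely.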
Figure 1. The value of the integral image at point (x, y) is the sum of all the pixels above and to the left[20].

Our experiments were conducted under different illumination conditions, with the light parameters all based on GB 50034-2013 (covering, for example, indoor parking areas and tunnels). A PHILIPS CVR108/93 was chosen as the detection camera; its parameters are shown in Table I.

Light conditions in the experiments were controlled by a THOUSLITE LED cube (shown in Fig. 3). This device simulates light with a high-quality solar spectrum and provides adjustable correlated color temperature, color rendering index, color deviation, and illumination for the different light environments of the experiment. Light parameters were measured with a Konica Minolta CL-500A illuminance spectrophotometer.
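For illustration, the histogram-equalization step of the pipeline described in Section III.A can be sketched in pure Python for an 8-bit grayscale image. The paper itself uses OpenCV3, so this is only an illustrative re-implementation of the standard CDF-remapping idea, not the library's code:

```python
def equalize_histogram(img, levels=256):
    """Histogram equalization for a 2D grayscale image (list of rows).

    Each gray level is remapped through the normalized cumulative
    distribution of the image's histogram, spreading the used levels
    across the full [0, levels-1] range to improve contrast.
    """
    flat = [p for row in img for p in row]
    n = len(flat)
    # Histogram of gray levels
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function
    cdf = [0] * levels
    total = 0
    for g in range(levels):
        total += hist[g]
        cdf[g] = total
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:
        return [row[:] for row in img]  # flat image: nothing to equalize
    # Map each level through the normalized CDF
    def remap(p):
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(p) for p in row] for row in img]

# A low-contrast patch is stretched to cover the full 0..255 range:
out = equalize_histogram([[50, 50], [100, 150]])
```

Equalization is what makes the subsequent pixel-comparison step less sensitive to the overall brightness of the light environment.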
L / m = X / M    (5)

where L is the relative distance, m is the length of the face feature on the object, X is the real distance between the camera and a real object, and M is the real length of a face. To simplify, the human face can be approximated as an oval, and the length and width of the face can be represented by the long and short axes of the oval (see Fig. 6). The data for the faces on the objects are shown in Table II, and the data for real human faces are shown in Table III, based on the head-face dimensions of adults (GB/T 2428-1998).

Figure 7. Recognition truth image illustration (red circle number: TP, green circle number: FN, blue circle number: FP).
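Equation (5) is a similar-triangles relation, so the real distance is X = L · (M/m). A minimal sketch, using the A4-object scale factor M/m = 15 implied by Table VIII (the actual m and M come from Tables II and III, which are not reproduced here):

```python
def real_distance(L_cm, M_over_m):
    """Eq. (5): L / m = X / M  =>  X = L * (M / m).

    L_cm is the relative distance to the printed object; M_over_m is the
    ratio of the real face length to the face length on the object.
    """
    return L_cm * M_over_m

# For the A4 object, Table VIII implies M/m = 15:
# a relative distance of 35 cm corresponds to a real distance of 525 cm.
x = real_distance(35, 15)
```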
Under each light condition, the camera photographed the object. The light conditions affected the contrast of the face features, and the images were processed by the same detection algorithm; some features would not be recognized, or a wrong feature would be detected. For performance evaluation of face feature recognition, precision and accuracy indices are used[35]. Accuracy denotes the ratio of detections that are correct, and precision reflects the completeness of the extracted data; larger values of these two indices indicate better algorithm performance under the given light environment. An illustration of sample recognition truth data for one image is shown in Fig. 7.

The efficiency, or success rate, is the ability of the face detection algorithm to successfully extract the field of interest, and it can be defined using three quantities based on the presence and absence of the marking circle. In Fig. 7, the successfully detected faces are the true positives (TP); the face regions missed by the algorithm and erroneously labeled as non-face are the false negatives (FN); and the non-face regions erroneously labeled as face are the false positives (FP). The TP + FN regions depict the face data in the truth image, and the TP + FP regions represent the face regions detected by the algorithm. To obtain a more accurate evaluation, an object with ten features was selected. The accuracy (A), precision (P), and error ratio (ER) of the segmentation method can be denoted as

A = TP / (TP + FN + FP)    (6)

P = TP / (TP + FP)    (7)

ER = (FN + FP) / (TP + FN + FP) = 1 − A    (8)

TABLE IV. CONTROL FACTORS AND LEVELS

Symbol | Parameter Description | Number of Levels | Level 1 | Level 2 | Level 3
a | Average illumination (E)/lx | 3 | 30 | 50 | 70
b | CCT/K | 3 | 3500 | 5000 | 6500
c | Ra | 3 | 85 | 90 | 95
d | Relative distance (L)/cm | 3 | 35 | 45 | 55

TABLE V. EXPERIMENTS DESIGNED WITH AN L9 (3^4) ORTHOGONAL ARRAY

Experiment Number | a | b | c | d | A of A4 | S/N of A4 | A of A3 | S/N of A3
1 | 1 | 1 | 1 | 1 | 0.5 | -6.02 | 0.9 | -0.92
2 | 1 | 2 | 2 | 2 | 0.8 | -1.94 | 1 | 0.00
3 | 1 | 3 | 3 | 3 | 0.5 | -6.02 | 0.8 | -1.94
4 | 2 | 1 | 2 | 3 | 0.6 | -4.44 | 1 | 0.00
5 | 2 | 2 | 3 | 1 | 0.9 | -0.92 | 0.9 | -0.92
6 | 2 | 3 | 1 | 2 | 0.9 | -0.92 | 1 | 0.00
7 | 3 | 1 | 3 | 2 | 0.9 | -0.92 | 0.9 | -0.92
8 | 3 | 2 | 1 | 3 | 0.7 | -3.10 | 0.9 | -0.92
9 | 3 | 3 | 2 | 1 | 0.8 | -1.94 | 0.8 | -1.94
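Equations (6)-(8) can be computed directly from the TP/FN/FP counts. The split below (8 detected, 2 missed, 0 false alarms on a ten-feature object) is illustrative only; it yields A = 0.8, as in, e.g., experiment 9 for the A4 object in Table V:

```python
def recognition_metrics(tp, fn, fp):
    """Eqs. (6)-(8): accuracy A = TP/(TP+FN+FP), precision P = TP/(TP+FP),
    error ratio ER = (FN+FP)/(TP+FN+FP) = 1 - A."""
    a = tp / (tp + fn + fp)
    p = tp / (tp + fp)
    er = (fn + fp) / (tp + fn + fp)
    return a, p, er

# Illustrative ten-feature object: 8 faces found, 2 missed, no false alarms.
a, p, er = recognition_metrics(tp=8, fn=2, fp=0)
```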
C. Low Illumination Experiments

Under low illumination conditions, the Taguchi method was chosen to optimize the light parameters. The levels of the control factors were chosen in an appropriate range based on the indoor lighting standard, as shown in Table IV. After picking the factors and levels, the experiments assembled in the orthogonal array of Table V were run in the laboratory and processed by the computer program following Fig. 2.

Table V also lists the accuracy A of the A4 and A3 paper objects, together with the signal-to-noise (S/N) ratio, which quantifies the quality characteristic. In these experiments a larger A was better, so the larger-the-better S/N ratio[31] was used for average illumination and relative distance:

η = −10 log₁₀ [ (1/n) Σ_{i=1}^{n} (1/y_i²) ]    (9)

where y_i denotes the value of A associated with the i-th test, i is the index of the performed test, and n is the total number of data points per trial.

The S/N ratios of the different levels are shown clearly in Fig. 8. For the A4 object, the best recognition was provided by the parameter settings a3-b2-c3-d2, while the settings a2-b1-c1-d2 gave the best recognition for the A3 object among all the parameter combinations.

Figure 8. S/N ratios of (a) A4 and (b) A3 at different levels.

As shown in Fig. 8 and Table V, factors b and c had a bigger influence on A3 paper than on A4 paper; in particular, factor c showed an obvious increase. Therefore c1 was chosen as the optimal parameter, because it effectively enhanced the recognition of A3 paper. For A3 paper, b1 and b2 had the same S/N ratio, so b2 was chosen to ensure a high recognition effect for A4 paper. Factor a was different from the others: it had a similar influence on A4 and A3 paper, and a2 was chosen to ensure a high recognition effect for both objects. Factor d had a smaller influence on A3 paper than on A4 paper, but its optimal values were the same. Finally, based on Fig. 8 and the discussion above, the parameter setting was defined as a2-b2-c1-d2.
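Equation (9) is the standard larger-the-better Taguchi S/N ratio. The sketch below reproduces the S/N columns of Table V from the accuracy values (each trial reports a single accuracy, so n = 1):

```python
from math import log10

def sn_larger_is_better(values):
    """Eq. (9), larger-the-better S/N ratio:
    eta = -10 * log10( (1/n) * sum(1 / y_i^2) ), in dB."""
    n = len(values)
    return -10 * log10(sum(1 / y ** 2 for y in values) / n)

# Each trial in Table V reports one accuracy value, so n = 1:
# A = 0.5 gives about -6.02 dB and A = 0.9 about -0.92 dB,
# matching the S/N columns of Table V.
eta = sn_larger_is_better([0.5])
```

A perfect trial (A = 1) gives η = 0 dB, which is why the best experiments in Table V show an S/N of 0.00.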
The analysis of variance (ANOVA) technique was used to estimate the cause-and-effect relationship between the design factors and performance. The significance of each factor was calculated as its "importance level" ρ, defined as

ρ = SS_d / SS_t    (10)

SS_t = SS_d + SS_e    (11)

where SS_d is the sum of squared deviations and SS_e is the sum of squared error. (Note that SS_e is approximated as 0 since the simulations are repeatable.) The total sum of squared deviations SS_t from the total mean S/N ratio η_n can be expressed as [13]

SS_t = SS_d = Σ_{i=1}^{n_i} (η_i − η_n)²    (12)

where n_i is the number of experiments in the orthogonal array and η_i is the S/N ratio of the i-th experiment.

Table VI summarizes the ANOVA results for A4 and A3. The results show that relative distance (factor d, 44.94%) had the most influence on the face feature recognition of the A4 paper object, followed by average illumination (factor a, 38.77%); the contributions of these two factors were much larger than the others. In addition, for the A3 paper object, average illumination (factor a, 31.09%) had the same influence as relative distance (factor d, 31.09%). Factor c evidently affected both A4 and A3 the least, but its contribution increased the most between the two objects. For the A3 paper object, the contributions of the four factors were more even, which means that the influence of the light environment was more pervasive.

TABLE VI. ANOVA RESULTS

Objects | Factor | a | b | c | d
A4 | SSd | 4.59 | 1.64 | 0.29 | 5.32
A4 | Contribution (%) | 38.77 | 13.81 | 2.48 | 44.94
A3 | SSd | 0.47 | 0.31 | 0.26 | 0.47
A3 | Contribution (%) | 31.09 | 20.44 | 17.38 | 31.09
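The SSd and contribution rows of Table VI can be reproduced from the S/N column of Table V by applying (10)-(12) per factor. In this sketch, SSd for a factor is taken as the sum of squared deviations of its three level-mean S/N ratios from the overall mean; this convention is inferred from the printed numbers, which it matches to rounding:

```python
# Orthogonal-array levels (columns a, b, c, d) and A4 S/N ratios from Table V.
levels = [(1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
          (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
          (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1)]
sn_a4 = [-6.02, -1.94, -6.02, -4.44, -0.92, -0.92, -0.92, -3.10, -1.94]

def factor_ss(levels, sn):
    """Per-factor SSd (squared deviations of level-mean S/N from the overall
    mean) and percentage contributions, as in (10)-(12) with SSe ~ 0."""
    overall = sum(sn) / len(sn)
    ss = {}
    for f, name in enumerate("abcd"):
        # Mean S/N at each of the three levels of this factor
        means = [sum(s for lv, s in zip(levels, sn) if lv[f] == k) / 3
                 for k in (1, 2, 3)]
        ss[name] = sum((m - overall) ** 2 for m in means)
    total = sum(ss.values())
    contrib = {k: 100 * v / total for k, v in ss.items()}
    return ss, contrib

ss, contrib = factor_ss(levels, sn_a4)
# Reproduces the A4 rows of Table VI to rounding:
# SSd ~ {a: 4.59, b: 1.64, c: 0.29, d: 5.32},
# contributions ~ {a: 38.8, b: 13.8, c: 2.5, d: 44.9} percent.
```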
D. High Illumination Experiments

The low illumination experiments showed that average illumination and relative distance influence face feature recognition more than the other factors. The following experiments were therefore designed to study the relationship between the optical parameters and face feature recognition under a high illumination environment. The ER value was defined as in (8), i.e., the number of missed or falsely detected face features divided by the total feature number.

The parameters for these further experiments are summarized in Table VII. Two independent variables were selected for their presumed effect on face feature recognition: average illumination and relative distance. The other parameters, Ra and CCT, were held constant. The values of the two independent variables were selected according to real light environment conditions: the average illumination gradient was set to 50 lx, 100 lx, 200 lx, 300 lx, 400 lx, 500 lx and 600 lx, and the relative distance gradient was set to 35 cm, 45 cm, 55 cm, 65 cm, 75 cm, 85 cm, 95 cm and 105 cm. The specific experimental parameters were set as follows:

TABLE VII. PARAMETER SETTINGS

Parameter Type | Parameter Name | Value
Variable | Average illumination (E)/lx | 50, 100, 200, 300, 400, 500, 600
Variable | Relative distance (L)/cm | 35, 45, 55, 65, 75, 85, 95, 105
Constant | Ra | >85
Constant | CCT/K | 4500

All combinations of the values of the two variables were used in the experiment, in order to collect sufficient data to find the relationship between them. This resulted in 56 different light conditions, which were evaluated by the perceived degree of face feature recognition. Table VIII shows the conversion from relative distance to the actual distance between the camera and the objects, according to (5). For humans, the minimum distance for face recognition is usually suggested as 4 m, with 10 m recommended as the ideal distance[36]; as can be seen, both values are covered by these experiments.

TABLE VIII. CONVERSION OF RELATIVE DISTANCE

Relative distance/cm | 35 | 45 | 55 | 65 | 75 | 85 | 95 | 105
Distance X (A3)/cm | 370.6 | 476.5 | 582.4 | 688.2 | 794.1 | 900.0 | 1005.9 | 1111.8
Distance X (A4)/cm | 525 | 675 | 825 | 975 | 1125 | 1275 | 1425 | 1575

Note that all statistical analyses mentioned further in this paper were performed with SPSS software. The relationship between the ER value and average illumination and relative distance was tested by MANOVA (multivariate analysis of variance), as shown in Table IX.

TABLE IX. MANOVA RESULTS

Dependent Variable | Source | Type III Sum of Squares | df | Mean Square | F | Sig. | Partial Eta Squared
ER value (A4) | Average illumination (E) | 0.340 | 6 | 0.057 | 3.381 | 0.008 | 0.326
ER value (A4) | Relative distance (L) | 5.599 | 7 | 0.800 | 47.775 | 0.000 | 0.888
ER value (A3) | Average illumination (E) | 0.122 | 6 | 0.020 | 2.728 | 0.025 | 0.280
ER value (A3) | Relative distance (L) | 0.301 | 7 | 0.043 | 5.784 | 0.000 | 0.491
ER value | Average illumination × Relative distance (E × L) | 7.967 | 55 | 0.145 | 3.584 | 0.000 | 0.638

According to the results in Table IX, the test results are consistent. For the A4 paper object, the test between average illumination and the ER value gives P = 0.008 < 0.05, which is statistically significant, and the test between relative distance and the ER value gives P = 0.000 < 0.05, also statistically significant. For the A3 paper object, the test between average illumination and the ER value gives P = 0.025 < 0.05, and the test between relative distance and the ER value gives P = 0.000 < 0.05, both statistically significant. For all objects, the test between the interaction of average illumination and relative distance and the ER value gives P = 0.000 < 0.05, also statistically significant. In other words, different combinations of average illumination and relative distance significantly affect the ER value.

A threshold value was set to assess the effect of each variable on the ER value and obtain a comparatively good recognition result. The threshold curve obtained by nonlinear curve fitting is shown in Fig. 10. The correlation of the corrected curve is r² = 0.98, showing a very accurate fit. The threshold curve shows that average illumination is the important influence factor for ER in light environments below 200 lx; above 200 lx, the important factor is distance. From the threshold curve, the relationship between average illumination and distance can be described as

X = 44 + 75.5 × 1.023^E / (1.023^E + 4.966)    (13)
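For reference, the fitted curve (13) can be evaluated directly. It rises steeply at low illumination and saturates at X = 44 + 75.5 = 119.5 for large E, consistent with the statement that illumination dominates below about 200 lx while distance dominates above it (a sketch; units are assumed to follow the paper's distance gradient in cm):

```python
def threshold_distance(E):
    """Eq. (13): fitted threshold curve relating average illumination E (lx)
    to the distance X below which recognition remains acceptable:
    X = 44 + 75.5 * 1.023**E / (1.023**E + 4.966)."""
    g = 1.023 ** E
    return 44 + 75.5 * g / (g + 4.966)

# The curve is monotonically increasing in E and approaches 119.5
# as E grows large; at E = 0 it starts near 56.7.
x0 = threshold_distance(0)
```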