

Recognition by Haar-Like Feature Under Different Light Environment Conditions Based on Machine Vision

Chenyang Shi
Institute for Electric Light Source, Fudan University, Shanghai, China

Yandan Lin
Institute for Electric Light Source, Fudan University, Shanghai, China

Abstract—The Haar-like feature is a principal way to detect certain special features. To study Haar-like feature detection under different light environments, experiments were designed according to illumination conditions based on machine vision. In the experiments, the face feature was chosen as the representative Haar-like feature on the objects. For the low illumination conditions, the Taguchi method was chosen to optimize the light parameters. The influence of the design parameters on the objects was evaluated separately by analysis of variance (ANOVA), and a parameter combination with desirable performance was obtained. For the high illumination conditions, average illumination, relative distance, and their interaction all had a significant effect on face feature recognition. A threshold value was set to assess the effect of each variable on face feature recognition, and a relation curve between average illumination and relative distance was fitted at this threshold. Guided by this curve, a good recognition effect was obtained. These results fill a gap in estimating face feature recognition under different light environments and will support research on Haar-like feature detection under different light environments.

Keywords—Haar-like feature; light; machine vision; face recognition

I. INTRODUCTION

During the last decades, research on detecting humans by machine vision has attracted much attention, covering pedestrians [1], faces [2], ears [3], and so on. A large number of vision-based pedestrian detection systems have been proposed. Several remarkable surveys have been presented [4, 5], and most of the work on human motion has been summarized in [6-8]. The face is one of the most important detection objects, and numerous algorithms have been proposed [9-11].

For feature detection, there are many different methods, e.g., Haar-like feature recognition, support vector machines (SVM) [12-14], deep learning [15, 16], and so on [17]. Many face feature recognition methods are based on geometric and appearance approaches [18] and are likely to break down under extreme lighting conditions. There are two design bases for face feature recognition, depending on pixels [19] and features [20-23], respectively. Feature-based algorithms make real-time detection possible, owing to their low computation time [22] and high detection accuracy [24]. Haar-like features [20-22, 25-27] are rectangles with stripes that can represent an object's features in grayscale.

In recent years, multiple algorithms for face recognition under varying illumination conditions have been proposed based on the Yale B, CMU PIE, extended Yale B, and other databases [13, 17, 28]. In these databases, the variation in lighting covers only the longitudinal and latitudinal angles of the light source direction, and the images contain only human faces against simple backgrounds. According to [29], a face detection algorithm based on color space can accurately detect a frontal or slightly tilted face under strong light. A face recognition rate of 100% under varying illumination has been reached using a Green's function based method [24]. But the relationship between feature recognition and the light environment has not been determined.

As is well known, photographs taken by a camera are easily affected by light, and 3D information is measured with low accuracy. The light condition affects the contrast of face features; against a complex background, some features will not be recognized, or will be detected as wrong features, by the same detection algorithm.

In this paper, to examine the relationship between Haar-like feature recognition and the light environment, a camera was used to photograph a 2D object under different light conditions. Four experimental parameters were chosen: average illumination, CCT, Ra, and relative distance. The 2D object is an image containing no fewer than 10 face features with a complex background. The experiments were designed according to illumination conditions. 2D face feature recognition in a low illumination light environment was designed and optimized using the Taguchi method [30-33], and the corresponding experiments were conducted. In a high illumination light environment, the influence of all parameters on face feature recognition was tested with ANOVA in SPSS software.

II. IMAGE PROCESSING METHOD



Image processing is an important part of this experiment: only through image processing can recognition results be obtained, and from these results the law governing image features under the influence of the light environment can be induced.

There are three commonly used color spaces in computer vision: grayscale, BGR (blue, green, and red), and HSV (hue, saturation, and value) [34]. In order to make the resulting law general, the most popular method was used in this research: images were transformed into grayscale by removing the color information before face detection and recognition, which is especially effective for intermediate processing.

Many methods can realize face recognition, including OpenCV, which is commonly used in all kinds of software because of its powerful functions. One of the important features of OpenCV for face detection is the Haar-like feature [34], a feature used to realize real-time face tracking. Each Haar-like feature describes the contrast mode of adjacent image areas. The Haar-like feature template contains two kinds of rectangles, white and black, and the Haar feature value is represented as

$$F(x) = \mathrm{Weight}_{\mathrm{all}} \sum_{\mathrm{Pixel} \in \mathrm{all}} \mathrm{Pixel} + \mathrm{Weight}_{\mathrm{black}} \sum_{\mathrm{Pixel} \in \mathrm{black}} \mathrm{Pixel} \qquad (1)$$

The integral image at location (x, y) contains the sum of the pixels above and to the left of (x, y), inclusive:

$$ii(x, y) = \sum_{x' \le x,\; y' \le y} i(x', y') \qquad (2)$$

where ii(x, y) is the integral image and i(x, y) is the original image (see Fig. 1). Using the following pair of recurrences:

$$s(x, y) = s(x, y-1) + i(x, y) \qquad (3)$$

$$ii(x, y) = ii(x-1, y) + s(x, y) \qquad (4)$$

(where s(x, y) is the cumulative row sum, s(x, -1) = 0, and ii(-1, y) = 0), the integral image can be computed in one pass over the original image [20]. After the integral image is constructed, the sum of the pixels of any rectangular region in the image can be obtained by (1)-(4).

Figure 1. The value of the integral image at point (x, y) is the sum of all the pixels above and to the left [20].
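As an illustration of (2)-(4) (a minimal sketch written for this text, not the authors' code), the following computes an integral image with the one-pass recurrences and uses four lookups to sum an arbitrary rectangle, the operation underlying the Haar feature value in (1).

```python
import numpy as np

def integral_image(i):
    """One-pass integral image via recurrences (3) and (4).
    s is the cumulative row sum with s(x, -1) = 0; ii(-1, y) = 0."""
    ii = np.zeros(i.shape, dtype=np.int64)
    s = np.zeros(i.shape, dtype=np.int64)
    for x in range(i.shape[0]):
        for y in range(i.shape[1]):
            s[x, y] = (s[x, y - 1] if y > 0 else 0) + i[x, y]    # (3)
            ii[x, y] = (ii[x - 1, y] if x > 0 else 0) + s[x, y]  # (4)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels over [x0..x1] x [y0..y1] from four table lookups."""
    total = int(ii[x1, y1])
    if x0 > 0:
        total -= int(ii[x0 - 1, y1])
    if y0 > 0:
        total -= int(ii[x1, y0 - 1])
    if x0 > 0 and y0 > 0:
        total += int(ii[x0 - 1, y0 - 1])
    return total

img = np.arange(16).reshape(4, 4)  # toy 4x4 "image"
ii = integral_image(img)
assert ii[3, 3] == img.sum()                            # full-image sum
assert rect_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()  # inner 2x2 block
```

A two-rectangle Haar-like feature as in (1) is then just a weighted combination of two such rectangle sums.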
In our experiments, the face features of static images were converted to grayscale, and the basic steps of the algorithm were written by importing the cv2 module. The parameters scaleFactor and minNeighbors were passed to control, respectively, the rate at which the image is scaled down between detection passes and how many neighboring detections each candidate face must retain during each iteration of face detection. The detected values were then extracted in turn to locate each face, and a green circle was drawn around it on the original image rather than on its grayscale version. Fig. 2 shows the flow chart of the face feature recognition algorithm based on the Haar-like feature in our experiments; a code sketch of this pipeline follows.

Figure 2. The face feature recognition algorithm flow chart based on the Haar-like feature
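The sketch below reproduces the pipeline of Fig. 2 as we read it: load the photo, convert to grayscale, equalize the histogram, run OpenCV's pretrained Haar cascade with scaleFactor and minNeighbors, and draw green circles on the original color image. The cascade file, input file name, and parameter values are illustrative assumptions, not the exact values used in the experiments.

```python
import cv2

# OpenCV's stock frontal-face Haar cascade; the paper does not name the file.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("object_photo.jpg")             # photo of the paper object
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # chroma -> grayscale
gray = cv2.equalizeHist(gray)                    # histogram equalization

# scaleFactor: shrink step between scales; minNeighbors: neighboring
# detections a candidate must gather to be kept. Common default values.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    center = (x + w // 2, y + h // 2)
    radius = max(w, h) // 2
    cv2.circle(img, center, radius, (0, 255, 0), 2)  # mark face on original

cv2.imwrite("detected.jpg", img)
print(f"{len(faces)} face candidates detected (TP + FP)")
```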
III. EXPERIMENT

A. Experiment Setting

In the experiments, images were acquired by a camera under different light environments, and face recognition was completed with OpenCV 3. After the image was converted from chroma to grayscale and histogram equalization was applied, face features were identified from effective features by pixel comparison, and the face regions were then marked with circles on the original image. The contrast between the effective parts and their surrounding features in the image, and their degree of differentiation, changed with the light environment; in specific recognition tasks, this change shows up in the recognition results.

TABLE I. PARAMETERS OF THE CAMERA

  Parameters             Value
  Main lens aperture     F2.0
  Image resolution       1080P
  Photosensitive pixels  10M
  Actual pixels          1M
  Picture perspective    130º

Our experiments were run under different illumination conditions, with the light parameters all based on GB 50034-2013 (covering, for example, indoor parking areas and tunnels). The PHILIPS CVR108/93 was chosen as the detection camera; its parameters are shown in Table I.

The light conditions of the experiments were controlled by a THOUSLITE LED cube (shown in Fig. 3), which simulated light with a high-quality solar spectrum and adjustable correlated color temperature, color rendering index, color deviation, and illumination for the different light environments of the experiment. The light parameters were measured with a Konica Minolta CL-500A illuminance spectrophotometer.

Figure 3. LED cube

B. The Parameters of Experiments

After understanding the principles, the control factors should be discussed. Illumination mainly influenced the feature recognition effect: the higher the illumination, the more light shone on the paper object and the clearer the image obtained by the camera. The relative distance between the camera and the object was obviously another important control factor for detection: the shorter the relative distance, the clearer the image taken by the camera. Besides these, the light environment parameters also included CCT and Ra. Therefore, the control factors affecting face feature recognition were determined as: (1) average illumination of the object (E), (2) CCT, (3) Ra, and (4) relative distance (L). Different sizes of the same object were also selected for this experiment, namely A4 and A3 paper.

The illumination values were measured at the central point of each face on the object with the Konica Minolta CL-500A illuminance spectrophotometer. As shown in Fig. 4, the average illumination is the average over three points on the object: the central face and the two edge faces.

Figure 4. Measuring points of light parameters

As shown in Fig. 5, the relative distance equation can be expressed as

$$\frac{L}{m} = \frac{X}{M} \qquad (5)$$

where L is the relative distance, m is the length of the face feature on the object, X is the real distance between the camera and a real object, and M is the real length of a face. To simplify, a human face can be approximated as an oval, with the length and width of the face represented by the long and short axes of the oval (see Fig. 6). The sizes of the faces on the objects are shown in Table II, and the sizes of real human faces, taken from the head-face dimensions of adults (GB/T 2428-1998), are shown in Table III.

Figure 5. Relative distance

Figure 6. Approximate graphic of face

TABLE II. THE SIZE OF FACE ON THE OBJECTS

  Objects  Length (m1)/cm  Width (m2)/cm
  A4       1.20            1.00
  A3       1.70            1.40

TABLE III. THE SIZE OF REAL FACE ON HUMAN

  Human    Length (M1)/cm  Width (M2)/cm
  Female   17.60           15.40
  Male     18.40           14.90
  Average  18.00           15.15
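As a check on (5) against Tables II, III, and VIII (the worked arithmetic here is ours): with the average real face length M = 18.00 cm and the A3 object's face length m1 = 1.70 cm, a relative distance L = 35 cm corresponds to a real distance of

$$X = L \cdot \frac{M}{m_1} = 35 \times \frac{18.00}{1.70} \approx 370.6\ \mathrm{cm},$$

which matches the first A3 entry of Table VIII; the A4 row follows with m1 = 1.20 cm (35 × 18.00/1.20 = 525 cm).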
Under different light conditions, the camera took a photo of the object. The light conditions affected the contrast of the face features, and the images were processed by the same detection algorithm; some features would not be recognized, or would be detected as wrong features. For performance evaluation of face feature recognition, precision and accuracy indices are used [35]. Accuracy denotes the ratio of detections that are correct, and precision implies the completeness of the extracted data. Under a given light environment, larger values of these two parameters indicate better performance of the Haar-like algorithm. An illustration of sample recognition truth data for one image is shown in Fig. 7.

Figure 7. Recognition truth image illustration (red circle number: TP, green circle number: FN, blue circle number: FP)

The efficiency, or success rate, is the ability of the face detection algorithm to successfully extract the field of interest, and it can be defined using three quantities based on the presence and absence of the circle. In Fig. 7, the successfully detected faces are true positives (TP); the face regions missed by the algorithm and erroneously labeled as non-face are false negatives (FN); and the non-face regions erroneously labeled as face are false positives (FP). The TP + FN region depicts the face data in the truth image, and the TP + FP region represents the face region detected by the algorithm. In order to obtain a more accurate effect, an object with ten features was selected. The accuracy (A), precision (P), and error ratio (ER) of the segmentation method can be denoted as

$$A = \frac{TP}{TP + FN + FP} \qquad (6)$$

$$P = \frac{TP}{TP + FP} \qquad (7)$$

$$ER = \frac{FN + FP}{TP + FN + FP} = 1 - A \qquad (8)$$
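The following is a direct transcription of (6)-(8); the counts in the usage lines are hypothetical, chosen only to show the arithmetic for a ten-feature object.

```python
def detection_metrics(tp, fn, fp):
    """Accuracy (6), precision (7), and error ratio (8) from TP/FN/FP counts."""
    total = tp + fn + fp
    accuracy = tp / total            # (6)
    precision = tp / (tp + fp)       # (7)
    error_ratio = (fn + fp) / total  # (8): equals 1 - accuracy
    return accuracy, precision, error_ratio

# Hypothetical outcome: 8 of 10 faces found, 2 missed, no false alarms.
a, p, er = detection_metrics(tp=8, fn=2, fp=0)
print(a, p, er)  # 0.8 1.0 0.2
```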
C. Low Illumination Experiments

Under the low illumination conditions, the Taguchi method was chosen to optimize the light parameters. The levels of the control factors were chosen in an appropriate range based on the indoor lighting standard, as shown in Table IV. After picking the factors and levels, the experiments assembled in the orthogonal array shown in Table V were run in the laboratory and processed by the computer program according to Fig. 2.

Table V also lists the A values of the A4 and A3 paper objects and the signal-to-noise (S/N) ratio, which quantifies the quality characteristics. In these experiments, a larger A was better, so the larger-the-better S/N ratio [31] was used, which can be expressed as

$$\eta = -10 \log_{10} \left( \frac{1}{n} \sum_{i=1}^{n} \frac{1}{y_i^{2}} \right) \qquad (9)$$

where y_i denotes the value of A associated with the ith test, i is the index number of the performed test, and n is the total number of data points per trial.

The S/N ratios of the different levels can be seen clearly in Fig. 8. For the A4 object, the best recognition is provided by the parameter settings a3-b2-c3-d2, while the settings a2-b1-c1-d2 give the best recognition for the A3 object among all the parameter combinations.

TABLE IV. CONTROL FACTORS AND LEVELS

  Symbol  Parameters Description       Number of Levels  Level 1  Level 2  Level 3
  a       Average illumination (E)/lx  3                 30       50       70
  b       CCT/K                        3                 3500     5000     6500
  c       Ra                           3                 85       90       95
  d       Relative distance (L)/cm     3                 35       45       55

TABLE V. EXPERIMENTS DESIGNED WITH AN L9 (3^4) ORTHOGONAL ARRAY

  Experiment Number  a  b  c  d  A of A4  S/N of A4  A of A3  S/N of A3
  1                  1  1  1  1  0.5      -6.02      0.9      -0.92
  2                  1  2  2  2  0.8      -1.94      1        0.00
  3                  1  3  3  3  0.5      -6.02      0.8      -1.94
  4                  2  1  2  3  0.6      -4.44      1        0.00
  5                  2  2  3  1  0.9      -0.92      0.9      -0.92
  6                  2  3  1  2  0.9      -0.92      1        0.00
  7                  3  1  3  2  0.9      -0.92      0.9      -0.92
  8                  3  2  1  3  0.7      -3.10      0.9      -0.92
  9                  3  3  2  1  0.8      -1.94      0.8      -1.94

Figure 8. S/N ratios of (a) A4 and (b) A3 at different levels
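With a single observation per trial (n = 1), the larger-the-better ratio (9) reduces to η = 20 log10(y); the sketch below (our code, not the authors') reproduces the S/N column for A4 in Table V from its A values.

```python
import math

def sn_larger_the_better(ys):
    """Taguchi larger-the-better S/N ratio, equation (9)."""
    n = len(ys)
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / n)

# A values of the A4 object from Table V, experiments 1-9.
for a in (0.5, 0.8, 0.5, 0.6, 0.9, 0.9, 0.9, 0.7, 0.8):
    print(f"A = {a}: eta = {sn_larger_the_better([a]):.2f} dB")
# Output matches Table V: -6.02, -1.94, -6.02, -4.44, -0.92, ...
```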
As shown in Fig. 8 and Table V, factors b and c had a bigger influence on the A3 paper than on the A4 paper; in particular, factor c showed an obvious increase. Therefore, c1 was chosen as the optimal parameter, because it effectively enhanced the recognition of the A3 paper. For the A3 paper, b1 and b2 had the same S/N ratio, so b2 was chosen to ensure a high recognition effect for the A4 paper. Factor a was different from the others: it had a similar influence on the A4 and A3 paper, and level a2 was chosen to ensure a high recognition effect for both objects. Factor d had a smaller influence on the A3 paper than on the A4 paper, but its optimal level was the same for both. Finally, based on Fig. 8 and the discussion above, the parameter setting was defined as a2-b2-c1-d2.

The analysis of variance (ANOVA) technique was utilized to estimate the cause-and-effect relationship between the design factors and the performance. The significance of each factor was calculated and is known as its "important level", which can be defined as ρ:

$$\rho = \frac{SS_d}{SS_t} \qquad (10)$$

$$SS_t = SS_d + SS_e \qquad (11)$$

where SS_d is the sum of the squared deviations and SS_e is the sum of the squared errors. (Note that SS_e is approximated as 0, since the experiments are repeatable.) The total sum of the squared deviations SS_t from the total mean S/N ratio η_n can be expressed as [13]

$$SS_t = SS_d = \sum_{i=1}^{n_i} (\eta_i - \eta_n)^2 \qquad (12)$$

where n_i is the number of experiments in the orthogonal array and η_i is the S/N ratio of the ith experiment.

Table VI summarizes the ANOVA results for A4 and A3. The results show that relative distance (factor d, 44.94%) had the most influence on the face feature recognition of the A4 paper object, followed by average illumination (factor a, 38.77%); the contributions of these two factors are larger than those of the others. For the A3 paper object, average illumination (factor a, 31.09%) had the same influence as relative distance (factor d, 31.09%). It is evident that factor c affects both A4 and A3 the least, but its contribution increases the most from A4 to A3. For the A3 paper object, the contributions of all four factors are more even, which means that the influence of the light environment is more broadly felt.

TABLE VI. ANOVA RESULTS

  Objects                    a      b      c      d
  A4  SSd                    4.59   1.64   0.29   5.32
  A4  Contribution (%)       38.77  13.81  2.48   44.94
  A3  SSd                    0.47   0.31   0.26   0.47
  A3  Contribution (%)       31.09  20.44  17.38  31.09
D. High Illumination Experiments

In the low illumination experiments, average illumination and relative distance influenced face feature recognition more than the other factors. These experiments were therefore designed to study the relationship between the optical parameters and face feature recognition under a high illumination environment. The ER value was again defined as in (8), i.e., the number of missed or wrongly recognized face features divided by the total feature number.

Relevant parameters for the further experiments are summarized in Table VII. Of these parameters, two independent variables were selected for their presumed effect on face feature recognition: average illumination and relative distance. The other parameters, such as Ra and CCT, were kept constant. The values of the two independent variables were selected according to real light environment conditions. The average illumination gradient was set as 50 lx, 100 lx, 200 lx, 300 lx, 400 lx, 500 lx, and 600 lx; the relative distance gradient was set as 35 cm, 45 cm, 55 cm, 65 cm, 75 cm, 85 cm, 95 cm, and 105 cm. The specific experimental parameters were set as follows:

TABLE VII. PARAMETERS SETTING

  Parameters Type  Parameters Name              Value
  Variable         Average illumination (E)/lx  50, 100, 200, 300, 400, 500, 600
  Variable         Relative distance (L)/cm     35, 45, 55, 65, 75, 85, 95, 105
  Constant         Ra                           >85
  Constant         CCT/K                        4500

All combinations of the different values of the two variables were used in the experiment, with the aim of collecting sufficient data to find the relationship between these variables. This resulted in 56 different light conditions, which were evaluated on the perceived degree of face feature recognition. Table VIII shows the conversion from relative distance to the actual distance between camera and object, according to (5). For humans, the minimum distance for face recognition is usually suggested as 4 m, and 10 m is recommended as the ideal distance [36]; as can be seen, both values are covered by these experiments.

TABLE VIII. THE REDUCTION OF RELATIVE DISTANCE

  Relative distance/cm  35     45     55     65     75     85     95      105
  Distance X (A3)/cm    370.6  476.5  582.4  688.2  794.1  900.0  1005.9  1111.8
  Distance X (A4)/cm    525    675    825    975    1125   1275   1425    1575

Note that all statistical analyses mentioned further in this paper were performed with SPSS software. The relationship between the ER value, average illumination, and relative distance was then tested by MANOVA (multivariate analysis of variance), as shown in Table IX; a sketch of an equivalent analysis in open-source tooling follows.
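The paper's analysis was done in SPSS; as a non-authoritative illustration of the same two-factor test, the sketch below fits a model with both factors and their interaction and reports Type III sums of squares, matching the columns of Table IX. The file name and column names are hypothetical, and replicated measurements per (E, L) combination are assumed (without replicates the interaction term leaves no residual degrees of freedom).

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical records: one row per trial with columns E (lx), L (cm), ER.
df = pd.read_csv("er_measurements.csv")

# Treat E and L as categorical factors and include their interaction,
# mirroring the E, L, and E * L sources of Table IX.
model = ols("ER ~ C(E) * C(L)", data=df).fit()
table = sm.stats.anova_lm(model, typ=3)  # Type III sums of squares
print(table)  # compare F and PR(>F) with the F and Sig. columns of Table IX
```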
TABLE IX. MANOVA RESULTS

  Dependent Variable  Source                                             Type III Sum of Squares  df  Mean Square  F       Sig.   Partial Eta Squared
  ER value (A4)       Average illumination (E)                           0.340                    6   0.057        3.381   0.008  0.326
  ER value (A4)       Relative distance (L)                              5.599                    7   0.800        47.775  0.000  0.888
  ER value (A3)       Average illumination (E)                           0.122                    6   0.020        2.728   0.025  0.280
  ER value (A3)       Relative distance (L)                              0.301                    7   0.043        5.784   0.000  0.491
  ER value            Average illumination * Relative distance (E * L)   7.967                    55  0.145        3.584   0.000  0.638

According to the results in Table IX, the MANOVA results are consistent across tests. For the A4 paper object, the test between average illumination and the ER value gives P = 0.008 < 0.05, which is statistically significant, and the test between relative distance and the ER value gives P = 0.000 < 0.05, also statistically significant. For the A3 paper object, the corresponding tests give P = 0.025 < 0.05 and P = 0.000 < 0.05, both statistically significant. For all objects, the test between the interaction of average illumination and relative distance (E * L) and the ER value gives P = 0.000 < 0.05, also statistically significant. In other words, different combinations of average illumination and relative distance can significantly affect the ER value.

As shown in Fig. 9, the ER values under different average illuminations and relative distances were fitted with B-splines. Figs. 9(a)-(g) plot the ER value against relative distance under light environments with average illumination from 50 lx to 600 lx. It can be seen from the figures that the ER value increases with increasing relative distance in all seven illumination conditions. In the low illumination conditions, even a short relative distance results in a high ER value, which is directly related to visibility. In the high illumination conditions, the ER value increases rapidly once the relative distance exceeds a certain value; before that value, the ER value stays stable. At the same relative distance, there is no obvious law relating the ER values at different illuminations.

Figure 9. ER value with different average illumination and relative distance

From the seven figures, it can be concluded that below 0.3 the ER value fluctuates within a narrow range; in this regime, the influence of distance on the ER value is not obvious. Once the ER value exceeds 0.3, it increases rapidly, and in that regime the influence of distance on the ER value is obvious. Therefore, ER = 0.3 was set as a threshold for assessing the effect of each variable on the ER value while still obtaining a comparatively good recognition result. The threshold curve obtained by nonlinear curve fitting is shown in Fig. 10. The correlation of the fitted curve is r² = 0.98, which shows a very accurate fit. The threshold curve shows that average illumination is the important influence factor for ER in light environments below 200 lx; above 200 lx, the important factor is distance. According to the threshold curve, the relationship between average illumination and distance can be described as

$$X = 44 + \frac{75.5 \times 1.023^{E}}{1.023^{E} + 4.966} \qquad (13)$$

Figure 10. The relationship between illumination and distance at the threshold

In addition, the curve can be used to predict unmeasured values beyond the limits of the measured parameters. We can also conclude that combinations of the two variables lying under the threshold curve will give a good ER value.
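A small sketch evaluating (13). Reading the typeset formula as 1.023 raised to the power E is our interpretation of the extracted text, and the output unit (cm, matching the relative distances) is likewise an assumption.

```python
def threshold_distance(e_lux):
    """Distance at the ER = 0.3 threshold for average illumination E, per (13)."""
    g = 1.023 ** e_lux
    return 44 + 75.5 * g / (g + 4.966)

for e in (50, 100, 200, 400, 600):
    print(f"E = {e} lx -> threshold distance ~ {threshold_distance(e):.1f} cm")
# The curve saturates near 44 + 75.5 = 119.5 for large E, which is consistent
# with the paper's reading that beyond roughly 200 lx the controlling factor
# becomes distance rather than illumination.
```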
IV. CONCLUSIONS

In this paper, face feature recognition based on machine vision under different illuminations, with an optimal light environment, was studied; the face feature served as the representative object of Haar-like feature detection.

Under the low illumination environment, the optimal design parameters a2-b2-c1-d2 for the A4 and A3 paper objects were easily obtained using the Taguchi method. The ANOVA technique was then utilized to analyze the contribution of each factor to the performance. The results show that average illumination and relative distance are the main influences on both the A4 paper object (38.77% and 44.94%, respectively) and the A3 paper object (both 31.09%). Overall, face feature recognition with an optimal light environment obtained by the Taguchi method can not only shorten the design cycle but also deliver the desired performance.

Under the high illumination environment, average illumination and relative distance also show a significant effect (p < 0.05) on face feature recognition for the A4 and A3 paper objects by MANOVA; these results are similar to those under low illumination. Average illumination and relative distance also have interactive effects on face feature recognition for the A4 and A3 paper objects by MANOVA.
A threshold value for assessing the effect of each variable on the ER value was set, and a relation curve between average illumination and relative distance was fitted at this threshold. This curve shows that average illumination is the main influence on face feature recognition in light environments below 200 lx, while above 200 lx the most influential factor is relative distance. Under the guidance of this curve, average illumination and relative distance can be set to obtain a good recognition effect. In the future, other objects bearing Haar-like features should be studied under different conditions.

ACKNOWLEDGMENT

This work was supported by the National Key R&D Program of China (Project No. 2017YFB0403700).
REFERENCES

[1] C. Wojek, P. Dollar, B. Schiele, and P. Perona, "Pedestrian Detection: An Evaluation of the State of the Art," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 4, pp. 743-761, 2012.
[2] M. R. Faraji and X. J. Qi, "Face recognition under varying illuminations using logarithmic fractal dimension-based complete eight local directional patterns," Neurocomputing, vol. 199, pp. 16-30, Jul. 2016.
[3] Z. Emersic, V. Struc, and P. Peer, "Ear recognition: More than a survey," Neurocomputing, vol. 255, pp. 26-39, Sep. 2017.
[4] D. Geronimo, A. M. Lopez, A. D. Sappa, and T. Graf, "Survey of Pedestrian Detection for Advanced Driver Assistance Systems," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 7, pp. 1239-1258, 2010.
[5] M. Enzweiler and D. M. Gavrila, "Monocular Pedestrian Detection: Survey and Experiments," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2179-2195, 2009.
[6] T. B. Moeslund, A. Hilton, and V. Krüger, "A survey of advances in vision-based human motion capture and analysis," Computer Vision and Image Understanding, vol. 104, no. 2, pp. 90-126, 2006.
[7] R. Poppe, "Vision-based human motion analysis: An overview," Computer Vision and Image Understanding, vol. 108, no. 1, pp. 4-18, 2007.
[8] T. Gandhi and M. M. Trivedi, "Pedestrian Protection Systems: Issues, Survey, and Challenges," IEEE Transactions on Intelligent Transportation Systems, vol. 8, no. 3, pp. 413-430, 2007.
[9] Y. Xu, Z. Z. Fan, and Q. Zhu, "Feature space-based human face image representation and recognition," Optical Engineering, vol. 51, no. 1, Jan. 2012.
[10] Y. L. Li, L. Meng, and J. F. Feng, "Face illumination compensation dictionary," Neurocomputing, vol. 101, pp. 139-148, Feb. 2013.
[11] X. D. Xie and K. M. Lam, "An efficient method for face recognition under varying illumination," 2005 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 3841-3844, 2005.
[12] L. Zhang, L. Wang, and W. Lin, "Conjunctive patches subspace learning with side information for collaborative image retrieval," IEEE Transactions on Image Processing, vol. 21, no. 8, pp. 3707-3720, 2012.
[13] L. Zhang, L. Wang, and W. Lin, "Semisupervised biased maximum margin analysis for interactive image retrieval," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 2294-2308, 2012.
[14] F. Ratle, G. Camps-Valls, and J. Weston, "Semisupervised Neural Networks for Efficient Hyperspectral Image Classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 5, pp. 2271-2282, 2010.
[15] J. Ker, L. P. Wang, J. Rao, and T. Lim, "Deep Learning Applications in Medical Image Analysis," IEEE Access, vol. 6, pp. 9375-9389, 2017.
[16] X. Wei, Z. Yang, Y. Liu, D. Wei, L. Jia, and Y. Li, "Railway track fastener defect detection based on image processing and deep learning techniques: A comparative study," Engineering Applications of Artificial Intelligence, vol. 80, pp. 66-81, 2019.
[17] X. Bai, Y. Fang, W. Lin, L. Wang, and B. F. Ju, "Saliency-Based Defect Detection in Industrial Images by Using Phase Spectrum," IEEE Transactions on Industrial Informatics, vol. 10, no. 4, pp. 2135-2145, 2014.
[18] S. A. Khan, S. Shabbir, R. Akram, N. Altaf, M. O. Ghafoor, and M. Shaheen, "Brief review of facial expression recognition techniques," International Journal of Advanced and Applied Sciences, vol. 4, no. 4, pp. 27-32, Apr. 2017.
[19] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, Jul. 1997.
[20] P. Viola and M. Jones, "Robust Real-time Face Detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2004.
[21] C. P. Papageorgiou, M. Oren, and T. Poggio, "A general framework for object detection," Sixth International Conference on Computer Vision, pp. 555-562, 1998.
[22] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 511-518, 2001.
[23] Y. Zhang, C. H. Hu, and X. B. Lu, "Face recognition under varying illumination based on singular value decomposition and retina modeling," Multimedia Tools and Applications, vol. 77, no. 21, pp. 28355-28374, Nov. 2018.
[24] Z. J. Yang, X. He, W. Y. Xiong, and X. F. Nie, "Face Recognition under Varying Illumination Using Green's Function-based Bidimensional Empirical Mode Decomposition and Gradientfaces," 3rd Annual International Conference on Information Technology and Applications (ITA 2016), vol. 7, 2016.
[25] Y. Wei, Q. Tian, J. H. Guo, W. Huang, and J. D. Cao, "Multi-vehicle detection algorithm through combining Harr and HOG features," Mathematics and Computers in Simulation, vol. 155, pp. 130-145, Jan. 2019.
[26] M. Oren, C. Papageorgiou, P. Sinha, E. Osuna, and T. Poggio, "Pedestrian detection using wavelet templates," 1997 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 193-199, 1997.
[27] C. E. Jacobs, A. Finkelstein, and D. H. Salesin, "Fast multiresolution image querying," Proceedings of SIGGRAPH 95, pp. 277-286, 1995.
[28] K. Juneja, A. Verma, and S. Goel, "An Improvement on Face Recognition Rate using Local Tetra Patterns with Support Vector Machine under varying Illumination Conditions," 2015 International Conference on Computing, Communication & Automation (ICCCA), pp. 1079-1084, 2015.
[29] W. Pan and Y. J. Huang, "A new algorithm of face detection under strong light condition," 2nd IEEE International Conference on Advanced Computer Control (ICACC 2010), vol. 3, pp. 487-491, 2010.
[30] Y. Chen, S. Wen, and P. Song, "Design of a backlight module with a freeform surface by applying the Taguchi method," Chinese Optics Letters, vol. 13, no. 3, pp. 76-80, 2015.
[31] C. Y. Shi, S. S. Wen, and Y. C. Chen, "Study of Curved Surface LED Array Multi-shadow Problem Based on Taguchi Method," Chinese Journal of Luminescence, vol. 36, no. 6, pp. 651-656, 2015.
[32] E. M. Anawa and A. G. Olabi, "Using Taguchi method to optimize welding pool of dissimilar laser-welded components," Optics & Laser Technology, vol. 40, no. 2, pp. 379-388, 2008.
[33] C. Y. Shi, S. S. Wen, and Y. C. Chen, "Study on Curved Surface LED Array Illumination Problem Based on Taguchi Method," Chinese Journal of Luminescence, vol. 36, no. 3, 2015.
[34] J. Minichino and J. Howse, Learning OpenCV 3 Computer Vision with Python, pp. 36, 70-82, Beijing: China Machine Press, 2016.
[35] K. Manoharan and P. Daniel, "Detection of unstructured roads from a single image for autonomous navigation applications," Journal of Electronic Imaging, vol. 27, no. 4, Jul. 2018.
[36] S. Fotios and P. Raynham, "Correspondence: Lighting for pedestrians: Is facial recognition what matters?," Lighting Research & Technology, vol. 43, no. 1, pp. 129-130, Mar. 2011.
