
**Method Using Artificial Neural Network**

Kolhandai Yesu, Himadri Jyoti Chakravorty, Prantik Bhuyan, Rifat Hussain, Kaustubh Bhattacharyya

Department of Electronics and Communication Engineering

The Assam Don Bosco University, Assam, India

Email: mariayesudass@gmail.com, pauljyoti04@gmail.com, prantik.bhuyan@rediffmail.com,

rifathussain33@gmail.com, kaustubh.d.electronics@gmail.com

Abstract – Face recognition is a biometric tool for authentication and verification with both research and practical relevance. A facial-recognition-based verification system can further be deemed a computer application for automatically identifying or verifying a person in a digital image. Varied and innovative face recognition systems have been developed thus far with widely accepted algorithms. In this paper, we present an intelligent hybrid-features-based face recognition method that configures the central moment, the eigenvectors and the standard deviation of the eye, nose and mouth segments of the human face as the decision support entities of a Generalized Feed-Forward Artificial Neural Network (GFFANN). The proposed method's correct recognition rate is over 95%.

Keywords – recognition; central moment; eigenvectors; standard deviation; neural network; training; testing; cosine transform

I. INTRODUCTION

Biometrics refers to the science of analyzing human body parts for security purposes. The word biometrics is derived from the Greek words bios (life) and metrikos (measure) [1]. Biometric identification has become more popular of late owing to society's current security requirements in information, business, the military, e-commerce and other fields [2].

Face recognition is a nonintrusive method, and facial images are the most common biometric characteristic used by humans to make a personal recognition. Human faces are complex objects with features that can vary over time. Nevertheless, we humans have a natural ability to recognize faces and identify a person in a split second, and our natural recognition ability extends well beyond faces. In the interaction between humans and machines, commonly known as the Human Robot Interface [3] or Human Computer Interface (HCI), the machines must be trained to recognize, identify and differentiate human faces. There is thus a need to simulate recognition artificially in our attempts to create intelligent autonomous machines.

A face recognition system recognizes an individual by matching the input image against the images of all users in a database and finding the best match. The popular approaches are based either on the location and shape of facial attributes such as the eyes, eyebrows, nose, lips and chin and their spatial relationships, or on an overall analysis of the face image that represents a face as a weighted combination of a number of canonical faces. The former approach is robust and efficient, as the vital attributes of the face are considered in training and testing, while the latter approach reckons in the global information of the whole face. Basically, any face recognition system can be depicted by the following block diagram.

Figure 1. Basic blocks of a face recognition system.

1) Pre-processing Unit: In the initial phase, the image captured in true-colour format is converted to a grayscale image, resized to a predefined standard, and denoised. Histogram Equalization (HE) and the Discrete Wavelet Transform (DWT) are then carried out for illumination normalization and expression normalization respectively [4].

2) Feature Extraction: In this phase, facial features are extracted using edge detection techniques, the Principal Component Analysis (PCA) technique, Discrete Cosine Transform (DCT) coefficients, DWT coefficients, or a fusion of different techniques [5].

3) Training and Testing: Here, Euclidean Distance (ED), Hamming Distance, Support Vector Machines (SVM), Neural Networks [6] or Random Forests (RF) [7] may be used for training, followed by testing new (test) images for recognition.
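To make step 1 concrete, grayscale conversion and histogram equalization can be sketched in a few lines of NumPy (a minimal illustration under assumed BT.601 luminance weights and an assumed 8-bit image, not the authors' implementation):

```python
import numpy as np

def to_gray(rgb):
    # Luminance-weighted grayscale conversion (ITU-R BT.601 weights)
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)

def hist_equalize(gray):
    # Map each intensity through the normalized cumulative histogram,
    # stretching the occupied intensity range to the full 0-255 scale
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[gray].astype(np.uint8)

rgb = (np.random.rand(64, 64, 3) * 100).astype(np.uint8)  # dim synthetic image
eq = hist_equalize(to_gray(rgb))                           # contrast-stretched result
```

After equalization the brightest pixel present maps to 255, which is the illumination-normalization effect HE provides before the DWT step.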

II. RELATED WORKS

The past few years have witnessed increased interest in research aimed at developing reliable face recognition techniques.


One commonly employed technique represents the image by a vector in a dimensional space whose size is similar to that of the image [8]. However, the large dimensionality of this space reduces the speed and robustness of face recognition. This problem is overcome rather effectively by dimensionality reduction techniques such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA).

PCA is an eigenvector method designed to model linear variation in high-dimensional data. It performs dimensionality reduction by projecting the original n-dimensional data onto a k (k << n)-dimensional linear subspace spanned by the leading eigenvectors of the data's covariance matrix [9].
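The projection step can be sketched as follows (synthetic data and an assumed k = 5; an eigen-decomposition of the sample covariance matrix, keeping the leading eigenvectors):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 100))          # 40 samples in an n = 100-dimensional space
Xc = X - X.mean(axis=0)                  # center the data
cov = Xc.T @ Xc / (len(X) - 1)           # 100 x 100 sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns eigenvalues in ascending order
W = eigvecs[:, ::-1][:, :5]              # leading k = 5 eigenvectors
Y = Xc @ W                               # reduced representation, shape (40, 5)
```

In a face recognition setting each row of `X` would be a flattened face image, and `Y` the low-dimensional features passed on to the classifier.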

LDA is a supervised learning algorithm. A related descriptor, the Local Directional Pattern (LDP), computes the edge response values in all eight directions at each pixel position and generates a code from the relative strengths of their magnitudes; each face is then represented as a collection of LDP codes for the face recognition process [10].

While PCA uses an orthogonal linear space for encoding information, LDA encodes using a linearly separable space whose bases are not necessarily orthogonal. Experiments carried out by researchers thus far point to the superiority of LDA-based algorithms over PCA-based ones.

Another face analysis technique is Locality Preserving Projections (LPP), which consists in obtaining a face subspace and finding the local structure of the manifold. Basically, it is obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the manifold. Although a linear technique, it therefore recovers important aspects of the intrinsic nonlinear manifold structure by preserving local structure [11].

K. Ramesha and K. B. Raja proposed Dual Transform Based Feature Extraction for Face Recognition (DTBFEFR), in which the Dual Tree Complex Wavelet Transform (DT-CWT) is employed to form the feature vector and Euclidean Distance (ED), Random Forest (RF) and Support Vector Machine (SVM) are used as the classifiers [12].

Weng and Huang presented a face recognition model based on a hierarchical neural network which is grown automatically rather than trained with gradient descent. Good discrimination results for ten distinct subjects are reported [13].

This paper presents a face recognition method using both the geometrical features of biometric characteristics of the face, such as the eyes, nose and mouth, and an overall analysis of the whole face. After the pre-processing stage, segments of the eyes, nose and mouth are extracted from the faces in the database. These blocks are then resized and the training features are computed. These facial features reduce the dimensionality by gathering the essential information while removing the redundancies present in each segment. Besides these, the global features of the total image are also computed. The specially designed features are then used as decision support entities of the classifier system configured using the GFFANN, which provides a decision in the testing phase with an accuracy of over 95%.

III. LOCAL AND GLOBAL FACE FEATURE EXTRACTION WITH MARKED DIMENSIONALITY REDUCTION

Local facial feature extraction consists in localizing the most characteristic face components (eyes, nose, mouth, etc.) within images that depict human faces. The purpose of feature extraction is to extract feature vectors, or information that represents the face, while reducing computation time and memory storage. Global feature extraction consists in considering the face as a single whole entity and then extracting the predetermined vital features of the face.

In this work, the central moment, eigenvector and standard deviation of the eyes, nose and mouth are computed as the training features for local feature extraction, while the standard deviation and the eigenvector of the covariance of the whole face are assessed as the global features. Besides extracting the quintessential information of the face, these features also account for dimensionality reduction.

A. Central Moment

Central moments find application in shape recognition, where features independent of parameters that cannot be controlled in an image are generated. Such features are called invariant features. There are several types of invariance; for example, if an object may occur at an arbitrary location in an image, then one needs moments that are invariant to location. For binary connected components, this can be achieved simply by using the central moments $\mu_{pq}$ [14].

In image processing, computer vision and related fields, an image moment is a certain particular weighted average (moment) of the image pixels' intensities, or a function of such moments, usually chosen to have some attractive property or interpretation. Image moments are useful for describing objects after segmentation. Simple properties of the image found via image moments include its area (or total intensity), its centroid, and information about its orientation [15].

Central moments are mathematically defined as [16]

$$\mu_{pq} = \iint (x-\bar{x})^p (y-\bar{y})^q f(x,y)\,dx\,dy \qquad (1)$$

where $\bar{x} = M_{10}/M_{00}$ and $\bar{y} = M_{01}/M_{00}$ are the components of the centroid. If $f(x, y)$ is a digital image, then the previous equation becomes

$$\mu_{pq} = \sum_{x}\sum_{y} (x-\bar{x})^p (y-\bar{y})^q f(x,y) \qquad (2)$$

The central moments of order up to 3 are:

$$\mu_{00} = M_{00}, \quad \mu_{01} = 0, \quad \mu_{10} = 0,$$
$$\mu_{11} = M_{11} - \bar{x} M_{01} = M_{11} - \bar{y} M_{10},$$
$$\mu_{20} = M_{20} - \bar{x} M_{10}, \qquad \mu_{02} = M_{02} - \bar{y} M_{01},$$
$$\mu_{21} = M_{21} - 2\bar{x} M_{11} - \bar{y} M_{20} + 2\bar{x}^2 M_{01},$$
$$\mu_{12} = M_{12} - 2\bar{y} M_{11} - \bar{x} M_{02} + 2\bar{y}^2 M_{10},$$
$$\mu_{30} = M_{30} - 3\bar{x} M_{20} + 2\bar{x}^2 M_{10},$$
$$\mu_{03} = M_{03} - 3\bar{y} M_{02} + 2\bar{y}^2 M_{01}.$$

It can be shown that

$$\mu_{pq} = \sum_{m=0}^{p}\sum_{n=0}^{q} \binom{p}{m}\binom{q}{n} (-\bar{x})^{p-m} (-\bar{y})^{q-n} M_{mn} \qquad (3)$$
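The definitions above can be checked directly on a small synthetic image (a NumPy sketch; the x/y axis convention is an assumption):

```python
import numpy as np

def raw_moment(img, p, q):
    # M_pq = sum over pixels of x^p * y^q * f(x, y)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return float((x ** p * y ** q * img).sum())

def central_moment(img, p, q):
    # mu_pq per Eq. (2), with the centroid (xbar, ybar) as in Eq. (1)
    m00 = raw_moment(img, 0, 0)
    xbar = raw_moment(img, 1, 0) / m00
    ybar = raw_moment(img, 0, 1) / m00
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return float(((x - xbar) ** p * (y - ybar) ** q * img).sum())

img = np.zeros((32, 32))
img[5:10, 8:16] = 1.0                                    # a small rectangular blob
shifted = np.roll(np.roll(img, 7, axis=0), 3, axis=1)    # the same blob, translated
```

By construction, `central_moment(img, 1, 0)` and `central_moment(img, 0, 1)` vanish, and `central_moment(img, 2, 0)` is unchanged when the blob is translated.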

Central moments are translation invariant. Information about image orientation can be derived by first using the second-order central moments to construct a covariance matrix:

$$\mu'_{20} = \mu_{20}/\mu_{00} = M_{20}/M_{00} - \bar{x}^2$$
$$\mu'_{02} = \mu_{02}/\mu_{00} = M_{02}/M_{00} - \bar{y}^2$$
$$\mu'_{11} = \mu_{11}/\mu_{00} = M_{11}/M_{00} - \bar{x}\bar{y}$$

The covariance matrix of the image $I(x, y)$ is

$$\operatorname{cov}[I(x,y)] = \begin{bmatrix} \mu'_{20} & \mu'_{11} \\ \mu'_{11} & \mu'_{02} \end{bmatrix} \qquad (4)$$

The eigenvectors of this matrix correspond to the major and minor axes of the image intensity, so the orientation can be extracted from the angle of the eigenvector associated with the largest eigenvalue.
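This orientation recovery can be demonstrated on a synthetic diagonal blob (a sketch under the same assumed coordinate convention as above; the 45-degree line is a contrived test case):

```python
import numpy as np

# Orientation from the dominant eigenvector of the second-order moment
# covariance matrix of Eq. (4)
img = np.zeros((64, 64))
img[np.arange(10, 50), np.arange(10, 50)] = 1.0   # pixels along a 45-degree line

y, x = np.mgrid[:64, :64]
m00 = img.sum()
xbar, ybar = (x * img).sum() / m00, (y * img).sum() / m00
mu20 = ((x - xbar) ** 2 * img).sum() / m00
mu02 = ((y - ybar) ** 2 * img).sum() / m00
mu11 = ((x - xbar) * (y - ybar) * img).sum() / m00

cov = np.array([[mu20, mu11], [mu11, mu02]])
w, v = np.linalg.eigh(cov)
major = v[:, np.argmax(w)]                         # eigenvector of the largest eigenvalue
angle = np.degrees(np.arctan2(major[1], major[0])) # blob orientation (up to a 180-degree flip)
```

Because an eigenvector's sign is arbitrary, the angle is only defined modulo 180 degrees; for this blob it comes out at 45 degrees.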

For higher-order moments it is common to normalize the moments by dividing by $\mu_{00}$ (the zeroth moment). This yields moments that depend only on the shape and not on the magnitude of $f(x, y)$: the normalized moments are measures which contain information about the shape or distribution (not probability distribution) of the image. Likewise, if an object is not at a fixed distance from a fixed-focal-length camera, the sizes of objects will not be fixed; the size invariance needed in that case can also be achieved by normalizing the moments. This is what makes moments useful for the analysis of shapes in image processing, for which $f(x, y)$ is the image function. These computed moments are usually used as features for shape recognition [17].

B. Eigenvector with the Highest Eigenvalue

An eigenvector of a matrix is a vector such that, when multiplied by the matrix, the result is a scalar multiple of that vector. This scalar is the corresponding eigenvalue of the eigenvector. The relationship is described by the equation M × u = λ × u, where u is an eigenvector of the matrix M and λ is the corresponding eigenvalue. Eigenvectors possess the following properties:

- They can be determined only for square matrices.
- There are n eigenvectors (and corresponding eigenvalues) in an n × n matrix.
- For a symmetric matrix, such as a covariance matrix, the eigenvectors are mutually perpendicular, i.e. at right angles to each other.

The traditional motivation for selecting the eigenvectors with the largest eigenvalues is that the eigenvalues represent the amount of variance along a particular eigenvector. By selecting the eigenvectors with the largest eigenvalues, one selects the dimensions along which the gallery images vary the most. Since the eigenvectors are ordered from high to low by the amount of variance found between images along each eigenvector, the last eigenvectors capture the smallest amounts of variance. Often the assumption is made that noise is associated with the lower eigenvalues, where smaller amounts of variation are found among the images [12].

C. Artificial Neural Network

Artificial Neural Networks (ANNs) are non-linear mapping structures based on the function of the human brain. They are computational structures inspired by processes observed in the brain's natural networks of biological neurons, and consist of simple, highly interconnected computational units called neurons.

ANNs identify and correlate patterns between input data sets and corresponding target values even when the underlying data relationship is unknown. Once trained, they can predict the outcome of new independent input data. A very important feature of ANNs is their adaptive nature, whereby 'learning by example' replaces 'programming' in solving problems. This makes such computational models very appealing in application domains where one has little or incomplete understanding of the problem to be solved but where training data is readily available.

The most widely used learning algorithm for ANNs is the Backpropagation algorithm. There are various types of ANN, such as the Multilayered Perceptron, Radial Basis Function and Kohonen networks. These networks are 'neural' in the sense that they may have been inspired by neuroscience, not necessarily because they are faithful models of biological neural or cognitive phenomena [13, 14].

In this work we use a Multilayer Feed-Forward Network consisting of multiple layers. The architecture of this class of network, besides the input and output layers, also has one or more intermediary layers called hidden layers, whose computational units are known as hidden neurons. The hidden layer performs intermediate computation before directing the input to the output layer. The input layer neurons are linked to the hidden layer neurons, and the weights on these links are referred to as input-hidden layer weights; likewise, the hidden layer neurons are linked to the output layer neurons, and the corresponding weights are referred to as hidden-output layer weights.
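The forward pass through such a network can be sketched as follows (toy layer sizes and random weights, not the 1821/2732/8 configuration used later in the paper; the log-sigmoid activation matches one of the functions the paper evaluates):

```python
import numpy as np

def logsig(z):
    # Log-sigmoid activation, squashing each value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W_ih = rng.normal(size=(4, 6))    # input-hidden layer weights
W_ho = rng.normal(size=(6, 2))    # hidden-output layer weights

x = rng.normal(size=4)            # one input feature vector
h = logsig(x @ W_ih)              # hidden layer activations
y = logsig(h @ W_ho)              # output layer activations, each in (0, 1)
```

Training adjusts `W_ih` and `W_ho` (plus bias terms, omitted here for brevity) so that `y` approaches the binary target vector assigned to each subject.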

Figure 2. Multilayered feed-forward network configuration.

IV. EXPERIMENTAL MODEL

The experimental model can be divided into two phases, namely the training phase and the testing phase. The training phase denotes the training of the faces in the database, while the testing phase involves the recognition of a test image. Figure 3 gives the block diagram of the training phase and Figure 4 depicts the block diagram of the testing phase.

Figure 3. Block Diagram of the training phase.

Figure 4. Block diagram of the testing phase.

Algorithm for training phase

Input: Database face images
Output: Column vector of the extracted features
Begin:
Step 1: Carry out pre-processing for all the images in the database.
Step 2: Segment the eyes, nose and mouth from each of the pre-processed face images of the database.
Step 3: Compute the eigenvector of the covariance, the central moment and the standard deviation of the segmented blocks of Step 2, and store the values in a column vector.
Step 4: Store the results computed in Step 3 for the different face images in different column vectors.
Step 5: Train the designed network with the column vectors of Step 4 as input data, with unique binary vectors as the corresponding targets.
End

Algorithm for testing phase

Input: Face test image
Output: Matched face image from the database
Begin:
Step 1: Carry out pre-processing of the test face image.
Step 2: Segment the eyes, nose and mouth from the pre-processed test image.
Step 3: Compute the eigenvector of the covariance, the central moment and the standard deviation of the segmented blocks of Step 2, and store the values in a column vector.
Step 4: Simulate using the trained network to match against the database face images.
End
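Step 3 of both algorithms can be sketched for a single segment as follows (a hypothetical composition of the three feature types; the exact moment orders and vector layout used by the authors are not specified, so this is illustrative only):

```python
import numpy as np

def block_features(block):
    """Column feature vector for one facial segment: the leading
    eigenvector of the block's covariance, second-order central sums
    per column, and per-column standard deviations, concatenated."""
    cov = np.cov(block)                       # rows as variables: (M, M) covariance
    _, vecs = np.linalg.eigh(cov)
    lead = vecs[:, -1]                        # eigenvector of the largest eigenvalue
    centered = block - block.mean()
    cmoms = (centered ** 2).sum(axis=0)       # central-moment-style column features
    stds = block.std(axis=0)                  # per-column standard deviation
    return np.concatenate([lead, cmoms, stds])

eye = np.random.rand(24, 38)                  # right-eye segment size from Table II
feat = block_features(eye)                    # length 24 + 38 + 38 = 100
```

The column vectors produced this way for each segment (and for the whole face) are stacked to form the network's input, as in Step 4.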

In this work, we have used our own image database consisting of 120 images of 8 individuals. There are 15 images of each individual, captured at different times of day and representing all possible variations of light intensity, image tilt, image size, noise level, illumination, pose and distance from the camera. Figure 5 shows a sample of the acquired database with the above specifications.

Figure 5. Sample images of the acquired database.

The pre-processing stage involved removal of noise (Figure 6), histogram equalization (Figure 7), size normalization and illumination normalization.

Facial feature extraction is a special form of dimensionality reduction. When the input data is too large and suspected to be redundant, it is transformed into a reduced representation set of features (also called a feature vector). Transforming the input data into this set of features is called feature extraction. If the extracted features are carefully chosen, the feature set is expected to capture the relevant information from the input data, so that the desired task can be performed using this reduced representation instead of the full-size input.

The method of global feature extraction carried out in this work is listed in the preceding algorithm. From the pre-processed images, the eyes, nose and mouth are detected to procure the local feature vector, as shown in Figure 9. These segments are used to compute the eigenvector of the covariance, the central moment and the standard deviation.

Figure 9. Samples of the vital features of the face used for training the ANN.

TABLE II. SIZE OF SELECTED FACIAL SECTIONS

| Facial Region | Size in Pixels (M x N) | Size in % of the Full Face (M x N) |
|---|---|---|
| Right Eye | 24 x 38 | 0.14 x 0.25 |
| Left Eye | 24 x 38 | 0.14 x 0.25 |
| Nose | 36 x 40 | 0.21 x 0.26 |
| Mouth | 27 x 57 | 0.16 x 0.38 |

TABLE III. DIMENSIONS OF LOCALLY EXTRACTED FEATURES

| Facial Region | Central Moment | Eigen Vector | Standard Deviation (Row) | Standard Deviation (Col.) | Total Feature Length |
|---|---|---|---|---|---|
| Right Eye | 38 | 24 x 4 | 24 | 38 | 196 |
| Left Eye | 38 | 24 x 4 | 24 | 38 | 196 |
| Nose | 40 | 36 x 4 | 36 | 40 | 260 |
| Mouth | 57 | 27 x 4 | 27 | 57 | 249 |
| Total Length | 173 | 444 | 111 | 173 | 901 |
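The per-region totals in Table III can be reproduced from the segment sizes in Table II; a quick arithmetic check (the column composition is inferred from the table layout and may not match the authors' exact feature ordering):

```python
# Segment sizes (rows, cols) from Table II
regions = {
    "right_eye": (24, 38),
    "left_eye": (24, 38),
    "nose": (36, 40),
    "mouth": (27, 57),
}

def feature_length(rows, cols):
    # central moment (cols) + eigenvectors (rows x 4)
    # + standard deviation over rows (rows) and columns (cols)
    return cols + rows * 4 + rows + cols

total = sum(feature_length(r, c) for r, c in regions.values())
# For the right eye: 38 + 96 + 24 + 38 = 196, and total == 901, as in Table III
```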

V. RESULTS AND PERFORMANCE ANALYSIS

In the present work, the high-performance Backpropagation training algorithm with a variable learning rate is employed for training the network. This algorithm is based on a heuristic technique. The network training algorithm used here (GDMBPAL) updates weight and bias values according to gradient descent with momentum and an adaptive learning rate. We have used a learning rate of 0.7 and a momentum of 0.6. The number of neurons in the hidden layer is fixed at 1.5 times the number of neurons in the input layer.
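The gradient-descent-with-momentum update at the core of the GDMBPAL rule can be sketched on a toy quadratic objective (the adaptive learning-rate logic is omitted, and the target vector is arbitrary; this is an illustration of the update rule, not the paper's network):

```python
import numpy as np

lr, momentum = 0.7, 0.6                  # learning rate and momentum used in the paper
target = np.array([1.0, -2.0, 0.5])      # minimizer of the toy quadratic ||w - target||^2
w = np.zeros(3)
velocity = np.zeros(3)
for _ in range(50):
    grad = 2.0 * (w - target)            # gradient of the toy objective
    velocity = momentum * velocity - lr * grad
    w = w + velocity                     # momentum-smoothed weight update
```

The velocity term accumulates past gradients, which damps oscillations across the error surface; with these settings `w` converges to the target within a few dozen updates.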

Figure 6. Sample of the noise removal process.

Figure 7. Histogram plots of a bright image before and after Histogram Equalization.

The specifications of the neural network used for the training phase are tabulated in Table IV. The input layer consists of the global as well as the local features of the database image.

We carried out training using both the log-sigmoid and the tan-sigmoid activation functions in the network, and also tested the effect of the different activation functions, with different numbers of iterations, on the convergence of the Mean Squared Error (MSE). The convergence of the MSE for different numbers of iterations and different activation functions of the hidden and output layers is tabulated in Table V.

From Table V, we infer that the MSE converges best with the log-sigmoid activation function at 1500 iterations; convergence degrades when the number of iterations is increased further.

TABLE IV. SPECIFICATIONS OF THE NEURAL NETWORK

Type: Feed Forward Backpropagation Network

| Parameters | Specifications |
|---|---|
| Number of Layers | 3 (Input Layer, Hidden Layer, Output Layer) |
| Number of Input Units | 1 Feature Matrix |
| Number of Output Units | 1 Binary Encoded Vector |
| Number of Neurons in the Input Layer | 1821 |
| Number of Neurons in the Hidden Layer | 1821 × 1.5 ≈ 2732 |
| Number of Neurons in the Output Layer | 8 |
| Number of Iterations | 1000, 1500, 2000 |
| Number of Validation Checks | 6 |
| Learning Rate | 0.7 |
| Momentum | 0.6 |
| Activation Functions | Log-Sigmoid and Tan-Sigmoid |

TABLE V. CONVERGENCE OF MSE FOR DIFFERENT NUMBERS OF ITERATIONS AND DIFFERENT ACTIVATION FUNCTIONS

| Iterations | Activation Function of Hidden Layer | Activation Function of Output Layer | MSE |
|---|---|---|---|
| 1000 | Tan-sigmoid | Tan-sigmoid | 1 × 10⁻⁴ |
| 1500 | Tan-sigmoid | Tan-sigmoid | 1.2 × 10⁻⁴ |
| 2000 | Tan-sigmoid | Tan-sigmoid | 1.4 × 10⁻⁴ |
| 1000 | Log-sigmoid | Log-sigmoid | 1 × 10⁻⁷ |
| 1500 | Log-sigmoid | Log-sigmoid | 1 × 10⁻¹² |
| 2000 | Log-sigmoid | Log-sigmoid | 1 × 10⁻⁶ |

We also studied the effect of Gaussian noise of different SNR levels on the efficiency of our face recognition system. We find that if the noise is added before the pre-processing phase, the system's Correct Recognition Rate (CRR) is not affected much. However, if the image is affected by Gaussian noise after the pre-processing phase, the system's CRR is adversely affected.

When the face is affected after the pre-processing phase by Gaussian noise of SNR 25 dB and above, the system has a CRR of 100%, while the CRR falls for SNR below 25 dB. The other observations are tabulated in Table VI.

TABLE VI. EFFECT OF GAUSSIAN NOISE ON THE CORRECT RECOGNITION RATE (CRR) OF THE PROPOSED SYSTEM

| AWGN SNR (dB) | CRR (%), Noise Present before Pre-processing | CRR (%), Noise Present after Pre-processing |
|---|---|---|
| 25 | 100 | 100 |
| 22 | 100 | 81.25 |
| 20 | 100 | 62.52 |
| 18 | 98.74 | 43.75 |
| 16 | 96.47 | 37.50 |
| 15 | 89.56 | 12.51 |
| 14 | 86.25 | 6.25 |

VI. CONCLUSION

In this paper, face recognition based on an ANN is proposed. An ANN with the Backpropagation algorithm is found to be an efficient method for recognising faces. The proposed feature vectors are useful for proper recognition of human faces, and the log-sigmoid activation function in the hidden layer of the neural network gives the better convergence of the MSE. Further, the system's efficiency is reduced if the image is affected by Gaussian noise after the pre-processing phase.

ACKNOWLEDGMENT

The authors would like to thank the staff and management of DBCET for their support and encouragement in the completion of this work. Our sincere thanks to Ms. Jhimli Kumari Das, HoD of Electronics and Communication Engineering, for her efforts in initiating us into this work. We place on record our deepest gratitude to Mr. Kaustubh Bhattacharyya, our guide, for his scholarly guidance and masterly expertise throughout this work.

REFERENCES

[1] M. Faundez-Zanuy, "Biometric security technology," Encyclopedia of Artificial Intelligence, 2000, pp. 262-264.

[2] K. Ramesha and K. B. Raja, "Dual transform based feature extraction for face recognition," International Journal of Computer Science Issues, 2011, vol. VIII, no. 5, pp. 115-120.

[3] Khashman, "Intelligent face recognition: local versus global pattern averaging," Lecture Notes in Artificial Intelligence, 4304, Springer-Verlag, 2006, pp. 956-961.

[4] Abbas, M. I. Khalil, S. Abdel-Hay and H. M. Fahmy, "Expression and illumination invariant preprocessing technique for face recognition," Proceedings of the International Conference on Computer Engineering and Systems, 2008, pp. 59-64.

[5] K. Ramesha, K. B. Raja, K. R. Venugopal and L. M. Patnaik, "Feature extraction based face recognition, gender and age classification," International Journal on Computer Science and Engineering, 2010, vol. II, no. 01S, pp. 14-23.

[6] S. Ranawade, "Face recognition and verification using artificial neural network," International Journal of Computer Applications, 2010, vol. I, no. 14, pp. 21-25.

[7] A. Montillo and H. Ling, "Age regression from faces using random forests," Proceedings of the IEEE International Conference on Image Processing, 2009, pp. 2465-2468.

[8] H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," Journal of Computer Vision, vol. XIV, 1995, pp. 5-24.

[9] M. Turk and A. P. Pentland, "Face recognition using Eigenfaces," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1991, pp. 586-591.

[10] P. Belhumeur, J. Hespanha and D. Kriegman, "Eigenfaces versus Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 1997, vol. XIX, no. 7, pp. 711-720.

[11] P. Niyogi, "Locality preserving projections," Proceedings of the Conference on Advances in Neural Information Processing Systems, 2003.

[12] K. Ramesha and K. B. Raja, "Dual transform based feature extraction for face recognition," International Journal of Computer Science Issues, 2011, vol. VIII, no. 5, pp. 115-120.

[13] J. Weng, N. Ahuja and T. S. Huang, "Learning recognition and segmentation of 3D objects from 2D images," Proceedings of the International Conference on Computer Vision, 1993, pp. 121-128.

[14] S. A. Hameed Al_azawi, "Eyes recognition system using central moment features," Engineering & Technology Journal, 2011, vol. 29, no. 7, pp. 1400-1407.

[15] K. Bonsor, "How facial recognition systems work," http://computer.howstuffworks.com/facial-recognition.htm.

[16] M. K. Hu, "Visual pattern recognition by moment invariants," IRE Transactions on Information Theory, 1962, pp. 179-187.

[17] B. Bailey, "Moments in image processing," http://www.csie.ntnu.edu.tw/~bbailey/Moments%20in%20IP.htm, Nov. 2002.

[18] P. Belhumeur, J. Hespanha and D. Kriegman, "Eigenfaces versus Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 1997, vol. XIX, no. 7, pp. 711-720.

[19] J. Anderson, An Introduction to Neural Networks, Prentice Hall, 2003.

[20] S. Haykin, Fundamentals of Neural Networks, Pearson Education, 2003.

[1] Marcos Faundez-Zanuy, “Biometric security technology,”

Encyclopedia of Artificial Intelligence, 2000, pp. 262-264.

[2] K. Ramesha and K. B. Raja, “Dual transform based feature

extraction for face recognition,” International Journal of Computer

Science Issues, 2011, vol.VIII, no. 5, pp. 115-120.

A face recognition system recognizes an individual

by matching the input image against images of all

users in a database and finding the best match.

[3] Khashman, “Intelligent face recognition: local versus global

pattern averaging”, Lecture Notes in Artificial Intelligence, 4304,

Springer-Verlag, 2006, pp. 956 – 961.

[4] Abbas, M. I. Khalil, S. Abdel-Hay and H. M. Fahmy,

“Expression and illumination invariant preprocessing technique

for face recognition,” Proceedings of the International Conference

on Computer Engineering and System, 2008, pp. 59-64.

[5] K. Ramesha , K. B. Raja, K. R. Venugopal and L. M. Patnaik,

“Feature extraction based face recognition, gender and age

classification,” International Journal on Computer Science and

Engineering, 2010, vol. II, no. 01S, pp. 14-23.

[6] S. Ranawade, “Face recognition and verification using

artificial neural network,” International Journal of Computer

Applications, 2010, vol. I, no. 14, pp. 21-25.

[7] Albert Montillo and Haibin Ling, “Age regression from faces

using random forests,” Proceedings of the IEEE International

Conference on Image Processing, 2009, pp. 2465-2468.

[8] H. Murase and S. K. Nayar, “Visual learning and recognition

of 3-D objects from appearance,” Journal of Computer Vision,

vol. XIV, 1995, pp. 5-24.

[9] M. Turk and A. P. Pentland, “Face recognition using

Eigenfaces,” Proceedings of IEEE Conference on Computer

Vision and Pattern Recognition, 1991, pp. 586-591.

[10] Peter Belhumeur, J. Hespanha and David Kriegman,

“Eigenfaces versus Fisherfaces: Recognition using class specific

linear projection,” IEEE Transactions on Pattern Analysis and

Machine Intelligence (PAMI), 1997, vol. XIX, no. 7, pp.711-720.

[11] P. Niyogi, “Locality preserving projections,” Proceedings of

Conference on Advances in Neural Information Processing

Systems, 2003.

[12] K. Ramesha and K. B. Raja, “Dual Transform based Feature

Extraction for Face Recognition,” International Journal of

Computer Science Issues, 2011, vol.VIII, no. 5, pp. 115-120.

[13] J. Weng, N. Ahuja, and T. S. Huang, “Learning recognition

and segmentation of 3D objects from 2D images,” Proceedings of

the International Conference on Computer Vision, 1993, pp 121–

128.

[14] Peter Belhumeur, J. Hespanha and David Kriegman,

“Eigenfaces versus Fisherfaces: Recognition using class specific

linear projection,” IEEE Transactions on Pattern Analysis and

Machine Intelligence (PAMI), 1997, vol. XIX, no. 7, pp.711-720.

[15] J Anderson, An Introduction to Neural Networks, Prentice

Hall, 2003

[16] Simon Haykin, Fundamentals of Neural Networks, Pearson

Education, 2003.

M. K. Hu, “Visual pattern recognition by moment invariants”, IRE

transactions on Information Theory, 1962, pp. 179–187.

Sundos A. Hameed Al_azawi, Eyes Recognition System Using

Central Moment Features, Eng. & Tech. Journal, 2011, vol. 29,

no. 7.

If an object is not at a fixed distance from a fixed

focal length camera, then the sizes of objects will not

be fixed. In this case size invariance is needed. This

can be achieved by normalizing the moments.

The purpose of feature extraction is to extract the feature vectors, the information which represents the face, while reducing computation time and memory storage. One commonly employed technique represents the image by a vector in a space whose dimensionality is similar to the image size [8]. However, such a large dimensional space reduces the speed and robustness of face recognition. This problem is overcome rather effectively by dimensionality reduction techniques such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA).

PCA is an eigenvector method designed to model linear variation in high-dimensional data. It performs dimensionality reduction by projecting the original n-dimensional data onto a k (k << n) dimensional linear subspace spanned by the leading eigenvectors of the data's covariance matrix [9]. While PCA uses an orthogonal linear space for encoding information, LDA is a supervised learning algorithm that encodes the information in a linearly separable space whose bases are not necessarily orthogonal. Experiments carried out by researchers thus far point to the superiority of algorithms based on LDA over those based on PCA.

In the Local Directional Pattern (LDP) approach, features are obtained by computing the edge response values in all eight directions at each pixel position and generating a code from the relative strength magnitudes; each face is then represented as a collection of LDP codes for the face recognition process [10].

Another face analysis technique is Locality Preserving Projections (LPP). It consists in obtaining a face subspace and finding the local structure of the manifold. Basically, it is obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the manifold. Although LPP is a linear technique, it recovers important aspects of the intrinsic nonlinear manifold structure by preserving local structure [11].

Ramesha K and K B Raja proposed Dual Transform Based Feature Extraction for Face Recognition (DTBFEFR), in which the Dual Tree Complex Wavelet Transform (DT-CWT) is employed to form the feature vector, and Euclidean Distance (ED), Random Forest (RF) and Support Vector Machine (SVM) are used as the classifiers [12]. Weng and Huang presented a face recognition model based on a hierarchical neural network which is grown automatically rather than trained with gradient descent; good results for the discrimination of ten distinctive subjects are reported [13].

This paper presents a face recognition method using both the geometrical features of the biometrical characteristics of the face, such as the eyes, nose and mouth, and an overall analysis of the whole face. In this work, the central moment, the eigenvector and the standard deviation of the eyes, nose and mouth are computed as the training features for the local feature extraction, while the standard deviation and the eigenvector of the covariance of the whole face are assessed as the global features. These specially designed features are then used as decision support entities of a classifier system configured using a Generalized Feed-Forward Artificial Neural Network (GFFANN), which provides a decision in the testing phase with an accuracy of over 95%.

III. LOCAL AND GLOBAL FACE FEATURE EXTRACTION WITH MARKED DIMENSIONALITY REDUCTION

Local facial feature extraction consists in localizing the most characteristic face components (eyes, nose, mouth, etc.) within images that depict human faces. After the pre-processing stage, segments of the eyes, nose and mouth are extracted from the faces of the database. These blocks are then resized and the training features are computed. These facial features reduce the dimensionality by gathering the essential information while removing the redundancies present in the segment. Global feature extraction consists in considering the face as a single whole entity and then extracting the predetermined vital features of the face; besides the local features, the global features of the total image are also computed.

A. Central Moment

Central moments find their application in the recognition of shape features which are independent of parameters that cannot be controlled in an image. Such features are called invariant features, and there are several types of invariance. For example, if an object may occur in an arbitrary location in an image, one needs the moments to be invariant to location; for binary connected components, this can be achieved simply by using the central moments.

In image processing, computer vision and related fields, an image moment is a certain particular weighted average (moment) of the image pixels' intensities, or a function of such moments, usually chosen to have some attractive property or interpretation. Image moments are useful to describe objects after segmentation. Simple properties of the image which are found via image moments include its area (or total intensity), its centroid,
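The PCA projection described above can be sketched in a few lines of NumPy. The data below is synthetic and the subspace size k = 4 is an arbitrary illustrative choice, not a value from the paper.

```python
import numpy as np

# PCA dimensionality reduction: project n-dimensional samples onto the
# k leading eigenvectors of the data's covariance matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))        # 100 samples, n = 16 dimensions (synthetic)
X_centered = X - X.mean(axis=0)

# Covariance matrix of the centered data (n x n).
C = np.cov(X_centered, rowvar=False)

# eigh returns eigenvalues of a symmetric matrix in ascending order.
eigvals, eigvecs = np.linalg.eigh(C)

# Keep the k eigenvectors with the largest eigenvalues.
k = 4
W = eigvecs[:, -k:]                   # n x k projection matrix

# Project onto the k-dimensional linear subspace.
Y = X_centered @ W
print(Y.shape)                        # (100, 4)
```

Selecting the last k columns of `eigvecs` works because `eigh` orders the eigenvalues from smallest to largest, so those columns span the directions of greatest variance.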

and information about its orientation [15].

If f(x, y) is the image function, the central moments are mathematically defined as [16]

    μpq = ∫∫ (x − x̄)^p (y − ȳ)^q f(x, y) dx dy

where x̄ = M10/M00 and ȳ = M01/M00 are the components of the centroid, Mpq being the raw moments. If f(x, y) is a digital image, the previous equation becomes

    μpq = Σx Σy (x − x̄)^p (y − ȳ)^q f(x, y)        (2)

The central moments of order up to 3 are:

    μ00 = M00
    μ01 = 0
    μ10 = 0
    μ11 = M11 − x̄M01
    μ20 = M20 − x̄M10
    μ02 = M02 − ȳM01
    μ21 = M21 − 2x̄M11 − ȳM20 + 2x̄²M01
    μ12 = M12 − 2ȳM11 − x̄M02 + 2ȳ²M10
    μ30 = M30 − 3x̄M20 + 2x̄²M10
    μ03 = M03 − 3ȳM02 + 2ȳ²M01

More generally, it can be shown that μpq = Σ(m=0..p) Σ(n=0..q) C(p,m) C(q,n) (−x̄)^(p−m) (−ȳ)^(q−n) Mmn, where C(·,·) denotes the binomial coefficient.

Central moments are translation invariant. This allows one to compute moments which depend only on the shape of f(x, y) and not on its position, which is what makes moments useful for the analysis of shapes in image processing. For higher order moments it is common to normalize the moments by dividing by μ00; the normalized moments give measures which contain information about the shape or distribution (not a probability distribution) of f(x, y). These computed moments are usually used as features for shape recognition [17].

Information about image orientation can be derived by first using the second order central moments to construct a covariance matrix. With

    μ'20 = μ20/μ00,   μ'02 = μ02/μ00,   μ'11 = μ11/μ00

the covariance matrix of the image I(x, y) is

    cov[I(x, y)] = | μ'20  μ'11 |
                   | μ'11  μ'02 |                    (3)

The eigenvectors of this matrix correspond to the major and minor axes of the image intensity, so the orientation can be extracted from the angle of the eigenvector associated with the largest eigenvalue:

    θ = (1/2) arctan( 2μ'11 / (μ'20 − μ'02) )        (4)

B. Eigenvector with Highest Eigenvalue

An eigenvector of a matrix is a vector such that, if multiplied with the matrix, the result is always a scalar multiple of that vector. This scalar is the corresponding eigenvalue of the eigenvector. The relationship can be described by the equation

    M × u = λ × u

where u is an eigenvector of the matrix M and λ is the corresponding eigenvalue. Eigenvectors possess the following properties: they can be determined only for square matrices; there are n eigenvectors (and corresponding eigenvalues) for an n × n matrix; and, for a symmetric matrix such as the covariance matrix, all eigenvectors are perpendicular, i.e. at right angles to each other.

The traditional motivation for selecting the eigenvectors with the largest eigenvalues is that the eigenvalues represent the amount of variance along a particular eigenvector. By selecting the eigenvectors with the largest eigenvalues, one selects the dimensions along which the gallery images vary the most. Since the eigenvectors are ordered high to low by the amount of variance found between the images along each eigenvector, the last eigenvectors find the smallest amounts of variance; often the assumption is made that noise is associated with the lower eigenvalues, where smaller amounts of variation are found among the images [12].

C. Artificial Neural Network

Artificial Neural Networks (ANNs) are non-linear mapping structures based on the function of the human brain. They are computational structures inspired by observed processes in natural networks of biological neurons, and they consist of simple computational units called neurons which are highly interconnected. ANNs identify and correlate patterns between input data sets and corresponding target values even when the underlying data relationship is unknown.
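These properties can be checked numerically. The following NumPy sketch (a synthetic binary block standing in for a segmented image, illustrative only) computes central moments, verifies their translation invariance, and extracts the orientation angle of Eq. (4).

```python
import numpy as np

def raw_moment(f, p, q):
    """Raw image moment M_pq = sum over pixels of x^p * y^q * f(x, y)."""
    y, x = np.mgrid[:f.shape[0], :f.shape[1]]
    return (x ** p * y ** q * f).sum()

def central_moment(f, p, q):
    """Central moment mu_pq = sum of (x - xbar)^p (y - ybar)^q f(x, y)."""
    m00 = raw_moment(f, 0, 0)
    xbar = raw_moment(f, 1, 0) / m00
    ybar = raw_moment(f, 0, 1) / m00
    y, x = np.mgrid[:f.shape[0], :f.shape[1]]
    return ((x - xbar) ** p * (y - ybar) ** q * f).sum()

img = np.zeros((32, 32))
img[8:14, 4:24] = 1.0                  # a bright block, elongated along x

# Translation invariance: shifting the object leaves mu_20 unchanged.
shifted = np.roll(img, (5, 3), axis=(0, 1))
mu20 = central_moment(img, 2, 0)
assert np.isclose(mu20, central_moment(shifted, 2, 0))

# Orientation from the second order central moments, Eq. (4).
m00 = raw_moment(img, 0, 0)
mu_p20 = central_moment(img, 2, 0) / m00
mu_p02 = central_moment(img, 0, 2) / m00
mu_p11 = central_moment(img, 1, 1) / m00
theta = 0.5 * np.arctan2(2 * mu_p11, mu_p20 - mu_p02)
# For this axis-aligned block the major axis is horizontal, so theta is ~0.
```

Using `arctan2` instead of a plain `arctan` keeps the angle well defined when μ'20 = μ'02.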

Once trained, ANNs can predict the outcome of new independent input data. A very important feature of ANNs is their adaptive nature, where "learning by example" replaces "programming" in solving problems. This feature makes such computational models very appealing in application domains where one has little or an incomplete understanding of the problem to be solved, but where training data is readily available. These networks are "neural" in the sense that they may have been inspired by neuroscience, not necessarily because they are faithful models of biological neural or cognitive phenomena [13, 14].

There are various types of ANNs, such as the Multilayered Perceptron, Radial Basis Function and Kohonen networks, and the most widely used learning algorithm for an ANN is the Backpropagation algorithm. In this work we use a Multilayer Feed-Forward Network consisting of multiple layers. The architecture of this class of network, besides having the input and the output layers, also has one or more intermediary layers called hidden layers; the computational units of a hidden layer are known as hidden neurons. The hidden layer does intermediate computation before directing the input to the output layer. The input layer neurons are linked to the hidden layer neurons, and the weights on these links are referred to as input-hidden layer weights; likewise, the hidden layer neurons are linked to the output layer neurons, and the corresponding weights are referred to as hidden-output layer weights.

Figure 4. Multilayered feed-forward network configuration.

IV. EXPERIMENTAL MODEL

The experimental model can be divided into two phases, namely the training phase and the testing phase. The training phase denotes the training of the faces of the database, while the testing phase involves the recognition of a test image. Figure 2 gives the block diagram of the training phase and Figure 3 depicts the block diagram of the testing phase.

Figure 2. Block diagram of the training phase (Database Image → Pre-processing → Feature Extraction → Averaging the Features → ANN → Training Result).

Figure 3. Block diagram of the testing phase (Test Image → Pre-processing → Feature Extraction → ANN → Test Result).

Algorithm for the training phase
Input: Database face images
Output: Column vector of the extracted features
Begin:
  Step 1: Carry out pre-processing for all the images of the database.
  Step 2: Segment the eyes, nose and mouth from each of the pre-processed face images of the database.
  Step 3: Compute the eigenvector of the covariance, the central moment and the standard deviation of the segmented blocks of Step 2. Store the values in a column vector.
  Step 4: Store the results computed in Step 3 for the different face images in different column vectors.
  Step 5: Train the designed network with the column vectors of Step 4 as the input data, with unique binary vectors as the corresponding targets.
End

Algorithm for the testing phase
Input: Face test image
Output: Matched face image from the database
Begin:
  Step 1: Carry out pre-processing of the test face image.
  Step 2: Segment the eyes, nose and mouth from the pre-processed test image.
  Step 3: Compute the eigenvector of the covariance, the central moment and the standard deviation of the segmented blocks of Step 2. Store the values in a column vector.
  Step 4: Simulate the trained network with this column vector to match it against the database face images.
End
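The training step above can be sketched in NumPy as a small feed-forward network trained by backpropagation with gradient descent and momentum. The tiny layer sizes and the random data are illustrative assumptions only (the paper's actual network has 1821 inputs), and the adaptive learning rate part of the paper's GDMBPAL algorithm is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy network: 8 inputs -> 12 hidden -> 4 outputs (illustrative sizes).
n_in, n_hidden, n_out = 8, 12, 4
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
vW1 = np.zeros_like(W1)               # momentum terms
vW2 = np.zeros_like(W2)

def logsig(z):
    """Log-sigmoid activation, as used in the paper's hidden layer."""
    return 1.0 / (1.0 + np.exp(-z))

# Toy training set: feature vectors with unique binary (one-hot) targets.
X = rng.normal(size=(20, n_in))
T = np.eye(n_out)[rng.integers(0, n_out, size=20)]

lr, momentum = 0.7, 0.6               # learning rate and momentum from the paper
for _ in range(2000):
    # Forward pass.
    H = logsig(X @ W1)
    Y = logsig(H @ W2)
    # Backward pass for the mean squared error.
    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    # Gradient descent with momentum.
    vW2 = momentum * vW2 - lr * H.T @ dY / len(X)
    vW1 = momentum * vW1 - lr * X.T @ dH / len(X)
    W2 += vW2
    W1 += vW1

mse = np.mean((logsig(logsig(X @ W1) @ W2) - T) ** 2)
```

In the testing phase, the recognized identity is simply the index of the largest output neuron for the test feature vector.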

V. RESULTS AND PERFORMANCE ANALYSIS

In the present work, we have used our own image database consisting of 120 images of 8 individuals. There are 15 images of each individual, captured at different instances of the day and representing all possible variations of light intensity, illumination, noise level, image size, image tilt, pose and distance from the camera. Figure 5 shows a sample of the acquired database with the above specifications.

Figure 5. Sample images of the acquired database.

The pre-processing stage involved removal of noise (Figure 6), histogram equalization (Figure 7), size normalization and illumination normalization.

Figure 6. Sample of the noise removal process.

Figure 7. Histogram plots of a bright image before and after histogram equalization.

Transforming the input data into a set of features is called feature extraction. Facial feature extraction is a special form of dimensionality reduction: when the input data is too large and is suspected to be redundant, it is transformed into a reduced representation set of features (also named a feature vector). If the extracted features are carefully chosen, it is expected that the feature set will extract the relevant information from the input data in order to perform the desired task using this reduced representation instead of the full-size input.

From the pre-processed images, the eyes, nose and mouth are detected to procure the local feature vector, as shown in Figure 9. These segments are used to compute the eigenvector of the covariance, the central moment and the standard deviation. The sizes of the selected facial sections are given in Table II, and the dimensions of the locally extracted features in Table III. For the global features, the standard deviation and the eigenvector of the covariance of the whole face are computed in the same manner.

Figure 9. Samples of the vital features of the face used for training the ANN.

TABLE II. SIZE OF SELECTED FACIAL SECTIONS

Facial Region | Size in Pixels (M x N) | Size in % of the Full Face (M x N)
Right Eye     | 24 x 38               | 0.14 x 0.25
Left Eye      | 24 x 38               | 0.14 x 0.25
Nose          | 36 x 40               | 0.21 x 0.26
Mouth         | 27 x 57               | 0.16 x 0.38

TABLE III. DIMENSIONS OF LOCALLY EXTRACTED FEATURES

Facial Region | Central Moment | Eigen Vector   | Std. Dev. (Row) | Std. Dev. (Col) | Total Feature Length
Right Eye     | 38             | 24 x 4 = 96    | 24              | 38              | 196
Left Eye      | 38             | 24 x 4 = 96    | 24              | 38              | 196
Nose          | 40             | 36 x 4 = 144   | 36              | 40              | 260
Mouth         | 57             | 27 x 4 = 108   | 27              | 57              | 249
Total         | 173            | 444            | 111             | 173             | 901

The number of neurons in the hidden layer is fixed at 1.5 times the number of neurons in the input layer. The high performance Backpropagation training algorithm with a variable learning rate is employed for training the network. The training algorithm used here (GDMBPAL) updates the weight and bias values according to gradient descent with momentum and an adaptive learning rate, and is based on a heuristic technique. We have used a learning rate of 0.7 and a momentum of 0.6.
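Assembling the local feature vector for one segment can be sketched as follows. The paper does not spell out the exact per-column layout of Table III, so the decomposition used here (column-wise central moments, the 4 leading eigenvectors of the segment's covariance, and row and column standard deviations) is one plausible reading that reproduces the 196-value total for an eye block.

```python
import numpy as np

rng = np.random.default_rng(3)
block = rng.random((24, 38))          # stand-in for a segmented 24 x 38 eye block

# Column-wise second order central moments (one value per column; assumed layout).
col_means = block.mean(axis=0)
central_moments = ((block - col_means) ** 2).mean(axis=0)        # length 38

# Four leading eigenvectors of the 24 x 24 covariance of the block's rows.
cov = np.cov(block)                   # rows as variables, columns as observations
eigvals, eigvecs = np.linalg.eigh(cov)
leading = eigvecs[:, -4:]             # eigenvectors with the largest eigenvalues
eigen_features = leading.ravel()      # 24 x 4 = 96 values

# Standard deviations along rows and along columns.
row_std = block.std(axis=1)           # length 24
col_std = block.std(axis=0)           # length 38

feature_vector = np.concatenate(
    [central_moments, eigen_features, row_std, col_std]
)
# 38 + 96 + 24 + 38 = 196, matching the Right Eye row of Table III.
```

The four per-region vectors built this way concatenate to the 901-value local feature block listed in the Total row of Table III.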

The specifications of the Neural Network used for the training phase are tabulated in Table IV.

TABLE IV. SPECIFICATIONS OF THE NEURAL NETWORK (TYPE: FEED-FORWARD BACKPROPAGATION NETWORK)

Parameter                              | Specification
Number of Layers                       | 3 (Input Layer, Hidden Layer, Output Layer)
Number of Input Units                  | 1 Feature Matrix
Number of Output Units                 | 1 Binary Encoded Vector
Number of Neurons in the Input Layer   | 1821
Number of Neurons in the Hidden Layer  | 1821 x 1.5 = 2732
Number of Neurons in the Output Layer  | 8
Number of Iterations                   | 1000, 1500, 2000
Number of Validation Checks            | 6
Learning Rate                          | 0.7
Momentum                               | 0.6
Activation Functions                   | Log-Sigmoid and Tan-Sigmoid

We have carried out the training using both the log-sigmoid and the tan-sigmoid activation functions in the network. The convergence of the Mean Squared Error (MSE) for different numbers of iterations and for different activation functions of the hidden and output layers is tabulated in Table V. From Table V we make the inference that the convergence of the MSE is best for the log-sigmoid activation function with 1500 iterations. Further, the convergence of the MSE degrades if the number of iterations is made very high.

TABLE V. CONVERGENCE OF MSE FOR DIFFERENT NUMBERS OF ITERATIONS AND DIFFERENT ACTIVATION FUNCTIONS

Hidden Layers | Output Layer | MSE (1000 iter.) | MSE (1500 iter.) | MSE (2000 iter.)
Tan-sigmoid   | Tan-sigmoid  | 1x10^-4          | 1.2x10^-4        | 1.4x10^-4
Log-sigmoid   | Log-sigmoid  | 1x10^-7          | 1x10^-12         | 1x10^-6

We also studied the effect of Gaussian noise of different SNR levels on the efficiency of our face recognition system; the observations are tabulated in Table VI. We find that if the noise is added before the pre-processing phase, the system's Correct Recognition Rate (CRR) is not affected much. However, if the image is affected by Gaussian noise after the pre-processing phase, the system's CRR is adversely affected: for an SNR of 25 dB and above the system still has a CRR of 100%, while the CRR reduces for SNRs below 25 dB.

TABLE VI. EFFECT OF GAUSSIAN NOISE ON THE CORRECT RECOGNITION RATE (CRR) OF THE PROPOSED SYSTEM

AWGN SNR (dB) | CRR (%), Noise before Pre-processing | CRR (%), Noise after Pre-processing
25            | 100                                  | 100
22            | 100                                  | 81.25
20            | 100                                  | 62.51
18            | 98.47                                | 43.56
16            | 96.75                                | 37.25
15            | 89.52                                | 12.74
14            | 86.25                                | 06.50

VI. CONCLUSION

In this paper, face recognition based on an ANN is proposed. An ANN with the Backpropagation algorithm is found to be an efficient method for recognising faces. It is observed that the proposed feature vectors are useful for the proper recognition of human faces, and that the log-sigmoid activation function in the hidden layer of the Neural Network gives the better convergence of the MSE. Further, the system's efficiency is reduced if the image is affected by Gaussian noise after the pre-processing phase.

ACKNOWLEDGMENT

The authors would like to thank the staff and management of DBCET for their support and encouragement in the completion of this work. We place on record our deepest gratitude to Mr. Kaustubh Bhattacharyya, the guide, for his scholarly guidance and masterly expertise all through this work. Our sincere thanks to Ms. Jhimli Kumari Das, HoD of Electronics and Communication Engineering, for her efforts in initiating us into this work.

REFERENCES

[1] Marcos Faundez-Zanuy, “Biometric security technology,” Encyclopedia of Artificial Intelligence, pp. 262-264.
[2] K. Ramesha and K. B. Raja, “Dual transform based feature extraction for face recognition,” International Journal of Computer Science Issues, pp. 115-120, 2011.
[3] A. Khashman, “Intelligent face recognition: local versus global pattern averaging,” Lecture Notes in Artificial Intelligence, vol. 4304, Springer-Verlag, pp. 956-961, 2006.
[4] M. Abbas, M. I. Khalil, S. Abdel-Hay and H. M. Fahmy, “Expression and illumination invariant preprocessing technique for face recognition,” Proceedings of the International Conference on Computer Engineering and Systems, 2008.
[5] K. Ramesha, K. B. Raja, K. R. Venugopal and L. M. Patnaik, “Feature extraction based face recognition, gender and age classification,” International Journal on Computer Science and Engineering, vol. 02, no. 01S, pp. 14-23, 2010.
[6] S. S. Ranawade, “Face recognition and verification using artificial neural network,” International Journal of Computer Applications, pp. 21-25, 2010.
[7] Albert Montillo and Haibin Ling, “Age regression from faces using random forests,” Proceedings of the IEEE International Conference on Image Processing, pp. 2465-2468, 2009.
[8] H. Murase and S. Nayar, “Visual learning and recognition of 3-D objects from appearance,” International Journal of Computer Vision, vol. 14, pp. 5-24, 1995.
[9] M. Turk and A. Pentland, “Face recognition using Eigenfaces,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586-591, 1991.
[10] Peter Belhumeur, J. Hespanha and David Kriegman, “Eigenfaces versus Fisherfaces: Recognition using class specific linear projection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, 1997.
[11] X. He and P. Niyogi, “Locality preserving projections,” Proceedings of the Conference on Advances in Neural Information Processing Systems, 2003.
[12] K. Ramesha and K. B. Raja, “Dual transform based feature extraction for face recognition,” International Journal of Computer Science Issues, 2011.
[13] J. Weng, N. Ahuja and T. S. Huang, “Learning recognition and segmentation of 3D objects from 2D images,” Proceedings of the International Conference on Computer Vision, pp. 121-128, 1993.
[14] L. Fausett, Fundamentals of Neural Networks, Pearson Education.
[15] J. A. Anderson, An Introduction to Neural Networks, MIT Press, 1995.
[16] Simon Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall.
[17] M. K. Hu, “Visual pattern recognition by moment invariants,” IRE Transactions on Information Theory, vol. 8, pp. 179-187, 1962.
[18] Sundos A. Hameed Al_azawi, “Eyes Recognition System Using Central Moment Features,” Engineering & Technology Journal, vol. 29, 2011.
[19] Bob Bailey, “Moments in image processing,” http://www.csie.ntnu.edu.tw/~bbailey/Moments%20in%20IP.htm, Nov. 2002.
[20] K. Bonsor, “How Facial Recognition Systems Work,” http://computer.howstuffworks.com/facial-recognition.htm.
