

Final Year Project Overview

Group Members:

Muhammad Aqib Jamil (11-ENC-72)
Khawar Iqbal (11-ENC-89)
Farhan Nasir (11-ENC-73)

Project Statement:
Recognition of Human Iris Patterns for Biometric Identification + Speech Detection &
Noise Extraction by Filtering (for classroom use).
The project basically consists of two parts: biometric identification and speech extraction. A
biometric system provides automatic identification of an individual based on a unique feature or
characteristic possessed by that individual. Iris recognition is regarded as the most reliable and
accurate biometric identification system available.
In the second part of the project, the teacher's speech is extracted from the whole recorded
lecture (using appropriate filters) and played back with the recorded video of the teacher. In this
process, noise estimation and reduction is a very challenging problem. In addition, noise
characteristics may vary in time, so it is very difficult to develop a versatile algorithm that works
in diversified environments. The following steps help to work in a generalized transform domain:
(i) reformulate the noise reduction problem in a more generalized transform domain, where any
unitary matrix can serve as the transform, and (ii) design different optimal and suboptimal filters
in the generalized transform domain.
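The two steps above can be illustrated with the discrete Fourier transform serving as the unitary matrix. The following is a minimal spectral-subtraction sketch in Python, not the project's final design; the frame length and the single-frame noise estimate are illustrative assumptions:

```python
import numpy as np

def spectral_subtract(noisy, noise_sample, frame_len=256):
    """Toy transform-domain noise reduction: subtract an estimated noise
    magnitude spectrum from each frame of the noisy signal. The FFT plays
    the role of the unitary transform; frame_len is an assumed parameter."""
    out = np.zeros_like(noisy, dtype=float)
    # Noise magnitude estimated from a single noise-only frame (assumption)
    noise_mag = np.abs(np.fft.rfft(noise_sample[:frame_len]))
    for start in range(0, len(noisy) - frame_len + 1, frame_len):
        frame = noisy[start:start + frame_len]
        spec = np.fft.rfft(frame)
        # Subtract the noise magnitude, flooring at zero, keep the noisy phase
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        out[start:start + frame_len] = np.fft.irfft(
            mag * np.exp(1j * np.angle(spec)), n=frame_len)
    return out
```

A suboptimal filter in the sense of step (ii) would replace the hard subtraction with a gain function (e.g. Wiener-style weighting) in the same transform domain.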

Iris Recognition: The iris is an externally visible, yet protected organ whose unique epigenetic
pattern remains stable throughout adult life. These characteristics make it very attractive for use
as a biometric for identifying individuals. Image processing techniques can be employed to
extract the unique iris pattern from a digitized image of the eye, and encode it into a biometric
template, which can be stored in a database. This biometric template contains an objective
mathematical representation of the unique information stored in the iris, and allows
comparisons to be made between templates. When a subject wishes to be identified by an iris
recognition system, their eye is first photographed and a template is then created for their iris
region. This template is then compared with the templates stored in a database until either
a matching template is found and the subject is identified, or no match is found and the subject
remains unidentified.
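The template comparison described above is commonly implemented as a fractional Hamming distance over binary iris codes, with masks marking bits occluded by eyelids or eyelashes (as in Daugman-style systems). A minimal sketch, assuming hypothetical boolean code/mask arrays and an illustrative decision threshold:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits between two binary iris codes,
    counting only bits that are valid (unoccluded) in both masks."""
    valid = mask_a & mask_b
    n_valid = np.count_nonzero(valid)
    if n_valid == 0:
        return 1.0  # no comparable bits: treat as a non-match
    disagree = np.count_nonzero((code_a ^ code_b) & valid)
    return disagree / n_valid

def identify(probe_code, probe_mask, database, threshold=0.32):
    """Return the ID of the best-matching enrolled template, or None if
    no template is close enough. The threshold value is illustrative."""
    best_id, best_hd = None, 1.0
    for subject_id, (code, mask) in database.items():
        hd = hamming_distance(probe_code, probe_mask, code, mask)
        if hd < best_hd:
            best_id, best_hd = subject_id, hd
    return best_id if best_hd <= threshold else None
```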

The major steps involved in an iris recognition system are as follows:
Feature extraction
Localization and image matching


Speech detection & noise extraction by filtering:

The Recursive Least Squares (RLS) algorithm is used to enhance speech in the presence
of background noise.
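A minimal RLS noise-cancellation sketch in Python, assuming a second, noise-only reference input is available (e.g. a microphone away from the teacher); the filter order, forgetting factor, and regularization constant are illustrative choices, not the project's tuned values:

```python
import numpy as np

def rls_noise_cancel(primary, reference, order=8, lam=0.99, delta=1e-2):
    """RLS adaptive noise canceller.

    primary:   desired signal + noise (microphone near the speaker)
    reference: correlated noise-only input (second microphone, assumed)
    Returns the error signal, which converges toward the clean speech."""
    w = np.zeros(order)            # adaptive filter weights
    P = np.eye(order) / delta      # inverse correlation matrix estimate
    x = np.zeros(order)            # tap-delay line of the reference input
    out = np.zeros(len(primary))
    for n in range(len(primary)):
        x = np.roll(x, 1)
        x[0] = reference[n]
        y = w @ x                              # noise estimate
        e = primary[n] - y                     # error = cleaned sample
        k = P @ x / (lam + x @ P @ x)          # RLS gain vector
        w = w + k * e                          # weight update
        P = (P - np.outer(k, x @ P)) / lam     # inverse-correlation update
        out[n] = e
    return out
```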
Experimental Steps for Implementing the RLS Algorithm
Recording speech: WAV files were recorded from different speakers.
RLS: the RLS algorithm was used in preprocessing for noise cancellation.
Framing, normalization, and filtering.
Mel Frequency Cepstral Coefficients (MFCC) are chosen as the feature
extraction method.
Signal weighting, time normalization, Vector Quantization (VQ), and labeling.
Then an HMM is used to calculate the reference patterns, and DTW is used to
normalize the training data against the reference patterns.
Fusion of HMM and DTW:
o DTW measures the distance between the recorded speech and a template.
o The distance between the signals is computed at each instant along the
warping path.
o The HMM trains clusters and iteratively moves data between clusters based
on their likelihoods given by the various models.
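The DTW step above can be sketched as the classic dynamic-programming recurrence; 1-D feature sequences and an absolute-difference local cost are simplifying assumptions (real MFCC frames would be vectors with a Euclidean cost):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D feature sequences.
    D[i, j] = cost(i, j) + min(D[i-1, j], D[i, j-1], D[i-1, j-1])."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the warping path may stretch or compress time, a slowed-down copy of an utterance still matches its template closely, which is exactly why DTW is used to normalize the training data.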

As a result, this algorithm performs almost perfect segmentation for recorded voice. When
recording is done in noisy places, segmentation problems occur because in some cases the
algorithm produces different values caused by background noise. This forces the cut-off for
silence to be raised, since silence may not be quite zero and noise can be interpreted as speech.
For clean speech, on the other hand, both the zero-crossing rate and the short-term energy
should be zero in silent regions.
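The zero-crossing rate and short-term energy measures used for this silence decision can be sketched as follows; the thresholds are illustrative and would have to be raised and tuned for noisy recordings, as noted above:

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    signs = np.sign(frame)
    return np.count_nonzero(np.diff(signs)) / (len(frame) - 1)

def short_term_energy(frame):
    """Mean squared amplitude of the frame."""
    return float(np.mean(np.asarray(frame, dtype=float) ** 2))

def is_silence(frame, energy_thresh=1e-4, zcr_thresh=0.1):
    """For clean speech, silent regions have both energy and ZCR near zero;
    the threshold values here are illustrative assumptions."""
    return (short_term_energy(frame) < energy_thresh
            and zero_crossing_rate(frame) < zcr_thresh)
```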

Our project emphasizes human recognition through the iris, and the speech recognition part
will focus on a specific voice, treating all other voices as noise. Under your consideration and
with your help, our effort will make our thinking and ideas more effective.