
PATTERN RECOGNITION QUESTION & SOLUTION

1 Mark Questions
1) Define feature vector.
Ans: A feature vector is an n-dimensional vector of numerical features that represents some object.
2) Define Pattern Recognition.
Ans: Pattern recognition can be defined as the classification of
unknown data based on statistical information extracted from patterns
and/or their representation.
3) Define feature extractor.
Ans: Feature extraction refers to the process of transforming raw data
into numerical features that can be processed while preserving the
information in the original data set
4) The model learns and updates itself through reward/punishment in case of ---
------------
Ans: Reinforcement learning algorithm
5) Define feature selection.
Ans: Feature selection is the process of isolating the most consistent,
non-redundant, and relevant features to use in model construction
6) Define unsupervised learning.
Ans: Unsupervised learning is a learning method in which a system
learns without any supervision.
7) What is meant by decision surface?
Ans: A decision surface is the surface along which a classification algorithm divides up the feature space; plotting it is a diagnostic tool for understanding the classifier.
8) In the generalized formula for Bayes' theorem, what does the Greek letter Sigma in the denominator mean?
Ans: It denotes summation over all possible classes, so the denominator is the total probability of the evidence and normalizes the posteriors (being a sum of the non-negative terms that also appear in the numerator, it cannot be zero).


9) What is the definition of discriminant function?
Ans: Functions based on a set of variables that can be used for
discriminating between or classifying samples of events or objects
10) What is univariate Gaussian distribution?
Ans: The normal distribution, also known as the Gaussian distribution, is defined by two parameters: the mean μ, which is the expected value of the distribution, and the standard deviation σ, whose square is the expected squared deviation from the mean.
11) The covariance between two random variables X and Y measures the
degree to which X and Y are (linearly) related, which means how X varies
with Y and vice versa. What is the formula for Cov (X,Y)?
Ans: Cov(X,Y) = E(XY)−E(X)E(Y)
12) What is the formula for multivariate normal distributions?
Ans: p(x) = (2π)^(−d/2) |Σ|^(−1/2) exp( −(1/2)(x − μ)ᵀ Σ⁻¹ (x − μ) ), where μ is the d-dimensional mean vector and Σ is the d×d covariance matrix.
13) The likelihoods of classes ω1 and ω2 follow the normal distributions N(−0.5, 2) and N(0.5, 2), respectively. For equal priors, a pattern X = 1.0 belongs to which class ----
Ans: Class ω2
14) If the covariance matrices for all of the classes are identical, then the
discriminant functions will be-----------
Ans: Linear
15) For uniform prior we can estimate the parameter of a density function
by using----
Ans: maximum likelihood ( ML )
16) The most commonly used measure of similarity is the __________________ or its square.
Ans: Euclidean distance
17) __________________ is frequently referred to as k-means clustering.
Ans: Non-hierarchical clustering
18) Examples of non-parametric density estimation methods are ------------
Ans: Parzen Window, histogram, kernel density estimation
19) Why are linearly separable problems of interest to neural network
researchers ?
Ans: Because they are the only class of problems that the perceptron can solve successfully.
20) The linear discriminant function is given by -------
Ans: g(X) = WᵀX + W₀
21) Full form of iid is -------
Ans: independent and identically distributed


22) The most natural representation of hierarchical clustering is a
corresponding tree, called------
Ans: Dendrogram
23) Principal component analysis is one important step in ----------
Ans: Data dimension reduction
24) What is the use of the Hidden Markov Model?
Ans: Speech recognition
25) How is the state of the process described in an HMM?
Ans: An HMM is a temporal probabilistic model in which the state of
the process is described by a single discrete random variable
26) Where are additional variables added in an HMM?
Ans: Additional state variables can be added to a temporal model while
staying within the HMM framework.
27) Which algorithm works by first running the standard forward pass?
Ans: The modified smoothing algorithm works by first running the standard forward pass and then running the backward pass.
28) Which suggests the existence of an efficient recursive algorithm for
online smoothing?
Ans: Constant space
29) Which reveals an improvement in online smoothing?
Ans: Matrix formulation reveals an improvement in online smoothing
with a fixed lag.
30) Deep neural networks generally have more than ___ hidden layers
Ans: More than 2
31) A binary sigmoid function has range of ___.
Ans: (0, 1)
32) SVM stands for----------
Ans: Support vector machine
33) SVM is an example of -----------------classifier.
Ans: Linear Classifier and Maximum Margin Classifier
34) The distance between the hyperplane and the nearest data points is called -----------
Ans: Margin.

35) Decision tree is also referred to as __________________ algorithm.
Ans: Recursive partitioning
36) ______ is to create a training model that can be used to predict the
class or value of the target variable by learning simple decision rules.
Ans: Decision Tree
37) _____is used for cutting or trimming the tree in Decision trees.
Ans: Pruning
38) ______ is a measure of the randomness in the information being processed in the Decision Tree.
Ans: Entropy
39) _________computes the difference between entropy before the split
and average entropy after the split of the dataset based on given attribute
values
Ans: Information gain
40) K-means clustering algorithm is an example of which type of
clustering method?
Ans: Partitioning
41) Agglomerative has _________ approach
Ans: Bottom up
42) Unsupervised learning makes sense of ________ data without having
any predefined dataset for its training.
Ans: Unlabeled
43) What can we use in Hierarchical Clustering to find the right number
of clusters?
Ans: Dendrograms
44) Given two fuzzy clusters A1 and A2, a data point x in two-class fuzzy C-means clustering satisfies -----------
Ans: μ_A1(x) + μ_A2(x) = 1
45) PCA works better if there is ________
Ans: If the variables are scaled in the same units and the data have a linear structure.
46) What happens when you get features in lower dimensions using PCA?
Ans: The features will lose interpretability and the features may not
carry all information present in the data

AOT/CSBS/ 6SEM/ SKL 4


PATTERN RECOGNITION QUESTION & SOLUTION

47) Under which condition do SVD and PCA produce the same projection
result?
Ans: When the data has a zero mean vector, otherwise, you have to
center the data first before taking SVD.
48) If we project the original data points into the 1-d subspace by the principal component v = [√2/2, √2/2]ᵀ, what are their coordinates in the 1-d subspace?
Ans: The coordinates of the three points after projection are z1 = x1ᵀv = [−1, −1][√2/2, √2/2]ᵀ = −√2, z2 = x2ᵀv = 0, z3 = x3ᵀv = √2.
49) When would you use PCA over EFA?
Ans: When you are interested in explaining the total variance in a variance-
covariance matrix
Imagine you are dealing with text data. To represent the words, you are using word embeddings (Word2vec), so you end up with 1000 dimensions. Now you want to reduce the dimensionality of this high-dimensional data such that similar words stay close in the nearest-neighbor space. In such a case, which algorithm would you choose?
Ans: t-SNE, which stands for t-Distributed Stochastic Neighbor Embedding, and which considers the nearest neighbors when reducing the data.
50) In Fisher's linear discriminant classifier, the objective function J(W) that is maximized is ---
Ans: the ratio of the between-class (inter-class) scatter to the within-class scatter


MODULE 1
5 Marks Questions
1. Explain supervised learning and unsupervised learning.
Supervised Learning: Supervised learning is a type of learning method in which
we provide sample labeled data to the learning system in order to train it, and on
that basis, it predicts the output. The system creates a model using labeled data to understand the datasets and learn about each data point; once training and processing are done, we test the model by providing sample data to check whether it predicts the correct output.
The goal of supervised learning is to map input data to output data. Supervised learning is based on supervision, just as a student learns under the supervision of a teacher. An example of supervised learning is spam filtering.
Unsupervised Learning: Unsupervised learning is a learning method in which a
system learns without any supervision. The training is provided to the system with
the set of data that has not been labeled, classified, or categorized, and the
algorithm needs to act on that data without any supervision. The goal of
unsupervised learning is to restructure the input data into new features or a group
of objects with similar patterns.
In unsupervised learning, we don't have a pre-determined result. The system tries
to find useful insights from the huge amount of data.
2. Explain the Design Cycle of a Pattern Recognition System.
The basic stages involved in the design of a classification system are shown below.
[Figure: stages in the design of a classification system, linked by feedback arrows]
The figure shows the various stages followed for the design of a classification system.
As is apparent from the feedback arrows, these stages are not independent. On the
contrary, they are interrelated and, depending on the results, one may go back to
redesign earlier stages in order to improve the overall performance. Furthermore,
there are some methods that combine stages, for example, the feature selection and
the classifier design stage, in a common optimization task.


3. Write some of the applications of pattern recognition.
The applications of pattern recognition are:
Machine Vision: A machine vision system captures images via a camera and analyzes them to produce descriptions of the imaged objects. For example, during inspection in the manufacturing industry, when the manufactured objects are passed under the camera, the images have to be analyzed online.
Computer Aided Diagnosis (CAD): CAD assists doctors in making diagnostic decisions. Computer-assisted diagnosis has been applied to medical data such as X-rays, ECGs, ultrasound images, etc.
Speech Recognition: This process recognizes spoken information. Here, the software is built around a pattern recognition system which recognizes the spoken text and translates it into ASCII characters that are shown on the screen. The identity of the speaker can also be determined.
Character Recognition: This application recognizes both letters and numbers. An optically scanned image is provided as input and alphanumeric characters are generated as output. Its major application is in automation and information handling. It is also used in page readers, zip code readers, license plate readers, etc.
Manufacturing: Here 3-D images, obtained using structured light, laser, stereo, etc., are provided as input, and as a result the objects can be identified.
Fingerprint Identification: The input image is obtained from fingerprint sensors; fingerprints are assigned to various fingerprint classes, and the owner of a fingerprint can be identified.
Industrial Automation: The intensity or range image of a product is provided as input, and defective and non-defective products are distinguished.
15 Marks Questions
1. a) What is pattern recognition? b) State the advantages and disadvantages
of PR Systems. c) State the application of pattern recognition [5+6+4]
a) Pattern recognition (PR) is the scientific discipline that concerns the description
and classification (recognition) of objects (patterns) into a number of categories or
classes. Depending on the application, these objects can be images or signal
waveforms or any type of measurements that need to be classified.
Patterns may be a fingerprint image, a handwritten character, a human face, the iris of a human eye, or a speech signal. These examples are called input stimuli. Recognition establishes a close match between some new stimulus and previously stored stimulus patterns.


Pattern recognition systems are in many cases trained from labeled "training" data
(supervised learning), but when no labeled data are available other algorithms can
be used to discover previously unknown patterns (unsupervised learning). At the
most abstract level patterns can also be some ideas, concepts, thoughts, procedures
activated in human brain and body. This is known as the study of human
psychology (Cognitive Science)
PR techniques are an important component of intelligent systems and are used for
many application domains
 Decision making
 Object and pattern classification
b) Given below are the advantages and disadvantages of PR techniques.
Advantages
 Pattern recognition cracks the problems in classification.
 There are various problems in day to day life that are handled by the
intelligent PR systems such as Facial expression recognition systems.
 Visually impaired people are also benefitting from PR systems in many domains.
 The speech recognition systems are doing wonders and helping in research
fields
 The object detection is a miraculous achievement of PR systems that is
helpful in many industries such as aviation, health, etc
Disadvantages
 The process is quite complex and lengthy, which consumes time.
 The dataset needs to be large for accuracy.
 The logic behind object recognition is not fully certain.
c) The applications of pattern recognition are:
Machine Vision: A machine vision system captures images via a camera and analyzes them to produce descriptions of the imaged objects. For example, during inspection in the manufacturing industry, when the manufactured objects are passed under the camera, the images have to be analyzed online.
Computer Aided Diagnosis (CAD): CAD assists doctors in making diagnostic decisions. Computer-assisted diagnosis has been applied to medical data such as X-rays, ECGs, ultrasound images, etc.
Speech Recognition: This process recognizes spoken information. Here, the software is built around a pattern recognition system which recognizes the spoken text and translates it into ASCII characters that are shown on the screen. The identity of the speaker can also be determined.
Character Recognition: This application recognizes both letters and numbers. An optically scanned image is provided as input and alphanumeric characters are generated as output. Its major application is in automation and information handling. It is also used in page readers, zip code readers, license plate readers, etc.
Manufacturing: Here 3-D images, obtained using structured light, laser, stereo, etc., are provided as input, and as a result the objects can be identified.
Fingerprint Identification: The input image is obtained from fingerprint sensors; fingerprints are assigned to various fingerprint classes, and the owner of a fingerprint can be identified.
Industrial Automation: The intensity or range image of a product is provided as input, and defective and non-defective products are distinguished.
2. a) Define Measurement space and Feature space in classification process for
objects. b) Explain the different types of learning with examples. c) Discuss
about the four best approaches for a Pattern Recognition system. [4+5+6]
a) Measurement space: This is the set of all pattern attributes which are stored in
a vector form.
It is a range of characteristic attribute values. In vector form, measurement space is also called observation space / data space. E.g., W = [W1, W2, ..., Wn−1, Wn] for n pattern classes, where W is a pattern vector. Let X = [x1, x2] be a pattern vector for a flower, where x1 is petal length and x2 is petal width.
Feature Space: The range of subset of attribute values is called Feature Space F.
This subset represents a reduction in attribute space and pattern classes are divided
into sub classes. Feature space signifies the most important attributes of a pattern
class observed in measurement space.
b) At a broad level, learning can be classified into three types:
1. Supervised learning
2. Unsupervised learning
3. Reinforcement learning
1) Supervised Learning: Supervised learning is a type of machine learning
method in which we provide sample labeled data to the machine learning system in
order to train it, and on that basis, it predicts the output.


The system creates a model using labeled data to understand the datasets and learn about each data point; once training and processing are done, we test the model by providing sample data to check whether it predicts the correct output.
The goal of supervised learning is to map input data to output data. Supervised learning is based on supervision, just as a student learns under the supervision of a teacher. An example of supervised learning is spam filtering. Supervised learning can be grouped further into two categories of algorithms:
 Classification
 Regression
2) Unsupervised Learning: Unsupervised learning is a learning method in which
a machine learns without any supervision. The training is provided to the machine
with the set of data that has not been labeled, classified, or categorized, and the
algorithm needs to act on that data without any supervision. The goal of
unsupervised learning is to restructure the input data into new features or a group
of objects with similar patterns. In unsupervised learning, we don't have a predetermined result. The machine tries to find useful insights from the huge amount of data. It can be further classified into two categories of algorithms:
 Clustering
 Association
3) Reinforcement Learning: Reinforcement learning is a feedback-based learning
method, in which a learning agent gets a reward for each right action and gets a
penalty for each wrong action. The agent learns automatically from this feedback and improves its performance. In reinforcement learning, the agent interacts with the environment and explores it. The goal of the agent is to get the most reward points, and hence it improves its performance. A robotic dog, which automatically learns the movement of its limbs, is an example of reinforcement learning.
c) Approaches of PR system are as mentioned below:
1) Template Matching
2) Statistical Approach
3) Syntactic Approach
4) ANN Approach.


TEMPLATE MATCHING: This approach to pattern recognition is based on finding the similarity between two entities (points, curves, shapes) of the same type. A 2-D shape or a prototype of the pattern to be recognized is available. A template is a d×d mask or window. The pattern to be recognized is matched against the stored templates in a knowledge base.
STATISTICAL APPROACH: Each pattern is represented in terms of d features in a d-dimensional space. The goal is to select those features that allow pattern vectors belonging to different categories to occupy compact and disjoint regions. The separation of the pattern classes is determined; decision surfaces and lines are drawn, determined by the probability distributions of the random variables for each pattern class.
SYNTACTIC APPROACH: This approach solves complex pattern classification problems. A hierarchical rule base is defined, e.g., grammar rules for natural language or a syntax tree structure. These are used to decompose complex patterns into simpler sub-patterns. Patterns can be viewed as sentences, where sentences are decomposed into words and words are further subdivided into letters.
NEURAL NETWORKS APPROACH: Artificial neural networks are massively parallel computing systems consisting of an extremely large number of simple processors with many interconnections.
Network models attempt to use principles such as learning, generalization, adaptivity, fault tolerance, and distributed representation & computation. The learning process involves updating the network architecture and the connection weights so that the network performs better at clustering.
3. a) What are the design principles of a Pattern Recognition System? b) What
are major steps involved in this process? c) Explain Design Cycle of a Pattern
Recognition System. [4+4+7]
a) Design principles of a Pattern Recognition System are as mentioned below:
The design of a pattern recognition system is based on the construction of the following AI techniques:
 Multi-layer perceptron in an Artificial Neural Network.
 Decision tree implementation.
 Nearest neighbor classification.
 Segmentation of large objects.
1. Designing a PR system that is robust against variations in illumination and brightness in the environment.

2. Designing parameters based on translation, scaling and rotation.
3. Color and texture representation by histograms.
4. Designing brightness-based and feature-based PR systems.
b) This system comprises mainly five components, namely sensing, segmentation, feature extraction, classification and post-processing. Together these form a system that works as follows:
Sensing and Data Acquisition: The various properties that describe the object, such as its entities and attributes, are captured using a sensing device.
Segmentation: Data objects are segmented into smaller segments in this step.
Feature Extraction: Measurable properties (features) that characterize the segmented objects are computed and passed on to the classifier.
Classification: Based on the extracted features, each object is assigned to one of the candidate classes.
Post-Processing & Decision: Certain refinements and adjustments are made as per the changes in features of the data objects that are in the process of recognition. Decision making can then be done once post-processing is completed.
c) Design Cycle of a Pattern Recognition System

Pattern classification involves finding three major attribute spaces: (a) measurement space, (b) feature space, and (c) decision space. After this, an appropriate neural network setup is trained with these attribute sets to make the system learn for unknown sets of patterns and objects. The steps of the classification process are as follows:


Step1: Stimuli produced by the objects are perceived by sensory devices. Important
attributes like (shape, size, color, texture) produce the strongest inputs. Data
collection involves identification of attributes of objects and creating Measurement
space.
Measurement space: This is the set of all pattern attributes, which are stored in vector form. It is a range of characteristic attribute values. In vector form, measurement space is also called observation space / data space. E.g., W = [W1, W2, ..., Wn−1, Wn] for n pattern classes, where W is a pattern vector. Let X = [x1, x2] be a pattern vector for a flower, where x1 is petal length and x2 is petal width. Pattern classes can be W1 = Lily, W2 = Rose, W3 = Sunflower.
Step2: After this features are selected and feature space vector is designed. The
range of subset of attribute values is called Feature Space F. This subset represents
a reduction in attribute space and pattern classes are divided into sub classes.
Feature space signifies the most important attributes of a pattern class observed in measurement space. The feature space is of smaller size than the M-space.
Step3: AI models based on probability theory, e.g., Bayesian models and Hidden Markov Models, are used for grouping or clustering the objects. The attributes selected are those which provide high inter-class separation and low intra-class spread.
Step4: Using Unsupervised (for feature extraction) or Supervised learning
techniques (classification), training of classifiers is performed. When we present the pattern recognition system with a set of classified patterns so that it can learn the characteristics of the set, we call it training.
Step5: In the evaluation of the classifier, testing is performed: an unknown pattern is given to the PR system for identification of its correct class. Using the selected attribute values, object/class characterization models are learned by forming generalized prototype descriptors, classification rules, or decision functions. The range of decision function values is known as the decision space D of r dimensions. We also evaluate the performance and efficiency of the classifier for further improvement.


MODULE 2
5 Marks Question
1) Explain Normal Distribution with its characteristics.
In a normal distribution, data is symmetrically distributed with no skew. When
plotted on a graph, the data follows a bell shape, with most values clustering
around a central region and tapering off as they go further away from the center.
Normal distributions are also called Gaussian distributions or bell curves because
of their shape.

Normal distributions have key characteristics that are easy to spot in graphs:
 The mean, median and mode are exactly the same.
 The distribution is symmetric about the mean—half the values fall below the
mean and half above the mean.
 The distribution can be described by two values: the mean and the standard
deviation.
2) Define Bayesian classifier. Show that it is an optimal classifier.
A Bayesian classifier is a probabilistic model where the classification is a latent
variable that is probabilistically related to the observed variables. Classification
then becomes inference in the probabilistic model.
It can be shown that, of all classifiers, the optimal Bayes classifier is the one that has the lowest probability of misclassifying an observation, i.e. the lowest probability of error. So if we know the posterior distribution, then using the Bayes classifier is as good as it gets.
The Bayes classifier is the theoretically optimal classifier for a given classification problem. This is why it is also called the target classifier: it is the classifier we aim at when using learning algorithms.


For a given input pattern, the Bayes classifier outputs the label that is the most likely, and thus provides the prediction that is the least likely to be an error compared with the other choices of label. Since the Bayes classifier applies this scheme for all possible inputs, this yields the smallest probability of error, a quantity also known as the Bayes risk, i.e., the risk of the Bayes classifier, which is by definition the smallest risk one can obtain for a given problem.
Note that the Bayes classifier requires knowledge of the class-membership probabilities, which are unknown in practice. This is why the Bayes classifier cannot be applied directly in practice. However, it plays an important role in the analysis of other learning algorithms.
The Bayes classifier is the best classifier among all possible classifiers. Another
theoretically important classifier is the best classifier among a given set of
classifiers.
3. In a two class problem with single feature X the pdf’s are Gaussians with
variance σ2 = 0.5 for both classes and mean value 0 and 1 respectively. If P (ω1) =
P (ω2) = 0.5, calculate the threshold value X 0 for minimum error probability
Solution: We know that in a two-class problem with a single feature X and Gaussian pdfs, the decision boundary satisfies
Wᵀ(X − X0) = 0, where W = μi − μj and
X0 = threshold value = (μi + μj)/2 − ( σ² / (μi − μj) ) ln( P(ωi)/P(ωj) ).
Here, μi = 0, μj = 1, P(ω1) = P(ω2) = 0.5, and σ² = 0.5; the log term vanishes for equal priors.
Putting the values in the above equation, we get:
X0 = threshold value = 1/2
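A quick numeric sanity check of this threshold (a sketch in Python; the grid search is purely illustrative):

import numpy as np

def gaussian_pdf(x, mu, var):
    # Univariate Gaussian density
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Equal priors, means 0 and 1, variance 0.5 for both classes
xs = np.linspace(-1.0, 2.0, 3001)
p1 = gaussian_pdf(xs, 0.0, 0.5)
p2 = gaussian_pdf(xs, 1.0, 0.5)

# The minimum-error threshold is where the two (equally weighted) densities cross
x0 = xs[np.argmin(np.abs(p1 - p2))]
print(x0)  # ~0.5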
15 Marks Question
1. In a particular binary hypothesis testing application, the conditional density for a scalar feature x given class ω1 is p(x|ω1) = k1 exp(−x²/20). Given class ω2, the conditional density is p(x|ω2) = k2 exp(−(x − 6)²/12).
a) Find k1 and k2.
b) Assume that the prior probabilities of the two classes are equal, and that the cost for choosing correctly is zero. If the costs for choosing incorrectly are

C12 = √3 and C21 = √5 (where Cij corresponds to predicting class i when it belongs
to class j), what is the expression for the conditional risk?
c). Find the decision regions which minimize the Bayes risk. [4+5+6]
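One way to work this out (a sketch; every step follows from the given densities):
a) Each density must integrate to 1. Matching p(x|ω1) = k1 exp(−x²/20) with a Gaussian N(0, σ1²) gives 2σ1² = 20, i.e. σ1² = 10 and k1 = 1/√(20π). Similarly, 2σ2² = 12 gives σ2² = 6 and k2 = 1/√(12π).
b) With zero cost for correct decisions, the conditional risks are
R(α1|x) = C12 P(ω2|x) = √3 P(ω2|x),  R(α2|x) = C21 P(ω1|x) = √5 P(ω1|x).
c) The Bayes rule decides ω1 when R(α1|x) < R(α2|x), i.e. (for equal priors) when √3 k2 exp(−(x − 6)²/12) < √5 k1 exp(−x²/20); taking logarithms turns this into a quadratic inequality in x, whose two roots give the boundaries of the decision regions.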


2. a) Explain the concept of Prior, Posterior, and Likelihood with an example.


b) Explain Bayesian Decision Theory
c) Suppose box A contains 4 red and 5 blue coins and box B contains 6 red and 3
blue coins. A coin is chosen at random from the box A and placed in box B.
Finally, a coin is chosen at random from among those now in box B. What is the
probability a blue coin was transferred from box A to box B given that the coin
chosen from box B is red? [6+4+5]
a) Prior: The prior knowledge or belief about the probabilities of various
hypotheses in H is called Prior in context of Bayes’ theorem. For example, if we
have to determine whether a particular type of tumour is malignant for a patient,
the prior knowledge of such tumours becoming malignant can be used to validate
our current hypothesis and is a prior probability or simply called prior.
Posterior: The probability that a particular hypothesis holds for a data set based on
the Prior is called the posterior probability or simply Posterior. In the above
example, the probability of the hypothesis that the patient has a malignant tumour
considering the Prior of correctness of the malignancy test is a posterior
probability. In our notation, we will say that we are interested in finding out P(h|T),
which means whether the hypothesis holds true given the observed training data T.
This is called the posterior probability or simply Posterior in machine learning
language. So, the prior probability P(h), which represents the probability of the
hypothesis independent of the training data (Prior), now gets refined with the
introduction of influence of the training data as P(h|T).
Likelihood: In certain machine learning problems, if every hypothesis in H is equally probable a priori, i.e., P(h_i) = P(h_j), then we can determine P(h|T) from the probability P(T|h) alone. Thus, P(T|h) is called the likelihood of data T given h, and any hypothesis that maximizes P(T|h) is called the maximum likelihood (ML) hypothesis, h_ML.
b) In Bayes' decision theory, we are interested in computing the posterior distribution P(ω|X). Using Bayes' theorem, it is easy to show that the posterior distribution P(ω|X) can be computed via the class-conditional distribution P(X|ω) and the prior distribution P(ω). The prior distribution P(ω) represents the prior knowledge we may have about the distribution of ω before we obtain additional information from our dataset. In other words, Bayes' decision theory utilizes prior knowledge in the decision.


c) Let E represent the event of moving a blue coin from box A to box B. We want
to find the probability of a blue coin which was moved from box A to box B given
that the coin chosen from B was red.
The probability of choosing a red coin from box A is 4/9 and the probability of choosing a blue coin from box A is P(E) = 5/9.
If a red coin was moved from box A to box B, then box B has 7 red coins and 3 blue coins. Thus the probability of choosing a red coin from box B is 7/10. Similarly, if a blue coin was moved from box A to box B, then the probability of choosing a red coin from box B is 6/10.
Hence, the probability that a blue coin was transferred from box A to box B, given that the coin chosen from box B is red, is given by (with P(R) the total probability of drawing a red coin from box B):
P(E|R) = P(R|E)P(E) / P(R)
       = ((6/10)(5/9)) / ((7/10)(4/9) + (6/10)(5/9))
       = 15/29
3. a) Suppose two categories consist of independent binary features in three dimensions with known feature probabilities. Construct the Bayesian decision boundary if P(ω1) = P(ω2) = 0.5 and the individual components obey pi = 0.8, qi = 0.5, where i = 1, 2, 3.
b) Show that in a multiclass classification task the Bayes decision rule minimizes
the error probability

c) Prove that the covariance estimate

is an unbiased one, where


[4+5+6]
Solution
a) We know,


By the above equations, the weights are:
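A sketch of the computation, using the standard Bayes discriminant for independent binary features, g(x) = Σ wi xi + w0 (decide ω1 if g(x) > 0):
wi = ln( pi(1 − qi) / ( qi(1 − pi) ) ) = ln( (0.8)(0.5) / ((0.5)(0.2)) ) = ln 4 ≈ 1.39, for i = 1, 2, 3
w0 = Σ_{i=1}^{3} ln( (1 − pi)/(1 − qi) ) + ln( P(ω1)/P(ω2) ) = 3 ln(0.2/0.5) + 0 ≈ −2.75
so the decision boundary is approximately 1.39 (x1 + x2 + x3) − 2.75 = 0, i.e. x1 + x2 + x3 ≈ 1.98.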

[Figure] The decision boundary for the problem involving three-dimensional binary features: on the left, the case pi = 0.8 and qi = 0.5; on the right, the same values except p3 = q3, which leads to w3 = 0 and a decision surface parallel to the x3 axis.
b)
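A sketch of the argument: the Bayes rule assigns x to the class with maximum posterior P(ωi|x), so the probability of a correct decision is
P(correct) = Σ_i ∫_{Ri} p(x|ωi)P(ωi) dx = ∫ max_i P(ωi|x) p(x) dx.
Any other partition of the feature space replaces max_i P(ωi|x) with a smaller or equal value at some x, so it cannot achieve a larger P(correct); hence the Bayes rule minimizes P(error) = 1 − P(correct).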


c)
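Assuming the estimate in question is the usual sample covariance normalized by N − 1, a sketch:
Σ̂ = (1/(N − 1)) Σ_{k=1}^{N} (x_k − μ̂)(x_k − μ̂)ᵀ, with μ̂ = (1/N) Σ_k x_k.
Since E[x_k x_kᵀ] = Σ + μμᵀ and E[μ̂ μ̂ᵀ] = Σ/N + μμᵀ,
E[ Σ_k (x_k − μ̂)(x_k − μ̂)ᵀ ] = N(Σ + μμᵀ) − N(Σ/N + μμᵀ) = (N − 1)Σ,
so E[Σ̂] = Σ, i.e. the estimate is unbiased.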


Module 3
5 Marks Questions
1) What is the fundamental difference between maximum likelihood
parameter estimation and Bayesian parameter estimation?
There are two methods for estimating parameters that are Maximum Likelihood
Estimation (MLE) and Bayesian parameter estimation. Despite the difference in
theory between these two methods, they are quite similar when they are applied in
practice. Maximum Likelihood (ML) and Bayesian parameter estimation make
very different assumptions. Here the assumptions are contrasted briefly:
MLE
 Deterministic (single, non-random) estimate of the parameters, θ_ML
 Determining the probability of a new point requires one calculation: P(x|θ)
 No "prior knowledge"
 Estimates of variance and other parameters are often biased
 Overfitting is handled with regularization parameters
Bayes
 Probabilistic (probability density) estimate of the parameters, p(θ | Data)
 Determining the probability of a new point requires integration over the parameter space
 Prior knowledge is necessary
 With certain specially-designed priors, leads naturally to unbiased estimate
of variance
 Overfitting is solved by the selection of the prior
In the end, both methods require a parameter to avoid overfitting. The parameter
used in Maximum Likelihood Estimation is not as intellectually satisfying, because
it does not arise as naturally from the derivation.
Comparison table between MLE and BPE


2. Explain Maximum Likelihood technique under Parameter Estimation of


Classification.
An estimation model consists of a number of parameters. So, in order to calculate or estimate the parameters of the model, the concept of maximum likelihood is used.
Whenever the probability density functions of a sample are unknown, they can be estimated by treating the parameters of the sample as quantities having unknown but fixed values.
Consider that we want to estimate the height of a number of boys in a school, but it would be time consuming to measure the height of all of them. With the heights assumed normally distributed with unknown mean and unknown variance, maximum likelihood estimation lets us estimate the mean and variance by measuring the height of only a small group of boys from the total sample.
Let us separate a collection of samples according to class, giving C data sets D1, D2, ..., Dc, with the samples in Dj drawn independently according to the probability law p(x|ωj). Assume p(x|ωj) has a known parametric form determined by the value of a parameter vector θj; e.g., p(x|ωj) ~ N(μj, Σj), where θj consists of these parameters. To show the dependence we write p(x|ωj, θj).
The objective is to use the information provided by the training samples to obtain good estimates for the unknown parameter vectors θ1, θ2, ..., θc associated with each category. Assume samples in Di give no information about θj if i ≠ j, i.e., the parameters of different classes are functionally independent. Let set D have n samples [X1, X2, ..., Xn].

p(D|θ) is the likelihood of θ with respect to the set of samples. The maximum likelihood estimate of θ is, by definition, the value θ̂ that maximizes p(D|θ).
Logarithmic form: Since the logarithm turns products into sums and makes the expressions simpler, the θ that maximizes the log-likelihood also maximizes the likelihood. If the number of parameters to be estimated is p, we let θ denote the p-component vector θ = (θ1, θ2, ..., θp)ᵗ.
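In equations, under the i.i.d. assumption, this reads (a standard sketch):
p(D|θ) = Π_{k=1}^{n} p(x_k|θ),  l(θ) = ln p(D|θ) = Σ_{k=1}^{n} ln p(x_k|θ),
θ̂ = arg max_θ l(θ), obtained by solving ∇_θ l(θ) = Σ_{k=1}^{n} ∇_θ ln p(x_k|θ) = 0.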


15 Marks Question
a) Explain Optimum Statistical Classifier.
b) Using Bayesian approach, evolve expressions for mean and variances of
samples drawn from a univariate normal distribution. [5+5+5]
Optimum Statistical Classifier: This is a pattern classification approach developed on a probabilistic basis, because of the randomness under which pattern classes are normally generated. It is based on Bayesian theory and conditional probabilities: "the probability that a particular pattern X is from class ωi is denoted as P(ωi|X)". If a pattern classifier decides that X came from ωj when it actually came from ωi, it incurs a loss Lij.
The average loss incurred in assigning X to class ωj is given by the following equation:
r_j(X) = Σ_k L_kj p(X|ω_k) P(ω_k) / p(X)
where p(X|ω_k) is the probability density function of the patterns from class ω_k and P(ω_k) is the probability of occurrence of class ω_k. p(X) is independent of k (it has the same value for all the classes), hence the equation can be rewritten as below:
r_j(X) = Σ_k L_kj p(X|ω_k) P(ω_k)
Assume that we have a set of n data samples from a Normal distribution with
unknown mean m and known standard deviation s. We would like to estimate the
mean together with the appropriate level of uncertainty.
A Normal distribution can have a mean anywhere in [-∞, +∞], so we could use a
Uniform improper prior p(m) = k.
We assign an uninformative (Uniform) prior for m and use a Normal likelihood function for the observed n measurements {xi}. No prior is needed for s since it is known, and we arrive at a posterior distribution for m given by:

Taking logs:

Since s is known:

where k is some constant. Differentiating twice, we get:

The best estimate m0 of m is that value for which

i.e., m0 is the average x̄ of the data values (no surprise there). A Taylor series
expansion of this function about m0 gives:


(1)
The second term is missing because it equals zero, and there are no other higher-order terms, since the second derivative is independent of m and any further differential therefore equals zero.
Consequently, Equation 1 is an exact result.
Taking exponents to convert back to f(m), and rearranging a little, we get:

where K is a normalizing constant. Comparison with the probability density function for the Normal distribution shows that this is a Normal density function with mean x̄ and standard deviation s/√n. In other words, the posterior for m is Normal(x̄, s/√n), which is the same as the classical statistics result.


The likelihood function for n observations from a Normal distribution is given by
the product of the Normal probability densities for each sample:

With the uninformed prior:

This gives a posterior distribution of:

Expanding the exponent:


where s² is the sample variance.


To get to the marginal posterior distribution for s² we have to average this joint
distribution over all m:

The only component in m is:

which, with the appropriate Normal normalizing factor, would be a Normal distribution density. Thus the integral is just the reciprocal of this factor and we get:

(1)
If a variable X = Gamma(a,b), then the variable Y=1/X has the Inverse-Gamma
density:

(2)
Comparing Equations 1 and 2 we see that:


The last identity follows from the definition of the Inverse-Gamma distribution. Finally:

a) Define maximum likelihood (ML) estimation. Show that if the likelihood function is a univariate Gaussian with unknowns the mean µ as well as the variance σ², then the ML estimates are given by
µ̂ = (1/N) Σ_{k=1}^{N} X_k,  σ̂² = (1/N) Σ_{k=1}^{N} (X_k − µ̂)²,
where X_k is the k-th pattern and N is the total number of training patterns.


b) In a two-class one-dimensional problem, the pdfs are the Gaussians N(0, σ²) and N(1, σ²) for the two classes, respectively. Show that the threshold x0 minimizing the average risk is equal to
x0 = 1/2 − σ² ln( λ21 / λ12 ),
where λki is the loss incurred when a sample from ωk is assigned to ωi.
[10+5]
a) In statistics, maximum likelihood estimation (MLE) is a method of estimating
the parameters of a statistical model given observations, by finding the parameter
values that maximize the likelihood of making the observations given the
parameters.


b)
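A sketch of the derivation, with λki denoting the loss when a sample from ωk is assigned to ωi and equal priors: assign x to ω1 when λ12 p(x|ω1) > λ21 p(x|ω2). Substituting the two Gaussians and taking logarithms,
ln λ12 − x²/(2σ²) > ln λ21 − (x − 1)²/(2σ²)  ⇔  1 − 2x > 2σ² ln(λ21/λ12),
so the threshold is x0 = 1/2 − σ² ln(λ21/λ12); for λ12 = λ21 it reduces to the midpoint 1/2.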


Module 7:
5 Marks Question
1. Define Perceptron. Explain why the perceptron cost function is a
continuous piecewise linear function
A perceptron is a neural network unit that performs certain computations to detect features or business intelligence in the input data. It is a function that maps its input x, which is multiplied by the learned weight coefficients, to an output value f(x).
The perceptron cost function is continuous and piecewise linear because it is a sum of terms that are linear in the weight vector: J(w) = Σ_{x∈Y} δ_x wᵀx, where Y is the set of currently misclassified samples and δ_x = −1 if x ∈ ω1 and δ_x = +1 if x ∈ ω2. As w varies, the set Y stays fixed over regions of weight space, so J is linear within each region; at a boundary where a sample enters or leaves Y, its contribution is zero, so the pieces join continuously.

2. Use a simple perceptron with weights w0, w1, and w2 as −1, 2, and 1,
respectively, to classify data points (3, 4); (5, 2); (1, −3); (−8, −3); (−3, 0).
Let us examine if this perceptron is able to classify a set of points given below
P1= (3, 4);
P2= (5, 2);
P3= (1, −3);
P4= (−8, −3);
P5= (−3, 0)
w0= -1, w1 = 2, and w2 = 1
Point   v = Σ wi·xi                   Yout = f(v)   Class
P1      −1 + 2*3 + 1*4 = 9            1             C1
P2      −1 + 2*5 + 1*2 = 11           1             C1
P3      −1 + 2*1 + 1*(−3) = −2        0             C2
P4      −1 + 2*(−8) + 1*(−3) = −20    0             C2
P5      −1 + 2*(−3) + 1*0 = −7        0             C2
As depicted in the table above, on the basis of the activation function output, only points P1 and P2 generate an output of 1. Hence, they are assigned to class C1, as expected. On the other hand, points P3, P4 and P5, whose weighted sums are negative, generate an output of 0. Hence, they are assigned to class C2, again as expected.
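A small Python sketch that reproduces this table (the step activation and the point list mirror the question; the names are illustrative):

# Perceptron with fixed weights w0=-1, w1=2, w2=1 and a step activation
points = [(3, 4), (5, 2), (1, -3), (-8, -3), (-3, 0)]
w0, w1, w2 = -1, 2, 1

def step(v):
    # Step (threshold) activation: fires 1 for non-negative input
    return 1 if v >= 0 else 0

for i, (x1, x2) in enumerate(points, start=1):
    v = w0 + w1 * x1 + w2 * x2          # weighted sum plus bias
    y = step(v)
    print(f"P{i}: v = {v}, output = {y}, class = {'C1' if y == 1 else 'C2'}")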


15 Marks Question
a) Explain Support Vector Machines in detail. b) State the advantages and disadvantages associated with SVM. c) Show that if the soft margin SVM cost function is chosen to be [5+5+5]

a) Support Vector Machines: This is a linear machine for the case of separable patterns that may arise in the context of pattern classification. The idea is to construct a HYPERPLANE as a decision surface in such a way that the margin of separation between positive and negative examples is maximized.
A good example of such a system is classifying a set of new documents into
positive or negative sentiment groups, based on other documents which have
already been classified as positive or negative. Similarly, we could classify new
emails into spam or non-spam, based on a large corpus of documents that have
already been marked as spam or non-spam by humans. SVMs are highly applicable
to such situations.
 SVM is an approximate implementation of Structural Risk Minimization.
 The error rate of a machine on test data is bounded by the sum of the training error rate and a term that depends on the Vapnik–Chervonenkis (VC) dimension.
 SVM sets the first term to zero and minimizes the second term. We use the SVM learning algorithm to construct the following three types of learning machines:
(i) Polynomial learning machine
(ii) Two layer perceptrons
(iii) Radial basis function network
The condition is: Test error rate ≤ Training error rate + f(N, h, p),
where N: size of the training set,
h: measure of model complexity,
p: probability that this bound fails.
If we consider an element of our p-dimensional feature space, i.e. x = (x1, x2, ..., xp) ∈ Rᵖ, then we can mathematically define an affine hyperplane by the following equation:
b0 + b1x1 + ... + bpxp = 0;
b0 ≠ 0 gives us an affine plane (i.e. it does not pass through the origin).


We can use a more concise notation for this equation by introducing the summation sign: b0 + Σ_{j=1}^{p} bj xj = 0. The line that maximizes the minimum margin is better. The maximum-margin separator is determined by a subset of the data points; the data points in this subset are called support vectors. Support vectors are used to decide which side of the separator a test case is on.
Consider a training set {(Xi, di)} for i = 1 to n, where Xi is the input pattern for the i-th example and di is the desired response (target output).
Let di = +1 for positive examples and di = −1 for negative examples; the pattern classes are linearly separable. The hyperplane decision surface is given by the equation:
WᵀX + b = 0 (a data point lying exactly on the hyperplane satisfies this equality),
where W is the adjustable weight vector and b is the bias.
Therefore, WᵀXi + b ≥ 0 for di = +1 and WᵀXi + b < 0 for di = −1.
The distance to the closest data point is called the margin of separation, denoted by ρ. The objective of SVM is to maximize ρ to obtain the optimal hyperplane.
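A minimal sketch of a linear SVM on a toy 2-D dataset, using scikit-learn (assuming it is installed; the data is made up for illustration):

from sklearn.svm import SVC
import numpy as np

# Toy linearly separable data: two clusters in 2-D
X = np.array([[1, 1], [2, 1], [1, 2], [5, 5], [6, 5], [5, 6]])
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1.0)  # larger C -> harder margin
clf.fit(X, y)

print(clf.coef_, clf.intercept_)   # W and b of the separating hyperplane
print(clf.support_vectors_)        # the points that define the margin
print(clf.predict([[3, 3]]))       # predicted class for a new point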
b) Advantages of SVM
 SVM can be used for both classification and regression.
 It is robust, i.e. not much impacted by data with noise or outliers.
 The prediction results using this model are very promising.
Disadvantages of SVM
 SVM is applicable only for binary classification, i.e. when there are only two
classes in the problem domain.
 The SVM model is very complex – almost like a black box when it deals
with a high-dimensional data set. Hence, it is very difficult and close to
impossible to understand the model in such cases.
 It is slow for a large dataset, i.e. a data set with either a large number of
features or a large number of instances.
 It is quite memory-intensive.
c)


a) Consider a case in which class ω1 consists of the two feature vectors [0, 0]ᵀ and [0, 1]ᵀ and class ω2 of [1, 0]ᵀ and [1, 1]ᵀ. Use the perceptron algorithm in its reward and punishment form, with ρ = 1 and w(0) = [0, 0]ᵀ, to design the line separating the two classes.
b) Suggest how to change either the weights or the threshold level of this
single–unit in order to implement the logical OR function (true when at least
one of the arguments is true):

c) Explain, in detail, the backpropagation algorithm.


a)


b)
One solution is to increase the weights of the unit: w1 = 2 and w2 = 2:

Alternatively, we could reduce the threshold to 1:

c) The backpropagation algorithm is used to train multilayer perceptrons. In this method, errors, i.e. the differences between the output values of the output layer and the expected values, are propagated back from the output layer to the preceding layers. Hence,


the algorithm implementing this method is known as backpropagation, i.e. propagating the errors backward to the preceding layers. This algorithm consists of multiple iterations, also known as epochs. Each epoch consists of two phases:
 A forward phase in which the signals flow from the neurons in the input
layer to the neurons in the output layer through the hidden layers. The
weights of the interconnections and activation functions are used during the
flow. In the output layer, the output signals are generated.
 A backward phase in which the output signal is compared with the expected
value. The computed errors are propagated backwards from the output to the
preceding layers. The errors propagated back are used to adjust the
interconnection weights between the layers.
 The iterations continue till a stopping criterion is reached. Figure below
depicts a reasonably simplified version of the backpropagation algorithm.


One main part of the algorithm is adjusting the interconnection weights. This is done using a technique termed gradient descent. In simple terms, the algorithm calculates the partial derivative of the cost function with respect to each interconnection weight to identify the 'gradient', i.e. the extent of change of the weight required to minimize the cost function. Quite understandably, therefore, the activation function needs to be differentiable.
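A compact sketch of backpropagation with gradient descent for a tiny 2-4-1 network with sigmoid activations, in Python/NumPy (the architecture, learning rate and XOR data are illustrative choices, not from the text):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)       # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)       # hidden -> output
eta = 0.5                                            # learning rate

for epoch in range(10000):
    # Forward phase: signals flow from input through hidden to output
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Backward phase: propagate the output error back to earlier layers
    delta2 = (y - t) * y * (1 - y)                   # output-layer error term
    delta1 = (delta2 @ W2.T) * h * (1 - h)           # hidden-layer error term

    # Gradient-descent updates of the interconnection weights
    W2 -= eta * h.T @ delta2;  b2 -= eta * delta2.sum(axis=0)
    W1 -= eta * X.T @ delta1;  b1 -= eta * delta1.sum(axis=0)

print(np.round(y.ravel(), 2))  # typically approaches [0, 1, 1, 0]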

Module 8
5 Marks Question
What is decision tree? What are the different types of nodes? Explain in
detail.
Ans: Decision tree learning is one of the most widely adopted algorithms for classification. As the name indicates, it builds a model in the form of a tree structure. Its classification accuracy is competitive with other strategies, and it is exceptionally efficient.
Decision trees start with a root node that acts as a starting point and is followed by splits that produce branches, also known as edges. Each internal node (or decision node) of a decision tree corresponds to one of the features/attributes. The branches then link to further nodes, forming decision points for each of the possible values (or range of values) of the feature associated with the node.
This process is repeated using the data points collected in each new leaf. A final
categorization is produced when a leaf no longer generates any new branches and
results in what’s called a terminal node. The tree terminates at different leaf nodes
(or terminal nodes) where each leaf node represents a possible value for the output
variable. The output variable is determined by following a path that starts at the
root and is guided by the values of the input variables.
A decision tree consists of three types of nodes:
 Root Node: A root node that has no incoming edges and zero or more
outgoing edges.
 Branch Node: Internal nodes, each of which has exactly one incoming edge
and two or more outgoing edges.
 Leaf Node: Leaf or terminal nodes, each of which has exactly one incoming
edge and no outgoing edges.
In a tree classification task, the set Xt , associated with node t, contains Nt =10
vectors. Four of these belong to class ω1, four to class ω2, and two to class ω3,


in a three-class classification task. The node splitting results in two new subsets: XtY, with three vectors from ω1 and one from ω2, and XtN, with one vector from ω1, three from ω2, and two from ω3. Compute the decrease in node impurity after splitting.
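A worked sketch using entropy (base-2 logarithms) as the impurity measure:
I(t) = −(4/10) log2(4/10) − (4/10) log2(4/10) − (2/10) log2(2/10) ≈ 1.522
I(tY) = −(3/4) log2(3/4) − (1/4) log2(1/4) ≈ 0.811
I(tN) = −(1/6) log2(1/6) − (3/6) log2(3/6) − (2/6) log2(2/6) ≈ 1.459
ΔI = I(t) − (4/10) I(tY) − (6/10) I(tN) ≈ 1.522 − 0.325 − 0.875 ≈ 0.322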

15 Marks Question
Use the dataset below to learn a decision tree which predicts if people pass
Pattern Recognition subject (Yes or No), based on their previous CGPA
(High, Medium, or Low) and whether or not they studied.
CGPA STUDIED PASSED
L F N
L T Y
M F N
M T Y
H F Y
H T Y
Find the following:
a) What is the entropy H(Passed)?
b) What is the entropy H(Passed | CGPA)?
c) What is the entropy H(Passed | Studied)?
d) Draw the full decision tree that would be learned for this dataset. You
do not need to show any calculations. (4+4+4+3)
a)


b)

c)

d) We want to split first on the variable which maximizes the information gain H(Passed) − H(Passed | A). This is equivalent to minimizing H(Passed | A), so we should split on "Studied" first.
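A short Python sketch that recomputes the three entropies from the table above; it gives H(Passed) ≈ 0.918, H(Passed | CGPA) ≈ 0.667 and H(Passed | Studied) ≈ 0.459, which is why "Studied" is chosen as the root:

from math import log2
from collections import Counter

# (CGPA, Studied, Passed) rows from the table
rows = [("L", "F", "N"), ("L", "T", "Y"), ("M", "F", "N"),
        ("M", "T", "Y"), ("H", "F", "Y"), ("H", "T", "Y")]

def entropy(labels):
    # Shannon entropy of a list of class labels, in bits
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def conditional_entropy(rows, attr_index):
    # H(Passed | attribute): weighted entropy of each attribute value's subset
    n = len(rows)
    values = set(r[attr_index] for r in rows)
    return sum(
        (len(subset) / n) * entropy([r[2] for r in subset])
        for subset in ([r for r in rows if r[attr_index] == v] for v in values)
    )

print(entropy([r[2] for r in rows]))   # H(Passed)          ~0.918
print(conditional_entropy(rows, 0))    # H(Passed|CGPA)     ~0.667
print(conditional_entropy(rows, 1))    # H(Passed|Studied)  ~0.459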

a) What are the strengths and weaknesses of the decision tree method?
b) Suppose there is a student that decides whether or not to go into campus on
any given day based on the weather, wake-up time and whether there is a
seminar talk he is interested in attending. There are data collected from 13
days


i) Build a decision tree based on these observations, using entropy impurity.
ii) Show your work and the resulting tree.
c) Classify the following sample using the above decision tree: Wake-up = Late, Have Talk = No and Weather = Rain.
a) Strengths of decision tree:
 It produces very simple understandable rules. For smaller trees, not much
mathematical and computational knowledge is required to understand this
model.
 Works well for most of the problems.
 It can handle both numerical and categorical variables.
 Can work well both with small and large training data sets.
 Decision trees provide a definite clue of which features are more useful for
classification.
Weaknesses of decision tree
 Decision tree models are often biased towards features having a larger number of possible values, i.e. levels.
 This model gets overfitted or underfitted quite easily.
 Decision trees are prone to errors in classification problems with many
classes and relatively small number of training examples.
 A decision tree can be computationally expensive to train.
 Large trees are complex to understand.


c) According to the tree learned, the sample should be classified as NO.


Module 9
5 Marks Question
What are the different measures of similarity of the assignment of patterns for
the domain of a particular cluster centre?
Distance or similarity measures are essential in solving many pattern recognition
problems such as classification and clustering. Clustering is done based on a
similarity measure to group similar data objects together.
A similarity measure can be defined as the distance between various data points. While similarity is an amount that reflects the strength of the relationship between two data items, dissimilarity deals with the measurement of the divergence between two data items. In fact, the performance of many algorithms depends upon selecting a good distance function over the input data set.
Here is a brief overview of a similarity measure commonly used for clustering:
Euclidean distance: Euclidean distance is considered as the standard metric for
geometrical problems. It is simply the ordinary distance between two points.
Euclidean distance is extensively used in clustering problems, including clustering
text. The default distance measure used with the K-means algorithm is also the
Euclidean distance. The Euclidean distance determines the root of square
differences between the coordinates of a pair of objects.
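A tiny Python sketch of this distance, together with the Manhattan distance used later in this module (purely illustrative):

from math import sqrt

def euclidean(p, q):
    # Root of the squared coordinate differences
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    # Sum of the absolute coordinate differences
    return sum(abs(a - b) for a, b in zip(p, q))

print(euclidean((4, 4), (9, 9)))  # ~7.07
print(manhattan((4, 4), (9, 9)))  # 10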
15 Marks Question
a) What is K means Clustering Algorithm?
b) Why is the plot of the within-cluster sum of squares error (inertia) vs K in the K-means clustering algorithm elbow-shaped? Discuss if there exists any other possibility for the same, with proper explanation.
c) Why do you prefer Euclidean distance over Manhattan distance in the K
means Algorithm?
d) During a research work, you found 7 observations as described with the
data points below. You want to create 3 clusters from these observations using
K-means algorithm. After first iteration, the clusters C1, C2, C3 has following
observations:
C1: {(2,2), (4,4), (6,6)}
C2: {(0,4), (4,0)}
C3: {(5,5), (9,9)}


If you want to run a second iteration, then what will be the cluster centroids? What will be the Manhattan distance of observation (9, 9) from cluster centroid C1 in the second iteration? [2+4+3+6]
Ans:
a)
K-Means is a centroid-based clustering (unsupervised) technique. This technique partitions the dataset into k different clusters. Each of the clusters has a centroid point which represents the mean of the data points lying in that cluster. The idea of the K-Means algorithm is to find k centroid points, and every point in the dataset will belong to the cluster whose centroid is at minimum Euclidean distance.
b)
Let's understand this with an example. Say we have 10 different data points; now consider the different cases:
 k = 10: For the maximum value of k, each point forms its own cluster. So the within-cluster sum of squares is zero, since only one data point is present in each of the clusters. So, at the maximum value of k, the inertia tends to zero.
 k = 1: For the minimum value of k, i.e. k = 1, all the data points are in one cluster, and having more points in the same cluster gives more variance, i.e. a larger within-cluster sum of squares.
 Between k = 1 and k = 10: As you increase the value of k from 1 to 10, points move to nearer clusters, and hence the total within-cluster sum of squares (inertia) comes down, quickly at first and then slowly. So this mostly forms an elbow curve rather than some other complex curve.
Hence, we can conclude that there does not exist any other possibility for the plot.
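A minimal sketch of how such an elbow plot is usually produced with scikit-learn (assuming it is available; the data is random for illustration):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))  # toy 2-D data

for k in range(1, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, km.inertia_)  # inertia falls steeply, then flattens: the "elbow"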
c)
Euclidean distance is preferred over Manhattan distance since Manhattan distance measures only along axis-parallel (vertical or horizontal) directions, which restricts it. On the contrary, Euclidean distance measures the straight-line distance between data points in any space. Since in the K-means algorithm the data points can be present in any dimension, Euclidean distance is the more suitable option.
d)
Finding centroid for data points in cluster C1 = ((2+4+6)/3, (2+4+6)/3) = (4, 4)
Finding centroid for data points in cluster C2 = ((0+4)/2, (4+0)/2) = (2, 2)
Finding centroid for data points in cluster C3 = ((5+9)/2, (5+9)/2) = (7, 7)


Hence, C1: (4,4), C2: (2,2), C3: (7,7)


Manhattan distance between centroid C1, i.e. (4, 4), and (9, 9) = |9 − 4| + |9 − 4| = 10
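A quick Python check of these centroid and distance computations (illustrative):

import numpy as np

clusters = {
    "C1": [(2, 2), (4, 4), (6, 6)],
    "C2": [(0, 4), (4, 0)],
    "C3": [(5, 5), (9, 9)],
}

# Centroid of each cluster = mean of its points
centroids = {name: np.mean(pts, axis=0) for name, pts in clusters.items()}
print(centroids)  # C1: [4. 4.], C2: [2. 2.], C3: [7. 7.]

# Manhattan distance from (9, 9) to the new C1 centroid
print(np.abs(np.array([9, 9]) - centroids["C1"]).sum())  # 10.0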
a) Explain the function of clustering.
b) What is Hierarchical Clustering? How is it different from k-Means?
c) What is a Dendrogram? Give an example.
d) Solve it with the help of K-mean clustering. 4+3+3+5
Subject A B
1 1.0 1.0
2 1.5 2.0
3 3.0 4.0
4 5.0 7.0
5 3.5 5.0
6 4.5 5.0
7 3.5 4.5
Centre points are: (1, 1) and (5, 7)
a) To measure the quality of clustering of any partitioned data set, a criterion
function is used.
1. Consider a set B = {x1, x2, x3, …, xn} containing n samples that is
partitioned exactly into t disjoint subsets B1, B2, …, Bt.
2. The main highlight of these subsets is that every individual subset represents a
cluster.
3. Samples inside a cluster are similar to each other and dissimilar to
samples in other clusters.
4. To make this possible, criterion functions are used according to the
situation.
Criterion Function for Clustering
1. Internal Criterion Function
a) This class of clustering criterion takes an intra-cluster view.
b) An internal criterion function optimizes a measure of how similar the
samples within each cluster are to one another (cluster compactness).
2. External Criterion Function
a) This class of clustering criterion takes an inter-cluster view.
b) An external criterion function optimizes a measure of how different the
various clusters are from each other (cluster separation).
3. Hybrid Criterion Function
a) This function is used as it has the ability to simultaneously optimize multiple
individual criterion functions, unlike the Internal Criterion Function and
External Criterion Function taken alone.
b) Hierarchical clustering is an alternative clustering approach to K-Means. Unlike
K-Means, hierarchical clustering does not require specifying the initial number
of clusters. It produces a set of nested clusters that can be organized and visualized
as a hierarchical tree. This tree is called a dendrogram. The two types of hierarchical
clustering are: Agglomerative and Divisive.
Agglomerative: Begin with each data point as its own cluster. Then, at each step,
merge the nearest clusters until one cluster (or k clusters) is left.
Divisive: In this case, we start with one large cluster that includes all the elements.
Then, at each step, we split a cluster until each cluster contains only a single
data point.
c) A dendrogram is a tree-like diagram obtained from the hierarchical clustering
process. It indicates the merges (or splits) carried out at each step of hierarchical
clustering. At the bottom of the tree are the leaves, which represent clusters of
individual data points. As we move up the tree, the branches fuse together to form
larger clusters. An example of a dendrogram is given below:
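The following sketch (assuming SciPy and Matplotlib are available) generates such a dendrogram for the seven subjects used in part (d):

import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

X = np.array([[1.0, 1.0], [1.5, 2.0], [3.0, 4.0],
              [5.0, 7.0], [3.5, 5.0], [4.5, 5.0], [3.5, 4.5]])
Z = linkage(X, method="single")  # agglomerative, nearest-neighbour (single-link) merges
dendrogram(Z, labels=[f"S{i+1}" for i in range(len(X))])
plt.ylabel("merge distance")
plt.show()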
d)
Subject A B
1 1.0 1.0
2 1.5 2.0
3 3.0 4.0
4 5.0 7.0
5 3.5 5.0
6 4.5 5.0
7 3.5 4.5
This data set is to be grouped into two clusters. As a first step in finding
a sensible initial partition, let the A & B values of the two individuals
furthest apart (using the Euclidean distance measure), define the initial
cluster means, giving:
Individual Mean Vector (centroid)
Group 1 1 (1.0, 1.0)
Group 2 4 (5.0, 7.0)
The remaining individuals are now examined in sequence and allocated to the
cluster to which they are closest, in terms of Euclidean distance to the cluster
mean. The mean vector is recalculated each time a new member is added. This
leads to the following series of steps:
Cluster 1 Cluster 2
Mean Vector Mean Vector
Step Individual (centroid) Individual (centroid)
1 1 (1.0, 1.0) 4 (5.0, 7.0)
2 1, 2 (1.2, 1.5) 4 (5.0, 7.0)
3 1, 2, 3 (1.8, 2.3) 4 (5.0, 7.0)
4 1, 2, 3 (1.8, 2.3) 4, 5 (4.2, 6.0)
5 1, 2, 3 (1.8, 2.3) 4, 5, 6 (4.3, 5.7)
6 1, 2, 3 (1.8, 2.3) 4, 5, 6, 7 (4.1, 5.4)
7 1, 2, 3 (1.8, 2.3) 4, 5, 6, 7 (4.1, 5.4)
Now the initial partition has changed, and the two clusters at this stage have the
following characteristics:
Individual Mean Vector (centroid)
Cluster 1 1, 2, 3 (1.8, 2.3)
Cluster 2 4, 5, 6, 7 (4.1, 5.4)
But we cannot yet be sure that each individual has been assigned to the right
cluster. So, we compare each individual’s distance to its own cluster mean and to
that of the opposite cluster. And we find:
Distance to mean Distance to mean
Individual (centroid) of Cluster 1 (centroid) of Cluster 2
1 1.5 5.4
2 0.4 4.3
3 2.1 1.8
4 5.7 1.8
5 3.2 0.7
6 3.8 0.6
7 2.8 1.1
Only individual 3 is nearer to the mean of the opposite cluster (Cluster 2) than to its
own (Cluster 1). In other words, each individual's distance to its own cluster mean
should be smaller than the distance to the other cluster's mean (which is not the case
with individual 3). Thus, individual 3 is relocated to Cluster 2, resulting in the new
partition:
Individual Mean Vector (centroid)
Cluster 1 1, 2 (1.3, 1.5)
Cluster 2 3, 4, 5, 6, 7 (3.9, 5.1)
Consider the following proximity matrix and draw the resulting dendrogram by
applying the single-link clustering algorithm.
      X1  X2  X3  X4  X5
X1     0   6   8   2   7
X2     6   0   1   5   3
X3     8   1   0  10   9
X4     2   5  10   0   4
X5     7   3   9   4   0
Step 1: the smallest off-diagonal entry is d(X2, X3) = 1, so X2 and X3 are merged.
Under single link, the distance from the merged cluster to any other point is the
minimum of the individual distances:
          X1  {X2,X3}  X4  X5
X1         0      6     2   7
{X2,X3}    6      0     5   3
X4         2      5     0   4
X5         7      3     4   0
Step 2: the smallest entry is now d(X1, X4) = 2, so X1 and X4 are merged:
          {X1,X4}  {X2,X3}  X5
{X1,X4}      0        5     4
{X2,X3}      5        0     3
X5           4        3     0
Step 3: the smallest entry is d({X2,X3}, X5) = 3, so X5 joins {X2,X3}:
             {X1,X4}  {X2,X3,X5}
{X1,X4}         0         4
{X2,X3,X5}      4         0
The final merge joins {X1,X4} and {X2,X3,X5} at distance 4. The dendrogram
therefore fuses X2-X3 at height 1, X1-X4 at height 2, {X2,X3}-X5 at height 3, and
the two remaining clusters at height 4.
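The same merge sequence can be reproduced with SciPy (a sketch; squareform converts the symmetric proximity matrix into the condensed form that linkage expects):

import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

D = np.array([[0, 6, 8, 2, 7],
              [6, 0, 1, 5, 3],
              [8, 1, 0, 10, 9],
              [2, 5, 10, 0, 4],
              [7, 3, 9, 4, 0]], dtype=float)
Z = linkage(squareform(D), method="single")
print(Z)  # merges at distances 1 (X2,X3), 2 (X1,X4), 3 (adding X5), 4 (final)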
Module 5
5 Marks Question
Write down the steps for K nearest Neighbor estimation. Mention some of the
advantages and disadvantages of KNN technique.
K-Nearest Neighbor Estimation:
 Calculate d(x, xi), i = 1, 2, …, n, where d denotes the Euclidean
distance between the points.
 Arrange the calculated n Euclidean distances in non-decreasing order.
 Let k be a positive integer; take the first k distances from this sorted list.
 Find those k points corresponding to these k distances.
 Let ki denote the number of points belonging to the i-th class among the k
points, i.e., ki ≥ 0.
 If ki > kj ∀ i ≠ j then put x in class i. A sketch of these steps is given below.
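A minimal NumPy sketch of the steps above (the names are our own, illustrative only):

import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x, k=3):
    # Assign x the majority class among its k nearest training points.
    y_train = np.asarray(y_train)
    d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances d(x, xi)
    nearest = np.argsort(d)[:k]               # indices of the k smallest distances
    votes = Counter(y_train[nearest])         # ki: points per class among the k
    return votes.most_common(1)[0][0]         # the class with the largest ki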
Advantages of KNN:
 Easy to understand
 No assumptions about data
 Can be applied to both classification and regression
 Works easily on multi-class problems
Disadvantages are:
 Memory intensive / computationally expensive
 Sensitive to the scale of the data
 Does not work well on rare-event (skewed) target variables
 Struggles when the number of independent variables is high
Discuss principal component analysis.
Principal Component Analysis is an unsupervised learning algorithm that is used
for the dimensionality reduction in machine learning. It is a statistical process that
converts the observations of correlated features into a set of linearly uncorrelated
features with the help of orthogonal transformation. These new transformed
features are called the Principal Components. It is one of the popular tools that is
used for exploratory data analysis and predictive modeling. It is a technique for
drawing out the strong patterns in a dataset by finding the directions of maximum
variance.
PCA generally tries to find the lower-dimensional surface to project the high-
dimensional data.
PCA works by considering the variance along each direction: directions with high
variance carry most of the information, while low-variance directions can be
dropped, and hence it reduces the dimensionality.
Some real-world applications of PCA are image processing, movie
recommendation system, optimizing the power allocation in various
communication channels. It is a feature extraction technique, so it contains the
important variables and drops the least important variable.
The PCA algorithm is based on some mathematical concepts such as:
o Variance and Covariance
o Eigenvalues and Eigenvectors
Some common terms used in PCA algorithm:
o Dimensionality: It is the number of features or variables present in the
given dataset. More easily, it is the number of columns present in the
dataset.
o Correlation: It signifies that how strongly two variables are related to each
other. Such as if one changes, the other variable also gets changed. The
correlation value ranges from -1 to +1. Here, -1 occurs if variables are
inversely proportional to each other, and +1 indicates that variables are
directly proportional to each other.
o Orthogonal: It defines that variables are not correlated to each other, and
hence the correlation between the pair of variables is zero.
o Eigenvectors: If there is a square matrix M and a non-zero vector v, then v
is an eigenvector of M if Mv is a scalar multiple of v.
o Covariance Matrix: A matrix containing the covariance between the pair of
variables is called the Covariance Matrix.
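The covariance/eigenvector machinery above can be sketched directly in NumPy (illustrative only, not a production implementation):

import numpy as np

def pca(X, n_components):
    # Project X (n samples x d features) onto its top principal components.
    Xc = X - X.mean(axis=0)               # center the data
    C = np.cov(Xc, rowvar=False)          # covariance matrix of the features
    eigvals, eigvecs = np.linalg.eigh(C)  # eigendecomposition (symmetric matrix)
    order = np.argsort(eigvals)[::-1]     # sort by decreasing eigenvalue (variance)
    W = eigvecs[:, order[:n_components]]  # principal components as columns
    return Xc @ W, eigvals[order]         # projected data and sorted variances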
15 Marks Question
a) What is dimensionality reduction problem?
b) Explain Linear Discriminant Analysis with its derivation
c) State some advantages and disadvantages of LDA, with applications. [3+8+4]
a)
In pattern recognition and classification problems, there are often too many factors on
the basis of which the final classification is done. These factors are basically
variables called features. The higher the number of features, the harder it gets to
visualize the training set and then work on it. Sometimes, most of these features are
correlated, and hence redundant. This is where dimensionality reduction
algorithms come into play. Dimensionality reduction is the process of reducing the
number of random variables under consideration, by obtaining a set of principal
variables. It can be divided into feature selection and feature extraction. The
various methods used for dimensionality reduction include:
 Principal Component Analysis (PCA)
 Linear Discriminant Analysis (LDA)
b)
PCA finds components that are useful for representing the data, but its drawback is
that PCA cannot discriminate between data from different classes. If we group
all the samples together, then the directions that are discarded by PCA might be
exactly the directions needed for distinguishing between classes.
 PCA is based on representation for efficient direction
 LDA is based on discrimination for efficient direction.
The objective of LDA is to perform dimensionality reduction while preserving as
much of the class-discrimination information as possible. In LDA, data is projected
from d dimensions onto a line. Even if the samples form well-separated, compact
clusters in d-space, projection onto an arbitrary line will usually produce poor
recognition performance. By rotating the line we can find an orientation for which
the projected samples are well separated.
In order to find a good projection vector, we need to define a measure of separation
between the projections.
The solution proposed by Fisher is to maximize a function that represents the
difference between the means, normalized by a measure of the within-class
variability, or the so-called scatter.
For each class ωi we define the scatter, an equivalent of the variance, as the sum of
squared differences between the projected samples and their projected class mean m̃i:
s̃i² = Σ (y − m̃i)², where the sum runs over the projected samples y of class ωi.
The Fisher linear discriminant is defined as the linear function wTx that maximizes
the criterion function
J(w) = |m̃1 − m̃2|² / (s̃1² + s̃2²),
i.e., the distance between the projected means normalized by the within-class
scatter of the projected samples.
In order to find the optimum projection w*, we need to express J(w) as an explicit
function of w. We will define measures of the scatter in the multivariate feature
space x, which are denoted as scatter matrices.
The within-class scatter is Sw = S1 + S2, with Si = Σ (x − mi)(x − mi)T taken over
the samples x of class ωi. Here Si is (proportional to) the covariance matrix of class
ωi, and Sw is called the within-class scatter matrix. Similarly, the difference between
the projected means (in y-space) can be expressed in terms of the means in the
original feature space (x-space):
(m̃1 − m̃2)² = wT SB w, where SB = (m1 − m2)(m1 − m2)T.
The matrix SB is called the between-class scatter of the original samples/feature
vectors, while wT SB w is the between-class scatter of the projected samples y. With
these, J(w) = (wT SB w) / (wT Sw w), and maximizing it yields the Fisher solution
w* = Sw⁻¹ (m1 − m2).
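For the two-class case this solution is only a few lines of NumPy (a sketch under the definitions above; the rows of X1 and X2 are the samples of the two classes):

import numpy as np

def fisher_lda_direction(X1, X2):
    # Fisher discriminant direction w* = Sw^{-1} (m1 - m2) for two classes.
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = (X1 - m1).T @ (X1 - m1)        # scatter matrix of class 1
    S2 = (X2 - m2).T @ (X2 - m2)        # scatter matrix of class 2
    Sw = S1 + S2                        # within-class scatter matrix
    w = np.linalg.solve(Sw, m1 - m2)    # solves Sw w = (m1 - m2)
    return w / np.linalg.norm(w)        # direction only; scale does not change J(w)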
c)
Advantages of Linear Discriminant Analysis
 Suitable for larger data set.
 Calculations of scatter matrix in LDA is much easy as compared to
covariance matrix
Disadvantages of Linear Discriminant Analysis
 More redundancy in data.
 Memory requirement is high.
 More Noisy.
Applications Of Linear Discriminant Analysis
 Face Recognition.
 Earth Sciences.
 Speech Classification.
a) What are the advantages and disadvantages of the histogram density
estimator?
b) What can we learn from Histogram Density Estimation?
c) Consider the covariance matrix for a Gaussian with mean = (0, 0) and variance =
σ² × I2, where σ² is a positive constant and I2 is a 2 × 2 identity matrix.
i. What are the two principal components for this matrix? What are their
eigenvalues?
ii. Given a data point (x,y) from this distribution, what is the reconstructed data
using the projection onto the first principal component of this matrix?
iii. For this reconstructed value, what is the expected value of the reconstruction
error (squared error between the true value and the reconstructed value)?
4+3+4+2+2
Advantages:
 Simple to evaluate and simple to use.
 One can throw away D once the histogram is computed.
 Can be computed sequentially if data continues to come in.
Disadvantages:
 The estimated density has discontinuities due to the bin edges rather than
any property of the underlying density.
 Scales poorly (curse of dimensionality): we would have M^D bins if we
divided each variable in a D-dimensional space into M bins.
Lesson 1: To estimate the probability density at a particular location, we should
consider the data points that lie within some local neighborhood of that point.
 This requires we define some distance measure.
 There is a natural smoothness parameter describing the spatial extent of the
regions (this was the bin width for the histograms).
Lesson 2: The value of the smoothing parameter should be neither too large nor too
small in order to obtain good results.
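A one-dimensional histogram density estimate illustrates both lessons (a sketch; density=True in np.histogram normalizes the counts so the estimate integrates to one):

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=500)
# The number of bins (equivalently, the bin width) is the smoothing parameter:
# too few bins oversmooths, too many undersmooths.
counts, edges = np.histogram(data, bins=20, density=True)
width = edges[1] - edges[0]
for left, p in zip(edges[:-1], counts):
    print(f"[{left:6.2f}, {left + width:6.2f})  p_hat = {p:.3f}")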
Module 6
5 Marks Question
What is density estimation? What are the advantage and disadvantages of
Non-parametric Techniques?
Density estimation is the problem of reconstructing the probability density function
using a set of given data points. Namely, we observe X1, · · · , Xn and we want to
recover the underlying probability density function generating our dataset. A
classical approach to density estimation is the histogram. Here we will talk about
another approach, the kernel density estimator (KDE; sometimes called kernel
density estimation). The KDE is one of the most famous methods for density
estimation. Plotted over a histogram of the faithful dataset in R, the KDE appears as
a smooth density curve rather than a stepped one; a sketch is given below.
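A Gaussian-kernel KDE can be written directly from its definition, p̂(x) = (1/(n·h)) Σ K((x − Xi)/h), where h is the bandwidth (a sketch, not tied to any particular library):

import numpy as np

def gaussian_kde(x, data, h):
    # KDE at query points x from 1-D samples `data`, bandwidth h.
    u = (x[:, None] - data[None, :]) / h           # (x - Xi) / h for every pair
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)   # standard normal kernel
    return K.sum(axis=1) / (len(data) * h)         # average of the scaled kernels

rng = np.random.default_rng(0)
samples = rng.normal(size=200)
print(gaussian_kde(np.linspace(-4, 4, 9), samples, h=0.4))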
Non-parametric Techniques
Advantages
 Generality: same procedure for unimodal, normal and bimodal mixture.
 No assumption about the distribution required ahead of time.
 With enough samples we can converge to an arbitrarily complicated target
density.
Disadvantages
 Number of required samples may be very large (much larger than would be
required if we knew the form of the unknown density).
 Curse of dimensionality.
 Parzen windows (PW) and KNN are computationally expensive (storage & processing).
 Sensitivity to choice of bin size, bandwidth,…
15 Marks Question
Explain Parzen window. Derive the conditions for (i) Convergence of means
and (ii) Convergence of the Variance
Parzen Windows: The Parzen-window approach to estimating densities can be
introduced by temporarily assuming that the region Rn is a d-dimensional
hypercube. If hn is the length of an edge of that hypercube, then its volume is given
by
Vn = hn^d
We can obtain an analytic expression for kn, the number of samples falling in the
hypercube, by defining the following window function:
ϕ(u) = 1 if |uj| ≤ 1/2 for j = 1, …, d, and ϕ(u) = 0 otherwise.
Thus, ϕ(u) defines a unit hypercube centered at the origin. It follows that ϕ((x −
xi)/hn) is equal to unity if xi falls within the hypercube of volume Vn centered at x,
and is zero otherwise. The number of samples in this hypercube is therefore given
by
kn = Σi=1..n ϕ((x − xi)/hn).
Substituting, we get the Parzen-window estimate
pn(x) = (1/n) Σi=1..n (1/Vn) ϕ((x − xi)/hn).
This equation suggests a more general approach to estimating density functions.


Rather than limiting ourselves to the hypercube window function
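A direct sketch of the hypercube estimate pn(x) above (illustrative; the names are our own):

import numpy as np

def parzen_hypercube(x, data, h):
    # Parzen-window density estimate at x with a d-dimensional hypercube window.
    d = data.shape[1]
    V = h ** d                                    # Vn = hn^d
    u = np.abs(x - data) / h                      # |(x - xi) / hn| per coordinate
    phi = np.all(u <= 0.5, axis=1).astype(float)  # 1 inside the unit hypercube
    return phi.sum() / (len(data) * V)            # kn / (n Vn)

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 2))                 # standard 2-D normal samples
print(parzen_hypercube(np.zeros(2), data, h=0.5)) # near the true peak 1/(2*pi) ~ 0.159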
What is the Nearest Neighbor rule of classification? Mention some of the metrics
used in the method.
The nearest neighbor algorithm assigns to a test pattern the class label of its closest
neighbor. Let there be n training patterns (X1, θ1), (X2, θ2), …, (Xn, θn), where Xi is
of dimension d and θi is the class label of the i-th pattern. If P is the test pattern, it is
assigned the label θk, where d(P, Xk) = min {d(P, Xi)}, i = 1 to n. Commonly used
distance metrics are the Euclidean distance, the Manhattan (city-block) distance and,
more generally, the Minkowski distance.
Error: The NN classifier error is at most twice the Bayes error when the number of
training samples tends to infinity.
Module 4
15 Marks Question
a) Explain the different steps used in Gradient Descent Algorithm.
b) Explain the different types of Gradient Descent in detail.
c) Logical operators (i.e. NOT, AND, OR, XOR, etc.) are the building blocks of any
computational device. Logical functions return only two possible values, true or false,
based on the truth values of their arguments. For example, operator AND returns
true only when all its arguments are true, otherwise (if any of the arguments is false) it
returns false. If we denote truth by 1 and false by 0, then the logical function AND can be
represented by the following table:
x1:         0 0 1 1
x2:         0 1 0 1
x1 AND x2:  0 0 0 1
This function can be implemented by a single unit with two inputs:
if the weights are w1 = 1 and w2 = 1 and the activation function is the threshold
function
f(v) = 1 if v ≥ 2, and f(v) = 0 otherwise, where v = w1x1 + w2x2.
Note that the threshold level is 2 (v ≥ 2).
Test how the neural AND function works.
d) Suggest how to change either the weights or the threshold level of this single unit in
order to implement the logical OR function (true when at least one of the arguments is
true):
x1:        0 0 1 1
x2:        0 1 0 1
x1 OR x2:  0 1 1 1
(4+3+4+4)
a)
The five main steps that are used to initialize and use the gradient descent algorithm are as
follows:
 Initialize the biases and weights for the neural network.
 Pass the input data through the network, i.e., the input layer.
 Compute the difference (the error) between the expected and the predicted values.
 Adjust the values, i.e., update the weights of the neurons, to minimize the loss function.
 Repeat the same steps for multiple iterations to determine the best weights for
efficient working. A sketch of this loop is given below.
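These steps can be sketched for a single linear unit on a toy regression task (a minimal illustration, not a full framework; the data and learning rate are made up):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)  # toy targets

w = np.zeros(3)                    # step 1: initialize the weights (bias omitted)
lr = 0.1                           # learning rate
for _ in range(200):               # step 5: repeat for several iterations
    y_hat = X @ w                  # step 2: forward pass through the "network"
    err = y_hat - y                # step 3: error between predicted and expected
    grad = 2 * X.T @ err / len(y)  # gradient of the mean-squared loss
    w -= lr * grad                 # step 4: update weights to reduce the loss
print(w)                           # close to the true weights [1, -2, 0.5]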
b)
There are three types of gradient descent:
 Mini-Batch Gradient Descent
 Stochastic Gradient Descent
 Batch Gradient Descent
Mini-batch Gradient Descent: In Mini-Batch Gradient Descent, the batch size must be between
1 and the size of the training dataset. This results in k batches, thus updating the neural network
weights after each mini-batch.
Stochastic Gradient Descent: In Stochastic gradient descent, a batch size of 1 is used. As a
result, we get n batches. Therefore, the weights of the neural networks are updated after each
training sample.
Batch Gradient Descent: In Batch Gradient Descent, the batch size is equal to the size of the
training dataset. Therefore, the weights of the neural network are updated after each epoch.

d) One solution is to increase the weights of the unit: w1 = 2 and w2 = 2:

Alternatively, we could reduce the threshold to 1:

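A short script (a sketch of the threshold unit described in part (c)) tests the AND unit and both OR variants:

def unit(x1, x2, w1, w2, threshold):
    # Threshold unit: fires (outputs 1) when w1*x1 + w2*x2 >= threshold.
    return int(w1 * x1 + w2 * x2 >= threshold)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2,
              unit(x1, x2, 1, 1, 2),   # AND: w1 = w2 = 1, threshold 2
              unit(x1, x2, 2, 2, 2),   # OR via the larger weights
              unit(x1, x2, 1, 1, 1))   # OR via the lower threshold

The output shows 1 only for input (1, 1) in the AND column, and 1 for every input except (0, 0) in both OR columns.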
a) Briefly Explain the Concept of Neural Network.
b) Discuss different types of back propagation. Below is a diagram of a single artificial
neuron (unit):
The node has three inputs x = (x1; x2; x3) that receive only binary signals (either 0 or 1).
i) How many different input patterns this node can receive?
ii) What if the node had four inputs?
iii) Can you give a formula that computes the number of binary input patterns for a
given number of inputs?
iv) Explain Sigmoidal Neuron
Note that here, we are talking about Artificial Neural Network(ANN).
In simple terms, a neural network is a computing system made up of
interconnected nodes (called neurons) that tries to model the behavior of
biological (or animal) nervous systems. It is normally represented as a directed graph.
Each neuron in a neural network receives a signal from its inputs, processes it and
then sends the output to the next neuron.
The neurons are connected by edges, each of which has a weight associated with it.
The weights adjust through a learning process.
Components of a neural network include:
 an activation aj(t): this is the current state of the neuron
 a threshold θj: a value such that when exceeded, the neuron produces a 1
 an activation function : a function that computes new activation
 an output function: a function that computes the output from the activation
b) Types of back propagation:There are two types of backpropagation networks.
Static backpropagation
Recurrent backpropagation
Static backpropagation: In this network, mapping of a static input generates
static output. Static classification problems like optical character recognition will
be a suitable domain for static backpropagation.
Recurrent backpropagation: Recurrent backpropagation is conducted until a
certain threshold is met. After the threshold, the error is calculated and
propagated backward.
The difference between these two approaches is that static backpropagation provides
an instant mapping, while recurrent backpropagation must first iterate to a stable
state before the error is propagated.
i) For three inputs the number of combinations of 0 and 1 is 8:
000, 001, 010, 011, 100, 101, 110, 111.
ii) For four inputs the number of combinations is 16 (0000 through 1111).
iii) Note that 8 = 2^3, 16 = 2^4 and 32 = 2^5 (for three, four and five inputs). Thus, the
formula for the number of binary input patterns is 2^n, where n is the number of
inputs.
The sigmoid neuron is sometimes referred to as the building block of deep neural
network. To understand the sigmoidal neuron, you first need to understand the
sigmoid function. This is because a sigmoidal neuron is based on the sigmoid
function.
A sigmoid function is a mathematical function that produces the sigmoid curve (a
curve that has the characteristic ‘S’ shape).
The sigmoid neuron is similar to the perceptron except that for the sigmoid neuron,
the output is a smooth curve while for the perceptron, we have a stepped function.
An example of the sigmoid function is the logistic function, which is given by:
σ(v) = 1 / (1 + e^(−v))
Another example of a sigmoidal function is the hyperbolic tangent, tanh. This is
given by the formula:
tanh(v) = (e^v − e^(−v)) / (e^v + e^(−v))
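Both activation functions are one-liners in NumPy (a sketch):

import numpy as np

def logistic(v):
    # Logistic sigmoid: smooth, S-shaped, output in (0, 1).
    return 1.0 / (1.0 + np.exp(-v))

v = np.array([-2.0, 0.0, 2.0])
print(logistic(v))   # approximately [0.119, 0.5, 0.881]
print(np.tanh(v))    # hyperbolic tangent: same shape, output in (-1, 1)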
5 Marks Question
What is Markov Chain? Explain Mean-Squared Error (MSE)
A Markov chain is a stochastic model (random or probabilistic model) used for
modelling a sequence of possible events, such that the probability of each event
depends only on the state attained in the previous event.
Events in a Markov chain must satisfy the Markov property: predictions about future
events can be made based only on the present state.
The Mean-Squared Error is a measure of the quality of a model. When a model
makes a prediction, the MSE provides a way to know how well the prediction
matches the real data.
For example, suppose we have a data set X = {x1, x2, …, xn}.
Each observation has a corresponding y given by Y = {y1, y2, …, yn}.
So we develop a model that maps the X values to the Y values, and our model is a
function f(x). In the ideal case, f(xi) should equal yi. But that is not the case in a
real scenario; there is always a difference between f(xi) and yi. This difference is
the error.
The Mean-Squared Error defines this difference using the formula:
MSE = (1/n) Σi=1..n (yi − f(xi))²
The MSE is computed using the training dataset; therefore it is also called the
training MSE.
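Computing the training MSE from this formula is straightforward (a sketch):

import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: average squared difference between yi and f(xi).
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

print(mse([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))  # 0.02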
What is Network Parameter Optimization?
This is the process of adjusting the network parameters in order to improve the
performance of the network. One way is to adjust the weights of the edges in terms
of the error they contributed.
During optimization, two phases are carried out:
 propagation
 weight update
propagation: when an input vector enters the input layer, it is propagated forward
layer by layer through the network. When it reaches the output layer, the output is
compared to the correct output, and the difference is an error given by a loss function
E(w).
The error value is calculated for each neuron in the output layer. Then the errors
are propagated backwards (backpropagation) through the network. For each
neuron, the gradient of the loss function is calculated.
weight update: in this phase, the gradient calculated in the propagation phase is
fed into the optimization method to update the weights of the neurons. The
objective is to minimize the loss function.
Consider a radar station monitoring air traffic. For simplicity we chunk time
into periods of five minutes and assume that they are independent of each
other. Within each five minute period, there may be an airplane flying over
the radar station with probability 5%, or there is no airplane (we exclude the
possibility that there are several airplanes). If there is an airplane, it will be
detected by the radar with a probability of 99%. If there is no airplane, the
radar will give a false alarm and detect a non-existent airplane with a
probability of 10%.
a) How many airplanes fly over the radar station on average per day
(24 hours)?
b) How many false alarms (there is an alarm even though there is no
airplane) and how many false no-alarms (there is no alarm even
though there is an airplane) are there on average per day?
c) If there is an alarm, what is the probability that there is indeed an
airplane? [4+6+5]
a) There are 24×12 = 288 five-minute periods per day. In each period there is a
probability of 5% for an airplane being present. Thus the average number of
airplanes is 288×5% = 288×0.05 = 14.4.
b) On average there is no airplane in 288 − 14.4 of the five-minute periods.
This times the probability of 10% per period for a false alarm yields (288 −
14.4) × 10% = 273.6 × 0.1 = 27.36 false alarms.
On average there are 14.4 airplanes, each of which has a probability of 1%
of getting missed. Thus the number of false no-alarms is 14.4 × 1% = 14.4 ×
0.01 = 0.144.
c) For this question we need Bayes' theorem.
P(airplane | alarm)
= P(alarm | airplane) P(airplane) / P(alarm)
= P(alarm | airplane) P(airplane) / [P(alarm | airplane) P(airplane) + P(alarm | no airplane) P(no airplane)]
= (0.99 × 0.05) / (0.99 × 0.05 + 0.1 × (1 − 0.05))
= 0.342…
≈ 0.05 / (0.05 + 0.1) = 0.333…
It might be somewhat surprising that the probability of an airplane being present
given an alarm is only 34% even though the detection of an airplane is so reliable
(99%). The reason is that airplanes are not so frequent (only 5%) and the
probability for an alarm given no airplane is relatively high (10%).
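All three answers can be checked numerically with a few lines (a sketch mirroring the arithmetic above):

p_plane, p_detect, p_false = 0.05, 0.99, 0.10
periods = 24 * 12                               # 288 five-minute periods per day

print(periods * p_plane)                        # a) 14.4 airplanes per day
print((periods - periods * p_plane) * p_false)  # b) 27.36 false alarms per day
print(periods * p_plane * (1 - p_detect))       # b) 0.144 false no-alarms per day

p_alarm = p_detect * p_plane + p_false * (1 - p_plane)
print(p_detect * p_plane / p_alarm)             # c) ~0.342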