A.Sravya

Given some knowledge of how certain objects may appear, and an image of a scene possibly containing those objects, report which objects are present in the scene and where.

Image panoramas
Image watermarking
Global robot localization
Face detection
Optical character recognition
Manufacturing quality control
Content-based image indexing
Object counting and monitoring
Automated vehicle parking systems
Visual positioning and tracking
Video stabilization

Pattern or object: an arrangement of descriptors (features)

Pattern class: a family of patterns that share some common properties

Pattern recognition: techniques for assigning patterns to their respective classes

Common pattern arrangements:
1. Vectors (for quantitative descriptors)
2. Strings
3. Trees (for structural descriptors)

Approaches to pattern recognition:
1. Decision-theoretic (uses quantitative descriptors)
2. Structural (uses qualitative descriptors)

A pattern vector is written $\mathbf{x} = (x_1, x_2, \ldots, x_n)^T$, where $x_i$ represents the $i$th descriptor and $n$ is the number of descriptors associated with the pattern.

Example: Consider 3 types of iris flowers: setosa, virginica, and versicolor. Each flower is described by its petal length and width, so the pattern vector is $\mathbf{x} = (x_1, x_2)^T$.

Here is another example of pattern-vector generation. In this case, we are interested in different types of noisy shapes.

Recognition problems in which not only quantitative measures of each feature but also the spatial relationships between features determine class membership are solved by the structural approach. Example: fingerprint recognition.

Strings

String descriptions generate patterns of objects whose structure is based on relatively simple connectivity of primitives, usually associated with boundary shape.

Example string of symbols: $w = \ldots abababab \ldots$

Tree descriptors are more powerful than strings; most hierarchical ordering schemes lead to tree structures.

Decision-theoretic methods are based on the use of decision functions $d(\mathbf{x})$.

Here we find $W$ decision functions $d_1(\mathbf{x}), d_2(\mathbf{x}), \ldots, d_W(\mathbf{x})$ with the property that, if a pattern $\mathbf{x}$ belongs to class $\omega_i$, then

$$d_i(\mathbf{x}) > d_j(\mathbf{x}), \qquad j = 1, 2, \ldots, W;\; j \neq i \qquad (1)$$

The decision boundary separating class $\omega_i$ and class $\omega_j$ is given by

$$d_i(\mathbf{x}) = d_j(\mathbf{x}), \quad \text{or} \quad d_{ij}(\mathbf{x}) = d_i(\mathbf{x}) - d_j(\mathbf{x}) = 0$$

Now the objective is to develop various approaches for finding decision functions that satisfy Eq. (1).

Here we represent each class by a prototype pattern vector. An unknown pattern is assigned to the class to which it is closest in terms of a predefined measure. The two approaches are:
1. Minimum distance classifier: calculate the Euclidean distance
2. Correlation

The prototype pattern vector of class $\omega_j$ is the mean of its training patterns. Calculate the Euclidean distance between the unknown vector and each prototype vector, $D_j(\mathbf{x}) = \lVert \mathbf{x} - \mathbf{m}_j \rVert$; this distance measure is the decision function. Choosing the smallest distance is equivalent to evaluating $d_j(\mathbf{x}) = \mathbf{x}^T \mathbf{m}_j - \tfrac{1}{2}\mathbf{m}_j^T \mathbf{m}_j$ and assigning $\mathbf{x}$ to the class whose $d_j$ gives the largest numerical value.

The decision boundary between classes $\omega_i$ and $\omega_j$, $d_{ij}(\mathbf{x}) = d_i(\mathbf{x}) - d_j(\mathbf{x}) = 0$, is the perpendicular bisector of the line segment joining $\mathbf{m}_i$ and $\mathbf{m}_j$.

If $d_{ij}(\mathbf{x}) > 0$, then $\mathbf{x}$ belongs to $\omega_i$; if $d_{ij}(\mathbf{x}) < 0$, then $\mathbf{x}$ belongs to $\omega_j$.

Correlation is used for finding matches of a subimage $w(x, y)$ of size $J \times K$ within an image $f(x, y)$ of size $M \times N$. The correlation between $w(x, y)$ and $f(x, y)$ is given by

$$c(x, y) = \sum_s \sum_t f(s, t)\, w(x + s,\; y + t)$$

for $x = 0, 1, 2, \ldots, M - 1$ and $y = 0, 1, 2, \ldots, N - 1$.

The maximum values of $c$ indicate the positions where $w$ best matches $f$.
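A brute-force sketch of template matching by correlation: slide the subimage over the image, record the sum of products at each offset, and take the argmax. This plain (unnormalized) form is sensitive to brightness changes; normalized correlation is used in practice. The image and template below are made up:

```python
import numpy as np

# Slide template w over image f; c records the sum of products at each
# valid offset, and the maximum of c marks where w best matches f.
def correlate(f, w):
    M, N = f.shape
    J, K = w.shape
    c = np.zeros((M - J + 1, N - K + 1))
    for x in range(M - J + 1):
        for y in range(N - K + 1):
            c[x, y] = np.sum(f[x:x + J, y:y + K] * w)
    return c

f = np.zeros((6, 6))
f[2:4, 3:5] = 1.0          # a bright 2x2 patch hidden in the image
w = np.ones((2, 2))        # the template we are searching for
c = correlate(f, w)
x, y = np.unravel_index(np.argmax(c), c.shape)
print((x, y))  # offset where w best matches f: (2, 3)
```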

The Bayes classifier is a probabilistic approach to pattern recognition, built on the notion of average loss. The classifier that minimizes the total average loss is called the Bayes classifier.

The Bayes classifier assigns an unknown pattern $\mathbf{x}$ to class $\omega_i$ if the average loss of that assignment is smallest. With a 0-1 loss function (the loss for a correct decision is 0 and for an incorrect decision is 1), this simplifies to assigning $\mathbf{x}$ to $\omega_i$ if $p(\mathbf{x} \mid \omega_i)P(\omega_i) > p(\mathbf{x} \mid \omega_j)P(\omega_j)$ for all $j \neq i$. Finally, the Bayes decision function is

$$d_j(\mathbf{x}) = p(\mathbf{x} \mid \omega_j)\,P(\omega_j), \qquad j = 1, 2, \ldots, W$$

The BDF depends on the pdfs of the patterns in each class and on the probability of occurrence of each class. Sample patterns are assigned to each class, and the necessary parameters are estimated from them. The most commonly used form for $p(\mathbf{x} \mid \omega_j)$ is the Gaussian pdf.

The Bayes decision function for Gaussian pattern classes (here $n = 1$ and $W = 2$) is

$$d_j(x) = p(x \mid \omega_j)\,P(\omega_j) = \frac{1}{\sqrt{2\pi}\,\sigma_j}\, e^{-\frac{(x - m_j)^2}{2\sigma_j^2}}\, P(\omega_j)$$

where $m_j$ and $\sigma_j^2$ are the mean and variance of class $\omega_j$.

In the $n$-dimensional case, the Bayesian decision function for Gaussian pattern classes under the 0-1 loss function becomes

$$d_j(\mathbf{x}) = \ln P(\omega_j) - \frac{1}{2}\ln \lvert \mathbf{C}_j \rvert - \frac{1}{2}\left[(\mathbf{x} - \mathbf{m}_j)^T \mathbf{C}_j^{-1} (\mathbf{x} - \mathbf{m}_j)\right]$$

where $\mathbf{m}_j$ is the mean vector and $\mathbf{C}_j$ the covariance matrix of class $\omega_j$.

The BDF reduces to the minimum distance classifier if:
1. The pattern classes are Gaussian
2. All covariance matrices are equal to the identity matrix
3. All classes are equally likely to occur

Therefore the minimum distance classifier is optimum in the Bayes sense if the above conditions are satisfied.
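A minimal 1-D sketch of the Gaussian Bayes decision function: evaluate $d_j(x) = p(x \mid \omega_j)P(\omega_j)$ for each class and pick the largest. The means, variances, and priors below are illustrative assumptions:

```python
import math

# 1-D Bayes decision function for Gaussian classes (n = 1, W = 2):
# d_j(x) = p(x | w_j) P(w_j); assign x to the class with the larger d_j.
def d(x, m, sigma, prior):
    pdf = math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)
    return pdf * prior

# Illustrative class parameters (assumed, not from the text).
classes = {
    "w1": {"m": 0.0, "sigma": 1.0, "prior": 0.5},
    "w2": {"m": 4.0, "sigma": 1.0, "prior": 0.5},
}

def bayes_classify(x):
    return max(classes, key=lambda c: d(x, **classes[c]))

print(bayes_classify(0.5))  # near m = 0, so class "w1"
print(bayes_classify(3.6))  # near m = 4, so class "w2"
```

With equal priors and equal variances this reduces to picking the nearer mean, which is exactly the minimum distance classifier noted above.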

Neural network: an information-processing paradigm inspired by biological nervous systems such as the brain.

Structure: a large number of highly interconnected processing elements (neurons) working together, arranged in layers.

Each neuron within the network is usually a simple processing unit which takes one or more inputs and produces an output. At each neuron, every input has an associated weight which modifies the strength of that input. The neuron sums the weighted inputs and computes its output.
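The weighted-sum behavior of a single neuron can be sketched directly. The weights and threshold activation here are assumptions chosen so the neuron acts as an AND gate:

```python
# A single neuron: weighted sum of inputs plus a bias, passed through
# an activation function (a hard threshold here).
def neuron(inputs, weights, bias):
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0  # threshold activation

# Example: weights chosen so the neuron behaves like a logical AND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], [1.0, 1.0], -1.5))
```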

Neurons: elemental nonlinear computing elements.

We use these networks for adaptively developing the coefficients of decision functions via successive presentations of a training set of patterns.

Training patterns: sample patterns used to estimate the desired parameters.
Training set: the set of such patterns from each class.
Learning or training: the process by which a training set is used to obtain decision functions.

The perceptron model is the basic model of a neuron; perceptrons are learning machines.

Another way to state the training rule: if $\mathbf{x}(k) \in \omega_1$ and $\mathbf{w}^T(k)\mathbf{x}(k) \le 0$, then $\mathbf{w}(k+1) = \mathbf{w}(k) + c\,\mathbf{x}(k)$; if $\mathbf{x}(k) \in \omega_2$ and $\mathbf{w}^T(k)\mathbf{x}(k) \ge 0$, then $\mathbf{w}(k+1) = \mathbf{w}(k) - c\,\mathbf{x}(k)$; otherwise $\mathbf{w}(k+1) = \mathbf{w}(k)$.

This algorithm makes a change in $\mathbf{w}$ only if the pattern being considered at the $k$th step in the training sequence is misclassified.

The delta-rule method instead minimizes the error between the actual and the desired response. It follows from the gradient descent algorithm: each weight change moves against the gradient of the error, so changing the weights reduces the error by an amount controlled by the learning-rate factor.
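The perceptron training procedure, which updates the weight vector only on misclassified patterns, can be sketched as follows. The two training classes are hypothetical and linearly separable:

```python
import numpy as np

# Perceptron training rule: augment each pattern with a trailing 1 (to
# absorb the bias) and update w only when the current pattern is
# misclassified. Training data below are illustrative assumptions.
def train(class1, class2, c=1.0, epochs=100):
    data = [(np.append(x, 1.0), +1) for x in class1] + \
           [(np.append(x, 1.0), -1) for x in class2]
    w = np.zeros(len(data[0][0]))
    for _ in range(epochs):
        changed = False
        for x, label in data:
            s = w @ x
            if label == +1 and s <= 0:
                w = w + c * x; changed = True
            elif label == -1 and s >= 0:
                w = w - c * x; changed = True
        if not changed:  # converged: every training pattern is classified correctly
            break
    return w

w = train([np.array([2.0, 2.0])], [np.array([-1.0, -1.0])])
print(np.sign(w @ np.array([3.0, 3.0, 1.0])))  # lands on the class-1 side: 1.0
```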

We now focus on decision functions for multiclass pattern recognition problems, independent of whether the classes are separable. In a multilayer feedforward network the activation element of each node is a sigmoid function: the input to the activation element of a node in layer J is the weighted sum of the outputs of the nodes in the preceding layer K, and the node's output is the sigmoid of that sum.

We begin by concentrating on the output layer. The process starts with an arbitrary set of weights throughout the network. The generalized delta rule then has two basic phases:

Phase 1: A training vector is propagated forward through the layers to compute the output O_j of each node. The outputs O_q of the nodes in the output layer are then compared against their desired responses r_q to generate the error terms δ_q.

Phase 2: A backward pass through the network, during which the appropriate error signal is passed to each node and the corresponding weight changes are made.
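The two phases above can be sketched as a small batch-trained network with one hidden layer and sigmoid activations. The architecture, learning rate, and XOR toy task are illustrative assumptions:

```python
import numpy as np

# Generalized delta rule sketch: Phase 1 propagates a training batch
# forward to get outputs O; Phase 2 passes error terms delta backward
# and applies the weight changes. XOR task and sizes are assumptions.
rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
R = np.array([[0], [1], [1], [0]], dtype=float)   # desired responses r

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)    # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)    # hidden -> output
alpha = 0.5                                       # learning rate

for _ in range(20000):
    # Phase 1: forward pass
    O_hidden = sigmoid(X @ W1 + b1)
    O_out = sigmoid(O_hidden @ W2 + b2)
    # Phase 2: backward pass; delta = (r - O) * O * (1 - O) at the output
    delta_out = (R - O_out) * O_out * (1 - O_out)
    delta_hid = (delta_out @ W2.T) * O_hidden * (1 - O_hidden)
    W2 += alpha * O_hidden.T @ delta_out; b2 += alpha * delta_out.sum(axis=0)
    W1 += alpha * X.T @ delta_hid;        b1 += alpha * delta_hid.sum(axis=0)

# With enough iterations the outputs approach the XOR targets [0, 1, 1, 0].
print(np.round(O_out).ravel())
```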

Thank you

