• Supervised Learning
Network is provided with a set of examples
of proper network behavior (inputs/targets)
{p_1, t_1}, {p_2, t_2}, …, {p_Q, t_Q}
• Reinforcement Learning
Network is only provided with a grade, or score,
which indicates network performance
• Unsupervised Learning
Only network inputs are available to the learning
algorithm. Network learns to categorize (cluster)
the inputs.
4 Perceptron Architecture

[Figure: single-layer perceptron. Input vector p (R×1) feeds a hard limit layer with weight matrix W (S×R) and bias b (S×1), producing net input n (S×1) and output a (S×1).]

W = \begin{bmatrix}
w_{1,1} & w_{1,2} & \cdots & w_{1,R} \\
w_{2,1} & w_{2,2} & \cdots & w_{2,R} \\
\vdots  & \vdots  &        & \vdots  \\
w_{S,1} & w_{S,2} & \cdots & w_{S,R}
\end{bmatrix}

Writing the weights of neuron i as the vector

{}_i w = \begin{bmatrix} w_{i,1} \\ w_{i,2} \\ \vdots \\ w_{i,R} \end{bmatrix},

the weight matrix can be written as

W = \begin{bmatrix} {}_1 w^T \\ {}_2 w^T \\ \vdots \\ {}_S w^T \end{bmatrix}

a = hardlim(Wp + b)

a_i = hardlim(n_i) = hardlim({}_i w^T p + b_i)
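The two equations above can be checked with a short sketch (pure Python; the helper names `hardlim` and `forward` are mine, not from the slides):

```python
def hardlim(n):
    """Hard limit transfer function: 1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def forward(W, b, p):
    """a = hardlim(Wp + b), computed row by row: a_i = hardlim(iw.p + b_i)."""
    return [hardlim(sum(w_ij * p_j for w_ij, p_j in zip(row, p)) + b_i)
            for row, b_i in zip(W, b)]

# Example with S = 2 neurons and R = 3 inputs: W is 2x3, b and a are 2x1.
W = [[0.5, -1.0, -0.5],
     [1.0,  1.0,  0.0]]
b = [0.5, -1.0]
p = [-1.0, 1.0, -1.0]
print(forward(W, b, p))  # → [0, 0]
```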
4 Single-Neuron Perceptron

[Figure: two-input neuron. Inputs p_1, p_2 with weights w_{1,1}, w_{1,2} and bias b feed a summer (net input n) and hard limiter (output a). Right panel: the decision boundary {}_1 w^T p + b = 0 in the (p_1, p_2) plane, with a = 1 on the side {}_1 w points toward and a = 0 on the other.]

Example weights: w_{1,1} = 1, w_{1,2} = 1, b = -1.

a = hardlim(Wp + b)

a = hardlim({}_1 w^T p + b) = hardlim(w_{1,1} p_1 + w_{1,2} p_2 + b)
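With the example weights above (w_{1,1} = w_{1,2} = 1, b = -1), the boundary is p_1 + p_2 = 1. A quick sketch (pure Python; helper names are mine) shows which inputs fire:

```python
def hardlim(n):
    """1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def neuron(p1, p2, w1=1.0, w2=1.0, b=-1.0):
    """Single neuron with the slide's weights; boundary is p1 + p2 - 1 = 0."""
    return hardlim(w1 * p1 + w2 * p2 + b)

for p in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(p, neuron(*p))
```

Note that points lying exactly on the boundary (e.g. (0, 1)) output 1, since hardlim returns 1 for n = 0.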
4 Decision Boundary

{}_1 w^T p + b = 0, or equivalently {}_1 w^T p = -b

All points on the boundary have the same inner product with the weight vector, so the boundary is orthogonal to {}_1 w, and {}_1 w points toward the region where the output is a = 1.
4 Example - OR

Training set:

\left\{ p_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, t_1 = 0 \right\},
\left\{ p_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, t_2 = 1 \right\},
\left\{ p_3 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, t_3 = 1 \right\},
\left\{ p_4 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, t_4 = 1 \right\}
4 OR Solution

[Figure: the four OR input points with a decision boundary separating p_1 from the other three; {}_1 w drawn orthogonal to the boundary.]

Choose a weight vector orthogonal to the decision boundary:

{}_1 w = \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix}

Pick a point on the boundary, e.g. p = \begin{bmatrix} 0 \\ 0.5 \end{bmatrix}, to solve for the bias:

{}_1 w^T p + b = \begin{bmatrix} 0.5 & 0.5 \end{bmatrix} \begin{bmatrix} 0 \\ 0.5 \end{bmatrix} + b = 0.25 + b = 0 \;\Rightarrow\; b = -0.25
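The chosen weights can be verified against all four OR patterns (a pure Python sketch; variable names are mine):

```python
def hardlim(n):
    """1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

# OR solution from the slide: 1w = [0.5, 0.5], b = -0.25.
w, b = [0.5, 0.5], -0.25
or_set = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

for p, t in or_set:
    a = hardlim(w[0] * p[0] + w[1] * p[1] + b)
    print(p, a, a == t)  # every pattern is classified correctly
```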
4 Multiple-Neuron Perceptron

Each neuron i has its own decision boundary:

{}_i w^T p + b_i = 0

A single neuron can divide the input space into two regions, so an S-neuron perceptron can classify inputs into at most 2^S categories.
4 Learning Rule Test Problem

{p_1, t_1}, {p_2, t_2}, …, {p_Q, t_Q}

Training set:

\left\{ p_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, t_1 = 1 \right\},
\left\{ p_2 = \begin{bmatrix} -1 \\ 2 \end{bmatrix}, t_2 = 0 \right\},
\left\{ p_3 = \begin{bmatrix} 0 \\ -1 \end{bmatrix}, t_3 = 0 \right\}

[Figure: two-input, single-neuron network with weights w_{1,1}, w_{1,2} and no bias, so the decision boundary must pass through the origin.]

a = hardlim(Wp)
4 Starting Point

Random initial weight:

{}_1 w = \begin{bmatrix} 1.0 \\ -0.8 \end{bmatrix}

Present p_1 to the network:

a = hardlim({}_1 w^T p_1) = hardlim\left( \begin{bmatrix} 1.0 & -0.8 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right)

a = hardlim(-0.6) = 0

Incorrect classification (t_1 = 1).
4 Tentative Learning Rule

• Set {}_1 w to p_1
  – Not stable: the weights would only ever reflect the most recent input
• Add p_1 to {}_1 w

Tentative Rule: If t = 1 and a = 0, then {}_1 w^{new} = {}_1 w^{old} + p

{}_1 w^{new} = \begin{bmatrix} 1.0 \\ -0.8 \end{bmatrix} + \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 2.0 \\ 1.2 \end{bmatrix}
4 Second Input Vector

a = hardlim({}_1 w^T p_2) = hardlim\left( \begin{bmatrix} 2.0 & 1.2 \end{bmatrix} \begin{bmatrix} -1 \\ 2 \end{bmatrix} \right) = hardlim(0.4) = 1

Incorrect classification (t_2 = 0).

Modification to Rule: If t = 0 and a = 1, then {}_1 w^{new} = {}_1 w^{old} - p

{}_1 w^{new} = \begin{bmatrix} 2.0 \\ 1.2 \end{bmatrix} - \begin{bmatrix} -1 \\ 2 \end{bmatrix} = \begin{bmatrix} 3.0 \\ -0.8 \end{bmatrix}
4 Third Input Vector

a = hardlim({}_1 w^T p_3) = hardlim\left( \begin{bmatrix} 3.0 & -0.8 \end{bmatrix} \begin{bmatrix} 0 \\ -1 \end{bmatrix} \right)

a = hardlim(0.8) = 1 (Incorrect Classification, t_3 = 0)

Applying the modified rule:

{}_1 w^{new} = \begin{bmatrix} 3.0 \\ -0.8 \end{bmatrix} - \begin{bmatrix} 0 \\ -1 \end{bmatrix} = \begin{bmatrix} 3.0 \\ 0.2 \end{bmatrix},

which now classifies all three patterns correctly.

4 Unified Learning Rule

Define the error e = t - a. The three cases become:

If e = 1, then {}_1 w^{new} = {}_1 w^{old} + p
If e = -1, then {}_1 w^{new} = {}_1 w^{old} - p
If e = 0, then {}_1 w^{new} = {}_1 w^{old}

or, in a single expression:

{}_1 w^{new} = {}_1 w^{old} + e p = {}_1 w^{old} + (t - a) p

b^{new} = b^{old} + e

(A bias is a weight with an input of 1, so the same rule covers it.)
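The hand computation above can be run to convergence with the unified rule (pure Python sketch; the loop structure and `hardlim` helper are mine, while the data and starting weights are from the slides):

```python
def hardlim(n):
    """1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

# Test problem from the slides (no bias, so the boundary passes through the origin).
patterns = [([1, 2], 1), ([-1, 2], 0), ([0, -1], 0)]
w = [1.0, -0.8]  # starting weights from the slides

# Unified rule: w_new = w_old + e*p, with e = t - a.
for epoch in range(10):
    errors = 0
    for p, t in patterns:
        a = hardlim(w[0] * p[0] + w[1] * p[1])
        e = t - a
        errors += e != 0
        w = [w[0] + e * p[0], w[1] + e * p[1]]
    if errors == 0:
        break

print(w)  # converges after one pass of updates, to approximately [3.0, 0.2]
```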
4 Multiple-Neuron Perceptrons

To update the ith row of the weight matrix:

{}_i w^{new} = {}_i w^{old} + e_i p

b_i^{new} = b_i^{old} + e_i

Matrix form:

W^{new} = W^{old} + e p^T

b^{new} = b^{old} + e
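The matrix-form update can be sketched as a single function (pure Python; `step` and `hardlim` are my names, and the rank-one update e p^T is written out as nested loops):

```python
def hardlim(n):
    """1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def step(W, b, p, t):
    """One matrix-form update: W_new = W_old + e p^T, b_new = b_old + e."""
    a = [hardlim(sum(wij * pj for wij, pj in zip(row, p)) + bi)
         for row, bi in zip(W, b)]
    e = [ti - ai for ti, ai in zip(t, a)]            # e = t - a, one entry per neuron
    W = [[wij + ei * pj for wij, pj in zip(row, p)]  # row i changes by e_i * p^T
         for row, ei in zip(W, e)]
    b = [bi + ei for bi, ei in zip(b, e)]
    return W, b

# Single-neuron case (S = 1): W is one row, b and t have one entry.
W, b = step([[0.5, -1.0, -0.5]], [0.5], [-1.0, 1.0, -1.0], [1])
print(W, b)  # → [[-0.5, 0.0, -1.5]] [1.5]
```

With S = 1 this reduces exactly to the single-neuron rule of the previous slide.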
4 Apple/Banana Example

Training Set:

\left\{ p_1 = \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix}, t_1 = 1 \right\},
\left\{ p_2 = \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix}, t_2 = 0 \right\}

Initial Weights:

W = \begin{bmatrix} 0.5 & -1 & -0.5 \end{bmatrix}, \quad b = 0.5

First Iteration:

a = hardlim(W p_1 + b) = hardlim\left( \begin{bmatrix} 0.5 & -1 & -0.5 \end{bmatrix} \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix} + 0.5 \right)

a = hardlim(-0.5) = 0, \quad e = t_1 - a = 1 - 0 = 1

W^{new} = W^{old} + e p^T = \begin{bmatrix} 0.5 & -1 & -0.5 \end{bmatrix} + (1) \begin{bmatrix} -1 & 1 & -1 \end{bmatrix} = \begin{bmatrix} -0.5 & 0 & -1.5 \end{bmatrix}

b^{new} = b^{old} + e = 0.5 + (1) = 1.5
4 Second Iteration

a = hardlim(W p_2 + b) = hardlim\left( \begin{bmatrix} -0.5 & 0 & -1.5 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} + 1.5 \right)

a = hardlim(2.5) = 1

e = t_2 - a = 0 - 1 = -1

W^{new} = W^{old} + e p^T = \begin{bmatrix} -0.5 & 0 & -1.5 \end{bmatrix} + (-1) \begin{bmatrix} 1 & 1 & -1 \end{bmatrix} = \begin{bmatrix} -1.5 & -1 & -0.5 \end{bmatrix}

b^{new} = b^{old} + e = 1.5 + (-1) = 0.5
4 Check

a = hardlim(W p_1 + b) = hardlim\left( \begin{bmatrix} -1.5 & -1 & -0.5 \end{bmatrix} \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix} + 0.5 \right) = hardlim(1.5) = 1 = t_1

a = hardlim(W p_2 + b) = hardlim\left( \begin{bmatrix} -1.5 & -1 & -0.5 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} + 0.5 \right) = hardlim(-1.5) = 0 = t_2
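The two iterations and the final check can be reproduced end to end (pure Python sketch; helper names `hardlim` and `dot` are mine, the data and initial weights are from the slides):

```python
def hardlim(n):
    """1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Apple/banana training set and initial weights from the slides.
patterns = [([-1, 1, -1], 1), ([1, 1, -1], 0)]
w, b = [0.5, -1.0, -0.5], 0.5

# One pass through the training set = the two iterations on the slides.
for p, t in patterns:
    e = t - hardlim(dot(w, p) + b)  # e = t - a
    w = [wi + e * pi for wi, pi in zip(w, p)]
    b = b + e

print(w, b)  # → [-1.5, -1.0, -0.5] 0.5

# Check: both patterns are now classified correctly.
for p, t in patterns:
    print(hardlim(dot(w, p) + b) == t)
```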
4 Perceptron Rule Capability

The perceptron rule will always converge to weights that accomplish the desired classification, provided such weights exist (i.e., the problem is linearly separable).