
MENOUFIA UNIVERSITY

FACULTY OF COMPUTERS AND INFORMATION

ALL DEPARTMENTS
ARTIFICIAL INTELLIGENCE

Artificial Neural Networks (ANNs)


Step-By-Step Training & Testing Example
Ahmed Fawzy Gad
ahmed.fawzy@ci.menofia.edu.eg
Neural Networks & Classification
Linear Classifiers
Complex Data
Not Solved Linearly
Nonlinear Classifiers
Training
Classification Example

Four color samples, each an (R, G, B) triple with a known class:

R (RED)   G (GREEN)   B (BLUE)   Class
255       0           0          RED
248       80          68         RED
0         0           255        BLUE
67        15          210        BLUE
Neural Networks
A neural network passes the data through layers: Input, Hidden, and Output.

Input Layer
The input layer takes the three color components R, G, and B.

Output Layer
The output layer produces a single output Y_j: the predicted class, RED/BLUE.

Weights
Each input is connected to the output through a weight:
R through W1, G through W2, B through W3. Weights = W_i
Activation Function
Inside the neuron, the sum of the weighted inputs s is passed through an activation function F(s), whose response is the output Y_j.
Components
The neuron's components are the inputs, the weights, the sum of products, the activation function, and the output.

Inputs
The inputs X_i are multiplied by the weights W_i and summed into the sum of products (SOP):

s = SOP(X_i, W_i),  where X_i = Inputs and W_i = Weights

S = Σ_{i=1}^{m} X_i W_i

For the three inputs:
s = (X1W1 + X2W2 + X3W3)
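The sum of products can be sketched in a few lines of code; a minimal sketch where the function name is illustrative and the weights are the ones used later in the training example:

```python
# Sum of products (SOP): s = X1*W1 + X2*W2 + ... + Xm*Wm
def sop(inputs, weights):
    # Multiply each input by its weight and add everything up.
    return sum(x * w for x, w in zip(inputs, weights))

# The RED sample (255, 0, 0) with the example weights (-2, 1, 6.2)
print(sop([255, 0, 0], [-2, 1, 6.2]))  # -510.0
```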


Outputs
The SOP s goes into the activation function, and its response F(s) gives the class label.
Activation Functions
Candidate activation functions: Piecewise, Linear, Sigmoid, Signum.

Which activation function to use?
One that gives two outputs, because there are two class labels:
TWO Outputs (Y_j) for TWO Class Labels (C_j).
Activation Function
The signum function (sgn) is chosen: the output is Y_j = sgn(s).
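The signum activation, as defined in the worked steps later in the deck, can be written as:

```python
def sgn(s):
    # Signum: maps the SOP to one of the two class labels,
    # +1 when s >= 0 and -1 when s < 0.
    return +1 if s >= 0 else -1

print(sgn(4.6), sgn(-511))  # 1 -1
```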
Bias
A bias is added as an extra input X0 = +1 with its own weight W0:

s = (X0W0 + X1W1 + X2W2 + X3W3)
  = (+1*W0 + X1W1 + X2W2 + X3W3)
  = (W0 + X1W1 + X2W2 + X3W3)
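A quick numeric check that folding the bias in as a fixed input X0 = +1 matches adding W0 on top of the SOP (the weight values are illustrative):

```python
def sop(inputs, weights):
    return sum(x * w for x, w in zip(inputs, weights))

w0 = -1.0                           # bias weight W0
w = [-2, 1, 6.2]                    # W1..W3 (illustrative values)
x = [255, 0, 0]                     # one RGB sample

s_folded = sop([1] + x, [w0] + w)   # bias as the extra input X0 = +1
s_explicit = w0 + sop(x, w)         # bias added explicitly to the SOP
print(s_folded, s_explicit)         # both give -511.0
```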
Bias Importance
The bias plays the same role as the y-intercept b in the line equation y = x + b:
with b = 0 the line passes through the origin, b = +v shifts it up, and b = -v shifts it down.
Without a bias, the decision line would be forced through the origin.
The same concept applies to the neuron's bias:

S = Σ_{i=1}^{m} X_i W_i + BIAS
Learning Rate
The learning rate η is a value in the range 0 ≤ η ≤ 1.
Summary of Parameters
Inputs X_m:              X(n) = (X0, X1, X2, X3)
Weights W_m:             W(n) = (W0, W1, W2, W3)
Bias:                    b
Sum of Products (SOP):   s = (X0W0 + X1W1 + X2W2 + X3W3)
Activation Function:     sgn
Outputs:                 Y_j
Learning Rate:           η, with 0 ≤ η ≤ 1

Other Parameters
Step:                    n = 0, 1, 2, …
Desired Output d_j:      RED = -1, BLUE = +1

d(n) = -1 if x(n) belongs to C1 (RED)
       +1 if x(n) belongs to C2 (BLUE)
Neural Networks Training Steps
1. Weights Initialization
2. Inputs Application
3. Sum of Inputs-Weights Products
4. Activation Function Response Calculation
5. Weights Adaptation
6. Back to Step 2
Regarding 5th Step: Weights Adaptation
• If the predicted output Y is not the same as the desired output d,
then the weights are adapted according to the following equation:

W(n+1) = W(n) + η[d(n) - Y(n)]X(n)

where
W(n) = [b(n), W1(n), W2(n), W3(n), …, Wm(n)]
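The adaptation rule can be sketched as a small function (the function name is illustrative; the example numbers are the ones from step n=1 of the worked example):

```python
def adapt_weights(w, x, d, y, eta):
    # W(n+1) = W(n) + eta * (d(n) - Y(n)) * X(n)
    # When the prediction is correct (d == y), the correction term is
    # zero and the weights come back unchanged.
    return [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]

# Step n=1 of the example: d = -1, Y = +1, eta = .001
w2 = adapt_weights([-1, -2, 1, 6.2], [+1, 248, 80, 68], d=-1, y=+1, eta=.001)
print([round(v, 3) for v in w2])  # [-1.002, -2.496, 0.84, 6.064]
```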
Neural Networks Training Example

Class labels: RED = -1, BLUE = +1

Step n=0
• In each step of the solution, the parameters of the neural network must be known.
• Parameters of step n=0:
η = .001
X(n) = X(0) = (+1, 255, 0, 0)
W(n) = W(0) = (-1, -2, 1, 6.2)
d(n) = d(0) = -1

Step n=0 - SOP
s = (X0W0 + X1W1 + X2W2 + X3W3)
  = +1*-1 + 255*-2 + 0*1 + 0*6.2
  = -511

Step n=0 - Output
Y(n) = Y(0) = SGN(s) = SGN(-511) = -1

sgn(s) = +1 if s ≥ 0
         -1 if s < 0

Step n=0 - Predicted Vs. Desired
Y(n) = Y(0) = -1
d(n) = d(0) = -1
∵ Y(n) = d(n) ∴ Weights are Correct. No Adaptation.
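Step n=0 can be verified in a few lines, reusing the SOP and signum definitions:

```python
def sop(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

def sgn(s):
    return +1 if s >= 0 else -1

x = [+1, 255, 0, 0]       # X(0), with the bias input X0 = +1
w = [-1, -2, 1, 6.2]      # W(0)
s = sop(x, w)
print(s, sgn(s))          # -511.0 -1, which matches d(0) = -1
```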
Step n=1
• Parameters of step n=1:
η = .001
X(n) = X(1) = (+1, 248, 80, 68)
W(n) = W(1) = W(0) = (-1, -2, 1, 6.2)
d(n) = d(1) = -1

Step n=1 - SOP
s = (X0W0 + X1W1 + X2W2 + X3W3)
  = +1*-1 + 248*-2 + 80*1 + 68*6.2
  = 4.6

Step n=1 - Output
Y(n) = Y(1) = SGN(s) = SGN(4.6) = +1

Step n=1 - Predicted Vs. Desired
Y(n) = Y(1) = +1
d(n) = d(1) = -1
∵ Y(n) ≠ d(n) ∴ Weights are Incorrect. Adaptation Required.
Weights Adaptation
• According to

W(n+1) = W(n) + η[d(n) - Y(n)]X(n)

• Where n = 1:
W(2) = W(1) + η[d(1) - Y(1)]X(1)
W(2) = (-1, -2, 1, 6.2) + .001[-1 - (+1)](+1, 248, 80, 68)
W(2) = (-1, -2, 1, 6.2) + .001[-2](+1, 248, 80, 68)
W(2) = (-1, -2, 1, 6.2) + (-.002)(+1, 248, 80, 68)
W(2) = (-1, -2, 1, 6.2) + (-.002, -.496, -.16, -.136)
W(2) = (-1.002, -2.496, .84, 6.064)
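A quick check of the adapted weights W(2): the sample (248, 80, 68) that was misclassified at step n=1 now falls on the RED side of the boundary:

```python
def sop(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

w2 = [-1.002, -2.496, .84, 6.064]   # W(2) from the adaptation above
s = sop([+1, 248, 80, 68], w2)
print(round(s, 3))                  # -140.458: negative, so sgn(s) = -1 = RED
```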
Step n=2
• Parameters of step n=2:
η = .001
X(n) = X(2) = (+1, 0, 0, 255)
W(n) = W(2) = (-1.002, -2.496, .84, 6.064)
d(n) = d(2) = +1

Step n=2 - SOP
s = (X0W0 + X1W1 + X2W2 + X3W3)
  = +1*-1.002 + 0*-2.496 + 0*.84 + 255*6.064
  = 1545.318

Step n=2 - Output
Y(n) = Y(2) = SGN(s) = SGN(1545.318) = +1

Step n=2 - Predicted Vs. Desired
Y(n) = Y(2) = +1
d(n) = d(2) = +1
∵ Y(n) = d(n) ∴ Weights are Correct. No Adaptation.
Step n=3
• Parameters of step n=3:
η = .001
X(n) = X(3) = (+1, 67, 15, 210)
W(n) = W(3) = W(2) = (-1.002, -2.496, .84, 6.064)
d(n) = d(3) = +1

Step n=3 - SOP
s = (X0W0 + X1W1 + X2W2 + X3W3)
  = +1*-1.002 + 67*-2.496 + 15*.84 + 210*6.064
  = 1117.806

Step n=3 - Output
Y(n) = Y(3) = SGN(s) = SGN(1117.806) = +1

Step n=3 - Predicted Vs. Desired
Y(n) = Y(3) = +1
d(n) = d(3) = +1
∵ Y(n) = d(n) ∴ Weights are Correct. No Adaptation.
Step n=4
• Parameters of step n=4:
η = .001
X(n) = X(4) = (+1, 255, 0, 0)
W(n) = W(4) = W(3) = (-1.002, -2.496, .84, 6.064)
d(n) = d(4) = -1

Step n=4 - SOP
s = (X0W0 + X1W1 + X2W2 + X3W3)
  = +1*-1.002 + 255*-2.496 + 0*.84 + 0*6.064
  = -637.482

Step n=4 - Output
Y(n) = Y(4) = SGN(s) = SGN(-637.482) = -1

Step n=4 - Predicted Vs. Desired
Y(n) = Y(4) = -1
d(n) = d(4) = -1
∵ Y(n) = d(n) ∴ Weights are Correct. No Adaptation.
Step n=5
• Parameters of step n=5:
η = .001
X(n) = X(5) = (+1, 248, 80, 68)
W(n) = W(5) = W(4) = (-1.002, -2.496, .84, 6.064)
d(n) = d(5) = -1

Step n=5 - SOP
s = (X0W0 + X1W1 + X2W2 + X3W3)
  = +1*-1.002 + 248*-2.496 + 80*.84 + 68*6.064
  = -140.458

Step n=5 - Output
Y(n) = Y(5) = SGN(s) = SGN(-140.458) = -1

Step n=5 - Predicted Vs. Desired
Y(n) = Y(5) = -1
d(n) = d(5) = -1
∵ Y(n) = d(n) ∴ Weights are Correct. No Adaptation.
Correct Weights
• After testing the weights across all samples, with every result correct, we can
conclude that the current weights are the correct ones for this neural network.
• After the training phase, we come to testing the neural network.
• What is the class of the unknown color with values R=150, G=100, B=180?
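The whole training pass above (steps n = 0..5) can be reproduced with a short loop; a sketch that cycles through the four samples until a complete pass needs no adaptation:

```python
def sop(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

def sgn(s):
    return +1 if s >= 0 else -1

# Samples as (X(n), d(n)); X0 = +1, RED = -1, BLUE = +1
samples = [([+1, 255, 0, 0], -1), ([+1, 248, 80, 68], -1),
           ([+1, 0, 0, 255], +1), ([+1, 67, 15, 210], +1)]
eta = .001
w = [-1, -2, 1, 6.2]                # W(0)

adapted = True
while adapted:                      # repeat until one full clean pass
    adapted = False
    for x, d in samples:
        y = sgn(sop(x, w))
        if y != d:                  # step 5: adapt only on a wrong prediction
            w = [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]
            adapted = True

print([round(v, 3) for v in w])     # [-1.002, -2.496, 0.84, 6.064]
```

Only one adaptation fires (at the second sample), and the second pass over all four samples is clean, matching the hand-worked steps.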
Testing Trained Neural Network
(R, G, B) = (150, 100, 180)

Trained Neural Network Parameters:
η = .001
W = (-1.002, -2.496, .84, 6.064)

SOP:
s = (X0W0 + X1W1 + X2W2 + X3W3)
  = +1*-1.002 + 150*-2.496 + 100*.84 + 180*6.064
  = 800.118

Output:
Y = SGN(s) = SGN(800.118) = +1

∴ The unknown color (150, 100, 180) is classified as BLUE.
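The same test can be run in code with the trained weights:

```python
def sop(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

def sgn(s):
    return +1 if s >= 0 else -1

w = [-1.002, -2.496, .84, 6.064]    # trained weights
rgb = [150, 100, 180]               # unknown color to classify
s = sop([+1] + rgb, w)              # X0 = +1 for the bias
label = "BLUE" if sgn(s) == +1 else "RED"
print(round(s, 3), label)           # 800.118 BLUE
```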
