
Biological Neuron →

(diagram: inputs arrive at the neuron, the signal propagates through it, and an output is produced)
Artificial Neuron (McCulloch-Pitts Neuron) →

(diagram: inputs → aggregate function → activation function → output)

For this neuron, the output is decided by

output = sgn( Σᵢ inputᵢ − θ )

where θ is the activation threshold, i.e. if x = Σ inputs, the output y is

y = { 1,  if x ≥ θ
    { 0,  if x < θ
Functions →

1. sgn(x) →

   sgn(x) = { 1,  x ≥ 0
            { 0,  x < 0

   (plot: step function, the output jumps from 0 to 1 at x = 0)

2. sigmoid(x) →

   sigmoid(x) = 1 / (1 + e^(−λx))

   (plot: smooth S-shaped curve from 0 to 1, shown for λ = 1)
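A minimal Python sketch of these two activation functions (the threshold of the step version and the default λ = 1 are illustrative choices, not fixed by the notes):

import math

def step(x, theta=0.0):
    # sgn-style binary step: 1 if x >= theta, else 0
    return 1 if x >= theta else 0

def sigmoid(x, lam=1.0):
    # logistic sigmoid with steepness lambda: 1 / (1 + e^(-lambda*x))
    return 1.0 / (1.0 + math.exp(-lam * x))

print(step(0.3), step(-0.2))      # -> 1 0
print(round(sigmoid(0.0), 3))     # -> 0.5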
Elements of an Artificial Neuron →

(diagram: inputs x1 … xn with weights w1 … wn feed a summation Σ, followed by an activation function; the net input is y_in and the output is y — McCulloch-Pitts neuron)
X = [x1  x2  …  xn] → Inputs
W = [w1  w2  …  wn] → Weights (increase/decrease the strength of the inputs)

y_in = Σᵢ wᵢxᵢ = X·Wᵀ = w1·x1 + w2·x2 + … + wn·xn
output y = f(y_in)

f is an activation function like sgn or sigmoid; it has a threshold θ, and the neuron is activated when y_in reaches it.

Inputs and outputs can be binary (0 or 1) or bipolar (−1 or 1).
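A small Python sketch of this computation (the input values, weights, and threshold below are made up for illustration):

def neuron_output(x, w, theta):
    # y_in = sum of w_i * x_i, then threshold activation at theta
    y_in = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if y_in >= theta else 0

# example: three binary inputs, unit weights, threshold 2
print(neuron_output([1, 1, 0], [1, 1, 1], theta=2))   # -> 1
print(neuron_output([1, 0, 0], [1, 1, 1], theta=2))   # -> 0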
Eg. Implement the XOR function using McCulloch-Pitts neurons (consider binary data)

For an XOR function,

y = x1·x̄2 + x2·x̄1
Truth Table →

x1  x2  y
 0   0  0 = 0·1 + 0·1
 0   1  1 = 0·0 + 1·1
 1   0  1 = 1·1 + 0·0
 1   1  0 = 1·0 + 1·0

Let z1 = x1·x̄2 and z2 = x2·x̄1

model
the
designed is

ilb "
×,
, 21 I
-1
Y
'

> 0lb
-1

ills I
> X2 Zz
1

Y = 2 , + 22

{
if
1
Y = 21+2271

21+22
if I
0 &

( Activation not
necessarily required
"

J is
too)
as the output itself will come as
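A Python sketch of this two-stage XOR model, assuming excitatory connections of weight +1, inhibitory connections of weight −1, and a firing threshold of 1 for every unit (consistent with the diagram above):

def mp_unit(inputs, weights, theta=1):
    # McCulloch-Pitts unit: fires (1) when the weighted sum reaches theta
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= theta else 0

def xor(x1, x2):
    z1 = mp_unit([x1, x2], [1, -1])    # z1 = x1 AND (NOT x2)
    z2 = mp_unit([x1, x2], [-1, 1])    # z2 = x2 AND (NOT x1)
    return mp_unit([z1, z2], [1, 1])   # y  = z1 OR z2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))         # reproduces the XOR truth table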
Neural Network Architectures

An Artificial Neural Network is made of a large number of interconnected artificial neurons.

Like the brain is made of a bunch of specialized neurons, an ANN is made of artificial neurons.

There are 4 types of ANN based on the connectivity of the neurons:

1. Single Layer Feed Forward Network →

(diagram: i/p layer neurons x1, x2, …, xn connected through weights w11, …, wnm directly to o/p layer neurons y1, y2, …, ym)
Here there is a single layer of weights, and the inputs are connected directly to the outputs.

The synaptic links carrying the weights connect inputs to outputs and not the other way, hence it is called a feed-forward network.
2. Multi Layer Feed Forward Network →

(diagram: i/p layer x1, …, xn, one or more hidden layers of neurons z1, …, zk, and an o/p layer y1, …, yn)
Multiple hidden layers are present and the inputs are not connected directly to the outputs.
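A minimal forward-pass sketch for such a network in Python/NumPy (one hidden layer only; the layer sizes, random weights, and sigmoid activation are illustrative assumptions):

import numpy as np

def forward(x, W_hidden, W_out):
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    z = sigmoid(W_hidden @ x)     # hidden layer activations
    return sigmoid(W_out @ z)     # output layer activations

# illustrative shapes: 3 inputs -> 4 hidden units -> 2 outputs
rng = np.random.default_rng(0)
x = np.array([1.0, 0.0, 1.0])
print(forward(x, rng.normal(size=(4, 3)), rng.normal(size=(2, 4))))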
3. Single Layer Recurrent Network →

(diagram: a single layer network in which the outputs y1, …, yn are fed back to the neurons through feedback links)
4. Multi Layer Recurrent Network →

(diagram: a multi layer network with feedback links running from the outputs and hidden neurons back towards earlier layers)
The feedback could also go to the output neurons themselves and not necessarily to the input layer.
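One possible way to read the feedback links in code: the previous output is fed back and combined with the current input at each step. This is only a sketch with made-up weights and sizes, not a training procedure:

import numpy as np

def recurrent_step(x_t, y_prev, W_in, W_fb):
    # output depends on the current input and the fed-back previous output
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    return sigmoid(W_in @ x_t + W_fb @ y_prev)

rng = np.random.default_rng(1)
W_in, W_fb = rng.normal(size=(2, 3)), rng.normal(size=(2, 2))
y = np.zeros(2)
for x_t in np.eye(3):              # a toy 3-step input sequence
    y = recurrent_step(x_t, y, W_in, W_fb)
    print(y)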

Learning Methods →

Unsupervised Learning →

(diagram: inputs X → NN with weights W → actual o/p Y)

The weight changes are made based on the inputs only; no desired o/p is provided.
Supervised Learning →

(diagram: inputs X → NN with weights W → actual o/p Y; an error signal generator compares Y with the desired o/p D and feeds the error signal (D − Y) back to adjust the weights)

Reinforcement Learning →

(diagram: inputs X → NN with weights W → actual o/p Y; an error signal generator compares Y with a reinforcement signal R and feeds back the error signal (R − Y))

R is not the desired o/p; it only indicates whether the output is correct or incorrect.

Hebbian Learning Rule →

The weights are adjusted using the inputs and outputs of the current iteration.

This is unsupervised learning, as the desired o/p is not used.

W = Σᵢ yᵢ xᵢ   (summed over the training samples, i = 1, 2, …)

where xᵢ and yᵢ are the input and output vectors respectively.
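A short Python sketch of this rule, accumulating w ← w + y·x over the training samples (starting from zero weights, as in the worked example below):

def hebb_train(samples):
    # samples: list of (input_vector, target_output) pairs
    w = [0.0] * len(samples[0][0])
    for x, y in samples:
        # Hebb update: w_new = w_old + y * x (componentwise)
        w = [wi + y * xi for wi, xi in zip(w, x)]
    return w

# for the bipolar OR data of the example below (with the bias input
# appended as a constant 1), this returns [2.0, 2.0, 2.0]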
Eg. Design a Hebb net to implement the OR function (consider bipolar I/P and O/P)

Let the Hebb network for the OR function be as →

(diagram: input x1 with weight w1, input x2 with weight w2, and a bias B feeding the o/p neuron y)

Let the weight vector W and the bias b initially be equal to zero,

∴ W = [w1  w2  b] = [0  0  0]
The truth table for OR is

x1  x2  b   y
−1  −1  1  −1
−1   1  1   1
 1  −1  1   1
 1   1  1   1

For the first iteration →

x1 = [−1  −1  1],  W = [0  0  0],  y1 = −1

W1 = W + y1·x1
   = [0+1  0+1  0−1]
W1 = [1  1  −1]

(b(new) = b(old) + yᵢ, i.e. bias(new) = bias(old) + target)
For the second iteration →

x2 = [−1  1  1],  W1 = [1  1  −1],  y2 = 1

W2 = W1 + y2·x2
   = [1−1  1+1  −1+1]
   = [0  2  0]

For the third iteration →

x3 = [1  −1  1],  W2 = [0  2  0],  y3 = 1

W3 = W2 + y3·x3
   = [1  1  1]
For the fourth iteration →

x4 = [1  1  1],  W3 = [1  1  1],  y4 = 1

W4 = W3 + y4·x4
   = [2  2  2]

This is the final weight vector, as one epoch has passed.

The Hebb net for the OR function is →

(diagram: x1 with weight 2, x2 with weight 2, and the bias B with weight 2, all feeding the o/p y)

For the Hebb net, the threshold is 0,

∴ y = { 1,   x ≥ 0
      { −1,  x < 0

(Can be checked for all the inputs.)
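A quick check in Python that the trained net [2 2 2] reproduces the bipolar OR truth table:

def hebb_or(x1, x2):
    # weights from the worked example: w1 = w2 = b = 2, threshold 0
    net = 2 * x1 + 2 * x2 + 2 * 1
    return 1 if net >= 0 else -1

for x1, x2 in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    print(x1, x2, hebb_or(x1, x2))   # -> -1, 1, 1, 1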
Decision boundary →

y = 2x1 + 2x2 + 2b

Let y = 0, and since b = 1,

2x1 + 2x2 + 2 = 0
x2 = −x1 − 1

(plot: the line x2 = −x1 − 1, crossing both axes at −1, separates (−1, −1) from the other three input points)
Gradient Descent Based Learning →

Based on the minimization of the error E.

The activation function needs to be differentiable.

Δwᵢⱼ = −η · ∂E/∂wᵢⱼ

Δwᵢⱼ → change in the weights
η → learning rate
∂E/∂wᵢⱼ → gradient of the error w.r.t. wᵢⱼ
(the negative sign moves the weights opposite to the gradient, so E decreases)

The Widrow-Hoff Delta Rule and Back-Propagation are examples.
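As an illustration, a sketch of this update for a single linear unit with squared error E = ½(t − y)², for which ∂E/∂wᵢ = −(t − y)xᵢ (the unit, data, and learning rate here are assumptions, not from the notes):

def delta_rule_step(w, x, t, eta):
    # linear unit: y = sum(w_i * x_i)
    y = sum(wi * xi for wi, xi in zip(w, x))
    # gradient descent on E = 0.5*(t - y)^2:  w_i += eta * (t - y) * x_i
    return [wi + eta * (t - y) * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]
for _ in range(20):
    w = delta_rule_step(w, x=[1.0, 1.0], t=1.0, eta=0.1)
print(w)   # the unit's output for x = [1, 1] approaches the target 1.0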
Perceptron Training Algorithm →

Perceptron training is a supervised learning algorithm.

Let wᵢⱼᵏ be the ijᵗʰ weight at the kᵗʰ iteration.

(diagram: single layer perceptron model — i/p neurons x1, …, xn connected through weights wᵢⱼ to o/p neurons y1, …, ym)
The weight for the (k+1)ᵗʰ iteration is given by

wᵢⱼᵏ⁺¹ = wᵢⱼᵏ + α(t − yᵢ)xᵢ

where xᵢ is the input,
yᵢ is the output,
t is the target or desired o/p,
and α is the learning rate.

A smaller α leads to slow learning and a larger α leads to fast learning.
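A minimal Python sketch of this training loop on a linearly separable example (the AND data, learning rate, and epoch count below are illustrative choices; the bias is handled by appending a constant 1 to every input):

def perceptron_train(data, alpha=0.2, epochs=10):
    # data: list of (inputs_with_bias, target) pairs
    w = [0.0] * len(data[0][0])
    step = lambda s: 1 if s >= 0 else 0
    for _ in range(epochs):
        for x, t in data:
            y = step(sum(wi * xi for wi, xi in zip(w, x)))
            # perceptron rule: w_i <- w_i + alpha * (t - y) * x_i
            w = [wi + alpha * (t - y) * xi for wi, xi in zip(w, x)]
    return w

# AND function, which is linearly separable (last component is the bias input)
and_data = [([0, 0, 1], 0), ([0, 1, 1], 0), ([1, 0, 1], 0), ([1, 1, 1], 1)]
print(perceptron_train(and_data))   # converges to weights that classify all four points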

Perceptrons cannot handle tasks which are not linearly separable.
(diagram: two point classes A and B that can be split by a straight line → A and B are linearly separable)

(diagram: classes A and B mixed so that no straight line splits them → A and B are not linearly separable)

e.g. consider the XOR problem →

(plot: in the x1-x2 plane, (0,0) and (1,1) belong to one class while (0,1) and (1,0) belong to the other)

These are not linearly separable and hence cannot be solved using perceptrons.