
Artificial Neural Networks and Their Applications

Course by
• Prof. Dr. Mohsen Rashwan
• Prof. Dr. Mona Riad Elghoneimy
Artificial Neural Networks
Contents of the first part of the course:
• Introduction
• Computers And Brain
• Single Layer Perceptron
• Multilayer Perceptron
• The Hopfield Network
• The Self Organising Network
Reference: Simon Haykin, ‘Neural Networks: A Comprehensive Foundation’.
Introduction
What is a neural network?
• Work on neural networks has been motivated by the recognition that the brain computes in a different way from the conventional digital computer. The brain is a highly complex, nonlinear, and parallel computer. It has the capability of organising neurons so as to perform certain computations many times faster than the fastest digital computer in existence today.
• The approach of neural computing is to capture the guiding principles that underlie the brain’s solution to these problems and apply them to computer systems.
• One of the important features of the brain is that it can teach itself: it learns from examples, as children do.
Introduction
Structure of biological neuron
Introduction
The neuron is the basic unit of the brain. It is a
simple micro-processing unit which receives and
combines signals from many other neurons through
input paths called dendrites. Signals arriving at the
dendrites are communicated to the neuron body
through synapses.
The brain consists of tens of billions of
interconnected neurons.
The axon, or output path of a neuron, splits up
and connects to the dendrites, or input paths, of
other neurons through junctions called synapses.
Analogy To The Brain
• In an artificial neural network, the processing element (PE) is
analogous to the biological neuron.
• The processing element has many input paths (dendrites), which
are usually combined by simple summation of the values on these
input paths. The combined input is then modified by a transfer
function; this transfer function can be a threshold function, which
only passes information if the combined activity level reaches a
certain level.
• The output of the transfer function is generally passed directly
to the output path of the processing element.
Artificial neuron
• An artificial neural network consists of processing
elements called neurons. An artificial neuron tries to
replicate the structure and behavior of the natural
neuron.
• Specifically, a signal xj at the input of a synapse connected to
neuron k is multiplied by the synaptic weight wkj. Unlike a
synapse in the brain, the synaptic weight of an artificial
neuron may lie in a range that includes negative as well as
positive values.
• An adder for summing the input signals, weighted by the
respective synapses of the neuron; the operations described
here constitute a linear combiner.
• An activation function ø for limiting the amplitude of the
output of a neuron. The activation function is also referred to
as a squashing function in that it squashes(limits) the
permissible amplitude range of the output signal to some finite
value.
• The neuron model also includes an externally applied bias,
denoted by b. The bias b, has the effect of increasing or
lowering the net input of the activation function, depending on
whether it is positive or negative, respectively.
• In mathematical terms, we may describe a neuron k by writing
the following pair of equations:
v_k = Σ_{j=1}^{m} w_kj · x_j

y_k = φ(v_k + b_k)
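The pair of equations above can be sketched in a few lines of Python. This is a minimal illustration only; the sigmoid is one possible choice of φ, and the weights, bias, and inputs are made-up numbers:

```python
import math

def neuron_output(x, w, b):
    """Compute y_k = phi(v_k + b_k) for one neuron,
    where v_k is the weighted sum of the inputs."""
    v = sum(w_j * x_j for w_j, x_j in zip(w, x))  # linear combiner
    return 1.0 / (1.0 + math.exp(-(v + b)))       # sigmoid activation

# Hypothetical example: 3 inputs, arbitrary weights and bias
x = [0.5, -1.0, 2.0]
w = [0.4, 0.3, -0.1]
b = 0.2
y = neuron_output(x, w, b)  # output lies in (0, 1)
```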
Types of activation function
• (a) Threshold function.
• (b) Piecewise-linear function.
• (c) Sigmoid function for
varying slope parameter a.
Types Of Activation Function
• 1- Threshold function:

φ(u) = 1 if u ≥ 0
φ(u) = 0 if u < 0

• 2- Piecewise-linear function:

φ(u) = 1 if u ≥ 1/2
φ(u) = u if −1/2 < u < 1/2
φ(u) = 0 if u ≤ −1/2
• 3- Sigmoid function: it is the most common form of activation
function used in the construction of neural networks

φ(u) = 1 / (1 + exp(−a·u))

where a is the slope parameter.
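The three activation functions can be written directly from their definitions; a minimal sketch:

```python
import math

def threshold(u):
    # phi(u) = 1 if u >= 0, else 0
    return 1.0 if u >= 0 else 0.0

def piecewise_linear(u):
    # phi(u) = 1 for u >= 1/2, u for -1/2 < u < 1/2, 0 for u <= -1/2
    if u >= 0.5:
        return 1.0
    if u <= -0.5:
        return 0.0
    return u

def sigmoid(u, a=1.0):
    # phi(u) = 1 / (1 + exp(-a*u)); a is the slope parameter
    return 1.0 / (1.0 + math.exp(-a * u))
```

Increasing the slope parameter a makes the sigmoid steeper; in the limit it approaches the threshold function.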


Major Properties

• 1- Nonlinearity: A neuron is basically a nonlinear device.


Consequently, a neural network, made up of an
interconnection of neurons, is itself nonlinear. Moreover,
the nonlinearity is of a special kind in the sense that it is
distributed throughout the network. Nonlinearity is a
highly important property, particularly if the physical
mechanism responsible for the generation of an input
signal (e.g., speech signal) is inherently nonlinear.
Major Properties
• 2- Input – Output Mapping: The weights of a neural network are
modified by applying a set of training samples. Each sample
consists of a unique input signal and the corresponding desired
response. The network is presented with an example picked at random
from the set, and the weights of the network are modified to
minimize the difference between the desired response and the
actual response of the network produced by the input signal. The
training of the network is repeated for many examples in the set
until the network reaches a steady state, where there are no further
significant changes in the weights. Thus the network learns from the
examples by constructing an input- output mapping for the
problem.
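The training procedure described above can be sketched with a simple delta-rule update on a single linear neuron. This is an illustrative sketch only: the learning rate, sample data, and stopping rule are made up, and real networks use more elaborate algorithms such as back-propagation:

```python
import random

def train(samples, n_inputs, lr=0.1, epochs=200):
    """Delta-rule sketch: repeatedly pick a sample at random and nudge
    the weights to reduce the gap between desired and actual response."""
    w = [0.0] * n_inputs
    for _ in range(epochs):
        x, d = random.choice(samples)              # example picked at random
        y = sum(wi * xi for wi, xi in zip(w, x))   # actual response (linear)
        e = d - y                                  # desired minus actual
        w = [wi + lr * e * xi for wi, xi in zip(w, x)]
    return w

# Hypothetical training set: the desired response is d = 2 * x1
samples = [([1.0], 2.0), ([2.0], 4.0), ([0.5], 1.0)]
w = train(samples, n_inputs=1)  # w[0] converges toward 2
```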
Major Properties

• 3- Adaptivity Neural networks have a built-in


capability to adapt their weights to changes in
the surrounding environment. In particular, a
neural network trained to operate in a specific
environment can easily be retrained to deal with
minor changes in the operating environmental
conditions.
Major Properties

• 4- Evidential Response: In the context of pattern


classification, a neural network can be designed
to provide information not only about which
particular pattern to select, but also about the
confidence in the decision made. This latter
information may be used to reject ambiguous
patterns and thereby improve the classification
performance of the network.
Major Properties

• 5- Fault Tolerance: If a neuron or its connecting


links are damaged, recall of a stored pattern is
impaired in quality. However, due to the
distributed nature of information in the network,
the damage has to be extensive before the
overall response of the network is degraded
seriously.
Major Properties

• 6- Uniformity of Analysis and Design: The


same notation is used in all the domains involving
the application of neural networks. This
commonality makes it possible to share theories
and algorithms in different applications of neural
networks.
Neural network viewed
as directed graphs

A signal flow graph is a


network of directed links That
are connected at certain
points called nodes. the flow
of signals of the graph is
dictated by three basic rules.
• Rule 1:
The signal flows along a link only in the direction defined by the arrow
on the link. The two types of links are:
1. synaptic links, from node signal xi to yj, governed by the weight wij
2. activation links, governed by the nonlinear activation function φ(·).

• Rule 2:
A node signal equals the algebraic sum of all signals entering the node

• Rule 3:
The signal at a node is transmitted to each outgoing link originating
from that node, with the transmission being entirely independent of
the transfer functions of the outgoing links.
Signal Flow Graph Of A Neuron
Architectural graph of a neuron

• Source nodes supply input signals to


the graph
• Each neuron is represented by the
single node called a computation node
• The links connecting the source and
computation nodes of the graph carry
no weight; they only indicate the
direction of signal flow
• This is called an architectural graph
A square denotes a source node;
a circle denotes a computation node
Neural Network Architectures

• 1- Single layer feedforward


networks
The simplest form has an input layer
of source nodes that projects onto an
output layer of neurons, but not vice
versa, so it is a feedforward network.
We do not count the input layer of
source nodes because no computation
is performed there
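A single-layer feedforward network is just one bank of neurons applied to the source-node signals. A minimal sketch, using sigmoid neurons and made-up weights:

```python
import math

def single_layer_forward(x, W, b):
    """One layer of neurons: y_k = sigmoid(sum_j W[k][j]*x[j] + b[k]).
    The input layer of source nodes performs no computation."""
    out = []
    for w_k, b_k in zip(W, b):
        v = sum(w * xi for w, xi in zip(w_k, x))   # linear combiner
        out.append(1.0 / (1.0 + math.exp(-(v + b_k))))
    return out

# Hypothetical network: 3 source nodes projecting onto 2 neurons
x = [1.0, 0.5, -0.5]
W = [[0.2, -0.1, 0.4], [0.0, 0.3, 0.1]]  # one weight row per neuron
b = [0.1, -0.2]
y = single_layer_forward(x, W, b)        # two outputs, one per neuron
```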
Neural Network Architectures
2- Multilayer feedforward networks
• There is one or more hidden layers whose
computational nodes are called neurons or hidden units
• The source nodes in the input layer supply elements of
the input vector which contributes the input signals
applied to the neurons in the second layer or the first
hidden layer
The output signal of the second layer are used as inputs
to the third layer and so on for the rest of the network
until we reach the output layer
Multilayer feedforward network fully
connected with one hidden layer
Neural Network Architectures

• In last figure the network has 10 source notes, 4


hidden neurons and 2 outputs. It is referred to as
10 – 4 – 2 network
• In general it is written in the form:
p – h1 – h2 – q network
In case we have two hidden layers
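The layer-by-layer forward pass of such a network can be sketched by chaining fully connected layers. The 10 – 4 – 2 shape follows the example above; the random initialisation and sigmoid neurons are illustrative assumptions:

```python
import math
import random

def layer(x, W, b):
    # One fully connected layer of sigmoid neurons
    return [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + bk)))
            for row, bk in zip(W, b)]

def make_layer(n_in, n_out):
    # Small random weights, zero biases (made-up initialisation)
    W = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
    return W, [0.0] * n_out

# A 10-4-2 network: 10 source nodes, 4 hidden neurons, 2 output neurons
W1, b1 = make_layer(10, 4)
W2, b2 = make_layer(4, 2)
x = [random.random() for _ in range(10)]   # signals from the source nodes
hidden = layer(x, W1, b1)                  # outputs of the hidden layer
output = layer(hidden, W2, b2)             # outputs of the network
```

The outputs of each layer feed the next, exactly as described: source nodes → hidden layer → output layer.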
Neural Network Architectures
3- Recurrent Network (e.g. Hopfield NN)
It has at least one feedback loop, and it may also have hidden neurons.
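The feedback loop can be illustrated with a tiny Hopfield-style update, in which each neuron's output feeds back as input to the others. This is only a sketch ahead of the detailed Hopfield material later in the course; the weights shown are made up to store the pattern [1, -1, 1]:

```python
def hopfield_step(s, W):
    """One synchronous update of a small recurrent network:
    each neuron recomputes its state from the others' states."""
    def sign(v):
        return 1 if v >= 0 else -1
    n = len(s)
    return [sign(sum(W[i][j] * s[j] for j in range(n) if j != i))
            for i in range(n)]

# Hypothetical symmetric weights (zero diagonal) storing [1, -1, 1]
W = [[0, -1, 1],
     [-1, 0, -1],
     [1, -1, 0]]
s = [1, 1, 1]             # start from a corrupted state
s = hopfield_step(s, W)   # → [1, -1, 1], the stored pattern
```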
Neural Network Architectures

• 4- Lattice Structures:
• This feedforward network consists of a one-
dimensional, two-dimensional, or higher-
dimensional array of neurons, with a set of source
nodes that supply the input signals to the array.
The figure depicts a one-dimensional lattice of 3
neurons fed from a layer of 3 source nodes.
