
Artificial Neural Network

An idea…

The idea goes back a long way; the field was established before the advent of computers.

The first artificial neuron was produced in 1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts, but the technology of the time did not allow them to do much with it.

Conventional computing versus artificial neural networks

Traditional computers:
- processing is sequential
- top-down learning
- function logically with a set of rules and calculations
- must learn only by doing different sequences or steps in an algorithm

An ANN:
- is inherently a multiprocessor
- bottom-up learning
- can function via images, pictures, and concepts
- can program itself

Getting familiar with ANNs

Model of an ANN

Rules for the operation of the neurons

1. Each neuron can have a number of input synaptic stages.
2. Each synaptic output stage impinges onto only one synaptic input stage on a subsequent neuron.
3. Synaptic input stages contribute to overcoming a threshold below which the neuron will not fire.
4. Neurons fire at discrete moments, not continuously.
5. Propagation delay is assumed to be constant for all neurons.
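A minimal sketch of a neuron that follows these rules, in Python (the weights, threshold, and inputs are illustrative assumptions, not values from the original slides):

```python
def neuron_fires(inputs, weights, threshold):
    """Rule 3: the synaptic input stages must together overcome
    the threshold, below which the neuron will not fire."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A 3-input neuron with unit weights and threshold 2 fires only
# when at least two of its inputs are active at a given moment.
print(neuron_fires([1, 1, 0], [1, 1, 1], 2))  # -> 1
print(neuron_fires([1, 0, 0], [1, 1, 1], 2))  # -> 0
```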

Firing rules

Example truth table for a 3-input neuron (0/1 means the output is undefined for that input):

X1:   0    0    0    0    1    1    1    1
X2:   0    0    1    1    0    0    1    1
X3:   0    1    0    1    0    1    0    1
OUT:  0    0   0/1  0/1  0/1   1   0/1   1
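The 0/1 entries are inputs the neuron was never taught. The classic way to resolve them (stated here as an assumption, since the slides do not spell it out) is the Hamming-distance rule: side with the nearest taught pattern, and stay undefined on ties. A sketch in Python, using the taught patterns implied by the table (fire for 101 and 111, stay silent for 000 and 001):

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def firing_rule(pattern, fire_set, silent_set):
    """Fire if the pattern is nearer (in Hamming distance) to the
    1-taught set than to the 0-taught set; undecided on ties."""
    d_fire = min(hamming(pattern, p) for p in fire_set)
    d_silent = min(hamming(pattern, p) for p in silent_set)
    if d_fire < d_silent:
        return "1"
    if d_fire > d_silent:
        return "0"
    return "0/1"  # equidistant: output stays undefined

fire_set = [(1, 0, 1), (1, 1, 1)]    # taught to fire
silent_set = [(0, 0, 0), (0, 0, 1)]  # taught not to fire
print(firing_rule((1, 1, 0), fire_set, silent_set))  # -> "1"
print(firing_rule((0, 1, 1), fire_set, silent_set))  # -> "0/1"
```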

The perceptron

The perceptron is a mathematical model of a biological neuron.

A perceptron calculates the weighted sum of the input values and outputs a non-zero value only when the weighted sum exceeds a certain threshold:

Output of P = 1 if Ax + By > C
            = 0 if Ax + By <= C
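This decision rule translates directly into Python; the particular weights A, B and threshold C below are illustrative, chosen so the perceptron computes logical AND:

```python
def perceptron(x, y, A, B, C):
    """Fire (1) only when the weighted sum A*x + B*y exceeds
    the threshold C; otherwise output 0."""
    return 1 if A * x + B * y > C else 0

# With A = B = 1 and C = 1.5 this perceptron computes logical AND:
for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, "->", perceptron(x, y, 1, 1, 1.5))
```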

Architecture of Neural Networks

- Competitive neural networks
- Feed-forward networks
- Feed-back networks

Simple competitive networks

Composed of:
- the Hamming net
- the Maxnet

The basic idea is that the nodes compete against each other by sending out inhibiting signals to each other. Each perceptron at the top layer of the Hamming net calculates a weighted sum of the input values. This weighted sum can be interpreted as the dot product of the input vector and the weight vector; in this way, the Hamming net finds the "distance" of each node's weight vector from the input vector.

The Maxnet is a fully connected network, with each node connecting to every other node, including itself. A Maxnet connects the top nodes of the Hamming net and selects the node with the greatest dot product. Thus, in a simple competitive network, whenever an input is presented, the whole network selects the node whose weight vector is closest to the input vector, i.e. the winner.

Feed-forward networks

Feed-forward networks' characteristics

1. Perceptrons are arranged in layers, with the first layer taking in inputs and the last layer producing outputs. The middle layers have no connection with the external world, and hence are called hidden layers.
2. Each perceptron in one layer is connected to every perceptron in the next layer. Information is thus constantly "fed forward" from one layer to the next, which explains why these networks are called feed-forward networks.
3. There is no connection among perceptrons in the same layer.

Feed-forward networks are commonly used for classification.
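A minimal forward pass through such a layered network, with one hidden layer; every weight and bias below is an arbitrary illustrative number:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One fully connected layer: every node sees every value
    produced by the previous layer (characteristic 2 above)."""
    return [sigmoid(sum(w * v for w, v in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -0.2]                                           # input layer
hidden = layer(x, [[0.1, 0.4], [-0.3, 0.2]], [0.0, 0.1])  # hidden layer
output = layer(hidden, [[0.7, -0.5]], [0.05])             # output layer
print(output)
```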

Back-propagation: learning in feed-forward networks

Pairs of input and output values are fed into the network for many cycles, so that the network 'learns' the relationship between the input and output. For example:

i1 = (1, 2)   i2 = (1, 3)   i3 = (2, 3)   i4 = (3, 4)   i5 = (5, 6)   i6 = (6, 7)

with each input ik paired with a two-component target output ok.

BACK-PROPAGATION FORMULAE
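The textbook back-propagation update for a one-hidden-layer sigmoid network with squared-error loss is sketched below; the network shape, initial weights, and learning rate eta are illustrative assumptions, not values from the original slides:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def backprop_step(x, t, W_h, W_o, eta=0.5):
    """One weight update on a small sigmoid network.
    W_h: one weight row per hidden unit (last entry is the bias);
    W_o: one weight row per output unit (last entry is the bias)."""
    xb = list(x) + [1.0]   # inputs plus bias term
    h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in W_h]
    hb = h + [1.0]
    o = [sigmoid(sum(w * v for w, v in zip(row, hb))) for row in W_o]

    # Output deltas: delta_k = (t_k - o_k) * o_k * (1 - o_k)
    d_o = [(tk - ok) * ok * (1 - ok) for tk, ok in zip(t, o)]
    # Hidden deltas: delta_j = h_j * (1 - h_j) * sum_k delta_k * w_kj
    d_h = [h[j] * (1 - h[j]) *
           sum(d_o[k] * W_o[k][j] for k in range(len(W_o)))
           for j in range(len(h))]

    # Gradient step: w += eta * delta * (input feeding that weight)
    for k in range(len(W_o)):
        for j in range(len(hb)):
            W_o[k][j] += eta * d_o[k] * hb[j]
    for j in range(len(W_h)):
        for i in range(len(xb)):
            W_h[j][i] += eta * d_h[j] * xb[i]
    return o

# Repeatedly presenting an (input, target) pair like those listed
# above drives the network's output toward the target.
W_h = [[0.1, -0.2, 0.0], [0.3, 0.1, 0.0]]
W_o = [[0.2, -0.1, 0.0], [0.05, 0.15, 0.0]]
for _ in range(1000):
    backprop_step([1.0, 2.0], [0.0, 1.0], W_h, W_o)
print(backprop_step([1.0, 2.0], [0.0, 1.0], W_h, W_o))
```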

Feedback networks

The Learning Process

- Associative mapping
  - Auto-association
  - Hetero-association
- Regularity detection

- Supervised learning
- Unsupervised learning

Transfer/Activation Function

The activation function accepts a value that is the weighted sum of a neuron's inputs and returns a value that represents the output of the neuron. To make a neural network that performs some specific task, we must set the weights on the connections appropriately. A continuous activation function should be used for training neural networks, because a continuous function gives better feedback about the degree of error in the network.

Sigmoid Function
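A standard statement of the sigmoid (the usual definition, since the slide's own figure is not in the text):

    sigmoid(x) = 1 / (1 + e^(-x))

Its derivative can be computed from its output alone, sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)), which is exactly the o * (1 - o) factor used in the back-propagation deltas above.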

("remember" ) . (“data-mining ”) Association.Some specific details of neural networks     Classification. (“stock market prediction”) Clustering. (“pattern recognition programs “) Prediction.

Applications of neural networks

- Sales forecasting
- Data validation
- Recognition of speakers in communications
- Diagnosis of hepatitis
- Target marketing
- Industrial process control
- Recovery of telecommunications from faulty software
- Facial recognition

Conclusion

The computing world has a lot to gain from neural networks. Neural networks have huge potential; we will only get the best out of them when they are integrated with computing, AI, fuzzy logic and related subjects.