
Neural Networks and their Applications


What is a Neural Network?
Why Neural Networks?
Human and Artificial Neurons: Investigating the Similarities
Back-propagation Algorithm
Applications
Conclusion
Future Enhancements

What is a Neural Network?

An information-processing paradigm.

Composed of a large number of highly interconnected processing elements working in unison to solve specific problems.
Also known as neurocomputers, connectionist computers, parallel distributed processors, etc.
Key element: the novel structure of the information-processing system.

Resembles the brain in two respects:

Knowledge is acquired by the network from its environment through a learning process.
Interneuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.

First proposed in 1943 by Warren McCulloch and Walter Pitts.

Why Neural Networks?

Used to extract patterns and detect trends that are too complex to be noticed by humans or by other computer techniques.
A trained neural network can be thought of as an "expert" in the category of information it has been given to analyze.
This expert can then be used to provide projections for new situations of interest and to answer "what if" questions.

Other advantages:

Adaptive learning
Self-organization

Real Time Operation

Fault Tolerance via Redundant Information Coding


Disadvantages

The neural network needs training to operate.
Because the network finds out how to solve the problem by itself, its operation can be unpredictable.
Large neural networks require high processing time.
Different architectures require different types of training algorithms.

Human and Artificial Neurons: Investigating the Similarities

How Does the Human Brain Learn?


- Neurons collect signals from other neurons through a host of fine structures called dendrites.
- The neuron sends out spikes of electrical activity through the axon, which splits into thousands of branches.
- At the end of each branch, a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons.



Learning occurs by changing the effectiveness of the synapses, so that the influence of one neuron on another changes.
Brain cells (neurons) are five to six orders of magnitude slower than silicon logic gates: events in a silicon chip happen in the nanosecond range, whereas neural events happen in the millisecond range.
The brain makes up for the slow rate of operation of a neuron by the massive interconnection between neurons.

From Human Neurons to Artificial Neurons

We construct these neural networks by first trying to deduce the essential features of neurons and their interconnections. We then typically program a computer to simulate these features. However, because our knowledge of neurons is incomplete and our computing power is limited, our models are necessarily gross idealizations of real networks of neurons.

Inputs x_i: the signal at the input of synapse i connected to neuron k.

Weights w_ki: real-valued numbers; the weight of synapse i of neuron k.

Threshold u: also referred to as the bias value; the threshold can be regarded as another input/weight pair.

Activation function f: limits the amplitude of the output of a neuron.

Output y_k: the output signal of the neuron.
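The quantities defined above can be sketched as a single artificial neuron. The variable names (x, w, u) follow the definitions on this slide; the choice of a sigmoid as the activation function f is an assumption for illustration, not something the slides prescribe.

```python
import math

def neuron_output(x, w, u):
    """Compute y_k = f(sum_i w_ki * x_i - u) for one neuron k.

    x: input signals, w: synaptic weights, u: threshold (bias).
    The sigmoid activation squashes the result into (0, 1),
    limiting the amplitude of the output as described above.
    """
    v = sum(wi * xi for wi, xi in zip(w, x)) - u  # weighted sum minus threshold
    return 1.0 / (1.0 + math.exp(-v))             # activation function f

# Example: two inputs, equal weights, threshold 0.5 gives v = 0, y = 0.5
y = neuron_output([1.0, 0.0], [0.5, 0.5], 0.5)
```

Treating the threshold as another input/weight pair, as the slide notes, means fixing an extra input at 1 with weight -u, which simplifies the learning rules below.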

The modification of synaptic weights provides the traditional method for the design of NNs.

It is also possible for a NN to modify its own topology.

Interconnection layers

Multilayer perceptron

One of the most common neural network models.

Requires the desired output in order to learn, and is therefore called a supervised network.
Goal: to create a model that correctly maps the input to the output using historical data, so that the model can then be used to produce the output when the desired output is unknown.

Block diagram of Multi Layer Perceptron

Back-propagation Algorithm

Used to train MLPs and many other neural networks.
Input data is repeatedly presented to the NN.

The output of the NN is compared to the desired output, and an error is computed.

The error is fed back to the NN and used to adjust the weights such that the error decreases with each iteration.

Demonstration of a NN learning to model the XOR function:

With each presentation, the error between the network output and the desired output is computed and fed back to the neural network. The neural network uses this error to adjust its weights such that the error will be decreased. In order to train a neural network to perform some task, we must adjust the weights of each unit in such a way that the error between the desired output and the actual output is reduced. This process requires that the NN compute the error derivative with respect to the weights.
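The XOR demonstration described above can be sketched in NumPy. The network size (four hidden units), learning rate, epoch count, and random seed are illustrative assumptions, not values given in the slides; the loop implements exactly the cycle described: present inputs, compute the error, feed it back, and adjust the weights via the error derivatives.

```python
import numpy as np

np.random.seed(0)

# XOR training data: inputs X and desired outputs T
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# 2-4-1 network; the threshold is handled as an extra constant input (bias)
W1 = np.random.uniform(-1, 1, (3, 4))  # input (+bias) -> hidden
W2 = np.random.uniform(-1, 1, (5, 1))  # hidden (+bias) -> output

def forward(X, W1, W2):
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias input
    H = sigmoid(Xb @ W1)                       # hidden layer outputs
    Hb = np.hstack([H, np.ones((len(H), 1))])
    Y = sigmoid(Hb @ W2)                       # network output
    return Xb, Hb, Y

lr = 1.0
for epoch in range(10000):
    Xb, Hb, Y = forward(X, W1, W2)
    err = T - Y                                # error fed back to the NN
    # Error derivatives w.r.t. the weights (chain rule, back-propagated)
    dY = err * Y * (1 - Y)
    dH = (dY @ W2[:-1].T) * Hb[:, :-1] * (1 - Hb[:, :-1])
    W2 += lr * Hb.T @ dY                       # adjust weights so the
    W1 += lr * Xb.T @ dH                       # error decreases

_, _, Y = forward(X, W1, W2)
print(np.round(Y.ravel()))  # rounded outputs for the four XOR patterns
```

With these settings the mean squared error drops close to zero and the rounded outputs match the XOR truth table; a different seed or too few hidden units can occasionally leave the network stuck in a poor local minimum.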


Applications

Neural networks have broad applicability to real-world problems. They have been successfully applied to a broad spectrum of data-intensive applications, such as:

Voice recognition
Target recognition
Medical diagnosis
Process modeling and control
Credit rating
Targeted marketing
Financial forecasting


Conclusion

The ability of NNs to learn makes them very flexible and powerful.
There is no need to devise an algorithm in order to perform a specific task (no need to understand its internal mechanisms).
Very well suited to real-time systems because of their fast response and short computational times, which are due to their parallel architecture.
Even though NNs have huge potential, we will only get the best out of them when they are integrated with conventional computing.

Future Enhancements

Robots that can see, feel, and predict the world around them
Improved stock prediction
Common usage of self-driving cars
Handwritten documents automatically transformed into formatted word-processing documents
Composition of music
Self-diagnosis of medical problems using NNs
And much more

