
Artificial Neural Network

Guided By: Miss Jagruti Goswami

Submitted By: Jitendra Patel
              Miral Patel
              6th IT

C.U.SHAH COLLEGE OF ENGG. & TECH.


WADHWAN CITY – 363 030

C.U.Shah College of Engg. & Tech. 1



WADHWAN CITY

DIST : SURENDRANAGAR

CERTIFICATE

This is to certify that Mr. Jitendra Patel and Mr. Miral Patel,

studying in SEM – VI of B.E. Information Technology

with Roll Nos. 31 & 34, have successfully completed their seminar
on the following topic.

Topic Name: ARTIFICIAL NEURAL NETWORK

Staff – In charge Head of Dept.

(Miss Saroj Bodar)


Date : ___________


ACKNOWLEDGEMENT

I hereby take this opportunity to thank everyone who has helped
me in creating and formulating this seminar report. I especially
thank our H.O.D. Miss S.G. Bodar and our lab in-charge Miss Jagruti
madam for giving me the moral and academic support for presenting
the seminar. Lastly, I would like to thank all those who directly
or indirectly helped me in preparing this seminar report.


INDEX
1. Introduction
2. Definition
3. Structure Of Human Brain
4. Neurons
5. Basics Of ANN Model
6. Artificial Neural Networks
   6.1 How ANN Differs From Conventional Computers
   6.2 ANN Vs Von Neumann Computers
7. Perceptron
8. Learning Laws
   8.1 Hebb's Rule
   8.2 Hopfield Rule
   8.3 The Delta Rule
   8.4 The Gradient Descent Rule
   8.5 Kohonen's Learning Rule
9. Basic Structure Of ANNs
10. Network Architectures
   10.1 Single Layer Feed Forward ANN
   10.2 Multilayer Feed Forward ANN
   10.3 Recurrent ANN
11. Learning Of ANNs
   11.1 Learning With A Teacher
   11.2 Learning Without A Teacher
   11.3 Learning Tasks
12. Control
13. Adaptation
14. Generalization
15. Probabilistic ANN
16. Advantages
17. Limitations
18. Applications
19. References


Artificial Neural Network

Introduction
One thing that has always set human beings apart from the rest of the
animal kingdom is the brain. The most intelligent device on earth, the
"human brain" is the driving force that has made us an ever-progressing
species, diving deeper into technology and development as each day
passes.

Driven by his inquisitive nature, man tried to make machines that could
do intelligent job processing and take decisions according to
instructions fed to them. What resulted was the machine that
revolutionized the whole world: the "computer" (more technically, the
Von Neumann computer). Even though it could perform millions of
calculations every second, display incredible graphics and
3-dimensional animations, and play audio and video, it made the same
mistake every time.

Practice could not make it perfect. So the quest for a more intelligent
device continued. This research led to the birth of more powerful
processors with high-tech equipment attached to them, supercomputers
with the capability to handle more than one task at a time, and finally
networks with resource-sharing facilities. But still the problem of
designing machines with intelligent self-learning loomed large in front
of mankind. Then the idea of imitating the human brain struck the
designers, who started their research on one of the technologies that
will change the way computers work: Artificial Neural Networks.


Definition

Neural Networks are a specialized branch of Artificial Intelligence.

In general, Neural Networks are simply mathematical techniques
designed to accomplish a variety of tasks. Neural Networks use a set of
processing elements (or nodes) loosely analogous to neurons in the brain
(hence the name, neural networks). These nodes are interconnected in a
network that can then identify patterns in data as it is exposed to the
data. In a sense, the network learns from experience just as people do.
Neural networks can be configured in various arrangements to perform a
range of tasks including pattern recognition, data mining,
classification, and process modeling.


Structure of human brain

As stated earlier, Neural Networks are very similar to the
biological structure of the human brain. The functional division
of the brain is given below.

Left part                  Right part
Sequential                 Parallel
Functions:                 Functions:
  Rules                      Images
  Concepts                   Pictures
  Calculations               Control
Expert Systems             Neural Networks
Learn by Rules             Learn by experience

Functions of Brain

As shown in the figure, the left part of the brain deals with rules,
concepts and calculations. It follows 'Rule Based Learning' and hence
solves problems by passing them through rules. It has sequential pairs
of neurons. Therefore, this part of the brain is similar to expert
systems.

The right part of the brain, as shown in the figure, deals with
functions, images, pictures, and control. It follows 'parallel
learning' and hence learns through experience. It has parallel pairs of
neurons. Therefore, this part of the brain is similar to a Neural
Network.


Neurons

The conceptual constructs of a neural network stemmed from our
early understanding of the human brain. The brain is comprised of
billions and billions of interconnected neurons (some experts estimate
upwards of 10^11 neurons in the human brain). The fundamental building
blocks of this massively parallel cellular structure are really quite
simple when studied in isolation. A neuron receives incoming
electrochemical signals from its dendrites and collects these signals
at the neuron nucleus. The neuron nucleus has an internal threshold
that determines if the neuron itself fires in response to the incoming
information. If the combined incoming signals exceed this threshold,
the neuron fires and an electrochemical signal is sent to all neurons
connected to the firing neuron on its output connections, or axons.
Otherwise the incoming signals are ignored and the neuron remains
dormant.

There are many types of neurons or cells. From a neuron body
(soma), many fine branching fibers, called dendrites, protrude. The
dendrites conduct signals to the soma or cell body. Extending from a
neuron's soma, at a point called the axon hillock (initial segment), is
a long fiber called an axon, which generally splits into the smaller
branches of the axonal arborization. The tips of these axon branches
(also called nerve terminals, end bulbs, or telodendria) impinge either
upon the dendrites, somas or axons of other neurons, or upon effectors.


The axon-dendrite (axon-soma, axon-axon) contact between end
bulbs and the cell they impinge upon is called a synapse. The signal
flow in the neuron is (with some exceptions, when the flow can be
bi-directional) from the dendrites through the soma, converging at the
axon hillock and then down the axon to the end bulbs. A neuron
typically has many dendrites but only a single axon. Some neurons lack
axons, such as the amacrine cells.


Basics of Artificial Neural Models

The human brain is made up of computing elements, called neurons,
coupled with sensory receptors and effectors. The average human brain,
roughly three pounds in weight and 90 cubic inches in volume, is
estimated to contain about 100 billion cells of various types. A
neuron is a special cell that conducts an electrical signal, and there
are about 10 billion neurons in the human brain. The remaining 90
billion cells are called glial or glue cells, and these serve as
support cells for the neurons. Each neuron is about one-hundredth the
size of the period at the end of this sentence. Neurons interact
through contacts called synapses. Each synapse spans a gap about a
millionth of an inch wide. On average, each neuron receives signals
via thousands of synapses.

The motivation for artificial neural network (ANN) research is the
belief that a human's capabilities, particularly in real-time visual
perception, speech understanding, sensory information processing and
adaptivity, as well as intelligent decision making in general, come
from the organizational and computational principles exhibited in the
highly complex neural network of the human brain. Expectations of
faster and better solutions provide us with the challenge to build
machines using the same computational and organizational principles,
simplified and abstracted from the neurobiology of the brain.

Artificial Neural Network Model


Artificial Neural Network

Artificial neural networks (ANNs), also called parallel distributed
processing systems (PDPs) and connectionist systems, are intended for
modeling the organizational principles of the central nervous system,
with the hope that the biologically inspired computing capabilities of
the ANN will allow cognitive and sensory tasks to be performed more
easily and more satisfactorily than with conventional serial
processors. Because of the limitations of serial computers, much effort
has been devoted to the development of parallel processing
architectures, in which the function of a single processor is at a
level comparable to that of a neuron. If the interconnections between
such simple fine-grained processors are made adaptive, a neural
network results.

ANN structures, broadly classified as recurrent (involving
feedback) or non-recurrent (without feedback), have numerous processing
elements (also dubbed neurons, neurodes, units or cells) and
connections (forward and backward interlayer connections between
neurons in different layers, intralayer or lateral connections between
neurons in the same layer, and self-connections between the input and
output of the same neuron). Neural networks not only have differing
structures or topologies but are also distinguished from one another by
the way they learn, the manner in which computations are performed
(rule-based, fuzzy, even nonalgorithmic), and the component
characteristics of the neurons or the input/output description of the
synaptic dynamics. These networks are required to perform significant
processing tasks through collective local interactions that produce
global properties.

Since the components and connections and their packaging under
stringent spatial constraints make the system large-scale, the role of
graph theory, algorithms, and neuroscience is pervasive.


How do Neural Networks differ from Conventional Computers?

Neural Networks perform computation in a very different way than
conventional computers, where a single central processing unit
sequentially dictates every piece of the action. Neural Networks are
built from a large number of very simple processing elements that
individually deal with pieces of a big problem. A processing element
(PE) simply multiplies its inputs by a set of weights and transforms
the result into an output value. The principles of neural computation
come from the massively parallel processing and from the adaptive
nature of the parameters (weights) that interconnect the PEs.


Similarities and differences between a neural net and a von
Neumann computer

Neural net:
  Trained (learns by example) by adjusting the connection strengths,
  thresholds, and structure.
Von Neumann computer:
  Programmed with instructions (if-then analysis based on logic).

Neural net:
  Memory and processing elements are collocated.
Von Neumann computer:
  Memory and processing are separate.

Neural net:
  Parallel (discrete or continuous), digital, asynchronous.
Von Neumann computer:
  Sequential or serial, synchronous (with a clock).

Neural net:
  May be fault-tolerant because of distributed representation and
  large-scale redundancy.
Von Neumann computer:
  Not fault-tolerant.

Neural net:
  Self-organization during learning.
Von Neumann computer:
  Software dependent.

Neural net:
  Knowledge stored is adaptable; information is stored in the
  interconnections between neurons.
Von Neumann computer:
  Knowledge stored in an addressed memory location is strictly
  replaceable.

Neural net:
  Processing is anarchic.
Von Neumann computer:
  Processing is autocratic.

Neural net:
  Cycle time, which governs processing speed, occurs in the
  millisecond range.
Von Neumann computer:
  Cycle time corresponds to processing one step of a program in the
  CPU during one clock cycle, and occurs in the nanosecond range.


Perceptron

At the heart of every Neural Network is what is referred to as the
perceptron (sometimes called a processing element or neural node),
which is analogous to the neuron nucleus in the brain. (In a layered
network, the first hidden layer is made up of such perceptrons.) As was
the case in the brain, the operation of the perceptron is very simple;
however, also as in the brain, when all connected neurons operate as a
collective they can provide some very powerful learning capacity.

Input signals are applied to the node via input connections (dendrites
in the case of the brain). The connections have a "strength" which
changes as the system learns. In neural networks the strengths of the
connections are referred to as weights. Weights can either excite or
inhibit the transmission of the incoming signal. Mathematically,
incoming signal values are multiplied by the values of those particular
weights.

At the perceptron all weighted inputs are summed. This sum value is
then passed to a scaling function. The selection of the scaling
function is part of the neural network design.

The structure of the perceptron (neural node) is as follows.

Perceptron
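The sum-and-threshold behaviour described above can be sketched in a few lines of Python. The weights, threshold, and AND-like example below are illustrative assumptions, not values from the text:

```python
# Minimal perceptron sketch: weighted inputs are summed and passed
# through a scaling function (here a hard threshold). All numeric
# values below are illustrative, not taken from the report.

def perceptron(inputs, weights, threshold=0.0):
    """Fire (return 1) if the weighted input sum exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# With these weights the node fires only when both inputs are active,
# behaving like a logical AND.
and_weights = [0.6, 0.6]
print(perceptron([1, 1], and_weights, threshold=1.0))  # 1 (fires)
print(perceptron([1, 0], and_weights, threshold=1.0))  # 0 (dormant)
```

Different choices of weights and threshold make the same node compute different functions, which is what learning later adjusts.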


Learning Laws

Many learning laws are in common use. Most of these are some
variation of the best-known and oldest learning law, Hebb's rule.
Research into different learning functions continues as new ideas
routinely show up in trade publications. Some researchers have the
modeling of biological learning as their main objective. Others are
experimenting with adaptations of their perceptions of how nature
handles learning. Either way, man's understanding of how neural
processing actually works is very limited. Learning is certainly more
complex than the simplification represented by the learning laws
currently developed. A few of the major laws are presented as examples.

Hebb’s Rule
The first, and undoubtedly the best-known, learning rule was introduced
by Donald Hebb. The description appeared in his book The Organization
of Behavior in 1949. His basic rule is: if a neuron receives an input
from another neuron, and if both are highly active (mathematically have
the same sign), the weight between the neurons should be strengthened.
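Hebb's rule can be sketched as a single weight update; the learning-rate value of 0.1 is an illustrative assumption:

```python
# Hebb's rule sketch: if the two connected units are active together
# (same sign), the weight between them is strengthened; opposite
# signs weaken it. The learning rate of 0.1 is illustrative.

def hebb_update(weight, pre, post, rate=0.1):
    """Adjust the weight in proportion to the product of activities."""
    return weight + rate * pre * post

w = hebb_update(0.0, pre=1.0, post=1.0)   # both active: strengthened
print(w)  # 0.1
w = hebb_update(w, pre=1.0, post=-1.0)    # opposite signs: weakened
print(w)  # 0.0
```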

Hopfield Law
It is similar to Hebb's rule, with the exception that it specifies the
magnitude of the strengthening or weakening. It states: "if the desired
output and the input are both active or both inactive, increment the
connection weight by the learning rate; otherwise decrement the weight
by the learning rate".
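The stated rule translates directly into code; treating "active" as a positive value and using a learning rate of 0.1 are illustrative assumptions:

```python
# Hopfield law sketch: increment the weight by the learning rate when
# input and desired output agree (both active or both inactive),
# otherwise decrement it. "Active" here means a positive value.

def hopfield_update(weight, inp, desired, rate=0.1):
    agree = (inp > 0) == (desired > 0)
    return weight + rate if agree else weight - rate

print(hopfield_update(0.0, inp=1, desired=1))  # agree -> 0.1
print(hopfield_update(0.0, inp=1, desired=0))  # disagree -> -0.1
```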

The Delta Rule


This rule is a further variation of Hebb's Rule, and is one of the most
commonly used. It is based on the simple idea of continuously
modifying the strengths of the input connections to reduce the
difference (the delta) between the desired output value and the actual
output of a processing element. This rule changes the synaptic weights
in the way that minimizes the mean squared error of the network.

This rule is also referred to as the Widrow-Hoff Learning Rule and the
Least Mean Square (LMS) Learning Rule.

The way the Delta Rule works is that the delta error in the
output layer is transformed by the derivative of the transfer function
and is then used in the previous neural layer to adjust input
connection weights. In other words, the error is back-propagated into
previous layers one layer at a time. The process of back-propagating
the network errors continues until the first layer is reached. The
network type called feed-forward, back-propagation derives its name
from this method of computing the error term. When using the delta
rule, it is important to ensure that the input data set is well
randomized. Well-ordered or structured presentation of the training set
can lead to a network which cannot converge to the desired accuracy. If
that happens, then the network is incapable of learning the problem.
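For a single linear processing element, the Delta (LMS) update can be sketched as follows; the training pair and learning rate are illustrative assumptions:

```python
# Delta (Widrow-Hoff / LMS) rule sketch for one linear node: each
# weight moves in proportion to the delta between desired and actual
# output. The training pair and learning rate are illustrative.

def delta_update(weights, inputs, desired, rate=0.1):
    actual = sum(w * x for w, x in zip(weights, inputs))
    error = desired - actual                       # the "delta"
    return [w + rate * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(100):                               # repeated presentations
    weights = delta_update(weights, [1.0, 0.5], desired=1.0)

actual = sum(w * x for w, x in zip(weights, [1.0, 0.5]))
print(round(actual, 3))  # 1.0 -- the delta has been driven toward zero
```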

The Gradient Descent Rule


This rule is similar to the Delta rule in that the derivative of the
transfer function is still used to modify the delta error before it is
applied to the connection weights. Here, however, an additional
proportional constant tied to the learning rate is appended to the
final modifying factor acting upon the weights. This rule is commonly
used, even though it converges to a point of stability very slowly.

It has been shown that using different learning rates for different
layers of a network helps the learning process converge faster. In
these tests, the learning rates for the layers close to the output were
set lower than those for the layers near the input. This is especially
important for applications where the input data is not derived from a
strong underlying model.

Kohonen’s Learning Law


This procedure, developed by Teuvo Kohonen, was inspired by learning in
biological systems. In this procedure, the processing elements compete
for the opportunity to learn, or update their weights. The processing
element with the largest output is declared the winner and has the
capability of inhibiting its competitors as well as exciting its
neighbors. Only the winner is permitted an output, and only the winner
plus its neighbors are allowed to adjust their connection weights.
Further, the size of the neighborhood can vary during the training
period. The usual paradigm is to start with a larger definition of the
neighborhood, and narrow in as the training process proceeds. Because
the winning element is defined as the one that has the closest match to
the input pattern, Kohonen networks model the distribution of the data
and are sometimes referred to as self-organizing maps or
self-organizing topologies.
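A winner-take-all step of Kohonen's procedure can be sketched as below; the zero-size neighborhood and the numeric values are illustrative simplifications:

```python
# Kohonen competitive-learning sketch: units compete, the unit whose
# weight vector best matches the input wins, and only the winner moves
# its weights toward the input. The zero-size neighborhood and the
# numeric values are illustrative simplifications.

def winner(units, x):
    """Index of the unit whose weights are closest to input x."""
    return min(range(len(units)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(units[i], x)))

def kohonen_step(units, x, rate=0.5):
    i = winner(units, x)
    units[i] = [w + rate * (v - w) for w, v in zip(units[i], x)]
    return i

units = [[0.0, 0.0], [1.0, 1.0]]       # two competing units
i = kohonen_step(units, [0.9, 0.8])    # unit 1 matches best and adapts
print(i)  # 1
```

In a full self-organizing map the neighbors of the winner would also move, by an amount that shrinks as training proceeds.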


Basic Structure of artificial neural network


Input layer:
The bottom layer is known as the input layer; in this case x1 to x5
are the input-layer neurons.

Hidden layer:
The layers in between the input and output layers are known as hidden
layers, where the knowledge of past experience / training is stored.

Output layer:
The topmost layer, which gives the final output. In this case z1 and
z2 are the output neurons.

Basic Structure Of Artificial Neural Network


Network architectures

1). Single layer feedforward networks:

In this layered neural network the neurons are organized in the
form of layers. In the simplest form of a layered network, we have an
input layer of source nodes that projects onto an output layer of
neurons, but not vice-versa. In other words, this network is strictly
of a feed-forward or acyclic type. It is as shown in the figure.

Such a network is called a single-layer network, with the designation
"single layer" referring to the output layer of neurons.

2). Multilayer feed forward networks: The second class of feed-forward
neural network distinguishes itself by one or more hidden layers, whose
computation nodes are correspondingly called hidden neurons or units.
The function of hidden neurons is to intervene between the external
input and the network output in some useful manner. The ability of
hidden neurons to extract higher-order statistics is particularly
valuable when the size of the input layer is large.
The input vectors are fed forward to the 1st hidden layer, whose
outputs pass to the 2nd hidden layer, and so on until the last layer,
i.e. the output layer, which gives the actual network response.
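The layer-by-layer forward pass just described can be sketched as follows; the sigmoid activation and the weight values are illustrative assumptions:

```python
# Multilayer feed-forward pass sketch: the input vector feeds the 1st
# hidden layer, whose outputs feed the next layer, and so on to the
# output layer. The sigmoid activation and weights are illustrative.
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def layer(inputs, weights):
    """Each row of weights produces one neuron's output."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

def feedforward(x, layers):
    for weights in layers:     # propagate layer by layer
        x = layer(x, weights)
    return x                   # the actual network response

hidden = [[0.5, -0.5], [0.3, 0.8]]   # 2 inputs -> 2 hidden neurons
output = [[1.0, -1.0]]               # 2 hidden -> 1 output neuron
y = feedforward([1.0, 0.0], [hidden, output])
print(len(y))  # 1
```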

3). Recurrent networks:

A recurrent network distinguishes itself from a feed-forward
neural network in that it has at least one feedback loop. As shown in
the figures, when the output of a neuron is fed back into its own
inputs, this is referred to as self-feedback.
A recurrent network may consist of a single layer of neurons with each
neuron feeding its output signal back to the inputs of all the other
neurons. The network may or may not have hidden layers.
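Self-feedback can be sketched with a single neuron whose previous output is part of its next input; the tanh activation and weight values are illustrative assumptions:

```python
# Recurrent self-feedback sketch: the neuron's previous output is fed
# back as part of its input on the next time step, so earlier inputs
# keep influencing later outputs. Activation and weights illustrative.
import math

def recurrent_step(x, prev, w_in=0.5, w_back=0.5):
    """One time step: combine the new input with the fed-back output."""
    return math.tanh(w_in * x + w_back * prev)

state = 0.0
for x in [1.0, 1.0, 0.0]:      # the state carries history forward
    state = recurrent_step(x, state)
print(state > 0)  # True: the zero input still yields a positive output
```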


Learning of ANNs
The property that is of primary significance for a neural network is
its ability to learn from its environment, and to improve its
performance through learning.

A neural network learns about its environment through an interactive
process of adjustments applied to its synaptic weights and bias levels.
The network becomes more knowledgeable about its environment after each
iteration of the learning process.

Learning with a teacher:


1). Supervised learning:

Supervised learning is the learning process in which the teacher
teaches the network by giving it knowledge of the environment in the
form of sets of pre-calculated input-output examples.

As shown in the figure.


The neural network's response to the inputs is observed and compared
with the predefined output. The difference, referred to as the "error
signal", is calculated and fed back to the input-layer neurons along
with the inputs, to reduce the error until the network's response
matches the predefined outputs.
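The teacher-driven loop, with the error signal driving the weight adjustment, can be sketched as below; the example pairs and learning rate are illustrative assumptions:

```python
# Supervised-learning sketch: the network's response to each input is
# compared with the teacher's pre-calculated output, and the error
# signal drives the weight adjustment. Examples and rate illustrative.

def respond(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

def train(examples, rate=0.1, epochs=200):
    weights = [0.0, 0.0]
    for _ in range(epochs):
        for inputs, desired in examples:
            error = desired - respond(weights, inputs)  # error signal
            weights = [w + rate * error * x
                       for w, x in zip(weights, inputs)]
    return weights

# Teacher's examples, drawn from the target mapping y = 2*x1 + 1*x2.
examples = [([1.0, 0.0], 2.0), ([0.0, 1.0], 1.0), ([1.0, 1.0], 3.0)]
weights = train(examples)
print([round(w, 2) for w in weights])  # [2.0, 1.0]
```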

Learning without a teacher:


Unlike supervised learning, in unsupervised learning the learning
process takes place without a teacher; that is, there are no examples
of the function to be learned by the network.

1). Reinforcement learning / neurodynamic programming

In reinforcement learning, the learning of an input-output mapping is
performed through continued interaction with the environment in order
to minimize a scalar index of performance.

As shown in figure.


In reinforcement learning, because no information on what the right
output should be is provided, the system must employ some random search
strategy so that the space of plausible and rational choices is
searched until a correct answer is found. Reinforcement learning is
usually involved in exploring a new environment when some knowledge (or
subjective feeling) about the right response to environmental inputs is
available. The system receives an input from the environment and
produces an output as a response. Subsequently, it receives a reward or
a penalty from the environment. The system learns from a sequence of
such interactions.

2). Unsupervised learning:

In unsupervised or self-organized learning there is no external teacher
or critic to oversee the learning process.

As indicated in the figure.

Rather, provision is made for a task-independent measure of the quality
of the representation that the network is required to learn, and the
free parameters of the network are optimized with respect to that
measure. Once the network has become tuned to the statistical
regularities of the input data, it develops the ability to form
internal representations for encoding features of the input and thereby
to create new classes automatically.


Learning tasks
Pattern recognition:

Humans are good at pattern recognition. We can recognize the familiar
face of a person even though that person has aged since our last
encounter, identify a familiar person by his voice on the telephone, or
recognize food by its fragrance.

Pattern recognition is formally defined as the process whereby a
received pattern/signal is assigned to one of a prescribed number of
classes. A neural network performs pattern recognition by first
undergoing a training session, during which the network is repeatedly
presented a set of input patterns along with the category to which each
particular pattern belongs. Later, a new pattern that has not been seen
before, but which belongs to the same category of patterns used to
train the network, is presented to the network.

The network is able to identify the class of that particular pattern
because of the information it has extracted from the training data.
Pattern recognition performed by a neural network is statistical in
nature, with the patterns being represented by points in a
multidimensional decision space.

The decision space is divided into regions, each one of which is
associated with a class. The decision boundaries are determined by the
training process.
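The idea of points in a decision space with class regions can be sketched with a simple nearest-point rule; the training points below are illustrative assumptions:

```python
# Decision-region sketch: patterns are points in a decision space, and
# a new pattern is assigned to the class whose training point lies
# closest, i.e. whose region it falls in. Points are illustrative.

def classify(pattern, classes):
    """Assign the pattern to the class of the nearest training point."""
    best_label, best_dist = None, float("inf")
    for label, points in classes.items():
        for p in points:
            d = sum((a - b) ** 2 for a, b in zip(pattern, p))
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label

classes = {"A": [(0.0, 0.0), (0.2, 0.1)],
           "B": [(1.0, 1.0), (0.9, 0.8)]}
print(classify((0.1, 0.2), classes))  # A
print(classify((0.8, 0.9), classes))  # B
```

A trained neural network carves out such regions with its weights rather than by storing the training points explicitly.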


As shown in the figure, in generic terms, pattern-recognition machines
using neural networks may take two forms:

1). Features are extracted through an unsupervised network.

2). The features are passed to a supervised network for pattern
classification to give the final output.


Control
The control of a plant is another learning task that can be done by a
neural network; by a 'plant' we mean a process or critical part of a
system that is to be maintained in a controlled condition. The
relevance of learning to control should not be surprising because,
after all, the human brain is a computer, the outputs of which as a
whole system are actions. In the context of control, the brain is
living proof that it is possible to build a generalized controller
that takes full advantage of parallel distributed hardware and can
control many thousands of processes, as the brain does to control
thousands of muscles.


Adaptation
The environment of interest is often nonstationary, which means that
the statistical parameters of the information-bearing signals generated
by the environment vary with time. In situations of this kind, the
traditional methods of supervised learning may prove to be inadequate
because the network is not equipped with the necessary means to track
the statistical variation of the environment in which it operates. To
overcome this shortcoming, it is desirable for a neural network to
continually adapt its free parameters to variations in the incoming
signals in a real-time fashion. Thus an adaptive system responds to
every distinct input as a novel one. In other words, the learning
process encountered in an adaptive system never stops, with learning
going on while signal processing is being performed by the system. This
form of learning is called continuous learning or learning on the fly.

Generalization
In back-propagation learning we typically start with a training sample
and use the back-propagation algorithm to compute the synaptic weights
of a multilayer perceptron by loading (encoding) as many of the
training examples as possible into the network. The hope is that the
neural network so designed will generalize. A network is said to
generalize well when the input-output mapping computed by the network
is correct, or nearly so, for test data never used in creating or
training the network; the term generalization is borrowed from
psychology.

A neural network that is designed to generalize well will produce a
correct input-output mapping even when the input is slightly different
from the examples used to train the network. When, however, a neural
network learns too many input-output examples, the network may end up
memorizing the training data. It may do so by finding a feature that is
present in the training data but not true of the underlying function
that is to be modeled. Such a phenomenon is referred to as overfitting
or overtraining. When the network is overtrained, it loses the ability
to generalize between similar input-output patterns.

The probabilistic neural network


Another multilayer feed-forward network is the probabilistic neural
network (PNN). In addition to the input layer, the PNN has two hidden
layers and an output layer. The major difference from a feed-forward
network trained by back-propagation is that the PNN can be constructed
after only a single pass over the training exemplars in its original
form, or two passes in a modified version. The activation function of a
neuron in the case of the PNN is statistically derived from estimates
of probability density functions (PDFs) based on the training patterns.


Advantages of Neural Networks


1) Networks start processing the data without any preconceived
hypothesis. They start with random weight assignments to the
various input variables. Adjustments are made based on the
difference between predicted and actual output. This allows for
an unbiased and better understanding of the data.

2) Neural networks can be retrained using additional input variables
and numbers of individuals. Once trained, they can be called on to
predict for a new patient.

3) There are several neural network models available to choose
from for a particular problem.

4) Once trained, they are very fast.

5) Due to increased accuracy, they result in cost savings.

6) Neural networks are able to represent any function; therefore
they are called 'Universal Approximators'.

7) Neural networks are able to learn from representative examples by
back-propagating errors.


Limitations of Neural Network


 Low Learning Rate: For problems requiring a large and complex
network architecture or having a large number of training
examples, the time needed to train the network can become
excessively long.

 Forgetfulness: The network tends to forget old training
examples as it is presented with new ones. A previously trained
neural network that must be updated with new information must be
trained using both the old and new examples; there is currently no
known way to incrementally train the network.

 Imprecision: Neural networks do not provide precise
numerical answers, but rather relate an input pattern to the most
probable output state.

 Black box approach: Neural networks can be trained to
transform an input pattern to an output, but provide no insight into
the physics behind the transformation.

 Limited Flexibility: An ANN is designed and implemented
for only one particular system. It is not applicable to another
system.


Application Of Artificial Neural Network

In parallel with the development of theories and architectures for
neural networks, the scope for applications is broadening at a rapid
pace. Neural networks may develop intuitive concepts but are inherently
ill-suited for implementing rules precisely, as in the case of
rule-based computing. Some of the decision-making tools of the human
brain, such as the seats of consciousness, thought, and intuition, do
not seem to be within our capabilities for comprehension in the near
future and are dubbed by some to be essentially non-algorithmic.

Following are a few applications where neural networks are presently
employed:

1) Time Series Prediction


Predicting the future has always been one of humanity’s desires. Time
series measurements are the means for us to characterize and understand a
system and to predict in future behavior.

Gershenfeld and Weigend defined three goals for time series analysis:
forecasting, modeling, and characterization. Forecasting is predicting the
short-term evolution of the system. Modeling involves finding a description
that accurately captures the features of the long-term behavior. The goal of
characterization is to determine the fundamental properties of the system,
such as the degrees of freedom or the amount of randomness. The
traditional methods used for time series prediction are the moving average
(MA) model, the autoregressive (AR) model, or the combination of the two,
the ARMA model.
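As an illustration of the classical approach, a minimal autoregressive (AR) predictor can be sketched in a few lines. The model order, the toy sine series, and the function names here are illustrative assumptions, not part of the original text:

```python
import numpy as np

# Minimal AR(p) sketch: fit x[t] ~ a1*x[t-1] + ... + ap*x[t-p] by least
# squares, then forecast one step ahead. Toy data, illustrative only.
def fit_ar(series, order=2):
    X = np.column_stack(
        [series[order - 1 - k : len(series) - 1 - k] for k in range(order)]
    )
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(series, coeffs):
    recent = series[-1 : -len(coeffs) - 1 : -1]  # newest value first
    return float(np.dot(coeffs, recent))

t = np.arange(200)
series = np.sin(0.1 * t)        # a sinusoid obeys an exact AR(2) recurrence
coeffs = fit_ar(series, order=2)
forecast = predict_next(series, coeffs)
```

Because the sinusoid satisfies an exact second-order recurrence, the fitted coefficients come out to 2·cos(0.1) and -1 and the one-step forecast is essentially exact; real-world series are noisy and only approximately linear, which is where neural and nonlinear methods come in.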

Neural network approaches have produced some of the best short-term
predictions. However, methods that reconstruct the state space by
time-delay embedding and develop a representation for the geometry of the
system's state space have yielded better long-term predictions than neural
networks in some cases.


2) Speech Generation
One of the earliest successful applications of the back-propagation
algorithm for training multilayer feed-forward networks was a speech
generation system called NETtalk, developed by Sejnowski and
Rosenberg. NETtalk is a fully connected, layered feed-forward network with
only one hidden layer. It was trained to pronounce written English text.
Turning written English text into speech is a difficult task, because most
phonological rules have exceptions that are context-sensitive.

NETtalk is a simple network that learns this function in several hours
from exemplars.
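NETtalk's architecture and data are not reproduced here, but the general mechanism it used, a fully connected feed-forward network with one hidden layer trained by back-propagation, can be sketched on a toy problem (XOR). The layer sizes, learning rate, and iteration count below are arbitrary illustrative choices:

```python
import numpy as np

# Toy back-propagation sketch (XOR), not NETtalk itself: one fully
# connected hidden layer, sigmoid units, batch gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out0 = forward(X)
initial_error = np.mean((out0 - y) ** 2)

lr = 2.0
for _ in range(10000):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)      # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)       # back-propagated hidden delta
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

_, out = forward(X)
final_error = np.mean((out - y) ** 2)
```

The training loop is back-propagation in its simplest batch form: each weight moves against the gradient of the squared output error, with the error signal propagated from the output layer back through the hidden layer.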

3) Speech Recognition
Kohonen used his self-organizing map for the inverse problem to that
addressed by NETtalk: speech recognition. He developed a phonetic
typewriter for the Finnish language. The phonetic typewriter takes speech
as input and converts it into written text. Speech recognition in general is a
much harder problem than turning text into speech. Current state-of-the-art
English speech recognition systems are based on hidden Markov models
(HMMs). An HMM is a Markov process consisting of a number of states,
the transitions between which depend on the occurrence of some symbol.
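The HMM idea mentioned above can be sketched numerically: states with transition probabilities, each state emitting symbols with some probability, and the forward algorithm summing over all state paths to score an observation sequence. The matrices below are toy numbers, not a real speech model:

```python
import numpy as np

# Toy 2-state, 2-symbol HMM. A[i, j] = P(next state j | state i),
# B[i, s] = P(emit symbol s | state i), pi = initial state distribution.
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])

def forward_likelihood(obs):
    """P(observed symbol sequence) computed by the forward algorithm."""
    alpha = pi * B[:, obs[0]]      # joint prob. of first symbol and each state
    for symbol in obs[1:]:
        alpha = (alpha @ A) * B[:, symbol]   # propagate, then emit
    return float(alpha.sum())

likelihood = forward_likelihood([0, 0, 1])
```

In a real recognizer each word or phoneme gets its own HMM, and the decoder picks the model (or state path) that assigns the spoken input the highest likelihood.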

4) Autonomous Vehicle Navigation

Vision-based autonomous vehicle and robot guidance has proven
difficult for algorithm-based computer vision methods, mainly because of
the diversity of the unexpected cases that must be explicitly dealt with in
the algorithms and because of the real-time constraint.

Pomerleau successfully demonstrated the potential of neural networks
for overcoming these difficulties. His ALVINN (Autonomous Land
Vehicle In a Neural Network) set a world record for autonomous
navigation distance. After training on a two-mile stretch of highway, it


drove the CMU Navlab, equipped with video cameras and laser range
sensors, for 21.2 miles at an average speed of 55 mph on a relatively old
highway open to normal traffic. ALVINN was not distracted by passing
cars while it was driving autonomously. ALVINN nearly doubled the
previous world record distance for autonomous navigation.

A network in ALVINN for each situation consists of a single hidden
layer of only four units, an output layer of 30 units, and a 30 x 32 retina
providing the 960 input variables. The retina is fully connected to the
hidden layer, and the hidden layer is fully connected to the output layer.
The graph of the feed-forward network is a node-coalesced cascade version
of bipartite graphs.
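The architecture described above can be sketched as a single forward pass. Only the shapes (30 x 32 retina, 4 hidden units, 30 outputs) follow the text; the random weights, sigmoid activation, and argmax readout are illustrative assumptions:

```python
import numpy as np

# Shape-only sketch of an ALVINN-style network: 960 retina inputs,
# 4 hidden units, 30 output units. Weights are random placeholders.
rng = np.random.default_rng(1)
retina = rng.random((30, 32))               # one low-resolution camera frame
W_hidden = rng.normal(0.0, 0.1, (960, 4))   # retina fully connected to hidden
W_output = rng.normal(0.0, 0.1, (4, 30))    # hidden fully connected to output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = retina.ravel()                   # flatten the 30x32 image to 960 inputs
hidden = sigmoid(x @ W_hidden)       # 4 hidden activations
outputs = sigmoid(hidden @ W_output)
steering_unit = int(np.argmax(outputs))  # most active of the 30 output units
```

Each of the 30 output units can be read as a vote for one steering direction, so the most active unit determines the steering command for that frame.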

5) Handwriting Recognition
Members of a group at AT&T Bell Laboratories have been working in
the area of neural networks for many years. One of their projects involves
the development of a neural network recognizer for handwritten digits. A
feed-forward layered network with three hidden layers is used. One of the
key features of this network is a reduction in the number of free parameters,
which enhances the probability of valid generalization by the network.
Artificial neural networks are also applied to image processing.
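One common way such networks reduce the number of free parameters is weight sharing, in which one small filter is reused across the whole image; whether that is exactly the mechanism meant above is an assumption, and the kernel size and toy data below are illustrative:

```python
import numpy as np

# A single 3x3 filter slid over the image: nine shared weights replace
# the thousands of independent weights a fully connected layer would need.
def convolve2d(image, kernel):
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(64, dtype=float).reshape(8, 8)
kernel = np.full((3, 3), 1.0 / 9.0)        # averaging filter: 9 parameters
feature_map = convolve2d(image, kernel)    # 6x6 feature map
```

Fewer free parameters mean less opportunity to overfit the training digits, which is why sharing weights tends to improve generalization.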

6) In The Robotics Field:
With the help of neural networks and artificial intelligence, intelligent
devices that behave like humans are designed; these help humans in
performing various tasks.


Following are some of the applications of neural networks in various fields:

 Business
o Marketing
o Real Estate
 Document and Form Processing
o Machine Printed Character Recognition
o Graphics Recognition
o Hand printed Character Recognition
o Cursive Handwriting Character Recognition
 Food Industry
o Odor/Aroma Analysis
o Product Development
o Quality Assurance
 Financial Industry
o Market Trading
o Fraud Detection
o Credit Rating
 Energy Industry
o Electrical Load Forecasting
o Hydroelectric Dam Operation
o Oil and Natural Gas Company
 Manufacturing
o Process Control
o Quality Control
 Medical and Health Care Industry
o Image Analysis
o Drug Development
o Resource Allocation
 Science and Engineering


o Chemical Engineering
o Electrical Engineering

In closing, I believe that within 200 or 300 years neural
networks will be so developed that they will be able to find the errors of
even human beings, rectify those errors, and make human beings more
intelligent.

References:

 "Artificial Neural Networks" by Robert J. Schalkoff

 "Neural Networks" by Simon Haykin

 Internet:
www.anns.mit/edu.com
www.cs.barkely.com
www.academicresources.com

