Types of Neural Network Jyoti
• Artificial neural networks may be physical devices or simulated on conventional computers. From a practical point of view, an
ANN is just a parallel computational system consisting of many simple processing elements connected together in a specific
way in order to perform a particular task. Some important features of artificial neural networks are as follows.
• Artificial neural networks are extremely powerful computational devices (Universal computers).
• ANNs are modeled on the basis of current brain theories, in which information is represented by weights.
• ANNs have massive parallelism which makes them very efficient.
• They can learn and generalize from training data so there is no need for enormous feats of programming.
• Storage is fault tolerant, i.e., some portions of the neural net can be removed with only a small degradation in the
quality of stored data.
• They are particularly fault tolerant which is equivalent to the “graceful degradation” found in biological systems.
• Data are naturally stored in the form of associative memory, which contrasts with conventional memory, in which data are
recalled by specifying the address of that data.
• They are very noise tolerant, so they can cope with situations where normal symbolic systems would have difficulty.
• In practice, they can do anything a symbolic/logic system can do, and more.
• Neural networks can interpolate and extrapolate from their stored information, and they can also be trained:
special training teaches the net to look for significant features or relationships in the data.
TYPES OF NEURAL NETWORK
• ANNs have evolved into a broad family of techniques that have advanced the state of the art across multiple domains. The
simplest types have one or more static components, including the number of units, the number of layers, the unit weights, and
the topology. Dynamic types allow one or more of these to evolve via learning. The latter are much more complicated, but can
shorten learning periods and produce better results. Some types allow/require learning to be "supervised" by the operator,
while others operate independently. Some types operate purely in hardware, while others are purely software and run on
general-purpose computers.
• Some of the main breakthroughs include: convolutional neural networks, which have proven particularly successful in
processing visual and other two-dimensional data; long short-term memory networks, which avoid the vanishing gradient
problem and can handle signals that mix low- and high-frequency components, aiding large-vocabulary speech recognition,
text-to-speech synthesis, and photo-real talking heads; and competitive networks such as generative adversarial networks,
in which multiple networks (of varying structure) compete with each other on tasks such as winning a game or deceiving the
opponent about the authenticity of an input.
SINGLE LAYER NETWORK
• But one question may arise in the reader's mind: why do we use a hidden layer between the input and output layers? The answer
is straightforward: each layer in a multilayer neural network has its own specific function. The input layer accepts input signals
from the outside world and redistributes these signals to all neurons in the hidden layer. Actually, the input layer rarely includes
computing neurons and thus does not process input patterns. The output layer accepts output signals, or in other words stimulus
patterns, from the hidden layer and establishes the output patterns of the entire network. Neurons in the hidden layer detect
features; the weights of the neurons represent the features hidden in the input patterns. These features are then used by the output
layer in determining the output patterns. With one hidden layer we can represent any continuous function of the input signals, and
with two hidden layers even discontinuous functions can be represented. A hidden layer hides its desired output: neurons in the
hidden layer cannot be observed through the input/output behaviour of the network, and the desired output of the hidden layer is
determined by the layer itself. Generally, we can say there is no obvious way to know what the desired output of the hidden layer
should be.
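The layer roles described above can be sketched in plain Python. This is a minimal illustration, not a definitive implementation: the weights below are arbitrary values chosen only for the example, and the input layer is represented simply by the input list, since it performs no computation.

```python
import math

def sigmoid(x):
    """Logistic activation used by each computing neuron."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum plus bias, then sigmoid."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(inputs, hidden_w, hidden_b, out_w, out_b):
    """Input layer just redistributes signals; hidden and output layers compute."""
    hidden = layer(inputs, hidden_w, hidden_b)   # hidden layer detects features
    return layer(hidden, out_w, out_b)           # output layer forms the result

# Illustrative weights (assumptions, chosen only for the example):
# 2 inputs -> 2 hidden neurons -> 1 output neuron.
hidden_w = [[0.5, -0.6], [0.9, 0.2]]
hidden_b = [0.1, -0.3]
out_w = [[1.0, -1.0]]
out_b = [0.0]

print(forward([1.0, 0.0], hidden_w, hidden_b, out_w, out_b))
```

Note that the hidden layer's activations are internal: only the final output list is visible from outside, which mirrors the point that hidden neurons cannot be observed through the network's input/output behaviour.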
APPLICATIONS OF FEED FORWARD NEURAL
NETWORK
• Multilayer neural networks most commonly use a learning technique called the back-propagation algorithm. In a
back-propagation neural network, the output values are compared with the correct answer to compute the value of some
predefined error function. The error is then fed back through the network. Using this information, the algorithm adjusts the
weights of each connection in order to reduce the value of the error function by some small amount.
After repeating this process for a sufficiently large number of training cycles, the network will usually
converge to some state where the error of the calculation is small.
• The goal of back propagation, as with most training algorithms, is to iteratively adjust the weights in the
network to produce the desired output by minimizing the output error. The algorithm solves the credit assignment
problem: it decides how much each weight contributed to the error. Back propagation is a gradient-descent approach
in that it uses first-order derivatives of the error function to search for an optimal solution.
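One back-propagation cycle on a tiny 2-2-1 sigmoid network can be written out directly. This is a sketch under stated assumptions: the initial weights, the single training example, and the learning rate are all arbitrary illustrative values, not taken from the text.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny 2-2-1 network with fixed illustrative weights (assumptions).
w_h = [[0.15, 0.20], [0.25, 0.30]]  # hidden-layer weights
b_h = [0.35, 0.35]                  # hidden-layer biases
w_o = [0.40, 0.45]                  # output-neuron weights
b_o = 0.60

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w_h, b_h)]
    y = sigmoid(sum(w * hi for w, hi in zip(w_o, h)) + b_o)
    return h, y

def train_step(x, target, lr=0.5):
    """One back-propagation update: feed the error back, nudge every weight."""
    global w_o, b_o
    h, y = forward(x)
    # Output-layer error term (squared error differentiated through the sigmoid).
    delta_o = (y - target) * y * (1 - y)
    # Hidden-layer error terms: the output delta is fed back through the
    # (old) output weights -- this is the credit assignment step.
    delta_h = [delta_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
    # Gradient-descent weight updates.
    w_o = [w_o[j] - lr * delta_o * h[j] for j in range(2)]
    b_o -= lr * delta_o
    for j in range(2):
        w_h[j] = [w_h[j][i] - lr * delta_h[j] * x[i] for i in range(2)]
        b_h[j] -= lr * delta_h[j]

x, target = [0.05, 0.10], 0.01
before = (forward(x)[1] - target) ** 2
for _ in range(100):
    train_step(x, target)
after = (forward(x)[1] - target) ** 2
print(before, after)  # the squared error shrinks as the cycles repeat
```

This mirrors the text: the output is compared with the correct answer, the resulting error is propagated backwards through the connections, and each weight is moved a small step down the error gradient.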
RADIAL BASIS FUNCTION NEURAL
NETWORK:
• Radial basis functions consider the distance of a point with respect to a center. RBF networks have
two layers: in the inner layer, the features are combined with the radial basis function, and
the outputs of these radial units are then combined to compute the network output; carrying this
output into the next time-step gives the network a basic form of memory.
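A minimal sketch of the two-layer structure just described, assuming a Gaussian radial basis function: each hidden unit responds according to the distance of the input from its center, and a linear output layer combines these responses. The centers and weights below are arbitrary illustrative values.

```python
import math

def gaussian_rbf(x, center, sigma=1.0):
    """Activation depends only on the distance of x from the center."""
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist_sq / (2 * sigma ** 2))

def rbf_network(x, centers, out_weights, bias=0.0):
    """Inner layer of radial units, then a linear output layer."""
    hidden = [gaussian_rbf(x, c) for c in centers]
    return sum(w * h for w, h in zip(out_weights, hidden)) + bias

# Illustrative centers and output weights (assumptions chosen for the example).
centers = [[0.0, 0.0], [1.0, 1.0]]
weights = [1.0, -1.0]
print(rbf_network([0.0, 0.0], centers, weights))
```

An input sitting exactly on a center drives that unit's activation to 1, and the activation decays smoothly as the input moves away, which is the defining property of a radial unit.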
APPLICATIONS OF RADIAL BASIS FUNCTION
NEURAL NETWORK
• This neural network has been applied in Power Restoration Systems. Power systems have increased in size and
complexity. Both factors increase the risk of major power outages. After a blackout, power needs to be restored
as quickly and reliably as possible.
• Power restoration usually proceeds in the following order:
1. The first priority is to restore power to essential customers in the communities. These customers provide health
care and safety services to all, and restoring power to them first enables them to help many others. Essential
customers include health care facilities, school boards, critical municipal infrastructure, and police and fire
services.
2. Then focus on major power lines and substations that serve larger numbers of customers.
3. Give higher priority to repairs that will get the largest number of customers back in service as quickly as possible.
4. Then restore power to smaller neighborhoods and individual homes and businesses.
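The restoration ordering above is essentially a greedy prioritization rule, which can be sketched as a sort. This is a hypothetical illustration only: the repair records and their field names are assumptions, not data from the text.

```python
# Hypothetical candidate repairs (site names and counts are invented examples).
repairs = [
    {"site": "neighborhood feeder", "customers": 400, "essential": False},
    {"site": "hospital substation", "customers": 50, "essential": True},
    {"site": "major transmission line", "customers": 12000, "essential": False},
    {"site": "single residence drop", "customers": 1, "essential": False},
]

# Essential-service repairs first (True sorts before False via `not essential`),
# then the remaining repairs in descending order of customers restored.
schedule = sorted(repairs, key=lambda r: (not r["essential"], -r["customers"]))

for r in schedule:
    print(r["site"], r["customers"])
```

The sort key encodes the two-tier policy directly: the first component puts essential customers ahead of everyone else, and the second orders the rest by impact, matching steps 1 through 4 above.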
The diagram beside shows the typical order of the power restoration system. Referring to the diagram, first priority
goes to fixing the problem at point A, on the transmission line. With this line out, none of the