Prepared By:

CONTENTS

• Soft computing
• Introduction to neural networks
• Human & artificial neuron
• Neural network topologies
• Training of artificial neural networks
• Perceptrons
• Backpropagation algorithm
• Applications
• Advantages & disadvantages
• Conclusion

Soft computing
Soft computing refers to a collection of computational techniques in computer science, machine learning, and some engineering disciplines that study, model, and analyze very complex phenomena: those for which more conventional methods have not yielded low-cost, analytic, and complete solutions. Soft computing uses inexact ("soft") techniques, in contrast with classical artificial intelligence and hard computing techniques. Hard computing is bound by the computer science concept of NP-completeness, which means, in layman's terms, that there is a direct connection between the size of a problem and the amount of resources needed to solve it. Soft computing helps surmount NP-complete problems by using inexact methods that give useful but approximate answers to intractable problems.

Components of soft computing
• Neural networks (NN)
• Fuzzy systems (FS)
• Evolutionary computation (EC)
• Swarm intelligence
• Chaos theory

Neural Network
An Artificial Neural Network (ANN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system: it is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons; this is also true in the case of ANNs.

A biological neuron (e.g.)
• Dendrites (input): receive activations from other neurons
• Cell nucleus (processing): evaluates the incoming activation
• Axon (neurite, output): forwards the activation to other cells (from 1 mm up to 1 m long)
• Synapse: transfers the activation to the dendrites of other neurons; a cell has about 1,000 to 10,000 connections to other cells

Natural vs. artificial neuron
[Figure: the parts of a biological neuron (dendrites, cell nucleus, axon/neurite, synapses) alongside the corresponding parts of an artificial neuron]

A simple neuron
• Many inputs and one output
• Two modes of operation: training mode and using mode

Firing rules
• 1: taught set of patterns to fire
• 0: taught set of patterns not to fire

X1:  0  0   0   0   1   1   1   1
X2:  0  0   1   1   0   0   1   1
X3:  0  1   0   1   0   1   0   1
OUT: 0  0  0/1 0/1 0/1  1  0/1  1

X1:  0  0  0   0   1   1  1  1
X2:  0  0  1   1   0   0  1  1
X3:  0  1  0   1   0   1  0  1
OUT: 0  0  0  0/1 0/1  1  1  1
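The second table fills in some of the undefined (0/1) entries by generalising from the taught patterns. A common way to do this, and one consistent with the tables above, is a nearest-pattern (Hamming distance) firing rule. The sketch below is only an illustration of that rule; it assumes the taught-to-fire set {101, 111} and the taught-not-to-fire set {000, 001}, which reproduces the second table.

```python
# Hedged sketch of a Hamming-distance firing rule (assumed taught sets:
# fire on {101, 111}, do not fire on {000, 001}).

def hamming(a, b):
    """Number of positions at which two equal-length bit patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def firing_rule(pattern, fire_set, no_fire_set):
    """Return '1', '0', or '0/1' (undefined) for a 3-bit input pattern."""
    d_fire = min(hamming(pattern, p) for p in fire_set)
    d_no_fire = min(hamming(pattern, p) for p in no_fire_set)
    if d_fire < d_no_fire:
        return "1"
    if d_no_fire < d_fire:
        return "0"
    return "0/1"          # equally close to both sets: output stays undefined

if __name__ == "__main__":
    fire = ["101", "111"]        # taught to fire
    no_fire = ["000", "001"]     # taught not to fire
    for x in ["000", "001", "010", "011", "100", "101", "110", "111"]:
        print(x, "->", firing_rule(x, fire, no_fire))
```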

A more complicated neuron
• Inputs are weighted
• The neuron fires if x1·w1 + x2·w2 + … > T, where T is the threshold
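As a minimal illustration of this weighted neuron, the sketch below fires exactly when the weighted sum of its inputs exceeds the threshold T; the weights and threshold values are made up for the example, not taken from the slides.

```python
# Minimal weighted-threshold neuron: outputs 1 when x1*w1 + x2*w2 + ... > T.

def neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of the inputs exceeds the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

print(neuron([1, 0, 1], [0.5, 0.3, 0.4], 0.8))  # 0.9 > 0.8  -> fires (1)
print(neuron([0, 1, 0], [0.5, 0.3, 0.4], 0.8))  # 0.3 <= 0.8 -> does not fire (0)
```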

Neural network topologies
• Feed-forward neural network
• Recurrent neural network

Training of artificial neural networks
• Supervised learning (associative learning): the network is trained by providing it with input and matching output patterns. These input-output pairs can be provided by an external teacher, or by the system that contains the neural network (self-supervised).

• Unsupervised learning (self-organisation): an (output) unit is trained to respond to clusters of patterns within the input. In this paradigm the system is supposed to discover statistically salient features of the input population. Unlike the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather, the system must develop its own representation of the input stimuli.

• Reinforcement learning: this type of learning may be considered an intermediate form of the above two. Here the learning machine performs some action on the environment and gets a feedback response from it. The learning system grades its action as good (rewarding) or bad (punishable) based on the environmental response and adjusts its parameters accordingly. Generally, parameter adjustment is continued until an equilibrium state occurs, following which there are no more changes in the parameters. Self-organizing neural learning may be categorized under this type of learning.

Perceptrons
One type of ANN system is based on a unit called a perceptron. The perceptron function can sometimes be written as o(x) = sgn(w · x), where sgn(y) = 1 if y > 0 and -1 otherwise. The space H of candidate hypotheses considered in perceptron learning is the set of all possible real-valued weight vectors.

Representational Power of Perceptrons

The Perceptron Training Rule
One way to learn an acceptable weight vector is to begin with random weights, then iteratively apply the perceptron to each training example, modifying the perceptron weights whenever it misclassifies an example. This process is repeated, iterating through the training examples as many times as needed until the perceptron classifies all training examples correctly. Weights are modified at each step according to the perceptron training rule, which revises the weight wi associated with input xi as wi <- wi + Δwi, where Δwi = η(t - o)xi, t is the target output, o is the perceptron output, and η is the learning rate.
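A short sketch of this training loop follows; the learning rate, initial weight range, and the small AND-style dataset are illustrative assumptions, not values from the slides.

```python
# Hedged sketch of the perceptron training rule: start from random weights and,
# for each misclassified example, update w_i <- w_i + eta * (t - o) * x_i.

import random

def perceptron_output(weights, x):
    """sgn(w0 + w1*x1 + ... + wn*xn), with outputs in {-1, +1}."""
    s = weights[0] + sum(w * xi for w, xi in zip(weights[1:], x))
    return 1 if s > 0 else -1

def train_perceptron(examples, eta=0.1, max_epochs=100):
    n = len(examples[0][0])
    weights = [random.uniform(-0.5, 0.5) for _ in range(n + 1)]
    for _ in range(max_epochs):
        errors = 0
        for x, t in examples:
            o = perceptron_output(weights, x)
            if o != t:
                errors += 1
                weights[0] += eta * (t - o)          # bias weight w0 (input x0 = 1)
                for i, xi in enumerate(x):
                    weights[i + 1] += eta * (t - o) * xi
        if errors == 0:                              # all examples classified correctly
            break
    return weights

# Illustrative linearly separable data: logical AND with targets in {-1, +1}.
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w = train_perceptron(data)
print([perceptron_output(w, x) for x, _ in data])    # expected: [-1, -1, -1, 1]
```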

Gradient Descent and Delta Rule
In order to derive the delta training rule, let us consider the training of an unthresholded perceptron; that is, a linear unit for which the output o is given by o(x) = w0 + w1x1 + … + wnxn. In order to derive a weight learning rule for linear units, let us consider the training error of a hypothesis relative to the training examples: E(w) = 1/2 Σ over d in D of (td - od)², where D is the set of training examples, td is the target output for example d, and od is the output of the linear unit for example d.

Derivation of the Gradient Descent Rule
The vector of partial derivatives [∂E/∂w0, ∂E/∂w1, …, ∂E/∂wn] is called the gradient of E with respect to the weight vector, written ∇E(w). The gradient specifies the direction that produces the steepest increase in E; the negative of this vector therefore gives the direction of steepest decrease. The training rule for gradient descent is w <- w + Δw, where Δw = -η∇E(w).

Derivation of the Gradient Descent Rule (cont.)
The negative sign is present because we want to move the weight vector in the direction that decreases E. This training rule can also be written in its component form wi <- wi + Δwi, where Δwi = -η ∂E/∂wi, which makes it clear that steepest descent is achieved by altering each component wi in proportion to ∂E/∂wi.

Derivation of the Gradient Descent Rule (cont.)
The vector of derivatives that forms the gradient can be obtained by differentiating E: ∂E/∂wi = Σ over d in D of (td - od)(-xid), where xid is the i-th input component of training example d. The weight update rule for standard gradient descent can therefore be summarized as Δwi = η Σ over d in D of (td - od) xid.
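The batch gradient descent procedure summarized above can be sketched as follows; the linear unit, learning rate, epoch count, and the tiny dataset are illustrative assumptions.

```python
# Hedged sketch of standard (batch) gradient descent for a linear unit,
# following the update rule above: delta_w_i = eta * sum_d (t_d - o_d) * x_id.

def linear_output(weights, x):
    """o = w0 + w1*x1 + ... + wn*xn (unthresholded perceptron)."""
    return weights[0] + sum(w * xi for w, xi in zip(weights[1:], x))

def gradient_descent(examples, eta=0.05, epochs=500):
    n = len(examples[0][0])
    weights = [0.0] * (n + 1)
    for _ in range(epochs):
        delta = [0.0] * (n + 1)                 # accumulate updates over all examples
        for x, t in examples:
            err = t - linear_output(weights, x)
            delta[0] += eta * err               # x0 = 1 for the bias weight
            for i, xi in enumerate(x):
                delta[i + 1] += eta * err * xi
        weights = [w + d for w, d in zip(weights, delta)]
    return weights

# Illustrative data sampled from t = 1 + 2*x1 - x2.
data = [([0, 0], 1), ([1, 0], 3), ([0, 1], 0), ([1, 1], 2)]
print(gradient_descent(data))                   # weights should approach [1, 2, -1]
```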

Backpropagation Algorithm

Architecture of Backpropagation

Backpropagation Learning Algorithm

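The detailed steps and weight-update equations of the backpropagation algorithm appeared as figures on the original slides. As a stand-in, the sketch below implements the standard algorithm for a network with one sigmoid hidden layer and a single sigmoid output, trained with online (per-example) updates; the network size, learning rate, epoch count, and the XOR example are illustrative assumptions.

```python
# Hedged sketch of backpropagation for a 2-layer network with sigmoid units.

import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, w_out):
    """One forward pass: returns the hidden activations and the output."""
    xb = [1.0] + list(x)                                    # bias input x0 = 1
    h = [sigmoid(sum(xb[i] * w_hidden[i][j] for i in range(len(xb))))
         for j in range(len(w_out) - 1)]
    o = sigmoid(w_out[0] + sum(h[j] * w_out[j + 1] for j in range(len(h))))
    return h, o

def train_backprop(data, n_hidden=3, eta=0.5, epochs=5000):
    n_in = len(data[0][0])
    # w_hidden[i][j]: weight from input i (row 0 is the bias) to hidden unit j
    w_hidden = [[random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
                for _ in range(n_in + 1)]
    w_out = [random.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
    for _ in range(epochs):
        for x, t in data:
            h, o = forward(x, w_hidden, w_out)
            # error terms for the output unit and the hidden units
            delta_o = o * (1 - o) * (t - o)
            delta_h = [h[j] * (1 - h[j]) * w_out[j + 1] * delta_o
                       for j in range(n_hidden)]
            # weight updates: w <- w + eta * delta * input
            w_out[0] += eta * delta_o
            for j in range(n_hidden):
                w_out[j + 1] += eta * delta_o * h[j]
            xb = [1.0] + list(x)
            for i in range(n_in + 1):
                for j in range(n_hidden):
                    w_hidden[i][j] += eta * delta_h[j] * xb[i]
    return w_hidden, w_out

# Illustrative example: learn XOR, which a single perceptron cannot represent.
xor = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
wh, wo = train_backprop(xor)
print([round(forward(x, wh, wo)[1], 2) for x, _ in xor])    # roughly [0, 1, 1, 0]
```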

Applications
• Aerospace: high performance aircraft autopilots, flight path simulations, aircraft control systems, autopilot enhancements, aircraft component simulations, aircraft component fault detectors
• Automotive: automobile automatic guidance systems, warranty activity analyzers
• Banking: check and other document readers, credit application evaluators
• Defense: weapon steering, target tracking, object discrimination, facial recognition, new kinds of sensors, sonar, radar and image signal processing including data compression, feature extraction and noise suppression, signal/image identification
• Electronics: code sequence prediction, integrated circuit chip layout, process control, chip failure analysis, machine vision, voice synthesis, nonlinear modeling

Applications
• Financial: real estate appraisal, loan advisor, mortgage screening, corporate bond rating, credit line use analysis, portfolio trading program, corporate financial analysis, currency price prediction
• Manufacturing: manufacturing process control, product design and analysis, process and machine diagnosis, real-time particle identification, visual quality inspection systems, beer testing, welding quality analysis, paper quality prediction, computer chip quality analysis, analysis of grinding operations, chemical product design analysis, machine maintenance analysis, project bidding, planning and management, dynamic modeling of chemical process systems

Applications
• Robotics: trajectory control, forklift robot, manipulator controllers, vision systems
• Speech: speech recognition, speech compression, vowel classification, text to speech synthesis
• Securities: market analysis, automatic bond rating, stock trading advisory systems
• Telecommunications: image and data compression, automated information services, real-time translation of spoken language, customer payment processing systems
• Transportation: truck brake diagnosis systems, vehicle scheduling, routing systems

Advantages
• A neural network can perform tasks that a linear program cannot.
• When an element of the neural network fails, it can continue without any problem because of its parallel nature.
• A neural network learns and does not need to be reprogrammed.
• It can be implemented in any application.
• It can be implemented without any problem.

Disadvantages
• The neural network needs training to operate.
• Large neural networks require high processing time.
• The architecture of a neural network is different from the architecture of microprocessors, and therefore needs to be emulated.

Conclusion
• The ability to learn by example makes neural networks very flexible and powerful.
• There is no need to understand the internal mechanisms of the task.
• They are also very well suited for real-time systems because of their fast response and computational times, which are due to their parallel architecture.

THANK YOU FOR YOUR PATIENCE
