
Fundamentals of Neural Networks: AI Course Lecture 37-38, notes, slides


**www.myreaders.info/, RC Chakraborty, e-mail rcchak@gmail.com, June 01, 2010**

www.myreaders.info/html/artificial_intelligence.html

www.myreaders.info


**Fundamentals of Neural Networks**

Artificial Intelligence

Neural network topics: Introduction - biological neuron model, artificial neuron model, notations, functions; Model of artificial neuron - McCulloch-Pitts neuron equation; Artificial neuron - basic elements, activation functions, threshold function, piecewise linear function, sigmoidal function; Neural network architectures - single layer feed-forward network, multi layer feed-forward network, recurrent networks; Learning methods in neural networks - classification of learning algorithms, supervised learning, unsupervised learning, reinforced learning, Hebbian learning, gradient descent learning, competitive learning, stochastic learning; Single-Layer NN System - single layer perceptron, learning algorithm for training, linearly separable task, XOR problem, ADAptive LINear Element (ADALINE) architecture and training mechanism; Applications of neural networks - clustering, classification, pattern recognition, function approximation, prediction systems.


Topics

(Lectures 37, 38; 2 hours)

1. Introduction (Slides 03-12)
   Why neural network?, Research history, Biological neuron model, Artificial neuron model, Notations, Functions.

2. Model of Artificial Neuron (Slides 13-19)
   McCulloch-Pitts neuron equation; artificial neuron - basic elements; activation functions - threshold function, piecewise linear function, sigmoidal function.

3. Neural Network Architectures (Slides 20-23)
   Single layer feed-forward network, Multi layer feed-forward network, Recurrent networks.

4. Learning Methods in Neural Networks (Slides 24-29)
   Classification of learning algorithms, Supervised learning, Unsupervised learning, Reinforced learning, Hebbian learning, Gradient descent learning, Competitive learning, Stochastic learning.

5. Single-Layer NN System (Slides 30-36)
   Single layer perceptron: learning algorithm for training, linearly separable task, XOR problem; ADAptive LINear Element (ADALINE): architecture, training mechanism.

6. Applications of Neural Networks (Slide 37)
   Clustering, Classification / pattern recognition, Function approximation, Prediction systems.

7. References (Slide 38)


Neural Networks

What is a Neural Net?

• A neural net is an artificial representation of the human brain that tries to simulate its learning process. An artificial neural network (ANN) is often called a "Neural Network" or simply a Neural Net (NN).

• Traditionally, the term neural network refers to a network of biological neurons in the nervous system that process and transmit information.

• An artificial neural network is an interconnected group of artificial neurons that uses a mathematical model or computational model for information processing, based on a connectionist approach to computation.

• The artificial neural networks are made of interconnecting artificial neurons which may share some properties of biological neural networks.

• An artificial neural network is a network of simple processing elements (neurons) which can exhibit complex global behavior, determined by the connections between the processing elements and the element parameters.

• An artificial neural network is an adaptive system that changes its structure based on external or internal information that flows through the network.

1. Introduction

Neural Computers mimic certain processing capabilities of the human brain.

- Neural Computing is an information processing paradigm, inspired by biological systems, composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems.
- Artificial Neural Networks (ANNs), like people, learn by example.
- An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process.
- Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.

1.1 Why Neural Network?

■ The conventional computers are good for fast arithmetic and doing what the programmer programs them to do. The von Neumann machines are based on the processing/memory abstraction of human information processing.

■ The conventional computers are not so good for interacting with noisy data or data from the environment, massive parallelism, fault tolerance, and adapting to circumstances.

■ Neural networks follow a different paradigm for computing. The neural networks are based on the parallel architecture of biological brains.

■ Neural networks are a form of multiprocessor computer system, with simple processing elements, a high degree of interconnection, simple scalar messages, and adaptive interaction between elements.

■ The neural network systems help where we cannot formulate an algorithmic solution, or where we can get lots of examples of the behavior we require.

1.2 Research History

The history is relevant because for nearly two decades the future of neural networks remained uncertain.

McCulloch and Pitts (1943) are generally recognized as the designers of the first neural network. They combined many simple processing units together that could lead to an overall increase in computational power. They suggested many ideas, like: a neuron has a threshold level and once that level is reached the neuron fires. It is still the fundamental way in which ANNs operate. The McCulloch and Pitts network had a fixed set of weights.

Hebb (1949) developed the first learning rule: if two neurons are active at the same time, then the strength between them should be increased.

In the 1950s and 60's, many researchers (Block, Minsky, Papert, and Rosenblatt) worked on the perceptron. The neural network model could be proved to converge to the correct weights, that is, the weights that solve the problem. The weight adjustment (learning algorithm) used in the perceptron was found more powerful than the learning rules used by Hebb. The perceptron caused great excitement. It was thought to produce programs that could think.

Minsky & Papert (1969) showed that the perceptron could not learn those functions which are not linearly separable. The neural network research declined throughout the 1970s and until the mid 80's, because the perceptron could not learn certain important functions.

Neural networks regained importance in 1985-86. The researchers Parker and LeCun discovered a learning algorithm for multi-layer networks, called back propagation, that could solve problems that were not linearly separable.

1.3 Biological Neuron Model

The human brain consists of a large number, more than a billion, of neural cells that process information. Each cell works like a simple processor. The massive interaction between all cells and their parallel processing only makes the brain's abilities possible. [Fig. Structure of Neuron]

- Dendrites are branching fibers that extend from the cell body or soma.
- Soma or cell body of a neuron contains the nucleus and other structures, supports chemical processing and production of neurotransmitters.
- Axon is a singular fiber that carries information away from the soma to the synaptic sites of other neurons (dendrites and somas), muscles, or glands.
- Axon hillock is the site of summation for incoming information. At any moment, the collective influence of all neurons that conduct impulses to a given neuron will determine whether or not an action potential will be initiated at the axon hillock and propagated along the axon.
- Myelin sheath consists of fat-containing cells that insulate the axon from electrical activity. This insulation acts to increase the rate of transmission of signals.
- Nodes of Ranvier are the gaps (about 1 µm) between myelin sheath cells along the axon. Since fat serves as a good insulator and inhibits the propagation of electricity, the signals jump from one gap to the next, which speeds the rate of transmission of an electrical impulse along the axon.
- Synapse is the point of connection between two neurons, or a neuron and a muscle or a gland. Electrochemical communication between neurons takes place at these junctions.
- Terminal buttons of a neuron are the small knobs at the end of an axon that release chemicals called neurotransmitters.

• Information Flow in a Neural Cell

The input/output and the propagation of information are shown below. [Fig. Structure of a Neural Cell in the Human Brain]

■ Dendrites receive activation from other neurons.
■ Soma processes the incoming activations and converts them into output activations.
■ Axons act as transmission lines to send activation to other neurons.
■ Synapses, the junctions, allow signal transmission between the axons and dendrites.
■ The process of transmission is by diffusion of chemicals called neurotransmitters.

McCulloch-Pitts introduced a simplified model of this real neuron.

1.4 Artificial Neuron Model

An artificial neuron is a mathematical function conceived as a simple model of a real (biological) neuron.

• The McCulloch-Pitts Neuron

This is a simplified model of real neurons, known as a Threshold Logic Unit.

[Fig. Input 1, Input 2, ..., Input n feed a summing unit Σ, which produces the Output.]

■ A set of input connections brings in activations from other neurons.
■ A processing unit sums the inputs, and then applies a non-linear activation function (i.e. a squashing / transfer / threshold function).
■ An output line transmits the result to other neurons.

In other words:
- The input to a neuron arrives in the form of signals.
- The signals build up in the cell.
- Finally the cell discharges (the cell fires) through the output.
- The cell can start building up signals again.

1.5 Notations

Recaps: Scalar, Vectors, Matrices and Functions.

• Scalar: The numbers xi can be added up to give a scalar number:

s = x1 + x2 + x3 + ... + xn = Σ (i=1 to n) xi

• Vectors: An ordered set of related numbers. Row vectors (1 x n):
X = (x1, x2, x3, ..., xn), Y = (y1, y2, y3, ..., yn)

Add: Two vectors of the same length are added to give another vector:

Z = X + Y = (x1 + y1, x2 + y2, ..., xn + yn)

Multiply: Two vectors of the same length are multiplied to give a scalar:

p = X . Y = x1 y1 + x2 y2 + ... + xn yn = Σ (i=1 to n) xi yi

• Matrices: An m x n matrix has row number m and column number n:

W = [ w11  w12  ...  w1n
      w21  w22  ...  w2n
      ...
      wm1  wm2  ...  wmn ]

Add or Subtract: Matrices of the same size are added or subtracted component by component: A + B = C, with cij = aij + bij; e.g. c11 = a11 + b11, c12 = a12 + b12, c21 = a21 + b21, c22 = a22 + b22.

Multiply: An (m x n) matrix A multiplied by an (n x p) matrix B gives an (m x p) matrix C with elements cij = Σ (k=1 to n) aik bkj. For 2 x 2 matrices:

c11 = (a11 x b11) + (a12 x b21)
c12 = (a11 x b12) + (a12 x b22)
c21 = (a21 x b11) + (a22 x b21)
c22 = (a21 x b12) + (a22 x b22)
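The vector and matrix operations recapped above can be sketched in plain Python. This is a minimal illustration; the numeric values are arbitrary examples, not taken from the lecture.

```python
# Minimal sketches of the vector/matrix operations recapped above.
# The numeric values are arbitrary illustrative examples.

def vec_add(x, y):
    """Component-wise sum of two equal-length vectors: Z = X + Y."""
    return [xi + yi for xi, yi in zip(x, y)]

def dot(x, y):
    """Inner product: sum of component-wise products (a scalar)."""
    return sum(xi * yi for xi, yi in zip(x, y))

def mat_mul(a, b):
    """(m x n) times (n x p) -> (m x p), with c_ij = sum_k a_ik * b_kj."""
    n, p = len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(a))]

print(vec_add([1, 2, 3], [4, 5, 6]))                 # [5, 7, 9]
print(dot([1, 2, 3], [4, 5, 6]))                     # 32
print(mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]
```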

1.6 Functions

The function y = f(x) describes a relationship, an input-output mapping, from x to y.

■ Threshold or Sign function: sgn(x) defined as

sgn(x) = 1 if x ≥ 0; 0 if x < 0

■ Sigmoid function: sigmoid(x) defined as a smoothed (differentiable) form of the threshold function:

sigmoid(x) = 1 / (1 + e^(-x))

[Graphs of sgn(x) and sigmoid(x), with input I/P on the horizontal axis and output O/P from 0 to 1 on the vertical axis, are not reproduced here.]

2. Model of Artificial Neuron

2.1 McCulloch-Pitts (M-P) Neuron Equation

The McCulloch-Pitts neuron is a simplified model of the real biological neuron. A very simplified model of real neurons is known as a Threshold Logic Unit (TLU). The model is said to have:

- A set of synapses (connections) that brings in activations from other neurons.
- A processing unit that sums the inputs and then applies a non-linear activation function (i.e. a squashing / transfer / threshold function).
- An output line that transmits the result to other neurons.

[Fig. Simplified Model of Real Neuron (Threshold Logic Unit): Input 1, Input 2, ..., Input n feed Σ, which produces the Output.]

The equation for the output of a McCulloch-Pitts neuron as a function of 1 to n inputs is written as:

Output = sgn ( Σ (i=1 to n) Input i − Φ )

where Φ is the neuron's activation threshold.

If Σ (i=1 to n) Input i ≥ Φ then Output = 1
If Σ (i=1 to n) Input i < Φ then Output = 0

In this McCulloch-Pitts neuron model, the missing features are:
- Non-binary input and output,
- Non-linear summation,
- Smooth thresholding,
- Stochastic behavior, and
- Temporal information processing.
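The M-P equation above can be sketched directly in code. The AND/OR thresholds used below are a standard textbook illustration with binary inputs, not values from these notes.

```python
def mp_neuron(inputs, phi):
    """McCulloch-Pitts unit: Output = 1 if sum(inputs) >= phi, else 0."""
    return 1 if sum(inputs) >= phi else 0

# Illustrative thresholds (a standard exercise, not from the notes):
# with two binary inputs, phi = 2 realizes AND and phi = 1 realizes OR.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "AND:", mp_neuron([x1, x2], 2),
              "OR:", mp_neuron([x1, x2], 1))
```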

2.2 Artificial Neuron - Basic Elements

A neuron consists of three basic components: weights, thresholds, and a single activation function.

[Fig. Basic Elements of an Artificial Linear Neuron: inputs x1 ... xn with synaptic weights W1 ... Wn feed Σ, followed by the activation function with threshold Φ, producing output y.]

■ Weighting Factors w

The values w1, w2, ..., wn are weights that determine the strength of the input vector X = [x1, x2, ..., xn]^T. Each input is multiplied by the associated weight of the neuron connection, X^T W. The +ve weight excites and the -ve weight inhibits the node output.

I = X^T . W = x1 w1 + x2 w2 + ... + xn wn = Σ (i=1 to n) xi wi

■ Threshold Φ

The node's internal threshold Φ is the magnitude offset. It affects the activation of the node output y as:

Y = f(I) = f { Σ (i=1 to n) xi wi − Φk }

To generate the final output Y, the sum is passed on to a non-linear filter f, called the Activation Function (or Transfer function, or Squash function), which releases the output Y.

■ Threshold for a Neuron

In practice, neurons generally do not fire (produce an output) unless their total input goes above a threshold value. The total input for each neuron is the sum of the weighted inputs to the neuron minus its threshold value. This is then passed through the sigmoid function. The equation for the transition in a neuron is:

a = 1 / (1 + exp(−x)), where x = Σi ai wi − Q

where a is the activation for the neuron, ai is the activation for neuron i, wi is the weight, and Q is the threshold subtracted.

■ Activation Function

An activation function f performs a mathematical operation on the signal output. The most common activation functions are:

- Linear function,
- Threshold function,
- Piecewise linear function,
- Sigmoidal (S-shaped) function,
- Tangent hyperbolic function.

The activation functions are chosen depending upon the type of problem to be solved by the network.

• Activation Functions f - Types

Over the years, researchers tried several functions to convert the input into an output. The most commonly used functions are described below. [In the graphs, the horizontal axis shows the sum of inputs (I/P) and the vertical axis shows the value the function produces (O/P).] All functions f are designed to produce values between 0 and 1.

■ Threshold Function

A threshold (hard-limiter) activation function is either a binary type or a bipolar type.

Output of a binary threshold function produces:
1 if the weighted sum of the inputs is +ve,
0 if the weighted sum of the inputs is -ve.

Y = f(I) = 1 if I ≥ 0; 0 if I < 0

Output of a bipolar threshold function produces:
1 if the weighted sum of the inputs is +ve,
-1 if the weighted sum of the inputs is -ve.

Y = f(I) = 1 if I ≥ 0; -1 if I < 0

A neuron with a hard-limiter activation function is called the McCulloch-Pitts model.

■ Piecewise Linear Function

This activation function is also called the saturating linear function, and can have either a binary or bipolar range for the saturation limits of the output. The mathematical model for a symmetric saturation function is described below.

This is a sloping function that produces:
-1 for a -ve weighted sum of inputs,
1 for a +ve weighted sum of inputs,
and is proportional to the input for weighted sums between -1 and +1.

Y = f(I) = 1 if I ≥ 1; I if -1 ≤ I ≤ 1; -1 if I ≤ -1

■ Sigmoidal Function (S-shape function)

The nonlinear curved S-shape function is called the sigmoid function. This is the most common type of activation used to construct neural networks. It is mathematically well behaved, differentiable, and a strictly increasing function.

A sigmoidal transfer function can be written in the form:

Y = f(I) = 1 / (1 + e^(-α I)) = 1 / (1 + exp(-α I)), with 0 ≤ f(I) ≤ 1

where α is the slope parameter, also called the shape parameter; the symbol λ is also used to represent this parameter. This is explained as: approximately 0 for large -ve input values, 1 for large +ve values, with a smooth transition between the two. By varying α (e.g. α = 0.5, 1.0, 2.0), different shapes of the function can be obtained, which adjusts the abruptness of the function as it changes between the two asymptotic values. The sigmoidal function is achieved using an exponential equation.
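The four activation functions described above can be sketched as plain Python functions. This is a minimal illustration; the sample input values are arbitrary.

```python
import math

def binary_threshold(i):
    """Binary hard-limiter: 1 if I >= 0, else 0."""
    return 1 if i >= 0 else 0

def bipolar_threshold(i):
    """Bipolar hard-limiter: 1 if I >= 0, else -1."""
    return 1 if i >= 0 else -1

def piecewise_linear(i):
    """Symmetric saturating linear function: clamp I to [-1, +1]."""
    return max(-1.0, min(1.0, i))

def sigmoid(i, alpha=1.0):
    """Sigmoid with slope (shape) parameter alpha."""
    return 1.0 / (1.0 + math.exp(-alpha * i))

print(binary_threshold(-0.3))    # 0
print(bipolar_threshold(-0.3))   # -1
print(piecewise_linear(2.5))     # 1.0 (saturated)
print(sigmoid(0.0))              # 0.5
```

Note that a larger α makes the sigmoid steeper, approaching the threshold function; a smaller α makes the transition more gradual.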

• Example:

The neuron shown consists of four inputs with the weights. [Fig. Neuron Structure of Example: inputs x1 = 1, x2 = 2, x3 = 5, x4 = 8 with synaptic weights +1, +1, -1, +2; summing junction Σ; threshold Φ = 0.]

The output I of the network, prior to the activation function stage, is:

I = X^T . W = (1 x 1) + (2 x 1) + (5 x -1) + (8 x 2) = 14

With a binary activation function, the output of the neuron is y(threshold) = 1.
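The worked example above can be checked in a few lines of code, using the same inputs, weights, and threshold:

```python
# Inputs, weights, and threshold from the worked example above.
x = [1, 2, 5, 8]
w = [1, 1, -1, 2]
phi = 0

i_net = sum(xi * wi for xi, wi in zip(x, w))   # X^T . W
y = 1 if i_net >= phi else 0                   # binary activation
print(i_net, y)                                # 14 1
```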

3. Neural Network Architectures

An Artificial Neural Network (ANN) is a data processing system consisting of a large number of simple, highly interconnected processing elements (artificial neurons), in a structure that can be represented using a directed graph G, an ordered 2-tuple (V, E), consisting of a set V of vertices and a set E of edges.

- The vertices may represent neurons (input/output) and
- The edges may represent synaptic links labeled by the weights attached.

Example: [Fig. Directed Graph]

Vertices V = { v1, v2, v3, v4, v5 }
Edges E = { e1, e2, e3, e4, e5 }

3.1 Single Layer Feed-forward Network

The Single Layer Feed-forward Network consists of a single layer of weights, where the inputs are directly connected to the outputs via a series of weights. The synaptic links carrying weights connect every input to every output, but not the other way. This way it is considered a network of feed-forward type. The sum of the products of the weights and the inputs is calculated in each neuron node, and if the value is above some threshold (typically 0) the neuron fires and takes the activated value (typically 1); otherwise it takes the deactivated value (typically -1).

[Fig. Single Layer Feed-forward Network: inputs xi with weights wij producing outputs yj.]
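A forward pass through such a layer can be sketched as below. The 3-input, 2-output weight matrix is a hypothetical example, not taken from the notes.

```python
def single_layer_forward(x, w, threshold=0.0):
    """One feed-forward pass: every input connects to every output.

    x : list of n inputs
    w : n x m weight matrix; w[i][j] links input i to output neuron j
    Each output neuron fires (+1) when its weighted sum exceeds the
    threshold, otherwise it takes the deactivated value (-1).
    """
    n, m = len(w), len(w[0])
    out = []
    for j in range(m):
        s = sum(x[i] * w[i][j] for i in range(n))
        out.append(1 if s > threshold else -1)
    return out

# Hypothetical 3-input, 2-output layer (illustrative weights only).
w = [[0.5, -1.0],
     [0.2,  0.4],
     [-0.3, 0.9]]
print(single_layer_forward([1, 1, 1], w))   # [1, 1]
```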

3.2 Multi Layer Feed-forward Network

As the name suggests, it consists of multiple layers. The architecture of this class of network, besides having the input and the output layers, also has one or more intermediary layers called hidden layers. The computational units of the hidden layer are known as hidden neurons.

[Fig. Multilayer feed-forward network in (ℓ – m – n) configuration: input layer neurons xi, input-hidden layer weights vij, hidden layer neurons yj, hidden-output layer weights wjk, output layer neurons zk.]

- The input layer neurons are linked to the hidden layer neurons; the weights on these links are referred to as input-hidden layer weights.
- The hidden layer neurons are linked to the output layer neurons; the weights on these links are referred to as hidden-output layer weights.
- The hidden layer does intermediate computation before directing the input to the output layer.
- A multi-layer feed-forward network with ℓ input neurons, m1 neurons in the first hidden layer, m2 neurons in the second hidden layer, and n output neurons in the output layer is written as (ℓ - m1 - m2 - n).
- The figure above illustrates a multilayer feed-forward network with a configuration (ℓ - m - n).
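A forward pass through an (ℓ - m - n) network can be sketched as two chained layer evaluations. The (2 - 3 - 1) configuration and its weights below are made-up illustrative values, and a sigmoid activation is assumed for both layers.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(x, w):
    """Fully connected layer with sigmoid units; w[i][j] links input i to unit j."""
    return [sigmoid(sum(x[i] * w[i][j] for i in range(len(x))))
            for j in range(len(w[0]))]

def forward(x, v, w):
    """(l - m - n) network: input-hidden weights v, hidden-output weights w."""
    hidden = layer(x, v)       # intermediate computation in the hidden layer
    return layer(hidden, w)    # output layer

# Hypothetical (2 - 3 - 1) configuration with made-up weights.
v = [[0.1, 0.4, -0.2],
     [0.6, -0.3, 0.5]]
w = [[0.3], [-0.7], [0.8]]
out = forward([1.0, 0.5], v, w)
print(out)
```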

3.3 Recurrent Networks

The Recurrent Networks differ from the feed-forward architecture. A Recurrent network has at least one feedback loop. There could be neurons with self-feedback links; that is, the output of a neuron is fed back into itself as input.

Example: [Fig. Recurrent neural network: input layer neurons xi, hidden layer neurons yj, output layer neurons zk, with feedback links from outputs back to earlier layers.]

4. Learning Methods in Neural Networks

The learning methods in neural networks are classified into three basic types:

- Supervised Learning,
- Unsupervised Learning,
- Reinforced Learning.

These three types are classified based on:
- the presence or absence of a teacher, and
- the information provided for the system to learn.

These are further categorized, based on the rules used, as:
- Hebbian,
- Gradient descent,
- Competitive,
- Stochastic learning.

• Classification of Learning Algorithms

The figure below indicates the hierarchical representation of the algorithms mentioned in the previous slide. These algorithms are explained in subsequent slides.

[Fig. Classification of learning algorithms]

Neural Network Learning algorithms
- Supervised Learning (Error based)
  - Error Correction (Gradient descent): Least Mean Square, Back Propagation
  - Stochastic
- Reinforced Learning (Output based)
- Unsupervised Learning
  - Hebbian
  - Competitive

• Supervised Learning

- A teacher is present during the learning process and presents the expected output.
- Every input pattern is used to train the network.
- The learning process is based on comparison between the network's computed output and the correct expected output, generating "error".
- The "error" generated is used to change network parameters, resulting in improved performance.

• Unsupervised Learning

- No teacher is present.
- The expected or desired output is not presented to the network.
- The system learns on its own by discovering and adapting to the structural features in the input patterns.

• Reinforced Learning

- A teacher is present, but does not present the expected or desired output; it only indicates if the computed output is correct or incorrect.
- The information provided helps the network in its learning process.
- A reward is given for a correct answer computed and a penalty for a wrong answer.

Note: The Supervised and Unsupervised learning methods are the most popular forms of learning compared to Reinforced learning.

• Hebbian Learning

Hebb proposed a rule based on correlative weight adjustment. In this rule, the input-output pattern pairs (Xi, Yi) are associated by the weight matrix W, known as the correlation matrix, computed as:

W = Σ (i=1 to n) Xi Yi^T

where Yi^T is the transpose of the associated output vector Yi. There are many variations of this rule proposed by other researchers (Kosko, Anderson, Lippman).
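The correlation matrix above can be computed as a sum of outer products. The two bipolar pattern pairs below are hypothetical illustrative values, not from the notes.

```python
def hebbian_correlation(pairs):
    """W = sum_i X_i Y_i^T over the input-output pattern pairs."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    w = [[0.0] * m for _ in range(n)]
    for x, y in pairs:
        for i in range(n):
            for j in range(m):
                w[i][j] += x[i] * y[j]   # outer-product accumulation
    return w

# Two hypothetical bipolar pattern pairs (illustrative only).
pairs = [([1, -1], [1, 1]),
         ([-1, 1], [1, -1])]
print(hebbian_correlation(pairs))   # [[0.0, 2.0], [0.0, -2.0]]
```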

• Gradient Descent Learning

This is based on the minimization of the error E, defined in terms of the weights and the activation function of the network. Here, the activation function of the network is required to be differentiable, because the weight update depends on the gradient of the error E.

If ∆Wij is the weight update of the link connecting the i-th and the j-th neuron of the two neighboring layers, then ∆Wij is defined as:

∆Wij = − η (∂E / ∂Wij)

where η is the learning rate parameter and (∂E / ∂Wij) is the error gradient with reference to the weight Wij; the negative sign moves the weights down the error gradient.

Note: The Widrow-Hoff Delta rule and the Back-propagation learning rule are examples of Gradient descent learning.
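As a minimal sketch of this update, consider a single linear unit with squared error E = ½(d − y)², y = Σ xi wi, so ∂E/∂wi = −(d − y) xi and ∆wi = −η ∂E/∂wi = η (d − y) xi. The numbers below are illustrative only.

```python
def delta_rule_step(w, x, d, eta=0.1):
    """One gradient-descent update for a single linear unit.

    E = 0.5 * (d - y)^2 with y = sum_i x_i * w_i, so
    dE/dw_i = -(d - y) * x_i and Delta w_i = -eta * dE/dw_i = eta * (d - y) * x_i.
    """
    y = sum(xi * wi for xi, wi in zip(x, w))
    return [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]

# Repeated steps drive the output toward the target (illustrative values).
w = [0.0, 0.0]
for _ in range(50):
    w = delta_rule_step(w, x=[1.0, 2.0], d=1.0, eta=0.1)
y = sum(xi * wi for xi, wi in zip([1.0, 2.0], w))
print(round(y, 6))   # close to the target 1.0
```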

• Competitive Learning

- In this method, those neurons which respond strongly to the input stimuli have their weights updated.
- When an input pattern is presented, all neurons in the layer compete, and the winning neuron undergoes the weight adjustment.
- This strategy is called "winner-takes-all".

• Stochastic Learning

- In this method, the weights are adjusted in a probabilistic fashion.
- Example: Simulated annealing, which is a learning mechanism employed by Boltzmann and Cauchy machines.
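One common sketch of a winner-takes-all step (an assumed illustration; the notes do not give a specific update rule) picks the neuron whose weight vector best matches the input and moves only that neuron's weights toward the input:

```python
def winner_takes_all(weights, x, eta=0.5):
    """One competitive-learning step (a common sketch, not from the notes).

    Each row of `weights` is one neuron's weight vector. The neuron whose
    weights respond most strongly to the input (largest dot product) wins
    and moves its weights toward the input; all other neurons are unchanged.
    """
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for w in weights]
    winner = scores.index(max(scores))
    weights[winner] = [wi + eta * (xi - wi)
                       for wi, xi in zip(weights[winner], x)]
    return winner

# Two neurons competing for a 2-D input (illustrative values).
ws = [[1.0, 0.0], [0.0, 1.0]]
win = winner_takes_all(ws, [0.9, 0.1])
print(win, ws)
```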

5. Single-Layer NN Systems

Here, a simple Perceptron Model and an ADALINE Network Model are presented.

5.1 Single Layer Perceptron

Definition: An arrangement of one input layer of neurons feeding forward to one output layer of neurons is known as a Single Layer Perceptron.

[Fig. Single Layer Perceptron: inputs xi with weights wij producing outputs yj.]

• Simple Perceptron Model

y_j = f(net_j) = 1 if net_j ≥ 0; 0 if net_j < 0

where net_j = Σ (i=1 to n) xi wij

• Learning Algorithm for Training Perceptron

The training of the Perceptron is a supervised learning algorithm where weights are adjusted to minimize error whenever the output does not match the desired output.

- If the output is correct, then no adjustment of weights is done, i.e.
  W_ij(K+1) = W_ij(K)
- If the output is 1 but should have been 0, then the weights are decreased on the active input link, i.e.
  W_ij(K+1) = W_ij(K) − α . xi
- If the output is 0 but should have been 1, then the weights are increased on the active input link, i.e.
  W_ij(K+1) = W_ij(K) + α . xi

Here W_ij(K+1) is the new adjusted weight, W_ij(K) is the old weight, xi is the input, and α is the learning rate parameter. A small α leads to slow learning and a large α leads to fast learning.

• Perceptron and Linearly Separable Tasks

The Perceptron cannot handle tasks which are not linearly separable.

Definition: Sets of points in 2-D space are linearly separable if the sets can be separated by a straight line. Generalizing, a set of points in n-dimensional space is linearly separable if there is a hyperplane of (n−1) dimensions that separates the sets.

Example: [Fig. (a) Linearly separable patterns S1, S2; (b) Not linearly separable patterns S1, S2.]

Note: The Perceptron cannot find weights for classification problems that are not linearly separable.

• XOR Problem: Exclusive OR operation

Input x1 | Input x2 | Output
    0    |    0     |   0
    1    |    1     |   0
    0    |    1     |   1
    1    |    0     |   1

XOR truth table. Even parity means an even number of 1 bits in the input; odd parity means an odd number of 1 bits in the input.

[Fig. Output of XOR in the x1, x2 plane: even parity points (0,0) and (1,1) marked •, odd parity points (0,1) and (1,0) marked °.]

- There is no way to draw a single straight line so that the circles are on one side of the line and the dots on the other side.
- The Perceptron is unable to find a line separating even parity input patterns from odd parity input patterns.
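This can be illustrated with a brute-force search over a grid of weights and thresholds for a single threshold unit (an illustrative check, not a formal proof, although XOR is provably unrealizable for any real-valued weights): the search finds a unit for AND but none for XOR.

```python
import itertools

def tlu(x1, x2, w1, w2, theta):
    """Single threshold unit: fires when w1*x1 + w2*x2 >= theta."""
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

def solvable(truth_table, grid):
    """Search a weight/threshold grid for a unit matching the table."""
    for w1, w2, theta in itertools.product(grid, grid, grid):
        if all(tlu(x1, x2, w1, w2, theta) == y
               for (x1, x2), y in truth_table.items()):
            return True
    return False

grid = [i / 4 for i in range(-8, 9)]   # -2.0 .. 2.0 in steps of 0.25
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
and_ok = solvable(AND, grid)   # True  -- linearly separable
xor_ok = solvable(XOR, grid)   # False -- no line separates the classes
print(and_ok, xor_ok)
```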

• Perceptron Learning Algorithm

The algorithm is illustrated step-by-step.

■ Step 1: Create a perceptron with (n+1) input neurons x0, x1, ..., xn, where x0 = 1 is the bias input. Let O be the output neuron.
■ Step 2: Initialize the weights W = (w0, w1, ..., wn) to random values.
■ Step 3: Iterate through the input patterns Xj of the training set using the weight set; i.e. compute the weighted sum of inputs net_j = Σ (i=1 to n) xi wi for each input pattern j.
■ Step 4: Compute the output yj using the step function:
  yj = f(net_j) = 1 if net_j ≥ 0; 0 if net_j < 0, where net_j = Σ (i=1 to n) xi wij
■ Step 5: Compare the computed output yj with the target output yj for each input pattern j. If all the input patterns have been classified correctly, then output (read) the weights and exit.
■ Step 6: Otherwise, update the weights as given below:
  If the computed output yj is 1 but should have been 0, then wi = wi − α xi, i = 0, 1, 2, ..., n.
  If the computed output yj is 0 but should have been 1, then wi = wi + α xi, i = 0, 1, 2, ..., n.
  where α is the learning parameter and is constant.
■ Step 7: Go to Step 3.
■ END
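The steps above can be sketched as a small training routine. The AND gate data, the zero initialization (chosen for reproducibility instead of the random initialization of Step 2), and the learning rate are illustrative assumptions; since AND is linearly separable, the loop converges (unlike XOR).

```python
def train_perceptron(patterns, targets, alpha=0.1, max_epochs=100):
    """Perceptron training following the steps above.

    patterns : list of input vectors (x1..xn); a bias input x0 = 1 is
               prepended to each, as in Step 1.
    Returns the weights (w0..wn) once every pattern is classified
    correctly (Steps 5-6), or after max_epochs passes.
    """
    xs = [[1] + list(p) for p in patterns]   # Step 1: bias input x0 = 1
    w = [0.0] * len(xs[0])                   # Step 2 (zeros, not random,
                                             # for reproducibility)
    for _ in range(max_epochs):
        errors = 0
        for x, t in zip(xs, targets):        # Step 3
            net = sum(xi * wi for xi, wi in zip(x, w))
            y = 1 if net >= 0 else 0         # Step 4
            if y != t:                       # Steps 5-6
                errors += 1
                sign = 1 if t == 1 else -1
                w = [wi + sign * alpha * xi for wi, xi in zip(w, x)]
        if errors == 0:                      # all patterns correct
            return w
    return w

# AND is linearly separable, so training converges.
w = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
outs = [1 if sum(xi * wi for xi, wi in zip([1] + list(p), w)) >= 0 else 0
        for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(outs)   # [0, 0, 0, 1]
```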

5.2 ADAptive LINear Element (ADALINE)

An ADALINE consists of a single neuron of the McCulloch-Pitts type, where its weights are determined by the normalized least mean square (LMS) training law. The LMS learning rule is also referred to as the delta rule. It is a well-established supervised training method that has been used over a wide range of diverse applications.

• Architecture of a simple ADALINE

[Fig. Architecture of a simple ADALINE: inputs x1 ... xn with weights W1 ... Wn feed a summing neuron Σ; the neuron output is compared with the desired output to form the error.]

The basic structure of an ADALINE is similar to a neuron with a linear activation function and a feedback loop. During the training phase of ADALINE, the input vector as well as the desired output are presented to the network. [The complete training mechanism has been explained in the next slide.]

.de rs .Realizations of logic gates such as AND.Realize only those logic functions that 36 are linearly separable. network the test produces inputs. of the Once not weights properly If adjusted. . x2 .m • ADALINE Training Mechanism (Ref. Usage of ADLINE : In practice. .Make binary decisions. . consistent it is said responses degree with that the network could generalize. The process of training and generalization are two important attributes of this network.w w yr ha kr ■ The basic structure of an ADALINE is similar to a linear neuron ab R C with an extra feedback loop. ■ After the ADALINE is trained. C the input vector X = [x1 . . Fig. xn] T as well as desired output are presented to the network. ■ Thus. the output is sent through a binary threshold. which are the to training a high set. ■ The activation the in function are is not used during the training response phase. in the previous slide .Architecture of a simple ADALINE) or ty . ■ During the training phase of ADALINE. the network performs an n dimensional mapping to a scalar value. ■ The weights are adaptively adjusted based on delta rule.in fo ea AI-Neural Network –ADALINE w . . . an input vector presented to the network with fixed weights will result in a scalar output. NOT and OR . an ADALINE is used to . the the trained unit can be tested by applying various inputs.

6. Applications of Neural Network

Neural network applications can be grouped into the following categories:

■ Clustering: A clustering algorithm explores the similarity between patterns and places similar patterns in a cluster. Best known applications include data compression and data mining.

■ Classification / Pattern recognition: The task of pattern recognition is to assign an input pattern (like a handwritten symbol) to one of many classes. This category includes algorithmic implementations such as associative memory.

■ Function approximation: The task of function approximation is to find an estimate of an unknown function subject to noise. Various engineering and scientific disciplines require function approximation.

■ Prediction systems: The task is to forecast some future values of a time-sequenced data. Prediction has a significant impact on decision support systems. Prediction differs from function approximation by considering the time factor. The system may be dynamic and may produce different results for the same input data based on system state (time).
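The clustering idea above can be illustrated with a toy assignment step: each pattern is placed with the nearest of a set of cluster centres (one step of a k-means-style algorithm). The centres and data points below are invented for illustration only:

```python
# Toy clustering sketch: assign each pattern to its nearest centroid
# (squared Euclidean distance), grouping similar patterns together.

def nearest_centroid(pattern, centroids):
    """Return the index of the centroid closest to the pattern."""
    dists = [sum((p - c) ** 2 for p, c in zip(pattern, cen))
             for cen in centroids]
    return dists.index(min(dists))

centroids = [(0.0, 0.0), (5.0, 5.0)]               # two cluster centres
patterns = [(0.5, 1.0), (4.0, 5.5), (0.2, -0.3)]   # input patterns
clusters = [nearest_centroid(p, centroids) for p in patterns]  # [0, 1, 0]
```

A full clustering algorithm would then recompute the centroids from the assigned patterns and iterate; this sketch shows only the similarity-based grouping.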

7. References: Textbooks

1. "Fundamentals of Neural Networks: Architecture, Algorithms and Applications", by Laurene V. Fausett, (1993), Prentice Hall, Chapter 1-7, page 1-449.

2. "Elements of Artificial Neural Networks", by Kishan Mehrotra, Chilukuri K. Mohan and Sanjay Ranka, (1996), MIT Press, Chapter 1-7, page 1-339.

3. "Neural Networks: A Comprehensive Foundation", by Simon S. Haykin, (1999), Prentice Hall, Chapter 1-15, page 1-889.

4. "Neural Network Design", by Martin T. Hagan, Howard B. Demuth and Mark Hudson Beale, (1996), PWS Publ. Company, Chapter 1-19, page 1-1 to 19-14.

5. "An Introduction to Neural Networks", by James A. Anderson, (1997), MIT Press, Chapter 1-17, page 1-585.

6. "AI: A New Synthesis", by Nils J. Nilsson, (1998), Morgan Kaufmann Inc., Chapter 3, page 37-48.

7. Related documents from open source, mainly internet. An exhaustive list is being prepared for inclusion at a later date.
