
The Counterpropagation Network

The CPN learning process (general form for n input units and m output units; a code sketch of the full procedure follows the list):

1. Randomly select a vector pair (x, y) from the training set.
2. Normalize (shrink/expand to "length" 1) the input vector x by dividing every component of x by the magnitude ||x||, where
   ||x|| = √(x_1² + x_2² + … + x_n²).

3. Initialize the input neurons with the normalized vector and compute the activation of the linear hidden-layer units.
4. In the hidden (competitive) layer, determine the unit j with the largest activation (the winner).
5. Adjust the connection weights between j and all N input-layer units according to the formula
   Δw_ji = α(x_i − w_ji),
   which moves j's weight vector closer to the current input.

6. Repeat steps 1 to 5 until all training patterns have been processed once.

7. Repeat step 6 until each input pattern is consistently associated with the same competitive unit.
8. Select the first vector pair in the training set (the current pattern).
9. Repeat steps 2 to 4 (normalization, competition) for the current pattern.
10. Adjust the connection weights between the winning hidden-layer unit and all M output-layer units according to the equation
    Δw_kj = β(y_k − w_kj),
    which moves the winner's output weights closer to the desired output y.

11. Repeat steps 9 and 10 for each vector pair in the training set.
12. Repeat steps 8 through 11 until the difference between the desired and the actual output falls below an acceptable threshold.
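The slides give the procedure but no code. Below is a minimal Python/NumPy sketch of the two phases, under some assumptions: the function and parameter names (cpn_train, alpha, beta, n_hidden, the epoch counts) are mine, the winner's weight vector is renormalized after each update to preserve the unit length from step 2, and the threshold test of step 12 is simplified to a fixed number of passes.

```python
import numpy as np

def normalize(v):
    """Step 2: scale a vector to length 1 by dividing by its magnitude ||v||."""
    return v / np.linalg.norm(v)

def cpn_train(X, Y, n_hidden, alpha=0.5, beta=0.1,
              phase1_epochs=200, phase2_epochs=200, seed=0):
    """Two-phase CPN training (steps 1-12). X holds input vectors as rows,
    Y the corresponding desired output vectors."""
    rng = np.random.default_rng(seed)
    # Hidden (competitive) layer: one unit-length weight row per hidden unit.
    W_hid = np.apply_along_axis(normalize, 1,
                                rng.normal(size=(n_hidden, X.shape[1])))
    # Output layer: one weight row per hidden unit, m columns.
    W_out = np.zeros((n_hidden, Y.shape[1]))

    # Phase 1 (steps 1-7): competitive learning in the hidden layer.
    for epoch in range(phase1_epochs):
        a = alpha * (1 - epoch / phase1_epochs)    # slowly shrinking step size
        for i in rng.permutation(len(X)):          # step 1: random pattern order
            x = normalize(X[i])                    # step 2: unit-length input
            j = int(np.argmax(W_hid @ x))          # steps 3-4: largest activation wins
            W_hid[j] += a * (x - W_hid[j])         # step 5: move w_j toward x
            W_hid[j] = normalize(W_hid[j])         # keep w_j at length 1 (assumption)

    # Phase 2 (steps 8-12): train the output weights of the winning units.
    for epoch in range(phase2_epochs):             # step 12 simplified to fixed passes
        b = beta * (1 - epoch / phase2_epochs)
        for i in range(len(X)):                    # steps 8 and 11: each pair in turn
            x = normalize(X[i])                    # step 9: normalization ...
            j = int(np.argmax(W_hid @ x))          # ... and competition
            W_out[j] += b * (Y[i] - W_out[j])      # step 10: move toward desired y
    return W_hid, W_out
```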

Because the input is two-dimensional, each unit in the hidden layer has two weights (one for each input connection). Therefore, input to the network as well as the weights of hidden-layer units can be represented and visualized as two-dimensional vectors. For the current network, all weights in the hidden layer can be completely described by three 2D vectors.
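Since the weight vectors are kept at unit length (as in the sketch above), each one can even be summarized by a single angle. A small illustration with made-up weights for the three hidden units:

```python
import numpy as np

# Three hypothetical unit-length 2D weight vectors, one per hidden unit.
W_hid = np.array([[ 0.8, 0.6],
                  [ 0.0, 1.0],
                  [-0.6, 0.8]])
angles = np.degrees(np.arctan2(W_hid[:, 1], W_hid[:, 0]))
print(angles)  # one angle per unit: [36.87, 90.0, 126.87]
```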

This diagram shows a sample state of the hidden layer and a sample input to the network:

[Figure: the three hidden-layer weight vectors and a sample input vector, drawn in 2D.]

In this example, hidden-layer neuron 2 wins and, according to the learning rule, is moved closer towards the current input vector.
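Numerically, a single update looks like this (values made up, since the figure is not reproduced here): with step size α = 0.5, the winner's weight vector moves halfway toward the input and is then renormalized.

```python
import numpy as np

x  = np.array([0.6, 0.8])      # normalized input vector (length 1)
w2 = np.array([0.98, 0.20])    # hypothetical weights of the winning unit 2
alpha = 0.5
w2 = w2 + alpha * (x - w2)     # step 5: move w2 halfway toward x -> [0.79, 0.50]
w2 = w2 / np.linalg.norm(w2)   # renormalize to unit length
print(w2)                      # approx. [0.845, 0.535]
```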

After doing this through many epochs and slowly reducing the adaptation step size α, each hidden-layer unit will win for a subset of inputs, and the angle of its weight vector will be in the center of gravity of the angles of these inputs.

[Figure: all input vectors in the training set, with each weight vector centered among the inputs for which its unit wins.]
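The slides do not prescribe how the step size should shrink; one common choice (an assumption on my part) is an exponential decay per epoch:

```python
alpha0, decay = 0.5, 0.99
alphas = [alpha0 * decay**epoch for epoch in range(500)]
# alphas[0] = 0.5, alphas[499] ≈ 0.0033: large steps early, tiny refinements late
```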

After the first phase of the training, each hidden-layer neuron is associated with a subset of input vectors. The training process minimized the average angle difference between the weight vectors and their associated input vectors. In the second phase of the training, we adjust the weights in the network's output layer in such a way that, for any winning hidden-layer unit, the network's output is as close as possible to the desired output for the winning unit's associated input vectors. The idea is that when we later use the network to compute functions, the output of the winning hidden-layer unit is 1, and the output of all other hidden-layer units is 0.
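This makes recall very simple: because the hidden layer produces a one-hot vector and the output layer is linear, the network's output is just the winner's row of output weights. A minimal sketch (the function name is mine), reusing W_hid and W_out from the training sketch above:

```python
import numpy as np

def cpn_recall(x, W_hid, W_out):
    """Forward pass: normalize x, let the hidden units compete, and return
    the output weights of the winner (its output is 1, all others are 0)."""
    x = x / np.linalg.norm(x)
    winner = int(np.argmax(W_hid @ x))
    return W_out[winner]
```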

Because there are two output neurons, the weights in the output layer that receive input from the same hidden-layer unit can also be described by 2D vectors. These weight vectors are the only possible output vectors of our network.

[Figure: the three possible output vectors, labeled "network output if unit 1 wins", "network output if unit 2 wins", and "network output if unit 3 wins".]



For each input vector, the output-layer weights that are connected to the winning hidden-layer unit are made more similar to the desired output vector:

Δw_kj = β(y_k − w_kj)  (the update rule from step 10)

The training proceeds with decreasing step size β, and after its termination, the weight vectors are in the center of gravity of their associated output vectors:

[Figure: the desired output vectors grouped by the hidden unit (1, 2, or 3) they are associated with, and the converged weight vectors at their centers of gravity.]
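The "center of gravity" claim is easy to verify (with made-up outputs, since the figure's values are not reproduced here): repeatedly applying the step-10 rule with a small step size drives a weight vector to the mean of the desired outputs it is trained on.

```python
import numpy as np

ys = np.array([[0.0, 1.0], [0.2, 0.8], [0.1, 0.9]])  # outputs tied to one hidden unit
w = np.zeros(2)
for _ in range(2000):          # many passes with a small, fixed step size
    for y in ys:
        w += 0.01 * (y - w)    # the same rule as in step 10
print(w, ys.mean(axis=0))      # w ends up near the center of gravity (0.1, 0.9)
```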


Notice:
• In the first training phase, if a hidden-layer unit does not win for a long period of time, its weights should be set to random values to give that unit a chance to win subsequently (a sketch of such a reset follows this list).
• There is no need for normalizing the training output vectors.
• After the training has finished, the network maps the training vectors onto output vectors that are close to the desired ones.
• The more hidden units, the better the mapping.
• Thanks to the competitive neurons in the hidden layer, the linear neurons can realize nonlinear mappings.
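A sketch of the reset mentioned in the first point (the names are mine; the slides only state the idea): track how often each hidden unit wins during a phase-1 epoch and re-randomize the units that never win.

```python
import numpy as np

def reset_dead_units(W_hid, win_counts, rng):
    """Give hidden units that never win a fresh chance with new random,
    unit-length weight vectors."""
    for j, wins in enumerate(win_counts):
        if wins == 0:                          # unit j has not won for a while
            v = rng.normal(size=W_hid.shape[1])
            W_hid[j] = v / np.linalg.norm(v)   # random weights, normalized to length 1
    return W_hid
```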
