
Unsupervised Learning Neural Networks

- Kohonen Self Organizing Feature Maps

Presentation by:
C. Vinoth Kumar
SSN College of Engineering
Self Organizing Feature Map

 When learning is based only on the input data and is independent of the desired output data, no error is calculated to train the network.

 This type of learning is called unsupervised learning, and the input data are called unlabelled data.

 In this case, the net may respond to several output categories during training.

 But only one of the several neurons has to respond.


Self Organizing Feature Map

 Hence, additional structure can be included in the network so that the net is forced to make a decision as to which one unit will respond.

 The mechanism by which only one unit is chosen to respond is called competition.

 The most commonly used form of competition among a group of neurons is Winner-Takes-All, sketched in the code below.

 Here, only one neuron in the competing group will have a non-zero output signal when the competition is completed.
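A minimal sketch of Winner-Takes-All in Python (the array contents and sizes here are illustrative assumptions, not from the slides): the unit with the largest net input keeps its activation, and every other unit's output is forced to zero.

import numpy as np

def winner_takes_all(net_inputs):
    """One-hot competition: only the winning unit has a non-zero output."""
    outputs = np.zeros_like(net_inputs)
    winner = np.argmax(net_inputs)        # index of the strongest response
    outputs[winner] = net_inputs[winner]  # every other unit outputs zero
    return outputs

# Example with three competing units: only unit 1 survives the competition
print(winner_takes_all(np.array([0.2, 0.9, 0.5])))  # -> [0.  0.9 0. ]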
Self Organizing Feature Map

 The form of the learning depends on the purpose for which the net is being trained.

 In a clustering net, there are as many input units as there are components in an input vector.

 Since each output unit represents a cluster, the number of output units limits the number of clusters that can be formed.

 The weight vector for an output unit in a clustering net is called the code-book vector for the input patterns that the net has placed in that cluster.
Self Organizing Feature Map

 During training, the net determines the output unit that is the best match for the current input vector; the weight vector for the winner is then adjusted according to the net’s learning algorithm.

 The different competitive nets are:
(i) Max net
(ii) Mexican hat
(iii) Learning Vector Quantization
(iv) Hamming net
(v) Kohonen Self Organizing Feature Maps
Methods used for Determining the Winner

 There are two methods used to determine the winner unit; both are sketched in the code below.

 Method 1: This method uses the squared Euclidean distance between the input vector and the weight vector, and chooses the unit whose weight vector has the smallest Euclidean distance from the input vector.

 Method 2: This method uses the dot product of the input vector and the weight vector (the net input to the corresponding cluster unit). The largest dot product corresponds to the smallest angle between the input and weight vectors.
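A minimal Python sketch of both methods, assuming a weight matrix W whose rows are the cluster units' weight vectors (the names and example values are illustrative, not from the slides):

import numpy as np

def winner_euclidean(x, W):
    """Method 1: unit whose weight vector has the smallest squared
    Euclidean distance from the input vector x."""
    distances = np.sum((W - x) ** 2, axis=1)  # D(j) for every unit j
    return np.argmin(distances)

def winner_dot_product(x, W):
    """Method 2: unit with the largest dot product (net input)."""
    net_inputs = W @ x
    return np.argmax(net_inputs)

# Example: two cluster units; the input lies closest to unit 0
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
x = np.array([0.9, 0.2])
print(winner_euclidean(x, W))    # -> 0
print(winner_dot_product(x, W))  # -> 0

Note that the two methods agree here because the weight vectors have equal norms; with unequal norms the two criteria can pick different winners.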
Kohonen Self Organizing Feature Map

 It is also known as a topology-preserving map.

 In this map, units located physically next to each other respond to classes of input vectors that are likewise located next to each other, as the grid sketch after this list makes concrete.

 High-dimensional input vectors are projected down onto the two-dimensional map in a way that maintains the natural order of the input vectors.

 The SOM groups the input data into clusters; it is commonly used for unsupervised training.
Kohonen Self Organizing Feature Map

 The cluster unit whose weight vector matches the input pattern most closely is selected as the winner.

 The winning unit and its neighboring units update their weights.

 The neighboring units’ weight vectors are not close to the input pattern but differ from it to some extent.

 Hence, the connection weights do not multiply the signal sent from the input units to the cluster units.
Kohonen Self Organizing Feature Map
- Architecture
Kohonen Self Organizing Feature Map
- Neighborhood Scheme
Kohonen Self Organizing Feature Map

 The competitive network is similar to a single-layered feedforward network, except that there are connections, usually negative (inhibitory), between the output units.

 Because of these connections, the output nodes tend to compete to represent the current input pattern.
Kohonen Self Organizing Feature Map
- Training Algorithm
Step 1: Set the topological neighborhood parameters, set the learning rate α, and initialize the weights w_ij.

Step 2: While the stopping condition is false, do steps 3-9.

Step 3: For each input vector x, do steps 4-6.

Step 4: For each output unit j, compute the squared Euclidean distance D(j) = Σ_i (x_i − w_ij)².

Step 5: Find the index J for which D(j) is minimum.


Kohonen Self Organizing Feature Map
- Training Algorithm
Step 6: For all units j within the specified neighborhood of J, and for all i, update the weights: w_ij(new) = w_ij(old) + α[x_i − w_ij(old)].

Step 7: Update the learning rate.

Step 8: Reduce the radius of the topological neighborhood at specified times.

Step 9: Test the stopping condition.

Note: The learning rate is updated by α(t+1) = 0.5 α(t).
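A minimal end-to-end sketch of steps 1-9 in Python, assuming a one-dimensional line of cluster units for simplicity; the data, map size, initial radius, and stopping condition (a fixed number of epochs) are illustrative assumptions, not from the slides.

import numpy as np

def train_som(data, n_units=10, radius=2, alpha=0.5, epochs=20, seed=0):
    """Kohonen SOM training loop with alpha(t+1) = 0.5 * alpha(t)."""
    rng = np.random.default_rng(seed)
    W = rng.random((n_units, data.shape[1]))    # Step 1: initialize weights
    for epoch in range(epochs):                 # Steps 2/9: fixed-epoch stopping
        for x in data:                          # Step 3: each input vector
            D = np.sum((W - x) ** 2, axis=1)    # Step 4: squared distances D(j)
            J = np.argmin(D)                    # Step 5: winning unit J
            lo, hi = max(0, J - radius), min(n_units, J + radius + 1)
            W[lo:hi] += alpha * (x - W[lo:hi])  # Step 6: update J's neighborhood
        alpha *= 0.5                            # Step 7: halve the learning rate
        if epoch % 5 == 4 and radius > 0:       # Step 8: shrink the radius
            radius -= 1
    return W                                    # rows are the code-book vectors

# Example: 2-D points around two centres collapse onto nearby map units
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(c, 0.1, (20, 2)) for c in ([0.0, 0.0], [1.0, 1.0])])
codebook = train_som(data)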
