10 Self-organizing Systems II: Competitive Learning

10.1 Introduction
In this chapter we continue our study of self-organizing systems by considering a special class of artificial neural networks known as self-organizing feature maps. These networks are based on competitive learning; the output neurons of the network compete among themselves to be activated or fired, with the result that only one output neuron, or one neuron per group, is on at any one time. The output neurons that win the competition are called winner-takes-all neurons. One way of inducing a winner-takes-all competition among the output neurons is to use lateral inhibitory connections (i.e., negative feedback paths) between them; such an idea was originally proposed by Rosenblatt (1958).
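As a rough illustration of this winner-takes-all idea, the sketch below uses a MAXNET-style iteration, offered purely for illustration rather than taken from the text; the inhibition strength eps is an arbitrary choice. Each output neuron is inhibited by the summed activity of all the others until only the most strongly activated neuron remains on.

```python
import numpy as np

# Illustrative sketch (not from the text): winner-takes-all competition
# induced by mutual lateral inhibition among a small group of output neurons.

def winner_take_all(initial_activity, eps=0.1, max_iters=100):
    """Iteratively apply mutual inhibition until a single neuron stays active."""
    y = np.array(initial_activity, dtype=float)
    for _ in range(max_iters):
        # each neuron is inhibited by the summed activity of all the others
        inhibition = eps * (y.sum() - y)
        y = np.maximum(0.0, y - inhibition)
        if np.count_nonzero(y) <= 1:   # only the winner remains "on"
            break
    return y

# the neuron that started with the largest activity survives the competition
print(winner_take_all([0.3, 0.9, 0.5, 0.7]))
```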
In a self-organizing feature map, the neurons are placed at the nodes of a lattice that is usually one- or two-dimensional; higher-dimensional maps are also possible but not as common. The neurons become selectively tuned to various input patterns (vectors) or classes of input patterns in the course of a competitive learning process. The locations of the neurons so tuned (i.e., the winning neurons) tend to become ordered with respect to each other in such a way that a meaningful coordinate system for different input features is created over the lattice (Kohonen, 1990a). A self-organizing feature map is therefore characterized by the formation of a topographic map of the input patterns, in which the spatial locations (i.e., coordinates) of the neurons in the lattice correspond to intrinsic features of the input patterns, hence the name "self-organizing feature map."
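To make this notion of selective tuning on a lattice concrete, here is a minimal Python sketch; it is an illustrative toy, not the formal model developed later in Section 10.5, and the lattice size, learning rate eta, and neighborhood width sigma are arbitrary choices. The winning neuron, and more weakly its lattice neighbors, are moved toward each input pattern, so that nearby lattice positions end up tuned to similar inputs.

```python
import numpy as np

# Toy sketch of selective tuning on a one-dimensional lattice
# (illustrative assumptions: 10 neurons, 2-D inputs, fixed eta and sigma).

rng = np.random.default_rng(0)
n_neurons = 10                      # neurons placed along a 1-D lattice
dim = 2                             # dimension of the input patterns
W = rng.random((n_neurons, dim))    # one weight (tuning) vector per neuron

def best_matching_neuron(x, W):
    """Index of the winning neuron: the one whose weight vector is closest to x."""
    return int(np.argmin(np.linalg.norm(W - x, axis=1)))

def update(x, W, eta=0.1, sigma=1.0):
    """Move the winner, and more weakly its lattice neighbors, toward the input x."""
    i_star = best_matching_neuron(x, W)
    lattice = np.arange(len(W))
    h = np.exp(-((lattice - i_star) ** 2) / (2.0 * sigma ** 2))  # neighborhood on the lattice
    W += eta * h[:, None] * (x - W)
    return W

for _ in range(1000):
    W = update(rng.random(dim), W)  # stand-in for presenting input patterns

# After training, neighboring lattice positions tend to respond to similar inputs,
# which is the topographic ordering described in the text.
```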
The development of this special class of artificial neural networks is motivated by a distinct feature of the human brain; simply put, the brain is organized in many places in such a way that different sensory inputs are represented by topologically ordered computational maps. In particular, sensory inputs such as tactile (Kaas et al., 1983), visual (Hubel and Wiesel, 1962, 1977), and acoustic (Suga, 1985) are mapped onto different areas of the cerebral cortex in a topologically ordered manner. Thus the computational map constitutes a basic building block in the information-processing infrastructure of the nervous system. A computational map is defined by an array of neurons representing slightly differently tuned processors or filters, which operate on the sensory information-bearing signals in parallel. Consequently, the neurons transform input signals into a place-coded probability distribution that represents the computed values of parameters by sites of maximum relative activity within the map (Knudsen et al., 1987). The information so derived is of such a form that it can be readily accessed by higher-order processors using relatively simple connection schemes.
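As a toy illustration of such place coding (an assumption-laden sketch, not a model from the text; the preferred values and tuning width are arbitrary), an array of neurons with overlapping Gaussian tuning curves represents a stimulus parameter by the site of peak activity in the array.

```python
import numpy as np

# Toy illustration of place coding: the stimulus value is encoded by
# which neuron in the array responds most strongly.

preferred = np.linspace(0.0, 1.0, 50)   # each neuron's preferred parameter value
width = 0.05                            # tuning-curve width (arbitrary choice)

def map_response(stimulus):
    """Activity of every neuron in the map for a given stimulus value."""
    return np.exp(-((stimulus - preferred) ** 2) / (2.0 * width ** 2))

activity = map_response(0.37)
peak = int(np.argmax(activity))         # the parameter is read out from the peak site
print(f"peak at neuron {peak}, decoded value ~ {preferred[peak]:.2f}")
```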
Organization of the Chapter
The material presented in this chapter on computational maps is organized as follows: In Section 10.2 we expand on the idea of computational maps in the brain. Then, in Section 10.3, we describe two feature-mapping models, one originally developed by Willshaw and von der Malsburg (1976) and the other by Kohonen (1982a), which are able to explain or capture the essential features of computational maps in the brain. The two models differ from each other in the form of the inputs used. The rest of the chapter is devoted to detailed considerations of Kohonen's model, which has attracted a great deal of attention in the literature. In Section 10.4 we describe the formation of "activity bubbles," which refers to the modification of the primary excitations by the use of lateral feedback. This then paves the way for the mathematical formulation of Kohonen's model in Section 10.5. In Section 10.6 we describe some important properties of the model, followed by additional notes of a practical nature in Section 10.7 on the operation of the model. In Section 10.8 we describe a hybrid combination of the Kohonen model and a supervised linear filter for adaptive pattern classification. Learning vector quantization, an alternative method of improving the pattern-classification performance of the Kohonen model, is described in Section 10.9. The chapter concludes with Section 10.10 on applications of the Kohonen model, and some final thoughts on the subject in Section 10.11.
10.2 Computational Maps in the Cerebral Cortex
Anyone who examines a human brain cannot help but be impressed by the extent to which the brain is dominated by the cerebral cortex. The brain is almost completely enveloped by the cortex, tending to obscure the other parts. Although it is only about 2 mm thick, its surface area, when spread out, is about 2400 cm² (i.e., about six times the size of this page). What is even more impressive is the fact that there are billions of neurons and hundreds of billions of synapses in the cortex. For sheer complexity, the cerebral cortex probably exceeds any other known structure (Hubel and Wiesel, 1977).

Figure 10.1 presents a cytoarchitectural map of the cerebral cortex as worked out by Brodmann (Shepherd, 1988; Brodal, 1981). The different areas of the cortex are identified by the thickness of their layers and the types of neurons within them. Some of the most important specific areas are as follows:

Motor cortex: motor strip, area 4; premotor area, area 6; frontal eye fields, area 8.
Somatosensory cortex: areas 3, 1, and 2.
Visual cortex: areas 17, 18, and 19.
Auditory cortex: areas 41 and 42.

Figure 10.1 shows clearly that different sensory inputs (motor, somatosensory, visual, auditory, etc.) are mapped onto corresponding areas of the cerebral cortex in an orderly fashion. These cortical maps are not entirely genetically predetermined; rather, they are sketched in during the early development of the nervous system. However, it is uncertain how cortical maps are sketched in this manner. Four major hypotheses have been advanced by neurobiologists (Udin and Fawcett, 1988):
1. The target (postsynaptic) structure possesses addresses (i.e., chemical signals) that are actively searched for by the ingrowing connections (axons).
2. The structure, starting from zero (i.e., an informationless target structure), self-organizes using learning rules and system interactions.
3. Axons, as they grow, physically maintain neighborhood relationships, and therefore arrive at the target structure already topographically arranged.
4. Axons grow out in a topographically arranged time sequence, and connect to a target structure that is generated in a matching temporal fashion.

All these hypotheses have experimental support of their own, and appear to be correct to some extent. In fact, different structures may use one mechanism or another, or it could be that multiple mechanisms are involved.

FIGURE 10.1 Cytoarchitectural map of the cerebral cortex. The different areas are identified by the thickness of their layers and types of cells within them. Some of the most important specific areas are as follows. Motor cortex: motor strip, area 4; premotor area, area 6; frontal eye fields, area 8. Somatosensory cortex: areas 3, 1, 2. Visual cortex: areas 17, 18, 19. Auditory cortex: areas 41 and 42. (From G.M. Shepherd, 1988; A. Brodal, 1981; with permission of Oxford University Press.)

Once the cortical maps have been formed, they remain "plastic" to a varying extent, and therefore adapt to subsequent changes in the environment or the sensors themselves. The degree of plasticity, however, depends on the type of system in question. For example, a retinotopic map (i.e., the map from the retina to the visual cortex) remains plastic for only a relatively short period of time after its formation, whereas the somatosensory map remains plastic longer (Kaas et al., 1983).

An example of a cortical mapping is shown in Figure 10.2. This figure is a schematic representation of computational maps in the primary visual cortex of cats and monkeys. The basis of this representation was discovered originally by Hubel and Wiesel (1962). In Fig. 10.2 we recognize two kinds of repeating computational maps:

1. Maps of preferred line orientation, representing the angle of tilt of a line stimulus
2. Maps of ocular dominance, representing the relative strengths of excitatory influence of each eye

The major point of interest here is the fact that line orientation and ocular dominance are mapped across the cortical surface along independent axes. Although in Fig. 10.2 (for
