ASSOCIATIVE-PROJECTIVE NEURAL NETWORKS:
ARCHITECTURE, IMPLEMENTATION, APPLICATIONS

Ernst M. Kussul, Dmitri A. Rachkovskij & Tatyana N. Baidyk

Ukrainian Academy of Sciences / Institute of Cybernetics - Kiev, USSR

Abstract: In this paper we consider an original neural network paradigm based on an
associative-projective neural network architecture. The architecture provides a uniform
basis for intelligent systems working in various fields. We describe several modes of
architecture operation. The architecture is supported by high-performance neurocomputers.
Experiments carried out to test the architecture characteristics and its efficiency in
applications are reviewed. We also discuss application examples of this architecture for
image texture classification and an approach to visual object recognition.

Keywords: neural network architecture, artificial intelligence problems, neurocomputers,
neural network applications.


1. INTRODUCTION

Well-known up-to-date neural network architectures may be subdivided into several basic
categories by their purpose, e.g. neural networks for neurobiological simulations, for
information preprocessing, for pattern recognition; networks for concept formation,
situation estimation, etc. Some problems of Artificial Intelligence still remain uncovered
by neural network solutions, including action planning, logical inference, formation of
complex hierarchical structures for knowledge bases, natural language understanding and
others. So far these problems require algorithms of traditional AI.

Our goal is to develop a neural network architecture equally suited to the solution of all
basic AI problems. Such an architecture permits a common approach to heterogeneous
information processing at different hierarchical levels, allows effort to be concentrated
on a specialized high-performance neurocomputer for its support, and provides a uniform
basis for the design of novel intelligent systems.

The basic design principles of the proposed architecture and its application examples are
described below.


2. ASSOCIATIVE-PROJECTIVE NEURAL NETWORK ARCHITECTURE

We build the neural network from separate blocks called neural fields.
There are three types of neural fields: associative field (AF),
buffer field (BF) and difference field (DF). Each neural field
consists of n binary neurons. The fields are linked by bundles of
n non-modifiable projective connections. Each projective connection has a unity weight and
connects the output of a neuron from one field to the input of the neuron with the same
number in another field.
The neurons of the buffer and difference fields are actually clocked RS flip-flops.
The priority of the reset input is higher than that
of the set input. The whole bundle of connections may be coupled
to either reset or set inputs of the field neurons. Hence we'll
distinguish the bundles of reset and set connections.
Projective connections may be in two states: "on" or "off". In the "on" state field
activity patterns are transferred through the projective connections, and in the "off"
state the projective connections are blocked. At present, special algorithms control the
switching of projective connections; in the future we intend to develop a special neural
structure for this purpose.
The buffer neural field is able to receive and store patterns of neuron activity from the
other fields and executes some other auxiliary functions. The difference field compares
the activity patterns of two different fields connected to its set and reset inputs,
producing a difference activity pattern.
The neurons of the associative field are fully interconnected with each other, forming a
feedback-type neural network. Associative neural fields are the basis of this
architecture; in them the model of the external world is formed and operated on. Any
information unit of any complexity (i.e. feature, object, notion, relation, etc.) is
represented not by a single neuron but by some subset of neurons called a neural assembly,
following Hebb [1]. This representation also preserves the natural likeness between
similar information units at each hierarchical level.
The neurons of the associative field possess both reset/set inputs and associative ones.
If there are no reset/set signals, the neuron output yj is defined as

    yj = 1, if Sj ≥ t;    yj = 0, if Sj < t                         (1)

where t stands for the neuron threshold and Sj is calculated as

    Sj = Σ(i=1..n) yi wij                                           (2)

where wij is the binary synaptic weight of the connection from neuron i to neuron j, which
may be 0 or 1. The synaptic weight changes according to our stochastic version of the
Hebbian learning rule:

    w*ij = wij ∨ (yi ∧ yj ∧ ξij),   M(ξij) = r,                     (3)

where w*ij and wij stand for the binary connection weight from the i-th to the j-th neuron
after and before learning respectively, yi and yj are the neuron outputs, ∨ denotes
disjunction, ∧ conjunction, ¬ negation, and ξij is a binary stochastic variable equal to 1
with probability r. Negative reinforcement (-1 < r ≤ 0) leads to unlearning:

    w*ij = wij ∧ ¬(yi ∧ yj ∧ ξij),   M(ξij) = |r|.                  (4)

Neural assemblies representing real-world patterns are formed in the associative field in
the process of learning. The neuron activity pattern representing the real pattern is set
in the field, followed by enhancement of the connectivity between the active neurons
according to Eq. 3.
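
To make Eqs. 1-4 concrete, here is a minimal simulation sketch of a single associative
field with binary weights and the stochastic learning/unlearning rule. The field and
assembly sizes (n = 4096, m = 64) are taken from the experiments described in Section 4;
the reinforcement value r = 0.1 and all function names are our own illustrative
assumptions, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 4096, 64                       # field size and assembly size (see Section 4)
    W = np.zeros((n, n), dtype=np.uint8)  # binary weight matrix wij

    def learn(active, r=0.1):
        """Eq. 3: with probability r, set wij = 1 for every pair of co-active neurons."""
        idx = np.flatnonzero(active)
        hit = (rng.random((idx.size, idx.size)) < r).astype(np.uint8)   # xi_ij, M(xi_ij) = r
        W[np.ix_(idx, idx)] |= hit

    def unlearn(active, r=-0.1):
        """Eq. 4: with probability |r|, reset wij = 0 for every pair of co-active neurons."""
        idx = np.flatnonzero(active)
        hit = (rng.random((idx.size, idx.size)) < abs(r)).astype(np.uint8)
        W[np.ix_(idx, idx)] &= (1 - hit).astype(np.uint8)

    def step(active, t):
        """Eqs. 1-2: one synchronous update of the associative field with threshold t."""
        s = active.astype(int) @ W        # input sums Sj
        return (s >= t).astype(np.uint8)

    # form an assembly for one stochastically coded pattern
    pattern = np.zeros(n, dtype=np.uint8)
    pattern[rng.choice(n, m, replace=False)] = 1
    for _ in range(20):
        learn(pattern)
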
After a learning act each connection takes a certain value of 0 or 1, but the assemblies
being formed possess a complex structure very much like the structure of assemblies in a
neural network with graded connections. The role of graded connections is played here by
the expectation M(wij) of the binary random variables wij. Since any information unit in
the considered neural network is represented by a number (m) of neurons, the neural
assembly forms due to changes in a sufficient number (on the average r·m²) of connections.
The dependence of the connection formation probability between two neurons on the number
of their coactivations is given in Figure 1. An assembly already formed in the network may
disappear if the reinforcement is negative.

One of the basic properties of a neural assembly is that the whole assembly is activated
provided a sufficient part of it is activated. At the same time the non-assembly neurons
are inactivated. This property appears due to an activity regulator uniformly controlling
the thresholds of all the associative field neurons, similar to the regulator discussed in
[2]. The regulator maintains a nearly constant number m of active neurons within the
associative neural field. The number m of active neurons is much smaller than the number n
of field neurons (m << n). The small number of active neurons stochastically distributed
in the pattern ensures high storage capacity (e.g. [3]). To reduce the number of active
neurons to m in the buffer field, a mechanism of another kind is used [4]. This operation
is called normalization.
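
The activity regulator and the buffer-field normalization might be imitated in simulation
as follows; choosing the threshold as the m-th largest input sum and thinning the buffer
pattern at random are simplifying assumptions of ours, not the mechanisms of [2,4].

    import numpy as np

    def regulate(s, m):
        """Uniform threshold chosen so that about m neurons with the largest input sums fire."""
        if np.count_nonzero(s) <= m:
            return (s > 0).astype(np.uint8)
        t = np.partition(s, -m)[-m]          # the m-th largest input sum becomes the threshold
        return (s >= t).astype(np.uint8)

    def normalize(accumulated, m, rng):
        """Buffer-field normalization: keep only m randomly chosen active neurons."""
        idx = np.flatnonzero(accumulated)
        if idx.size <= m:
            return (accumulated > 0).astype(np.uint8)
        out = np.zeros_like(accumulated)
        out[rng.choice(idx, size=m, replace=False)] = 1
        return out
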
The specific architecture of the associative-projective neural network (APNN) is
constructed from the above neural fields coupled by projective connections. The assemblies
corresponding to items of various complexity are formed in different associative fields.
For example, in visual recognition the neural assemblies corresponding to roof, wall and
window are formed in one associative field, and the assembly representing the whole house
is formed in another field. Each associative field forms a separate level of the resulting
multilevel architecture. The skeleton of the architecture consisting of associative fields
coupled by projective connections is shown in Figure 2.

Now let us consider a more realistic example of a two-level associative-projective neural
network consisting of two associative fields and one buffer field, shown in Figure 3. In
this figure and further, neural fields are represented by rectangles, each thick line
represents a whole bundle of projective connections, black arrows represent reset inputs,
and white arrows represent set inputs. We'll demonstrate how real-world object models are
formed by this structure. Let the considered objects be described by a finite feature set
F = {f1,...,fk}. The neural assemblies corresponding to the features are formed in the
associative field AF1. For this purpose all features are sequentially input to the network
by a special coding unit CU. The coding unit converts each input feature fi into some
subset of active neurons in the associative field AF1. At the same time positive
reinforcement is given to the field, the synaptic connection weights are increased
(Eq. 3), and the neural assembly corresponding to the feature fi is formed.

Let some object be described by the features f1, f2, f3. To form the assembly model of the
object, its features are input in turn to the coding unit. The coding unit sequentially
activates via set projective connections the assemblies corresponding to f1, f2, f3 (see
Figure 3; remember that the assemblies are really coded not by compact neuron subsets but
by stochastically generated ones). The neuron activity patterns corresponding to these
assemblies are transferred through projective connections to the buffer field BF, where
the feature codes are accumulated. After normalization only a fraction of the active
neurons of each feature is preserved in BF. These are the neurons-representatives of the
features. Then the obtained pattern is transferred to the associative field AF2 and the
assembly O corresponding to the object model forms there under positive reinforcement. It
is formed from the neurons-representatives of the object features.

The object name coded by a subset of active neurons may also be one of its features (see
Figure 4). This permits the object model to be activated by some feature subset as well as
by its name N. Let us now consider how the object name is recognized. If only some part of
the object features is fed to the coding unit, only a fraction of the neurons representing
the object will be accumulated in the buffer field BF. As soon as they are transferred to
the AF2 field, the whole assembly of the named object O+N will be activated there
according to the associative retrieval property of the assembly. Hence the
neurons-representatives of the object name will also be activated. The name itself can
then be easily activated in AFN by these neurons.

In the same way the features of an assembly activated by its name can be recalled via
their neurons-representatives. The process of recall is accomplished as follows (see
Figure 5). Let the assembly O+N be activated by its name N in AF2. The activity pattern of
AF2 is transferred to the AF1 field. The assembly N is absent in AF1, so the assemblies of
all the other object features get initial activation from their neurons-representatives.
Since the regulator of AF1 field activity maintains the number of active neurons equal to
that of only one assembly, the assemblies begin competing for activation. This results in
the activation of one of the feature assemblies, e.g. f2. This assembly is transferred to
the preliminarily cleared buffer field BF and through reset projective connections
inactivates its representatives in the object assembly in AF2.
Then AF1 is cleared and the remaining part of the assembly (without the f2
representatives) is transferred to it. Now only the features f1 and f3 compete for
activation. The winner is also transferred to the BF field and inactivates its
representatives in the object assembly, and so on. In this way the sequential deciphering
of the object assembly into its features is carried out.
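
The recall and deciphering cycle just described can be imitated with the toy field
operations from the earlier sketches. The sketch below detects the winning feature
assembly by overlap and then resets its representatives; this is a simplification of the
competition actually performed by the activity regulator, and the names and stopping rule
are assumptions for illustration.

    import numpy as np

    def decipher(object_code, feature_assemblies, max_steps=10):
        """Sequentially recover the feature assemblies represented in an object code.

        object_code: binary vector of the object's neurons-representatives (as in AF2).
        feature_assemblies: dict feature name -> binary assembly vector (as in AF1).
        """
        remaining = object_code.copy()
        recovered = []
        for _ in range(max_steps):
            # initial activation of each feature assembly from its representatives,
            # then competition: the assembly with the largest overlap wins
            overlaps = {f: int(np.count_nonzero(remaining & code))
                        for f, code in feature_assemblies.items()}
            winner = max(overlaps, key=overlaps.get)
            if overlaps[winner] == 0:
                break
            recovered.append(winner)
            # reset projective connections: inactivate the winner's representatives
            remaining &= 1 - feature_assemblies[winner]
        return recovered
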
Special procedures have also been developed that permit the formation of assemblies
reflecting the sequence of features presented to the input and the retrieval of this
sequence in the deciphering process.

The architecture allows an adequate hierarchical model of the external world to be built
by the addition of extra neural fields connected by projective connections and of pattern
processing procedures similar to those discussed above.

Sometimes it is necessary to compare the activity patterns of two neural fields.
Difference fields are used for this purpose, allowing determination of the features that
distinguish assemblies from each other. This property can be used to form relations
between objects, which is necessary to describe scenes, situations, etc.


3. HARDWARE SUPPORT

We consider the described architecture to be universal enough to provide a uniform basis
for the solution of a wide scope of AI problems and complicated applications. This allows
specialized hardware to be created that supports only this architecture. Such an approach
makes it possible to achieve very high performance of the associative-projective
neurocomputers (APNC) that we propose.

Our neurocomputer is a long-word processor oriented to the parallel simulation of a large
number of the neurons described above. The processor implements bit operations on long
words stored in word-wide dynamic RAM. The number of simultaneously simulated neurons
equals the word length; the input sum S of each neuron (Eq. 2) is accumulated in a
separate binary counter. The long-word processor operates under a reduced instruction set
control unit.

For example, our neurocomputer prototype has DRAM of 256K x 256-bit words and simulates
256 neurons of the associative field simultaneously. Since an APNN usually contains many
more neurons, the simulation process is only partially parallel.

This neurocomputer also effectively implements the other operations used in the
associative-projective architecture. For example, the transfer of neuron activity patterns
through projective connections is accomplished by moving several DRAM words from one
bucket to another. The accumulation of activity patterns in buffer fields is simulated by
bit disjunction of the corresponding binary vectors. The random number generation for
learning in accordance with Eqs. 3, 4 is accomplished by a pseudorandom number generator
that is also simulated in the long-word processor, as is the coding of input information
and the decoding of output information.
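
The word-level style of computation described here can be imitated directly with ordinary
integers used as long binary words: pattern transfer is a word copy, buffer accumulation
is a bitwise OR, and the input sums of Eq. 2 are bit counts over the AND of the activity
word with a neuron's weight row. This is only an illustration of the computation style,
not of the APNC instruction set.

    # Each field's activity pattern is held as one long binary word (a Python int here,
    # standing in for the 256-bit DRAM words of the prototype).
    def transfer(pattern):
        """Projective connections with unit weights: the activity word is simply copied."""
        return pattern

    def accumulate(buffer_word, pattern):
        """Accumulation in a buffer field: bit disjunction of the binary vectors."""
        return buffer_word | pattern

    def input_sums(pattern, weight_rows):
        """Eq. 2 for every neuron j: count the active inputs i whose connection wij is 1."""
        return [bin(pattern & row).count("1") for row in weight_rows]

    # usage: two feature patterns accumulated in a small buffer field of 16 neurons
    buffer_word = 0
    buffer_word = accumulate(buffer_word, 0b0000111100000011)
    buffer_word = accumulate(buffer_word, 0b1100000000000011)
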
This neurocomputer can implement associative-projective neural networks with up to 25000
neurons and 50 mln connections. So far we have implemented a neural network with 4096
neurons and 16 mln connections. In the tests one update cycle of this network took 23 ms,
which corresponds to 700 MCUPS. The neurocomputer is built as an external module for a
PC AT and is based on standard medium-scale integration components.

Custom chips with 3000 gates implementing a long-word processor section are being
manufactured now. A neurocomputer based on these chips and 256K DRAM chips will have a
performance of not less than 5000 MCUPS. Using hi-tech electronic components available on
the market today, such as custom chips with 20000 gates and 4MB dynamic RAM, it is
possible to create a one-board neurocomputer with 100000 MCUPS performance implementing
neural networks with up to 200 mln connections. Our latest technical solutions and
know-how permit the development of a desktop neurocomputer of 10000000 MCUPS simulating
APNN with 5000 mln connections.


4. EXPERIMENTAL INVESTIGATION OF APNN

To investigate the basic properties of APNN, numerous experiments have been carried out.
At first the architecture was simulated on a personal computer, and now neurocomputer
prototypes are also used.

A series of experiments on the information capacity of the associative field has been
carried out [5,6]. For example, the well-known result [7] that the number of statistically
independent assemblies in the network can exceed the number of neurons has been observed.
The associative field of 4096 neurons could store and retrieve up to 7000 assemblies, each
consisting of 64 neurons. Obviously the assemblies overlap and each neuron may belong to
dozens or hundreds of assemblies, but this does not prevent the normal operation of the
field. The dependence of the information capacity on the signal-to-noise ratio in the
input pattern and on the probability of retrieval was also investigated, and
characteristics important for applications were obtained [8].

The real assemblies formed in the associative field during information exchange with a
real environment have a rather complicated internal structure reflecting the structure of
the objects. The most general properties of an object form the assembly nucleus, with
stronger connectivity than in the assembly fringes formed by the features of individual
class representatives. We have simulated assemblies with nucleus and fringes for the
purpose of investigating the associative field information capacity. In this case special
tests have been performed. The most difficult one is to retrieve the fringe given the
assembly nucleus and a part of this fringe. The experiments have shown a substantial
decrease of information capacity compared to statistically independent assemblies [9]. A
special learning rule was proposed to increase the stability of fringes.

A number of experiments has been carried out to test the associative field with a
"diluted" (not fully connected) weight matrix. The problem was to represent the diluted
weight matrix in the computer memory so that the memory size grows linearly (not as the
square) with the number of neurons while preserving the high performance. The experiments
have shown that the diluted network operates faster than a fully connected one with the
same number of connections, and stores more assemblies of comparatively large size
(m > √n neurons).
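
One plausible way to obtain the linear memory growth mentioned here is to store, for every
neuron, only the indices of its existing connections together with their binary weights.
The sketch below uses fixed-size per-neuron index arrays; this is our illustration of the
storage idea, not the representation actually used on the neurocomputer.

    import numpy as np

    class DilutedField:
        """Associative field keeping only c candidate connections per neuron, so the
        memory grows as n*c (linearly in n for fixed c) instead of n*n."""

        def __init__(self, n, c, seed=0):
            rng = np.random.default_rng(seed)
            # for each neuron j: indices of the c neurons that may project to it, and 0/1 weights
            self.pre = np.stack([rng.choice(n, c, replace=False) for _ in range(n)])
            self.w = np.zeros((n, c), dtype=np.uint8)

        def input_sums(self, active):
            """Eq. 2 restricted to the existing connections."""
            return (active[self.pre] * self.w).sum(axis=1)

        def learn(self, active, r, rng):
            """Eq. 3 applied only where a connection exists."""
            coactive = (active[:, None] & active[self.pre]).astype(bool)
            self.w |= (coactive & (rng.random(self.w.shape) < r)).astype(np.uint8)
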
To test the functioning of the multilevel hierarchical associative-projective neural
network, experiments with sequence processing were performed. The neural network contained
four levels, with an associative field and several buffer fields in each. Neural
assemblies representing letters were formed in the associative field of the lowest level.
The neural assemblies of the second level represented syllables, those of the third level
represented words, and word combinations were represented in the fourth level. The
mechanisms of memorizing, recognition and replaying of complex letter sequences of various
lengths, including branches and repetitions, were examined in these experiments [8,10].

To provide the interface of APNN with the external world, it is necessary to transform the
input information into an n-dimensional binary neuron vector. For robustness, reliability,
enhanced information capacity, and normal functioning of APNN the codes must be
distributed, stochastic, sparse (with a small and controlled number of 1s) and must
reflect the metrics of the coded parameters.
Our experiments [8] have shown that the coding procedures developed by us meet these
requirements. These procedures (e.g. [11,12]) are used in our practical work.
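
As an illustration of codes with these properties, the sketch below encodes a scalar
parameter into a sparse binary vector whose active positions are drawn from a fixed random
permutation; close parameter values share a large part of their active neurons, so code
overlap reflects the parameter metric. This sliding-window construction is our own
assumption and is not the coding procedure of [11,12].

    import numpy as np

    def make_scalar_encoder(n, m, levels, seed=0):
        """Return an encoder mapping a value in [0, 1] to a sparse n-bit code with m ones.

        Codes of close values overlap strongly; codes of distant values overlap little,
        so the metric of the parameter is reflected in code similarity.
        """
        rng = np.random.default_rng(seed)
        order = rng.permutation(n)                  # fixed random placement of the 1s

        def encode(value):
            start = int(round(value * (levels - 1))) * (m // 2)   # window slides with the value
            code = np.zeros(n, dtype=np.uint8)
            code[order[start:start + m]] = 1
            return code

        return encode

    # example: two close values share about half of their active neurons
    enc = make_scalar_encoder(n=4096, m=64, levels=50)
    a, b = enc(0.50), enc(0.52)
    print(int(np.count_nonzero(a & b)), "common active neurons out of", int(a.sum()))
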
5. APPLICATIONS OF APNN

To test the efficiency of the associative-projective architecture in applications,
experiments in the following domains have been carried out: isolated vowel recognition in
a single associative field [11], shape recognition of hand-drawn figures in a two-level
network [4], image texture recognition [12], and inferencing [13]. The obtained results
allowed us to conclude that associative-projective neural networks can be effectively used
in a wide range of applications, and to start the elaboration of more complex tasks
including visual quality inspection, handwritten text recognition, acoustical diagnostics,
and information search systems. They have also allowed us to develop approaches to the
APNN solution of such difficult AI problems as visual recognition (including object and
scene recognition), continuous speech recognition, advanced expert systems including a
neural network knowledge base and an inference engine with associative substitution, and
adaptive robot control in a natural environment (including goal formation, decision
making, action planning) [14]. This material will be published in English elsewhere, but
here we'll briefly discuss the experiments on image texture classification and the
approach to visual object recognition.

5.1. Experiments on texture recognition

The first stage of the recognition task is feature extraction. We have considered the
following characteristics of the image in the "view window" as texture features: the
brightness histogram, the contrast histogram, and the elementary edge orientation
histogram. Each column component of each histogram has been considered a separate feature
(parameter) of the texture. We've used 75 parameters.

A one-level neural network with 4096 neurons and 16 mln modifiable connections has been
used for this task. The algorithm of associative field training and functioning looks as
follows. In the mode of supervised training three phases are repeated: recognition,
unlearning and learning. Coded feature histograms extracted from each image window are fed
to the neural network. The output vector after one step of network functioning is compared
to the vectors representing the texture classes (masks). If the output vector has the
largest overlap with the mask of the proper class, nothing is done, and we go to the
following image window. If not, the result is considered wrong and unlearning takes place,
decreasing the number of connections from the input code vector to the mask of the wrongly
recognized texture class. Then learning increases the number of connections from the
activity pattern corresponding to the window under analysis to the correct mask. In the
functioning (test) mode, only the recognition of new pictures (that were not shown to the
network) is performed, and the statistics are collected.
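
The training cycle just described can be read schematically as follows. The sketch assumes
the binary-weight field and the stochastic update of Eqs. 3-4; the function names, the
threshold and the reinforcement value are illustrative assumptions rather than the
parameters used in the experiments.

    import numpy as np

    def stochastic_update(W, pre, post, r, rng):
        """Change the connections from active 'pre' neurons to active 'post' neurons.
        r > 0: set weights with probability r (Eq. 3); r < 0: reset with probability |r| (Eq. 4)."""
        rows, cols = np.flatnonzero(pre), np.flatnonzero(post)
        hit = (rng.random((rows.size, cols.size)) < abs(r)).astype(np.uint8)
        if r > 0:
            W[np.ix_(rows, cols)] |= hit
        else:
            W[np.ix_(rows, cols)] &= (1 - hit).astype(np.uint8)

    def train_window(W, code, masks, correct, threshold, r, rng):
        """One supervised step of the texture classifier: recognition, then
        unlearning and learning if the window was recognized wrongly."""
        out = (code.astype(int) @ W >= threshold).astype(np.uint8)       # Eqs. 1-2
        overlaps = {c: int(np.count_nonzero(out & mask)) for c, mask in masks.items()}
        answer = max(overlaps, key=overlaps.get)
        if answer != correct:
            stochastic_update(W, code, masks[answer], -r, rng)   # unlearning: weaken wrong class
            stochastic_update(W, code, masks[correct], +r, rng)  # learning: strengthen correct class
        return answer
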
We have used for the experiments a set of 7 black-and-white photos of city streets
converted to 200x144 images with 36 grey levels. The images were scanned by a 16x16 window
with a step of 8 pixels along both axes from the top-left to the bottom-right corners (a
total of 24x17 = 408 window positions in each picture). We chose 5 texture types: the sky,
the tree crowns, the road, vehicles, and tree trunks and poles. First the pictures were
marked by the teacher. Each window position was marked with only one texture type, even if
there were several texture types in it - as the teacher chose to mark it.

A number of experiments has been carried out using various parameters. First the relative
rate of learning and unlearning was adjusted to provide convergence. Then we performed
experiments to investigate the process of convergence given various values of initial
noise in the weight matrix: 1) 20000 random ones and 2) 200000 random ones. The network
was trained on two pictures until convergence. The convergence curves (averaged over 20
convergence acts) were very much alike in both cases, as the following table shows:

    error rate    presentations of each picture
                  1) low noise    2) higher noise
    1.2%               38               40
    0.6%               60               55
    0.0%               76               80

The average number of errors (and corresponding learning-unlearning acts) till convergence
was 1700.

The results of the network functioning on 5 new (test) pictures have also been obtained.
The final recognition rate on these 5 pictures varied from 85.3% to 88.5% with an average
rate of about 87%. The convergence dynamics (averaged over 5 convergence acts) can be seen
from the following table:

    presentations of each picture     1    5   10   15   20   30   40   50   60   85
    errors on "training" pictures   124   38   21   16   14   10    5    4    2    0
    errors on "test" pictures       127  102   66   53   53   65   54   60   56   53

These results (obtained using the APNC neurocomputer) were compared with experiments on
the same pictures using the potential function recognition method [15] implemented by us
on a PC AT. The recognition quality was the same, but the neurocomputer processed one
picture in 30 sec as compared with 1000 sec obtained by potential functions. We have also
conducted these experiments with a perceptron-like neural network on a PC AT. Here the
recognition of a single picture was faster (about 200 sec), but the recognition rate was
lower: 82-84%. In perspective the associative-projective neurocomputer will process one
picture in several milliseconds.

5.2. An approach to visual object recognition by APNN

Complex objects of the external world can be recognized by the type, shape and mutual
location of the segmented texture regions. Recognition of texture type has been touched on
in the previous Section, and shape recognition has been considered in [4]. Here we
describe basic relations between the texture regions that can be easily determined and
used in object recognition, and their encoding.

In the first place let us consider the spatial relations and relative sizes of the image
regions. For example, if there is a tree in the picture, its trunk is usually located
lower than the crown and near it. Besides, the trunk area is considerably less than the
crown area in the image. The area calculation of the various image regions can be easily
performed by the neurocomputer. The centres of gravity can also be calculated for the
regions, as well as the mutual location of any pair.

Let us consider two texture regions A and B. Then a number of their relations can be
determined: P(A,B) - the "region A is near region B" relation, L(A,B) - the "region A is
to the left of region B" relation, R(A,B) - the "region A is to the right of region B"
relation, T(A,B) - the "region A is above region B" relation, B(A,B) - the "region A is
below region B" relation, and M(A,B) - the "region A is larger than region B" relation.

Each of the above relations can be characterized by a numerical value. The first five
relations can be characterized by relative distances measured as the ratio of the
corresponding distance between the centres of gravity to the square root of the region A
area. The value of M(A,B) can be determined as the ratio of the region A area to the
region B area. To encode these relations, it is necessary to generate bit masks for each
of them and for their values. A bit mask is a stochastically generated and then
permanently fixed binary vector with a known number of ones corresponding to active
neurons. Then the bit conjunction of the relation mask with the value mask of this
relation is performed. Certainly, the above relations are not the only possible relations
between the texture regions that can be used for object recognition.
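
A small sketch of this mask-conjunction encoding follows; the mask sizes (512 ones out of
4096, so that the conjunction contains on the order of 64 ones) and the quantization of
relation values into 16 levels are illustrative assumptions of ours.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4096

    def fixed_mask(n, ones, rng):
        """A stochastically generated, permanently fixed binary vector with a known number of ones."""
        v = np.zeros(n, dtype=np.uint8)
        v[rng.choice(n, ones, replace=False)] = 1
        return v

    # one mask per relation and one mask per quantized value level (granularity is assumed)
    relation_masks = {rel: fixed_mask(n, 512, rng) for rel in ("P", "L", "R", "T", "B", "M")}
    value_masks = [fixed_mask(n, 512, rng) for _ in range(16)]

    def encode_relation(rel, value_level):
        """Bit conjunction of the relation mask with the mask of its numerical value."""
        return relation_masks[rel] & value_masks[value_level]

    code = encode_relation("T", 3)   # e.g. "region A is above region B" with a given relative distance
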
Thus we have considered the encoding of relations between any two texture regions. To
describe the object, it is necessary to indicate the specific regions that are
characterized by these relations. The example of an APNN for relation encoding and object
recognition is presented in Figure 6.

Figure 6. APNN for visual object recognition.

Texture recognition module TR and shape recognition module SR possess a complex structure
including several neural fields. Relation extraction module RE is intended for the
extraction of spatial relations between uniformly textured regions and may be realized in
the neurocomputer as well as in the front-end computer. The obtained spatial relations are
coded in the encoding unit CU1. Buffer fields BF1, BF2, BF3 are for accumulating and
normalizing the code representing the object and its name. Object name input module N1 and
encoding module CU2 are for the input and encoding of the object name. Name input may be
realized in a number of ways, e.g. by keyboard input. Associative field AF1 is for the
formation of neural assemblies corresponding to the objects in the image, and for their
recognition in the operation mode. Decoding module CU3 is for decoding the recognized
object names.

The described system operates in the following way. The image is fed to the texture
recognition module, where the uniformly textured regions are determined. The neural
pattern corresponding to the region texture type is transferred to the buffer field BF1.
The "map" of the region (i.e. its binary image in the field of vision) is transferred to
the shape recognition module SR and to the spatial relation extraction module RE. The
assembly of the shape most resembling the shape of the region under processing is
activated in the SR module. This assembly pattern is transferred to the buffer field BF1,
where it is united with the texture pattern by bit disjunction. The obtained code is
normalized and transferred to the buffer field BF2. Then the texture recognition block
starts to determine a new uniformly textured region neighboring the previous one.

As soon as the new region is determined, its map is transferred to the SR and RE modules.
The sequential extraction of the spatial relations is accomplished in RE, then they are
coded in CU1 and transferred to BF1. This field accumulates the codes of the spatial
relations and normalizes them. The resulting code is transferred to BF2, where it is
united by bit disjunction with the first texture region code. There also exists a certain
mechanism for the representation of directed relations in the code.

The recognition of the second region's shape is then accomplished in the SR module. The
texture and shape codes are fed to the field BF1. In the same manner as before the
accumulated code is normalized. The resulting code is input by disjunction to BF2. The
code accumulated in BF2 is then normalized and transferred to BF3. Meanwhile the system
begins the analysis of the following pair of object regions.

After looking through all the texture regions of the object, the buffer field BF3 contains
the code of the object image. The code consists of Region1-Relation-Region2 triplets. The
code is normalized and superimposed by bit disjunction with the object name code input
from the N1 and CU2 modules. Then the code is transferred to AF1, where the assembly
representing the object forms.
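
A compressed sketch of how the object code might be assembled from such triplets,
collapsing the division of work between BF1, BF2 and BF3 into a single accumulator; the
normalization size m and the function names are assumptions for illustration, not the
authors' procedure.

    import numpy as np

    def normalize(code, m, rng):
        """Keep only m of the active neurons (buffer-field normalization)."""
        idx = np.flatnonzero(code)
        if idx.size <= m:
            return (code > 0).astype(np.uint8)
        out = np.zeros_like(code)
        out[rng.choice(idx, size=m, replace=False)] = 1
        return out

    def object_code(triplets, m, rng):
        """Accumulate Region1-Relation-Region2 triplet codes by bit disjunction,
        normalizing each triplet's contribution and the final object code."""
        acc = np.zeros_like(triplets[0][0])
        for region1, relation, region2 in triplets:
            acc |= normalize(region1 | relation | region2, m, rng)
        return normalize(acc, m, rng)
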
The procedure of object recognition is constructed in the same way, except that the object
name code is not input to BF3. This code is associatively retrieved in AF1 and is then
decoded in CU3. Some peculiarities arise in the process of recognition. It is not known in
advance what area in the field of vision is occupied by the object, so when the code is
formed originally, the features of outside objects may be found in the buffer field BF3,
and at the same time not all the features of the object under recognition will be there.
Three variants of recognition may occur, i.e. the correct object is recognized, an
incorrect object is recognized, or no object is recognized (no assembly is retrieved in
the associative field).

Let us consider the last case. Since the neural network cannot propose any hypothesis
about the object, we must move the analyzed region to another part of the image or widen
it to find additional features.

If some assembly is activated in the associative field, it is considered as a hypothesis
about the presence of the corresponding object in the field of vision. This hypothesis
must be verified. For this purpose the components of the activated assembly must be
retrieved. The details of this process were considered in Section 2. This process must be
conducted down to the level of texture assemblies and spatial relation assemblies. Then a
more precise
determination of the texture regions belonging only to the object under analysis may be
accomplished, and the image area occupied by the object is specified. The assembly may
contain textures that were not in the field of vision originally. The deciphering allows
the aimed search for the missing textures to be initiated on the basis of the component
texture features as well as of their supposed location relative to the already discovered
texture regions.

This additional analysis must be concluded with a second attempt at object recognition.
These iterations may be continued as long as the precise analysis at the lower level
brings new results. Now two outcomes are possible: either the composition of the
discovered texture regions fits the set of textures in the recognized assembly, or some
texture regions are absent. In the latter case there may be different reasons for the
discrepancy. One of these may be that we are examining a particular object that does not
contain all the features of the typical pattern formed as a neural assembly. The second
reason may be that the object is partly hidden by another object. This reason may be
discovered if the recognition of a new object is accomplished in the places where the
missing texture regions must be located as a result of deciphering. If a well-recognized
object is discovered in this place, it is highly probable that it hides the missing
texture regions. The assumption of object shadowing is also probable if the missing
texture regions are typical of this object class.

Thus the object recognition requires an iterative procedure including transitions from the
lower hierarchical levels to the higher ones as well as in the opposite direction. This
procedure permits verification of the recognition correctness, specification of the
position of the object and its parts, telling the object from the background, and
determining whether it is shaded by another object.

6. CONCLUSION

We have described a neural network architecture developed especially for the solution of
difficult AI problems. Experiments in recognition of vowels, textures and shapes, in
processing of sequences and in logical inference, conducted on neurocomputer prototypes,
have demonstrated the efficiency of the proposed architecture in certain aspects of these
problems. The uniform approach to coding and processing of information at different
hierarchical levels and modalities allows the developed fragments to be united within a
single neural network structure, and a system to be obtained that will be able to solve
complicated problems insoluble today by other neural network architectures and traditional
approaches. The present architecture permits inexpensive and high-performance hardware
support, which makes it attractive from the viewpoint of prospective applications. Such
hardware is under development now. As the new hardware support becomes ready we intend to
expand the range of applications.

REFERENCES
[1] Hebb D.O. The Organization of Behaviour. - N.Y.: Wiley, 1949.
[2] Braitenberg V. Cell assemblies in the cerebral cortex. Lect. Notes Biomath. - 1978. -
21. - P.171-188.
[3] Palm G. On associative memory. Biol. Cybern. - 1980. - 36. - P.19-31.
[4] Kussul E.M., Baidyk T.N. Development of neural network architecture for recognition of
object shape in the image. Automatika. - 1990. - N5. - P.56-61. (in Russian).
[5] Rachkovskij D.A. On numerical-analytical investigation of neural network
characteristics. Neuron-like networks and neurocomputers. - Kiev:
Inst. Cybern. Ukrain. Acad. Sci., 1990. - P.13-23. (in Russian).
[6] Baidyk T.N., Kussul E.M., Rachkovskij D.A. Numerical-analytical method for neural
network investigation. Proc. Int. Symp. on Neural Networks and Neural Computing, Prague,
Czecho-Slovakia, Sept. 10-14, 1990. - P.217-219.
[7] Willshaw D.J., Buneman O.P., Longuet-Higgins H.C. Non-holographic associative memory.
Nature. - 1969. - 222. - P.960-962.
[8] Rachkovskij D.A. Development and investigation of multilevel assembly neural networks.
Ph.D. Thesis. - Kiev, 1990. (in Russian).
[9] Baidyk T.N., Kussul E.M. Structure of neural assembly. (Submitted to NIPS'91).
[10] Kussul E.M., Rachkovskij D.A. Multilevel assembly neural architecture and processing
of sequences. (Report presented at the International Workshop "NEUROCOMPUTERS and
ATTENTION", Moscow, Sept. 1989. Proceedings volume to appear in the series "Proceedings in
Nonlinear Science", Manchester University Press).
[11] Rachkovskij D.A., Fedoseeva T.V. On audio signals recognition by multilevel neural
network. Proc. Int. Symp. on Neural Networks and Neural Computing, Prague,
Czecho-Slovakia, Sept. 10-14, 1990. - P.281-283.
[12] Kussul E.M., et al. On image texture recognition by associative-projective
neurocomputer. (Submitted to ANNIE'91).
[13] Experiments with our novel analogy-based inference have been carried out by
A.M. Kasatkin and L.M. Kasatkina.
[14] Kussul E.M. Associative neuron-like structures. - Kiev: Naukova Dumka (in Russian,
to be published in 1992).
[15] Aizerman M.A., Braverman E.M., Rozonoer L.I. Potential function method in machine
learning theory. - Moscow: Nauka, 1970.

