Preface
This book deals with a novel paradigm of neural networks, called multidimensional neural networks. It also provides a comprehensive description of a certain unified theory of control, communication and computation. The book can serve as a textbook for an advanced course on neural networks or computational intelligence/cybernetics; both senior undergraduate and graduate students can benefit from such a course. It can also serve as a reference book for practising engineers utilizing neural networks. Furthermore, the book can be used as a research monograph by neural network researchers.
In the field of electrical engineering, researchers have innovated sub-fields such as control theory, communication theory and computation theory. Concepts such as logic gates, error correcting codes and optimal control vectors arise in the computation, communication and control theories respectively. In one dimensional systems, the concepts of error correcting codes and logic gates are related to neural networks. The author, in his research efforts, showed that the optimal control vectors (associated with a one dimensional linear system) constitute the stable states of a neural network. Thus a unified theory was discovered and formalized in one dimensional systems. Questioning the possibility of logic gates operating on higher dimensional arrays resulted in the discovery as well as formalization of the research area of multi/infinite dimensional logic theory. The author has generalized the known relationship between one dimensional logic theory and one dimensional neural networks to multiple dimensions. He has also generalized the relationship between one dimensional neural networks and error correcting codes to multidimensions (using a generator tensor).
On the way to unification in multidimensional systems, the author discovered and formalized the concept of tensor state space representation of certain multidimensional linear systems.
It is well accepted that the area of complex valued neural networks is a very promising research area. The author has proposed a novel activation function called the complex signum function. This function has enabled the proposal of a complex valued neural associative memory on the complex hypercube.
He has also proposed novel models of the neuron (such as the linear filter model of the synapse).
This book contains 10 chapters. The first chapter provides an introduction to the unified theory of control, communication and computation. Chapter 2 introduces a mathematical model of multidimensional neural networks.
Contents

PREFACE
1. INTRODUCTION
2. MULTI/INFINITE DIMENSIONAL NEURAL NETWORKS, MULTI/INFINITE DIMENSIONAL LOGIC THEORY
    2.1. INTRODUCTION
    2.2. MATHEMATICAL MODEL OF MULTIDIMENSIONAL NEURAL NETWORKS
    2.3. CONVERGENCE THEOREM FOR MULTIDIMENSIONAL NEURAL NETWORKS
    2.4. MULTIDIMENSIONAL LOGIC THEORY, LOGIC SYNTHESIS
    2.5. INFINITE DIMENSIONAL LOGIC THEORY: INFINITE DIMENSIONAL LOGIC SYNTHESIS
    2.6. NEURAL NETWORKS, LOGIC THEORIES, CONSTRAINED STATIC OPTIMIZATION
    2.7. CONCLUSIONS
3. MULTI/INFINITE DIMENSIONAL CODING THEORY: MULTI/INFINITE DIMENSIONAL NEURAL NETWORKS: CONSTRAINED STATIC OPTIMIZATION
    3.1. INTRODUCTION
    3.2. MULTIDIMENSIONAL NEURAL NETWORKS: MINIMUM CUT COMPUTATION IN THE CONNECTION STRUCTURE: GRAPHOID CODES
    3.3. MULTIDIMENSIONAL ERROR CORRECTING CODES: ASSOCIATED ENERGY FUNCTIONS: GENERALIZED NEURAL NETWORKS
    3.4. MULTIDIMENSIONAL ERROR CORRECTING CODES: RELATIONSHIP TO STABLE STATES OF ENERGY FUNCTIONS
    3.5. – 3.8.
4. TENSOR STATE SPACE REPRESENTATION: MULTIDIMENSIONAL SYSTEMS
    4.1. INTRODUCTION
    4.2. STATE OF THE ART IN MULTI/INFINITE DIMENSIONAL STATIC/DYNAMIC SYSTEM THEORY: REPRESENTATION BY TENSOR LINEAR OPERATOR
    4.3. STATE SPACE REPRESENTATION OF CERTAIN MULTI/INFINITE DIMENSIONAL DYNAMICAL SYSTEMS: TENSOR LINEAR OPERATOR
    4.4. MULTI/INFINITE DIMENSIONAL SYSTEM THEORY: LINEAR DYNAMICAL SYSTEMS: STATE SPACE REPRESENTATION BY TENSOR LINEAR OPERATORS
    4.5. – 4.7.
5.
    5.1. INTRODUCTION
    5.2. ONE DIMENSIONAL LOGIC FUNCTIONS, CODEWORD VECTORS, OPTIMAL CONTROL VECTORS: ONE DIMENSIONAL NEURAL NETWORKS
    5.3. OPTIMAL CONTROL TENSORS: MULTIDIMENSIONAL NEURAL NETWORKS
    5.4. MULTIDIMENSIONAL SYSTEMS: OPTIMAL CONTROL TENSORS, CODEWORD TENSORS AND SWITCHING FUNCTION TENSORS
    5.5. CONCLUSIONS
6.
    6.1. INTRODUCTION
    6.2. FEATURES OF THE PROPOSED MODEL
    6.3. CONVERGENCE THEOREMS
    6.4. CONCLUSIONS
7.
    7.1. INTRODUCTION
    7.2. OPTIMAL SIGNAL DESIGN PROBLEM: SOLUTION
    7.3. OPTIMAL FILTER DESIGN PROBLEM: SOLUTION (DUAL OF SIGNAL DESIGN PROBLEM)
    7.4. CONCLUSIONS
8.
    8.1. INTRODUCTION
    8.2. CONTINUOUS TIME PERCEPTRON AND GENERALIZATIONS
    8.3. ABSTRACT MATHEMATICAL STRUCTURE OF NEURONAL MODELS
    8.4. FINITE IMPULSE RESPONSE MODEL OF SYNAPSES: NEURAL NETWORKS
    8.5. – 8.8.
9.
    9.1. INTRODUCTION
    9.2. DISCRETE FOURIER TRANSFORM: SOME COMPLEX VALUED NEURAL NETWORKS
    9.3. COMPLEX VALUED PERCEPTRON
    9.4. NOVEL MODEL OF A NEURON: ASSOCIATED NEURAL NETWORKS
    9.5. CONTINUOUS TIME PERCEPTRON LEARNING LAW
    9.6. SOME IMPORTANT GENERALIZATIONS
    9.7. SOME OPEN QUESTIONS
    9.8. CONCLUSIONS
10.
    10.1. – 10.4.
INDEX
CHAPTER 1
Introduction
Ever since the dawn of civilization, the homo-sapien animal, unlike other lower level animals, has constantly created tools that enabled the community not only to take advantage of the physical universe but also to develop a better understanding of physical reality through the discovery of the underlying physical laws. The homo-sapien, like other lower level animals, had two primary necessities: metabolism and reproduction. But more important was the obsession with other developed necessities such as art, painting, music and sculpture. These necessities naturally led to the habit of concentration. This most important habit enabled him, in the most advanced civilizations, to develop abstract tools utilized to study nature. Thus the homo-sapien animal achieved the distinction of being a higher animal compared to the other animals in nature.
In ancient Greece, the homo-sapien civilization was highly advanced in many matters compared to all other civilizations. Such a lead was symbolized by the development of the subject of mathematics in various important stages. The most significant indication of such development is left to posterity in the form of the 13 books called Euclid's Elements. These books provide the first documented effort at the axiomatic development of a mathematical structure, namely Euclidean geometry. Also, the Greek and Babylonian civilizations made important strides in algebra: solving linear and quadratic equations and studying the quadratic homogeneous forms in two variables (for conic sections). Algebra was revived during the Renaissance in Italy, where the solution of cubic and quartic equations was carried out by the Italian algebraists. This constituted the intellectual and cultural heritage, along with religious and social traditions.
To satisfy the curiosity of observing the heavens, various star constellations and astronomical objects were classified. Astronomical observations were also made in navigating ships for battle as well as trade. These provided the first curious data related to the natural world. In an effort to understand the non-living material universe, homo-sapiens have devised various tools: measuring equipment, experimental equipment, mathematical procedures, mathematical tools etc.
With the discovery by Copernicus that the Sun is the center of our relative motion system, the Ptolemaic theory was permanently forsaken. It gave Galileo the curious motivation for deriving empirical laws of far flung significance in natural philosophy/natural science/physics. Kepler, after strenuous efforts, derived the laws of planetary motion, leading to some of the laws of Newton. Isaac Newton formalized the laws of Galileo by developing calculus. He also developed a theory of gravitation based on the empirical laws of Kepler. Michael Faraday derived the empirical laws of electric and magnetic phenomena. Though Newton's mechanical laws were successfully utilized to explain the heat phenomenon and the kinetic theory of gases as being due to the mechanical motion of molecules and atoms, they were inadequate for electrical phenomena. Maxwell formalized Faraday's laws of electro-magnetic induction, leading to his field equations. Later, physics developed at a feverish pace.
These results in physics were paralleled by developments in other related areas such as chemistry, biology etc. Thus, the early efforts of homo-sapiens matured into a clearer view of the non-living world. The above description summarizes the pre-20th century homo-sapien contributions to understanding the non-living material universe.
In making conclusive statements on the origin and evolution of physical reality, the developments of the 20th century are more important. In that endeavor, Einstein's general theory of relativity was one of the most important cornerstones of 20th century physics. It enabled him to develop a general, more correct theory of gravitation, outdating the Newtonian theory. It showed that gravitation is due to the curvature of the space-time continuum. The general theory of relativity also showed that all natural physical laws are invariant under non-linear transformations. This result was a significant improvement over the special theory of relativity, where he showed that all natural physical laws are invariant under linear Lorentz transformations. This result (in the special theory of relativity) was achieved when Einstein realized that, due to the finiteness of the velocity of light, one must discard the notions of absolute space and time. They must be replaced by the notion of the space-time continuum, i.e. space and time are not independent of one another, but are dependent. Thus, the special and general theories of relativity constrained the form of natural physical law.
In the 20th century, along with the theory of relativity, quantum mechanics was developed through the efforts of M. Planck, E. Schrodinger and W. Heisenberg. This theory showed that the electromagnetic field at the quantum level was quantized. This, along with the wave-particle duality of light, was considered irreconcilable with the general theory of relativity. To reconcile the general theory of relativity with various quantum theories, Y. Nambu proposed a string model for fundamental particles and formalized the dynamics of the string. Utilizing the experimentally verified quantum theories of chromodynamics, electrodynamics and the supersymmetry of fundamental particles (unifying Bosons and Fermions), it was possible to supersymmetrize the string model of fundamental particles, resulting in the so-called superstring (supersymmetric string) model.
Faraday discovered that a time varying electric field leads to a magnetic field, which can be capitalized upon for the motion of a neutral body. He also discovered that a time varying magnetic field leads to an electric field inside a neutral conductor, so that a flow of current takes place. These formed the basis of Fleming's left hand and right hand rules relating the relativistic effects between the electric field, the magnetic field and a conductor. These investigations of Faraday and other scientists naturally paved the way for electric circuits consisting of resistors, inductors and capacitors. Such initial efforts led to canonical circuits such as the RL circuit, RLC circuit, RC circuit etc. The systems of differential equations and their responses were computed utilizing analytical techniques. The ability to control the motion of an arbitrary neutral object led to applications of electrical circuits and their modifications for the control of trajectories of aircraft. Thus, automata which can perform CONTROL tasks were generated. These control automata were primarily based on electrical circuits and operate in continuous time, with the ability to make synchronization at discrete instants. Later, utilizing the Sampling Theorem, sampled-data control systems operating in discrete time were developed.
These one dimensional results are extended to multi/infinite dimensional linear systems. Also, the results developed in one dimension for the computation of optimal control are immediately extended to certain multi/infinite dimensional linear systems. This result, in association with the formalization of multi/infinite dimensional logic theory and multi/infinite dimensional coding theory (as an extension of one dimensional linear and non-linear codes), provided the formal UNIFIED THEORY in multi/infinite dimensional linear systems. The formal mathematical detail on models of living system functions is provided in Chapters 2 to 5. These chapters provide the details on control, communication and computation automata in multiple dimensions. Several generalized models of neural networks are discussed in Chapters 5 to 9. Also, the relationship between neural networks and optimal filters is discussed in Chapter 7. In Chapter 10, an advanced theory of evolution is discussed.
The organization and culture observed in other biological systems and other natural living systems are nowhere comparable to those observed in the homo-sapien species. But the author hypothesizes that this marginal/poor organization is primarily due to a lack of co-ordination, which in homo-sapiens is achieved through language. Thus, a major effort in organizing the lower level species of living systems is through teaching a language. Thus, the organization of living systems other than the homo-sapiens (for homo-sapien and other purposes) should be possible.
An important part of organizing the homo-sapiens was the educational system through an associated language. In the same spirit, by teaching some lower level animals to speak a certain language, they could be organized/educated to understand as well as develop science and technology. When the lower level animals are organized in a zoo through various methods, they could lead to a culture and a civilization.
Various natural living machines have developed organs/functional units due to evolutionary needs. These functional units essentially include sensors to collect video and audio information or, more generally, sensors to collect data on the surrounding environment in the universe. The data gathered by the living machine from the surrounding environment in physical reality is utilized to perform some primary functions such as metabolism, reproduction etc. The data is processed by various functional sub-units inside the brain of a living machine. Thus, the understanding of the operation of various functional sub-units in the brain of natural living machines leads to the building of artificial living machines which are far superior in functional capabilities.
CHAPTER 2
Multi/Infinite Dimensional Neural Networks, Multi/Infinite Dimensional Logic Theory
2.1 INTRODUCTION
One dimensional logic theory is concerned with the study of static/dynamic
transformations on one dimensional arrays of zeroes and ones to arrive at arrays of
zeroes and ones. Various standard logic gates such as AND, OR, NOT, NAND, XOR,
NOR are defined on one dimensional arrays/vectors. The logic synthesis of digital
integrated circuits, consisting of the interconnection of logic gates which transit through
a set of states, is performed through the utilization of the associated state transition
diagram. The set of allowed transitions in the state space leads to various classes of
digital circuits such as shift registers, counters, flip flops etc. In one dimensional logic
theory various theorems on the decomposition, synthesis of Boolean functions are
proved and are utilized in the logic synthesis of complex digital integrated circuits. In
the practical implementation of such digital integrated circuits, semiconductor
technology with devices such as diodes, transistors, field effect transistors was
effectively utilized.
The design and implementation of complex digital integrated circuits led to the
development of highly sophisticated computers, computer systems serving various practical
applications. Some practical applications such as those in medical imaging, remote sensing,
pattern recognition led to the design and implementation of various types of parallel
computers. These computers operate on two dimensional arrays of zeroes and ones. But the
processing units in these computers treat the two dimensional array elements as those from
one dimensional arrays. Thus, the two dimensional nature of an array with a dependency structure is never capitalized upon. This limitation led the author to innovate information processing units which operate on two/multidimensional arrays. Such information processing units should necessarily be based on sub-units which operate on arrays of binary data and produce binary arrays. These sub-units constitute the two/multidimensional logic circuits. A more general class of information processing sub-units (and thus of units) operates on arrays whose entries are allowed to assume multiple (not necessarily binary) values.
Automata which operate on multidimensional arrays to perform a desired operation can be defined heuristically in many ways. In some applications, such as 3-d array/image processing, the information processing operation can only be defined heuristically
based on the required function. But, a more organized approach to define multidimensional
logic functions is discovered and formalized by the author. In this chapter, the author
describes the mathematical formalization for multidimensional logic units. The relationship
between multidimensional logic units and multidimensional neural networks is also
discussed. The generalization of the results to infinite dimensions is also briefly described.
Two dimensional neural networks were utilized by various researchers working in the
area of neural networks. The application of two dimensional neural networks to various
real world problems was also extensively studied. But, an effective mathematical abstraction
for modeling two/multi/infinite dimensional neural networks was lacking. The author in
this chapter demonstrates that tensors provide a mathematical abstraction to model multi/
infinite dimensional neural networks.
The contents of this chapter are summarized as follows:
A mathematical model of an arbitrary multidimensional neural network is developed. A
convergence theorem for an arbitrary multidimensional neural network represented by a
fully symmetric tensor is stated and proved. The input and output signal states of a
multidimensional logic gate/neural network are related through an energy function,
defined over the fully symmetric tensor representing the multidimensional logic gate, such
that the minimum/maximum energy states correspond to the output states of the logic
gate realizing a logic function. Similarly, a logic circuit consisting of the interconnection of
logic gates, represented by a symmetric tensor, is associated with a quadratic/higher degree
energy function. Multidimensional logic synthesis is described. Infinite dimensional logic
theory, logic synthesis are briefly discussed through the utilization of infinite dimension/
order tensors.
This chapter is organized as follows. In section 2, a mathematical model of an arbitrary
multidimensional neural network and associated terminology is developed. In section 3, a
convergence theorem for an arbitrary multidimensional neural network is proved. In section
4, the input/stable states of a multidimensional neural network are associated with the
input/output signal states of a multidimensional logic gate. A mathematical model of an
arbitrary multidimensional logic gate/circuit is described. Thus, multidimensional logic
theory, logic synthesis is formalized. In section 5, infinite dimensional logic theory, logic
synthesis are described. In section 6, the relationship between multidimensional neural
networks, multidimensional logic theories, various constrained static optimization problems
is elaborated. Various constrained optimization problems that commonly arise in various
problems are listed. Various innovative ideas in multidimensional neural networks are
briefly described. The chapter concludes with a set of conclusions.
Given a vector $C$ of dimension $m$, the expression

$\sum_{i=1}^{m} C_i X_i \qquad (2.1)$

is called a homogeneous form of degree one; given a second order tensor $C$,

$\sum_{i=1}^{m} \sum_{j=1}^{m} C_{ij} X_i X_j \qquad (2.2)$

is called a homogeneous form of degree two; and

$\sum_{i=1}^{m} \sum_{j=1}^{m} \sum_{k=1}^{m} C_{ijk} X_i X_j X_k \qquad (2.3)$

is called a homogeneous form (BoT) of degree three, and so on. Given the components of a tensor of order $n$ and dimension $m$, it is possible to define a homogeneous form of degree $n$.
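For concreteness, such a form can be evaluated numerically by contracting the tensor against the variable vector once per index. The following minimal Python/NumPy sketch (an illustrative aid with arbitrarily chosen sizes, not part of the original development) evaluates the forms (2.1)-(2.3):

    import numpy as np

    def homogeneous_form(C, X):
        # Evaluate sum over i1..in of C[i1,...,in] * X[i1] * ... * X[in]
        # for a tensor C of order n and dimension m, and a vector X of length m.
        value = C
        for _ in range(C.ndim):
            value = np.tensordot(value, X, axes=([0], [0]))  # contract one index
        return value  # a scalar

    m = 3
    X = np.array([1.0, -1.0, 1.0])
    C1 = np.random.rand(m)          # order 1: linear form (2.1)
    C2 = np.random.rand(m, m)       # order 2: quadratic form (2.2)
    C3 = np.random.rand(m, m, m)    # order 3: cubic form (2.3)
    print(homogeneous_form(C1, X), homogeneous_form(C2, X), homogeneous_form(C3, X))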
The connection structure of a one dimensional neural network, the symmetric matrix,
is naturally associated with a homogeneous quadratic form as the energy function, which
is optimized over the one dimensional hypercube. Thus, in one dimension, to utilize a
homogeneous form of degree n as the energy function, a generalized neural network is
employed, in which, at each neuron, an arbitrary algebraic threshold function is computed.
But, in multidimensions, to describe the connection structure of a neural network, a tensor
is necessarily utilized.
Suppose one second order tensor is a linear function of another second order tensor:

$A_{ik} = \lambda_{iklm}\, B_{lm} \qquad (2.4)$

(with summation over the repeated indices $l$, $m$), where $\lambda_{iklm}$ is a set of $k^4$ coefficients. It is easy to see that $\lambda_{iklm}$ is a tensor of dimension $k$ and order 4. This is illustrative of the linear transformation of tensors.
Now, we discuss some concepts in the multiplication of tensors.
Let $A_{ik}$ and $B_{lm}$ be the components of two second order tensors. Consider all possible products of the form

$C_{iklm} = A_{ik}\, B_{lm} \qquad (2.5)$

Then the numbers $C_{iklm}$ are the components of a fourth-order tensor, called the outer product of the tensors with components $A_{ik}$ and $B_{lm}$.
Multiplication of any number of tensors of arbitrary order is defined similarly (BoT),
i.e. the product of two or more tensors is the product of the components of the tensors,
which are factors. The order of a tensor product is clearly the sum of the orders of the
factors.
Contraction of Tensors: The operation of summing a tensor of order $n$ ($n \ge 2$) over two of its indices is called contraction. It is clear that contraction of a tensor of order $n$ leads to a tensor of order $n-2$. This tensor can be repeatedly contracted to arrive at a tensor of order 1 or a scalar, depending on whether $n$ is odd or even.
The result of multiplying two or more tensors and then contracting the product with
respect to indices belonging to different factors is often called an Inner Product of the
given tensors.
Thus, based on the notation associated with the indices, it is understood from the context
whether inner product or outer product of tensors is utilized.
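These operations are directly expressible in index notation on a computer. The following minimal NumPy sketch (again an illustrative aid with arbitrary sizes, not from the original text) exercises the outer product (2.5), contraction, the resulting inner product, and the linear transformation (2.4):

    import numpy as np

    m = 3
    A = np.random.rand(m, m)   # second order tensor A_ik
    B = np.random.rand(m, m)   # second order tensor B_lm

    # Outer product (2.5): C_iklm = A_ik B_lm, a fourth order tensor.
    C = np.einsum('ik,lm->iklm', A, B)

    # Contracting the outer product over indices belonging to different
    # factors yields the inner product; here it reduces to the matrix product.
    inner = np.einsum('ikkm->im', C)
    assert np.allclose(inner, A @ B)

    # Linear transformation of tensors (2.4): A_ik = lambda_iklm B_lm.
    lam = np.random.rand(m, m, m, m)   # order 4 coefficient tensor
    A2 = np.einsum('iklm,lm->ik', lam, B)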
With the above requisite notation from tensor algebra summarized, before describing
a mathematical model of an arbitrary multidimensional neural network, the following
intuitive discussion is provided to facilitate easier understanding.
The state of a neuron at the discrete time instant $n+1$ is computed by summing the contributions from the other neurons connected to it, through the synaptic weights which are the components of a fully symmetric tensor $S$ representing the connection structure, and the state tensor of neuronal states at the time instant $n$. Thus, we first compute the outer product of the connection tensor and the state tensor of neurons at the time instant $n$, and perform the contraction over all the indices (representing the neurons) connected to a chosen neuron. This inner product operation, followed by determining its sign/parity/polarity (positive or negative value), gives us the state tensor at time instant $n+1$. This procedure is repeated at all the neurons where the state is updated.
Remark
Throughout this chapter, the notation multidimensional neural network is utilized. The standard notation associated with tensors utilizes the term dimension to represent the number of values an independent variable can assume, and the term order to represent the number of independent variables. Thus, the state tensor order represents the number of independent dimensions in the multidimensional neural network, MN. The notational confusion between the usage of the terms order and dimension should be resolved from the context.
A multidimensional neural network MN is uniquely defined by the pair $(S, T)$, where $S$ is a fully symmetric tensor of dimension $m$ and order $2n$, i.e.

$S_{i_1, i_2, \ldots, i_n ;\, j_1, j_2, \ldots, j_n} = S_{j_1, j_2, \ldots, j_n ;\, i_1, i_2, \ldots, i_n} \qquad (2.6)$

for all $\{i_1, i_2, \ldots, i_n\}$, $\{j_1, j_2, \ldots, j_n\}$. This captures the intuitive notion that the multidimensional neural network has nodes which correspond to the multidimensional neurons. The connectionist structure of the network, in the fully connected case, has a synaptic connection from every neuron to every other neuron and thus specifies the number of order indices/dimensions/variables of the fully symmetric tensor. Furthermore, it is fully symmetric since there is a link between any two nodes and the weight attached to the link is the same in both directions.

$T$ is a tensor compatible with $S$ such that each component is the threshold at the node $(i_1, i_2, \ldots, i_n)$ of the multidimensional neural network.

Every node (multidimensional neuron) can be in one of the two possible states, either $+1$ or $-1$. The state of the node $(i_1, i_2, \ldots, i_n)$ at time $t$ is denoted by $X_{i_1, i_2, \ldots, i_n}(t)$. The state of MN at time $t$ is the tensor $X(t)$, of dimension $m$ and order $n$. The state evolution at node $(i_1, i_2, \ldots, i_n)$ is computed by
$X_{i_1, \ldots, i_n}(t+1) = \begin{cases} +1, & \text{if } H_{i_1, \ldots, i_n}(t) \ge 0 \\ -1, & \text{otherwise} \end{cases} \qquad (2.7)$

where

$H_{i_1, \ldots, i_n}(t) = \sum_{j_1=1}^{m} \cdots \sum_{j_n=1}^{m} S_{i_1, \ldots, i_n ;\, j_1, \ldots, j_n}\, X_{j_1, \ldots, j_n}(t) - T_{i_1, \ldots, i_n} \qquad (2.8)$
The next state of the network, $X_{i_1, \ldots, i_n}(t+1)$, is computed from the current state by performing the evaluation (2.7) at a subset of the nodes of the multidimensional neural network, to be denoted by G. The modes of operation of the network are determined by the method by which the subset G is selected in each time interval. If the computation is performed at a single node in any time interval, i.e. $|G| = 1$, then we will say that the network is operating in a serial mode, and if $|G| = m^n$, then we will say that the network is operating in a fully parallel mode. All other cases, i.e. $1 < |G| < m^n$, will be called parallel modes of operation. Unlike a one dimensional neural network, a multidimensional neural network lends itself to various parallel modes of operation. It is
possible to choose G to be the set of neurons placed in each independent dimension or a
union of such sets. The set G can be chosen at random or according to some deterministic
rule. A state of the network is called stable if and only if

$X(t) = \operatorname{Sign}\,(S \odot X(t) - T) \qquad (2.9)$

where $\odot$ denotes the inner product, i.e. the outer product followed by contraction over the appropriate indices. Once the network reaches such a state, there is no further change in the state of the network no matter what the mode of operation is.
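As an illustrative aid (a minimal sketch with arbitrarily chosen sizes and random weights, not part of the original development), the following Python/NumPy lines implement the serial mode update (2.7)-(2.8) for a small network with $m = 2$, $n = 2$:

    import numpy as np

    m, n = 2, 2
    rng = np.random.default_rng(0)

    # Connection structure: fully symmetric tensor S of order 2n and dimension m.
    S = rng.normal(size=(m,) * (2 * n))
    S = (S + S.transpose(2, 3, 0, 1)) / 2          # S_{i1 i2; j1 j2} = S_{j1 j2; i1 i2}
    for idx in np.ndindex((m,) * n):
        S[idx + idx] = abs(S[idx + idx])           # S_{i1 i2; i1 i2} >= 0 (cf. Theorem 2.4)

    T = np.zeros((m,) * n)                         # thresholds, taken to be zero
    X = rng.choice([-1.0, 1.0], size=(m,) * n)     # {+1, -1} state tensor

    def local_field(S, X, T):
        # H_{i1 i2}(t) = sum_{j1 j2} S_{i1 i2; j1 j2} X_{j1 j2}(t) - T_{i1 i2}   (2.8)
        return np.einsum('ijkl,kl->ij', S, X) - T

    # Serial mode: the state of one chosen node is updated per interval (|G| = 1).
    node = (0, 1)
    H = local_field(S, X, T)
    X[node] = 1.0 if H[node] >= 0 else -1.0        # sign rule (2.7)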
In the serial mode of operation, at the chosen neuron, the synaptic contribution from all the neurons is first determined, and its sign is then determined to arrive at the updated state of the neuron. Mathematically, this is achieved by computing the outer product of the fully symmetric tensor $S$ and the $\{+1, -1\}$ state tensor of the multidimensional neural network. In tensor notation, this is specified by
$(S \otimes X(t))_{i_1, \ldots, i_n ;\, j_1, \ldots, j_n} = S_{i_1, \ldots, i_n ;\, j_1, \ldots, j_n}\, X_{j_1, \ldots, j_n}(t) \qquad (2.10)$
The total synaptic contribution at any neuron located at the location (i1, i2,..., in) is
determined by contracting the above outer product over all the indices {j1, j2,..., jn} i.e.
over all the neurons connected to it through the synaptic weights determined by the
components of the fully symmetric tensor S. The resultant scalar synaptic contribution
at any neuron (i1, i2,..., in) is thus determined by the inner product operation. The sign of
the resulting scalar constitutes the updated state of the neuron. Thus, the state of any neuron
(i1, i2,..., in) in the multidimensional neural network in the serial mode of operation is
given by
$X_{i_1, \ldots, i_n}(k+1) = \operatorname{Sign}\left(\sum_{j_1=1}^{m} \cdots \sum_{j_n=1}^{m} S_{i_1, \ldots, i_n ;\, j_1, \ldots, j_n}\, X_{j_1, \ldots, j_n}(k) - T_{i_1, \ldots, i_n}\right) \qquad (2.11)$

i.e., in compact notation,

$X(k+1) = \operatorname{Sign}\,(S \odot X(k) - T) \qquad (2.12)$

where $\odot$ is utilized as the symbol to denote the inner product between compatible tensors. This symbol is sometimes suppressed, and it should be understood from the context whether the inner product or the outer product of the tensors is meant.
With the state updating scheme in the tensor notation specified, the energy function
that is optimized in the network MN is described. It is given by
$E_1(t) = \sum_{i_1=1}^{m} \cdots \sum_{i_n=1}^{m} \sum_{j_1=1}^{m} \cdots \sum_{j_n=1}^{m} S_{i_1, \ldots, i_n ;\, j_1, \ldots, j_n}\, X_{i_1, \ldots, i_n}(t)\, X_{j_1, \ldots, j_n}(t) = \langle X(t),\, S \odot X(t) \rangle \qquad (2.13)$
where < > denotes the inner product operator between the compatible tensors. It is
assumed in the above specification of the energy function of the neural network MN that
the threshold at each neuron is zero. This is no loss of generality, since by augmenting
the tensor S and the state tensor, the threshold values can be forced to be zero. It is easy
to see that such a thing can always be done by considering a one dimensional neural
network in which the threshold at each neuron is non-zero and arriving at a network in
which the threshold at each neuron can be made zero by augmenting the state vector as
well as the connection matrix.
Utilizing the definition of the above energy function of the network, let $\Delta E = E_1(t+1) - E_1(t)$ (the discrete time index $t$ is used instead of $k$) be the difference in the energy associated with two consecutive states (transited in the serial mode of operation of the multidimensional neural network), and let $\Delta X_{i_1, \ldots, i_n}$ denote the difference between the next state and the current state of the node at location $(i_1, i_2, \ldots, i_n)$ at some arbitrary time $t$. Clearly,
$\Delta E = \Delta X_{i_1, \ldots, i_n} \sum_{j_1=1}^{m} \cdots \sum_{j_n=1}^{m} \left( S_{i_1, \ldots, i_n ;\, j_1, \ldots, j_n} + S_{j_1, \ldots, j_n ;\, i_1, \ldots, i_n} \right) X_{j_1, \ldots, j_n}(t) + S_{i_1, \ldots, i_n ;\, i_1, \ldots, i_n}\, (\Delta X_{i_1, \ldots, i_n})^2 \qquad (2.14)$

with

$\Delta X_{i_1, \ldots, i_n} \in \{0, +2, -2\} \qquad (2.15)$

Utilizing the fact that $S$ is fully symmetric and the definition of $H_{i_1, \ldots, i_n}(t)$, it follows that

$\Delta E = 2\, \Delta X_{i_1, \ldots, i_n}\, H_{i_1, \ldots, i_n}(t) + S_{i_1, \ldots, i_n ;\, i_1, \ldots, i_n}\, (\Delta X_{i_1, \ldots, i_n})^2 \qquad (2.16)$

Hence, since $\Delta X_{i_1, \ldots, i_n}\, H_{i_1, \ldots, i_n} \ge 0$ and $S_{i_1, \ldots, i_n ;\, i_1, \ldots, i_n} \ge 0$, it follows that at every time instant $\Delta E \ge 0$. Thus, since the energy $E$ is bounded from above by the appropriate norm of $S$, the value of the energy will converge. It is now proved in the following that convergence of the energy implies convergence to a stable state.
Once the energy in the network has converged, it is clear from the following facts that the network will reach a stable state after at most $m^{2n}$ time intervals:
(a) if $\Delta X = 0$, then it follows that $\Delta E = 0$;
(b) if $\Delta X \ne 0$, then $\Delta E = 0$ only if the change in $X_{i_1, \ldots, i_n}(t)$ is from $-1$ to $+1$, with $H_{i_1, \ldots, i_n} = 0$.
In the fully parallel mode of operation of the network MN, the state updating scheme for the state tensor of MN is given by

$X(t+1) = \operatorname{Sign}\,(S \odot X(t) - T) \qquad (2.17)$

evaluated at all the nodes simultaneously, where $\odot$ denotes the inner product between compatible tensors. Since the serial mode proof shows that a stable state is always reached with the above stated updating scheme, it is immediate that by pairwise flipping of the values of any two dimension variables in the state tensor, the same energy function value is attained. This, in turn, implies that in the parallel mode of operation of a multidimensional neural network, either a stable state is reached or a cycle of length at most 2 is reached (the two state tensors lead to the same value of the energy function). This approach to the proof for the parallel mode of operation follows the one provided in reference (BrG). Q. E. D.
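Continuing the earlier sketch (still only an illustration of the theorem under the same assumptions, not a proof), one can verify numerically that the quadratic energy (2.13) is non-decreasing under serial updates, so that repeated sweeps of the nodes reach a stable state (2.9):

    def energy(S, X):
        # E(t) = sum S_{i1 i2; j1 j2} X_{i1 i2} X_{j1 j2}   (cf. (2.13), thresholds zero)
        return np.einsum('ijkl,ij,kl->', S, X, X)

    for sweep in range(100):
        changed = False
        for node in np.ndindex(X.shape):             # serial mode: one node at a time
            e_before = energy(S, X)
            H = local_field(S, X, T)
            new_state = 1.0 if H[node] >= 0 else -1.0
            if new_state != X[node]:
                X[node] = new_state
                changed = True
                assert energy(S, X) >= e_before      # Delta E >= 0 at every update
        if not changed:                              # stable state (2.9) reached
            break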
The proof of the above theorem follows from the convergence theorem and is omitted for brevity.
The classification of multidimensional logic circuits is based on the type of transitions allowed between the states in the multidimensional state space. The types of state transition fall into the following classes:
(a) whether the next state reached depends on the past state only or not, as in one
dimensional logic synthesis,
(b) the type of neighbourhood of states about the current state on which the next state reached depends. The types of neighbourhood about the current state are classified into a few classes. These classes are similar to those utilized in the theory of random fields and multidimensional image processing,
(c) the classification of trajectories transited by the multidimensional neural network
or a local optimum computing circuit/scheme.
In the above discussion, we considered quadratic forms as the energy functions
(motivated by the simplest possible neural network model) optimized by the logic gates,
which when connected together lead to logic circuits. This approach toward
multidimensional logic theory motivates the definition of more general switching/logic functions as the local optima of higher degree forms over various subsets of the multidimensional lattice (hypercube, bounded lattice etc.).
Definition 2.2
A generalized logic function (representing a generalized logic gate or a generalized logic circuit) is defined as a mapping between an m-dimensional input array and a local optimum of a tensor based form of degree greater than or equal to two, over various subsets of the multidimensional lattice (the multidimensional hypercube, the multidimensional bounded lattice). These local optima of a higher degree form (based on a tensor) are realized through the stable states of a generalized multidimensional neural network.
In (Rama 3), it is shown that the strictly generalized logic function defined above has better properties than the ordinary logic function described in Definition 2.1. The generalized
logic function is related to a multidimensional encoder utilized for communication through
multidimensional channels.
Now, with the generalized multidimensional logic gate defined above, logic synthesis with these types of logic gates involves their interconnection in a certain topology. This ordinary and generalized approach to multidimensional logic gate definition and logic synthesis is depicted in Figures 2.1 to 2.3. Detailed documentation on logic synthesis and the design of future information processing machines is being pursued.
Proof: A one dimensional neural network with a state vector of infinite size is uniquely defined by $(S, T)$, where $S$ is an infinite dimensional (rows as well as columns) symmetric matrix and $T$ is an infinite dimensional vector of thresholds at all the neurons. The state of the neural network at time $t$ is a vector whose components are $+1$ and $-1$. The next state of a node is computed by

$X_i(t+1) = \begin{cases} +1, & \text{if } H_i(t) \ge 0 \\ -1, & \text{otherwise} \end{cases} \qquad (2.18)$

where

$H_i(t) = \sum_{j=1}^{\infty} S_{ji}\, X_j(t) - T_i \qquad (2.19)$

The entries of $S$ are such that the infinite sum in the above expression converges.
The next state of the network, i.e. $X(t+1)$, is computed from the current state by performing the evaluation (2.18) at a subset of the nodes of the network, to be denoted by K. The mode of operation of the network is determined by the method by which the set K is selected at each time interval, i.e. if $|K| = 1$, then we will say that the network is operating in a serial mode. Without loss of generality, $T = 0$.
In the following, we consider the serial mode of operation. We argue that, with the above stated updating scheme at an arbitrarily chosen neuron, the (quadratic) energy function increases.
$E(k) = \sum_{i=1}^{\infty} \sum_{j=1}^{\infty} S_{ij}\, X_i(k)\, X_j(k) \qquad (2.20)$

Without loss of generality, consider the case where all the thresholds are set to zero. It is easy to see (set the last component of the state vector to 1 and appropriately augment the entries of $S$) that for any finite $L$, we have

$\sum_{i=1}^{L} \sum_{j=1}^{L} S_{ij}\, X_i(k)\, X_j(k) \;\le\; \sum_{i=1}^{L} \sum_{j=1}^{L} S_{ij}\, X_i(k+1)\, X_j(k+1) \qquad (2.21)$

by the convergence theorem for one dimensional neural networks of order $L$, for any arbitrary $L$. Now let $L$ tend to infinity. Hence

$\sum_{i=1}^{\infty} \sum_{j=1}^{\infty} S_{ij}\, X_i(k)\, X_j(k) \;\le\; \sum_{i=1}^{\infty} \sum_{j=1}^{\infty} S_{ij}\, X_i(k+1)\, X_j(k+1) \qquad (2.22)$
When the infinite dimensional vector is state updated in the parallel mode, for every finite segment of it, either there is convergence or a cycle of length 2 (at most two vectors for which the energy values are the same) exists. Since the energy function associated with the infinite dimensional vector is the limit of those associated with the finite segments, it is evident that the scalar energy values converge or a cycle of length at most two exists.
Q.E.D.
Now we briefly discuss the other infinite dimensional neural networks, of dimension infinity and order finite/infinite (modeling tensor variables).
The following lemma is well known from the set theory.
Lemma 2.1: Countable union of countable sets is countable.
The above lemma implies that the convergence theorem proved above in association
with the convergence theorem for multidimensional neural networks (its proof argument
in section 3) provides us with the convergence proof for a large class of infinite dimensional
neural networks (dimension and/or order of tensors utilized in modeling is infinity). Details
on the convergence theorem for infinite dimensional neural networks are provided below.
The tensors utilized to represent the connection structure and the state of the neurons of an infinite dimensional neural network are such that either the dimension or the order or both are infinite (i.e. not both of them are finite).
In one dimension, when the number of neurons is infinite and a quadratic energy function
is optimized through a neural network scheme, by a straightforward extension of the results
in (Rama 3), the stable states of the neural network constitute a graph-theoretic code (with
the length of the codeword being infinite). The set over which optimization is carried out is
the unbounded unit hypercube (countable number of entries in the infinite dimensional
state vector), a subset of the lattice ( based on one independent variable ).
The following theorem is concerned with the points on the lattice in multi/infinite dimensions.
This theorem is the infinite dimensional extension of the result proved in section 3.
Theorem 2.4: Let MN = (S, T) be an infinite dimensional neural network of order $n$/infinite and dimension infinity (number of neurons in each dimension). $S$ is a fully symmetric tensor of dimension infinity and order $2n$/infinite, with $S_{i_1, \ldots, i_n ;\, i_1, \ldots, i_n} \ge 0$. The network MN always converges to a stable state while operating in a serial mode (i.e., there are no cycles in the state space), while in the parallel mode, the network will always converge to a stable state or to a cycle of length 2 (i.e., the cycles in the state space are of length at most 2).
Proof: For a multidimensional neural network modeled by a tensor of dimension and
order finite, in the serial mode of operation, the network always converges to a stable
state. Since the quadratic energy function is a scalar value defined over the connection tensor (whose order and dimension are finite), by letting the dimension and/or order tend to
infinity in (2.13), it is immediate that the energy function value increases in the serial mode
until a stable state is reached, starting from a certain initial state. Thus, for the various infinite dimensional neural networks considered, convergence to a stable state in the serial mode of operation is ensured (i.e. there are no cycles in the state space).
In the parallel mode of operation of the infinite dimensional neural network, by the same reasoning as in Theorem 2.1, the network will always converge to a stable state or to a cycle of length 2 (i.e. the cycles in the state space are of length less than or equal to 2).
Q.E.D
As in the case of multidimensional logic theory, the above convergence theorem is
utilized as the basis to describe infinite dimensional logic theory as well as logic
synthesis. It should be noted that the infinite dimensional logic synthesis only has
theoretical importance. A brief discussion of the infinite dimensional versions is provided for the sake of completeness.
Definition 2.3
An infinite dimensional logic function, realized through an infinite dimensional logic gate (with inputs and outputs), is defined to be the local minimum/maximum of the energy function of an associated infinite dimensional neural network. Equivalently, the local optima of the energy function of an infinite dimensional neural network correspond to the logic functions that are realized through various logic gates.
With the above definition of infinite dimensional logic function, detailed results in infinite dimensional logic synthesis are being developed along the lines of those in multidimensional logic synthesis. A brief description is provided in the following for the sake of completeness.
An infinite dimensional logic circuit consists of an arbitrary interconnection of infinite dimensional logic gates. Infinite dimensional logic synthesis, as in one dimension, involves synthesizing logic circuits for different purposes. These infinite dimensional logic circuits have only theoretical importance. Infinite dimensional logic synthesis depends on how the infinite dimensional logic gates are connected to one another. The structure of interconnection determines the structure of the symmetric tensor (whose order and/or dimension is infinite) representing the infinite dimensional logic circuit.
In a technical memorandum, the author for the first time associated energy functions with the continuous time state updating scheme. The multidimensional versions of these continuous time neural networks are discussed in (Rama 4).
2.7 CONCLUSIONS
A mathematical model of an arbitrary multidimensional neural network is described. This
model is utilized to prove the convergence theorem for multidimensional neural networks.
Utilizing the convergence theorem, multidimensional logic functions are defined and
multidimensional logic synthesis is discussed. Infinite dimensional logic synthesis is briefly
described. Various constrained static optimization problems of utility in control,
communication, computation and other applications are summarized. Several innovative
themes on one/multidimensional neural networks are summarized.
REFERENCES
(BoT) A. I. Borisenko and I. E. Tarapov, Vector and Tensor Analysis with Applications, Dover Publications Inc., New York.
(BrG) J. Bruck and J. W. Goodman, A Generalized Convergence Theorem for Neural Networks, IEEE Transactions on Information Theory, Vol. 34, No. 5, September 1988.
(CAB) S. T. Chakradhar, V. D. Agrawal and M. L. Bushnell, Neural Models and Algorithms for Digital Testing, Kluwer Academic Publishers.
(HoT) J. J. Hopfield and D. W. Tank, Neural Computation of Decisions in Optimization Problems, Biological Cybernetics, Vol. 52, pp. 141-152, 1985.
(Rama 1) Garimella Rama Murthy, Multi/Infinite Dimensional Logic Synthesis, manuscript in preparation.
(Rama 2) Garimella Rama Murthy, Unified Theory of Control, Communication and Computation - Part 1, manuscript to be submitted to IEEE Proceedings.
(Rama 3) Garimella Rama Murthy, Multi/Infinite Dimensional Coding Theory: Multi/Infinite Dimensional Neural Networks: Constrained Static Optimization, Proceedings of the 2002 IEEE Information Theory Workshop, October 2002.
(Rama 4) Garimella Rama Murthy, Optimal Control, Codeword, Logic Function Tensors: Multidimensional Neural Networks, International Journal of Systemics, Cybernetics and Informatics, October 2006, pp. 9-17.
(Rama 5) Garimella Rama Murthy, Signal Design for Magnetic and Optical Recording Channels: Spectra of Bounded Functions, Bellcore Technical Memorandum, TM-NWT-018026.
CHAPTER 3
Multi/Infinite Dimensional Coding Theory: Multi/Infinite Dimensional Neural Networks: Constrained Static Optimization
3.1. INTRODUCTION
In recent years, technological developments in parallel data transfer mechanisms have led to HIPPI (high performance parallel interface), SMDS (switched multi-megabit data service) and FDDI (fiber distributed data interface). To match these high speed parallel data transfer mechanisms, multidimensional coding theory originated, and some ad hoc procedures were developed for designing linear as well as non-linear codes.
Multidimensional codes are utilized to encode arrays of symbols for transmission over
a multidimensional communication channel. Thus, the central objective in multidimensional
coding theory is to design codes that can correct many errors and whose encoding/decoding
procedures are computationally efficient. A multidimensional error correcting code can
be described by an energy landscape, with the peaks of the landscape being the codewords.
The decoding of a corrupted codeword (array) which is a point in the energy landscape
that is not a peak is equivalent to looking for the closest peak in the energy landscape. An
alternative way to describe the problem is to design a constellation which consists of a set of
points on a multidimensional lattice that are enclosed within a finite region, in such a way
that a certain optimization constraint is satisfied.
Neural network models, simulated annealing and relaxation techniques are some of the various computation models (based on optimization) that have been attracting much interest, because they seem to have properties similar to those of biological and physical systems. The standard computation performed in a neural network is the optimization of an energy function. The state space of a neuro-dynamical system can be described by the topography defined by the energy function associated with the network. The connection structure of a neural network can either be distributed on a plane or in multidimensions (Rama 2).
Thus, the field of multidimensional neural network theory and the field of multidimensional coding theory are linked through the common thread of the optimization of energy functions.
Every node (multidimensional neuron) can be in one of the two possible states, $+1$ and $-1$. The state of the node $(i_1, i_2, \ldots, i_n)$ at time $t$ is denoted by $X_{i_1, i_2, \ldots, i_n}(t)$. The state of MN at time $t$ is the tensor $X(t)$, of dimension $m$ and order $n$. The state evolution at node $(i_1, i_2, \ldots, i_n)$ is computed by
$X_{i_1, \ldots, i_n}(t+1) = \operatorname{Sign}\,(H_{i_1, \ldots, i_n}(t)) \qquad (3.1)$

where

$H_{i_1, \ldots, i_n}(t) = \sum_{j_1=1}^{m} \cdots \sum_{j_n=1}^{m} S_{i_1, \ldots, i_n ;\, j_1, \ldots, j_n}\, X_{j_1, \ldots, j_n}(t) - T_{i_1, \ldots, i_n}$
The next state of the network, i.e. $X_{i_1, \ldots, i_n}(t+1)$, is computed from the current state by performing the evaluation (3.1) at a subset of the nodes of the multidimensional neural network, to be denoted by G. The modes of operation are determined by the method by which the subset G is selected in each time interval. If the computation (3.1) is performed at a single node in any time interval, i.e. $|G| = 1$, then we will say that the network is operating in the serial mode, and if $|G| = m^n$, then we will say that the network is operating in the fully parallel mode. A state is called stable if and only if

$X(t) = \operatorname{Sign}\,(S \odot X(t) - T) \qquad (3.2)$

where $\odot$ denotes the inner product (the symbol is sometimes suppressed for notational brevity). Once a neural network reaches such a state, there is no change in the state of the network no matter what the mode of operation is.
An important feature of the network MN is the convergence theorem stated below.
Theorem 3.1: Let MN = (S, T) be a multidimensional neural network of dimension $m$ and order $n$, where $S$ is a fully symmetric tensor of order $2n$ and dimension $m$. The network MN always converges to a stable state while operating in the serial mode (i.e. there are no cycles in the state space), and to a cycle of length at most 2 while operating in the fully parallel mode (i.e. the cycles in the state space are of length at most 2).
This theorem is proved in (Rama 2). It suggests the utilization of MN as a device for performing a local search for the optimum of an energy function. In the following, we formulate a problem that is equivalent to determining the global maximum of an energy function and show how to map it onto a multidimensional neural network.
Definition 3.1
Let G = (V, E) be a weighted and undirected non-planar graph in multidimensions, where V denotes the set of nodes of G and E denotes the set of edges of G. Let K be the fully symmetric tensor whose components are the weights of the edges of G.
Let $V_1$ be a subset of V, and let $\overline{V}_1 = V - V_1$. The set of edges each of which is incident at a node in $V_1$ and at a node in $\overline{V}_1$ is called a cut in G.
Definition 3.2
The weight of a cut is the sum of its edge weights. A minimum cut (MC) of a non-planar
graph/graphoid is a cut with minimum weight.
In the following, we show the equivalence between the minimum cut problem in a graphoid (from now onwards, we also call the connection structure of a multidimensional neural network a graphoid) and the problem of maximizing the quadratic form as the energy function of a multidimensional neural network. Every non-planar graph, including the connection structure of a multidimensional neural network, is a graphoid (by definition).
Theorem 3.2: Let MN = (S, T) be a multidimensional neural network with all the thresholds
being zero i.e. T = 0. The problem of finding a state V for which the quadratic energy
function E is maximum is equivalent to finding a minimum cut in the graphoid
corresponding to MN.
Proof: Since T = 0, the energy function is given by

$E = \sum_{i_1=1}^{m} \cdots \sum_{i_n=1}^{m} \sum_{j_1=1}^{m} \cdots \sum_{j_n=1}^{m} S_{i_1, \ldots, i_n ;\, j_1, \ldots, j_n}\, X_{i_1, \ldots, i_n}\, X_{j_1, \ldots, j_n} \qquad (3.3)$

Splitting the sum according to whether the node states agree or differ,

$E = \sum_{i_1, \ldots, i_n} \sum_{j_1, \ldots, j_n} S_{i_1, \ldots, i_n ;\, j_1, \ldots, j_n} - 2\, S^{+} \qquad (3.4)$

where $S^{+}$ is the sum of the components of $S$ over the index pairs whose node states differ. Since the first term in the above equation is constant (it is the sum of the weights of the edges), it follows that the maximization of E is equivalent to the minimization of $S^{+}$. Clearly, $S^{+}$ is the weight of the cut in MN with $V_1$ being the nodes of MN with a state equal to $-1$.
Q. E. D.
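A small numerical check of this equivalence (a self-contained illustrative sketch with arbitrarily chosen sizes and weights, not from the original text): with zero thresholds, the energy (3.3) and the cut weight of the $+1$/$-1$ node partition differ by a constant, so maximizing one minimizes the other. Here the double sum counts each undirected edge twice, and the cut weight below adopts the same convention.

    import numpy as np

    m, n = 2, 2
    rng = np.random.default_rng(1)

    S = rng.random((m,) * (2 * n))            # nonnegative edge weights
    S = (S + S.transpose(2, 3, 0, 1)) / 2     # undirected graphoid: symmetric S

    def energy(X):
        return np.einsum('ijkl,ij,kl->', S, X, X)       # quadratic form (3.3)

    def cut_weight(X):
        # Sum of S over (ordered) node pairs lying on opposite sides of the cut.
        differ = (X[:, :, None, None] * X[None, None, :, :]) < 0
        return S[differ].sum()

    total = S.sum()
    for _ in range(5):
        X = rng.choice([-1.0, 1.0], size=(m, m))        # random +1/-1 partition
        assert np.isclose(energy(X), total - 2.0 * cut_weight(X))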
Now consider an arbitrary non-planar graph type structure called a graphoid (not necessarily the connection structure of a multidimensional neural network). Consider a fully symmetric tensor of dimension $m$ and order $2n$, which is utilized to describe the connection structure of a multidimensional neural network.
A subset $E'$ of the set of edges of G can be represented by a characteristic tensor of order $2n$, with the edge between two nodes $V_{i_1, i_2, \ldots, i_n}$, $V_{j_1, j_2, \ldots, j_n}$ leading to an entry of $+1$ at those locations in the tensor. Thus, the edge characteristic tensor of a subset $E'$ of the edges of a graphoid is defined such that

$Y_{i_1, \ldots, i_n ;\, j_1, \ldots, j_n} = \begin{cases} 1, & \text{if the edge } (V_{i_1, \ldots, i_n}, V_{j_1, \ldots, j_n}) \text{ belongs to } E' \\ 0, & \text{otherwise} \end{cases} \qquad (3.5)$
Definition 3.3
The incidence tensor of a graphoid G = (V, E) is a block tensor of the form

$D_G = \begin{bmatrix} T_{V_1} \\ T_{V_2} \\ \vdots \\ T_{V_n} \end{bmatrix} \qquad (3.6)$

where $T_{V_i}$ represents the tensor of the set of edges incident upon the node $V_i$. It should be noted that the incidence tensor is a blocked tensor, and the above illustration is shown to aid the imagination of the reader.
Various concepts associated with planar graphs are utilized as the basis to define the
following concepts associated with a graphoid (non-planar). They provide the notation
associated with graphoid theoretic codes.
The following lemmas are very easy to verify.
Lemma 3.1: The set of characteristic tensors that correspond to the cuts in a connected graphoid G = (V, E) constitutes a linear tensor/vector space of dimension $(|V| - 1)$.
The linear tensor/m-d vector space that corresponds to the cuts of a graphoid G will be called the cut space of G. Furthermore, the circuits in a graphoid also constitute a linear tensor/vector space.
Lemma 3.2: Given a connected graphoid G = (V, E), the incidence tensor of G has rank $(|V| - 1)$.
The codewords of a graphoid theoretic code $C_G$ are taken to be the characteristic tensors of the cuts, and the maximum likelihood decoding problem is posed as follows.
That is, given a graphoid G = (V, E) and a (0, 1) tensor Y of dimension $m$ and order $2n$, what is the codeword in $C_G$ closest to Y in Hamming distance?
The following lemma answers this question.
Hamming Distance: Given two (0, 1) tensors X, Y of dimension $m$ and order $2n$, the Hamming distance between them is the number of places where they differ. This definition is motivated by transmitting a binary tensor X through a noisy multidimensional channel, observing the output Y and counting the number of errors that have occurred.
Lemma 3.3: Let $G_Y$ be the weighted graphoid associated with the received tensor Y, where $W_{i_1, i_2, \ldots, i_n ;\, j_1, \ldots, j_n}$ is the weight associated with the edge $(i_1, \ldots, i_n ;\, j_1, \ldots, j_n)$ in $G_Y$. Then the maximum likelihood decoding of the tensor Y with respect to $C_G$ is equivalent to finding the minimum cut in $G_Y$.
Proof: Assume the number of ones in Y is $b$. Let P be an arbitrary codeword in $C_G$. Let $L_{i,j}$ denote the number of positions in which P contains an $i \in \{0, 1\}$ and Y contains a $j \in \{0, 1\}$. Clearly,

$b = L_{0,1} + L_{1,1} \qquad (3.8)$

and the Hamming distance between P and Y is

$d_H(P, Y) = L_{0,1} + L_{1,0} \qquad (3.9)$

Thus,

$L_{1,0} - L_{1,1} = L_{0,1} + L_{1,0} - b = d_H(P, Y) - b \qquad (3.10)$

Minimizing the right hand side of the above expression over all $P \in C_G$ is equivalent to finding a codeword which is closest to Y. On the other hand, minimizing the left hand side is equivalent to finding the minimum cut in $G_Y$.
Q.E.D.
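The counting identities (3.8)-(3.10) are easy to check numerically; the following sketch (illustrative only, with arbitrary sizes and random tensors) does so:

    import numpy as np

    rng = np.random.default_rng(2)
    P = rng.integers(0, 2, size=(2, 2, 2))   # a stand-in binary tensor (codeword role)
    Y = rng.integers(0, 2, size=(2, 2, 2))   # received binary tensor

    # L[i, j] = number of positions where P contains i and Y contains j.
    L = {(i, j): int(np.sum((P == i) & (Y == j))) for i in (0, 1) for j in (0, 1)}
    b = int(Y.sum())                          # number of ones in Y
    d_H = int(np.sum(P != Y))                 # Hamming distance between P and Y

    assert b == L[(0, 1)] + L[(1, 1)]                 # (3.8)
    assert d_H == L[(0, 1)] + L[(1, 0)]               # (3.9)
    assert L[(1, 0)] - L[(1, 1)] == d_H - b           # (3.10)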
From the above lemma, the following theorem follows.
An information tensor $B_{i_1, i_2, \ldots, i_n}$ is encoded into a codeword tensor through the generator tensor $G$ of the code:

$B_{i_1, \ldots, i_n} \odot G_{i_1, \ldots, i_n ;\, j_1, \ldots, j_l} = C_{j_1, j_2, \ldots, j_l} \qquad (3.11)$

where $\odot$ denotes the inner product operation (between tensors defined over a finite field) by means of the exclusive or operation between the components of the outer product of tensors (contraction over appropriate indices of the sum of products of binary variables).
The above procedure of generating the codeword tensor from an information tensor leads to
the following interesting considerations which are inherent to multidimensional code design.
In one dimension, a binary information vector of length k is encoded into a codeword
vector of length n by padding the parity bits to it. The parity check equations obtained
through the parity check matrix determine these bits. In the case of two/multidimensional
array of information bits, there are many ways to encode the array into a codeword array.
Even in the simplest two dimensional array case, by padding a border of parity bits along
the row wise as well as column wise directions, the codeword array can be generated. In
the following, this degree of freedom in multidimensional coding is formally described.
A multidimensional information array (information tensor) is mapped into a codeword
array in the following ways:
(1) An m-dimensional information tensor of order n is mapped into an m-dimensional
codeword tensor of order l (l > n),
(2) An m -dimensional information tensor of order n is mapped into k -dimensional
codeword tensor (k > m ) of order n,
(3) An m-dimensional information tensor of order n is mapped into a k -dimensional
(k > m) codeword tensor of order l (l > n).
For the purpose of notational convenience, only encoding through the operation (1) is utilized in the following. It is easy to realize that, by transposing the information as well as the generator tensors, the operation (2) in encoding is achieved. But to encode an information tensor into a codeword tensor through the operation (3), a second generator type tensor is utilized.
Various ideas familiar in one dimensional coding theory (parity check matrices, primitive polynomials, bases, cosets etc.) have corresponding parallels in multi/infinite dimensional coding theory, based on the tensor linear operator defined over a finite field. The detailed translation from one dimensional encoding/decoding algorithms to multiple dimensions is being pursued.
The entropy of a multidimensional source with probability tensor $P$ is defined as

$H(X_{i_1, i_2, \ldots, i_n}) = -\sum_{i_1=1}^{m} \cdots \sum_{i_n=1}^{m} P_{i_1, \ldots, i_n} \log P_{i_1, \ldots, i_n} \qquad (3.12)$
Given the basic idea of the above definition, results from one dimension are generalized to multidimensions utilizing the principles described in (Rama 3). Complex sources, such as a Markovian source, require some sophistication in defining the entropy/uncertainty of the source. The interesting channel model in multidimensions is the discrete memoryless channel, represented through a stochastic tensor whose elements are the conditional probabilities $P_{j_1, \ldots, j_l ;\, i_1, \ldots, i_k}$. This corresponds to a Markov random field. Detailed theorems are derived utilizing the principles described in (Rama 3).
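As a small illustration (not from the original text; the random distribution and the choice of base-2 logarithm are arbitrary assumptions), the entropy (3.12) of a source whose symbol probabilities form a tensor can be computed directly:

    import numpy as np

    P = np.random.default_rng(3).random((2, 2, 2))
    P /= P.sum()                      # normalize: components of P sum to one
    H = -np.sum(P * np.log2(P))       # entropy (3.12), here measured in bits
    print(H)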
With the multidimensional encoding scheme formally described, it is proved in the following that the maximum likelihood decoding problem of a multidimensional linear block code is equivalent to the maximization of a multivariate polynomial (whose terms/monomials are described in terms of the entries of the received and generator tensors) over the multidimensional hypercube.
The essential idea in the derivation of the desired result (a generalization of Theorem 3.3 to arbitrary multidimensional linear codes) is to represent the symbols of the additive group as symbols in the multiplicative group through the following transformation:

$a \rightarrow (-1)^a, \quad \text{i.e.} \quad 0 \rightarrow 1, \; 1 \rightarrow -1 \qquad (3.13)$

Thus, the information tensor $B_{i_1, \ldots, i_n}$ is represented by the tensor $X_{i_1, \ldots, i_n}$, where the component $X_{i_1, \ldots, i_n} = (-1)^{B_{i_1, \ldots, i_n}}$, and the codeword tensor is represented by

$Y_{j_1, j_2, \ldots, j_l} = (-1)^{\sum_{i_1=1}^{m} \cdots \sum_{i_n=1}^{m} B_{i_1, \ldots, i_n} G_{i_1, \ldots, i_n ;\, j_1, \ldots, j_l}} = \prod_{i_1=1}^{m} \cdots \prod_{i_n=1}^{m} X_{i_1, \ldots, i_n}^{\,G_{i_1, \ldots, i_n ;\, j_1, \ldots, j_l}} \qquad (3.14)$
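The agreement between the GF(2) encoding (3.11) and the $\{+1, -1\}$ monomial form (3.14) can be checked with a toy example (an illustrative sketch; the sizes and the random generator tensor are arbitrary choices, not from the original text):

    import numpy as np

    rng = np.random.default_rng(4)
    m, n, l = 2, 1, 2                                 # toy sizes: order-1 info, order-2 codeword
    B = rng.integers(0, 2, size=(m,) * n)             # binary information tensor
    G = rng.integers(0, 2, size=(m,) * n + (m,) * l)  # generator tensor of zeroes and ones

    # GF(2) encoding (3.11): sum of products modulo two.
    C = np.einsum('i,ijk->jk', B, G) % 2

    # {+1,-1} representation (3.13)-(3.14): X = (-1)^B and Y as a product of monomials.
    X = (-1.0) ** B
    Y = np.prod(X[:, None, None] ** G, axis=0)

    assert np.array_equal(Y, (-1.0) ** C)             # the two encodings agree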
Definition 3.4
In the $\{+1, -1\}$ representation of a multidimensional linear code, instead of a generator tensor, given an information tensor $X_{i_1, \ldots, i_n}$, an encoding procedure $X \rightarrow Y$ is utilized, where the tensor $Y_{j_1, \ldots, j_l}$ is such that each component $Y_{j_1, \ldots, j_l}$ is a monomial that consists of a subset of the $X_{i_1, \ldots, i_n}$. An encoding procedure is systematic if and only if $Y_{j_1, \ldots, j_s} = X_{j_1, \ldots, j_s}$ for $1 \le s \le n$.
Definition 3.5
Let $G_{i_1, i_2, \ldots, i_n ;\, j_1, j_2, \ldots, j_l}$ be a generator tensor of ones and zeroes. The polynomial representation of the generator tensor G with respect to a $\{+1, -1\}$ received tensor W of dimension $m$ and order $l$, denoted by $E_W$, is

$E_W(X) = \sum_{j_1=1}^{m} \cdots \sum_{j_l=1}^{m} W_{j_1, \ldots, j_l} \prod_{i_1=1}^{m} \cdots \prod_{i_n=1}^{m} X_{i_1, \ldots, i_n}^{\,G_{i_1, \ldots, i_n ;\, j_1, \ldots, j_l}} \qquad (3.15)$

$= W \odot Y(X) \qquad (3.16)$

where $\odot$ denotes the inner product between the tensors (i.e. the outer product of the tensors followed by contraction over the appropriate indices).
Consider the linear multidimensional block code defined by the generator tensor G (or, equivalently, by the encoding procedure associated with G). The polynomial representation of G, i.e. $E_W(X)$, will be called the energy function of W with respect to the encoding procedure $X \rightarrow Y$.
To establish the connection between the energy functions (optimized by neural/generalized neural networks over various subsets of the multidimensional lattice) and linear multidimensional block codes, we will prove that finding the global maximum of $E_W(X)$ is equivalent to maximum likelihood decoding of a tensor W with respect to the code C.
Theorem 3.4: Given an $(m, n; m, l)$ multidimensional linear block code C defined by an encoding procedure $X \rightarrow Y$, and a $\{+1, -1\}$ tensor W, the closest codeword (in Hamming distance) to W in C corresponds to an information tensor B if and only if

$E_W(B) = \max_{X} E_W(X) \qquad (3.17)$

where the maximum is taken over all $\{+1, -1\}$ tensors X.
Proof: For a {+1, -1} information tensor X, the scalar energy function is given by

E_W(X) = W ⊙ Y(X)   (3.18)
       = \sum_{j_1=1}^{m} \cdots \sum_{j_l=1}^{m} W_{j_1,...,j_l} Y_{j_1,...,j_l}(X)   (3.19)
       = m^l - 2 |{(j_1,..., j_l) : W_{j_1,...,j_l} \neq Y_{j_1,...,j_l}(X)}|   (3.20)
       = m^l - 2 d_H(W, Y(X))   (3.21)

Thus, maximizing E_W(X) over all {+1, -1} information tensors X is equivalent to minimizing the Hamming distance d_H(W, Y(X)), which is precisely maximum likelihood decoding.
Q.E.D.
In particular, taking W to be the all ones tensor (the codeword of the zero information tensor),

E_W(X) = \sum_{j_1=1}^{m} \cdots \sum_{j_l=1}^{m} Y_{j_1,...,j_l}(X)   (3.22)

the minimum over all X tensors (other than the all ones tensor) of d_H((all ones tensor), Y(X))   (3.23)

occurs at M = Maximum over all tensors other than the all ones tensor of E_W(X),   (3.24)

so that the minimum distance of the code is

(m^l - M)/2.   (3.25)
The above results are being generalized to infinite dimensional codes utilizing infinite dimension/order tensors.
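The equivalence just established is easily checked numerically. The following minimal sketch (in Python with numpy; the generator tensor and the corrupted codeword are hypothetical examples) verifies the identity E_W(X) = m^l - 2 d_H(W, Y(X)) of (3.18)-(3.21) by exhaustive search over information tensors.

import numpy as np
from itertools import product

# Hypothetical tiny example: 2x2 information tensors, 2x2x2 codeword tensors.
rng = np.random.default_rng(1)
G = rng.integers(0, 2, size=(2, 2, 2, 2, 2))

def encode(X):
    # Y_j = prod_i X_i^{G_{i;j}} -- the {+1,-1} encoding of (3.14)
    return np.prod(np.where(G == 1, X[:, :, None, None, None], 1), axis=(0, 1))

# A received tensor W: a codeword with one symbol flipped
W = encode(np.array([[1, -1], [-1, 1]])).copy()
W[0, 0, 0] *= -1

best_E, best_d = -np.inf, np.inf
for bits in product([1, -1], repeat=4):
    X = np.array(bits).reshape(2, 2)
    Y = encode(X)
    best_E = max(best_E, np.sum(W * Y))    # energy (3.15)/(3.16)
    best_d = min(best_d, np.sum(W != Y))   # Hamming distance

# Theorem 3.4: E_W(X) = m^l - 2 d_H, so maximum energy <-> minimum distance
assert best_E == W.size - 2 * best_d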
Game-Theoretic Codes: Optimal Codes
In the theory of error correcting codes, minimum distance of a linear code provides a
measure of the number of errors that can be corrected. From ( 3.25), it is evident that the
maximization of minimum distance of a multidimensional linear block code requires
minimizing M. Thus, we have the following Lemma.
Lemma 3.4: The multidimensional (m, n; m, l) linear block code which minimizes M in (3.24) enables the correction of the maximum number of errors among all such error correcting codes.
Proof: From (3.25), maximization of the minimum distance of an (m, n; m, l) linear code is equivalent to minimizing M, i.e. minimizing the maximum value of the energy function over the multidimensional hypercube (excluding the all ones tensor). Such a code design problem fits in the game-theoretic framework. It is well known that maximization of the minimum distance also maximizes the number of errors that can be corrected.
Q. E. D.
Analogously to (3.15), the polynomial representation of the parity check tensor H with respect to a {+1, -1} tensor X is

E(X) = \sum_{j_1=1}^{m} \cdots \sum_{j_{n-l}=1}^{m} \prod_{i_1=1}^{m} \cdots \prod_{i_n=1}^{m} X_{i_1,...,i_n}^{H_{i_1,...,i_n; j_1,...,j_{n-l}}}   (3.26)

In systematic form, the parity check tensor is the blocked tensor

H = [P | I]   (3.27)

i.e. a blocked tensor with sub-tensors of compatible dimension and order, where I is an identity tensor. From the definition of a parity check tensor of a multidimensional linear block code, every codeword tensor satisfies

C_{j_1,...,j_n} ⊙ H = 0.   (3.28)

Lemma 3.5: A {+1, -1} tensor X corresponds to a codeword of C if and only if E(X) = m^{(n-l)}.
Proof: E, the polynomial representation of the parity check tensor, has m^{(n-l)} terms, and all the coefficients are equal to one. Hence, E(X) = m^{(n-l)} if and only if all the terms are equal to one.
Q. E. D.
The above Lemma ensures that in the polynomial representation E(X), every codeword corresponds to a global maximum (stable state). An interesting question is: does every local maximum correspond to a codeword? This question is answered by the following theorem.
Theorem 3.5: Let C be a linear multidimensional block code, with G, H, E_C, and E as defined above. Then E is a polynomial with the properties of E_C. That is, X corresponds to a local maximum in E if and only if X ∈ C.
Proof: From the above Lemma, the global maximum of E is m^{(n-l)}; thus every codeword is a global (and thus a local) maximum. The converse follows from the fact that the tensor H has a systematic form. Specifically, the last m^{(n-l)} variables in E, i.e. x_{i_1, i_2,..., i_{n-l+1},..., i_n}, where the order indices i_{n-l+1},..., i_n (each of them) assume m values, each appear in only one term. That is, since I is an identity tensor in the parity check tensor H, x_{i_1,..., i_{n-l+1},..., i_n} appears only in the first term, and so on. Now, assume that a tensor V exists that corresponds to a local maximum which is not a global maximum. That is, E(V) = L, where L < m^{(n-l)}. Hence, at least one term exists in E(V) that is not one. However, this term can be made one by flipping the value of the index variables that appear in it. This contradicts the fact that V is a local maximum.
Q. E. D.
To summarize, given a linear code C, a polynomial whose local maxima are exactly the codewords is constructed from the parity check tensor H in systematic form, as described above.
The theory is next generalized to linear block codes over GF(p), where the codeword tensor over Z_p is given by

V_{j_1,...,j_n} = (\sum_{i_1=1}^{m} \cdots \sum_{i_k=1}^{m} B_{i_1,...,i_k} G_{i_1,...,i_k; j_1,...,j_n}) mod p   (3.29)
The essential idea is once again to utilize the multiplicative representation. Let u be the p-th root of unity, i.e.

u = e^{(j 2 \pi)/p}   (3.30)

The additive Z_p group can be represented as a multiplicative group of p-th roots of unity through the transformation a → u^a.
In the multiplicative representation, the information symbols in the information tensor are represented as

X_{i_1,...,i_k} = u^{B_{i_1,...,i_k}}   (3.31)
and the codeword symbols as

Y_{j_1,...,j_n} = u^{V_{j_1,...,j_n}} = u^{(\sum_{i_1=1}^{m} \cdots \sum_{i_k=1}^{m} B_{i_1,...,i_k} G_{i_1,...,i_k; j_1,...,j_n}) mod p} = \prod_{i_1=1}^{m} \cdots \prod_{i_k=1}^{m} u^{B_{i_1,...,i_k} G_{i_1,...,i_k; j_1,...,j_n}}   (3.32)

Y = (Y_{j_1,...,j_n})   (3.33)
Two cases of the maximum likelihood decoding (MLD) problem are considered: in the first case, the metric is the Hamming distance between the tensors, while in the second case, we consider the Lee distance.
The generalization for the case where the Hamming distance is utilized in the maximum likelihood decoding (MLD) problem is based on the following well known Lemma.
Lemma 3.6: Let p be a prime, and let u = e^{(j 2 \pi)/p}. Then

(1/p) \sum_{m=0}^{p-1} u^{k m} = 1 if k = 0 (mod p), and 0 otherwise.   (3.34)

The generalization is stated through the following theorem.
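The identity in Lemma (3.6) is easily verified numerically; the following minimal sketch (in Python with numpy) checks it for the hypothetical choice p = 5.

import numpy as np

p = 5
u = np.exp(2j * np.pi / p)                    # primitive p-th root of unity
for k in range(-p, p + 1):
    s = sum(u ** (k * m) for m in range(p)) / p
    expected = 1.0 if k % p == 0 else 0.0     # 1 iff k = 0 (mod p)
    assert abs(s - expected) < 1e-12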
Theorem 3.6: Consider an (m, k; m, n) multidimensional linear block code over GF(p), with the energy function

E_W^p(Y) = \sum_{l=0}^{p-1} (\bar{W}^l ⊙ Y^l)   (3.35)

where \bar{W} denotes the complex conjugate of W, the powers are taken componentwise, and ⊙ denotes the inner product between the tensors. Then, the maximum likelihood decoding of W_{i_1,...,i_n} is equivalent to finding the maximum of E_W^p(Y).
Proof: It follows by the same argument as Theorem (3.4), adapted to the variables appearing in E_W^p(Y), utilizing Lemma (3.6).
Q.E.D.
The essence of the above theorem stated in more explicit language leads to the following
conclusion.
Given a received tensor W_{i_1,...,i_n}, the closest codeword tensor (in Hamming distance) to W in C (the code utilized at the input to the multidimensional channel) corresponds to a tensor B if and only if

E_W^p(B) = Max over all codeword tensors Y of E_W^p(Y), where E_W^p(Y) = \sum_{l=0}^{p-1} (\bar{W}^l ⊙ Y^l)   (3.36)
Next, we consider the maximum likelihood decoding problem with respect to the Lee
distance. We first consider the cases where p = 3 or 5. In these cases, there are easy
expressions for the energy function. It is convenient to redefine the energy function in the
following manner:
Given an encoding procedure X → Y that maps a transmitted tensor X = (X_{i_1,...,i_k}) into a codeword tensor Y = (Y_{i_1,...,i_n}),   (3.37)
and W = (W_{i_1,...,i_n}), a tensor whose entries are p-th roots of unity, we redefine the energy function as follows:
E_W(X) = \lfloor Re(\bar{W}_{i_1,...,i_n} ⊙ Y_{i_1,...,i_n}) \rfloor   (3.38)

where Re(x) denotes the real part of the complex number x, \lfloor x \rfloor denotes the integral part of the number x, and \bar{x} denotes the complex conjugate of x.
It should be noted that this energy function coincides with the one for p = 2 (in which case u = -1). The definition of the Lee distance is provided to facilitate easier understanding of the further discussion.
Definition 3.6
The Lee weight of an m-dimensional tensor of order k, X = (X_{i_1,...,i_k}) with X_{i_1,...,i_k} ∈ Z_p, p a prime, is defined as

W_L = \sum_{i_1=1}^{m} \cdots \sum_{i_k=1}^{m} |X_{i_1,...,i_k}|   (3.39)

where

|X_{i_1, i_2,..., i_k}| = X_{i_1, i_2,..., i_k},      if 0 <= X_{i_1, i_2,..., i_k} <= (p-1)/2;
|X_{i_1, i_2,..., i_k}| = p - X_{i_1, i_2,..., i_k},  if (p-1)/2 < X_{i_1, i_2,..., i_k} <= (p-1).
The Lee distance between any two compatible tensors is defined as the Lee weight, W L
of their difference.
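For concreteness, the following minimal sketch (in Python with numpy; the function names are illustrative) computes the Lee weight (3.39) and the Lee distance of Z_p valued tensors.

import numpy as np

def lee_weight(X, p):
    # Lee weight (3.39): each component contributes min(x, p - x)
    X = np.asarray(X) % p
    return int(np.sum(np.minimum(X, p - X)))

def lee_distance(X, Y, p):
    # Lee distance: Lee weight of the componentwise difference mod p
    return lee_weight((np.asarray(X) - np.asarray(Y)) % p, p)

# Example over Z_5: |1| = 1, |3| = 2, |4| = 1, |0| = 0
assert lee_weight([[1, 3], [4, 0]], 5) == 4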
With the above definition, we study the cases p = 3 and p = 5. From now on, in the following discussion, X → Y denotes the encoding procedure that defines a (multidimensional) code, and X, Y are tensors of dimension m and order k, n respectively, of third or fifth roots of unity.
In the following, two new theorems are proved. The first one is the analogue of Theorem (3.4). It states that maximum likelihood decoding (MLD) of a ternary code is equivalent to the maximization of the energy function in (3.38). The theorem is formally stated below:
Theorem 3.7: Let p = 3 and A → B; then B is the closest multidimensional codeword (in the Hamming distance) to a received tensor word W if and only if

E_W(A) = Max over X of E_W(X).   (3.40)

Proof: The proof is similar to that of Theorem (3.4) and is avoided for brevity.
Q.E.D.
The proofs of Theorem (3.7) as well as Theorem (3.8) require the utilization of Lemma (3.6) and a clear understanding of when the energy function is maximized. The new theorem treats the Lee metric for p = 5; it is stated as follows.
Theorem 3.8: Let p = 5 and A → B; then B is the closest multidimensional codeword (in the Lee distance) to a received tensor word W if and only if

E_W(A) = Max over X of E_W(X).   (3.41)

Proof: The value of E_W(A) is determined by the cardinalities of the disagreement sets; since each component of \bar{W} B is a fifth root of unity,

|{(i_1, i_2,..., i_n) : \bar{W}_{(i_1,...,i_n)} B_{i_1,...,i_n} = u^2 or u^3}|   (3.42)

= m^n - |{(i_1, i_2,..., i_n) : \bar{W}_{(i_1,...,i_n)} B_{i_1,...,i_n} = u or u^4}| - (number of agreements)   (3.43)

where \bar{W} denotes the complex conjugate of all components of W and d_L denotes the Lee distance. Weighting these cardinalities by the Lee weights of the corresponding disagreements gives d_L(W, B). Hence, E_W(A) reaches a maximum if and only if d_L(W, B) reaches the minimum.
Q.E.D.
The above results are generalized to infinite dimension/order tensors in a
straightforward manner.
The success of the energy function techniques for linear multidimensional codes (developed in the previous sections) naturally suggests a search for similar techniques for non-linear codes. In the following, non-linear multidimensional codes are investigated.
The essential idea in generalizing the results of the previous section to non-linear multidimensional codes is to consider the representation of Boolean functions as polynomials over the field of real numbers. In the context of one dimensional non-linear codes, part of the discussion is known (BrB) and is repeated here for the sake of completeness. Also, the utilization of some subtle ideas associated with tensor products makes the presentation an essential aid in realizing that non-linear multidimensional codes share various features with linear codes.
Definition 3.7
A Boolean function f on n variables is a mapping

f : {0,1}^n → {0,1}   (3.44)

For the present discussion, it is useful to define Boolean functions using the symbols +1 and -1 instead of the symbols 0 and 1, respectively.
Definition 3.8
A Hadamard matrix of order m, denoted by H_m, is an m x m matrix of +1s and -1s such that

H_m H_m^T = m I_m,   (3.45)

where I_m is the m x m identity matrix. The above definition is equivalent to the assertion that any two rows of H_m are orthogonal.
Hadamard matrices of order 2^k exist for all k >= 0. The construction is as follows:

H_1 = [1];  H_2 = [ 1  1 ; 1  -1 ];  H_{2^{n+1}} = [ H_{2^n}  H_{2^n} ; H_{2^n}  -H_{2^n} ]   (3.46)
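The construction (3.46) is easily mechanized; the following minimal sketch (in Python with numpy) builds H_{2^k} recursively and checks the defining property (3.45).

import numpy as np

def hadamard(k):
    # Sylvester construction (3.46): Hadamard matrix of order 2**k
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H8 = hadamard(3)
assert np.array_equal(H8 @ H8.T, 8 * np.eye(8, dtype=int))   # property (3.45)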
Definition 3.9
Given a Boolean function f of order n, P_f is a polynomial (with coefficients over the field of real numbers) equivalent to f if and only if for all vectors X ∈ {+1, -1}^n,

f(X) = P_f(X).   (3.47)
Theorem: For every Boolean function f there exists a unique equivalent polynomial P_f.
Proof: Consider first the case of a single variable, where

P_f(X_1) = b_0 + b_1 X_1   (3.48)

Thus,

P_f(1) = b_0 + b_1   (3.49)
P_f(-1) = b_0 - b_1   (3.50)

i.e. P = G B, where G = [ +1  +1 ; +1  -1 ]   (3.51)

with P = (P_f(1), P_f(-1))^T and B = (b_0, b_1)^T; since G is non-singular, the coefficients are uniquely determined. The general case concerns P_f(X_1, X_2,..., X_{n+1}).
Remark
Before proceeding with the proof, the following comparison/discussion of the similarities and differences between tensor products and matrix products is very relevant. Consider a matrix G and a column vector B. The tensor product, when the variables (matrix, column vector) are treated as tensors, is given by
G ⊗ B = G_{i,j} B_k = P_{ijk}   (3.52)

an order three tensor whose components are G_{11}B_1, G_{11}B_2, G_{12}B_1, G_{12}B_2, G_{21}B_1, G_{21}B_2, G_{22}B_1, G_{22}B_2.
Now, we perform contraction on certain indices of the tensors. The resulting tensor is
a first order tensor. Specifically, suppose we do the contraction over the indices j, k. Then,
we have
[ G_{11}B_1 + G_{12}B_2 ; G_{21}B_1 + G_{22}B_2 ]   (3.53)
Thus, the tensor product, in contrast to the matrix product allows more freedom in
summing the components over different indices (contraction over different indices in the
language of tensor algebra) of the tensor.
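The distinction just discussed can be made concrete. The following minimal sketch (in Python with numpy; the einsum index strings encode the contractions) forms the outer product (3.52) and the contraction (3.53), recovering the ordinary matrix product as a special case.

import numpy as np

G = np.array([[1., 2.], [3., 4.]])
B = np.array([5., 6.])

# Outer (tensor) product: P_{ijk} = G_{ij} B_k, an order three tensor (3.52)
P = np.einsum('ij,k->ijk', G, B)

# Contraction over the indices j and k (3.53): sum_j P_{ijj}
contracted = np.einsum('ijj->i', P)

# For this particular choice of contracted indices, the result is G @ B
assert np.allclose(contracted, G @ B)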
Now, we return to the original proof.
The above argument is now generalized to less than or equal to m^n variables (or an arbitrary finite/countable number of variables which are possibly the components of a tensor) by the method of mathematical induction.
The case m = 1, n = 1 is proved at the beginning of the proof. Since m^n is still a finite number, say l, it is sufficient as well as necessary to prove the result for a finite number l (in the case considered, the binary variables are embedded inside a tensor; also, the polynomial representing the Boolean function is expressed through an inner product operation over appropriate tensors).
Now, as an induction hypothesis, assume that the claim is true for l variables, i.e.

P = G_{2^n} B   (3.54), (3.55)

Then, with

G_1 = [1];  G_2 = [ 1  1 ; 1  -1 ];  G_{2^{n+1}} = [ G_{2^n}  G_{2^n} ; G_{2^n}  -G_{2^n} ]   (3.56)

we have

P = G_{2^{n+1}} B   (3.57)
Hadamard matrices are non-singular; thus, for any given f, a unique Pf exists (defined
by a vector of coefficients).
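Since G_{2^n} is symmetric with G_{2^n} G_{2^n} = 2^n I, the coefficient vector is recovered as B = (1/2^n) G_{2^n} P. The following minimal sketch (in Python with numpy; the two-variable Boolean function and the bit-to-variable convention are illustrative choices) computes the unique polynomial P_f and verifies the equivalence (3.47).

import numpy as np

def sylvester(n):
    G = np.array([[1]])
    for _ in range(n):
        G = np.block([[G, G], [G, -G]])
    return G

def f(x1, x2):
    # Hypothetical Boolean function in the {+1,-1} convention (an AND-like function)
    return 1 if (x1 == 1 and x2 == 1) else -1

n = 2
G = sylvester(n)
# Truth table P: bit j of the row index i encodes X_j (0 -> +1, 1 -> -1)
P = np.array([f(1 - 2 * (i & 1), 1 - 2 * ((i >> 1) & 1)) for i in range(2 ** n)])
B = G @ P / 2 ** n     # unique coefficient vector, B = (1/2^n) G P

# Verify f(X) = P_f(X) on all of {+1,-1}^2; monomials ordered [1, X1, X2, X1 X2]
for i in range(2 ** n):
    x1, x2 = 1 - 2 * (i & 1), 1 - 2 * ((i >> 1) & 1)
    assert np.isclose(B[0] + B[1] * x1 + B[2] * x2 + B[3] * x1 * x2, f(x1, x2))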
In the language of tensor algebra, the same argument holds true, except that the tensor utilized to couple the coefficients of the polynomial (representing a Boolean function) to the values of the polynomial can have 0 (zero) entries in addition to +1, -1 entries (when contraction is performed over the appropriate indices). Uniqueness of such a polynomial is ensured by its uniqueness as a representation of the Boolean function (from the discussion/proof above). Thus, in the tensor algebra notation, we have

P = G ⊙ B   (3.58)
Q. E. D.
It should be clear that the above representation theory has relevance to the minimum sum-of-products representation of a Boolean function. The above theory, as is easily seen, holds true if one is interested in finding the equivalent polynomial of a Boolean function which assumes {0,1} values. One way to see the result is by the following claim.
CLAIM: Every monomial over {+1, -1} can be written as a polynomial over {0,1} by the change of variable (BrB), x = 1 - 2u, as follows:

\prod_{i=1}^{k} X_i = 1 + \sum_{i=1}^{k} (-2)^i \sum_{S_i : |S_i| = i} \prod_{j \in S_i} U_j   (3.59)
The encoding procedure of a (possibly non-linear) multidimensional code is thus specified by Boolean functions, with the codeword components

V_{i_1, i_2,..., i_n} = f_{i_1, i_2,..., i_n}(X)   (3.60)

where X is the information tensor, which consists of m^k components.
Now, by the same argument as in Theorem (3.4), the maximum likelihood decoding (MLD) of a given received tensor word W is equivalent to solving the following maximization problem:

Maximize E_W(X) = W ⊙ Y(X) over all {+1, -1} information tensors X.   (3.61)
Tensor polynomial equations of the following form arise in the above framework:

X^2 ⊙ A_2 + X ⊙ A_1 + A_0 = 0,  and more generally  \sum_{j=0}^{m} X^j ⊙ A_j = 0   (3.62)
where X, {A} are tensors of compatible dimension, order such that the inner/outer
product operations are well defined. The solution techniques developed in (Rama 11)
when the linear operators are matrices are extended to the tensor linear operator case
in (Rama 6). Also, various results that are well documented in the books such as (Gol)
for matrix polynomials based on the properties of matrix linear operator are extended
to tensor linear operator. Furthermore, in one dimensional system theory, various results
are developed for systems of matrix polynomial equations utilizing only linear operator
properties of a matrix. These results are extended to systems of tensor polynomial
equations (Rama 3). In (Rama 6), the author formulates as well as solves the problem
of determination of tensor variate zeroes of multi-tensor variate polynomial, power
series equations
of the form

\sum_{i_1=0}^{L} \cdots \sum_{i_m=0}^{L} X_1^{i_1} ⊙ X_2^{i_2} ⊙ \cdots ⊙ X_m^{i_m} ⊙ A_{i_1,...,i_m} = 0   (3.63)
Various other associated results are documented in (Rama 6). It is well known that the zeroes of a uni-variate scalar polynomial constitute a group. By utilizing the set of zeroes of a determinantal polynomial associated with the uni-variate/multi-variate (tensor variables) polynomial, the set of tensor zeroes is divided into a certain set of equivalence classes. Thus, a group structure is imbedded onto the linear subspace of tensor zeroes of uni-variate/multi-variate polynomial equations.
In view of the above results, a natural question that arises is whether the local optima of multi-tensor variate polynomials (each variable being a tensor) over various subsets of the multidimensional (very high dimensional) lattice lead to codeword sets with better properties than those obtained from the multivariate polynomials (whose terms/monomials are based on the components of tensors) optimized in sections 3, 4, 5, 6. When the information tensor, generator tensor and codeword tensors are blocked into sub-tensors and the objective function for the optimization problem over a subset of the multidimensional lattice is rewritten, it is evident that a multi-tensor variate polynomial appears. Thus, such polynomials are subsumed in the ones considered in sections 3, 4, 5, 6.
Integer Programming Problems: Solutions Using Decoding Techniques
In computer science, operations research and other fields, problems of the following
form arise very often:
Maximize \sum_{i=1}^{n} W_i \prod_{j \in S_i} X_j   (3.64)

where S_i is a subset of {1,2,...,n} and X_j ∈ {0,1}. Thus, the problem is concerned with optimizing
a multivariate polynomial whose variables assume integer values. By the discussion in this section, every polynomial over {+1, -1} can be transformed to an equivalent one over {0,1} by a change of variable. It is shown in section 2 that a special case of the above problem, i.e. maximization of a quadratic form in {+1, -1} variables, arises in connection with the determination of the global optimum stable state of a neural network and is equivalent to the minimum cut problem. This problem is known to be an NP-hard problem.
The problem in (3.64) was studied extensively by various researchers, and the main effort concentrated on identifying the special cases which are solvable in polynomial time and on devising approximation techniques. The most common technique for solving the unconstrained {0, 1} program of the form in (3.64) is by transforming it to the problem of finding the maximum weight independent set in a graph, which is an NP-hard problem. The problem in (3.64) is transformed to the problem of finding the maximum weight independent set by using the concept of a conflict graph of a 0-1 polynomial. In (BrB), it is shown how decoding techniques can be utilized to maximize 0-1 nonlinear programs.
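As an illustration of the problem class (3.64), the following minimal sketch (in Python with numpy; the instance data W, S are hypothetical) maximizes a 0-1 polynomial objective by exhaustive search; the decoding based techniques of (BrB) and (Rama 6) replace this brute force step.

import numpy as np
from itertools import product

n = 4
W = [3.0, -2.0, 5.0, 1.0]                      # known coefficients
S = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]           # index subsets of the monomials

def objective(X):
    # sum_i W_i * prod_{j in S_i} X_j, with X_j in {0,1}
    return sum(w * np.prod([X[j] for j in s]) for w, s in zip(W, S))

best = max(product([0, 1], repeat=n), key=objective)
print(best, objective(best))                   # (1, 1, 1, 1) with value 7.0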
The multidimensional version of the 0-1 nonlinear programming problem in (3.64) is
given by
Maximize W ⊙ X = \sum_{i_1=1}^{m} \cdots \sum_{i_n=1}^{m} W_{i_1,...,i_n} X_{i_1,...,i_n}   (3.65)
where W, X are tensors containing, respectively, the known coefficients w's and the monomials in the variable components of the unknown tensor X. The inner product between these two tensors provides the scalar objective function, whose variables are allowed to assume only {0, 1} or, more generally, finitely many values. It is shown in (Rama 6) that such an integer programming problem can be solved utilizing the multidimensional decoding techniques for linear block multidimensional codes. These results in operations research are avoided here and relegated to (Rama 6).
The minimum cut problem is as difficult (in the sense of complexity theory in theoretical computer science) as any other NP-hard problem. Finding algorithms which are efficient (in terms of complexity) for an NP-hard problem is well recognized as a difficult problem. The following is a difficult open problem in theoretical computer science:
Problem: Does a polynomial time algorithm exist for an NP-hard problem? In other words, is the class of problems in NP the same as the class of problems in P, i.e. is P = NP?
In the following, an innovative algorithm/approach to solve various NP-hard problems
in one dimension is described. The multidimensional generalization of this algorithm/
approach to any NP-hard problem (in multidimensions) is being formalized. It is an
extension of the following results to multidimensions.
In section 2, the problem of computation of minimum cut in a graph is shown to be
equivalent to the problem of determining the global optimum of the energy function of a
neural network i.e. maximizing a quadratic form over the hypercube. It is well known that
this is an NP-hard problem. In the following, an attack on this problem is described.
Positive Definite Synaptic Weight Matrix: Determination of Global Optimum Stable State of a
Neural Network:
Consider a neural network whose synaptic weight matrix is symmetric as well as
positive definite. In the following, an algorithm to determine the global optimum stable
state of such a neural network is described.
(a) Utilizing a well known theorem in linear algebra, every positive definite symmetric matrix S can be decomposed into the following form by means of the Cholesky decomposition:

S = N N^T   (3.66)

where N is a lower triangular matrix.
(b) The quadratic form being optimized by the neural network over the hypercube can be expressed in the following form:

X^T S X = X^T N N^T X = Y^T Y, where Y = N^T X.   (3.67)

Since S is positive definite, X^T S X > 0. Thus, Y^T Y > 0. The scalar expression for the quadratic form in terms of Y is given by \sum_{j=1}^{n} Y_j^2. (A sketch of steps (a) and (b) is provided after the complexity discussion below.)
For the NP-hard problems considered (minimum cut computation in an undirected graph, knapsack problem etc.), the complexity of the algorithm is determined by
(a) The complexity of determination of the Cholesky decomposition of a positive definite symmetric matrix. Since there are various polynomial time procedures for the spectral decomposition, computationally well studied efficient algorithms are available;
(b) Solving the linear programming problems related to the optimization of linear forms (maximization or minimization, whichever leads to a larger value for the term) over the hypercube. It is well known that there are polynomial time algorithms for linear programming problems.
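The following minimal sketch (in Python with numpy; the matrix is a hypothetical random example) illustrates steps (a) and (b) above: the Cholesky factor yields linear forms, each of which is maximized in absolute value over the hypercube by a sign vector, and these sign vectors serve as candidate states. It is a heuristic illustration of the two steps, not the complete algorithm.

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
S = A @ A.T + 4 * np.eye(4)          # positive definite symmetric matrix

N = np.linalg.cholesky(S)            # step (a): S = N N^T, N lower triangular

# Step (b): X^T S X = sum_j (N^T X)_j^2; the j-th linear form (N^T X)_j
# is maximized in absolute value over {+1,-1}^n at X = sign(N[:, j]).
candidates = [np.sign(np.where(N[:, j] == 0, 1.0, N[:, j])) for j in range(4)]
best = max(candidates, key=lambda X: X @ S @ X)
print(best, best @ S @ best)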
In some problems that arise in operations research, communication theory etc., the constraint set is a convex polygon/polytope (the convex hull of various finite structures, leading to convex sets bounded by hyperplanes) and a quadratic/higher degree form is optimized over the constraint set. Then, by means of a Spectral/Cholesky type decomposition of the positive definite symmetric linear operator (in one as well as multidimensions), various linear programming problems are solved through efficient polynomial time procedures. The computation of the complexity of such procedures, and efficient algorithms for NP-hard problems in one and multi-dimensions, are being documented. When the connection matrix has other special structure, efficient algorithms are found.
The approach extends to finding the maximum of the energy function E of a neural network defined by the graph G (the weights on the edges of G are given by W = (-1)^{y_i}), with all its threshold values equal to zero.
But, it is well known that the local optima of a quadratic form over the hypersphere occur at the eigenvectors (eigentensors in the case of a symmetric second order tensor) of the associated symmetric matrix, with the value of the quadratic form being the corresponding eigenvalue. Thus, the maximum eigenvector of the symmetric matrix maximizes the quadratic form over the hypersphere, and the sign structure (the signs of the components) of the maximum eigenvector is utilized as the initial condition to run the neural network. Mathematically, let X_0 be the vector given by

X_0 = Sign(X_max), where X_max is the normalized maximum eigenvector,

X_0 is the initial state in which the neural network starts, and A is the symmetric connection matrix.
The analysis of the hop-and-skip algorithm is provided below:

X^T A X = (X - X_0 + X_0)^T A (X - X_0 + X_0)   (3.68)
        = X_0^T A X_0 + 2 X_0^T A (X - X_0) + (X - X_0)^T A (X - X_0)   (3.69)
The above manipulations enable one to compare the value of the quadratic form on the
hypercube at any discrete time instant against the maximum value on the unit hypersphere.
The particular choice of initial condition minimizes the Hamming distance between the maximum eigenvector and the initial condition vector used to run the neural network.
The set of eigenvectors of the connection matrix of neural network span the entire
space or a subspace of it. Similarly, the set of stable states/ stable vectors span the space
or a sub-space. To determine the maximum stable state, the essential idea of the above
approach is to find the vector closest to the maximum stable state and utilize it as the
initial condition to run the neural network. A detailed analysis of the algorithm is under investigation.
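A minimal sketch of this initialization (in Python with numpy; the network size and weights are hypothetical) is as follows: the network is started at the sign pattern of the maximum eigenvector and run with asynchronous updates until a stable state is reached.

import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2                    # symmetric connection matrix
np.fill_diagonal(A, 0.0)             # zero self-connections, zero thresholds

w, V = np.linalg.eigh(A)
x = np.sign(V[:, np.argmax(w)])      # X_0 = Sign(X_max)
x[x == 0] = 1

for _ in range(100):                 # asynchronous updates to a stable state
    changed = False
    for i in range(len(x)):
        s = np.sign(A[i] @ x)
        if s != 0 and s != x[i]:
            x[i], changed = s, True
    if not changed:
        break

print(x, x @ A @ x)                  # candidate global optimum stable state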
Dynamic Optimization
In (Rama 3), certain multidimensional systems, in discrete/continuous time, are described by the following state space representation through tensors:
Discrete Time:

X(n+1) = A(n) ⊙ X(n) + B(n) ⊙ U(n),  Y(n) = C(n) ⊙ X(n) + D(n) ⊙ U(n)   (3.70)

Continuous Time:

Ẋ(t) = A(t) ⊙ X(t) + B(t) ⊙ U(t),  Y(t) = C(t) ⊙ X(t) + D(t) ⊙ U(t)   (3.71)

where ⊙ denotes the inner product between compatible tensors in the system description
in continuous/discrete time. Utilizing this state space representation, the author formalized
a unified theory of control, communication and computation in multi/infinite dimensional
systems, first discovered in (Rama1) for one dimensional systems. This theory enabled the
author to develop a highly advanced version of the theory of evolution of life from organic
matter. In this theory the author reasons that various body organs, functions of living
systems have evolved over time and that bilogical systems are organic/inorganic matter
based dynamical systems.
3.8 CONCLUSIONS
Tensor linear spaces over finite fields are utilized to describe and study the structure/
properties of multi/infinite dimensional linear codes. The three concepts: multidimensional
neural/generalized neural networks, multidimensional codes, multivariate polynomial
(terms/monomials being expressed in terms of the components of generator, other tensors)
optimization over various subsets of lattice, are related.
It is shown that (a) the problem of maximum likelihood decoding of error correcting
codes (multidimensional), (b) finding the global maximum of the energy function of neural/
generalized neural networks, and (c) solving integer/non-linear programming problems
in multidimensions are related. The equivalence is proved for binary as well as non-binary
cases. This equivalence naturally suggests applying the solvable cases of one problem to the equivalent problem and vice versa. Full capitalization of the equivalence leads to various new results (Rama 6).
The programming problem of multidimensional neural networks is solved. Several
new heuristic procedures for NP-hard problems in multidimensions are suggested from
the equivalence. The decoding techniques of various (multidimensional extensions of one
dimensional codes) codes are utilized to find approximate solutions of NP-hard problems.
Various innovative results in static optimization are described. Infinite dimensional
generalization of the results is briefly described.
REFERENCES
(Ara) B. Arazi, Common Sense Approach to the Theory of Error Correcting Codes, MIT Press
book.
(BoT) A.I. Borisenko and I.E. Tarapov, Vector and Tensor Analysis with Applications, Dover
Publications Inc., New York, 1968.
(BrB) J. Bruck and M. Blaum, Neural Networks, Error Correcting Codes and Polynomials
Over the Binary Hypercube, IEEE Transactions on Information Theory, Vol. 35, No. 5,
September 1989.
(Gaal) Gaal, Group Theory, Academic Press, 1982.
(Gol) I. Goldberg, Matrix Polynomials, Academic Press, 1972.
(Rama 1) Garimella Rama Murthy, Unified Theory of Control, Communication and Computation: Part 1, Manuscript to be submitted to the IEEE Proceedings.
(Rama 2) Garimella Rama Murthy, Multi/Infinite Dimensional Neural Networks, Multi/Infinite
Dimensional Logic Theory, Logic Synthesis, Published in International Journal of Neural
Systems, Vol. 15, No. 3, pp 223-235, 2005.
(Rama 3) G. Rama Murthy, Optimal Control, Codeword, Logic Function Tensors:
Multidimensional Neural Networks, International Journal of Systemics, Cybernetics and
Informatics, October 2006, pages 9-17. See also Chapter 4.
(Rama 4) Garimella Rama Murthy, Multi/Infinite Dimensional Logic Synthesis, Manuscript
to be submitted to the IEEE Transactions on Computers.
(Rama 5) Garimella Rama Murthy, Signal Design for Magnetic and Optical Recording Channels,
Bellcore Technical Memorandum, TM-NWT-018026.
(Rama 6) Garimella Rama Murthy, Tensor Variate Polynomials/Power Series, Tensor based
Functions, Tensor Algebraic Geometry: Optimization, Manuscript to be submitted to the
Transactions of American Mathematical Society.
(Rama 10) Garimella Rama Murthy, Unified Theory of Control, Communication and
Computation: Dynamical Systems, Manuscript in Preparation.
(Rama 11) Garimella Rama Murthy, Transient and Equilibrium Analysis of Computer Networks:
Finite Memory and Matrix Geometric Recursions, Ph. D. Thesis, Purdue University, West
Lafayette, Indiana.
(Rama 12) Garimella Rama Murthy, Origin of Universe: Living/Non-Living: Grand-unification
Theory of Universe, Manuscript in preparation.
CHAPTER 4
TENSOR STATE SPACE REPRESENTATION: MULTIDIMENSIONAL SYSTEMS
4.1 INTRODUCTION
With the efforts of researchers in electrical engineering, linear system theory started with
abstract models of arbitrary linear systems through forced/unforced nth order difference
equations in discrete time and differential equations in continuous time. Such representations
are called the input-output representations of the linear system. These arbitrary system (electrical,
mechanical, chemical, hybrid systems) evolution equations were then converted into first
order differential/difference equations in state, control, input, output vectors through state,
input, output coupling matrices. Such a representation is called the state space representation.
The state space equations take the following form (Gop)
Discrete Time Systems:

X(n+1) = A(n) X(n) + B(n) U(n),  Y(n) = C(n) X(n) + D(n) U(n)   (4.1)

with the analogous continuous time form written in terms of derivatives, where {A(n), B(n), C(n), D(n)} as well as {A(t), B(t), C(t), D(t)} are matrices of compatible dimensions.
Thus, in the design, analysis and synthesis of linear systems, linear algebra techniques
were extensively utilized. Various input-output representation related concepts, such as the impulse response and the system function, were shown to be derivable from the state space
description. Also new concepts such as controllability and observability are studied in
terms of state space representation. Thus, the state space representation of linear systems
proved to be a far better description of arbitrary systems.
X(h, k) ∈ R^n and U(h, k) ∈ R^m are the local state and local input values at (h, k), and A_1, A_2, B_1, B_2 are matrices of compatible dimensions. A similar notation based on local state and local control was utilized in association with partial differential equation
based continuous time linear multidimensional systems. These representations of continuous
time as well as discrete time multidimensional systems required considerable amount of
ingenuity, careful tracking of the indices, in designing and analyzing such systems. To a
certain degree, this notation impeded further progress in multidimensional system theory.
With this type of approach/notation, modeling, design and analysis of certain linear/nonlinear, multi/infinite dimensional systems was a complicated task.
The author realized, for the first time, that for the evolution of CERTAIN multidimensional linear systems, a tensor linear operator based state space description is necessary as well as helpful. This mathematically formal tensor state space representation was an important contribution for further progress in multi/infinite dimensional system theory (linear/non-linear dynamical systems). Also, the author, after carefully observing
various multi/infinite dimensional systems (explicitly stated as a static or dynamical system
or when a proper abstraction is made the multidimensional nature of problem/
phenomenon becomes apparent) such as those that arise in multi/infinite dimensional
neural networks (Rama 2), databases ( utilizing multiple attribute tree etc. ), multi/infinite
dimensional coding theory (Rama 3), proposed the utilization of tensors ( of order,
dimension finite/infinite ) as the linear operators in the design, analysis and synthesis of
such systems. This idea is already utilized in some applications. It should be noted that in
the analysis of some systems defined over finite fields and other discrete structures,
utilization of tensors considerably simplifies the analysis.
In the case of multidimensional systems, there is no natural notion of causality. Various
types of causality ( quarter-plane causality, half-plane causality) are artificially imposed
by different choices of neighbourhood sets. With such an approach (for all multidimensional
systems), it is very difficult to study controllability, observability and stability. The author
realized that for certain multidimensional systems, utilization of tensor linear operators
to represent the state, control, input, output variables, is very convenient (from the point
of view of design and analysis of such systems) (Rama 1).
W(Z_1, Z_2) = (\sum_{i+j \geq 1} n_{ij} Z_1^i Z_2^j) / (1 + \sum_{i+j \geq 1} d_{ij} Z_1^i Z_2^j)   (4.2)
The idea of associating two dimensional state space models with two dimensional filters originated very naturally. However, from the beginning it appeared that the canonical technique based on the Nerode equivalence leads to an infinite dimensional state space. The reason was the utilization of a matrix as the linear operator to describe the state dynamics. So, following some heuristic procedures, several finite dimensional models have been introduced (BiF), where two notions of state play different roles:
1. local states: X(h,k) belong to a finite dimensional vector space. They enter in the
state updating equation and determine the value of the output.
X(h+1, k+1) = A_1 X(h, k+1) + A_2 X(h+1, k) + B_1 U(h, k+1) + B_2 U(h+1, k),
Y(h, k) = C X(h, k)   (4.3)

where x(h, k) ∈ R^n, u(h, k) ∈ R^m, y(h, k) ∈ R^p are the values of the local state, the input and the output at (h, k) ∈ Z × Z. Since the local state at (h+1, k+1) is computed by solving a first order difference equation, the system (4.3), denoted by Σ_1 = (A_1, A_2, B_1, B_2, C), is named a first order system.
The above model has been extensively studied in its general form and under some conditions/constraints on the system matrices. The most popular particularized version of (4.3) is Roesser's model, where the local state space X is assumed to be the direct sum of two vector spaces X_h and X_v, and the matrices of the model are constrained to have the following (partitioned) form:
A_1 = [ A_{11}  A_{12} ; 0  0 ],   A_2 = [ 0  0 ; A_{21}  A_{22} ],   B_1 = [ B^1 ; 0 ],   B_2 = [ 0 ; B^2 ]   (4.4)
Second order models are less frequently used; the typical structure of their equations is given by

X(h+1, k+1) = A_1 X(h, k+1) + A_2 X(h+1, k) + A_0 X(h, k) + B U(h, k)   (4.5)
Y(h, k) = C X(h, k)   (4.6)
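A minimal simulation sketch of the first order model (4.3) (in Python with numpy, assuming zero boundary conditions; all dimensions are illustrative) is as follows.

import numpy as np

n, m, p, H, K = 3, 2, 1, 5, 5
rng = np.random.default_rng(4)
A1 = 0.3 * rng.standard_normal((n, n))
A2 = 0.3 * rng.standard_normal((n, n))
B1, B2 = rng.standard_normal((n, m)), rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
U = rng.standard_normal((H + 1, K + 1, m))     # local inputs

X = np.zeros((H + 1, K + 1, n))                # local states, zero on the boundary
for h in range(H):
    for k in range(K):
        X[h + 1, k + 1] = (A1 @ X[h, k + 1] + A2 @ X[h + 1, k]
                           + B1 @ U[h, k + 1] + B2 @ U[h + 1, k])
Y = X @ C.T                                    # local outputs, Y(h,k) = C X(h,k)
print(Y.shape)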
Associated with the external description provided by the behavior, different internal representations can be given by introducing the so called latent variable models. State variable models constitute a particular type of latent variables that hold the memory of the system with respect to the notion of past introduced on Z × Z. When a state description is possible, i.e. when the notions of past, present and future are allowed by the structure of Σ, the behavior is called Markovian. Since there is not any natural direction for the evolution in Z × Z, the Markovian property appears more general than the familiar quarter plane causality and has been exploited in the analysis of non-causal two dimensional dynamics.
Also, various static systems that involve simple linear transformations in the
multidimensional space were previously abstracted utilizing the matrix linear operator.
Such systems arise in practical applications such as databases (modeling storage of multiple
attribute trees), computerized topography etc. The techniques developed for design and
analysis of such systems were thus very elementary.
The above efforts in two/multidimensional system theory were primarily utilizing the
matrix linear operator on an n-dimensional ( in one independent variable) vector space. System
theorists did not realize that utilization of tensor linear operator (in multidimensions) could
lead to design and analysis of a large class of multidimensional systems.
In the following areas, utilization of tensor linear operator to describe the multi/infinite
dimensional state space enables one to formulate new problems, introduce new concepts, and derive new results/theorems. Some of the areas of interest where such an idea could be
utilized are
(1) Multi/Infinite dimensional computation theory,
(2) Multi/Infinite dimensional information/communication/coding theory,
(3) Multi/Infinite dimensional rate distortion theory,
(4) Multi/Infinite dimensional stochastic systems: Theory of Markov random fields,
(5) Multi/Infinite dimensional time series analysis,
(6) Multi/Infinite dimensional digital signal processing,
(7) Theory of Multi/Infinite dimensional connectionist structures: graphoids,
(8) Theory of databases utilizing multidimensional storage,
(9) Matroid theory,
(10) Multi/Infinite dimensional Game theory.
By the utilization of the idea of capturing a multidimensional state space through a
tensor linear operator, new research problems can be formulated and solved.
A dynamical system is linear if it satisfies the superposition property:

T[\alpha U_1 + \beta U_2] = \alpha T[U_1] + \beta T[U_2]   (4.7)

If the above property is violated by the dynamical system, we call it a non-linear system.
Conventionally, in multidimensional ( multi-order may be more appropriate, but is not
utilized by the author ) system theory, in the case of discrete time dynamical system (an
example is provided in section 2), the evolution is described by means of local state, local
control, local input and local output variables. This is very cumbersome. In the case of
certain multidimensional systems, the state space representation by means of tensors
(described below) enables one to compactly capture a higher order difference equation
through TENSOR notation.
In order to describe the tensor state space representation, the following concepts/ideas
from tensor analysis are explained.
d A_{i_1, i_2,..., i_n}(t)/dt = \lim_{\Delta t \to 0} [A_{i_1, i_2,..., i_n}(t + \Delta t) - A_{i_1,..., i_n}(t)] / \Delta t   (4.10)
calculated in a coordinate system which does not vary in time. The derivative is clearly of
the same order as the tensor itself.
With the above notation from tensor analysis, certain multi/infinite dimensional discrete
time/index dynamical system can be described by means of a state space description of
the following form:
X_{(i_1,...,i_r)}(n+1) = A_{(i_1,...,i_r; j_1,...,j_r)}(n) ⊙ X_{(j_1,...,j_r)}(n) + B_{(i_1,...,i_r; j_1,...,j_p)}(n) ⊙ U_{(j_1,...,j_p)}(n),
Y_{(l_1,...,l_s)}(n) = C_{(l_1,...,l_s; j_1,...,j_r)}(n) ⊙ X_{(j_1,...,j_r)}(n) + D_{(l_1,...,l_s; j_1,...,j_p)}(n) ⊙ U_{(j_1,...,j_p)}(n).   (4.11)
where A(n) is an m dimensional tensor of order 2r (called the state coupling tensor ), X(n)
is the state of the dynamical system at the discrete time index n, whereas X(n+1) is the state
of the system at the discrete time index n+1. Furthermore B(n) is an m dimensional tensor
of order r+p (called the input coupling tensor), Y(n) is an output tensor of dimension m and order s, and U(n) is an m dimensional input tensor (varying with the discrete time index) of order p. C(n) (called the state coupling tensor to the output dynamics) is an m dimensional tensor of order (s + r), and D(n), the input coupling tensor to the output dynamics, is of dimension m and order s+p.
In the above state space description of certain type of multidimensional discrete time
dynamical system, there are r dimension variables which are inherently discrete. The
evolution of the system (changes in the system parameters) occur at discrete time instants.
The notation for the index set in the state equations requires some explanation. Since the state tensor is an m-dimensional tensor of order r, it will have m^r components. When the system evolves, it transits through tensors in the state space.
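One step of the recursion (4.11) can be realized directly through tensor contractions. The following minimal sketch (in Python with numpy; dimension m = 3, state order r = 2 and input order p = 1 are illustrative choices) implements the inner products as einsum contractions.

import numpy as np

m = 3
rng = np.random.default_rng(5)
A = rng.standard_normal((m, m, m, m))   # state coupling tensor, order 2r = 4
B = rng.standard_normal((m, m, m))      # input coupling tensor, order r + p = 3
X = rng.standard_normal((m, m))         # state tensor, order r = 2
U = rng.standard_normal((m,))           # input tensor, order p = 1

# X_{i1,i2}(n+1) = A_{i1,i2;j1,j2} X_{j1,j2}(n) + B_{i1,i2;j1} U_{j1}(n)
X_next = np.einsum('abcd,cd->ab', A, X) + np.einsum('abc,c->ab', B, U)
print(X_next.shape)                     # (3, 3)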
With the summary of tensor functions of scalar argument provided above, the dynamics
of certain type of multi/infinite dimensional continuous time/index systems is described
by the following state space description:
Ẋ(t) = A(t) ⊙ X(t) + B(t) ⊙ U(t),  Y(t) = C(t) ⊙ X(t) + D(t) ⊙ U(t)   (4.12)
where A(t) is an m dimensional tensor of order 2r (called the state coupling tensor to the state dynamics), X(t) is the state of the dynamical system at the continuous time/index t, and Ẋ(t) is the derivative of the state of the system. Furthermore, B(t) is an m dimensional tensor of order r+p (called the input coupling tensor to the state dynamics), and Y(t) is the output tensor of dimension m and order s. Also, U(t) is an input tensor of dimension m and order p, C(t) is an m-dimensional tensor of order (s + r), and D(t), the input coupling tensor to the output, is of dimension m and order s+p. It should be noted that the state space description provided above for
certain continuous/discrete index systems hold true even for certain infinite dimensional
systems. In the case of infinite dimensional systems, in the state space descriptions, the
tensors utilized are of dimension/order infinity ( either or both of them). Now, the above
tensor state space representations are contrasted with the conventional approaches in
the representation of certain multidimensional systems.
It is reasoned that the Tensor State Space Representation is an important leap in multi/
infinite dimensional system theory. Also, another objective is to remove the confusion in
the mind of the reader who read the classical literature in multi/infinite dimensional system
theory with matrix linear operator notation. The primary source of confusion is not so
much in the discrete time/index multidimensional systems, but in the case of continuous
time /index multidimensional systems.
Conventional Multidimensional System State Space Representation versus Modern Tensor State
Space Representation:
In section 2 as well as section 3, the limitations of the way system theorists tried to
represent and analyze the two/multidimensional discrete time/index systems is discussed.
Also, the advantages of tensor state space representation (of certain large class of multi/
infinite dimensional systems) discovered and formalized by the author are described. The
transition from the conventional mode of thinking where the system is represented by
means of multiple independent variables, local state/local control are coupled to the system
dynamics by means of matrices to the modern version where tensor notation is utilized,
requires the realization that the linear space utilized in multidimensions is captured through
the tensor and the system dynamics when done in discrete time requires a discrete variable.
The continuous index case requires more imagination to understand the transition
from conventional approaches to the modern approaches. In the conventional
multidimensional system representation, partial differential equations are utilized to
describe the input-output behavior as well as the state (internal description) dynamics. In
the conventional approaches, multiple independent variables are tracked through separate
indices, leading to partial differential equations. But, the utilization of tensor linear operator
and the tensor function of scalar argument enables one to describe the dynamics of tensor
state variable as a function of one continuous time/index variable. Thus, the discrete as
well continuous multi/infinite dimensional system state space representation utilizing
tensors resembles the familiar one dimensional system state space description.
The above tensor state space description reduces to the one dimensional case when the
order of the tensors is one. Thus, various results developed on one dimensional linear
spaces for one dimensional linear systems are readily translated to certain multi/infinite dimensional systems described through tensor linear spaces (with some care taken
in pathological cases as well as when the problem being solved depends heavily on the
neighborhood set).
Certain multi/infinite dimensional time series are described through the tensor state space representation with additive noise:

X_{(i_1,...,i_r)}(n+1) = A(n) ⊙ X_{(j_1,...,j_r)}(n) + W_{(i_1,...,i_r)}(n)   (4.13)
Y_{(j_1,...,j_r)}(n) = C(n) ⊙ X(n) + V_{(j_1,...,j_r)}(n)   (4.14)

where ⊙ denotes the inner product and the variables such as Y_{j_1,...,j_r}(n) are tensors. The noise models W_{i_1,...,i_r}(n), V_{i_1,...,i_r}(n) are multidimensional versions of white noise.
As in one dimension, the continuous time versions of these models are based on utilizing
a continuous time index t, in the place of discrete time index n and replacing the noise
models in (4.13 and 4.14) by the continuous time white noise or colored noise models. The
formal description is avoided for brevity.
The above models (which effectively reduce to the one dimensional models in the one
dimensional case) enable one to derive various important details related to such stochastic
processes in multi/infinite dimensions. For instance, the autocorrelation tensors, the power
spectrum are derived based on the well known techniques for one dimensional systems. It
should be noted that the multi/infinite dimensional power spectrum estimation problem
(formulated using local state etc.) was well known to be very difficult. Thus, the utilization
of tensor linear operators in certain multidimensional systems enabled one to invoke the
results from one dimensional systems to be extended to certain multidimensional systems.
Various interesting identities arise in the actual analysis. The details are avoided.
In the following, state space representations for arbitrary stochastic linear systems are
described. In one dimension, it is well known that the widely utilized Markov chains
constitute the one dimensional stochastic linear systems. Thus, there has been research
effort to extend the idea, approach to multi/infinite dimensions. Like the deterministic
multi/infinite dimensional linear systems, conventionally various models based on the
local state approach were developed. These are traditionally called the random field
models. With the Tensor State Space Representation (TSSR) (of certain multidimensional
systems) provided in section 3, stochastic multi/infinite dimensional linear systems,
called structured Markov random fields, are based on the tensor linear operator. In the
spirit of the one dimensional approach, the multi/infinite dimensional structured Markov
random fields are homogeneous stochastic linear systems, described by difference
equation of the following form in the discrete time/index
π(n+1) = π(n) ⊙ P(n)   (4.15)

where π(n) is the tensor of probabilities of the states in the state space, and P(n) is the state transition tensor of the discrete time structured Markov random field. When the structured Markov random field is homogeneous, P(n) = P. Both P(n) and P are stochastic tensors.
In continuous time, the multi/infinite dimensional structured Markov random field is described by means of a generator tensor:

dπ(t)/dt = π(t) ⊙ Q(t)   (4.16)

where π(t) is the tensor of probabilities of states in the state space at time t, and Q(t) is the generator tensor of the continuous time structured Markov random field. Q(t) satisfies the properties of a generator tensor.
The equilibrium distribution of states in the discrete as well as continuous time/index structured Markov random field is derived through the utilization of the spectral representation theorem of the linear operator (tensor), utilizing the eigenvalues and eigentensors of the linear operator.
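As a concrete illustration (in Python with numpy; the dimension and the randomly generated stochastic tensor are hypothetical), the equilibrium distribution of a homogeneous structured Markov random field can be approximated by iterating (4.15), after flattening the state transition tensor into a stochastic matrix.

import numpy as np

m = 2
rng = np.random.default_rng(6)
P = rng.random((m, m, m, m))                   # P_{i1,i2;j1,j2}
P /= P.sum(axis=(2, 3), keepdims=True)         # make the tensor stochastic

P_mat = P.reshape(m * m, m * m)                # flatten the index pairs
pi = np.full(m * m, 1.0 / (m * m))             # initial tensor of probabilities
for _ in range(500):
    pi = pi @ P_mat                            # iterate (4.15)

print(pi.reshape(m, m))                        # equilibrium distribution tensor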
When the state transition tensor as well as generator tensor have the G/M/1-type
structure, M/G/1-type structure (Neu), the invariant distribution of the random field has
the tensor geometric form. The derivation of the form of invariant distribution and efficient
recursions for the invariant distribution follow from a generalization of the results in one
dimension.
In the following, state space representations for various types of multidimensional stochastic
dynamical systems that are commonly utilized in electrical engineering are discussed.
In the discrete time, the multi/infinite dimensional dynamical system is described by
the difference equation of the following form:
X(n+1) = A(n) ⊙ X(n) + B(n) ⊙ U(n) + W(n),  Y(n) = C(n) ⊙ X(n) + D(n) ⊙ U(n) + V(n)   (4.17)

The tensors A(n), B(n), C(n), D(n) and the state, input, output tensors are of compatible dimension and order.
dimension and order. The noise terms are multi/infinite dimensional extensions of the
independent, identically distributed noise model in one dimension. It is based on the following
tensor based random variable/random process (like vector random variables, vector
random processes) specification. Generally, they are zero mean tensors (each component
random variable has zero mean) and as a sequence constitute independent tensor random
variables. This model is the simplest model that is commonly utilized in stochastic control
theory (ZoP), (SaW). Utilizing Tensor State Space Representation (TSSR), Unified Theory
of Control, Communication and Computation is formalized in (Rama 4).
E[W(n)] = 0,  E[V(n)] = 0, with {W(n)}, {V(n)} sequences of independent, identically distributed tensor random variables.   (4.18)

These plant noise and measurement noise models are assumed to be independent of the normal random initial state tensor, X(0). The continuous time multi/infinite dimensional
stochastic models utilize continuous time I.I.D. noise (as in one dimension). The state space
model description has an additive I.I.D. noise term to those described in section 3. With
the above state model, theorems in one dimensional stochastic control are extended to
multi/infinite dimensions, since the matrix linear operator is replaced by the tensor linear
operator. In translating the results inner/outer product between vectors/matrices are
replaced by those between the tensors/tensors.
Now, we consider a noise model which describes processes which are more complicated
than the ones considered previously. The colored noise model considered in ARMA time
series model is a special case version of the following noise model. In this model, the noise
processes constitute a structured Markov random field in multi/infinite dimensions. The
plant noise model and measurement noise are uncorrelated/independent. The noise models
satisfy the following equations.
W(n) = L(n),  V(n) = M(n)   (4.19)

where L(n), M(n) are discrete time structured Markov random fields. The fact that a Markov random field is a stochastic linear system enables one to apply stochastic dynamic programming. In the above noise model, the plant and measurement noise are the most general models that are conceivable while remaining tractable. The continuous time version of the state space model has an additive term added to those in section 3.
With the above state space representation, various results developed in one dimensional
stochastic control theory (SaW) are extended to multi/infinite dimensional systems utilizing
the generic principles described in section 3. Thus, various recursive forms for state
estimation, filtering and prediction are translated from one dimensional systems to
multidimensional systems, particularly with the I.I.D. form of noise.
The time series model discussed at the beginning of the section, with tensor state space representation, led the author to derive very detailed linear prediction type results in multi/infinite dimensions when the noise process is white as well as colored. Thus, linear prediction theory, which was so successful in theoretical as well as practical applications, is advanced (in mathematical completeness) to multi/infinite dimensions by the author with the tensor state space representation. The mathematical equations look familiar, with tensor products being utilized in the equations.
It should be noted that using the signal and noise models described in this section,
multidimensional versions of Wiener and Kalman filters can easily be derived. Various
results on estimation, prediction and control are translated from one dimension to multidimension (Rama 4) (when the multidimensional system has Tensor State Space
Representation i.e. TSSR).
In summary various results developed in one dimensional stochastic control theory,
theory of one dimensional random processes are extended to multi/infinite dimensions
through the Tensor State Space Representation.
The time varying models above allow a dependence of the evolution of the system in the state space on the discrete/continuous time index, i.e. the state coupling, input coupling and output coupling tensors vary with time, resulting in a distributed nature of state transitions depending on the location (the discrete/continuous time index). This naturally motivates considering systems, based on practical applications, in which the state transitions in multi/infinite dimensions
depend on the location. This is once again reminiscent of the conventional models of two/
multidimensional signal processing. To formally provide models of distributed dynamical
systems in multi/infinite dimensions, the following notation from tensor algebra/analysis
is introduced.
(4.20)
In the models of distributed systems described in the following, utilizing tensor linear
operators, the state, input, output variables are functions of multiple discrete time/index
or continuous time/index.
The following concept from tensor analysis is also extremely helpful.
Tensor Field:
By a tensor field, we mean a rule assigning a unique value of a tensor to each point of
a certain volume V ( V may be all of space). Let r be the radius vector of a variable point
of V with respect to the origin of some coordinate system. Then, a tensor field is indicated
by writing
Ai 1,..., in = Ai 1,..., in (r )
(4.21)
if the tensor is of order n. A special class of tensor fields are the nonstationary fields, which are functions of both space and time, i.e. of both the vector r and the scalar t:

\varphi = \varphi(r, t)   (4.22)
A = A(r, t)   (4.23)
Tensor fields which are continuous are of utility in physical applications and in modeling
various real life dynamical systems. Non-stationary fields are of utility in modeling
distributed dynamical systems.
It will be evident to an intelligent reader how the above concepts are utilized in the following models of distributed dynamical systems. Particularly, tensor fields enable one to define dynamical systems over regions in the higher dimensional space which are not necessarily bounded by hyperplanes.
The quarter plane causal model, written with tensor linear operators, takes the form

X(h+1, k+1) = A_1 ⊙ X(h, k+1) + A_2 ⊙ X(h+1, k) + B_1 ⊙ U(h, k+1) + B_2 ⊙ U(h+1, k)   (4.24)
Y(h, k) = C ⊙ X(h, k)   (4.25)
where X(h, k), U(h, k), Y(h, k) are the values of the local state tensor, input tensor and output tensor at (h, k) ∈ Z × Z. The multidimensional extension of this model is described based
on the same spirit in the sense that the nearest/farthest neighbourhood set is partitioned
into causal/non-causal parts and utilizing it in writing the multidimensional difference
equation describing the system dynamics.
For instance, the half plane causal model familiar in two dimensional signal processing
is written utilizing the tensor linear operator in the same spirit as the above quarter plane
causal model.
The spirit in which various notions of causality are introduced into the system evolution in the state space is by means of natural/artificially induced decompositions of the state space. The state space is partitioned into neighborhoods and the dynamical system is described by means of a difference equation (multi/infinite dimensional) of the following form:

X{(i_1,...,i_n)(n+1)(N+1)} = A(i_1,...,i_n; j_1,...,j_n)(n) ⊙ X{(j_1,...,j_n)(n)(N)} + B(n) ⊙ U{(n)(N)}   (4.26)
Y{(n)} = C(n) ⊙ X{(n)(N)} + D(n) ⊙ U{(n)(N)}   (4.27)
where N, N+1 are neighbourhood sets in the multi/infinite dimensional state space which
are not necessarily bounded by hyperplanes (captured by a structure like tensors/matrices).
The above state space description of a dynamical system in discrete index variables is in
the most general format conceivable. The advantages of such a model is the ability to
make an arbitrary choice of the neighbourhood. If the neighborhood is chosen to be one
among those in the set utilized for embedding causality structure onto the state space,
various models result.
The continuous time version of the above model utilizes, non-stationary tensor fields.
The typical system evolution equations are given by

Ẋ{(i_1,...,i_n)(t)(N+1)} = A(i_1,...,i_n; j_1,...,j_n)(t) ⊙ X{(j_1,...,j_n)(t)(N)} + B(t) ⊙ U{(t)(N)}   (4.28)
Y{(t)} = C(t) ⊙ X{(t)(N)} + D(t) ⊙ U{(t)(N)}   (4.29)

where Ẋ_{i_1,...,i_n}(t) is the tensor of partial derivatives (like the Jacobian matrix; we can call it a Jacobian tensor) of X_{i_1,...,i_n}(t). Once again, this is the most general model conceivable. If
the neighbourhood set is represented by a tensor, we have a very important special case.
If one has carefully understood the notions of local state, local control and the essential ideas of the theory of ordinary/partial difference/differential equations, it is clear that many results developed in those fields can be adapted to the case where the vector-matrix variables are replaced by tensor-tensor variables. The outcome of this mathematically formal approach is: (i) results developed by the differential/partial differential equations community are adapted to the tensor based equations (once again, the translation is done with relative ease); (ii) distributed dynamical systems are modeled by using the half plane, quarter plane causal type neighbourhood models, in which the matrices/vectors are replaced by tensors. Various other models based on local state, local control and various types of decompositions of the state space that arise in fields such as image processing, tomography etc. are translated to the multidimensional case by replacing the vectors/matrices by tensors.
Various types of problems formulated and solved in conventional two/multidimensional system theory are adapted to the tensor based difference/differential equations by utilizing tensor products and tensor algebra/analysis. Some illustrations of the design and analysis of distributed systems are reported utilizing the tensor linear operators for local state, local control, local input, local output variables and replacing the vector/matrix products by tensor-tensor products. They are avoided here for brevity.
4.7 CONCLUSIONS
Utilization of tensor linear operator associated with dynamic as well as static linear systems
enables one to formulate as well as solve various known as well as new problems utilizing
the powerful tools of tensor algebra (Rama 1). This important representation invoked by the author is hoped to have a useful effect on various scientific/mathematical fields. State space representation by tensor linear operators is discovered and formalized (Rama 1). It is
formally demonstrated how the theory of certain multidimensional systems is developed
utilizing the tensor state space representation and translations of the results from one
dimensional system theory. Approaches to translate one dimensional stochastic control
theory to multi/infinite dimensional systems are briefly described. New state space
representations for distributed dynamical systems are developed which enable translating
the results from conventional state space models of multidimensional systems. Thus, in essence the tensor linear operator based representation of static as well as dynamic systems has an important impact on various fields of scientific endeavour.
REFERENCES
(BiF) M. Bisiacco and E. Fornasini, Optimal Control of Two Dimensional Systems, SIAM Journal on Control and Optimization, Vol. 28, pp. 582-601, May 1990.
(BoT) A. I. Borisenko and I. E. Tarapov, Vector and Tensor Analysis with Applications, Dover Publications Inc., New York, 1968.
(Gop) M. Gopal, Modern Control System Theory, John Wiley and Sons, New York.
(Neu) M. F. Neuts, Matrix-Geometric Solutions in Stochastic Models, Marcel Dekker, Baltimore.
(Rama 1) Garimella Rama Murthy, Tensor State Space Representation: Multidimensional Systems, International Journal of Systemics, Cybernetics and Informatics (IJSCI), January 2007, pages 16-23.
(Rama 2) Garimella Rama Murthy, Multi/Infinite Dimensional Neural Networks, Multi/Infinite Dimensional Logic Theory, International Journal of Neural Systems, Vol. 15, No. 3, June 2005.
(Rama 3) Garimella Rama Murthy, Multidimensional Neural Networks: Multidimensional Coding Theory: Constrained Static Optimization, Proceedings of the 2002 IEEE International Workshop on Information Theory.
(Rama 4) Garimella Rama Murthy, Optimal Control, Codeword, Logic Function Tensors: Multidimensional Neural Networks, IJSCI, October 2006, pages 9-17.
(SaW) Sage and White, Optimum Systems Control, Prentice-Hall Inc., Englewood Cliffs, New Jersey.
(Zop) R. Zoppoli and T. Parisini, Learning Techniques and Neural Networks for the Solution of N-stage Non-linear Non-quadratic Optimal Control Problems, Topics in 2-D System Theory, 1992.
CHAPTER 5
5.1 INTRODUCTION
In the mid 1940s, Norbert Wiener coined the word Cybernetics for the research field dedicated to understanding the control, communication, computation and other such functions of living systems. It is well agreed that these functions of living systems are controlled by various functional sub-assemblies in the brain, synthesized through bio-chemical circuits. Research in this field was pursued by several researchers in diverse disciplines. The multidisciplinary effort advanced the literature on the subject, but no formally precise discoveries were made.
Also, starting in 1950s, the research efforts in electrical engineering discipline led to
the isolated theories of control, communication and computation. The central goal of these
three fields is summarized in the following:
The problem of communication is to convey a message from one point in space and time to another point in space and time as reliably as possible.
The problem of control is to move a system from one point in state space to another point in state space such that a certain objective function is minimized.
The problem of computation is to process a set of input symbols and produce another set of output symbols based on some information processing operation.
These three problems, on the surface, seem to be unrelated to one another.
Also, in the mid 1960s, several researchers became interested in mathematical models of the nervous system. This effort was meant to complement the research in cybernetics. Hopfield/Amari succeeded in providing an abstract model of associative memory. Based on this abstract model, researchers were led to the following question, which has remained unanswered.
Question: Is it true that the functional units responsible for control, communication
and computation are synthesized through a network of homogeneous neurons?
Occasionally, research efforts led to establishing some relationship between the three fields. But in this chapter it is shown (with mathematical clarity and preciseness) that, in the sense of optimization of some objective function (consolidating the earlier efforts of other authors), these three problems are related to one another, leading to one form of unification. From a practical point of view, this unification leads to the design of brains for powerful robots.
With the efforts of the author, Boolean logic theory was generalized to multi/infinite dimensions using an optimization approach (Rama 1). This approach led to the area of multidimensional neural networks (Rama 1). Also, using the generalization of the results in (BrB) in one dimension, multidimensional linear as well as non-linear codes are related to multidimensional neural networks. Thus, using these results, the research fields of computation and communication are related through the common thread of neural networks. In this chapter, the main achievement of the author is to show that the optimal control tensors of certain multidimensional systems are synthesized as the stable states of neural networks. Thus, utilizing the results summarized in this paragraph, the Unified Theory of Control, Communication and Computation is generalized to multidimensional systems.
This chapter is organized in the following manner. In section 2, unification of control,
communication and computation in one dimensional systems is summarized. In Section 3,
the discovery and formalization of Tensor State Space Representation of certain
multidimensional systems is briefly discussed. Using this representation, optimal control
tensors (in a well known criterion of optimality) are shown to constitute the stable states of
a multidimensional Hopfield neural network. In Section 4, utilizing the results in (Rama
1), (Rama 2), Unified Theory of Control, Communication and Computation in
multidimensional systems is formally described. Conclusions are reported in Section 5.
operations performed by AND, OR, NOR, NAND, XOR gates have an appropriate intuitive interpretation in terms of the entries of one dimensional arrays, i.e. vectors.
Research in the area of artificial neural networks led to the question of whether all one dimensional logic gates can be synthesized using a single layer neural network. Chakradhar et al. provided an answer to the problem. They showed that the set of stable states of a Hopfield neural network corresponds to one dimensional logic functions (CAB).
Equivalently, the input and output signal states of a logic gate are related through an
energy function. The outputs correspond to the stable states of neural network (which
constitute the local optima of the energy function). Thus, in a well defined sense, one
dimensional neural networks and logic theory are related.
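This correspondence can be made computationally concrete. The following minimal sketch (with an arbitrary illustrative weight matrix, not taken from (CAB)) enumerates the stable states of a small Hopfield network; each stable state is a local optimum of the quadratic energy function, and can be read as an output of a logic function on the hypercube.

import numpy as np
from itertools import product

# Illustrative symmetric weight matrix of a 3-neuron Hopfield network.
W = np.array([[0., 1., -1.],
              [1., 0., 1.],
              [-1., 1., 0.]])

def is_stable(v):
    # v is stable iff v = sgn(W v) componentwise (a zero field leaves v unchanged).
    h = W @ v
    return all(np.sign(h[i]) == v[i] or h[i] == 0 for i in range(len(v)))

stable = [v for v in product([-1, 1], repeat=3) if is_stable(np.array(v))]
for v in stable:
    # Energy E(v) = v^T W v; the stable states are its local maxima.
    print(v, "energy:", np.array(v) @ W @ np.array(v))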
The state space representation of certain multidimensional linear systems is given by

X(n+1) = A(n) ⊗ X(n) + B(n) ⊗ U(n)
Y(n) = C(n) ⊗ X(n) + D(n) ⊗ U(n)   (5.1)

where ⊗ denotes the inner product operation between compatible tensors (BoT). Also in (5.1),
A(n) is an m dimensional tensor of order 2r (called the state coupling tensor ), X(n) is the
state of the dynamical system at the discrete time index n, whereas X(n+1) is the state of
the system at the discrete time index n+1. Furthermore, B(n) is an m dimensional tensor of order r+p (called the input coupling tensor), Y(n) is an output tensor of dimension m and order s, and U(n) is an m dimensional input tensor of order p (varying with the discrete time index). C(n) (called the state coupling tensor to the output dynamics) is an m-dimensional tensor of order (s + r), and D(n), the input coupling tensor to the output dynamics, is of dimension m and order s + p.
With the above important representation of certain multidimensional systems, we
formulate and solve an important problem in optimal control of certain multidimensional
systems. The solution of the problem shows that the optimal control tensors are synthesized
as the stable states of a multidimensional Hopfield neural network (the connection structure of the m-d Hopfield neural network is a fully symmetric tensor).
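One step of the recursion in (5.1) can be computed with standard tensor contractions. The following is a minimal numerical sketch under assumed illustrative sizes (r = p = s = 2, each index of size m = 3); the contraction over trailing indices stands in for the tensor inner product ⊗.

import numpy as np

# One step of X(n+1) = A (x) X + B (x) U, Y = C (x) X, with (x) a tensor contraction.
m, r, p, s = 3, 2, 2, 2
rng = np.random.default_rng(0)

A = rng.standard_normal((m,) * (2 * r))   # state coupling tensor, order 2r
B = rng.standard_normal((m,) * (r + p))   # input coupling tensor, order r+p
C = rng.standard_normal((m,) * (s + r))   # output coupling tensor, order s+r
X = rng.standard_normal((m,) * r)         # state tensor, order r
U = rng.standard_normal((m,) * p)         # input tensor, order p

# Contracting over the trailing r (resp. p) indices replaces the
# matrix-vector product of one dimensional state space models.
X_next = np.tensordot(A, X, axes=r) + np.tensordot(B, U, axes=p)
Y = np.tensordot(C, X, axes=r)
print(X_next.shape, Y.shape)   # order-r and order-s tensors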
Problem Definition
Find an admissible sequence of (realizable) input signal tensors U(k) for k ∈ {0, 1, 2, ...} (with each component of the tensor being bounded in amplitude by unity (one) or, without loss of generality, by a fixed constant), i.e. |U_{i1, i2, ..., ir}(k)| ≤ 1, in order to minimize the criterion

J = (1/2) Σ_{k=0}^{kf} Y_{in,...,i1}(k) Y_{i1,...,in}(k)   (5.2)
subject to

X(n+1) = A(n) ⊗ X(n) + B(n) ⊗ U(n)   (5.3)

Y(n) = C(n) ⊗ X(n)   (5.4)
where A(n), B(n), C(n), D(n) are tensors arising in the system dynamics of the discrete
time multi/infinite dimensional system. Furthermore, X(n) is the state tensor of the system.
These tensors which arise in the system dynamics are of compatible dimensions. Without
loss of generality, a multi-input, multi-output multidimensional linear system is considered.
Let the impulse response tensor of the system be denoted by h(k, l). This is the discrete
time version of the problem given in (GoC) for CERTAIN discrete time multidimensional
systems. The open problem given in (GoC) is solved in (Rama 5).
Solution
The optimality condition is derived through the application of the maximum principle or
equivalently, the dynamic programming principle. The application of dynamic
programming enables us to derive the necessary as well as sufficient condition through
the principle of optimality in some cases.
Discrete Time, Time Varying Linear Systems:
Let U(k), k = 0, 1, 2, ..., kf − 1 be the optimal control tensor sequence, and let X(k), k = 0, 1, 2, ..., be the state response of the linear system due to the input tensors U(k), uniquely specified by (5.3), (5.4) and the initial condition of the linear dynamical system. Then, under reasonable assumptions, discussed in the application of the discrete maximum principle (SaW), it is shown that there exists a non-trivial costate tensor λ(k) satisfying

λ(k) = ∂H(X_k, U_k, λ_{k+1}, k) / ∂X_k   (5.5)

where the Pontryagin function/Hamiltonian is given by

H(X_k, U_k, λ_{k+1}, k) = (1/2) (C(k) ⊗ X(k))_{in,...,i1} (C(k) ⊗ X(k))_{i1,...,in} + λ(k+1) ⊗ ( A(k) ⊗ X(k) + B(k) ⊗ U(k) )   (5.6)
Computing the partial derivatives in (5.5), the costate tensor satisfies

λ(k) = C^T(k) ⊗ C(k) ⊗ X(k) + A^T(k) ⊗ λ(k+1)   (5.7)

with the terminal condition

λ_{i1,...,ir}(kf) = C_{js,...,j1}(kf) ⊗ Y_{j1,...,js}(kf)   (5.8)

This will provide the terminal condition for solving (5.7). Since the input tensor sequence is constrained, it must necessarily satisfy the minimum condition of the discrete maximum principle. Thus,

U(k) = Sign( B_{sl,...,s1}(k) ⊗ λ_{t1,...,tn}(k+1) )   (5.9)

Solving (5.7) for λ(k+1) and substituting in (5.9), we arrive at the optimal control sequence. When the constraint set is other than a hypercube, various well known techniques from mathematical programming for different constraint sets, such as a convex polytope or convex polyhedron, are invoked in the context of quadratic programming. The cost function is quadratic, and it is optimized over various types of constraint sets such as the one described previously.
With the terminal state specified, the equation (5.7) is recursed backwards to arrive at the optimal control tensor in the case of multi/infinite dimensional systems. Thus, an efficient computational form for solving the two point boundary value problem is derived in the following. It should be noted that we derive the expression for λ(k+1) in the case of certain linear time varying multi/infinite dimensional dynamical systems.
Recursing (5.7) backwards from the terminal condition (5.8), the costate tensor λ(k+1) is expressed as a sum of the output tensors Y(k+1), Y(k+2), ..., Y(kf), weighted by products of the transposed coupling tensors A^T(·) and C^T(·) (equations (5.10)-(5.17)); the pattern of the expansion parallels the one dimensional derivation detailed in Chapter 7. Now, utilizing the definition of the impulse response tensor h(·,·) of the time varying linear system, we have

U(k) = Sign( Σ_{i=0}^{kf−k−1} h^T(k+i+1, k) ⊗ Y(k+i+1) )   (5.18)
where h^T(·,·) is the transposed tensor of the impulse response tensor. The term in the parenthesis is given by

Σ_{i=0}^{kf−k−1} h^T(k+i+1, k) ⊗ Y(k+i+1) = Σ_{i=0}^{kf−k−1} h^T(k+i+1, k) ⊗ Σ_{j=0}^{k+i+1} h(k+i+1, j) ⊗ U(j)   (5.19)

Exchanging the order of summation (with the help of the associated index grid), we have

Σ_{j=0}^{kf} Σ_{i=max{−1, j−k−1}}^{kf−k−1} ( h^T(k+i+1, k) ⊗ h(k+i+1, j) ) ⊗ U(j)   (5.20)

U(k) = Sign( Σ_{j=0}^{kf} Σ_{i=max{−1, j−k−1}}^{kf−k−1} ( h^T(k+i+1, k) ⊗ h(k+i+1, j) ) ⊗ U(j) )   (5.21)

Let us define

R(k, j) = Σ_{i=max{−1, j−k−1}}^{kf−k−1} ( h^T(k+i+1, k) ⊗ h(k+i+1, j) )   (5.22)

Thus,

U(k) = Sign( Σ_{j=0}^{kf} R(k, j) ⊗ U(j) )   (5.23)

For a time invariant linear system,

R(k, j) = Σ_{i=max{−1, j−k−1}}^{kf−k−1} h^T(i+1) ⊗ h(k+i+1−j)   (5.24)

This is the energy density tensor of the time invariant linear system, obtained from the impulse response tensor. Thus the optimal control tensor is the stable state of a multidimensional Hopfield neural network.
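To make the recursion (5.23) concrete, the following minimal sketch specializes it to the one dimensional (scalar) case, under assumed illustrative impulse response values; it builds the energy density matrix R(k, j) of (5.24) (restricted to index values where h is defined) and iterates U ← Sign(R U) toward a fixed point, i.e. a stable state.

import numpy as np

kf = 6
h = np.array([0.9, 0.5, -0.3, 0.1, 0.05, 0.02, 0.01, 0.0])  # h[i] stores h(i+1)

def R_entry(k, j):
    # R(k, j) = sum over i of h(i+1) h(k+i+1-j), i up to kf-k-1,
    # with the lower limit clipped to indices where h is defined.
    total = 0.0
    for i in range(max(0, j - k - 1), kf - k):
        if 0 <= k + i - j < len(h):
            total += h[i] * h[k + i - j]
    return total

R = np.array([[R_entry(k, j) for j in range(kf + 1)] for k in range(kf + 1)])

U = np.ones(kf + 1)                 # initial control on the hypercube
for _ in range(50):                 # iterate U <- Sign(R U) (cap on iterations)
    U_new = np.sign(R @ U)
    if np.array_equal(U_new, U):    # fixed point: stable state of the network
        break
    U = U_new
print(U)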
Continuous Time Dynamical Systems
Now, we formulate and solve the continuous time versions of the problems. The continuous time version of the problem provides us with the structure of the local optimum of a quadratic form over the continuous time multi/infinite dimensional hypercube. This is the problem where the L∞ norm of the control tensors is constrained in amplitude by unity. In the derivation of the optimal control, the following definition is necessary.
Integral of a Tensor Function of a Scalar Argument:
By the integral of a tensor function of a scalar continuous argument, we mean the tensor with the components

( ∫ X(t) dt )_{i1,...,in} = ∫ X_{i1,...,in}(t) dt   (5.25)

The system dynamics are given by

Ẋ(t) = A(t) ⊗ X(t) + B(t) ⊗ U(t),  Y(t) = C(t) ⊗ X(t)   (5.26)
The objective function being minimized in the optimal control problem is given by

J = (1/2) ∫_{t0}^{tf} Y_{ls,...,l1}(t) Y_{l1,...,ls}(t) dt = ∫_{t0}^{tf} Φ(X, U, t) dt   (5.27)
subject to the constraint given in (5.26) and the input tensors are constrained to be on the
continuous time multi/infinite dimensional hypercube.
Solution:
Form the Pontryagin function (or Hamiltonian) of the problem. It is given by

H(X, U, λ, t) = (1/2) (C(t) ⊗ X(t))_{ls,...,l1} (C(t) ⊗ X(t))_{l1,...,ls} + λ_{ir,...,i1}(t) ⊗ ( A(t) ⊗ X(t) + B(t) ⊗ U(t) )   (5.28)
Minimizing the Hamiltonian with respect to the amplitude constrained input yields

U*_{j1,...,jp}(t) = Sign( B_{jp,...,j1; ir,...,i1}(t) ⊗ λ_{i1,...,ir}(t) )

Thus, the optimal control tensors for the problem are obtained from the above equation.
To explicitly determine the optimal control, the adjoint equations and associated boundary
conditions are given by
λ̇_{i1,...,ir}(t) = −∂H(X, U, λ, t)/∂X = −∂Φ(X, U, t)/∂X − [ ∂( A(t) ⊗ X(t) + B(t) ⊗ U(t) )/∂X ]^T ⊗ λ(t)

where

λ_{i1,...,ir}(tf) = 0   (5.29)
The above equations (5.28), (5.29), along with the system dynamics described through (5.26), are solved for determining λ_{i1,...,ir}(t).
λ̇_{i1,...,ir}(t) = −C_{ls,...,l1}(t) C_{l1,...,ls}(t) X_{i1,...,ir}(t) + A_{jr,...,j1; ir,...,i1}(t) λ_{i1,...,ir}(t), with λ_{i1,...,ir}(tf) = 0   (5.30)

Equivalently,

λ̇_{i1,...,ir}(t) = A_{jr,...,j1; ir,...,i1}(t) λ_{i1,...,ir}(t) − C_{ls,...,l1}(t) Y_{l1,...,ls}(t), with λ_{i1,...,ir}(tf) = 0   (5.31)
The above differential equation is solved, like the state equations for the linear dynamical system, to arrive at

λ_{i1,...,ir}(t) = Φ^a(t, tf) ⊗ λ(tf) + ∫_{tf}^{t} Φ^a(t, τ) ⊗ C_{ls,...,l1}(τ) Y_{l1,...,ls}(τ) dτ   (5.32)

where the adjoint state transition tensor Φ^a satisfies

(d/dt) Φ^a(t, τ) = A_{jr,...,j1; ir,...,i1}(t) ⊗ Φ^a(t, τ)   (5.33)

with Φ^a(τ, τ) = I and Φ^a(t, tf) = Φ^T(tf, t), where Φ(tf, t) is the state transition tensor. Thus, since λ(tf) = 0, we have

λ_{i1,...,ir}(t) = ∫_{tf}^{t} Φ^a(t, τ) ⊗ C_{ls,...,l1}(τ) Y_{l1,...,ls}(τ) dτ = ∫_{tf}^{t} Φ^T(τ, t) ⊗ C_{ls,...,l1}(τ) Y_{l1,...,ls}(τ) dτ   (5.34)
Hence, expressing the output through the impulse response tensor of the system,

Y(τ) = ∫_{t0}^{τ} C(τ) ⊗ Φ(τ, s) ⊗ B(s) ⊗ U(s) ds   (5.35)

where

H(τ, s) = C(τ) ⊗ Φ(τ, s) ⊗ B(s)   (5.36)

Thus, we have

B_{jp,...,j1; ir,...,i1}(t) ⊗ λ_{i1,...,ir}(t) = ∫_{tf}^{t} H^T(τ, t) ⊗ [ ∫_{t0}^{τ} H(τ, s) ⊗ U(s) ds ] dτ   (5.37)

Exchanging the order of integration over the region t0 ≤ s ≤ τ ≤ tf (equations (5.38)-(5.40)), the necessary condition on the optimal control becomes

U(t) = Sign( ∫_{t0}^{tf} R(t, s) ⊗ U(s) ds )   (5.41)

where R(t, s) is the energy density tensor of the linear system and is given by

R(t, s) = ∫_{max{t, s}}^{tf} H^T(τ, t) ⊗ H(τ, s) dτ   (5.42)
For linear time invariant multidimensional systems, H(τ, s), the impulse response tensor, depends only on the difference between its arguments/indices. Thus, the necessary condition on the optimal control (for continuous time multidimensional systems) is given by (5.41). It shows that the optimal control tensor is the stable state of a continuous time (Hopfield type) neural network. It should be understood that the concept of a continuous time multidimensional neural network was conceived by the author in (Rama 4). It should be noted that when the objective function is a higher degree form (rather than a quadratic form), similar derivations go through. Details are avoided for brevity.
such a way that every local maximum of the energy function corresponds to a codeword
tensor and every codeword tensor corresponds to a local maximum (i.e. stable state).
Unification: Now utilizing the results in Section 3 (relating optimal control tensors and
multidimensional neural networks), we readily have the unification of control,
communication and computation (through the common thread of neural networks).
Formally, the optimal control tensors, optimal multidimensional logic functions,
multidimensional codeword tensors are synthesized through the stable states of
multidimensional neural (generalized) networks.
In the above unification discussion, we only considered neural (generalized neural) networks in discrete time. In equation (5.41), we discovered and formalized the concept of a continuous time neural associative memory (with the energy function being a quadratic form associated with a certain kernel).
Continuous time generalized neural networks are defined and associated with
optimal control tensors, optimal codeword tensors and optimal switching functions.
Unified theory with generalized neural networks follows in a similar fashion. Details
are avoided here for brevity.
In view of formal clarity, the following theorem is a comprehensive statement of the unification of the control, communication and computation functions (with a quadratic energy function/objective function). The generalization to the case of a higher degree energy function follows in a similar manner.
Theorem 5.1: Consider a linear time varying multidimensional system with the state space
representation provided in (5.3), (5.4). The optimal control tensor (subject to a finite amplitude constraint, i.e. |U_{v1,...,vr}(k)| ≤ 1 for all indices and all k ∈ N), the optimal switching function (in the sense of a transformation between an input tensor and an output tensor), and the optimal linear multidimensional code constitute the local optima of a quadratic form in the components
of state variable, input, output tensors. Thus, in the case of linear dynamical systems, with
quadratic energy/objective function, the optimal control tensors, optimal switching
function, optimal linear code are unified to be the local optima of a quadratic form (with
argument/index/time varying coefficient tensors for time varying systems) over the
multidimensional hypercube. Thus these local optima are synthesized as the stable states
of neural/generalized neural network.
Proof: From (Rama1), the stable states of a multidimensional neural network constitute
the local optimum of a quadratic form with the fully symmetric connection tensor as the
weighting tensor. The convergence theorem for (infinite) multidimensional neural networks
provides a formal result. These local optima are defined to be the multidimensional logic
functions in the sense of a mapping between the input tensors and the stable state tensor.
But, from (5.23), the optimal control tensors which optimize a quadratic objective function
have the stable state structure of an interconnected multidimensional neural network with
block fully symmetric connection structure. Thus, the optimal control and optimal switching
function which optimize a quadratic objective function constitute the stable states of a
multidimensional neural network.
From (Rama 2), it is formally true that the connection structure of a multidimensional
graph-type structure (say graphoid) is associated with a multidimensional linear code through
its cut space. These cutset codes are termed graph-theoretic codes. It is also proved in (Rama
2) that maximum likelihood decoding of a corrupted word (received word) with respect to
the graphoid theoretic code is equivalent to finding the global optimum of the quadratic
energy function associated with a multidimensional neural network. Furthermore, it is shown
that a tensor constitutes the local optimum of a multi-variate polynomial in the components
of input, output tensors (quadratic tensor form) if and only if ( the polynomial is associated
with the parity check tensor) it is a codeword of the multidimensional linear code. Thus,
associated with the generator/parity check tensor of graphoid theoretic code, there exists a
quadratic form whose local optimum constitute the codewords (quadratic form over the
multidimensional hypercube).
Hence the optimal code, optimal control and optimal switching function, which constitute the local optimum of a multi-variate quadratic form (in the components of the state, input and output tensors), are unified to be the same. This constitutes the statement of the
unified theory of control, communication and computation in linear dynamical systems (time
varying as well as time invariant systems) with a quadratic form as the objective function.
Q. E. D.
In future work, it will be discussed how the unification extends to other important functions. Generalization of the results to certain infinite dimensional systems will also be discussed.
5.5 CONCLUSIONS
In this chapter, based on the work of the author and of earlier authors, the unification of the control, communication and computation functions (through the common thread of neural
networks) is formalized. The main contribution of the author for unification in one
dimension is to show that the optimal control vectors (in a well known optimality criterion)
constitute the stable states of a Hopfield network. The next important step was to envision
unification in multidimensions. Based on the concept of multidimensional neural networks
(Rama1, Rama 2), the author was able to formally unify communication and computation
functions. Tensor State Space Representation (TSSR) conceived and formalized by the author
was utilized to prove that the optimal control tensors constitute the stable states of a
multidimensional neural network (in discrete as well as continuous time systems). With
this important result, the author was able to show that optimal codewords, optimal logic
functions and optimal control tensors constitute the stable states of a multidimensional
neural network.
REFERENCES
(BoT) A. I. Borisenko and I. E. Tarapov, Vector and Tensor Analysis with Applications, Dover
Publications Inc., New York, 1968.
(BrB) J. Bruck and M. Blaum, Neural Networks, Error Correcting Codes and Polynomials Over
the Binary Hypercube, IEEE Transactions on Information Theory, Vol. 35, No. 5, September
1989.
(CAB) S.T. Chakradhar, V.D. Agrawal and M.L. Bushnell, Neural Models and Algorithms
for Digital Testing, Kluwer Academic Publishers, 1991.
(GoC) B. Gopinath and T. Cover, Open Problems in Control, Communication and Computation,
Springer, Heidelberg, 1987.
(HoS) M. Honig and K. Stieglitz, On Wyner's Conjecture, Bellcore Technical Memorandum.
(Rama 1) Garimella Rama Murthy, Multi/Infinite Dimensional Neural Networks, Multi/Infinite
Dimensional Logic Theory, International Journal of Neural Systems, Vol.15, No.3, Pages
223-235, June 2005.
(Rama 2) Garimella Rama Murthy, Multidimensional Coding Theory: Multidimensional Neural
Networks, In part presented at the 2002 IEEE International Workshop on Information
Theory.
(Rama 3) G. Rama Murthy, Tensor State Space Representation: Multidimensional Systems,
International Journal of Systemics, Cybernetics and Informatics (IJSCI), January 2007,
pages 16-23.
(Rama 4) G. Rama Murthy, Optimal Control, Codeword, Logic Function Tensors: Multidimensional Neural Networks, International Journal of Systemics, Cybernetics and Informatics (IJSCI), October 2006, pages 9-17.
(Rama 5) G. Rama Murthy, Signal Design for Magnetic and Optical Recording Channels: Spectra
of Bounded Functions, Bellcore Technical Memorandum, TM-NWT-018026, December 1990.
(RKB) G. Rama Murthy, P. Krishna Reddy and L. Behera, Neural Network Based Optimal
Binary Filters, submitted to Elsevier Signal Processing Journal.
(Gop) M. Gopal, Modern Control System Theory, John Wiley and Sons, New York.
(SaW) Sage and White, Optimum Systems Control, Prentice-Hall Inc., Englewood Cliffs,
New Jersey 07632.
CHAPTER 6
Complex Valued Neural Associative Memory on the Complex Hypercube
6.1 INTRODUCTION
The Hopfield model of the neural network is designed based on the McCulloch-Pitts neuron. In this network the computation of the algebraic threshold function is carried out at each node. The edge between two nodes is associated with a weight. This network can hence be represented by a weight matrix, which is a symmetric matrix where W_{i,j} represents the weight associated with the edge connecting the neurons i and j. Since it is a symmetric matrix (i.e., the network is represented by an undirected graph), we have W_{i,j} = W_{j,i}. The threshold function is calculated at each neuron using the function

V_i(t+1) = sgn( H_i(t) ) = 1, if H_i(t) ≥ 0; −1 otherwise

where

H_i(t) = Σ_{j=1}^{n} W_{j,i} V_j(t) − T_i
Here, Vi (t + 1) represents the value of the function i.e. state value at node i at time (t +1)
(which is the next time instant).
Energy function: The model also associates an energy function, which is the quadratic form V^T(t) W V(t) (neglecting the threshold value without loss of generality), where V(t)
stands for the column matrix that represents the vector corresponding to the state of all
neurons at time instant t. This vector will lie on the hypercube whose order is that of the
synaptic weight matrix.
Modes of operation: The Hopfield model can operate in one of two modes, serial or fully parallel, or a combination of these. Serial mode is the one in which the next state computation, i.e., the evaluation of the neural network, takes place at one node (node after node) at every time instant. In the fully parallel mode the evaluation takes place for every node at each time instant. A combination implies that the evaluation occurs at a group of nodes at every time instant.
A stable state is defined as a state such that after reaching it, the network output does
not change i.e., V(t) = sgn(WV(t)).
The model results in the following convergence theorems:
Theorem 1: If the neural network is operating in the serial mode and the elements on the
diagonal of connection matrix are non-negative, the network will converge to a stable
state i.e., there are no cycles in the state space.
Theorem 2 : If the network is operating in the fully parallel mode, the network will either
converge to a stable state or to a cycle of length two i.e., it oscillates between two states in
the state space.
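A small self-contained run illustrating Theorem 1 follows; the symmetric weight matrix (with a non-negative diagonal) is an arbitrary illustrative choice. Under serial updates the energy V^T W V never decreases, so the network settles into a stable state.

import numpy as np

rng = np.random.default_rng(1)
n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2.0                        # symmetric connection matrix
np.fill_diagonal(W, np.abs(np.diag(W)))    # non-negative diagonal (Theorem 1)

V = rng.choice([-1.0, 1.0], size=n)        # random initial state on the hypercube

changed = True
while changed:                 # sweep the nodes serially until no node flips
    changed = False
    for k in range(n):
        new_vk = 1.0 if W[k] @ V >= 0 else -1.0
        if new_vk != V[k]:
            V[k] = new_vk      # each accepted flip cannot decrease V^T W V
            changed = True
print("stable state:", V, "energy:", V @ W @ V)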
Goals: The goals of this chapter are to consider the possibilities of implementing a complex
valued associative memory and observe the behavior of the model in the serial and the
fully parallel modes.
Remark
Recently the concept of a complex valued neural network has been explored, since the work of [ZURADA 1996], and has been successfully applied to the fields of image processing and pattern recognition. A conglomeration of the papers on the subject is collected in [HIROSE 2003]. Following this literature, our work is based on the implementation of a newer method to realize a complex valued neural network.
The chapter is organized into three parts. The first part of section 2 discusses the features of the model the authors are proposing; implications for the convergence of the network are also briefly pointed out. The second part of the same section provides the proof technique used for arguing the convergence properties of the discussed form of the complex valued associative memory. The third part presents the proof of convergence and considers how it parallels the real valued Hopfield associative memory.
The synaptic weights are complex valued and the weight matrix is Hermitian, unlike the real valued case, where it is symmetric. The next state V(t+1) can be computed as

V(t+1) = sgn( real part(W V(t)) ) + j sgn( imaginary part(W V(t)) )   (6.1)

Thus the values of the entries in the column vector V(t+1) at any time are confined to the set {1+j, 1−j, −1+j, −1−j}, unlike the real case, wherein the values are confined to the set {1, −1}. Thus the total number of values V(t+1) can take, i.e., the number of points of the complex hypercube, equals 4^n, where n is the order of the neural network.
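As a sketch of how (6.1) operates, the snippet below (with illustrative random Hermitian weights) applies the complex signum componentwise; it also exhibits the real symmetric / imaginary anti-symmetric split used later in the proof.

import numpy as np

rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W = (M + M.conj().T) / 2.0        # Hermitian synaptic weight matrix

# The proof technique splits W into a real symmetric part and a
# purely imaginary anti-symmetric part:
WR, WI = W.real, W.imag           # WR == WR.T and WI == -WI.T

def csign(z):
    # Complex signum: signum applied separately to real and imaginary parts,
    # so every component lies in {1+1j, 1-1j, -1+1j, -1-1j}.
    return np.sign(z.real) + 1j * np.sign(z.imag)

V = csign(rng.standard_normal(n) + 1j * rng.standard_normal(n))
for _ in range(20):               # fully parallel updates: converges to a
    V = csign(W @ V)              # stable state or, at most, a cycle of length two
print(V)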
The energy function would thus be E(t) = (V^T(t))* W V(t) (neglecting the threshold value without loss of generality). The authors would like to prove that an important property of this model is that it converges to a stable state when operating in the serial mode and at most to a cycle of length 2 when operating in the fully parallel mode.
The proof technique adopted by the authors is the method of isolating the real and imaginary parts of the Hermitian synaptic weight matrix and evaluating them separately. As one can see, when the Hermitian matrix is separated into its real and imaginary parts, the matrix corresponding to the real part is a real symmetric one and that corresponding to the imaginary part is a real anti-symmetric one.
Remark
It is an interesting observation that the energy function, when evaluated for the real part with the complex valued vector, behaves exactly as if it were a matrix being evaluated for the real valued neural network proposed in [HOPFIELD]. That is, we have a complex valued associative memory with a real connection matrix. The exact details of the proof follow.
Consider the contribution of node k to the energy function:

E_k(t) = ( V_1*(t), ..., V_k*(t), ..., V_n*(t) ) [ W_{11} ... W_{1n} ; ... ; W_{n1} ... W_{nn} ] ( V_1(t), ..., V_k(t), ..., V_n(t) )^T

If we break the expression for E_k(t) into two parts, E_kR(t) and E_kI(t), the real and imaginary parts (of the energy function), they come out as

E_kR(t) = ( V_1*(t), ..., V_k*(t), ..., V_n*(t) ) W_R ( V_1(t), ..., V_k(t), ..., V_n(t) )^T

E_kI(t) = ( V_1*(t), ..., V_k*(t), ..., V_n*(t) ) [ 0 ... jW_{1nI} ; ... ; −jW_{1nI} ... 0 ] ( V_1(t), ..., V_k(t), ..., V_n(t) )^T   (6.2)

where W_R is the real symmetric part of the weight matrix and the second factor, with zero diagonal entries, is its purely imaginary anti-symmetric part.
Evaluating the real part of (6.1) for the energy function, with the terms involving node k isolated, we have

E_kR(t) = V_k*(t) [ Σ_{j=1}^{k−1} W_{kjR} V_j(t) + W_{kkR} V_k(t) + Σ_{j=k+1}^{n} W_{kjR} V_j(t) ] + Σ_{i=1, i≠k}^{n} V_i*(t) [ Σ_{j=1, j≠k}^{n} W_{ijR} V_j(t) + W_{ikR} V_k(t) ]

Similarly,

E_kR(t+1) = ( V_1*(t), ..., V_k*(t+1), ..., V_n*(t) ) W_R ( V_1(t), ..., V_k(t+1), ..., V_n(t) )^T   (6.3)
The expression for E_kR(t+1) takes this form because the network is operating in the serial mode, and the updating of the function value takes place at only one node, i.e., the node at which we are evaluating (V_k). In the parallel mode, however, all the function values in the vector are updated.
Forming the difference ΔE_kR(t) = E_kR(t+1) − E_kR(t), all terms not involving node k cancel. But since the real part of the weight matrix is symmetric, W_{kjR} = W_{jkR}; thus, collecting the remaining terms,

ΔE_kR(t) = 2 ( Σ_{j=1, j≠k}^{n} W_{kjR} V_jR(t) ) ΔV_kR(t) + W_{kkR} ΔV_kR²(t)   (6.4)

If H_kA(t) and H_kB(t) are the expressions for H_k(t) in the real mode with some arbitrary V_jR(t) and V_jI(t) respectively, then, from the Hopfield convergence theorem, it follows from the expression ΔE = 2 H_k ΔV_k(t) + W_{kk} ΔV_k²(t) that ΔE is a non-negative quantity that eventually goes to zero, which means that E_k(t) is non-decreasing and converges to a local maximum. Hence ΔE_kR(t) in the complex case also reaches zero, and hence E_k(t) also converges to a local maximum.
Thus it remains to evaluate the imaginary part contribution to the energy function, E_kI(t). Expanding it in the same manner, with the anti-symmetric imaginary part (whose diagonal entries are zero) in place of W_R, and isolating the terms involving node k, we have

E_kI(t) = V_k*(t) [ Σ_{j=1}^{k−1} jW_{kjI} V_j(t) + Σ_{j=k+1}^{n} jW_{kjI} V_j(t) ] + Σ_{i=1, i≠k}^{n} V_i*(t) [ Σ_{j=1, j≠i}^{n} jW_{ijI} V_j(t) + jW_{ikI} V_k(t) ]

and similarly E_kI(t+1), with V_k(t) replaced by V_k(t+1); the difference ΔE_kI(t) = E_kI(t+1) − E_kI(t) is formed as before   (6.5)

Thus,

ΔE_k(t) = [ 2 H_kA ΔV_kR(t) + W_{kkR} ΔV_kR²(t) ] + [ 2 H_kB ΔV_kI(t) + W_{kkR} ΔV_kI²(t) ] + 2 Σ_{j=1, j≠k}^{n} ( W_{kjI} V_jR(t) ΔV_kI(t) − W_{kjI} V_jI(t) ΔV_kR(t) )   (6.6)

This is the expression for ΔE_k(t) in the complex valued neural network.
As one can observe from the above expression, the first term is zero when the neural net converges to a stable state (from [BRUCK 1987]). The second term is real but may take negative values, depending on the imaginary parts of the corresponding entries of the weight matrix. But when the first term becomes zero, i.e., when the net converges to a stable state, ΔV_kI will be zero. Hence the second term will be zero at the stable state. This means that the energy function of a complex valued neural net converges to a positive value. This also proves that the complex valued associative memory constrained with a real connection matrix converges to a stable state, with a behavior that matches the real valued neural net described by Hopfield.
Graphs of the convergence of the energy function to a stable value, and of the entity ΔE to zero, are depicted below. The relative performance and analogous relationship of three cases are depicted: (i) complex synaptic weight matrix and complex vector, (ii) convergence for a real valued neural network, and (iii) an intermediary of these two, i.e., a test with complex vectors on a real valued synaptic weight matrix.
[Figure: Convergence of the energy function versus time t, for complex weights with complex vectors, real weights with complex vectors, and real weights with real vectors; and the corresponding energy difference versus time t.]
Thus, as we see from the figures above, in the first graph the energy function value is the greatest for the case where the weights are complex and the vectors are also complex. That is because the complex part of the weight matrix is non-zero and contributes to the energy value. Next comes the case where the weights are real but the vectors are complex. In this case, though there is no complex part of the weights contributing, the complex part of the vectors contributes to the increase in the energy value over the original Hopfield case, which in turn has the least local optimum of convergence.
It remains to prove that this analogy between the complex and the real cases of the Hopfield associative memory in the serial mode also extends to the fully parallel mode. It has been observed, however, that a separate proof is not required to illustrate the behavior of the model in the fully parallel mode. The same proof can be extended in the manner shown by the work of [BRUCK 1987].
Before going into the extension needed, let the general form of the expression for the fully parallel mode be observed first.
In the fully parallel mode, both the real part W_R and the imaginary part W_I of the weight matrix act on the fully updated state vector ( V_1(t+1), ..., V_k(t+1), ..., V_n(t+1) )^T in the corresponding energy expressions.
These expressions change because the computation of the function is done at every
node of the neural net at a certain time instant.
Instead of evaluating the above expressions, an easier method is to define a special
neural net N such that,
N' = [W', T']

where

W' = [ 0    W
       W^T  0 ]

and

T' = [ T
       T ]
As can be seen from the above matrix W', it defines a newer neural net N' which corresponds to a bipartite graph with 2n nodes. Let the subsets of nodes be P1 and P2, which are independent sets of nodes.
It has been proved beyond doubt in previous work that:
1. for any serial mode of operation in N there exists an equivalent serial mode of operation in N', provided W has a non-negative diagonal;
2. there exists a serial mode of operation in N' which is equivalent to a fully parallel mode of operation in N.
This can be seen because, if N is operating in a fully parallel mode, since P1 and P2 correspond to independent sets of nodes, it is equivalent to evaluating one node at a time in N', which is nothing but the serial mode.
Now, since N' is operating in a serial mode, when N' reaches a stable state one of the following things happens:
1. The current states of the two partitions P1 and P2 corresponding to N are the same, which means that N converges to a stable state.
2. The current states of P1 and P2 are distinct, which implies N will oscillate between the two states at which P1 and P2 currently are, thus converging to a cycle of length two.
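A minimal sketch of this doubling construction follows; the weight matrix W and thresholds T below are placeholders.

import numpy as np

def doubled_network(W, T):
    # Build the 2n-node bipartite network N' = [W', T'] whose serial mode
    # mimics the fully parallel mode of N; the zero blocks make the graph
    # bipartite with partitions P1 = {0..n-1} and P2 = {n..2n-1}.
    n = W.shape[0]
    W2 = np.block([[np.zeros((n, n)), W],
                   [W.T, np.zeros((n, n))]])
    T2 = np.concatenate([T, T])
    return W2, T2

W = np.array([[0., 1.], [1., 0.]])
T = np.zeros(2)
W2, T2 = doubled_network(W, T)
print(W2)
# A stable state of N' with identical P1 and P2 halves corresponds to a stable
# state of N; distinct halves correspond to a cycle of length two in N.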
The graphs which depict the operation in the parallel mode are shown below.
[Figure: Parallel mode convergence of a complex valued neural network; the energy function E versus time t, and the value of ΔE oscillating about zero versus time t.]

The above graphs depict the oscillation of the value of the energy function for a peculiar case, and that of the value of ΔE as it oscillates about zero.
6.4 CONCLUSIONS
From the above discussion, one can observe that the evaluation of the signum function at each node, and thereby the determination of the vector that originates for the next instant, makes the complex valued neural network similar in behavior to the real valued one. However, the designs of neural networks proposed so far ([ZURADA 1996], [ZURADA 2003] and [HIROSE 2003]) have not considered the implementation of the above mentioned method of performing the complex signum function. Since the application of the function proves that the network converges just as the real valued one does, it can be conveniently applied to applications such as image processing and pattern recognition.
REFERENCES
[1]. [HOPFIELD 1982] J. J. Hopfield, Neural Networks and Physical Systems with Emergent Collective Computational Abilities, Proc. Nat. Acad. Sci. USA, Vol. 79, pp. 2554-2558, 1982.
[2]. [BRUCK 1987] Jehoshua Bruck and Joseph W. Goodman, A Generalized Convergence Theorem for Neural Networks, IEEE First Conference on Neural Networks, San Diego, CA, June 1987.
[3]. [ZURADA 1996] Stanislaw Jankowski, Andrzej Lozowski and Jacek M. Zurada, Complex-valued Multistate Neural Associative Memory, IEEE Transactions on Neural Networks, Vol. 7, No. 6, November 1996.
[4]. [ZURADA 2003] Mehmet Kerem Muezzinoglu, Cuneyt Guzelis and Jacek M. Zurada, A New Design Method for the Complex-valued Multistate Hopfield Associative Memory, IEEE Transactions on Neural Networks, Vol. 14, No. 4, July 2003.
[5]. [HIROSE 2003] Akira Hirose, Complex Valued Neural Networks: Theories and Applications, World Scientific Publishing Co., November 2003.
[6]. G. Rama Murthy and D. Praveen, Complex Valued Neural Associative Memory on the Complex Hypercube, Proceedings of the 2004 IEEE Conference on Cybernetics and Intelligent Systems (CIS 2004).
CHAPTER 7
7.1 INTRODUCTION
In the case of many artificial/natural linear/non-linear systems/channels, the output signal is corrupted by noise. A natural, practically and theoretically important problem is filtering. Thus it is also desirable to design an optimal filter. In many traditional approaches, the criterion of optimality is the minimization of the mean square error between the actual output and the estimated output. Based on this criterion, Wiener formulated the problem and discovered the optimal filter. This filter is derived based on a transfer function description. Kalman formulated the problem based on a state space description of the linear system, and in his case the filter is recursive.
Independent of the research in system theory, Shannon formalized information theory by formulating the notion of block codes for noisy communication channels. The relationship between the system theory approach and the coding theory approach was seriously investigated by few researchers. In [2] the problem of optimal signal design for linear systems/channels was formulated and solved. It is shown in this chapter that investigating the relationship between the system theory approach and the coding theory approach leads to the formulation and solution of a new optimal filtering problem.
In section 2 an optimal signal design problem is formulated and solved, which then forms the basis of the optimal filtering problem and its solution discussed in section 3. Finally, section 4 concludes the chapter.
7.2.1 Problem Definition
Find an admissible sequence of input vectors u_k, each component of which is bounded in amplitude by one, i.e. |u_k^i| ≤ 1, in order to maximize the total output energy

J = (1/2) Σ_{k=0}^{kf−1} X_k^T C^T C X_k

subject to

X_{k+1} = A X_k + B u_k,  X(0) = 0
Y_k = C X_k
7.2.2 Solution
The optimal control vectors which maximize the total output energy of a linear discrete time filter over a finite horizon [0, kf] are given by

u_k* = Sign( Σ_{j=0}^{kf} R_kj u_j* ),  where  R_kj = Σ_{i=max{−1, j−k−1}}^{kf−k−1} h^T(i+1) h(k+i+1−j)

The expression provided above is a necessary condition on the optimum input signal/control. The stable states of a neural network constitute the local optimum control vectors. The global optimum stable state provides the global optimum control vector [2].
Proof: The Discrete Maximum Principle, well known in the literature, is utilized to provide the solution. Consider a one-dimensional linear dynamical system. Its state space description is given by

X(k+1) = A(k) X(k) + B(k) u(k)   (7.1)
Y(k) = C(k) X(k)   (7.2)
It should be noted that we are purposefully considering a linear time varying system.
In the above state space description of the linear system, A(k) is an n × n matrix (a second order tensor) and X(k) is the state vector (a first order tensor), i.e. an n × 1 vector. C(k) is an m × n matrix and B(k) is an n × p matrix.
In the case of certain multidimensional linear systems as well as infinite dimensional
linear systems, the state transition tensor [7], the state tensor, the input tensor [5], the
output tensor are of compatible dimension as well as order. The discrete time dynamical
system evolution (linear or non-linear) is described through tensors [7]. The inner product
between the linear operators is carried out with the standard method. To restrict ourselves to the problem considered in the present chapter, we return to the one-dimensional system, keeping in mind that the authors have already made the extension to multi/infinite dimensional systems [7].
The input sequence satisfies constraints of the following form: |u_k^i| ≤ 1, where u_k^i is the i-th component of u_k. The total output energy over the finite horizon is

J = (1/2) Σ_{k=0}^{kf−1} X_k^T C^T(k) C(k) X_k + (1/2) X_{kf}^T C^T(kf) C(kf) X_{kf}   (7.3)
Under the discrete maximum principle, there exists a costate sequence λ_k satisfying

λ_k = ∂H(x_k, u_k, λ_{k+1}, k) / ∂x_k   (7.4)

where the Hamiltonian is

H(x_k, u_k, λ_{k+1}, k) = (1/2) X_k^T C^T(k) C(k) X_k + λ_{k+1}^T [ A(k) X_k + B(k) u_k ]   (7.5)

so that

λ_k = C^T(k) C(k) X_k + A^T(k) λ_{k+1}   (7.6)

λ(kf) = C^T(kf) C(kf) X(kf)   (7.7)
This will provide the terminal condition for solving (7.6). Since the input is constrained
it must necessarily satisfy
H(x_k, u_k, λ_{k+1}, k) = min_{v ∈ V} H(x_k, v, λ_{k+1}, k)   (7.8)

Thus,

u_k* = Sign{ B^T(k) λ_{k+1} }   (7.9)
In most textbooks [6] and references for optimal control /signal design (7.9) is derived
as a necessary condition. We make the following detailed derivation, to expose the structure
of optimal control and its relationship to stable states of a neural network.
Solving (7.6) for λ_{k+1} and substituting in (7.9), we arrive at the optimal control sequence. It is immediate to see that if V is a convex polytope, then we have a mathematical programming problem. Our chief contribution is the following derivation.
With the terminal state specified, the equation (7.6) is recursed backwards to arrive at the optimal control vector (the optimal control tensor in the multidimensional case). Thus, an efficient computational form for solving the two-point boundary value problem [9] is derived in the following. It should be noted that we derive the expression for λ_{k+1} in the case of a linear time varying dynamical system.
λ_{kf−1} = C^T(kf−1) C(kf−1) X(kf−1) + A^T(kf−1) C^T(kf) Y(kf)

λ_{kf−2} = C^T(kf−2) C(kf−2) X(kf−2) + A^T(kf−2) C^T(kf−1) Y(kf−1) + A^T(kf−2) A^T(kf−1) C^T(kf) Y(kf)   (7.10)

λ_{kf−3} = C^T(kf−3) Y(kf−3) + A^T(kf−3) C^T(kf−2) Y(kf−2) + A^T(kf−3) A^T(kf−2) C^T(kf−1) Y(kf−1) + A^T(kf−3) A^T(kf−2) A^T(kf−1) C^T(kf) Y(kf)
Thus, continuing the pattern downwards, we have for linear time invariant filters

λ_{kf−l} = C^T Y(kf−l) + A^T C^T Y(kf−l+1) + (A^T)^2 C^T Y(kf−l+2) + (A^T)^3 C^T Y(kf−l+3) + ... + (A^T)^l C^T Y(kf)   (7.11)

λ_{k+1} = C^T Y(k+1) + A^T C^T Y(k+2) + (A^T)^2 C^T Y(k+3) + ... + (A^T)^{kf−k−1} C^T Y(kf)   (7.12)

u_k* = Sign( B^T Σ_{i=0}^{kf−k−1} (A^T)^i C^T Y(k+i+1) )   (7.13)
Utilizing the impulse response h(i+1) = C A^i B of the time invariant system, (7.13) is rewritten as

u_k* = Sign( Σ_{i=0}^{kf−k−1} h^T(i+1) Y(k+i+1) )   (7.14)

Σ_{i=0}^{kf−k−1} h^T(i+1) Y(k+i+1) = Σ_{i=0}^{kf−k−1} h^T(i+1) Σ_{j=0}^{k+i+1} h(k+i+1−j) u(j)   (7.15)
Exchanging the order of summation, with the help of the grid in Figure 7.1, we have

u_k* = Sign( Σ_{j=0}^{kf} Σ_{i=max{−1, j−k−1}}^{kf−k−1} [ h^T(i+1) h(k+i+1−j) ] u(j) )   (7.16)

[Figure 7.1: The index grid (i versus j) over which the order of summation is exchanged.]

Let us define

R_kj = Σ_{i=max{−1, j−k−1}}^{kf−k−1} h^T(i+1) h(k+i+1−j)   (7.17)

Thus,

u_k* = Sign( Σ_{j=0}^{kf} R_kj u(j) )   (7.18)
In the case of linear time varying systems, the above derivation still applies, as is easily seen. We thus have an expression of the following form for the optimal control vector over a finite horizon of a time varying linear system:

u_k* = Sign{ Σ_{j=0}^{kf} S_kj u(j) }

where S_kj is the energy density matrix of the time varying linear system. This can be stated as a theorem. The authors derived similar results for multidimensional and infinite dimensional systems [2].
7.3.1 Problem Definition
Consider the total output energy

J = (1/2) Σ_{k=0}^{kf−1} X_k^T C^T C X_k

subject to

X_{k+1} = A X_k + B u_k,  X(0) = 0
Y_k = C X_k

where A is an n × n matrix, B is an n × 1 matrix, and C is a 1 × n matrix. Let the impulse response at time k be denoted by h(k). That is, find bounded support, bounded magnitude impulse response values such that the total output energy over a finite horizon is maximized. The input is unconstrained.
7.3.2 Solution
Since convolution is a commutative operator, as far as the output of a linear filter is concerned, the roles of the input and the impulse response can be exchanged:

y(n) = u(n) * h(n) = h(n) * u(n)

The input and the impulse response have a dual role in determining the output. Maximizing the output subject to a bounded extent, bounded support input is equivalent to maximizing the output subject to a bounded extent, bounded support impulse response. The optimal input vector is given as the stable state of a neural network [3]. Thus optimal input signals constitute a linear code [1], [4].
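This commutativity is easy to verify numerically, as in the following quick check with illustrative sequences.

import numpy as np

# y(n) = u(n)*h(n) = h(n)*u(n): the roles of input and impulse response
# are interchangeable in determining the output.
u = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
h = np.array([0.8, 0.4, -0.2])
print(np.allclose(np.convolve(u, h), np.convolve(h, u)))   # True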
The optimal set of impulse responses constitutes the stable states of a neural network whose connection matrix is the input energy density matrix. The components of the optimal impulse response vector assume binary values. They constitute a linear code. The linear filtering operation reduces to a binary filtering operation. The optimal binary filters are related to optimal codes matched to the input. The derivation follows by replacing the input by an impulse response finite in extent and finite in support. The derivation involves duplicating the effort required in the derivation of the optimal input. Thus, linear filtering involves weighting the input values in the window by binary values. It is shown in [3] that the logic functions in multidimensions also constitute the stable states of an m-d neural network.
The overall procedure is summarized by the following steps (a computational sketch is given after the list):
1. Start from the known impulse response of the linear channel over the finite horizon.
2. Determine the connection matrix of the neural network (the energy density matrix).
3. Determine the local/global optimum input signal, i.e., the stable state of the neural network, by running it in serial mode.
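A minimal computational sketch of these three steps follows, for an assumed single-input single-output system (the matrices A, B, C and the horizon are illustrative); it uses the standard impulse response h(k) = C A^{k−1} B and the energy density matrix of (7.17), with the lower summation limit clipped to indices where h is defined.

import numpy as np

kf = 5
A = np.array([[0.5, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, -1.0]])

# Step 1: impulse response over the finite horizon; h[i] stores h(i+1) = C A^i B.
h = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(kf + 1)]

# Step 2: connection matrix (energy density matrix) R_kj, following (7.17).
R = np.zeros((kf + 1, kf + 1))
for k in range(kf + 1):
    for j in range(kf + 1):
        for i in range(max(0, j - k - 1), kf - k):
            if 0 <= k + i - j < len(h):
                R[k, j] += h[i] * h[k + i - j]

# Step 3: run the network in serial mode to a stable state (locally optimal input).
u = np.ones(kf + 1)
for _ in range(100):                 # serial sweeps, capped for safety
    changed = False
    for k in range(kf + 1):
        new_uk = 1.0 if R[k] @ u >= 0 else -1.0
        if new_uk != u[k]:
            u[k], changed = new_uk, True
    if not changed:
        break
print("locally optimal input:", u)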
7.4 CONCLUSIONS
In the above discussion, a realistic/practical optimal filtering problem is formulated. It is shown that this filtering problem is related to an optimal control problem [6]. By solving the optimal control/signal design problem, it is shown that the global optimum impulse response constitutes the stable state of a Hopfield neural network.
REFERENCES
[1] Jehoshua Bruck and Mario Blaum, Neural Networks, Error Correcting Codes, and Polynomials over the Binary n-Cube, IEEE Transactions on Information Theory, Vol. 35, No. 5, September 1989.
[2] G. Rama Murthy, Optimal Control, Codeword, Logic Function Tensors: Multidimensional
Neural Networks, International Journal of Systemics, Cybernetics and Informatics (IJSCI),
October 2006, pages 9-17.
[3] G. Rama Murthy, Multi/Infinite Dimensional Neural Networks, Multi/Infinite Dimensional Logic Theory, International Journal of Neural Systems, Vol. 15, No. 3, pp. 223-235, 2005.
[4] G. Rama Murthy, Multi/Infinite Dimensional Coding Theory: Multi/Infinite Dimensional
Neural Networks: Constrained Static Optimization, Proceedings of IEEE Information Theory
Workshop, October 2002.
[5] A.I. Borisenko and I. E. Tarapov, Vector and Tensor Analysis with Applications, Dover
Publications Inc., New York, 1968.
[6] M. Gopal, Modern Control System Theory, John Wiley and Sons, New York.
[7] G. Rama Murthy, Tensor State Space Representation: Multidimensional Systems, International Journal of Systemics, Cybernetics and Informatics (IJSCI), January 2007, pp. 16-23.
[8] B. Gopinath and T. Cover, Open Problems in Control, Communication and Computation, Springer, Heidelberg, 1987.
[9] A.E. Bryson and Y.C. Ho, Applied Optimal Control: Optimization, Estimation and Control,
Taylor and Francis Inc. 1995.
CHAPTER 8
Linear Filter Model of a Synapse: Associated Novel Real/Complex Valued Neural Networks
8.1 INTRODUCTION
Artificial neural networks are innovated to provide models of biological neural networks.
The currently available models of neurons are utilized to build single layer ( e.g. single
layer perceptron ) as well as multi-layer neural networks (e.g. multi-layer perceptron).
These neural networks were utilized successfully in several applications. Also, various paradigms of neural networks, such as radial basis functions and self-organizing maps, were innovated and utilized in applications.
In the case of conventional real valued neural networks, the inputs and outputs belong to a Euclidean space (R^n or R^m). In these neural networks, a synapse is represented/modeled by a single synaptic weight which is lumped at one point. These synaptic weights are updated in the training phase using one of the learning laws (for example, the perceptron learning law, the gradient rule, etc.). In the case of supervised training, these learning laws enable one to classify the input patterns into finitely many classes (based on the training samples).
Motivation for a Better Model of Neurons:
By reflecting on the modeling of biological neurons, we are naturally led to making the realistic assumption that synapses constitute distributed elements rather than lumped elements. Thus, a realistic model of a synapse is a linear system (characterized by its impulse response), while at the same time maintaining tractability.
In conventional neuronal models, the input at each synapse is a constant and is acted on by the scalar synaptic weight. But in biological neurons, it is most natural to consider that the input signal samples are not scalar values, but are functions defined over a finite support. The synapses (characterized by impulse responses) act on these input signals, which are defined on the domain (restricted to a support) [0, T]. Thus the class of input signals belongs to a function space (defined on [0, T]). For the sake of notational convenience, let the synaptic weight functions also be defined on [0, T].
In summary, a continuous-time, real valued neuron has input signals (which are real valued
functions of time) defined over a finite support. The input signals are fed to synapses acting as
linear systems/filters and sum of responses is operated on by an activation function. Using this
model of a neuron, various feed-forward/recurrent networks of neurons are designed and studied.
This chapter is organized as follows. In section 2, continuous time perceptron model is
discussed. Also in this section, the continuous time perceptron learning law is discussed.
In section 3, abstract mathematical structure of neuronal models is discussed. In section 4,
neuronal model based on finite impulse response model of synapse is discussed. Also the
associated neural networks are proposed. In section 5, a novel continuous time associative
memory is proposed and the convergence theorem is discussed. In section 6, various
multidimensional neural network generalizations are discussed. In section 7, complex
valued neural networks based on the continuous time neuronal model are discussed. The
chapter concludes in section 8.
The output of the continuous time neuron is given by

y(t) = Sign( Σ_{i=1}^{M} a_i(t) ⊛ W_i(t) − T )   (8.1)
where ⊛ denotes the convolution operation between two time functions (and T is the threshold at the neuron; without loss of generality, T can be assumed to be zero). More explicitly,

y(t) = Sign( Σ_{i=1}^{M} ∫ a_i(τ) W_i(t − τ) dτ − T )   (8.2)
The successive input functions are defined over the interval [0,T]. They are fed as inputs to
the continuous time neurons at successive SLOTS.
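A discretized sketch of (8.2) follows; the sampled input and synaptic weight functions are illustrative assumptions, and the convolution integral is approximated by a discrete convolution scaled by the step size.

import numpy as np

dt, T_support = 0.01, 1.0
t = np.arange(0.0, T_support, dt)
M = 3
a = [np.sin(2 * np.pi * (i + 1) * t) for i in range(M)]   # input functions a_i(t)
w = [np.exp(-t / (0.1 * (i + 1))) for i in range(M)]      # synaptic weights w_i(t)
threshold = 0.0

# Each synapse filters its input by convolution (truncated to the support),
# the filtered signals are summed, and the signum gives the output function.
s = sum(np.convolve(a[i], w[i])[: len(t)] * dt for i in range(M))
y = np.sign(s - threshold)
print(y[:10])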
[Figure: A continuous time neuron. The input functions a_1(t), a_2(t), ..., a_M(t) are fed through synapses with weight functions w_1(t), w_2(t), ..., w_M(t), and the output is y(t) = Sign( Σ_{i=1}^{M} a_i(t) ⊛ w_i(t) ).]

The continuous time perceptron learning law updates the synaptic weight functions as

W_i(t) ← W_i(t) + η ( S(t) − g(t) ) a_i(t)   (8.3)

where S(t) is the target output for the current training example, g(t) is the output generated by the perceptron, and η is a positive constant called the learning rate.
Proof: In this model of the continuous time perceptron, the weights are functions of time defined on the interval [0, T]. Thus, since the synaptic weights are functions of time, we are led to investigating the type of convergence: (i) pointwise or (ii) uniform.
Suppose we fix the time point t. The convergence of the synaptic weights in (8.3) is assured by the proof of convergence in the case of the conventional perceptron. Since the choice of the time point is arbitrary, we are assured of pointwise convergence of the synaptic weights based on the training sample input functions.
It is interesting to ask under what conditions the sequence of synaptic weight functions converges UNIFORMLY.
Q.E.D.
For a sigmoidal continuous time neuron, the activation is

y(t) = 1 / ( 1 + e^{−z(t)} )   (8.4)

where

z(t) = Σ_{j=1}^{M} a_j(t) ⊛ W_j(t)  (⊛ being the convolution operator)   (8.5)

The neuron compares the aggregate synaptic response with a threshold function L(t): on the decision boundary

Σ_{i=1}^{M} a_i(t) ⊛ W_i(t) = L(t)

and on one side of it

Σ_{i=1}^{M} a_i(t) ⊛ W_i(t) > L(t)   (8.6)
In discrete time, a synapse is naturally modeled by the impulse response of a digital filter, i.e. a Finite Impulse Response (FIR) filter. In the following, we consider a neural network with such a model of synapse.
Typically, let the discrete time input signals be considered over the finite horizon [0, 1, 2, ..., S]. For the sake of simplicity, let the length of all FIR filters modeling the synapses be the same, say T (the generalization to the case where the FIR filters have different lengths is straightforward). Thus the impulse response sequences (associated with different synapses) extend over the duration {0, 1, 2, ..., T}.
The output of the synapse (described by an FIR filter) depends on the input signal values over a finite horizon (depending on the length of the impulse response). Typically the length of the filter is smaller than the support of a distinct input sequence, i.e. T << S. It should be noted that the successive input sequences are of the same length.
y(n) = Sign( Σ_{i=1}^{M} C^i(n) ⊛ a^i(n) ) = Sign( Σ_{i=1}^{M} Σ_{k=0}^{T} C^i(k) a^i(n − k) )   (8.7)

where C^i(k), for k = 0, 1, 2, ..., T, is the impulse response sequence of the i-th synapse, and the taps are updated through a perceptron type learning law:

C^i(j) ← C^i(j) + η ( S(k) − g(k) ) a^i(k − j),  j = 0, 1, ..., T   (8.8)
where S(k) is the target output for the current training example, g(k) is the output generated by the perceptron at time k, and η is a positive constant called the learning rate.
This update rule converges when the input patterns are linearly separable. Using the
same model of neuron, a multi layer perceptron is trained using a modified version of
Back Propagation Algorithm.
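The following sketch instantiates the neuron of (8.7) with FIR synapses and applies a perceptron-style tap update in the spirit of (8.8); the filter length, input support, and training pair below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)
M, T_len, S_len = 2, 4, 20
C = [rng.standard_normal(T_len) for _ in range(M)]            # FIR taps per synapse
a = [rng.choice([-1.0, 1.0], size=S_len) for _ in range(M)]   # input sequences
S = rng.choice([-1.0, 1.0], size=S_len)                       # target outputs
eta = 0.1                                                     # learning rate

def output(C, a, n):
    # y(n) = Sign( sum_i sum_k C_i(k) a_i(n-k) ), cf. (8.7)
    total = sum(C[i][k] * a[i][n - k]
                for i in range(M) for k in range(T_len) if n - k >= 0)
    return 1.0 if total >= 0 else -1.0

for epoch in range(10):                                       # training sweeps
    for n in range(S_len):
        g = output(C, a, n)
        if g != S[n]:                                         # perceptron-style update
            for i in range(M):
                for k in range(T_len):
                    if n - k >= 0:
                        C[i][k] += eta * (S[n] - g) * a[i][n - k]
print([c.round(2) for c in C])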
It is possible to consider neuronal models in which the synapse acts as an Infinite
Impulse Response filter. Furthermore, based on such a model of neuron (synapse acting as
an FIR filter), it is possible to discuss a novel associative memory. Currently, the models of
neurons discussed (in section 2, section 4) are being compared with traditional models of
neurons [Rama5].
The author [Rama3], as well as Honig et al. [HoS], independently solved the problem. The solution in [Rama3] is more general in the sense that we considered Multi-Input, Multi-Output (MIMO) linear time varying filters and derived the optimal input vector. Let Y(t) be an optimal input vector. Then it satisfies the following signed integral equation:

Y(t) = Sign( ∫ R(t, u) Y(u) du )   (8.9)

where R(t, u) is the energy density matrix of the multi-input, multi-output, linear time varying system. In the case of a multi-input, multi-output, linear time invariant system, the optimal input vector satisfies the following equation:

Y(t) = Sign( ∫ R(t − u) Y(u) du )   (8.10)

This motivates the successive approximation scheme

Y^{(n+1)}(t) = Sign( ∫_0 R(t − τ) Y^{(n)}(τ) dτ )   (8.11)
The continuous time system can be discretized (under reasonable conditions) without fear of approximating the system dynamics. The standard procedure of discretizing a continuous time system is summarized in many textbooks, including Gopal's book (Gop, pages 185-187).
With the discrete time system equivalent to the continuous time system, the argument
technique adopted for convergence is once again the energy function hill climbing in
successive iterations.
Theorem 8.1: Consider a Multi-Input, Multi-Output (MIMO), linear time-invariant
system described by the dynamics
Ẋ(t) = A X(t) + B Y(t)
Z(t) = C X(t)   (8.12)
The discrete time simulation (of the above continuous time system) of the following form
X( k+1 ) = F X( k ) + G Y (k)
(8.13)
Z( k ) = H X( k )
(8.14)
can always be done. The discrete simulation is almost exact except for the error introduced
by sampling the input and that caused by the iterative procedure for evaluating the matrices.
Proof: Follows from the procedure described in Gopal (Gop, pp.185-187 ).
Q.E.D.
With such a discrete time system corresponding to a continuous time system, we have
the following recursion (successive approximation scheme):
Y^{(n+1)}(k) = Sign{ W Y^{(n)}(k) },  for n ≥ 0   (8.15)
where Y(k) is the optimal control vector associated with the discrete time linear system (obtained by discretizing a continuous time system) and W is the energy density tensor (associated with the discrete time system). Thus we have a Hopfield network with W as the synaptic weight matrix. Hence, starting with an initial vector Y^{(0)}(k), the above recursion converges to a stable state (local optimum vector) or at most a cycle of length 2 (by invoking the convergence theorem associated with the Hopfield neural network whose connection matrix is W).
Q.E.D.
Thus, the above approach converts the problem of determining the convergence of the scheme in (8.11) to the corresponding problem for a discrete time linear system. The iteration is reminiscent of a Neumann series. The energy function (Lyapunov function) optimized over the state trajectory of the continuous time linear system is a quadratic form [Rama1].
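A minimal sketch of the recursion (8.15), assuming a symmetric energy density matrix W: it iterates in fully parallel mode and stops at a stable state or a cycle of length 2, exactly the two outcomes the convergence theorem allows. The random W in the example is illustrative.

```python
import numpy as np

def sign_recursion(W, Y0, max_iter=1000):
    """Iterate Y^(n+1) = Sign(W Y^(n))  (equation 8.15) until a stable
    state or a cycle of length 2 is reached."""
    Y_prev, Y = None, np.where(np.asarray(Y0) >= 0, 1.0, -1.0)
    for _ in range(max_iter):
        Y_next = np.where(W @ Y >= 0, 1.0, -1.0)
        if np.array_equal(Y_next, Y):
            return Y, "stable state"
        if Y_prev is not None and np.array_equal(Y_next, Y_prev):
            return Y_next, "cycle of length 2"
        Y_prev, Y = Y, Y_next
    return Y, "max iterations reached"

# Example with a random symmetric W (zero diagonal)
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
Y, status = sign_recursion(W, rng.choice([-1.0, 1.0], size=8))
```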
In [BrB], various possible generalized neural networks are discussed. These neural
networks are associated with an energy function which is a higher order form than a
quadratic form (associated with a Hopfield neural network). It is very natural to formalize
associative memories which are generalizations of those discussed in this chapter.
Several generalizations of the results are documented in the technical report [Rama5].
For instance, the complex valued, continuous time associative memory is discussed in
detail in the technical report [Rama5, RaP]. For such a complex valued associative memory,
a convergence theorem is stated and proved.
Consider the following complex valued sigmoidal function, defined component-wise on the real and imaginary parts of its argument:

$$\sigma(x) = \frac{1}{1 + e^{-x}} \qquad (8.16)$$

$$f(z) = \sigma(\operatorname{Re}(z)) + j\,\sigma(\operatorname{Im}(z)) \qquad (8.17)$$
In the case of a complex valued, continuous time multi-layer perceptron, we utilize the above complex valued sigmoidal function as the activation function at each (complex valued) neuron. With such a model of neuron, the backpropagation algorithm of Nitta et al. [Nit1, Nit2] is generalized to the case of continuous time neural networks.
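A sketch of such a split complex sigmoidal activation, assuming the component-wise form used by Nitta et al. in [Nit1, Nit2]:

```python
import numpy as np

def split_sigmoid(z):
    """Split complex sigmoid: applies the real sigmoid of (8.16)
    separately to the real and imaginary parts, as in (8.17)."""
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    return sig(z.real) + 1j * sig(z.imag)

# Each complex valued neuron computes f(w^H a) with this activation.
z = np.array([0.5 - 1.0j, 2.0 + 0.3j])
print(split_sigmoid(z))
```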
Utilizing the traditional model of a neuron, a unified theory of control, communication and computation is discovered and formalized [Rama4]. This unified theory is generalized using the models of neurons discussed in this chapter [Rama1].
8.8 CONCLUSIONS
In this chapter, novel models of neurons are proposed. The synapses are considered as distributed elements rather than lumped elements. Thus, synapses are modeled as linear filters in continuous time as well as discrete time. Using these novel models of neurons, associated neural networks are proposed. Also, a novel model of associative memory is proposed. Using such a model, convergence aspects of various modes of operation are discussed. Multidimensional generalizations of neural networks are discussed. Also, associated complex valued neural networks are discussed.
REFERENCES
[AAV] I. N. Aizenberg, N. N. Aizenberg and J. Vandewalle, Multi-Valued and Universal Binary Neurons, Kluwer Academic Publishers, 2000.
[BrB] J. Bruck and M. Blaum, Neural Networks, Error Correcting Codes, and Polynomials over the Binary n-Cube, IEEE Transactions on Information Theory, Vol. 35, No. 5, pp. 976–987, September 1989.
[GoC] B. Gopinath and T. Cover, Open Problems in Control, Communication and Computation, Springer, Heidelberg, 1987.
[Gop] M. Gopal, Modern Control System Theory, John Wiley & Sons, Second Edition, 1993.
[HoS] M. Honig and K. Steiglitz, On Wyner's Conjecture, Bellcore Technical Memorandum.
[Nit1] T. Nitta and T. Furuya, A Complex Back-propagation Learning, Transactions of Information Processing Society of Japan, Vol. 32, No. 10, pp. 1319–1329, 1991 (in Japanese).
CHAPTER 9
Novel Complex Valued Neural Networks
9.1 INTRODUCTION
Starting in the 1950s, researchers tried to arrive at models of neuronal circuitry. Thus the research field of artificial neural networks was born. The so-called perceptron was shown to be able to classify linearly separable patterns. Since the Exclusive-OR gate cannot be synthesized through any perceptron (as the gate outputs are not linearly separable), the interest in artificial neural networks faded away. In the 1970s, it was shown that a multi-layer feed forward neural network such as the multi-layer perceptron is able to classify non-linearly separable patterns.
Living systems/machines such as homo sapiens, lions, tigers etc. have the ability to associate externally presented one/two/three dimensional information such as audio signals/images/three dimensional scenes with the information stored in the brain. This highly accurate ability of association of information is amazingly achieved through the bio-chemical circuitry in the brain. In the 1980s, Hopfield revived the interest in the area of artificial neural networks through a model of associative memory. The main contribution is a convergence theorem which shows that the artificial neural network reaches a memory/stable state starting from any arbitrary initial input (in a certain important mode of operation). He also demonstrated several interesting variations of associative memory. In [Rama4], a continuous-time version of associative memory is described. It is shown that the celebrated convergence theorem in discrete time generalizes to the continuous time associative memory. In [Rama2], the model of associative memory in one dimension (Hopfield associative memory) is generalized to multi/infinite dimensions and the associated convergence theorem is proven.
It was realized by researchers such as N. N. Aizenberg that the basic model of a neuron must be modified to account for complex valued inputs, complex valued synaptic weights and thresholds [AAV]. In many real world applications, complex valued input signals need to be processed by neural networks with complex synaptic weights [Hir]. Thus the need for the study, design and analysis of such networks is real. Also, in [Rama3] the results on real valued associative memories are extended to complex valued neural networks. In [Nit1, Nit2], the celebrated back propagation algorithm is generalized to complex valued neural networks. Also, in [Rama4], based on a novel model of neuron, complex valued neural networks are designed. Thus, based on the results in Sections 2 and 3, it is reasoned that transforming real valued signals into the complex domain and processing them in the complex domain could have many advantages.
This chapter is organized as follows. In Section 2, the Discrete Fourier Transform (DFT) is utilized to transform a set of real/complex valued sequences into the complex valued (frequency) domain. It is reasoned that, in a well defined sense, processing the signals using complex valued neural networks is equivalent to processing them in the real domain. In Section 3, a novel model of continuous time neuron is discussed. The associated neural networks (based on the novel model of neuron) are briefly outlined. In Section 4, some important generalizations are discussed. In Section 5, some open questions are outlined. The chapter concludes in Section 6.
$$\text{DFT:}\quad X(k) = \sum_{n=0}^{M-1} x(n)\, W_M^{kn} \quad \text{for } 0 \leq k \leq (M-1) \qquad (9.1)$$

$$\text{IDFT:}\quad x(n) = \frac{1}{M} \sum_{k=0}^{M-1} X(k)\, W_M^{-kn} \quad \text{for } 0 \leq n \leq (M-1) \qquad (9.2)$$

where

$$W_M = e^{-j\left(\frac{2\pi}{M}\right)} \qquad (9.3)$$
Lemma 1: Patterns which are linearly separable in two dimensional Euclidean space remain linearly separable after a bijective linear transformation is applied to the samples.

Proof: Consider the two sets of patterns lying on either side of the separating line $W_1 x + W_2 y = C$:

$$S_1 = \{(x, y) \mid W_1 x + W_2 y > C\} \qquad (9.4)$$

$$S_2 = \{(x, y) \mid W_1 x + W_2 y < C\} \qquad (9.5)$$
(9.5)
(9.6)
p q
r s
(9.7)
X ' p q X
Y ' = r s Y
(9.8)
X = pX + qY
Y = rX + sY
On inverting the linear transformation, we have
(9.9)
132
X p
Y =
r
q
s
X '
Y '
s /d
= r /d
(9.10)
q /d X '
p /d Y '
'
q
s
X d X' d Y '
Y =
p
r
X' +
Y '
d
d
(9.11)
q
p
s
W1 X '
Y ' + W2
X' +
Y ' = C
d
d
d
(9.12)
From the above equations, it is clear that a point in two dimensional Euclidean space belonging to the set $S_1$ gets transformed to the point $T(x, y) = (x', y')$ belonging to a set $S_1'$ lying on one side of the transformed line, i.e.

$$(x, y) \in S_1 \iff T(x, y) = (x', y') \in S_1' \qquad (9.13)$$

Thus we have shown that patterns which are linearly separable in two dimensional Euclidean space will remain linearly separable after applying a bijective linear transformation to the samples. The above proof is easily generalized to samples in n-dimensional Euclidean space (where n is arbitrary).
Q.E.D.
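A quick numerical illustration of the Lemma (a sketch with an arbitrarily chosen separating line and transformation matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
W = np.array([1.0, -2.0]); C = 0.5             # separating line W . x = C
X = rng.standard_normal((200, 2)) * 3
labels = np.sign(X @ W - C)                    # sides of the line (sets S1, S2)

T = np.array([[2.0, 1.0], [1.0, 1.0]])         # bijective: d = det(T) = 1 != 0
Xp = X @ T.T                                   # transformed samples (9.7)

# The transformed line has coefficients W' = T^{-T} W and the same constant C
# (cf. equation 9.12), so every sample keeps its side: separability survives.
Wp = np.linalg.inv(T).T @ W
assert np.array_equal(np.sign(Xp @ Wp - C), labels)
```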
Consider the equation (9.1) for computing the Discrete Fourier Transform of a discrete sequence of samples $\{x(n) : 0 \leq n \leq (M-1)\}$. Let the column vector containing these samples be given by Y. Also, let the column vector containing the transformed samples, i.e. $\{X(k) : 0 \leq k \leq (M-1)\}$, be given by Z. It is clear that equation (9.1) is equivalent to the following:

$$Z = F\, Y \qquad (9.14)$$

where F is the Discrete Fourier Transform matrix. This matrix is invertible. Hence the transformation between the discrete sequence vectors Y, Z is bijective. Thus the above Lemma applies.
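A sketch verifying that the DFT matrix F of (9.14) is invertible (in fact unitary up to the scale factor M), so the transformation Y to Z is indeed bijective:

```python
import numpy as np

M = 8
n = np.arange(M)
F = np.exp(-2j * np.pi * np.outer(n, n) / M)   # DFT matrix, W_M = e^{-j 2 pi / M}

# F^H F = M I, hence F^{-1} = F^H / M: the transform is bijective.
assert np.allclose(F.conj().T @ F, M * np.eye(M))

Y = np.random.default_rng(2).standard_normal(M)
Z = F @ Y                                      # equation (9.14)
assert np.allclose(F.conj().T @ Z / M, Y)      # IDFT recovers Y exactly
```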
Consider a set of L real valued training pattern vectors $\{Y_1, Y_2, \ldots, Y_L\}$. The following supervised learning procedure is utilized to classify the patterns:

1. Apply the DFT to the successive input training sample vectors, resulting in the vectors $\{Z_1, Z_2, \ldots, Z_L\}$.
2. Train a single layer of Complex Valued Perceptrons using the transformed sample vectors (the complex valued version of the perceptron learning law provided in [AAV] is used).
3. Apply the IDFT to arrive at the proper class of training samples.
4. Utilize the trained complex valued neural network to classify the test patterns.
In view of Lemma 1, the above procedure converges when the training samples are linearly separable. Thus the linearly separable test patterns are properly classified. The above procedure is also applied to non-linearly separable patterns using a complex valued Multi-Layer Perceptron. The back propagation algorithm discussed in [Nit1, Nit2] is utilized. A detailed discussion is provided in [Rama1]. It is argued by Nitta et al. that the complex valued version of the back propagation algorithm converges faster than the real one. Thus, from a computational viewpoint, the above procedure is attractive.
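A sketch of the above supervised procedure, assuming a simple complex perceptron correction rule (the specific learning law of [AAV] differs in its details); real training vectors are mapped into the frequency domain and a complex weight vector is trained there.

```python
import numpy as np

def dft_domain_perceptron(Ys, labels, eta=0.1, epochs=100):
    """Train a complex valued perceptron on DFT-transformed samples.
    Ys: (L, M) real training vectors; labels: (L,) in {-1, +1}."""
    Zs = np.fft.fft(Ys, axis=1)                  # step 1: transform to Z_1..Z_L
    w = np.zeros(Zs.shape[1], dtype=complex)     # complex synaptic weights
    b = 0.0
    for _ in range(epochs):
        for Z, t in zip(Zs, labels):
            y = np.sign(np.real(np.vdot(w, Z)) + b) or 1.0
            if y != t:                           # perceptron-style correction
                w += eta * t * Z
                b += eta * t
    return w, b

def classify(w, b, Y_test):
    Z = np.fft.fft(Y_test)                       # test pattern in DFT domain
    return np.sign(np.real(np.vdot(w, Z)) + b)
```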
The output of a continuous time neuron with input functions $a_j(t)$ and synaptic weight functions $w_j(t)$ is given by

$$y(t) = \operatorname{Sign}\left[\sum_{j=1}^{m} a_j(t)\, w_j(t)\right] \qquad (9.15)$$

More general activation functions (sigmoid, hyperbolic tangent etc.) could be used. The successive input functions are defined over the interval [0, T]. They are fed as inputs to the continuous time neurons at successive SLOTS. For the sake of notational convenience, we call such a neuron a continuous time perceptron.
Figure: A continuous time perceptron. The input functions $a_1(t), a_2(t), \ldots, a_m(t)$ are weighted by the synaptic weight functions $w_1(t), w_2(t), \ldots, w_m(t)$, and the output is $y(t) = \operatorname{Sign}\left[\sum_{i=1}^{m} a_i(t)\, w_i(t)\right]$.

The synaptic weight functions are trained using the following perceptron-style learning law:

$$w_j^{new}(t) = w_j(t) + \eta\,\big(S(t) - g(t)\big)\, a_j(t) \qquad (9.16)$$
where $S(t)$ is the target output for the current training example, $g(t)$ is the output generated by the continuous time perceptron and $\eta$ is a positive constant called the learning rate. The proof of convergence of the conventional perceptron learning law also guarantees the pointwise convergence (not necessarily uniform convergence) of the synaptic weight functions.
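A sketch of this learning law with the interval [0, T] approximated on a uniform time grid, so that the weight functions $w_j(t)$ are updated pointwise in t, consistent with the pointwise convergence noted above; the array layout is an assumption for illustration.

```python
import numpy as np

def train_continuous_perceptron(A_funcs, S_funcs, eta=0.1, epochs=50):
    """A_funcs: (L, m, grid) sampled input functions a_j(t) on [0, T].
    S_funcs:  (L, grid) sampled target outputs S(t) in {-1, +1}."""
    L, m, G = A_funcs.shape
    W = np.zeros((m, G))                           # weight functions w_j(t)
    for _ in range(epochs):
        for a, S in zip(A_funcs, S_funcs):
            # y(t) = Sign[ sum_j a_j(t) w_j(t) ]            (equation 9.15)
            g = np.where(np.sum(a * W, axis=0) >= 0, 1.0, -1.0)
            W += eta * (S - g) * a                 # pointwise update (9.16)
    return W
```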
Using the sigmoid function as the activation function and the continuous time perceptron as the model of neuron, it is straightforward to arrive at a continuous time Multi-Layer Perceptron. The conventional back propagation algorithm generalizes to such a feed forward network.
9.8 CONCLUSIONS
In this chapter, transforming real valued signals into the complex domain (using the DFT) and processing them using complex valued neural networks is discussed. A novel model of neuron is proposed. Based on such a model, real as well as complex valued neural networks are proposed. Some open research questions are provided.
REFERENCES
[AAV] I. N. Aizenberg, N. N. Aizenberg and J. Vandewalle, Multi-Valued and Universal Binary Neurons, Kluwer Academic Publishers, 2000.
[Hir] A. Hirose, Complex-Valued Neural Networks: Theories and Applications, World Scientific Publishing Company, November 2003.
[KIM] H. Kusamichi, T. Isokawa, N. Matsui et al., A New Scheme for Colour Night Vision by Quaternion Neural Network, Proceedings of the 2nd International Conference on Autonomous Robots and Agents, December 13–15, 2004, Palmerston North, New Zealand.
[Nit1] T. Nitta and T. Furuya, A Complex Back-propagation Learning, Transactions of Information Processing Society of Japan, Vol. 32, No. 10, pp. 1319–1329, 1991 (in Japanese).
[Nit2] T. Nitta, An Extension of the Back-Propagation Algorithm to Complex Numbers, Neural Networks, Vol. 10, No. 8, pp. 1391–1415, 1997.
[Rama1] G. Rama Murthy, Unified Theory of Control, Communication and Computation, to be submitted to Proceedings of the IEEE.
[Rama2] G. Rama Murthy, Multi/Infinite Dimensional Neural Networks, Multi/Infinite Dimensional Logic Theory, International Journal of Neural Systems, Vol. 15, No. 3, June 2005.
[Rama3] G. Rama Murthy and D. Praveen, Complex-Valued Neural Associative Memory on the Complex Hypercube, Proceedings of the 2004 IEEE International Conference on Cybernetics and Intelligent Systems (CIS-2004), Singapore.
[Rama4] G. Rama Murthy, Linear Filter Model of Synapses: Associated Novel Real/Complex Valued Neural Networks, IIIT Technical Report in preparation.
[Rama5] G. Rama Murthy, Some Novel Real/Complex Valued Neural Network Models, Proceedings of the 9th Fuzzy Days (International Conference on Computational Intelligence), Dortmund, Germany, September 2006, pp. 473–483.
CHAPTER 10
Advanced Theory of Evolution of Living Systems
With one/two eyes formed on the surface of the egg, and owing to the rotation of the earth and the natural terrain of the oceans, the egg was constantly drifting in the ocean. The homogeneity of the organic mass simultaneously led to the formation of several eggs in the same region. This led to the problem of congestion in the region (of eggs). The life forms, in order to cope with the problem, began to develop limbs for LOCOMOTION. The remaining organs, formed due to the natural environment/atmosphere, have similar topological features in different species of living systems (UNIFICATION of various LIVING species). The life forms began to jostle and, to handle the environmental needs, the ellipsoid based egg deformed into various shapes for the body and primitive (non-intelligence based) organs. These differences in the shape/topological features of the body and organs led to the classification of such living systems into species such as frogs, fish, crocodiles etc.
The initial organic mass based life had no intelligence. The set of characteristics that are common to various life forms formed over a large length of time. Some novel and innovative concepts which are the distinguishing features of this advanced theory are briefly described below.
10.4 CONCLUSIONS
In an effort to understand non-living physical reality, various sub-fields of science such as physics and chemistry were developed. Based on experimental observations from physical reality, various mathematical, empirical theories were constructed to derive laws of nature. These laws, principles and theories on non-living physical reality were utilized to develop science and engineering. The field of biology was developed to understand the composition, operation and coordination of various organs/functional units of living systems in nature such as homo sapiens, tigers etc. The distinction between living physical reality and non-living physical reality was very puzzling to scientists. In the mid 1940s, N. Wiener coined the word CYBERNETICS for the field dedicated to understanding the control, communication and computation functions of living systems.

The author pioneered the field of mathematical cybernetics by unifying the control, communication and computation functions of living system functional units. Thus, a mathematical model of natural living systems was developed. It is shown, in the context of one dimensional linear dynamical systems, that the unification includes various other functions along with control, communication and computation. By utilizing the tensor state space representation of certain multi/infinite dimensional linear dynamical systems discovered by the author, cybernetics results for multi/infinite dimensional systems were developed. These results enabled the development of multi/infinite dimensional coding, computation and system theories.

The author also made some pioneering investigations into the functions of various natural living systems. These investigations provided the important conclusion that living machines such as homo-sapiens, tigers etc. programmed themselves for functions
such as metabolism, sex etc. Many issues of importance to the living machines, such as their control/coordination, diseases and programmed bad habits, are all addressed based on a proper understanding of the theory. The advanced theory of evolution resulting from the unified theory of control, communication and computation provided new perspectives on nature based living systems.

In summary, in this book, the author related multidimensional logic, coding and control theories to the concept of multidimensional neural networks (proposed by him). He innovated a novel complex signum function and proposed a novel complex valued associative memory. Several novel models of neuron are proposed and associated real as well as complex valued neural networks are discussed.