IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 9, NO. 6, NOVEMBER 1998

Optimal Hopfield Network for Combinatorial Optimization with Linear Cost Function

Satoshi Matsuda, Member, IEEE

Abstract—An optimal Hopfield network is presented for many combinatorial optimization problems with a linear cost function. It is proved that a vertex of the network state hypercube is asymptotically stable if and only if it is an optimal solution to the problem. That is, one can always obtain an optimal solution whenever the network converges to a vertex. In this sense, this network can be called the optimal Hopfield network. It is also shown through simulations of assignment problems that this network obtains optimal or nearly optimal solutions more frequently than other familiar Hopfield networks.

Index Terms—Assignment problems, combinatorial optimization, Hopfield network, knapsack problems, linear cost function, optimal neural representation, stability of state hypercube vertex.

Manuscript received September 16, 1996; revised November 2, 1996, March 16, 1998, and September 14, 1998. The author is with the Computer and Communication Research Center, Tokyo Electric Power Company, 4-1, Egasaki-cho, Tsurumi-ku, Yokohama, 230-8510 Japan. Publisher Item Identifier S 1045-9227(98)08808-0.

I. INTRODUCTION

MANY attempts have been made to solve combinatorial optimization problems using Hopfield networks. Most of them, however, have lacked any theoretical principle. That is, the fine-tuning of the network coefficients has been accomplished by trial and error, and the neural representation of the problem has been made heuristically or without care. Accordingly, Hopfield networks have failed on many combinatorial optimization problems [1], [2].
At the same time, there have been some theoretical works which analyze the network dynamics. Analyzing the eigenvalues of the network connection matrix, Aiyer et al. [3] have theoretically explained the dynamics of the network for traveling salesman problems (TSPs). However, since their aim is only to make the network converge to a feasible solution rather than to an optimal or nearly optimal one, their treatment of the distance term of the energy function is theoretically insufficient; consequently, they have not shown the theoretical relationship between the values of the network coefficients and the quality of the feasible solutions the network converges to. Abe [4] has shown such a theoretical relationship by deriving the stability condition of the feasible solutions. However, since his conditions are obtained by comparing the values of the energy at vertices of the network hypercube, they are true only for networks whose connection matrices have no diagonal elements. On the other hand, by analyzing the eigenvalues of the Jacobian matrix of the network dynamic equation, not only for networks without diagonal elements in the connection matrix but also for those with diagonal elements, Matsuda [5]-[8] has theoretically derived such a relationship between the network coefficients and the quality of the feasible solutions, as well as the stability and instability conditions of the feasible and infeasible solutions. Furthermore, based on these theoretical conclusions, he has also pointed out the importance of the diagonal elements of the connection matrix for obtaining good solutions. Indeed, the diagonal elements of the connection matrix play a crucial role in this paper. By tuning the coefficients to satisfy these conditions, one can obtain good solutions frequently. However, it has also been shown theoretically that one cannot always achieve optimal solutions even with well-tuned coefficients. An inherent limitation arises from the neural representation itself. We must represent the problems very carefully, on the basis of a theoretical principle, rather than on heuristics.
These theoretical works, however, mainly concern the network dynamics or the tuning of the network coefficients; they do not give a theoretical basis to the neural representation of optimization problems. Not only are the neural representations made heuristically, but their comparisons are also made only by network simulations of some problem instances rather than on a theoretical basis. Since the network performance depends deeply on the nature of each problem instance (for example, the city locations in TSPs), it is inappropriate to compare networks' performances solely by simulations. And yet, despite the need for theoretical comparison of neural representations or neural networks, few such treatments have been made. Matsuda [9], [10], using the stability theory of the feasible and infeasible solutions mentioned above, has proposed a theoretical method of comparing the performances of neural networks, or neural representations, of the same problem. This method is based on the relative ability of networks to distinguish optimal solutions from other feasible solutions and from infeasible solutions. He has applied this method to two familiar networks for TSPs and established a theoretical justification of the superiority of one network to the other, although this superiority had already been observed through simulations [11]. Thus, one network can be regarded as distinguishing optimal solutions to TSPs more sharply than the other. He has also shown, however, that even this superior network cannot distinguish optimal solutions completely; there can be nonoptimal solutions which are indistinguishable from optimal solutions even by this superior network.
A question then arises as to whether there is still another network which, in terms of this theoretical comparison, distinguishes optimal solutions more sharply than these two networks, or whether there is an optimal network which distinguishes optimal solutions most sharply, or even completely, of all the networks. We can expect to obtain optimal solutions most frequently by such an optimal network.
In this paper, such an optimal Hopfield network is presented for many combinatorial optimization problems with a linear cost function. A vertex of the network state hypercube is asymptotically stable if and only if it is an optimal solution to the problem. In Section II, the basic definitions and concepts of the Hopfield network and of combinatorial optimization problems are introduced. Sections III and IV give theoretical characterizations of two conventional neural representations of these optimization problems, in order to reveal their theoretical limitations in deriving optimal solutions. The concepts of a theoretical network comparison and of an optimal network are also given in Section IV. In Section V, a new representation is presented and theoretically shown to be optimal, and Section VI gives such an optimal network for the maximization version of these optimization problems. The simulation results given in Section VII illustrate our theoretical findings. Last, concluding remarks appear in Section VIII.
II. HOPFIELD NETWORKS AND COMBINATORIAL OPTIMIZATION

A. Hopfield Networks

In a Hopfield network, any two neurons are connected in both directions with the same synaptic weight. The dynamics of a Hopfield network are defined by [12]

    du_i/dt = \sum_j w_{ij} V_j + \theta_i                                    (1)

    V_i = g(u_i) = 1 / (1 + e^{-u_i / T})                                     (2)

where V_i, u_i, and \theta_i are the output value, the internal value, and the bias of neuron i, respectively, and w_{ij} is the synaptic weight from neuron j to neuron i (w_{ij} = w_{ji}). The energy of the network is given by [12]

    E = -(1/2) \sum_i \sum_j w_{ij} V_i V_j - \sum_i \theta_i V_i.            (3)

It is well known that

    dE/dt = -\sum_i g'(u_i) (du_i/dt)^2 <= 0                                  (4)

holds. This shows that the value of each neuron changes so as to decrease the energy (energy minimization), and finally converges to an equilibrium where dE/dt = 0: that is, each neuron converges to zero, to one, or to a point satisfying \sum_j w_{ij} V_j + \theta_i = 0 within the hypercube. Note that the n-dimensional space [0, 1]^n is called a network state hypercube, or, simply, a hypercube, and that V in {0, 1}^n is called a vertex of the hypercube, where n is the number of neurons in the network.

By representing a combinatorial optimization problem in the form of the energy shown above, and constructing a Hopfield network with this energy, we can expect the network to converge to a good or optimal solution to the problem by energy minimization. Hopfield and Tank [13] have shown that nearly optimal solutions to TSPs, which are NP-complete combinatorial optimization problems, can be obtained quickly in this way. We herein use the words energy, neural representation, and network interchangeably.
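To make the dynamics concrete, the following minimal Python sketch integrates (1) and (2) by simple Euler steps and evaluates (3). The weight values, step size, and temperature here are illustrative assumptions, not values from the paper.

    import numpy as np

    # A minimal sketch of the dynamics (1)-(4): symmetric weights W, biases
    # theta, temperature T, Euler integration. All values are toy examples.
    rng = np.random.default_rng(0)
    n = 6
    W = rng.normal(size=(n, n))
    W = (W + W.T) / 2                      # w_ij = w_ji
    theta = rng.normal(size=n)
    T, dt = 1.0, 0.01

    u = rng.normal(scale=0.1, size=n)      # internal values
    for _ in range(5000):
        V = 1.0 / (1.0 + np.exp(-u / T))   # sigmoid output, (2)
        u += dt * (W @ V + theta)          # follows -dE/dV, i.e., (1)

    V = 1.0 / (1.0 + np.exp(-u / T))
    E = -0.5 * V @ W @ V - theta @ V       # energy (3)
    print("converged state:", np.round(V, 3), " energy:", round(E, 3))

Because W is symmetric, the energy (3) is nonincreasing along these trajectories, which is exactly the content of (4).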
B. Stability of Vertices of the Network State Hypercube

As shown in (4) and mentioned above, at equilibrium a neuron takes the value zero, the value one, or a value between zero and one that satisfies \sum_j w_{ij} V_j + \theta_i = 0. However, since the Hopfield network converges to asymptotically stable equilibria and cannot converge to unstable ones, the stability of the equilibria is an important property which makes the network dynamics theoretically clear. In this paper we focus on the stability of the vertices of the hypercube, for two main reasons. First, strictly speaking, the meaning of a neuron for the combinatorial optimization problem is given only when it takes the value zero or one. Second, analysis of the stability of the vertices provides an approximation of the network behavior when the network converges inside the hypercube [5], [9], [10], [14]. Indeed, in Section VII we illustrate through simulations that our theoretical results based on the stability of the vertices also work well in the case of convergence inside the hypercube. Thus it is crucial to explore the stability of the vertices of the state hypercube.

It is known [15] that a vertex V = (V_i) of the network state hypercube is asymptotically stable if

    (2 V_i - 1) (\sum_j w_{ij} V_j + \theta_i) > 0                            (5)

holds for every neuron i, and unstable if

    (2 V_i - 1) (\sum_j w_{ij} V_j + \theta_i) < 0                            (6)

holds for some neuron i. These conditions are central to our theory. The proof can be found in [15]; it is also given in the Appendix, for convenience.
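Conditions (5) and (6) amount to a sign test on the net input of each neuron. The helper below is an illustrative sketch (the function name and interface are ours) that classifies a given vertex accordingly.

    import numpy as np

    # Classify a hypercube vertex via the sign tests (5) and (6).
    def classify_vertex(W, theta, v):
        """v is a 0/1 vector; W symmetric weights; theta biases."""
        net = W @ v + theta                # net input to each neuron
        s = (2 * v - 1) * net              # quantity tested in (5)/(6)
        if np.all(s > 0):
            return "asymptotically stable"     # condition (5)
        if np.any(s < 0):
            return "unstable"                  # condition (6)
        return "undecided (marginal)"

    # Example: a two-neuron flip-flop, stable exactly at (1,0) and (0,1).
    W = np.array([[0.0, -2.0], [-2.0, 0.0]])
    theta = np.array([1.0, 1.0])
    for v in ([1, 0], [0, 1], [1, 1], [0, 0]):
        print(v, "->", classify_vertex(W, theta, np.array(v, dtype=float)))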

C. Combinatorial Optimization Problems with a Linear Cost Function

The combinatorial optimization problems we attack in this paper have linear cost functions to minimize, and are formally stated as follows:

    minimize    \sum_i \sum_j c_{ij} V_{ij}
    subject to  \sum_i V_{ij} = 1              for any j
                \sum_j a_{ij} V_{ij} = b_i     for any i                      (7)
                V_{ij} in {0, 1}               for any i and j

where V = (V_{ij}) is a two-dimensional variable matrix, c_{ij} is an integer representing a unit cost, a_{ij} is an integer representing a quantity related to V_{ij}, and b_i is also an integer representing a total quantity.
Thus, these problems have two sets of constraints: one winner for each column j, and, for each line i, the quantities a_{ij} of all the winners summing to the total quantity b_i. Note that

    \sum_i \sum_j a_{ij} V_{ij} = \sum_i b_i                                  (8)

holds.

We call V = (V_{ij}) a feasible solution or an infeasible solution if it does or does not satisfy all the constraints, respectively. Furthermore, a feasible solution which minimizes the objective or cost function \sum_i \sum_j c_{ij} V_{ij} is also called an optimal solution, and feasible solutions other than optimal ones are sometimes called nonoptimal solutions. The goal of combinatorial optimization problems is to find an optimal solution or, sometimes, a nearly optimal solution.
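As an illustration of these definitions, the following sketch packages an instance of (7) together with the feasibility and cost tests just described; the class name and the toy data are ours, not the paper's.

    import numpy as np

    # A container for an instance of problem (7), with feasibility and cost.
    class LinearCostProblem:
        def __init__(self, c, a, b):
            self.c = np.asarray(c)   # unit costs c_ij
            self.a = np.asarray(a)   # quantities a_ij
            self.b = np.asarray(b)   # total quantities b_i

        def is_feasible(self, V):
            V = np.asarray(V)
            one_winner = np.all(V.sum(axis=0) == 1)                  # per column j
            quantities = np.all((self.a * V).sum(axis=1) == self.b)  # per line i
            binary = np.all((V == 0) | (V == 1))
            return bool(one_winner and quantities and binary)

        def cost(self, V):
            return int((self.c * np.asarray(V)).sum())

    # For an assignment problem, a_ij = 1 and b_i = 1 (see (11) below).
    p = LinearCostProblem(c=[[2, 9], [7, 3]], a=[[1, 1], [1, 1]], b=[1, 1])
    V = [[1, 0], [0, 1]]
    print(p.is_feasible(V), p.cost(V))     # True 5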
Note that some optimization problems have inequality constraints such as

    subject to  \sum_j a_{ij} V_{ij} <= b_i    for any i                      (9)

rather than the equality constraints shown in (7); however, by introducing slack variables (slack neurons) S_{ik}, we can express the inequality constraint as an equality constraint

    subject to  \sum_j a_{ij} V_{ij} + \sum_k S_{ik} = b_i    for any i       (10)

Thus, by letting the unit costs of the slack neurons be zero, combinatorial optimization problems with inequality constraints can also be stated as (7). This requires a large number of slack variables (neurons) and seems to make the neural approach impractical; however, quantized neurons [16], each of which takes an integer value, overcome this difficulty: one quantized slack neuron S_i serves as all the slack neurons S_{ik} for each i. In this paper, however, we employ ordinary binary neurons for simplicity. Note that the one-winner constraints in (7) contain no slack variables.

Many combinatorial optimization problems can be stated as (7). For example, the minimization version of the generalized assignment problem, which is NP-hard, is such a problem [17]. Since the assignment problems, which are not NP-hard, are special cases of the generalized assignment problems, they can of course also be stated as (7). In this paper, for simplicity, we explain and illustrate our theory by taking assignment problems as examples. The assignment problems are stated as follows. Given a set of n tasks, n people, and the cost c_{ij} for person i to carry out task j, the goal of the assignment problem is to find an assignment V = (V_{ij}) which minimizes the total cost \sum_i \sum_j c_{ij} V_{ij} for carrying out all the tasks, under the constraints that each task must be carried out once and only once by some person and that each person must carry out one and only one task, where a value of V_{ij} = 1 or 0 means that person i does or does not carry out task j. So, the assignment problems are formulated as

    minimize    \sum_i \sum_j c_{ij} V_{ij}
    subject to  \sum_j V_{ij} = 1              for any i
                \sum_i V_{ij} = 1              for any j                      (11)
                V_{ij} in {0, 1}               for any i and j.

Thus, one can immediately see that assignment problems are stated as (7) with a_{ij} = 1 and b_i = 1 for any i and j.
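Since small assignment problems can be enumerated, a brute-force reference solver makes the notions of optimal and nonoptimal solutions concrete. This is purely illustrative and is not part of the network itself; the cost matrix is a toy.

    from itertools import permutations

    # Enumerate all feasible solutions of (11) (i.e., permutations) and pick
    # an optimal one.
    def optimal_assignment(c):
        n = len(c)
        best_perm, best_cost = None, None
        for perm in permutations(range(n)):   # person i carries out task perm[i]
            cost = sum(c[i][perm[i]] for i in range(n))
            if best_cost is None or cost < best_cost:
                best_perm, best_cost = perm, cost
        return best_perm, best_cost

    c = [[2, 9, 6],
         [7, 3, 8],
         [5, 4, 2]]
    print(optimal_assignment(c))    # ((0, 1, 2), 7)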
In this paper, we present an optimal Hopfield network for the combinatorial optimization problems with a linear cost function given as (7). In the next two sections, we theoretically explore two conventional neural representations of these problems, to compare them with our optimal representation.

III. A NETWORK THAT MINIMIZES THE TOTAL COST

A. Neural Representation

The first neural representation of the combinatorial optimization problem given as (7) is a familiar one which minimizes the total cost, and is given by
    E_1 = (A/2) \sum_j (\sum_i V_{ij} - 1)^2 + (B/2) \sum_i (\sum_j a_{ij} V_{ij} - b_i)^2
          + (C/2) \sum_i \sum_j V_{ij} (1 - V_{ij}) + D \sum_i \sum_j c_{ij} V_{ij}.    (12)

We can easily see that the first and second terms are the constraint conditions explained above, and that the last term denotes the total cost to minimize. The third term, which moves the values of the neurons toward zero or one, i.e., to a vertex of the hypercube, is called a binary value constraint, because it takes its minimum value when all the neurons take the value zero or one. Note that the existence of the binary value constraint term does not change the value of the energy at each vertex of the hypercube. Thus, a vertex V = (V_{ij}) is also called a feasible or an infeasible solution if the values of the first three terms in (12) are all zero or not, respectively. The constants A, B, C, and D are the coefficients, or weights, of each condition. We must tune the values of these coefficients in order for the network to converge to an optimal or good solution. As noted before, by E_1 we denote, interchangeably, the energy, the neural representation, and the network with this energy. First we theoretically characterize the network E_1 by the set of all the asymptotically stable vertices of its state hypercube.
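Expanding (12) and matching it against the quadratic form (3) yields the weights and biases of the network E_1. The sketch below does this for assignment problems (a_{ij} = b_i = 1); the interface is ours, and the closed forms in the comments follow from multiplying out the constraint terms. Note the nonzero diagonal elements w_{(ij),(ij)} = -A - B + C, which is where the diagonal elements stressed in Section I enter.

    import numpy as np

    # Map E_1 of (12), specialized to assignment problems, onto form (3):
    #   w_(ij),(kl) = -A*[j==l] - B*[i==k] + C*[i==k][j==l]
    #   theta_(ij)  = A + B - C/2 - D*c_ij
    def build_E1_network(c, A, B, C, D):
        n = len(c)
        c = np.asarray(c, dtype=float)
        idx = lambda i, j: i * n + j                 # flatten neuron (i, j)
        W = np.zeros((n * n, n * n))
        theta = np.zeros(n * n)
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    for l in range(n):
                        W[idx(i, j), idx(k, l)] = (-A * (j == l) - B * (i == k)
                                                   + C * (i == k) * (j == l))
                theta[idx(i, j)] = A + B - C / 2 - D * c[i, j]
        return W, theta

With these weights, the net input to neuron (i,j) is A(1 - \sum_k V_{kj}) + B(1 - \sum_l V_{il}) + C(V_{ij} - 1/2) - D c_{ij}, which is the quantity used in the stability analysis below.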
B. Stability of Feasible and Infeasible Solutions

Following [5]-[7], we derive the asymptotical stability and instability conditions of the feasible and infeasible solutions in the network E_1. Throughout, note that the net input to neuron (i,j) in E_1 is

    net_{ij} = A (1 - \sum_k V_{kj}) + B a_{ij} (b_i - \sum_l a_{il} V_{il}) + C (V_{ij} - 1/2) - D c_{ij}

so that, by (5) and (6), a vertex V is asymptotically stable if (2 V_{ij} - 1) net_{ij} > 0 for every (i,j), and unstable if (2 V_{ij} - 1) net_{ij} < 0 for some (i,j). First, as the next theorem shows, all the infeasible solutions can be made unstable.

Theorem 1: In the Hopfield network E_1, all the infeasible solutions are unstable if

    A > B a_max b_max + C/2    and    B a_min > C/2 + D c_max

hold, where c_max = max_{i,j} c_{ij}, a_max = max_{i,j} a_{ij}, b_max = max_i b_i, and a_min = min {a_{ij} : a_{ij} > 0}.

Proof: For any infeasible solution V = (V_{ij}), at least one of the following cases holds:
1) \sum_i V_{ij} >= 2 for some j;
2) \sum_i V_{ij} = 0 for some j;
3) \sum_j a_{ij} V_{ij} > b_i for some i;
4) \sum_j a_{ij} V_{ij} < b_i for some i.

In the case of 1), for (i,j) such that V_{ij} = 1 and \sum_k V_{kj} >= 2, since b_i - \sum_l a_{il} V_{il} <= b_max, we have

    net_{ij} <= -A + B a_max b_max + C/2.

Hence, if A > B a_max b_max + C/2, we have (2 V_{ij} - 1) net_{ij} = net_{ij} < 0; thus, from (6), V is unstable.

For the case of 2), we can assume that 3) does not hold, so that b_i - \sum_l a_{il} V_{il} >= 0 for any i. Then, for (i,j) such that \sum_k V_{kj} = 0, and hence V_{ij} = 0, we have

    net_{ij} >= A - C/2 - D c_max.

Since the conditions of the theorem imply A > C/2 + D c_max, we have net_{ij} > 0 and hence (2 V_{ij} - 1) net_{ij} < 0, so that V is unstable.

For the case of 3), there exists (i,j) such that V_{ij} = 1, a_{ij} > 0, and \sum_l a_{il} V_{il} >= b_i + 1. Since V_{ij} = 1, \sum_k V_{kj} >= 1 holds. So we have

    net_{ij} <= -B a_{ij} + C/2 <= -B a_min + C/2.

Hence, if B a_min > C/2, we have net_{ij} < 0, so that V is unstable.

For the case of 4), we can assume that neither 1) nor 3) holds, so \sum_k V_{kj} <= 1 holds for any j; and, since \sum_l a_{il} V_{il} < b_i <= \sum_l a_{il}, there exists j with V_{ij} = 0 and a_{ij} > 0. Hence, for this (i,j), we have

    net_{ij} >= B a_{ij} (b_i - \sum_l a_{il} V_{il}) - C/2 - D c_{ij} >= B a_min - C/2 - D c_max.

Therefore, if B a_min > C/2 + D c_max, we have net_{ij} > 0 and (2 V_{ij} - 1) net_{ij} < 0, so that V is unstable.

Note that, in the instability conditions shown in Theorem 1, the bound on A depends on B; however, since the bound on B does not depend on A, these conditions can be satisfied without any contradiction.
For the stability and instability of feasible solutions we can also derive the next theorem.

Theorem 2: In the Hopfield network E_1, a feasible solution V = (V_{ij}) is asymptotically stable if

    C > 2 D max_{(i,j): V_{ij} = 1} c_{ij}

and unstable if

    C < 2 D max_{(i,j): V_{ij} = 1} c_{ij}.

Proof: For any feasible solution V and any (i,j), since \sum_k V_{kj} = 1 and \sum_l a_{il} V_{il} = b_i, we have

    net_{ij} = C (V_{ij} - 1/2) - D c_{ij}.

For V_{ij} = 0,

    (2 V_{ij} - 1) net_{ij} = C/2 + D c_{ij} > 0

always holds. For V_{ij} = 1, if C > 2 D c_{ij} holds, we have (2 V_{ij} - 1) net_{ij} = C/2 - D c_{ij} > 0. Hence, from (5), V is asymptotically stable if

    C > 2 D max_{(i,j): V_{ij} = 1} c_{ij}

holds. Conversely, if C < 2 D c_{ij} for some (i,j) with V_{ij} = 1, we have (2 V_{ij} - 1) net_{ij} < 0 and, from (6), V is unstable.
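Theorem 2 can be checked numerically by combining the two earlier sketches: build E_1 for a toy instance, pick a feasible vertex, and classify it for values of C on both sides of the threshold. This assumes the helpers build_E1_network and classify_vertex defined above.

    import numpy as np

    c = np.array([[2, 9, 6], [7, 3, 8], [5, 4, 2]])
    V = np.eye(3)                        # feasible solution: person i -> task i
    # max unit cost used by V is c_11 = 3, so the threshold is 2*D*3 = 6
    A, B, D = 10.0, 10.0, 1.0
    for C in (7.0, 5.0):                 # one value above, one below the threshold
        W, theta = build_E1_network(c, A, B, C, D)
        print(C, classify_vertex(W, theta, V.flatten()))
    # expected: 7.0 -> asymptotically stable, 5.0 -> unstable (Theorem 2)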
Theorem 2 shows that, by tuning the value of the coefficient C or D, we can make some particular feasible solution, say, the optimal one, asymptotically stable or unstable. Furthermore, since the conditions in Theorem 2 do not depend on the values of A or B, it is possible to satisfy the instability conditions of the infeasible solutions in Theorem 1 simultaneously. Since we do not want infeasible solutions to be asymptotically stable, we hereafter assume that the values of A and B satisfy the instability conditions of the infeasible solutions in Theorem 1. Hence the set of all the asymptotically stable vertices of this network hypercube contains all the feasible solutions which satisfy the former condition in Theorem 2, and contains none of the infeasible solutions or of the feasible solutions which satisfy the latter condition in Theorem 2.

Note that, for assignment problems, Theorems 1 and 2 of course also hold; since a_{ij} = b_i = 1, the instability conditions of all the infeasible solutions shown in Theorem 1 can be simply expressed as

    A > B + C/2    and    B > C/2 + D c_max.
TABLE I
AN INSTANCE OF AN ASSIGNMENT PROBLEM [COST MATRIX (c_{ij})] AND ITS THREE FEASIBLE SOLUTIONS. THE SYMBOL * MEANS THAT THE VALUE OF c_{ij} IS TOO LARGE TO BE TAKEN INTO CONSIDERATION. THE OPTIMAL SOLUTION V^0 (TOTAL COST = 14), THE NONOPTIMAL SOLUTION V^1 (TOTAL COST = 25), AND THE NONOPTIMAL SOLUTION V^2 (TOTAL COST = 16) ARE SHOWN FROM LEFT TO RIGHT, RESPECTIVELY, WHERE c^k_{ij} IN ANGLE BRACKETS INDICATES V^k_{ij} = 1 FOR EACH FEASIBLE SOLUTION V^k = (V^k_{ij})

C. Limitations

As Theorem 2 shows, the stability of a feasible solution V depends only on the maximum unit cost included in the total cost of this feasible solution, that is, on max_{(i,j): V_{ij} = 1} c_{ij}. Hence, in any network E_1 where an optimal solution V^0 is asymptotically stable, any feasible solution V is also asymptotically stable, and can be converged to, if

    max_{(i,j): V_{ij} = 1} c_{ij} <= max_{(i,j): V^0_{ij} = 1} c_{ij}.

Consider the assignment problem instance given in Table I. The total cost of its optimal solution V^0 or of its nonoptimal solution V^1 is 14 or 25, respectively. Since

    max_{(i,j): V^1_{ij} = 1} c_{ij} <= max_{(i,j): V^0_{ij} = 1} c_{ij}

holds, no matter how well the network E_1 is tuned, the nonoptimal solution V^1 is asymptotically stable, and can be converged to, whenever the optimal solution V^0 is asymptotically stable. Thus, in general, we cannot always obtain an optimal solution, even if the network E_1 converges to a vertex. The network E_1 cannot sharply distinguish optimal solutions from other feasible solutions such as V^1, and we cannot expect the average quality of the feasible solutions obtained by the network E_1 to be good. This is a theoretical limitation of the network E_1.
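The following toy instance (ours; it is not the instance of Table I, whose full cost matrix is not reproduced here) makes the limitation concrete: its unique optimal solution contains the single largest unit cost, so by Theorem 2 every other feasible solution is asymptotically stable whenever the optimal one is.

    import numpy as np
    from itertools import permutations

    c = np.array([[1, 9, 9],
                  [9, 1, 9],
                  [3, 3, 10]])       # the optimal assignment must use c_22 = 10
    sols = []
    for perm in permutations(range(3)):
        cost = sum(c[i, perm[i]] for i in range(3))
        maxc = max(c[i, perm[i]] for i in range(3))
        sols.append((cost, maxc, perm))
    sols.sort()
    for cost, maxc, perm in sols[:3]:
        print(perm, "total cost:", cost, "max unit cost:", maxc)
    # The optimal solution (0,1,2) has total cost 12 but max unit cost 10,
    # while the second-best solutions cost 13 with max unit cost only 9;
    # E_1's test (C vs. 2*D*maxc) therefore cannot separate them.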
IV. A NETWORK THAT MINIMIZES THE SQUARE OF THE TOTAL COST

A. Neural Representation

We can solve the problem by minimizing the square of the cost function [18], as well as by minimizing the cost function itself, as E_1 does. Thus, we sometimes represent the problem (7) as follows:

    E_2 = (A/2) \sum_j (\sum_i V_{ij} - 1)^2 + (B/2) \sum_i (\sum_j a_{ij} V_{ij} - b_i)^2
          + (C/2) \sum_i \sum_j V_{ij} (1 - V_{ij}) + (D/2) (\sum_i \sum_j c_{ij} V_{ij})^2.    (13)

The neural representation E_2 is the same as E_1 except for the last term, which minimizes the square of the total cost, whereas E_1 minimizes the total cost itself. Both are, of course, valid representations of the problems (7). In this section, by exploring the set of all the asymptotically stable vertices of this network hypercube, we show that the network E_2 distinguishes optimal solutions more sharply than E_1, and that it greatly overcomes many of the weaknesses of the network E_1. However, it is also shown that, in general, the network E_2 cannot always obtain an optimal solution even if it converges to a vertex.
B. Stability of Feasible and Infeasible Solutions

As the next theorem shows, all the infeasible solutions are potentially unstable, just as in the previous case. For a vertex V, let m(V) = \sum_i \sum_j c_{ij} V_{ij} denote its total cost; the net input to neuron (i,j) in E_2 is then

    net_{ij} = A (1 - \sum_k V_{kj}) + B a_{ij} (b_i - \sum_l a_{il} V_{il}) + C (V_{ij} - 1/2) - D c_{ij} m(V).

Theorem 3: In the Hopfield network E_2, all the infeasible solutions are unstable if

    A > B a_max b_max + C/2    and    B a_min > C/2 + n D c_max^2

hold, where n is the number of columns of the variable matrix V.

Proof: For any infeasible solution V = (V_{ij}), at least one of the following cases holds:
1) \sum_i V_{ij} >= 2 for some j;
2) \sum_i V_{ij} = 0 for some j;
3) \sum_j a_{ij} V_{ij} > b_i for some i;
4) \sum_j a_{ij} V_{ij} < b_i for some i.

For the cases of 1) and 2), just the same as for Theorem 1, we can show that V is unstable if

    A > B a_max b_max + C/2    and    A > C/2 + n D c_max^2

hold, respectively; in case 2) the cost term satisfies D c_{ij} m(V) <= n D c_max^2 because, when 1) does not hold, each column contains at most one winner and so m(V) <= n c_max. Then, for the case of 3), there exists (i,j) such that V_{ij} = 1 and a_{ij} > 0. Since we can assume that 2) does not hold, \sum_k V_{kj} >= 1 holds for any j, and, since the cost term -D c_{ij} m(V) is nonpositive, we have

    net_{ij} <= -B a_min + C/2.

Hence, if B a_min > C/2, we have net_{ij} < 0, so that V is unstable. For the case of 4), since we can assume that neither 1) nor 3) holds, \sum_k V_{kj} <= 1 holds for any j, m(V) <= n c_max, and there exists j with V_{ij} = 0 and a_{ij} > 0. Hence, for this (i,j), we similarly have

    net_{ij} >= B a_min - C/2 - n D c_max^2.

Hence, if B a_min > C/2 + n D c_max^2, we have (2 V_{ij} - 1) net_{ij} < 0, so that V is unstable.

For the stability and instability of feasible solutions we can also derive the next theorem.

Theorem 4: In the Hopfield network E_2, a feasible solution V = (V_{ij}) is asymptotically stable if

    C > 2 D m(V) max_{(i,j): V_{ij} = 1} c_{ij}

and unstable if

    C < 2 D m(V) max_{(i,j): V_{ij} = 1} c_{ij}

where m(V) = \sum_i \sum_j c_{ij} V_{ij} is the total cost of V.

Proof: For any feasible V and any (i,j), we have

    net_{ij} = C (V_{ij} - 1/2) - D c_{ij} m(V).

For V_{ij} = 0, (2 V_{ij} - 1) net_{ij} = C/2 + D c_{ij} m(V) > 0 always holds. For V_{ij} = 1, if C > 2 D c_{ij} m(V) holds, we have net_{ij} > 0. Hence, from (5), V is asymptotically stable if C > 2 D m(V) max_{(i,j): V_{ij} = 1} c_{ij} holds. Conversely, if C < 2 D c_{ij} m(V) for some (i,j) with V_{ij} = 1, we have net_{ij} < 0 and, from (6), V is unstable.

Just the same as for E_1, the instability conditions of all the infeasible solutions to assignment problems can be simply restated as

    A > B + C/2    and    B > C/2 + n D c_max^2.

Also the same as for E_1, we hereafter assume that the coefficients A and B satisfy the instability conditions of the infeasible solutions shown in Theorem 3. Hence, the set of all the asymptotically stable vertices of this network hypercube contains all the feasible solutions which satisfy the former condition in Theorem 4, and contains none of the infeasible solutions or of the feasible solutions which satisfy this theorem's latter condition.

As Theorem 4 shows, the stability of a feasible solution V depends on the product of its total cost and the maximum unit cost included in that total cost, that is, on m(V) max_{(i,j): V_{ij} = 1} c_{ij}.
In the previous section, the limitation of the network E_1 was theoretically shown: in the network E_1, the feasible solution V^1 shown in Table I is asymptotically stable whenever the optimal solution V^0 shown in Table I is asymptotically stable. However, since

    m(V^0) max_{(i,j): V^0_{ij} = 1} c_{ij} < m(V^1) max_{(i,j): V^1_{ij} = 1} c_{ij}

holds, by selecting C and D to satisfy

    2 D m(V^0) max_{(i,j): V^0_{ij} = 1} c_{ij} < C < 2 D m(V^1) max_{(i,j): V^1_{ij} = 1} c_{ij}

we can, by Theorem 4, make the nonoptimal solution V^1 unstable and the optimal solution V^0 asymptotically stable in the network E_2. This suggests that the network E_2 distinguishes optimal solutions more sharply than E_1, and that the average quality of the solutions obtained is improved; we will confirm this expectation theoretically below. Thus we can expect to obtain optimal or good solutions more frequently by the network E_2 than by E_1.
C. Networks' Partial Ordering

Many neural representations have been proposed for each combinatorial optimization problem, and their performances have been evaluated and compared with each other solely through network simulations of some problem instances, rather than on a theoretical basis. Since the network performance depends deeply on the nature of each problem instance (for example, the city locations in TSPs), it is inappropriate to compare networks' performances solely through simulations. Therefore, we need theoretical methods of comparing network performances.

Matsuda [9], [10], using the stability theory of feasible and infeasible solutions shown above, has proposed a theoretical method of comparing the performances of networks for a combinatorial optimization problem. This method is based on the relative abilities of networks to distinguish optimal solutions from other feasible and infeasible solutions. More precisely, a network is characterized by the set of all the asymptotically stable vertices of its state hypercube, and then, of two networks, one is regarded as distinguishing optimal solutions more sharply than the other if its characterizing set is set-theoretically included in that of the other. Thus, based on this set-theoretical inclusion relation, a partial ordering is given to the set of all the networks for each combinatorial optimization problem, and it represents the relative performance of the networks. This is formally stated as follows.
Definition 1: For any two Hopfield networks, or neural representations, say E_a and E_b, for the same particular problem instance, let S(E_a) and S(E_b) be the sets of asymptotically stable vertices of the state hypercubes of the networks E_a and E_b, respectively. Note that the values of all the coefficients of these neural representations, for example, A, B, C, and D in E_1, are fixed. We assume that S(E_a) and S(E_b) contain all the optimal solutions. Then, we call the network E_a better, or more sharply distinguishing optimal solutions, than the network E_b, denoted as E_a < E_b, if S(E_a) is strictly included in S(E_b). In addition, we denote E_a = E_b if S(E_a) = S(E_b), and E_a <= E_b if S(E_a) is included in S(E_b). If neither E_a <= E_b nor E_b <= E_a holds, we call E_a and E_b incomparable. Thus, a partial ordering of all the neural representations, or Hopfield networks, for each problem instance is given.
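For tiny instances, Definition 1 can be made operational by exhaustive enumeration of the hypercube. The sketch below (names are ours; the exponential enumeration is only for illustration) computes the characterizing sets via condition (5) and compares them by set inclusion.

    import numpy as np
    from itertools import product

    def stable_set(W, theta):
        n = len(theta)
        stable = set()
        for bits in product((0.0, 1.0), repeat=n):
            v = np.array(bits)
            if np.all((2 * v - 1) * (W @ v + theta) > 0):   # condition (5)
                stable.add(bits)
        return stable

    def compare(Wa, ta, Wb, tb):
        Sa, Sb = stable_set(Wa, ta), stable_set(Wb, tb)
        if Sa == Sb:
            return "equivalent"
        if Sa < Sb:
            return "first network is better"
        if Sb < Sa:
            return "second network is better"
        return "incomparable"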
Although this theoretical method is concerned solely with asymptotically stable vertices, the performance of a network depends not only on the asymptotically stable vertices but also on the asymptotically stable points inside the hypercube and on the width of the basin of attraction of each of the asymptotically stable points. However, the stability of the vertices has a close relation to the behavior of the network when it converges inside the hypercube. Furthermore, by applying this theoretical comparison method to two familiar networks for TSPs, Matsuda [9], [10] has established a theoretical justification of the superiority of one network to the other: the superior network is the one with relative distances between cities, and the inferior one is that with absolute distances. That is, for all TSP instances, the former network is better than the latter in the above partial ordering of all the networks for TSPs, and it greatly overcomes the theoretical limitation of the latter network. This superiority had already been observed through simulations [11] and has now been theoretically justified. Thus, this theoretical comparison method is appropriate for comparing networks' capabilities.
D. Superiority and Limitation

Based on the above partial ordering of the networks, we can now theoretically show that E_2 distinguishes optimal solutions more sharply than E_1.

Theorem 5: Let V^0 be an optimal solution, and let the Hopfield network E_2 satisfy

    C = 2 D (m(V^0) max_{(i,j): V^0_{ij} = 1} c_{ij} + \epsilon),    \epsilon > 0.

Then the following hold:
1) For any \epsilon > 0, the optimal solution V^0 is asymptotically stable in the above network E_2, that is, V^0 is in S(E_2).
2) If \epsilon is small enough, S(E_2) is included in S(E_1), that is, E_2 <= E_1, for any network E_1 in which the optimal solution V^0 is asymptotically stable.
Proof: 1) is obvious from Theorem 4, since m(V^0) max_{(i,j): V^0_{ij} = 1} c_{ij} < C / (2D). We next show 2). Let V be a feasible solution in S(E_2); recall that S(E_2) contains no infeasible solutions, since the coefficients of E_2 are assumed to satisfy Theorem 3. Then, by Theorem 4, we have

    2 D m(V) max_{(i,j): V_{ij} = 1} c_{ij} < C = 2 D (m(V^0) max_{(i,j): V^0_{ij} = 1} c_{ij} + \epsilon).

Then, by letting \epsilon < 1, since all the unit costs are integers, we have

    m(V) max_{(i,j): V_{ij} = 1} c_{ij} <= m(V^0) max_{(i,j): V^0_{ij} = 1} c_{ij}.

Hence, since m(V) >= m(V^0), we have

    max_{(i,j): V_{ij} = 1} c_{ij} <= max_{(i,j): V^0_{ij} = 1} c_{ij}.

So, since V^0 is in S(E_1), by Theorem 2 we have V in S(E_1). Thus S(E_2) is included in S(E_1).

Thus, no matter how well the network E_1 is tuned, the network E_2 tuned to satisfy the conditions shown in Theorem 5 distinguishes optimal solutions at least as sharply as any E_1. However, for the nearly optimal but nonoptimal solution V^2 of cost 16 shown in Table I, since

    m(V^2) max_{(i,j): V^2_{ij} = 1} c_{ij} < m(V^0) max_{(i,j): V^0_{ij} = 1} c_{ij}

holds, by Theorem 4, the nonoptimal solution V^2 is asymptotically stable in the network E_2 whenever the optimal solution V^0 is asymptotically stable. Thus, although the network E_2 distinguishes optimal solutions more sharply than E_1, it cannot always yield an optimal solution even if it converges to a vertex, and so it cannot completely overcome the theoretical limitation mentioned before. This is a theoretical limitation of the network E_2.
E. Notion of Optimal Network

It is then of interest to present the best, or optimal, network in the above partial ordering, if any exists, rather than just a network better than E_1 or E_2. The optimal network distinguishes optimal solutions most sharply of all the neural networks, and so we can expect such a network to obtain optimal or nearly optimal solutions most frequently. This concept is formally stated as follows.

Definition 2: A Hopfield network, or neural representation, E* for some particular problem instance is called optimal, or most sharply distinguishing optimal solutions, if E* <= E or, equivalently, S(E*) is included in S(E), for any Hopfield network E for this problem instance.

Note, however, that this definition does not mean that one can always obtain an optimal solution whenever the optimal network converges to a vertex. That is, the optimal network cannot always distinguish optimal solutions completely. Thus, even the optimal network cannot completely overcome the theoretical limitation mentioned above, although it distinguishes optimal solutions most sharply of all the networks. So we have the next definition.
Definition 3: A Hopfield network E for some particular problem instance is called complete if S(E) is the set of all the optimal solutions.

Then, we immediately have the next theorem on the relationship between a complete network and an optimal one (recall that, by Definition 1, the characterizing set of every network under comparison contains all the optimal solutions).

Theorem 6: For any problem instance, a complete network is optimal, and one can always obtain an optimal solution whenever it converges to a vertex.

Thus, the complete network completely overcomes the theoretical limitation mentioned above; however, it is as yet uncertain whether a complete network exists or not. In the next section, for each instance of the combinatorial optimization problems given by (7), we design a complete, and therefore optimal, network. Thus, the optimal networks for (7) are shown to be complete.

V. OPTIMAL NETWORK

A. Neural Representation

Here we present a new neural representation E_3, which is the same as E_2 except for the third term, the binary value constraint:

    E_3 = (A/2) \sum_j (\sum_i V_{ij} - 1)^2 + (B/2) \sum_i (\sum_j a_{ij} V_{ij} - b_i)^2
          + (C/2) \sum_i \sum_j c_{ij} V_{ij} (1 - V_{ij}) + (D/2) (\sum_i \sum_j c_{ij} V_{ij})^2    (14)

where the unit costs c_{ij} are assumed to be positive. Thus the value of the coefficient of the binary value constraint on neuron (i,j) is proportional to the cost c_{ij}. Note that the existence of this binary value constraint also does not change the value of the energy at each vertex of the hypercube, so this is also a valid representation of the problems given by (7). In this section we theoretically show that, of all the Hopfield networks, the network E_3 can distinguish optimal solutions most sharply.
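As a sketch, the energy (14), specialized to assignment problems (a_{ij} = b_i = 1), can be written directly as a function; the function name and the coefficient values in the usage line are illustrative assumptions.

    import numpy as np

    def energy_E3(V, c, A, B, C, D):
        V = np.asarray(V, dtype=float)
        c = np.asarray(c, dtype=float)
        column = (A / 2) * ((V.sum(axis=0) - 1) ** 2).sum()   # one winner per column
        row    = (B / 2) * ((V.sum(axis=1) - 1) ** 2).sum()   # one winner per row
        binary = (C / 2) * (c * V * (1 - V)).sum()            # cost-weighted binary term
        cost   = (D / 2) * (c * V).sum() ** 2                 # square of the total cost
        return column + row + binary + cost

    c = np.array([[2, 9], [7, 3]])
    # At a feasible vertex all constraint terms vanish, leaving (D/2)*m(V)^2.
    print(energy_E3(np.eye(2), c, 10, 10, 12, 1))             # 12.5, since m(V) = 5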

B. Stability of Feasible and Infeasible Solutions

As before, we show the asymptotical stability and instability conditions of the feasible and infeasible solutions for the network E_3. Here again, all the infeasible solutions are potentially unstable, as the next theorem shows. Note that the net input to neuron (i,j) in E_3 is

    net_{ij} = A (1 - \sum_k V_{kj}) + B a_{ij} (b_i - \sum_l a_{il} V_{il}) + C c_{ij} (V_{ij} - 1/2) - D c_{ij} m(V).

Theorem 7: In the Hopfield network E_3, all the infeasible solutions are unstable if

    A > B a_max b_max + (C/2) c_max    and    B a_min > (C/2) c_max + n D c_max^2

hold.

Proof: Similar to that for Theorem 3.

Just the same as for E_1 and E_2, the instability conditions of all the infeasible solutions to assignment problems can be simply stated as

    A > B + (C/2) c_max    and    B > (C/2) c_max + n D c_max^2.

For the stability and instability of feasible solutions we can also give the following theorem.

Theorem 8: In the Hopfield network E_3, a feasible solution V = (V_{ij}) is asymptotically stable if

    C > 2 D m(V)

and unstable if

    C < 2 D m(V)

where m(V) = \sum_i \sum_j c_{ij} V_{ij} is the total cost of V.

Proof: For any feasible V and any (i,j), we have

    net_{ij} = c_{ij} (C (V_{ij} - 1/2) - D m(V)).

For V_{ij} = 0,

    (2 V_{ij} - 1) net_{ij} = c_{ij} (C/2 + D m(V)) > 0

always holds, since c_{ij} > 0. For V_{ij} = 1, if C > 2 D m(V) holds, we have

    (2 V_{ij} - 1) net_{ij} = c_{ij} (C/2 - D m(V)) > 0.

Hence, from (5), V is asymptotically stable if C > 2 D m(V). Conversely, if C < 2 D m(V), we have (2 V_{ij} - 1) net_{ij} < 0 for any (i,j) with V_{ij} = 1; thus, from (6), V is unstable.

Just as in the previous networks, if the coefficients A and B satisfy the instability conditions of the infeasible solutions shown in Theorem 7, the set of all the asymptotically stable vertices of this network hypercube contains all the feasible solutions which satisfy the former condition in Theorem 8, and contains none of the infeasible solutions or of the feasible solutions which satisfy the latter condition in Theorem 8. Remarkably, the stability of a feasible solution in E_3 depends only on its total cost m(V), and no longer on any individual unit cost.

C. Optimal Network

Theorem 8 shows that, if the values of C and D are properly selected, the network E_3 can make all the optimal solutions asymptotically stable and all the nonoptimal solutions unstable. Thus we immediately obtain the next theorem.

Theorem 9: Let the Hopfield network E_3 satisfy the conditions of Theorem 7 and

    C = 2 D (m^0 + \epsilon),    \epsilon > 0

where m^0 is the total cost of an optimal solution, that is, the minimum cost of the problem. Then, if \epsilon is small enough (since the unit costs are integers, any 0 < \epsilon < 1 suffices), E_3 is complete and, consequently, optimal.
Thus, in the network E_3 which satisfies the conditions in Theorem 9, a vertex is asymptotically stable if and only if it is an optimal solution: every optimal solution V^0 has m(V^0) = m^0 < m^0 + \epsilon = C/(2D), while every nonoptimal feasible solution V has m(V) >= m^0 + 1 > m^0 + \epsilon. As a result, one can always obtain an optimal solution whenever the network converges to a vertex.

In order to construct this optimal network E_3, however, knowledge of the minimum cost of the problem is required in advance. Since our objective is to find an optimal solution, this optimal network may seem meaningless. However, E_1 and E_2 also need a priori knowledge of an optimal solution in order to be best tuned. Hence, as far as we employ these Hopfield networks, we cannot completely escape from trial and error in finding the appropriate value of the network coefficient C or D. On this point there is no difference between E_3 and E_1, E_2, and many other networks. However, differently from E_1, E_2, and many other networks, E_3 achieves the above novel performance if it is well tuned. This is why we choose to employ E_3. We can expect to find such an appropriate, or nearly appropriate, value of C or D for E_3, as shown in Theorem 9, by trial and error, without knowing the minimum cost or an optimal solution.

For quadratic combinatorial optimization problems, such as quadratic assignment problems and TSPs, we can design such an optimal network by employing higher order Hopfield networks [19].
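Returning to the linear case, Theorem 9 can be illustrated on a toy assignment instance: with C = 2D(m^0 + epsilon), the Theorem 8 test marks exactly the minimum-cost solutions as stable. The enumeration below is for illustration only; the data are ours.

    import numpy as np
    from itertools import permutations

    c = np.array([[2, 9, 6], [7, 3, 8], [5, 4, 2]])
    D = 1.0
    costs = {perm: sum(c[i, perm[i]] for i in range(3))
             for perm in permutations(range(3))}
    m0 = min(costs.values())                 # minimum cost (here 7)
    C = 2 * D * (m0 + 0.5)                   # Theorem 9 with eps = 0.5

    for perm, m in sorted(costs.items(), key=lambda kv: kv[1]):
        stable = C > 2 * D * m               # Theorem 8 stability test
        print(perm, "cost:", m, "stable:", stable)
    # Only the permutation(s) of cost m0 = 7 are stable; all others are unstable.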

VI. OPTIMAL NETWORK FOR MAXIMIZATION PROBLEMS

In the previous section we have shown the optimal network E_3 for the combinatorial optimization problems given by (7), which minimize the value of the cost function. Some optimization problems, however, maximize the value of the objective function rather than minimize it. In this section an optimal network for these problems is shown.

These maximization problems are formally stated as follows:

    maximize    \sum_i \sum_j c_{ij} V_{ij}
    subject to  \sum_i V_{ij} = 1              for any j
                \sum_j a_{ij} V_{ij} = b_i     for any i                      (15)
                V_{ij} in {0, 1}               for any i and j.

Many NP-complete combinatorial optimization problems, for example, multiple knapsack problems and knapsack problems, can be expressed as (15). Knapsack problems are stated as follows. A set of items is available to be packed into a knapsack whose capacity is b units. Item j is a_j units in size, and p_j is the profit given by item j. The goal is to find the subset of items which should be packed in order to maximize their total profit. Thus, it can be formally stated as

    maximize    \sum_j p_j V_j
    subject to  \sum_j a_j V_j <= b                                           (16)
                V_j in {0, 1}    for any j

where V_j = 1 if item j is packed and V_j = 0 otherwise. As mentioned in Section II, by introducing some slack neurons, we can express knapsack problems given by (16) in the form (15).

In order to solve the optimization problems (15), however, we must restate them as minimizing an objective function rather than maximizing one, since the Hopfield network approach relies on energy minimization. So, we express the objective function in (15) as

    minimize    \sum_i \sum_j (c^ - c_{ij}) V_{ij}                            (17)

where c^ > max_{i,j} c_{ij} (for example, c^ = max_{i,j} c_{ij} + 1), so that every transformed unit cost c^ - c_{ij} is a positive integer; since every feasible solution of (15) has the same number of winners, minimizing (17) is equivalent to maximizing the objective of (15). Thus, we can construct an optimal Hopfield network for (15) as follows:

    E_4 = (A/2) \sum_j (\sum_i V_{ij} - 1)^2 + (B/2) \sum_i (\sum_j a_{ij} V_{ij} - b_i)^2
          + (C/2) \sum_i \sum_j (c^ - c_{ij}) V_{ij} (1 - V_{ij})
          + (D/2) (\sum_i \sum_j (c^ - c_{ij}) V_{ij})^2.                     (18)

Then, we can similarly show that E_4 is actually optimal.

Theorem 10: Let the Hopfield network E_4 satisfy the conditions of Theorem 7, with the unit costs c_{ij} replaced by c^ - c_{ij}, and

    C = 2 D (m^0 + \epsilon),    \epsilon > 0

where V^0 is an optimal solution and m^0 = \sum_i \sum_j (c^ - c_{ij}) V^0_{ij}. Then, if \epsilon is small enough, E_4 is complete and, consequently, optimal.
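For reference in the knapsack setting, a brute-force solver of (16) (illustrative only, and exponential in the number of items) gives the target against which the network's solutions can be checked.

    from itertools import product

    def knapsack_bruteforce(p, a, b):
        n = len(p)
        best = (0, ())
        for bits in product((0, 1), repeat=n):
            if sum(a[j] * bits[j] for j in range(n)) <= b:    # capacity (16)
                profit = sum(p[j] * bits[j] for j in range(n))
                best = max(best, (profit, bits))
        return best

    print(knapsack_bruteforce(p=[6, 5, 4], a=[3, 2, 2], b=4))   # (9, (0, 1, 1))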

TABLE II
RESULTS OF SIMULATIONS CONVERGING TO A VERTEX OF THE NETWORK HYPERCUBE

TABLE III
RESULTS OF SIMULATIONS CONVERGING TO A VERTEX OF OR INSIDE THE NETWORK HYPERCUBE

VII. SIMULATIONS

A. Objectives and Network Tuning

In this section we illustrate, through simulations of the networks E_1, E_2, and E_3, the theoretical characterizations derived above. Simulations are made for ten instances of 20 x 20 assignment problems, each unit cost of which is randomly generated from one to 100. The three networks are tuned to distinguish optimal solutions from other feasible and infeasible solutions as sharply as possible, so that each network can obtain an optimal solution at a vertex of the hypercube. Such an E_3 is the optimal network given by Theorem 9. We can also construct such E_1 and E_2 by making the coefficient C slightly larger than the value of the right-hand side of the asymptotical stability condition for an optimal solution shown in Theorems 2 and 4, respectively, and also by making A and B satisfy the instability conditions of the infeasible solutions in Theorems 1 and 3, respectively. For each network, we make ten simulations, changing the initial states of the network randomly near the center of the hypercube. Thus, 300 (= ten problem instances x ten initial states x three networks) simulations are made.

The value of each neuron is updated asynchronously by employing the difference equation

    u_{ij}(t + 1) = u_{ij}(t) + \Delta t (\sum_{k,l} w_{ij,kl} V_{kl}(t) + \theta_{ij})    (19)

rather than the differential equation (1); an output value V_{ij} is given by the sigmoid function (2). The values of \Delta t and T are selected by trial and error. Note that annealing is not employed.
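A minimal sketch of the asynchronous discrete-time update (19) is given below. It assumes a network already expressed in the form (3) (for example, by build_E1_network above); the step size, temperature, and sweep count are illustrative, not the values used in the experiments.

    import numpy as np

    def run_async(W, theta, T=1.0, dt=0.1, sweeps=200, seed=0):
        rng = np.random.default_rng(seed)
        n = len(theta)
        u = rng.normal(scale=0.01, size=n)      # start near the hypercube center
        for _ in range(sweeps):
            for i in rng.permutation(n):        # asynchronous, random order
                V = 1.0 / (1.0 + np.exp(-u / T))
                u[i] += dt * (W[i] @ V + theta[i])   # difference equation (19)
        return 1.0 / (1.0 + np.exp(-u / T))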
B. Results

Although all the simulations converge, some of them, of course, do not converge to a vertex of the hypercube. First, the results of the simulations which converge to a vertex are shown in Table II. The rates of convergence to a vertex for the three networks are shown in the first line; we can see E_3's good convergence to vertices. The second line shows the rates of convergence to feasible solutions, and, by comparing the first and second lines, we can see that all the simulations converging to a vertex obtain feasible solutions. This is because all the networks satisfy the instability conditions of the infeasible solutions shown in Theorems 1, 3, or 7. Most importantly, as the third line shows, there is a significant difference in the rates of simulations which converge to optimal solutions. The network E_1 converges to optimal solutions only 1% of the time, whereas E_2 and E_3 obtain optimal solutions 31% and 58% of the time, respectively. By comparing the first and third lines, we can see that E_3 always obtains an optimal solution whenever it converges to a vertex, whereas neither E_1 nor E_2 can always obtain an optimal solution even when it converges to a vertex. This is what Theorems 2, 4, and 9 mean. The last line shows another important difference, in the average error of the feasible solutions obtained by these networks, where the average error (%) is given by

    average error (%) = 100 x (average cost obtained - minimum cost) / minimum cost.

Thus, we can see great improvements in the networks E_2 and E_3, in particular in E_3.

As mentioned before, one cannot, strictly speaking, obtain a solution to a combinatorial optimization problem if a network converges inside the hypercube, because the meaning of each neuron is given only for a value of zero or one. However, one usually accepts simulations which converge inside the hypercube if there is a correspondence between the converging point and a vertex by some criterion. Although our theory focuses only on the cases in which the networks converge to vertices, it is also interesting to examine the quality of the solutions obtained by these networks when they converge inside the hypercube. By rounding V_{ij} to one if V_{ij} > 0.5 and to zero if V_{ij} <= 0.5, we show in Table III the results of all the simulations, which converge either to a vertex or inside the hypercube. As shown in the first line, we always obtain feasible solutions in these 300 simulations. From the second and third lines, we can see similar differences in performance among the three networks in this case as well.

Thus, in either case, we can see the relation E_3 <= E_2 <= E_1, and that the optimal network E_3 obtains optimal solutions most frequently and achieves the best performance.
VIII. CONCLUSIONS

We have presented the optimal Hopfield network for many combinatorial optimization problems with a linear cost function. It is proved that a vertex of this network's state hypercube is asymptotically stable if and only if it is an optimal solution. That is, one can always obtain an optimal solution whenever the network converges to a vertex. On the basis of a theoretical measure of network performance, it has been shown that this network is optimal. It has also been shown, through simulations of assignment problems, that this network obtains optimal or nearly optimal solutions more frequently than other conventional networks, including when they converge inside their state hypercubes.

However, some of the problems which Hopfield networks have remain unsolved, even for the optimal Hopfield network. That is, in order to obtain the optimal network, we need to tune the network coefficients, probably by trial and error; so we still cannot escape from trial and error. Furthermore, the optimal network may not converge to a vertex of the state hypercube. However, differently from other Hopfield networks, the optimal network achieves the above novel performance once it is well tuned and converges to a vertex.

Although the optimal network presented here is not applicable to combinatorial optimization problems with a quadratic cost function, such as quadratic assignment problems and TSPs, we can design such an optimal network for quadratic problems by employing higher order Hopfield networks [19].
APPENDIX

Proof of (5) and (6): The asymptotical stability and instability conditions of a vertex of the network state hypercube have already been given [15], but their proof is repeated here for convenience.

First, by taking the derivative of (2) with respect to time t and substituting (1), we have

    dV_i/dt = g'(u_i) du_i/dt = g'(u_i) (\sum_j w_{ij} V_j + \theta_i).

For the sigmoid function (2), g'(u_i) = V_i (1 - V_i) / T holds, so the dynamics can be stated in terms of V alone as

    dV_i/dt = F_i(V) = (1/T) V_i (1 - V_i) (\sum_j w_{ij} V_j + \theta_i).    (20)

Since V_i (1 - V_i) = 0 at every vertex, any vertex V in {0, 1}^n is an equilibrium of (20).

We now show the stability condition of a vertex V, an equilibrium of (20), by deriving the eigenvalues of the Jacobian matrix J = (dF_i/dV_j) of (20) evaluated at V. For any vertex V, since V_i (1 - V_i) = 0 holds, we have

    dF_i/dV_j = (1/T) \delta_{ij} (1 - 2 V_i) (\sum_k w_{ik} V_k + \theta_i)

where \delta_{ij} is Kronecker's delta. Hence the Jacobian matrix is a diagonal matrix, so its eigenvalues are the diagonal elements

    \lambda_i = (1/T) (1 - 2 V_i) (\sum_j w_{ij} V_j + \theta_i).             (21)

It is well known that an equilibrium is asymptotically stable if all the eigenvalues of the Jacobian matrix have negative real parts, and unstable if at least one eigenvalue has a positive real part [20]. So, a vertex V = (V_i) is asymptotically stable if

    (2 V_i - 1) (\sum_j w_{ij} V_j + \theta_i) > 0                            (5)

holds for every neuron i, that is, if \lambda_i < 0 for every i, and unstable if

    (2 V_i - 1) (\sum_j w_{ij} V_j + \theta_i) < 0                            (6)

holds for some neuron i.
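The diagonal form (21) is easy to confirm numerically: at a vertex, a central-difference Jacobian of F in (20) matches the closed form. The toy data, step size, and tolerance below are arbitrary choices of ours.

    import numpy as np

    rng = np.random.default_rng(1)
    n, T, h = 4, 1.0, 1e-6
    W = rng.normal(size=(n, n)); W = (W + W.T) / 2
    theta = rng.normal(size=n)
    F = lambda V: V * (1 - V) * (W @ V + theta) / T      # right-hand side of (20)

    v = np.array([1.0, 0.0, 1.0, 0.0])                   # an arbitrary vertex
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (F(v + e) - F(v - e)) / (2 * h)        # central differences
    closed_form = np.diag((1 - 2 * v) * (W @ v + theta) / T)   # equation (21)
    print(np.allclose(J, closed_form, atol=1e-5))        # True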


REFERENCES

[1] G. V. Wilson and G. S. Pawley, "On the stability of the travelling salesman problem algorithm of Hopfield and Tank," Biol. Cybern., vol. 58, pp. 63-70, 1988.
[2] B. Kamgar-Parsi and B. Kamgar-Parsi, "On problem solving with Hopfield neural networks," Biol. Cybern., vol. 62, pp. 415-423, 1990.
[3] S. V. B. Aiyer, M. Niranjan, and F. Fallside, "A theoretical investigation into the performance of the Hopfield model," IEEE Trans. Neural Networks, vol. 1, pp. 204-215, June 1990.
[4] S. Abe, "Global convergence and suppression of spurious states of the Hopfield neural networks," IEEE Trans. Circuits Syst. I, vol. 40, pp. 246-257, Apr. 1993.
[5] S. Matsuda, "On the stability of the solution in Hopfield neural network: The case of travelling salesman problem," Inst. Electron., Inform., Commun. Eng., Tech. Rep. NC93-8, 1993, in Japanese.
[6] S. Matsuda, "The stability of the solution in Hopfield neural network," in Proc. IEEE Int. Joint Conf. Neural Networks (IJCNN'93), 1993, pp. 1524-1527.
[7] S. Matsuda, "Stability of solutions in Hopfield neural network," Trans. Inst. Electron., Inform., Commun. Eng., vol. J77-D-II, pp. 1366-1374, July 1994, in Japanese; also translated in Syst. Comput. Japan, vol. 26, pp. 67-78, May 1995.
[8] S. Matsuda, "Theoretical characterizations of possibilities and impossibilities of Hopfield neural networks in solving combinatorial optimization problems," in Proc. IEEE Int. Conf. Neural Networks (ICNN'94), 1994, pp. 4563-4566.
[9] S. Matsuda, "Theoretical comparison of mappings of combinatorial optimization problems to Hopfield networks (extended abstract)," in Proc. IEEE Int. Conf. Neural Networks (ICNN'95), 1995, pp. 1674-1677.
[10] S. Matsuda, "Set-theoretic comparison of the mappings of combinatorial optimization problems to Hopfield neural networks," Trans. Inst. Electron., Inform., Commun. Eng., vol. J78-D-II, pp. 1531-1542, Oct. 1995, in Japanese; also translated in Syst. Comput. Japan, vol. 27, pp. 45-59, June 1996.
[11] H. Kita, H. Odani, and Y. Nishikawa, "Solving a placement problem by means of an analog neural network," IEEE Trans. Ind. Electron., vol. 39, pp. 543-551, Dec. 1992.
[12] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proc. Nat. Academy Sci. USA, vol. 81, pp. 3088-3092, May 1984.
[13] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biol. Cybern., vol. 52, pp. 141-152, 1985.
[14] S. Matsuda, "Distribution of asymptotically stable states in Hopfield network for TSP (extended abstract)," in Proc. IEEE Int. Conf. Neural Networks (ICNN'96), 1996, pp. 529-534.
[15] Y. Uesaka, "On the stability of neural networks with the energy function of binary variables," Inst. Electron., Inform., Commun. Eng., Tech. Rep. PRU88-6, 1988, in Japanese.
[16] S. Matsuda, "Quantized Hopfield networks for integer programming," Trans. Inst. Electron., Inform., Commun. Eng., vol. J81-D-II, pp. 1354-1364, June 1998, in Japanese; also translated in Syst. Comput. Japan, to be published.
[17] S. Martello and P. Toth, Knapsack Problems. New York: Wiley, 1990.
[18] M. Takeda and J. W. Goodman, "Neural networks for computation: Number representations and programming complexity," Appl. Opt., vol. 25, pp. 3033-3046, Sept. 1986.
[19] S. Matsuda, "Optimal Hopfield network of higher order for quadratic combinatorial optimization," to be published.
[20] M. W. Hirsch and S. Smale, Differential Equations, Dynamical Systems, and Linear Algebra. New York: Academic, 1974.

Satoshi Matsuda (M'95) received the B.E., M.E., and Ph.D. degrees from Waseda University, Tokyo, Japan, in 1971, 1973, and 1976, respectively.

He then was with Fujitsu Corp., where he engaged in research and development of operating systems, on-line systems, and artificial intelligence. In 1987, he joined Tokyo Electric Power Company, where he is currently in the Computer and Communications Research Center. He is also a Lecturer in computer science at the Graduate School of Chuo University, Tokyo. His research fields include the theory of artificial intelligence and neural networks, but he is interested in computer science in general.

Dr. Matsuda is a member of the IEEE Computer Society, the ACM, the Institute of Electronics, Information, and Communication Engineers, the Information Processing Society of Japan, the Japan Society for Software Science and Technology, and the Japanese Society for Artificial Intelligence.
