Computational Evolution
Jing Han

The Santa Fe Institute,
Sante Fe, NM, 87501
and
Institute of Systems Science,
Academy of Mathematics and Systems Science,
Beijing, China, 100080
This paper compares computational evolution using local and global eval
uation functions in the context of solving two classical combinatorial
problems on graphs: the kcoloring and minimum coloring problems. It
is shown that the essential difference between traditional algorithms us
ing local search (such as simulated annealing) and distributed algorithms
(such as the Alife&AER model) lies in the evaluation function. Sim
ulated annealing uses global information to evaluate the whole system
state, which is called the global evaluation function (GEF) method. The
Alife&AER model uses local information to evaluate the state of a single
agent, which is called the local evaluation function (LEF) method. Com
puter experiment results show that the LEF methods are comparable to
GEF methods (simulated annealing and greedy), and in some problem in
stances the LEF beats GEF methods. We present the consistency theorem
which shows that a Nash equilibrium of an LEF is identical to a local
optimum of a GEF when they are “consistent.” This theorem explains
why some LEFs can lead the system to a global goal. Some rules for
constructing a consistent LEF are proposed.
1. Introduction
The process of solving a combinatorial problemis often like an evolution
of a system which is guided by evaluation functions. The purpose of
this paper is to explore some of the differences between local and global
evaluation functions in the context of graph coloring problems. We
limit our discussion to the evolution of systems that are composed of
homogeneous agents in limited state space. Note that these agents do
not evolve their strategies during evolution.
Evolution is not pure random processing. There always exists an
explicit or implicit evaluation function which directs the evolution. In

Electronic mail address: hanjing@amss.ac.cn.
Complex Systems, volume (year) 1–1+; © year Complex Systems Publications, Inc.
2 J. Han
different contexts, the evaluation function is also called the objective
function, penalty function, ﬁtness function, cost function, or energy E.
It has the form f : S e H ÷ , where H is the state (conﬁguration)
space. The evaluation function calculates how good a state of the whole
system (or a single agent) is, then selection for the next step is based on
its value. The evaluation function plays a key role in evolution and is
one of the fundamental problems of evolution.
Generally speaking, the system state (conﬁguration) related to the
minimal evaluationvalue is the optimumof the system. For example, the
Hamiltonian is used to evaluate the energy of a conﬁguration of the spin
glass system in statistical physics. So in this case the Hamiltonian is the
evaluation function and the groundstate is related to the minimum of
the Hamiltonian. In combinatorial problems, such as the SAT problem
[1], the evaluation function is the total number of unsatisﬁed clauses
of the current conﬁguration. So the evaluation value for a solution
of the SAT problem is zero. In artiﬁcial molecular evolution systems,
the evaluation function is the ﬁtness function, and evolution favors
conﬁgurations that are related to the maximal ﬁtness value.
A system may evolve in different paths if it is guided by different
evaluation functions. But there always exists pairs of different evolution
functions that lead to the same ﬁnal state although the paths may be
different. Therefore, for a speciﬁed system, people are able to construct
different evaluation functions to reach the same goal state. As the main
theme of this paper, we investigate the following two ways to construct
evaluation functions.
1. Global evaluation function (GEF). GEF methods use global information
to evaluate the whole system state. This is the traditional way of con
structing the evaluation function and is widely used in many systems. The
GEF embodies the idea of centralized control.
2. Local evaluation function (LEF). LEF methods use local (limited) infor
mation to evaluate a single agent’s state and guide its evolution.
The idea of using local rules based on local information is one of the
features of complex systems such as cellular automata [2], Boid [3],
game of life [4], escape panic model [5], and sandpile model [6]. The
private utility function in game theory [7] and the multiagent learning
system COIN [8] are very close to the LEF idea except that the utility
function is allowed to use global information.
It is a simple and natural idea to use a GEF to guide the evolution of
a whole system to a global goal. But can we use an LEF to guide each
agent so that the whole system attains the global goal in the end? In
other words, if each agent makes its own decisions according to local
information, can the agents get to the same system state reached by evo
lution guided by a GEF? Some emergent global complex behaviors from
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 3
local interactions have been found in natural and manmade systems
[3–5, 9].
Most traditional algorithms used for solving combinatorial problems
use a GEF, such as simulated annealing [10], and some new ones use an
LEF, such as extremal optimization [11] and the Alife&AER model [12,
13]. Our earlier paper [14] investigated emergent intelligence in solving
constraint satisfaction problems (CSPs, also called decision problems)
[15] by modeling a CSP as a multiagent system and then using LEF
methods to ﬁnd the solution.
This paper studies the relationship and essential differences between
local and global evaluation functions for the evolution of combinatorial
problems, including decision and optimization problems. We use the
coloring problem to demonstrate our ideas because it has two forms:
the kcoloring problem, which is a decision problem; and the minimum
coloring problem, which is an optimization problem. We found that
some LEFs are consistent with some GEFs while some are not. So LEF
and GEF methods differ in two layers: consistency and exploration. We
also compare the performance of LEF and GEF methods using com
puter experiments on 36 benchmarks for the kcoloring and minimum
coloring problems.
This paper is organized as follows. In section 2, computer algorithms
for solving coloring problems are reviewed, and then a way to model
a coloring problem as a multiagent system is described. Section 3 is
about GEFs and LEFs. We ﬁrst show how to construct a GEF and an
LEF for solving the coloring problem (section 3.1), and then show their
differences from aspects of consistency (section 3.2.1) and exploration
heuristics (section 3.2.2). Computer experiments (section 3.3) show
that the LEF (Alife&AER model) and GEF (simulated annealing and
greedy) are good at solving different instances of the coloring problem.
Conclusions are presented in section 4.
2. Coloring problems
The coloring problem is an important combinatorial problem. It is NP
complete and is useful in a variety of applications, such as timetabling
and scheduling, frequency assignment, and register allocation [15]. It is
simple and can be mapped directly onto a model of an antiferromagnetic
system [16]. Moreover, it has obvious network structure, making it
a good example for studying network dynamics [17]. The coloring
problemcan be treatedin either decision problem(CSPs) or optimization
problem form.
Deﬁnition 1 (kcoloring problem) Given a graph G = (V, e), where V is the
set of n vertices and e is the set of edges, we need to color each vertex
using a color from a set of k colors. A “conﬂict” occurs if a pair of
Complex Systems, volume (year) 1–1+
4 J. Han
Green
Blue
Red
V
3
V
2
Red V
4
V
1
Figure 1. A solution for a 3coloring problem.
linked vertices has the same color. The objective of the problem is
either to ﬁnd a coloring conﬁguration with no conﬂicts for all vertices,
or to prove that it is impossible to color the graph without any conﬂict
using k colors. Therefore, the kcoloring problem is a decision problem.
There are n variables for the n vertices where each variable is one of
k colors. For each pair of nodes linked by an edge, there is a binary
constraint between the corresponding variables that disallows identical
assignments. Figure 1 is an example:
Variable set X = ¦X
1
, X
2
, X
3
, X
4
¦,
Variable domain D
1
= D
2
= D
3
= D
4
= ¦green, red, blue¦,
Constraint set R = ¦X
1
= X
2
, X
1
= X
3
, X
1
= X
4
, X
2
= X
3
¦.
The kcoloring problem (k ± 3) is NPcomplete. It might be impossi
ble to ﬁnd an algorithm to solve the kcoloring problem in polynomial
time.
Deﬁnition 2 (Minimum coloring problem) This deﬁnition is almost the same
as the kcoloring problem except that k is unknown. It requires ﬁnding
the minimum number of colors that can color the graph without any
conﬂict. The minimum possible number of colors of G is called the
chromatic number and denoted by Χ(G). So the domain size of each
variable is unknown. For the graph shown in Figure 1, Χ(G) = 3.
The minimum coloring problem is also NPcomplete.
2.1 Algorithms
There are several basic general algorithms for solving combinatorial
problems. Of course they can also be used to solve the coloring problem.
Although we focus on the coloring problem, the purpose of this paper is
to demonstrate the ideas of LEF and GEF methods for all combinatorial
problems. What follows is a brief review of all general algorithms used
for combinatorial problems, even though some of them might not have
been applied to the coloring problem. The focus is only on the basic
ones, so variants of the basic algorithms will not be discussed.
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 5
Backtracking …
Ant Colony
Optimization …
Local search (Simulated Annealing)
Extremal Optimization
Alife&AER Model …
Genetic Algorithm
Particle Swarm Optimization …
Singlesolution Algorithms
Multiplesolution Algorithms
h
c
r
a
e
S
c
i
t
a
m
e
t
s
y
S
e

t
e
s
t
t
a
r
e
n
e
G
Figure 2. Classiﬁcations of basic algorithms for combinatorial problems. This
paper focuses on GTSS algorithms, the intersection of singlesolution and
generatetest (the gray area).
Figure 2 is our classiﬁcation of current algorithms. According to how
many (approximate) solutions the system is seeking concurrently, they
can be divided into singlesolution and multiplesolution algorithms.
There are two styles of search: systematic search and generatetest.
This paper will focus on GTSS algorithms: the intersection of single
solution and the generatetest algorithms (the gray area in Figure 2),
because the process of ﬁnding a solution using GTSS algorithms can be
regarded as the evolution of a multiagent system.
Singlesolution algorithms (SS). The system is searching for only one con
ﬁguration at a time. Some examples are backtracking [18], local search
[10, 19–26], extremal optimization (EO) [18], and the Alife&AER
model [12, 13].
Multiplesolution algorithms (MS). The system is concurrently searching
for several conﬁgurations, which means that there are several copies of
the system. Examples of this are genetic algorithms [27], ant colony
optimization [9], and particle swarm optimization [28]. They are all
populationbased optimization paradigms. The system is initialized
with a population of random conﬁguration and searches for optima by
updating generations based on the interaction between conﬁgurations.
Backtracking. The systemassigns colors to vertices sequentially and then
checks for conﬂicts in each colored vertex. If conﬂicts exist, it will go
back to the most recently colored vertex and perform the process again.
Many heuristics have been developed to improve performance. For the
coloring problem, several important algorithms are SEQ[29], RLF [29],
and DSATUR [30].
Generatetest (GT). This is a much bigger and more popular family than
the backtracking paradigm. GT generates a possible conﬁguration (col
ors all vertices) and then checks for conﬂicts (i.e., checks if it is a so
Complex Systems, volume (year) 1–1+
6 J. Han
lution). If it is not a solution, it will modify the current conﬁguration
and check it again until a solution is found. The simplest but inef
ﬁcient way to generate a conﬁguration is to select a value for each
variable randomly. Smarter conﬁguration generators have been pro
posed, such as hill climbing [25] or minconﬂict [20]. They move to
a new conﬁguration with a better evaluation value chosen from the
current conﬁguration’s neighborhood (see details in section 3.1) until
a solution is found. This style of searching is called local (or neigh
borhood) search. For many largescale combinatorial problems, lo
cal search always gives better results than the systematic backtracking
paradigm. To avoid being trapped in the localoptima, it sometimes per
forms stopandrestart, randomwalk [26], and Tabu search [23, 24, 31].
Simulated annealing [10] is a famous heuristic of local search inspired
by the roughly analogous physical process of heating and then slowly
cooling a substance to obtain a strong crystalline structure. People pro
posed simulated annealing schedules to solve the coloring problem and
showed that it can dominate backtracking diagrams on certain types of
graphs [29].
Some algorithms that use the complex systems ideas are genetic al
gorithms, particle swarm optimization, extremal optimization, and the
Alife&AER model.
Extremal optimization and the Alife&AER model also generate a
random initial conﬁguration, and then change one variable’s value in
each improvement until a solution is found or the maximum number of
trials is reached. Note that extremal optimization and the Alife&AER
model improve the variable assignment based on LEFs, which is the
main difference from simulated annealing.
Both genetic algorithms and particle swarm optimization have many
different conﬁgurations at a time. Each conﬁguration can be regarded
as an agent (called a “chromosome” in genetic algorithms). The ﬁt
ness function for each agent evaluates how good the conﬁguration rep
resented by the agent is, so the evolution of the system is based on
a GEF.
1
All of the given algorithms have their advantages and drawbacks,
and no algorithm can beat all other algorithms on all combinatorial
problems. Although GT algorithms always beat backtracking on large
scale problems, they are not complete algorithms. They cannot prove
there is no solution for decision problems and sometimes miss solutions
that do exist.
1
Some implementations of particle swarmoptimization can use either a GEF or an LEF.
Since particle swarm optimization is not a singlesolution algorithm, it is not discussed in
this paper.
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 7
2.2 Multiagent modeling
We proposed modeling a CSP in the multiagent framework in [14].
Here we translate the coloring problem into the language of multiagent
systems.
The concept of agent (denoted by a) is a computational entity. It
is autonomous. It is able to act with other agents and get information
fromthe system. It is driven by some objectives and has some behavioral
strategies based on information it collects.
A multiagent system (MAS) is a computational system that consists
of a set of agents A = ¦a
1
, a
2
, . . . , a
n
¦, in which agents interact or work
together to reach goals [32]. Agents in these systems may be homo
geneous or heterogeneous, and may have common or different goals.
MAS focus on the complexity of a group of agents, especially the local
interactions among them (Figure 3).
It is straightforward to construct a MAS for a coloring problem:
n vertices ¦V
1
, V
2
, . . . , V
n
¦ ÷ n agents ¦a
1
, a
2
, . . . , a
n
¦
Possible colors of V
i
÷ possible states of agent a
i
Color of V
i
is red = X
i
= red ÷ state of agent a
i
is a
i
.state = red
Link between two vertices ÷ interaction between two agents
Graph ÷ interaction network in MAS
Conﬁguration S = X
1
, X
2
, . . . , X
n
÷ a system state
Solution S
sol
÷ goal system state
Process of ﬁnding a solution by GTSS algorithms ÷ evolution of MAS
Evaluation function ÷ evolution direction of MAS
G
B
R
V
3
V
2
B V
4
V
1
Agent
Interaction
R
G
B
R
G
B
R
G
B
R
G
B
a
1
a
3
a
4
a
2
(a) (b)
Figure 3. A MAS (right) for a 3coloring problem (left). The table with three
lattices beside each agent represents all possible states (Rred, Ggreen, Bblue),
and the circle indicates the current agent state (color). The current conﬁguration
S = G, R, B, B is a solution.
Complex Systems, volume (year) 1–1+
8 J. Han
G
B
R
V
3
V
2
G V
4
V
1
An initialization (a)
R G B
a
1
a
2
a
3
a
4
G
B
R
V
3
V
2
B V
4
V
1
(b) A solution state
R G B
a
1
a
2
a
3
a
4
Figure 4. The MAS representation for a 3coloring problem.
3. Global and local evaluation functions
3.1 Evolution
We now need to design a set of rules (or algorithm) to evolve conﬁgu
rations of the coloring problems. The core of the algorithm is how the
whole system (conﬁguration) changes from one state to another state,
that is, how variables change their assigned values.
For example, if the system is initialized as shown in Figure 4, S =
G, R, B, G is obviously not a solution. There is a conﬂict between
V
1
and V
4
which are linked to each other and have the same color.
Thus, the system needs to evolve to reach a solution state, for example,
S = G, R, B, B (see Figure 4(b)).
How does the system evolve to the solution state? As mentioned ear
lier, GTSS algorithms (e.g., simulated annealing, extremal optimization,
and the Alife&AER model) are schedules for evolution of the MAS. In
the language of MAS, those algorithms all share the same following
framework.
1. Initialization (t = 0): each agent picks a random state, get S(0).
2. For each time step (t): according to some schedule and heuristics, one
or several agents change their state, so that S(t + 1) = Evolution(S(t));
t ÷ t + 1.
3. Repeat step 2 until a solution is found or the maximum number of trials
is reached.
4. Output the current state S(t) as the (approximate) solution.
The essential difference between the local and global functions lies
in the process of Evolution(S) in step 2. GEF methods are described in
section 3.1.1 and LEF methods follow in section 3.1.2.
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 9
3.1.1 Traditional methods: Global evaluation functions
Evolution(S) of step 2 for GEF methods can be described as:
Compute the GEF values of all (or some, or one) neighbors of the current
system state S(t), and then select and return a neighboring state S(t)
·
to
update the system.
A neighborhood in GEF methods is the search scope for the current
system state. For example, we can deﬁne the neighborhood structure of
a conﬁguration S e H based on the concept of Hamming distance:
N
d
S
= ¦S
·
 Hammingdistance(S, S
·
) = d¦,
where
Hammingdistance(S, S
·
) =
i
(1 ÷ ∆(X
i
, X
·
i
)),
with ∆ denoting the Kronecher function ∆(x, y) = 1 if x = y and otherwise
∆(x, y) = 0.
For example, if d = 1 in the kcoloring problem, the GEF search space
is N
1
S
and there are n(k ÷ 1) neighbors of each system state. Figure 4(b)
is a neighbor of Figure 4(a). In Figure 6, (b) and (c) are neighbors of
(a). If d is larger, the neighborhood size is bigger. This will enlarge the
search scope and increase the cost of searching the neighborhood. But
there will probably be greater improvement in each time step. So the
neighborhood structure will affect the efﬁciency of the algorithm[33]. In
the following discussions, we always use the neighborhood structure N
1
S
since the neighborhood structure is not in the scope of this paper. Note
that simulated annealing just randomly generates a neighbor and accepts
it if it is a better state, or with a speciﬁed probability to accept a worse
state. Here “better” and “worse” are deﬁned by the evaluation value.
A GEF evaluates how good the current system state S is. In simulated
annealing, the GEF is the energy or cost function of the current system
state and guides the search. There are a variety of evaluation functions
for a speciﬁed problem, and a suitable choice can get good performance.
A GEF for the kcoloring problem. One naïve GEF for the kcoloring prob
lem is to count the number of conﬂicts in the current conﬁguration. For
simplicity, equation (1) counts twice the number of conﬂicts:
E
GEFk
(S) =
n
i=1
X
j
eS
÷i
∆(X
i
, X
j
) (1)
where S
÷i
= X
j
 i, j e e denotes the set of variables that link to V
i
.
Equation (1) is a GEF because it considers the whole system includ
ing all agents. For the system state shown in Figure 4(a), the evaluation
value is 1 × 2 = 2. Figure 4(b) shows a solution state, so E
GEF
(S
sol
) = 0.
Complex Systems, volume (year) 1–1+
10 J. Han
Minimizing the evaluation value is the goal of the solving process, al
though a state with a better evaluation value does not always mean that
it is closer to the solution state.
A GEF for the minimum coloring problem. There are several evaluation
functions [29, 34, 35] for ﬁnding the chromatic number Χ(G). We
modiﬁed the version given by Johnson [29] to be:
E
GEFO
(S) = ÷
m
i=1
C
i

2
(a)
+n
n
i=1
X
j
eS
÷i
∆(X
i
, X
j
)
(b)
(2)
where m is the current total number of colors being used. The set C
i
includes variables that are colored by the ith color. We assign each color
s an index number Index(s). Say red is the ﬁrst color, Index(red) = 1;
green is the second color, Index(green) = 2; blue is the third color,
Index(blue) = 3; and so on. So
C
i
= ¦X
j
 Index(X
j
) = i, j e 1. . . n¦.
Part (b) of equation (2) counts twice the number of conﬂicts in the
current conﬁguration. Part (a) favors using fewer colors. n in part (b)
is a weight value to balance parts (a) and (b), which makes all the local
optima in equation (2) correspond to a nonconﬂicting solution, that is,
a feasible solution.
For example, using equation (2) again, in Figure 5(a): C
1
= ¦X
2
, X
4
¦,
C
2
= ¦O¦, and C
3
= ¦X
1
, X
3
¦, so E
GEFO
(S
a
) = ÷(2
2
+2
2
) +4 × 1 ×2 = 0.
We can get the evaluation value of its two neighbors: E
GEFO
(S
b
) = 2
and E
GEFO
(S
c
) = ÷6. E
GEFO
(S
b
) > E
GEFO
(S
a
) shows that S
b
is a worse
B
V
3
V
2
V
4
V
1
R
R
B
(a) S
a
B
V
3 V
2
V
4
V
1
R
G
B
(b) S
b
G
V
3 V
2
V
4
V
1
R
R
B
(c) S
c
Figure 5. An example of using GEFs for a coloring problem. S
a
is the initial
state. S
b
and S
c
are two neighbors of S
a
. As a 3coloring problem, based on
equation (1), E
GEFk
(S
a
) = 2, E
GEFk
(S
b
) = 2, E
GEFk
(S
c
) = 0. So S
c
is the
solution. As a minimum coloring problem based on equation (2), E
GEFO
(S
a
) =
0, E
GEFO
(S
b
) = 2, E
GEFO
(S
c
) = ÷6, so S
c
is the best one (also an optimal
solution).
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 11
state than S
a
, because it uses one more color, but it still has one conﬂict
as does S
a
. For E
GEFO
(S
c
) < E
GEFO
(S
a
), S
c
is a better state because it
can remove the conﬂict in S
a
although it uses one more color. Actually
S
c
is an optimal solution.
So we can see that the GEF methods have two main features: (1) they
evaluate the whole system state, not a single agent’s state; (2) they
use all information of the system, equations (1) and (2) consider all
agents’ states. Therefore, centralized control in GEF methods is obvious:
in each step, the system checks all (or one) neighbors and selects the
one that has the best evaluation value as the new system state. No
agents here are autonomous. They all obey the choice of the centralized
controller.
3.1.2 Local evaluation function methods
Evolution(S) of step 2 using LEF methods can be described as:
For each agent a
i
, do the following sequentially:
2
Compute the evaluation
values of all/some states by a
i
’s LEF, and then select a state for a
i
to update
itself.
After obtaining the evaluation values for all states of an agent, it will
update its state using some strategies, such as leastmove, bettermove,
and randommove (random walk). Leastmove means the agent will
select the best state and change to that state, and randommove means
the agent will change its state randomly (for details, see [13]). These
strategies are part of the exploration heuristics (see section 3.2.2).
An LEF for the kcoloring problem. Like the GEF, there are a variety of
functions which can serve as the LEF for a problem. Equation (3) is
one possible LEF for each agent a
i
in the kcoloring problem [13]. It
calculates how many conﬂicts each state (s) of agent a
i
receives based
on the current assignments of a
i
’s linked agents. Note that if for all
i e 1. . . n we have E
i
LEFk
(S
÷i
, X
i
) = 0, then S is a solution (Figure 6(c)):
E
i
LEFk
(S
÷i
, s) =
X
j
eS
÷i
∆(X
j
, s). (3)
For example, in Figure 5(a), agent a
1
links to a
2
, a
3
, and a
4
, so we
get:
E
1
LEFk
(S
a,÷1
, R) = (∆(X
2
, R) + ∆(X
3
, R) + ∆(X
4
, R)) = 2
E
1
LEFk
(S
a,÷1
, G) = (∆(X
2
, G) + ∆(X
3
, G) + ∆(X
4
, G)) = 0
E
1
LEFk
(S
a,÷1
, B) = (∆(X
2
, B) + ∆(X
3
, B) + ∆(X
4
, B)) = 1.
2
Or simultaneously in some cases, as in cellular automata. This is one of our future
projects.
Complex Systems, volume (year) 1–1+
12 J. Han
Evaluation values in (a) S
a
R G B
a
1
2 0 1
a
2
0 0 2
a
3
1 0 1
a
4
0 0 1
R G B
a
1
1 1 1
a
2
0 0 2
a
3
1 0 1
a
4
0 0 1
R G B
a
1
2 0 1
a
2
0 1 1
a
3
1 1 0
a
4
0 1 0
(b) S
b
Evaluation values in
(c) S
c
Evaluation values in
Figure 6. An example of using an LEF for the 3coloring problem shown in
Figure 5. Numbers on each row of the lattices represent the evaluation values
based on equation (3) for an agent’s three possible states (colors): R, G, or B.
Circles refer to the current state of the agent. (c) shows that S
c
is a solution state
because all agents are in nonconﬂicting states. From S
a
, if a
1
moves ﬁrst, it will
most probably change its state from B to G and get to S
c
. If a
4
moves ﬁrst, it
will stay, or with small probability, it will change to G and get to S
b
.
Since the evaluation value of the current state (X
1
= B) is higher than
state G, a
1
will most probably change to state G.
Agent a
4
only links to a
1
, so it only considers a
1
’s state:
E
4
LEFk
(S
a,÷4
, R) = ∆(X
1
, R) = 0
E
4
LEFk
(S
a,÷4
, G) = ∆(X
1
, G) = 0
E
4
LEFk
(S
a,÷4
, B) = ∆(X
1
, B) = 1.
Similarly, E
2
LEFk
for agent a
2
only considers a
1
and a
3
. E
3
LEFk
for
agent a
3
only considers a
1
and a
2
. Continuing this way, we can get
evaluation values (Figure 6) for all states of each agent.
An LEF for the minimum coloring problem. If each agent only knows its
linked agents, how can they use their limited knowledge to ﬁnd the
minimal colors for a graph? Each agent has to make an approximation
of the global requirement. Let us try the following LEF for each agent a
i
:
E
i
LEFO
(S
÷i
, s) = Index(s)
(a)
+n
X
j
eS
÷i
∆(X
j
, s)
(b)
(4)
where Index(s) is described in section 3.1.1 and returns the index number
of color s.
Part (b) of equation (4) counts how many conﬂicts a
i
will receive
from its linked agents if a
i
takes the color s. Part (a) of equation (4)
favors using fewer colors by putting pressure on each agent to try to
get a color with a smaller index number. n in part (b) is a weight
value to balance parts (a) and (b), which makes all the local optima
in equation (4) correspond to a nonconﬂicting state. Note that if for
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 13
R G B Y
a
1
9 2 7 4
a
2
1 2 11 4
a
3
5 2 7 4
a
4
1 2 7 4
(a) Evaluation values
in S
a
R G B Y
a
1
5 6 7 4
a
2
1 2 11 4
a
3
5 2 7 4
a
4
1 2 7 4
(b)
in S
b
Evaluation values
R G B Y
a
1
9 2 7 4
a
2
1 6 7 4
a
3
5 6 3 4
a
4
1 6 3 4
(c) Evaluation values
S
c
in
Figure 7. An example of using an LEF for the minimumcoloring problemshown
in Figure 5. Numbers on each row of the lattices represent the evaluation values
based on equation (4) for all possible states (colors) of an agent. Circles refer
to the current state of the agent. (c) shows that S
c
is a feasible solution state
because all agents are in nonconﬂicting states and it is also an optimal solution.
From S
a
, if a
1
moves ﬁrst, it will most probably change its state from B to G and
get to S
c
; if a
4
moves ﬁrst, it will stay, or with small probability it will change
to G and get to S
b
.
all i e n we have E
i
LEFO
(S
÷i
, X
i
) s n, S = X
1
, X
2
, . . . , X
n
is a feasible
solution, but not always the optimal solution.
We can evaluate each state of a
1
in Figure 7(a):
E
1
LEFO
(S
a,÷1
, R) = Index(R) + n × (∆(X
2
, R) + ∆(X
3
, R) + ∆(X
4
, R))
= 1 + 4 × 2 = 9
E
1
LEFO
(S
a,÷1
, G) = Index(G) + n × (∆(X
2
, G) + ∆(X
3
, G) + ∆(X
4
, G))
= 2 + 4 × 0 = 2
E
1
LEFO
(S
a,÷1
, B) = Index(B) + n × (∆(X
2
, B) + ∆(X
3
, B) + ∆(X
4
, B))
= 3 + 4 × 1 = 7.
Since the evaluation value of the current state (X
1
= B) is higher than
state G, a
1
will most probably perform a leastmove to change from B
to G, and get to an optimal solution S
c
. Or, with lower probability a
1
will perform a randommove to change to state Y and get to another
feasible solution (but not an optimal one, since it uses four colors).
So we can see that the LEF methods have two main features: (1) they
evaluate the state of a single agent, not the whole system; (2) they use
local information of the system. Equations (3) and (4) only consider
its linked agents’ states (i.e., only knows the information of its related
subgraph). Decentralized control in LEF methods is also obvious: in
each step, the system dispatches all agents sequentially (e.g., in the
extremal optimization algorithm, the system dispatches the agent who
is in the worst state according to the LEF). All agents are autonomous
and decide their behaviors based on the LEF that maximizes their own
proﬁt. Therefore, if the whole systemcan achieve a global solution based
on all selﬁsh agents which are using the LEF, we call this emergence and
the system selforganizes towards a global goal.
Complex Systems, volume (year) 1–1+
14 J. Han
3.2 Essential differences between local and global evaluation functions
3.2.1 Consistency and inconsistency
In section 3.1 we studied the local and global evaluation functions using
the coloring problem. Now we know the following two aspects should
be considered while constructing an evaluation function.
1. What object is being evaluated? With a GEF, the whole system is consid
ered, while the LEF considers a single agent.
2. How much information is used in the evaluation? The GEF considers
the whole system state, while the LEF only considers the agent’s linked
agents.
This ﬁts the concepts we mentioned in the Introduction. But what if
the evaluation functions consider the whole system and only use local
information? What is the relationship between the local and global
evaluation functions?
Consistency between LEF and GEF. In the following, we use
S
·
= replace(S, i, s, s
·
), s, s
·
e D
i
to denote that S
·
is achieved by replacing the state s of X
i
in S to be s
·
.
So, all variables share the same state in S and S
·
except X
i
. So S
·
belongs
to N
1
S
.
Deﬁnition 3. An LEF is consistent with a GEF if it is true for !S e H
and !S
·
e N
1
S
(i.e., S
·
= replace(S, i, s, s
·
)) that
sgn(E
GEF
(S
·
) ÷ E
GEF
(S)) = sgn(E
i
LEF
(S
÷i
, s
·
) ÷ E
i
LEF
(S
÷i
, s)), (5)
where sgn(x) is deﬁned as: sgn(x) = 1 when x > 0, sgn(x) = ÷1 when
x < 0, and sgn(0) = 0 when x = 0.
For convenience, sometimes we just simply say sgn(AE
GEF
) = sgn(AE
i
LEF
)
for equation (5).
Deﬁnition 4. An LEF is restrictconsistent with a GEF if
(1) the LEF is consistent with the GEF and
(2) ¬Α e R
+
such that it is true for all S e H that
E
GEF
(S) = Α
n
i=1
D
i
LEF
(S
÷i
, X
i
). (6)
So restrictconsistent is the subset concept of consistent.
It is easy to prove that LEF equation (3) is restrictconsistent with GEF
equation (1) in the kcoloring problem. For example, in the 3coloring
problem shown in Figure 5, agent a
1
uses LEF equation (3) to change
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 15
to G which can decrease its valuation value from 1 to 0. This change
is also a good change for the whole system, because it decreases the
evaluation value of the whole system based on GEF equation (1) (from
E
GEFk
(S
a
) = 2 to E
GEFk
(S
c
) = 0). Suppose we use simulated annealing.
The current system state is S and we randomly generate a neighbor S
·
=
replace(S, j, s, s
·
). Then we need to calculate A = E
GEFk
(S
·
) ÷ E
GEFk
(S)
to decide whether S
·
is a better system state or not. In the consistent
case, it is not necessary to consider all of the agents’ states and AE
GEFk
can be calculated by AE
j
LEFk
:
E
GEFk
(S
·
) ÷ E
GEFk
(S) = 2AE
j
LEFk
.
What does consistency mean? A good agent decision based on the
LEF that can decrease the value of E
j
LEFk
is also a good decision for the
whole systembased on the GEF that can decrease the value of E
GEFk
. In
the kcoloring problem, while all agents get to a state that is evaluated
by equation (3) of zero, the evaluation value of equation (1) for the
whole system is also zero, which means it is a solution. This makes it
easier for us to understand why the LEF can guide agents to a global
solution. When the LEF is consistent with the GEF, they both indicate
the same equilibrium points. For a certain conﬁguration S, if it is the
local optimum of the GEF, then it is the Nash equilibrium [36] of the
LEF, and vice versa.
Deﬁnition 5. If E
GEF
(S
·
) ± E
GEF
(S) is true for S e H and !S
·
e N
1
S
,
then S is called the local optimum of the GEF using a N
1
S
neighborhood
structure in problem P.
The set of all local optima is denoted as L(P, E
GEF
).
Optimal solutions of optimization problems and feasible solutions
in decision problems belong to L(P) and have the smallest evaluation
value.
Deﬁnition 6. A Nash equilibrium of an LEF in a problem P is deﬁned
as S = s
1
, s
2
, . . . , s
n
, S e H, such that
E
i
LEF
(S
÷i
, s
i
) s E
i
LEF
(S
÷i
, s
·
i
) for !s
·
i
e D
i
holds for all i = 1, 2, . . . , n.
The set of all Nash equilibria of the P and the LEF is denoted as
N(P, E
LEF
).
Theorem 1 (Consistency) Given a problem P, a GEF, and an LEF: if the
LEF is consistent with the GEF (i.e., equation (5) holds between them)
then the following is true:
(!S e H) S e L(P, E
GEF
) = S e N(P, E
LEF
). (7)
It follows that L(P, E
GEF
) = N(P, E
LEF
).
Complex Systems, volume (year) 1–1+
16 J. Han
Proof. If S e L(P, E
GEF
), S is a local optimum, that is, E
GEF
(S
·
) ±
E
GEF
(S) is true for !S
·
e N
1
S
= !i = 1, 2, . . . , n, !x e D
i
, E
GEF
(replace(S, i, s
i
, x)) ÷ E
GEF
(S) ± 0.
Because sgn(E
GEF
(replace(S, i, s
i
, x)) ÷ E
GEF
(S)) = sgn(E
i
LEF
(S
÷i
, x)÷
E
i
LEF
(S
÷i
, s))
= !i = 1, 2, . . . , n, !x e D
i
, E
i
LEF
(S
÷i
, x) ÷ E
i
LEF
(S
÷i
, s) ± 0
= !i = 1, 2, . . . , n, !x e D
i
, E
i
LEF
(S
÷i
, x) ± E
i
LEF
(S
÷i
, s)
= S is a Nash equilibrium, S e N(P, E
LEF
).
If S e N(P, E
LEF
) and S is a Nash equilibrium, then we have
!i = 1, 2, . . . , n, !x e D
i
, E
i
LEF
(S
÷i
, x) ± E
i
LEF
(S
÷i
, s).
Because sgn(E
i
LEF
(S
÷i
, x) ÷ E
i
LEF
(S
÷i
, s)) = sgn(E
GEF
(replace(S, i, s
i
, x)) ÷
E
GEF
(S))
= !i = 1, 2, . . . , n, !x e D
i
, E
GEF
(replace(S, i, s, x)) ÷ E
GEF
(S) ± 0
= !S
·
e N
1
S
, E
GEF
(S
·
) ± E
GEF
(S) ÷ S is a local optimum,
S e L(P, E
GEF
).
Inference 1. Given a problem P, a GEF, and an LEF: if the LEF is
consistent with the GEF, S is a solution and S is a local optimum of the
GEF, then S is also a Nash equilibrium of the LEF.
Inference 1 helps us to understand why some LEF can solve problems
that the GEF can solve: because their equilibrium points are the same!
They have the same phase transition, solution numbers, and structure
according to spin glass theory [37]. The main difference between a
consistent pair of LEF and GEF lies in their exploration heuristics, such
as dispatch methods and strategies of agents. We discuss this in sec
tion 3.2.2.
So can we construct any other LEF or GEF that can make an essential
difference? Is there any case for an LEF that is not consistent with a
GEF?
Inconsistency between local and global evaluation functions. The LEF equa
tion (4) is designed for minimum coloring problems, but it can also be
used to solve the kcoloring problem (the experimental result in sec
tion 3.3.1 shows that equation (4) works for the kcoloring problem).
LEF equation (4) and GEF equation (1) are not consistent with each
other. Suppose the current conﬁguration is S and its neighboring con
ﬁguration S
·
= replace(S, i, s, s
·
), then
AE
i
LEFk
=
n
2
AE
GEFk
+ Index(s
·
) ÷ Index(s). (8)
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 17
∆E
LEF
∆E
GEF
>0 =0 <0
>0 Y N N
=0 Y Y Y
<0 N N Y
∆E
LEF
∆E
GEF
>0 =0 <0
>0 Y Y N
=0 N Y N
<0 N Y Y
(a) (b)
Table 1. Weakinconsistency relations between LEF and GEF. (a) LEF is weak
inconsistent with GEF and (b) GEF is weakinconsistent with LEF. The tables
show all possible relation combinations of AE
GEF
and AE
LEF
for any S e H and
any S
·
e N
1
s
. Y means allowed in the weakinconsistency, N means not allowed.
If AE
GEF
= 0: Because Index(s) s n for all s, so n/2AE
GEF
 >
Index(s
·
) ÷ Index(s) holds for all s
·
and s, we will get sgn(AE
GEF
) =
sgn(AE
i
LEFk
).
If AE
GEF
= 0: AE
i
LEFk
can be larger than, equal to, or smaller than
zero depending on Index(s
·
) ÷Index(s). There exists S and S
·
e N
1
S
such
that sgn(AE
GEF
) = sgn(AE
i
LEFk
).
So equation (5) is true only when AE
GEF
= 0. LEF equation (4) is
inconsistent with GEF equation (1) in the kcoloring problem.
Deﬁnition 7. An LEF is inconsistent with a GEF if the LEF is not con
sistent with the GEF.
Deﬁnition 8. An LEF is weakinconsistent with a GEF if !S e H and
!S
·
e N
1
S
, equation (5) is not true only when AE
GEF
= 0. A GEF is
weakinconsistent with an LEF if !S e H and !S
·
e N
1
S
, equation (5) is
not true only when AE
i
LEF
= 0 (see Table 1).
So weakinconsistency is an asymmetrical relation and is a subset
concept of inconsistency. LEF equation (4) is weakinconsistent with
GEF equation (1) in the kcoloring problem.
Inference 2. Givena problemP, a GEF, and anLEF we have N(P, E
LEF
) ·
L(P, E
GEF
) if the LEF is weakinconsistent with the GEF. We have
L(P, E
GEF
) · N(P, E
LEF
) if the GEF is weakinconsistent with the LEF.
GEF equation (2) and LEF equation (4) in the minimum coloring
problemis an example of inconsistency but not weakinconsistency. The
minimal number of colors Χ(G) is a global variable. We can see that the
linear relationship between equations (2) and (4) only exists between
part (b), not between part (a). It is easy to prove the inconsistency:
suppose a variable X
k
changes from the lth color to the hth color. If
this does not cause a change in conﬂict numbers (i.e., conﬂicts between
Complex Systems, volume (year) 1–1+
18 J. Han
X
k
and its linked vertices keep the same number, part (b) remains the
same), then
AE
GEFO
= 2(C
h
 ÷ C
l
 ÷ 1) and AE
i
LEFO
= h ÷ l.
If h > l and C
l
 > C
h
 + 1, we will get AE
GEFO
< 0 and AE
i
LEFO
> 0. If
h < l and C
l
 < C
h
 +1, we will get AE
GEFO
> 0 and AE
i
LEFO
< 0. In the
LEF, it is more likely that h < l because the agent favors colors which
have smaller index numbers according to equation (4). So when h < l
and C
h
has no fewer vertices than C
l
, a good choice for an agent (from
color l to color h) based on the LEF will be a bad one for the whole
system based on the GEF. This occurs because part (a) of equation (2)
favors the unbalanced distribution of colors.
The minimum coloring problem example in Figure 8 shows that,
when using an LEF, a
2
will not change from G to B, while if using a
GEF it will. This indicates that LEF equation (4) and GEF equation (2)
will lead the system to different states.
V
1
V
2
V
4
V
3
G
B
B
G
V
1
V
2
V
4
V
3
B
B
B
G
R G B
a
1
1 14 3
a
2
1 8 9
a
3
1 8 15
a
4
1 8 3
R G B
a
1
1 8 9
a
2
1 8 9
a
3
1 2 21
a
4
1 8 3
(a) S
a (b) S
b
(c) LEF values for S
a
(d) LEF values for S
b
GEF for (a) and (b):
E
GEFO
(S
a
)=4
E
GEFO
(S
b
)=2
∆ E
GEFO
= 2 <0
(e) GEF values for S
a
, S
b
Figure 8. Inconsistency between LEF equation (4) and GEF equation (2) in a
minimumcoloring problem. (a) and (b) showtwo possible conﬁgurations which
are neighbors and differ only in agent a
2
. (c) and (d) separately showLEF values
for all states of each agent of S
a
and S
b
, so E
2
LEFO
(S
b
, G)÷E
2
LEFO
(S
a
, B) = 9÷8 =
1 > 0. (e) shows the GEF values for S
a
and S
b
, E
GEFO
(S
b
) ÷E
GEFO
(S
b
) = 2÷4 =
÷2 < 0. So equations (4) and (2) are inconsistent with each other because the
LEF prefers S
a
while the GEF prefers S
b
.
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 19
X
j
X
i
s
l
s
h
s
l
x
x
r
r
s
h
z
z
y
y
Table 2. The symmetrical penalty of two linked nodes i and j for two colors s
l
and s
h
. Upper values are of g
ij
(X
i
, X
j
), and lower values are of g
ji
(X
j
, X
i
). It has
the attribute of g
ij
(X
i
, X
j
) = g
ji
(X
j
, X
i
).
Consistency between E
LEF
and E
GEF
E
LEF
. Inconsistency means the
individual’s proﬁt (deﬁned by an LEF) will (sometimes) conﬂict with the
proﬁt (deﬁned by a GEF) of the whole system. Does this mean that
if an LEF is inconsistent with a GEF, the LEF is also inconsistent with
other GEFs? Actually, we can ﬁnd a consistent GEF for equation (4) by
simply deﬁning it as the sum of the entire formula:
E
GEFO
(S) =
n
i=1
Index(X
i
) + n
n
i=1
X
j
eS
÷i
∆(X
i
, X
j
). (9)
It is easy to prove that LEF equation (4) is consistent with GEF
equation (9).
3
Deﬁnition 9. If an LEF E
LEF
is consistent with the GEF E
GEF
= E
LEF
,
E
LEF
is called a consistent LEF, otherwise it is called an inconsistent LEF.
Is any LEF E
LEF
always consistent with E
GEF
= E
LEF
? Are all LEFs
consistent? Our answer to both is no. For example, in the 2coloring
problem for a graph that only consists of two linked nodes i and j, we
have a total of four coloring conﬁgurations. If we deﬁne the evaluation
value of the LEF of each node in each conﬁguration as in Table 2, we
will ﬁnd an inconsistency between the LEF and GEF = E
LEF
.
We want to give some rules for constructing a consistent LEF under
the following limitations.
1. We limit our discussion to binary constraint problems with the same
domains for all variables. This means all constraints are only between
two variables, and D
1
= D
2
= = D
n
= D. The coloring problem is a
binaryconstraint problem and all domains are the same.
3
We can also say that GEF equation (9) is inconsistent with GEF equation (2) if we
extend the consistency concept to be: evaluation function E
1
is consistent with evaluation
function E
2
if for any two neighboring conﬁgurations S and S
·
, sgn(AE
1
) = sgn(AE
2
).
Complex Systems, volume (year) 1–1+
20 J. Han
2. We only discuss LEFs which have the form
E
i
LEF
(S
÷i
, s) = f (s) + Β
X
j
eS
÷i
g
ij
(s, X
j
),
Β > 0 and Β > max
s
·
,s
¦f (s
·
) ÷ f (s)¦. (10)
So the form of the GEF E
GEF
= E
LEF
is:
E
GEF
(S) =
i
f (X
i
) + Β
i
X
j
eS
÷i
g
ij
(X
i
, X
j
). (11)
f (s) indicates the preference for color (value) s of the node (agent or vari
able). The second part is about constraints between two linked nodes, so
g
ij
(X
i
, X
j
): D
i
×D
j
÷ is the static penalty function of X
i
incurred by the
constraint between X
i
and X
j
. Β balances these two parts. Equation (11)
has the same formas a Hamiltonian in spin glasses: the ﬁrst part relates to
the external ﬁeld, and the second part relates to the interaction between
two spins. Therefore, the form we deﬁned here can cover a big class of
LEFs for combinatorial problems, and the related GEFs have the same
form as the Hamiltonian in spin glass models. Equations (3) and (4) are
of the form given by equation (10); equations (1) and (9) are of the form
of equation (11).
3. Only one form of static penalty function g
ij
(X
i
, X
j
) is considered here
which satisﬁes g
ij
(X
i
, X
j
) = g
ji
(X
j
, X
i
). We call this g
ij
(X
i
, X
j
) a sym
metrical penalty (see Table 2): in equations (1) through (4) and (9),
g
ij
(X
i
, X
j
) = ∆(X
i
, X
j
) which is a symmetrical penalty with x = y = 1 and
r = z = 0. Obviously, the penalty function shown in Figure 9(a) is not a
symmetrical penalty.
Given these limitations, an LEF can be consistent, as stated in Theo
rem 2.
Theorem 2. If !i, j e e, g
ij
(X
i
, X
j
) = g
ji
(X
j
, X
i
), then an E
LEF
which
has the same form as equation (10) is consistent with E
GEF
= E
LEF
.
That is, E
LEF
is a consistent LEF.
Proof. For !S = X
1
, X
2
, . . . , X
n
e H and !S
·
= X
·
1
, X
·
2
, . . . , X
·
n
e N
1
S
,
sgn(E
GEF
(S
·
) ÷ E
GEF
(S))
= sgn
l
.
.
.
.
.
.
.
.
\
k
f (X
·
k
) + Β
k
X
·
j
eS
·
÷k
g
kj
(X
·
k
, X
·
j
)
÷
l
.
.
.
.
.
.
.
\
k
f (X
k
) + Β
k
X
j
eS
÷k
g
kj
(X
k
, X
j
)
\
.
.
.
.
.
.
.
!
\
.
.
.
.
.
.
.
!
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 21
Green Red
i 1
5
2
4
j 3
6
4
4
(b) iGreen, jRed
Green Red
i 3
6
4
4
j 1
5
2
4
(c) iRed, jGreen
Green Red
i 1
5
2
4
j 1
5
2
4
(d) iRed, jRed
Green Red
i 3
6
4
5
j 3
6
4
5
(e) iGreen, jGreen
j
i
Green Red
Green 3
3
1
4
Red 4
1
2
2
(a)
Figure 9. The LEF is inconsistent with the GEF = E
LEF
in a 2nodelinked
graph of a 2coloring problem. (a) LEF values for node i (upper) and j (lower).
For example, if i is Green and j is Red, the LEF value of i is ÷1, and the LEF
value of j is ÷4. (b) through (e) are LEF values (upper) and GEF values (lower)
of each conﬁguration for nodes i and j. ÷ is the tendency to change color given
the other node’s color according to the LEF judgment. ÷ is the tendency to
change color given the other node’s color according to the GEF judgment. (d) is
the Nash equilibrium of the LEF methods, and (e) is the local optimum of the
GEF methods. In (b) and (c), the LEF and the GEF lead the system to evolve
to different conﬁgurations. So they are inconsistent even though the GEF is the
sum of the LEFs of all agents. If we see “Green” as “Cooperative” and “Red”
as “Defective” (a) is actually the payoff matrix of the prisoner dilemma problem
[7] which is one of the most interesting problems in game theory, and the LEF
of each agent is the utility function of each player.
= sgn
l
.
.
.
.
.
.
.
\
f (s
·
) ÷ f (s) + Β
X
j
eS
÷i
(g
ij
(s
·
, X
j
) ÷ g
ij
(s, X
j
) + g
ji
(X
j
, s
·
) ÷ g
ji
(X
j
, s))
\
.
.
.
.
.
.
.
!
Complex Systems, volume (year) 1–1+
22 J. Han
= sgn
l
.
.
.
.
.
.
.
\
f (s
·
) ÷ f (s) + Β
X
j
eS
÷i
(g
ij
(s
·
, X
j
) ÷ g
ij
(s, X
j
) + g
ij
(s
·
, X
j
) ÷ g
ij
(s, X
j
))
\
.
.
.
.
.
.
.
!
= sgn
l
.
.
.
.
.
.
.
\
f (s
·
) ÷ f (s) + 2Β
X
j
eS
÷i
(g
ij
(s
·
, X
j
) ÷ g
ij
(s, X
j
))
\
.
.
.
.
.
.
.
!
.
And
sgn(E
i
LEF
(S
÷i
, s
·
) ÷ E
i
LEF
(S
÷i
, s))
= sgn
l
.
.
.
.
.
.
.
\
f (s
·
) ÷ f (s) + Β
X
j
eS
÷i
(g
ij
(s
·
, X
j
) ÷ g
ij
(s, X
j
))
\
.
.
.
.
.
.
.
!
.
Because Β > max
s
·
,s
¦f (s
·
) ÷ f (s)¦, we have
sgn
l
.
.
.
.
.
.
.
\
f (s
·
) ÷ f (s) + 2Β
X
j
eS
÷i
(g
ij
(s
·
, X
j
) ÷ g
ij
(s, X
j
))
\
.
.
.
.
.
.
.
!
= sgn
l
.
.
.
.
.
.
.
\
f (s
·
) ÷ f (s) + Β
X
j
eS
÷i
(g
ij
(s
·
, X
j
) ÷ g
ij
(s, X
j
))
\
.
.
.
.
.
.
.
!
÷ sgn(E
GEF
(S
·
) ÷ E
GEF
(S)) = sgn(E
i
LEF
(S
÷i
, s
·
) ÷ E
i
LEF
(S
÷i
, s)).
Theorem2 only gives a sufﬁcient condition but not the necessary con
dition for a consistent LEF. Finding theorems for the necessary condition
will be our future study because some asymmetrical penalties might be
good evaluation functions for efﬁcient search.
So far, we know we can always ﬁnd a GEF = E
LEF
for any LEF.
Not all GEFs constructed in this way are consistent with the LEF, which
means they will direct the system to evolve to different conﬁgurations.
However, on the other hand, not all GEFs can be decomposed to be n
LEFs, such as in minimum coloring problems. If we construct a GEF as:
E
GEFO
(S) = m+ n
n
i=1
X
j
eS
÷i
∆(X
i
, X
j
) (12)
where m is the current number of colors, we get a naïve evaluation
function that reﬂects the nature of the minimumcoloring problem. Since
the LEF is not allowed to use global information, an agent only knows its
linked agent and does not knowthe others, so it cannot knowhowmany
colors have been used to color the graph. It seems difﬁcult for a single
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 23
agent to make a good decision for the whole system. One approximate
solution here is LEF equation (4). If we compare its consistent GEF
equation (9) with GEF equation (12), we will see equation (12) is more
“rational” (rationality is discussed in section 3.2.2) than equation (9)
because equation (12) gives a “fair” evaluation on all solution states
while equation (9) does not. We will discuss “rational” and “partial
rational” evaluation functions in our next paper. The experiments in
section 3.3.2 show that the rational GEF equation (12) works much
worse than the partial rational GEF equation (12).
3.2.2 Exploration of local and global evaluation functions
If an LEF is consistent with a GEF, what is the main difference between
them? Is there any picture that can help us understand how LEF and
GEF explore the state space?
To understand the behavior of the LEF and GEF, ﬁrst of all we should
understand the role of an evaluation function in searching. An impor
tant notion here is that the evaluation function is not a part of the
problem, it is a part of the algorithm:
GTSS Algorithms = Evaluation Function + Exploration Heuristics.
Therefore, the evaluation function values (EF values) are not builtin
attributes for conﬁgurations of a given combinatorial problem. In the
backtracking algorithm, there is no evaluation function and only two
possible judgments for current partial conﬁgurations: “no conﬂict” or
“conﬂict” (i.e., the evaluation function in backtracking has only two EF
values). In the GT paradigm, since the system needs to decide whether
it accepts a new state or not, it has to compare the new and current
states—which one might be better? Which one might be closer to the
solution state? For this purpose, the evaluation function is introduced to
make approximate evaluations for all possible states and lead the system
to evolve to a solution state. Except for solution states, it is always
difﬁcult to compare which conﬁguration is better than the other. So the
evaluation function is an internal “model” for the actual problem. As
we know, models do not always exactly reﬂect everything of the reality,
depending on their purpose. Therefore, different models can be built
for a problem. For example, there are several different GEFs for the
coloring problem.
A rational evaluation function satisﬁes: (1) all solution states should
have smaller EF values than all nonsolution states; (2) all solution states
should have the same EF value. A good evaluation function will lead the
system to evolve to a solution state more directly. For example, evalua
tion Function 1 is more efﬁcient than Evaluation Function 2 in Figure 10
because Evaluation Function 2 has many local optima that will trap the
evolution. However, it is still impossible to construct an evaluation
function that has no local optima for many combinatorial problems.
Complex Systems, volume (year) 1–1+
24 J. Han
EF value
Configurations
Solution
Local optima
— Evaluation Function 1
 Evaluation Function 2
Figure 10. Different evaluation functions for a problem.
This is why they are so difﬁcult to solve and people have developed
so many heuristics to explore the conﬁguration space. Otherwise, for
Evaluation Function 1, we can simply use the greedy heuristic.
Searching (as in the GTSS style) for solutions is just like walking in
a dark night using a map. The map is the model for the conﬁguration
space, so the evaluation function is the generator of the map. Heuristics
are for exploration in the space based on the map. So an algorithm
includes what map it uses and how it uses the map to explore.
It is impossible to compute evaluation values for all conﬁgurations at
a time. It is only possible to get evaluation values for a small part of the
conﬁguration space at each time step. This is like using a ﬂashlight to
look at a map on a dark night. But since the map is very big, only part
of the map can be seen unless the ﬂashlight is very “powerful.”
Using the metaphors of “map” and “ﬂashlight,” we now illustrate
the exploration of LEF and GEF.
GEF: One ndimensional map and neighborhood ﬂashlight. Traditional GEF
methods use only one map, which is ndimensional for nvariable prob
lems. For each time step, the system computes the evaluation values
of neighbors in N
1
S
, that is, it uses a ﬂashlight to look at the neighbor
hood of the current position on the map and make a decision about
where to go (see Figure 11). There are several strategies for making de
cisions based on the highlighted part of the map: greedy, always selects
the best one (Figure 11); greedyrandom, most of the time selects the
best, but with a small probability will go to a random place (GEFGR,
see section 3.3); totally random, and others. In simulated annealing,
the ﬂashlight only highlights one point (conﬁguration) each time in the
neighborhood, and will move to that point if it is better than the cur
rent one, or with a small probability (e
÷AE/T
) move to that point even
though it is worse. There can be many variations in types of ﬂash
light and strategies for selection on the highlighted part of the map. In
section 3.3, we show that both the evaluation function (map) and the
exploration methods will affect performance.
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 25
a
1
EF value
a
2
1
2
3
4
0
Figure 11. The GEF methods for a system consisting of two agents (a
1
and a
2
).
The evolution path is 0 ÷ 1 ÷ 2 ÷ 3 ÷ 4. The light circle around 1 is the part
highlighted by the ﬂashlight when the system is in state 1. 2 is the best neighbor
inside this circle. So the system accepts 2, highlights its neighborhood, gets to
3, and so on.
LEF: n d
i
dimensional maps and slice ﬂashlight. Using the distributive LEF
methods, agent a
i
has a d
i
dimensional map if a
i
has d
i
constrained
agents (in the coloring problem, d
i
is the degree of a
i
). In every time
step each agent uses its onedimensional slice of its d
i
dimensional map
to guide its evolution.
Consistent LEF and inconsistent LEF are very different. For consis
tent LEF a map of a
i
is actually a d
i
dimensional slice of the GEF map
of E
GEF
= E
LEF
. In other words, all LEF maps can make up a GEF
map. At every time step, each agent uses a onedimensional slice of the
ndimensional GEF map of E
GEF
= E
LEF
. Since the system dispatches
all agents sequentially, its exploration is from slice to slice. As Figure 12
shows, the twoagent system evolves as 0 ÷ 1 ÷ 2 using slices (b) and
(c) of the GEF map (a). The ﬂashlight illuminates a onedimensional
slice each time and the agent makes a decision according to its strategies:
leastmove, always select the best one (Figure 12); bettermove, select
a random one and accept it if it is better than the current assignment;
randommove, which is totally random selection, or others. So here the
LEF and GEF use the same map and the difference lies in how they use
the map: what ﬂashlights and strategies are used for selecting from the
highlighted part of the map.
For inconsistent LEF a map of a
i
is no longer a d
i
dimensional slice
of the GEF map of E
GEF
= E
LEF
. Each agent uses its own map. So
here the differences between the LEF and GEF lie not only in how they
use the map, but also on which map they are using.
To conclude this section, we summarize the main differences between
global and local evaluation functions in Table 3 from two layers: eval
Complex Systems, volume (year) 1–1+
26 J. Han
a
1
EF value
1
2
a
2
EF value
EF value
(b) A map slice for agent a
2
(c) A map slice for agent a
1
0
2
a
1
a
2
0
1
1
(a) The GEF map of a system consisting
of agents a
1
and a
2
Figure 12. Consistent LEF: map slices of GEF and exploration. The system starts
from “0”. Suppose it is the turn of a
2
to move, then a
2
gets a slice (b) through
“0” and selects the best move “1”. Then it is the turn of a
1
to move, a
1
gets
a slice (c) through “1” and selects the best move “2” and then repeats. So the
exploration is from one slice (dimension) to another: 0 ÷ 1 ÷ 2.
LEF
GEF
Consistent Inconsistent
Generator E
GEF E
LEF
E
i
LEF
, (i = 1..n)
Dimension
n
(number of
nodes)
d
i
(degree of each node V
i
)
Nash equilibria N(E
LEF
)
Evaluation
Function
Equilibria
Local optima
L(E
GEF
)
N(E
LEF
)=L( E
LEF
) N(E
LEF
)≠ ≠≠ ≠L( E
LEF
)
Flashlight
(search scope)
Neighborhood onedimensional slice
Exploration
Heuristics
Strategies
Greedy,
Greedy
random, SA,
Random, …
Leastmove, bettermove, random, …
Table 3. Differences between the GEF and consistent/inconsistent LEF.
uation function (map) and exploration heuristics. This paper focuses
on the evaluation function layer. We believe that using a different eval
uation function will result in different performance. The exploration
heuristics also affect performance. This is discussed in the next section.
3.3 Experimental results
In the previous sections we proposed LEF methods and compared them
to traditional GEF methods. We now ask the following questions about
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 27
performance: Can the LEF work for coloring problems? Can an LEF
(GEF) solve problems that a GEF (LEF) can solve if they are inconsis
tent? If an LEF and a GEF are consistent, is their performance similar?
In this section we present some preliminary experimental results on a
set of coloringproblem benchmarks fromthe Center for Discrete Math
ematics and Theoretical Computer Science.
4
Sections 3.3.1 and 3.3.2
separately show results on the kcoloring and minimum coloring prob
lems. Some observations are given in section 3.3.3.
Note that comparisons between local and global evaluation func
tions are difﬁcult for two reasons. First, the LEF and GEF are concepts
for evaluation functions. One speciﬁed LEF/GEF includes several algo
rithms according to their exploration heuristics and different parameter
settings. Second, it is difﬁcult to avoid embodying some preliminary
knowledge in the algorithm. So the following experimental compar
isons are limited to the speciﬁed settings of algorithms and problem
instances.
We now compare the performance of the following three algorithms
using different evaluation functions and speciﬁed parameters.
1. LEF method: Alife&AER [12, 13].
2. GEF method: Greedyrandom(GEFGR) always looks for the best conﬁg
uration in a N
1
S
neighborhood and has a small random walk probability.
3. GEF method: Simulated annealing (GEFSA) [29].
Here are details of the Evolution(S(t)) of the three algorithms in the
framework shown in section 3.1.
LEF
Evolution(S)
{
1. For each agent i = 1 to n do
2. Choose a random number r in [0, 1]
3. If r < P
leastmove
// perform leastmove strategy
4. Find one color c that for all other possible colors b,
E
i
LEF(S
÷i
, c) s E
i
LEF(S
÷i
, b)
5. Else // perform random walk
6. c = randomselect (current possible colors)
7. S
·
= replace(S, i, X
i
, c)
8. If S
·
is a feasible solution, call ProcessSolution(S
·
)
9. Return S
·
}
4
Available at: http://mat.gsia.cmu.edu/COLOR/instances.html.
Complex Systems, volume (year) 1–1+
28 J. Han
GEFGR
Evolution(S)
{
1. Choose a random number r in [0, 1]
2. If r < P
greedy
// perform greedy strategy
3. Find one neighbor S
·
e N
1
S
, that for all other
S
··
e N
1
S
, E
GEF
(S
·
) s E
GEF
(S
··
)
4. Else // perform random walk
5. S
·
= randomselect(N
1
S
)
6. If S
·
is a feasible solution, call ProcessSolution(S
·
)
7. Return S
·
}
GEFSA
Evolution(S)
{
1. S
r
= randomselect(N
1
S
)
2. A = E
GEF
(S
r
) ÷ E
GEF
(S)
3. If A s 0 // perform downhill move
4. S
·
= S
r
5. Else A > 0
6. Choose a random number r in [0, 1]
7. If r s e
÷A/T
// perform uphill move
8. S
·
= S
r
9. Else // do not accept the new state
10. return S
11. If S
·
is a feasible solution, call ProcessSolution(S
·
)
12. Return S
·
}
ProcessSolution(S)
{
#If it is a kcoloring problem:
Output the result S
#If it is a minimum coloring problem:
If S uses m colors and m is the chromatic number of the graph
Output the result S and m
Else if S uses m colors and m is less than the current number
of possible colors
Current Number of Possible Colors = m
// reduce the number of possible colors, so N
1
S
is reduced
}
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 29
Measurement
An agent move
Algorithm of a
i
A time step
LEFAlife&AER O(2e
i
 + m) O
n
i=1
(2e
i
 + m) = O(4e + nm)
GEFGR O(2e
i
 + nm+ n) O(2e
i
 + nm+ n)
GEFSA O(2e
i
) O(2e
i
)
Table 4. Complexity of an agent move and a time step of the LEFAlife&AER,
GEFGR, and GEFSA algorithms.
The setting of P
leastmove
in the LEF method is equal to 1.5n/(1.5n +
1), where n is the problem size. P
greedy
in GEFGR is also simply set
to 1.5n/(1.5n + 1). Parameter settings in GEFSA are the same as in
[29]: initialT = 10, freezeLim = 10, sizeFactor = 2, cutoff = 0.1,
tempFactor = 0.95, and minPercent = 0.02.
The following experiments examine the performance of ﬁnding a
solution. First, we give the CPU runtime on a PIII1G PC WinXP for
ﬁnding a solution and then measure the runtime as operation counts
[38], which is the number of agent moves used in ﬁnding a solution.
The cost of an agent move in the LEF includes steps 2 through 8 in
Evolution(s), and a time step includes n agent moves. The cost of an
agent move in GEFGR includes steps 1 through 6. The cost of an agent
move in GEFSA includes steps 1 through 11. A time step in both GEF
GR and GEFSA only includes one agent move. The complexity mainly
relates to calculating conﬂicts and comparing EF values (the complexity
of picking up the smallest value from m values is O(m)). Table 4 lists
the complexities of the LEF, GEFGR, and GEFSA methods where e is
the number of edges in the graph, e
i
 is the number of edges from node
V
i
, and m is the current number of colors being used. So, in a kcoloring
problem mis constantly equal to k, and in a minimum coloring problem
m is a dynamic decreasing value but is always less than n.
So from Table 4 we can get this relationship of complexity for an
agent move:
GEFSA < LEF < GEFGR.
It is easy to think that the more complex an agent’s move is, the better
its performance will be. If so, the number of agent moves for ﬁnding a
solution would have the relation:
GEFSA moves > LEF moves > GEFGR moves.
Is this true? Let us look at the experimental results in sections 3.3.1
and 3.3.2 where some comparisons are given.
Complex Systems, volume (year) 1–1+
30 J. Han
3.3.1 kcoloring problem
We compare the CPU runtime (Figure 13) and the number of moves
(Figure 14) for ﬁnding a solution for a given k colors using different
LEFs (equations (3) and (4)) and GEFs (equations (1), (2), and (9)). The
following comparisons are included:
a. Consistent cases: LEF equation (3) versus GEF equation (1), and LEF
equation (4) versus GEF equation (9).
b. Inconsistent cases: LEF equation (3) versus GEF equation (9), LEF equa
tion (4) versus GEF equation (1), LEF equation (3) versus GEF equa
tion (2), and LEF equation (4) versus GEF equation (2).
c. Different evaluation functions for LEF/GEF: LEF equations (3) versus (4),
GEF equations (1) versus (2) versus (9).
d. Different exploration heuristics for the same evaluation function: GEF
GR equation (1) versus GEFSA equation (1) versus LEF equation (3).
3.3.2 Minimum coloring problem
Finding the chromatic number of the graph in a minimum coloring
problem is a global goal. The LEF of each agent can only use local in
formation to guide itself and sometimes there is not enough information
to make a good decision. We test the CPU runtime (Figure 15) and num
ber of agent moves (Figure 16) for ﬁnding a Χ(G)color solution when
Χ(G) is unknown. The system starts with a large number of colors m,
and decreases it if a feasible mcolor solution is found, until m is equal
to Χ(G). The following comparisons are included:
a. Consistent case: LEF equation (4) versus GEF equation (9).
b. Inconsistent case: LEF equation (4) versus GEF equation (2) versus GEF
equation (12).
c. Different GEFs: GEF equations (2) versus (9) versus (12).
d. Different exploration heuristics for the same evaluation function: GEF
GR equation (2) versus GEFSA equation (2).
3.3.3 Observations from the experiments
(1) LEFs work for the kcoloring and minimum coloring problems. The GEF
does not always lead the evolution more efﬁciently than the LEF al
though it uses more information to make decisions and has centralized
control over the whole system. In the kcoloring problems, the LEF
(especially LEF equation (4)) beats the GEFs in CPU runtime in almost
all instances. In the minimum coloring problems, although the LEF
only knows local information and the purpose is global, it still beats the
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 31
0
>
6
0
0
0
Figure 13. Average CPU runtime (second) of 100 runs for the kcoloring prob
lems. Note that the yaxis is a logplot, with the lowest value denoting zero
(less than 0.001 second) and the highest value denoting that the runtime is more
than 6000 seconds.
Complex Systems, volume (year) 1–1+
32 J. Han
u
n
k
n
o
w
n
Figure 14. Average number of agent moves of 100 runs for the kcoloring prob
lems. Note that the yaxis is a logplot, with the highest value “unknown”
denoting that we have not tested the number of agent moves because the run
time is more than 6000 seconds.
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 33
>
6
0
0
0
0
Figure 15. Average CPU runtime (second) of 100 runs for solving the minimum
coloring problems. Note that the yaxis is a logplot, with the lowest value
denoting zero and the highest value denoting that the runtime is more than
6000 seconds.
Complex Systems, volume (year) 1–1+
34 J. Han
u
n
k
n
o
w
n
Figure 16. Average number of agent moves of 100 runs for solving the minimum
coloring problems. Note that the yaxis is a logplot, with the highest value
“unknown” denoting that we have not tested the number for agent moves
because the runtime is more than 6000 seconds.
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 35
GEFs for most instances except queen8_12 and queen9_9. The LEF is
much faster than the GEFs for homer, miles250, queen7_7, queen8_8,
fpsol2, inithx, mulsol, and zeroin. Although GEF equation (9) is con
sistent with LEF equation (4) and GEF equation (2) works better than
GEF equation (9) in most instances, LEF equation (4) still beats GEF
equation (2). In addition to coloring problems, the LEF also works for
Nqueen problems [12–14]. Using the LEF for the Nqueen problem
can beat a lot of algorithms, but it is not as fast as one speciﬁed local
search heuristic [18]. One important reason is that the speciﬁed local
search heuristic embodies preliminary knowledge of the Nqueen prob
lem. For more experimental results, such as distributions and deviations
of runtime, and performance based on different parameter settings, see
[12–14].
(2) Evaluation function affects performance. As we know, the GEF selection
will affect performance when solving function optimization problems
[39]. This is also true for combinatorial problems. For the kcoloring
problem (a decision problem) the performance of LEF equations (3) and
(4) is very different. In fpsol2, inithx, mulsol, and zeroin, equation (4)
beats equation (3) strongly, while equation (3) slightly beats equation (4)
in miles and queen. GEF equation (2) can solve fpsol2, inithx, mulsol,
and zeroin within 2 seconds, while GEF equation (1) cannot solve it
within 6000 seconds. We guess that is because part (a) of equation (2)
puts pressure on each node to select a small index number color, which
helps to break the symmetry [40] in the coloring problems [41]. In the
minimumcoloring problem(an optimization problem), the performance
difference is very clear. GEF equation (12) cannot solve more than half
of those instances, while the others can solve them. GEF equation (2)
works better than GEF equation (9) on average.
(3) Performance of an LEF and a GEF are more alike if they are consistent. In
the kcoloring problem, LEF equation (4) is consistent with GEF equa
tion (9) while LEF equation (3) is consistent with GEF equation (1), so
performance between LEF equation (3) and GEF equation (1) is more
alike than that of LEF equation (4) and GEF equation (9). LEF equa
tion (4) and GEF equation (9) can solve fpsol2, inithx, mulsol, and
zeroin quickly while LEF equation (3) and GEF equation (1) cannot.
LEF equation (3) and GEF equation (1) beat LEF equation (4) and GEF
equation (9) in miles1000 and queen8_12. In the minimum coloring
problem, for example, queen8_12 is difﬁcult for LEF equation (4) and
GEF equation (9), but easy for GEF equation (2). These results tell us
that if the Nash equilibria of the LEF are identical to the local optima
of the GEF, they will behave more alike.
(4) Exploration heuristics affect performance. There are so many efforts
on exploration heuristics that we know they will affect performance.
Complex Systems, volume (year) 1–1+
36 J. Han
This is also true for the consistent LEF and GEF. Even though they
have the same evaluation function (map), they still work differently in
some instances. In the kcoloring problem, LEF equation (3) beats GEF
equation (1) in miles and queens while GEFGR equation (1) beats LEF
equation (3) and GEFSA equation (1) in mulsol and zeroin. LEF equa
tion (4) and GEFSAequation (9) are consistent but the LEF works faster
than GEFSA. In the minimum coloring problems, LEF equation (4) and
GEF equation (9) are consistent but LEF equation (4) beats GEF equa
tion (9) strongly. Both using GEF equation (2), GEFSA and GEFGR
perform differently, especially in queen7_7, queen8_8, and queen9_9.
(5) Larger complexity does not always mean better performance of an agent
movement. We have the ranking for complexities of an agent move
in different algorithms: GEFGR > LEF > GEFGR. However, GEF
GR does not always use the fewest agent moves to reach the (optimal)
solution state in coloring problems, even in the consistent case. In both
the kcoloring and minimum coloring problems, the LEF uses fewer
moves than GEFGRin some instances, such as queen5_5 and queen7_7.
So this suggests that in some instances using local information does not
always mean decreasing the quality of the decision. Even using the
same evaluation function, GEFGRlooks at n neighbors and selects one,
while GEFSA only looks at one neighbor, we still see in some problem
instances, such as queen5_5, queen7_7, and queen9_9, that GEFSA
uses fewer moves than GEFGR when solving kcoloring problems.
(6) Performance of the LEF and GEF is dependent on the problem instance. The
LEF ﬁts for most instances except queen8_12 in the minimum coloring
problem. GEF equation (1) and LEF equation (3) are not good for
fpsol2, inithx, mulsol, and zeroin. As in our other paper [14], the algo
rithmic details will not change the performance for a problem instance
very much. We guess the graph structure (constraint network) will have
a big impact on the performance of the LEF and GEF. In other words,
the LEF might be good for some networks while the GEF might be
good for other networks. In the language of multiagent systems, some
interaction structures are more suitable for selforganization to reach
a global purpose, while some interaction structures favor centralized
control. Finding the “good” networks for the LEF and GEF is a very
important future topic.
4. Conclusions
This paper proposes a new look at evolving solutions for combinato
rial problems, using NPcomplete cases of the kcoloring problem (a
decision problem) and the minimum coloring problem (an optimization
problem). The new outlook suggests several insights and lines of future
work and applications.
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 37
Finding solutions for a coloring problem by using generatetest
singlesolution (GTSS) algorithms is like the evolution of a multiagent
system. The evaluation function of the GTSS algorithms is the in
ternal ranking of all conﬁgurations. There are two ways to construct
evaluation functions: the global evaluation function (GEF) uses global
information to evaluate a whole system state, and the local evaluation
function (LEF) uses local information to evaluate a single agent state.
We analyze the relationship between LEF and GEF in terms of consis
tency and inconsistency. Table 3 summarizes the differences between
GEF and consistent/inconsistent LEF.
A consistency theorem is proposed (Theorem 1) which shows that
Nash equilibria (Deﬁnition 6) of an LEF are identical to the local op
tima (Deﬁnition 5) of a GEF if the LEF is consistent with the GEF. In
spin glass theory, this theorem means that the LEF and GEF have the
same ground state structure and the same number of solutions, as well
as the same phase transitions [1]. So the LEF can be partly explained by
the current theoretical results on traditional GEF [16, 37, 42, 43]. This
helps us understand why the LEF can guide agents to evolve to a global
goal state, if the global goal can be expressed by a GEF that is consistent
with the LEF. When an LEF is inconsistent with a GEF, the LEF’s Nash
equilibria are not identical to the GEF’s local optima. Obviously, the
LEF evolution proceeds differently than the GEF evolution. Experimen
tal results support this: performance of LEF and GEF are more alike if
they are consistent.
Since the way to construct a GEF is not unique for a problem, we
can always construct a GEF by summing up all of the agents’ LEFs as
E
GEF
= E
LEF
. E
LEF
can be consistent or inconsistent with E
LEF
.
This paper gives a preliminary theorem for constructing consistent LEF
which have the formof the spin glass Hamiltonian (equation (11)): If the
penalty function g
ij
(X
i
, X
j
) in an E
LEF
is a symmetrical penalty, the E
LEF
is consistent with E
LEF
. More study on consistent and inconsistent
LEFs should give more insights into computation and game theory.
Even though an LEF is consistent with a GEF, they still explore differ
ently. The experimental results show that exploration heuristics affect
performance, so even though the LEF is consistent with a GEF, it still
behaves differently and works better in some instances.
The LEF and GEF concepts provide both a new view of algorithms
for combinatorial problems, and for the collective behavior of complex
systems. Using coloring problems clariﬁes the concepts of “local” and
“global.” These concepts are important distinguishing characteristics of
distributed and centralized systems. In addition to the research reported,
there is still a lot of future work. Some remaining questions are: What is
the condition for the existence of a consistent GEF for any LEF? What is
the condition for the existence of a consistent LEF for any GEF? Howdo
network properties relate to consistency of LEF and GEF in the coloring
Complex Systems, volume (year) 1–1+
38 J. Han
problems? How can we construct good LEFs and GEFs for a problem?
LEF is about individual and GEF is about the whole system, how about
the evaluation functions for compartments? Can we study consistent
and inconsistent LEFs in the context of game theory? These will be our
future study.
5. Acknowledgments
This paper is supported by the International Program of the Santa Fe
Institute and the National Natural Science Foundation of China. The
author wants to thank Lei Guo and John Holland for their strong sup
port and discussions. Thanks also go to Mark Newman, Eric Smith,
Haitao Fang, Li Ming, Cris Moore, Supriya Krishnamurthy, Dengy
ong Zhou, Melanie Mitchell, Jose Lobo, John Miller, Sam Bowles, Jim
Crutchﬁeld, ChienFeng Huang and other people for helpful discussions.
Thanks to Laura Ware for proofreading.
References
[1] Remi Monasson, Riccardo Zecchina, Scott Kirkpatrick, Bart Selman,
Lidror Troyansky, “Determining Computational Complexity from Char
acteristic ‘Phase Transitions,’” Nature, 400 (1999) 133–137.
[2] John von Neumann, Theory of SelfReproducing Automata (University of
Illinois Press, UrbanaChampaign, IL, 1966).
[3] http://www.red3d.com/cwr/boids/applet. A demonstration of the Boids
model of bird ﬂocking is provided.
[4] Martin Gardner, “Mathematical Games: The Fantastic Combinations
of John Conway’s New Solitaire Game ‘Life,’” Scientiﬁc American, 223
(October 1970) 120–123; http://www.bitstorm.org/gameoﬂife.
[5] Dirk Helbing, Illes J. Farkas, andTamas A. Vicsek, “Simulating Dynamical
Features of Escape Panic,” Nature, 407 (2000) 487–490.
[6] P. Bak, C. Tang, and K. Wiesenfeld, “Selforganized Criticality,” Physics
Review A, 38 (1988) 364–374.
[7] R. Axelrod, The Evolution of Cooperation (Basic Books, New York,
1984).
[8] David Wolpert and Kagan Tumer, “An Introduction to Collective Intel
ligence,” in Handbook of Agent Technology, edited by A. Jeffrey and
M. Bradshaw (AAAI Press/MIT Press, Cambridge, 1999).
[9] M. Dorigo and G. Di Caro, “The Ant Colony Optimization Meta
heuristic,” in New Ideas in Optimization, edited by D. Corne, M. Dorigo,
and F. Glover (McGraw Hill, UK, 1999).
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 39
[10] S. Kirkpatrick, C. D. Gellat, Jr., and M. P. Veechi, “Optimization by
Simulated Annealing,” Science, 220(4589) (1983) 671–681.
[11] S. Boettcher and A. G. Percus, “Nature’s Way of Optimizing,” Artiﬁcial
Intelligence, 119 (2000) 275–286.
[12] Han Jing, Jiming Liu, and Cai Qingsheng, “From ALIFE Agents to
a Kingdom of N Queens,” in Intelligent Agent Technology: Systems,
Methodologies, and Tools, edited by Jiming Liu and Ning Zhong (The
World Scientiﬁc Publishing Co. Pte, Ltd., 1999). A demonstration of
Alife Model for solving Nqueen problems can be downloaded from
http://www.santafe.edu/~hanjing/dl/nq.exe.
[13] Jiming Liu, Han Jing, and Yuan Y. Tang, “Multiagent Oriented Constraint
Satisfaction,” Artiﬁcial Intelligence, 136(1) (2002) 101–144.
[14] Han Jing and Cai Qingsheng, “Emergence from Local Evaluation Func
tion,” Journal of Systems Science and Complexity, 16(3) (2003) 372–390;
Santa Fe Institute working paper 200208036.
[15] Vipin Kumar, “Algorithmfor Constraint Satisfaction Problem: ASurvey,”
AI Magazine, 13(1) (1992) 32–44.
[16] J. van Mourik and D. Saad, “Random Graph Coloring: A Statistical
Physics Approach,” Physics Review E, 66(5) (2002) 056120.
[17] M. E. J. Newman, “The Structure and Function of Networks,” Computer
Physics Communications, 147 (2002) 40–45.
[18] Rok Sosic and Jun Gu, “Efﬁcient Local Search with Conﬂict Minimization:
ACase Study of the Nqueen Problem,” IEEE Transactions on Knowledge
and Data Engineering, 6(5) (1994) 661–668.
[19] V. Kumar, “DepthFirst Search,” in Encyclopaedia of Artiﬁcial Intelli
gence, Volume 2, edited by S. C. Shapiro (John Wiley and Sons, Inc., New
York, 1987).
[20] S. Minton, M. D. Johnston, A. B. Philips, and P. Laird, “Minimizing
Conﬂicts: A Heuristic Repair Method for Constraint Satisfaction and
Scheduling Problems,” Artiﬁcial Intelligence, 52 (1992) 161–205.
[21] Bart Selman, Henry A. Kautz, and Bram Cohen, “Local Search Strategies
for Satisﬁability Testing,” in Second DIMACS Challenge on Cliques, Col
oring, and Satisﬁability, edited by M. Trick and D. S. Johnson, Providence,
RI, USA, 1993.
[22] D. McAllester, B. Selman, and H. Kautz, “Evidence for Invariants in Local
Search,” in Proceedings of AAAI’97 (AAAI Press/The MIT Press, 1997).
[23] F. Glover, “Tabu Search: Part I,” ORSA Journal on Computing, 1(3)
(1989) 190–206.
Complex Systems, volume (year) 1–1+
40 J. Han
[24] F. Glover, “Tabu Search: Part II,” ORSA Journal on Computing, 2(1)
(1990) 4–32.
[25] B. Selman, H. Levesque, and D. Mitchell, “A New Method for Solving
Hard Satisﬁability Problems,” in Proceedings of AAAI’92 (MIT Press, San
Jose, CA, 1992).
[26] B. Selman, Henry A. Kautz, and Bram Cohen, “Noise Strategies for Im
proving Local Search,” in Proceedings of AAAI’94 (MIT Press, Seattle,
WA, 1994).
[27] J. H. Holland, Adaptation in Natural and Artiﬁcial Systems (The Univer
sity of Michigan Press, Ann Arbor, 1975).
[28] J. Kennedy and R. Eberhart, “Particle Swarm Optimization,” IEEE In
ternational Conference on Neural Networks IV (IEEE Service Center,
Piscataway, NJ, 1995).
[29] David S. Johnson, Cecilia R. Aragon, Lyle A. McGeoch, and Catherine
Schevon, “Optimization by Simulated Annealing: An Experimental Eval
uation; Part II, Graph Coloring and Number Partitioning,” Operations
Research, 39(3) (1991) 378–406.
[30] D. Brelaz, “NewMethods to Color Vertices of a Graph,” Communications
of the ACM, 22, 251–256.
[31] A. Hertz and D. de Werra, “Using Tabu Search Techniques for Graph
Coloring,” Computing, 39(4) (1987) 345–351.
[32] Jacques Ferber, Multiagent Systems: An Introduction to Distributed Arti
ﬁcial Intelligence (AddisonWesley, New York, 1999).
[33] R. K. Ahuja, O. Ergun, J. B. Orlin, and A. P. Punnen, “A Survey of
Very Largescale Neighborhood Search Techniques,” Discrete Applied
Mathematics, 123 (2002) 75–102.
[34] Jan HM Korst and Emile HL Aarts, “Combinatorial Optimization on
a Boltzmann Machine,” Journal of Parallel and Distributed Computing,
6(2) (1989) 331–357.
[35] E. Aarts and J. Korst, Simulated Annealing and Boltzmann Machines: A
Stochastic Approach to Combinatorial Optimization and Neural Com
puting (John Wiley & Sons, 1989).
[36] Eric van Damme, Stability and Perfection of Nash Equilibria (Springer
Verlag, Berlin and New York, 1987).
[37] Marc Mezard, Giorgio Parisi, and Miguel Angel Virasoro, Spin Glass
Theory and Beyond (World Scientiﬁc, Singapore and Teaneck, NJ, 1987).
[38] Ravindra K. Ahuja and James B. Orlin, “Use of Representative Operation
Counts in Computational Testing of Algorithms,” INFORMS Journal on
Computing, 8(3) (1996) 318–330.
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution 41
[39] Alice Smith and David Coit, “Constraint Handling Techniques: Penalty
Functions,” in Handbook of Evolutionary Computation, edited by
T. Baeck, D. Fogel, and Z. Michalewicz (Oxford University Press, 1997).
[40] Ian P. Gent and Barbara M. Smith, “Symmetry Breaking in Constraint
Programming,” in Proceedings ECAI’2000, edited by W. Horn (IOS Press,
2000).
[41] A. Marino and R. I. Damper, “Breaking the Symmetry of the Graph
Colouring Problemwith Genetic Algorithms,” in Proceedings Genetic and
Evolutionary Computation Conference (GECCO2000), Late Breaking
Papers, Las Vegas, NV (Morgan Kaufmann Publishers, San Francisco,
2000).
[42] R. Mulet, A. Pagnani, M. Weigt, and R. Zecchina, “Coloring Ran
dom Graphs,” Physical Review Letters, 89 (2002) 268701; cond
mat/0208460.
[43] S. Cocco, R. Monasson, A. Montanari, and G. Semerjian, “Approx
imate Analysis of Search Algorithms with ‘Physical’ Methods,” cond
mat/0302003.
[44] Wim Hordijk, “A Measure of Landscapes,” Evolutionary Computation,
4(4) (1996) 335–360.
Complex Systems, volume (year) 1–1+
2
J. Han
different contexts, the evaluation function is also called the objective function, penalty function, ﬁtness function, cost function, or energy E. It has the form f S , where is the state (conﬁguration) space. The evaluation function calculates how good a state of the whole system (or a single agent) is, then selection for the next step is based on its value. The evaluation function plays a key role in evolution and is one of the fundamental problems of evolution. Generally speaking, the system state (conﬁguration) related to the minimal evaluation value is the optimum of the system. For example, the Hamiltonian is used to evaluate the energy of a conﬁguration of the spin glass system in statistical physics. So in this case the Hamiltonian is the evaluation function and the groundstate is related to the minimum of the Hamiltonian. In combinatorial problems, such as the SAT problem [1], the evaluation function is the total number of unsatisﬁed clauses of the current conﬁguration. So the evaluation value for a solution of the SAT problem is zero. In artiﬁcial molecular evolution systems, the evaluation function is the ﬁtness function, and evolution favors conﬁgurations that are related to the maximal ﬁtness value. A system may evolve in different paths if it is guided by different evaluation functions. But there always exists pairs of different evolution functions that lead to the same ﬁnal state although the paths may be different. Therefore, for a speciﬁed system, people are able to construct different evaluation functions to reach the same goal state. As the main theme of this paper, we investigate the following two ways to construct evaluation functions.
1. Global evaluation function (GEF). GEF methods use global information to evaluate the whole system state. This is the traditional way of constructing the evaluation function and is widely used in many systems. The GEF embodies the idea of centralized control. 2. Local evaluation function (LEF). LEF methods use local (limited) information to evaluate a single agent’s state and guide its evolution.
The idea of using local rules based on local information is one of the features of complex systems such as cellular automata [2], Boid [3], game of life [4], escape panic model [5], and sandpile model [6]. The private utility function in game theory [7] and the multiagent learning system COIN [8] are very close to the LEF idea except that the utility function is allowed to use global information. It is a simple and natural idea to use a GEF to guide the evolution of a whole system to a global goal. But can we use an LEF to guide each agent so that the whole system attains the global goal in the end? In other words, if each agent makes its own decisions according to local information, can the agents get to the same system state reached by evolution guided by a GEF? Some emergent global complex behaviors from
Complex Systems, volume (year) 1–1+
Local and Global Evaluation Functions for Computational Evolution
3
local interactions have been found in natural and manmade systems [3–5, 9]. Most traditional algorithms used for solving combinatorial problems use a GEF, such as simulated annealing [10], and some new ones use an LEF, such as extremal optimization [11] and the Alife&AER model [12, 13]. Our earlier paper [14] investigated emergent intelligence in solving constraint satisfaction problems (CSPs, also called decision problems) [15] by modeling a CSP as a multiagent system and then using LEF methods to ﬁnd the solution. This paper studies the relationship and essential differences between local and global evaluation functions for the evolution of combinatorial problems, including decision and optimization problems. We use the coloring problem to demonstrate our ideas because it has two forms: the kcoloring problem, which is a decision problem; and the minimum coloring problem, which is an optimization problem. We found that some LEFs are consistent with some GEFs while some are not. So LEF and GEF methods differ in two layers: consistency and exploration. We also compare the performance of LEF and GEF methods using computer experiments on 36 benchmarks for the kcoloring and minimum coloring problems. This paper is organized as follows. In section 2, computer algorithms for solving coloring problems are reviewed, and then a way to model a coloring problem as a multiagent system is described. Section 3 is about GEFs and LEFs. We ﬁrst show how to construct a GEF and an LEF for solving the coloring problem (section 3.1), and then show their differences from aspects of consistency (section 3.2.1) and exploration heuristics (section 3.2.2). Computer experiments (section 3.3) show that the LEF (Alife&AER model) and GEF (simulated annealing and greedy) are good at solving different instances of the coloring problem. Conclusions are presented in section 4.
2. Coloring problems
The coloring problem is an important combinatorial problem. It is NPcomplete and is useful in a variety of applications, such as timetabling and scheduling, frequency assignment, and register allocation [15]. It is simple and can be mapped directly onto a model of an antiferromagnetic system [16]. Moreover, it has obvious network structure, making it a good example for studying network dynamics [17]. The coloring problem can be treated in either decision problem (CSPs) or optimization problem form.
Deﬁnition 1 (kcoloring problem)
Given a graph G (V, e), where V is the set of n vertices and e is the set of edges, we need to color each vertex using a color from a set of k colors. A “conﬂict” occurs if a pair of
Complex Systems, volume (year) 1–1+
4
J. Han
V1 V2 Red V4
Green V3 Blue Red
Figure 1. A solution for a 3coloring problem.
linked vertices has the same color. The objective of the problem is either to ﬁnd a coloring conﬁguration with no conﬂicts for all vertices, or to prove that it is impossible to color the graph without any conﬂict using k colors. Therefore, the kcoloring problem is a decision problem. There are n variables for the n vertices where each variable is one of k colors. For each pair of nodes linked by an edge, there is a binary constraint between the corresponding variables that disallows identical assignments. Figure 1 is an example: Variable set X Variable domain D1 Constraint set R X1 , X2 , X3 , X4 , D2 D3 D4 green, red, blue , X1 X2 , X1 X3 , X1 X4 , X2 X3 .
The kcoloring problem (k 3) is NPcomplete. It might be impossible to ﬁnd an algorithm to solve the kcoloring problem in polynomial time.
Deﬁnition 2 (Minimum coloring problem)
This deﬁnition is almost the same as the kcoloring problem except that k is unknown. It requires ﬁnding the minimum number of colors that can color the graph without any conﬂict. The minimum possible number of colors of G is called the chromatic number and denoted by Χ(G). So the domain size of each variable is unknown. For the graph shown in Figure 1, Χ(G) 3. The minimum coloring problem is also NPcomplete.
2.1 Algorithms
There are several basic general algorithms for solving combinatorial problems. Of course they can also be used to solve the coloring problem. Although we focus on the coloring problem, the purpose of this paper is to demonstrate the ideas of LEF and GEF methods for all combinatorial problems. What follows is a brief review of all general algorithms used for combinatorial problems, even though some of them might not have been applied to the coloring problem. The focus is only on the basic ones, so variants of the basic algorithms will not be discussed.
Complex Systems, volume (year) 1–1+
because the process of ﬁnding a solution using GTSS algorithms can be regarded as the evolution of a multiagent system. They are all populationbased optimization paradigms. Classiﬁcations of basic algorithms for combinatorial problems. The system assigns colors to vertices sequentially and then checks for conﬂicts in each colored vertex. This paper will focus on GTSS algorithms: the intersection of singlesolution and the generatetest algorithms (the gray area in Figure 2).. Many heuristics have been developed to improve performance. ant colony optimization [9]. According to how many (approximate) solutions the system is seeking concurrently. 13]. Singlesolution algorithms (SS). the intersection of singlesolution and generatetest (the gray area). local search [10. and the Alife&AER model [12. The system is concurrently searching for several conﬁgurations. There are two styles of search: systematic search and generatetest. Some examples are backtracking [18]. and DSATUR [30]. This is a much bigger and more popular family than the backtracking paradigm. Multiplesolution algorithms (MS). several important algorithms are SEQ [29]. The system is searching for only one conﬁguration at a time. it will go back to the most recently colored vertex and perform the process again. If conﬂicts exist. Generatetest (GT). Examples of this are genetic algorithms [27]. volume (year) 1–1+ Backtracking. For the coloring problem. which means that there are several copies of the system.e. checks if it is a soComplex Systems. RLF [29]. they can be divided into singlesolution and multiplesolution algorithms. 19–26]. Generatetest . The system is initialized with a population of random conﬁguration and searches for optima by updating generations based on the interaction between conﬁgurations. extremal optimization (EO) [18]. GT generates a possible conﬁguration (colors all vertices) and then checks for conﬂicts (i. and particle swarm optimization [28]. Figure 2 is our classiﬁcation of current algorithms.Local and Global Evaluation Functions for Computational Evolution 5 Singlesolution Algorithms Systematic Search Backtracking … Local search (Simulated Annealing) Extremal Optimization Alife&AER Model … Ant Colony Optimization … Genetic Algorithm Particle Swarm Optimization … Multiplesolution Algorithms Figure 2. This paper focuses on GTSS algorithms.
it sometimes performs stopandrestart.1 All of the given algorithms have their advantages and drawbacks. and Tabu search [23. and then change one variable’s value in each improvement until a solution is found or the maximum number of trials is reached. it is not discussed in this paper. The simplest but inefﬁcient way to generate a conﬁguration is to select a value for each variable randomly. Some algorithms that use the complex systems ideas are genetic algorithms. random walk [26].6 J. For many largescale combinatorial problems. 31]. such as hill climbing [25] or minconﬂict [20]. volume (year) 1–1+ . To avoid being trapped in the localoptima. This style of searching is called local (or neighborhood) search. Complex Systems. it will modify the current conﬁguration and check it again until a solution is found. and no algorithm can beat all other algorithms on all combinatorial problems. local search always gives better results than the systematic backtracking paradigm. extremal optimization.1) until a solution is found. which is the main difference from simulated annealing. The ﬁtness function for each agent evaluates how good the conﬁguration represented by the agent is. particle swarm optimization. 24. 1 Some implementations of particle swarm optimization can use either a GEF or an LEF. If it is not a solution. People proposed simulated annealing schedules to solve the coloring problem and showed that it can dominate backtracking diagrams on certain types of graphs [29]. Simulated annealing [10] is a famous heuristic of local search inspired by the roughly analogous physical process of heating and then slowly cooling a substance to obtain a strong crystalline structure. Note that extremal optimization and the Alife&AER model improve the variable assignment based on LEFs. they are not complete algorithms. Since particle swarm optimization is not a singlesolution algorithm. Both genetic algorithms and particle swarm optimization have many different conﬁgurations at a time. Although GT algorithms always beat backtracking on largescale problems. Smarter conﬁguration generators have been proposed. and the Alife&AER model. Each conﬁguration can be regarded as an agent (called a “chromosome” in genetic algorithms). so the evolution of the system is based on a GEF. They cannot prove there is no solution for decision problems and sometimes miss solutions that do exist. Han lution). They move to a new conﬁguration with a better evaluation value chosen from the current conﬁguration’s neighborhood (see details in section 3. Extremal optimization and the Alife&AER model also generate a random initial conﬁguration.
volume (year) 1–1+ . Bblue). B. especially the local interactions among them (Figure 3). Agents in these systems may be homogeneous or heterogeneous. It is able to act with other agents and get information from the system. The current conﬁguration S G. . . V2 . and may have common or different goals. . and the circle indicates the current agent state (color). A multiagent system (MAS) is a computational system that consists of a set of agents A a1 . Complex Systems. . an possible states of agent ai Xi red state of agent ai is ai . . . B is a solution. a2 . in which agents interact or work together to reach goals [32].2 Multiagent modeling We proposed modeling a CSP in the multiagent framework in [14]. . . R. It is autonomous.state red Link between two vertices Graph interaction between two agents interaction network in MAS X1 . A MAS (right) for a 3coloring problem (left). Ggreen. . . The table with three lattices beside each agent represents all possible states (Rred. . . It is straightforward to construct a MAS for a coloring problem: n vertices V1 . a2 . Xn goal system state evolution of MAS a system state Conﬁguration S Solution Ssol Process of ﬁnding a solution by GTSS algorithms Evaluation function evolution direction of MAS a1 V1 V2 R V4 B Agent R G B R B G G V3 B a3 a2 B R G B a4 R G Interaction (a) (b) Figure 3. MAS focus on the complexity of a group of agents.Local and Global Evaluation Functions for Computational Evolution 7 2. an . Vn Possible colors of Vi Color of Vi is red n agents a1 . X2 . It is driven by some objectives and has some behavioral strategies based on information it collects. . . Here we translate the coloring problem into the language of multiagent systems. . The concept of agent (denoted by a) is a computational entity. .
R. Han V1 V2 R V4 G V3 B G R G B V2 R V1 G V3 B V4 B R G B a1 a2 a3 a4 (a) An initialization a1 a2 a3 a4 (b) A solution state Figure 4.. for example. extremal optimization. 2. The MAS representation for a 3coloring problem. volume (year) 1–1+ . those algorithms all share the same following framework.1 Evolution We now need to design a set of rules (or algorithm) to evolve conﬁgurations of the coloring problems. G is obviously not a solution. Thus. R.2.1. 3. S G. that is. B. if the system is initialized as shown in Figure 4. B (see Figure 4(b)). Initialization (t 0): each agent picks a random state. B.8 J. In the language of MAS.1. 1.1 and LEF methods follow in section 3. How does the system evolve to the solution state? As mentioned earlier. Repeat step 2 until a solution is found or the maximum number of trials is reached. The essential difference between the local and global functions lies in the process of Evolution(S) in step 2. Complex Systems. how variables change their assigned values. simulated annealing. one or several agents change their state. For example. The core of the algorithm is how the whole system (conﬁguration) changes from one state to another state. and the Alife&AER model) are schedules for evolution of the MAS. 4. There is a conﬂict between V1 and V4 which are linked to each other and have the same color.g. t t 1. the system needs to evolve to reach a solution state. GEF methods are described in section 3. For each time step (t): according to some schedule and heuristics. S G. 3. get S(0). so that S(t 1) Evolution(S(t)). Output the current state S(t) as the (approximate) solution. Global and local evaluation functions 3. GTSS algorithms (e.
with ∆ denoting the Kronecher function ∆(x. In 1 the following discussions. we always use the neighborhood structure NS since the neighborhood structure is not in the scope of this paper. For simplicity. and a suitable choice can get good performance. or one) neighbors of the current system state S(t). where Hammingdistance(S. equation (1) counts twice the number of conﬂicts: n EGEFk (S) i 1 Xj S i ∆(Xi . the evaluation value is 1 2 2. S ) i (1 ∆(Xi . Xi )). we can deﬁne the neighborhood structure of a conﬁguration S based on the concept of Hamming distance: d NS S Hammingdistance(S. so EGEF (Ssol ) 0. S ) d. In Figure 6. (b) and (c) are neighbors of (a). the GEF search space 1 is NS and there are n(k 1) neighbors of each system state. Complex Systems. Equation (1) is a GEF because it considers the whole system including all agents.1. This will enlarge the search scope and increase the cost of searching the neighborhood. Figure 4(b) is a neighbor of Figure 4(a).1 Traditional methods: Global evaluation functions Evolution(S) of step 2 for GEF methods can be described as: Compute the GEF values of all (or some. y) 1 if x y and otherwise ∆(x. A neighborhood in GEF methods is the search scope for the current system state. For example. j e denotes the set of variables that link to Vi . A GEF for the kcoloring problem.Local and Global Evaluation Functions for Computational Evolution 9 3. and then select and return a neighboring state S(t) to update the system. In simulated annealing. if d 1 in the kcoloring problem. If d is larger. For example. For the system state shown in Figure 4(a). or with a speciﬁed probability to accept a worse state. Here “better” and “worse” are deﬁned by the evaluation value. But there will probably be greater improvement in each time step. the GEF is the energy or cost function of the current system state and guides the search. y) 0. So the neighborhood structure will affect the efﬁciency of the algorithm [33]. Note that simulated annealing just randomly generates a neighbor and accepts it if it is a better state. There are a variety of evaluation functions for a speciﬁed problem. One naïve GEF for the kcoloring problem is to count the number of conﬂicts in the current conﬁguration. Xj ) (1) where S i Xj i. Figure 4(b) shows a solution state. volume (year) 1–1+ . A GEF evaluates how good the current system state S is. the neighborhood size is bigger.
in Figure 5(a): C1 C2 . 35] for ﬁnding the chromatic number Χ(G). There are several evaluation functions [29. EGEFO (Sb ) 2. and C3 X1 . As a minimum coloring problem based on equation (2). 34. green is the second color. Part (b) of equation (2) counts twice the number of conﬂicts in the current conﬁguration. Xj ) (b) (2) (a) where m is the current total number of colors being used. Index(green) 2. using equation (2) again. Say red is the ﬁrst color. blue is the third color. An example of using GEFs for a coloring problem. j 1. so EGEFO (Sa ) (22 22 ) 4 1 2 0. based on equation (1). EGEFk (Sc ) 0. EGEFO (Sb ) > EGEFO (Sa ) shows that Sb is a worse and EGEFO (Sc ) V1 B V2 R V4 R (a) Sa V3 B V2 R V4 G (b) Sb V1 B V3 B V2 R V4 R (c) Sc V1 G V3 B Figure 5. A GEF for the minimum coloring problem. a feasible solution. although a state with a better evaluation value does not always mean that it is closer to the solution state. As a 3coloring problem. The set Ci includes variables that are colored by the ith color. Sa is the initial state.n . volume (year) 1–1+ . EGEFk (Sb ) 2. We can get the evaluation value of its two neighbors: EGEFO (Sb ) 2 6. We modiﬁed the version given by Johnson [29] to be: m n EGEFO (S) i 1 Ci 2 n i 1 Xj S i ∆(Xi . that is. EGEFO (Sa ) 0. Complex Systems. which makes all the local optima in equation (2) correspond to a nonconﬂicting solution. We assign each color s an index number Index(s). X3 . Sb and Sc are two neighbors of Sa . Han Minimizing the evaluation value is the goal of the solving process. For example. X4 . so Sc is the best one (also an optimal solution). So Sc is the solution. n in part (b) is a weight value to balance parts (a) and (b). EGEFk (Sa ) 2. and so on. Part (a) favors using fewer colors.. So Ci Xj Index(Xj ) i. EGEFO (Sc ) 6. Index(blue) 3.. Index(red) 1. X2 .10 J.
2). G) (∆(X2 . Like the GEF. 1 . not a single agent’s state. R) ∆(X3 . B) 1 (∆(X2 . For EGEFO (Sc ) < EGEFO (Sa ). as in cellular automata. 3. No agents here are autonomous. R) (∆(X2 .2. then S is a solution (Figure 6(c)): ELEFk (S i . They all obey the choice of the centralized controller. equations (1) and (2) consider all agents’ states.1. s).2 Local evaluation function methods Evolution(S) of step 2 using LEF methods can be described as: For each agent ai . there are a variety of functions which can serve as the LEF for a problem. B) ∆(X4 . These strategies are part of the exploration heuristics (see section 3. R)) ∆(X4 . This is one of our future projects. volume (year) 1–1+ . Xi ) 0. . Leastmove means the agent will select the best state and change to that state. bettermove.Local and Global Evaluation Functions for Computational Evolution 11 state than Sa . . 1 . Therefore. Actually Sc is an optimal solution. G) ∆(X3 . G) 1 ELEFk (Sa. After obtaining the evaluation values for all states of an agent. An LEF for the kcoloring problem. such as leastmove. because it uses one more color. Note that if for all i i 1 . but it still has one conﬂict as does Sa . B) ∆(X3 . (2) they use all information of the system. Complex Systems. see [13]). a3 . the system checks all (or one) neighbors and selects the one that has the best evaluation value as the new system state. Sc is a better state because it can remove the conﬂict in Sa although it uses one more color. n we have ELEFk (S i . and randommove (random walk). agent a1 links to a2 . So we can see that the GEF methods have two main features: (1) they evaluate the whole system state. s) Xj S i i ∆(Xj . and then select a state for ai to update itself. R) 1 ELEFk (Sa. (3) For example. do the following sequentially:2 Compute the evaluation values of all/some states by ai ’s LEF. 1 . so we get: ELEFk (Sa. centralized control in GEF methods is obvious: in each step. Equation (3) is one possible LEF for each agent ai in the kcoloring problem [13]. and randommove means the agent will change its state randomly (for details. 2 Or simultaneously in some cases. B)) 2 0 1. and a4 . it will update its state using some strategies. G)) ∆(X4 . in Figure 5(a). It calculates how many conﬂicts each state (s) of agent ai receives based on the current assignments of ai ’s linked agents.
a1 will most probably change to state G. (c) shows that Sc is a solution state because all agents are in nonconﬂicting states. If each agent only knows its linked agents. An LEF for the minimum coloring problem. Agent a4 only links to a1 . Part (a) of equation (4) favors using fewer colors by putting pressure on each agent to try to get a color with a smaller index number. 3 Similarly. From Sa . R) ELEFk (Sa. n in part (b) is a weight value to balance parts (a) and (b). it will most probably change its state from B to G and get to Sc . it will stay. s) (b) (4) where Index(s) is described in section 3. If a4 moves ﬁrst.1 and returns the index number of color s. B) 0 0 1. how can they use their limited knowledge to ﬁnd the minimal colors for a graph? Each agent has to make an approximation of the global requirement. volume (year) 1–1+ . Han a1 a2 a3 a4 R 2 0 1 0 G 0 0 0 0 B 1 2 1 1 a1 a2 a3 a4 R 1 0 1 0 G 1 0 0 0 B 1 2 1 1 a1 a2 a3 a4 R 2 0 1 0 G 0 1 1 1 B 1 1 0 0 (a) Evaluation values in Sa (b) Evaluation values in Sb (c) Evaluation values in Sc Figure 6. Note that if for Complex Systems. G) 4 ELEFk (Sa. Let us try the following LEF for each agent ai : EiLEFO (S i . G) ∆(X1 . 4 . we can get evaluation values (Figure 6) for all states of each agent. Numbers on each row of the lattices represent the evaluation values based on equation (3) for an agent’s three possible states (colors): R. which makes all the local optima in equation (4) correspond to a nonconﬂicting state. Part (b) of equation (4) counts how many conﬂicts ai will receive from its linked agents if ai takes the color s.12 J. so it only considers a1 ’s state: ELEFk (Sa. ELEFk for agent a2 only considers a1 and a3 . R) ∆(X1 . Since the evaluation value of the current state (X1 B) is higher than state G.1. it will change to G and get to Sb . B) 2 4 4 ∆(X1 . Circles refer to the current state of the agent. s) Index(s) n (a) Xj S i ∆(Xj . ELEFk for agent a3 only considers a1 and a2 . or B. 4 . 4 . if a1 moves ﬁrst. An example of using an LEF for the 3coloring problem shown in Figure 5. G. or with small probability. Continuing this way.
So we can see that the LEF methods have two main features: (1) they evaluate the state of a single agent.g. B)) i Index(B) n (∆(X2 . 1 . if the whole system can achieve a global solution based on all selﬁsh agents which are using the LEF.. Complex Systems. G)) ∆(X4 . Circles refer to the current state of the agent. Since the evaluation value of the current state (X1 B) is higher than state G. . Xi ) n. R)) ∆(X4 . the system dispatches all agents sequentially (e. if a1 moves ﬁrst. B) 3 4 1 7. 1 . We can evaluate each state of a1 in Figure 7(a): E1 LEFO (Sa. a1 will most probably perform a leastmove to change from B to G. only knows the information of its related subgraph). it will most probably change its state from B to G and get to Sc . the system dispatches the agent who is in the worst state according to the LEF). in the extremal optimization algorithm. From Sa . G) ∆(X3 .Local and Global Evaluation Functions for Computational Evolution 13 a1 a2 a3 a4 R 9 1 5 1 G 2 2 2 2 B 7 11 7 7 Y 4 4 4 4 a1 a2 a3 a4 R 5 1 5 1 G 6 2 2 2 B 7 11 7 7 Y 4 4 4 4 a1 a2 a3 a4 R 9 1 5 1 G 2 6 6 6 B 7 7 3 3 Y 4 4 4 4 (a) Evaluation values in Sa (b) Evaluation values in Sb (c) Evaluation values in Sc Figure 7. Equations (3) and (4) only consider its linked agents’ states (i. Xn is a feasible solution. All agents are autonomous and decide their behaviors based on the LEF that maximizes their own proﬁt. all i n we have ELEFO (S i . with lower probability a1 will perform a randommove to change to state Y and get to another feasible solution (but not an optimal one. Numbers on each row of the lattices represent the evaluation values based on equation (4) for all possible states (colors) of an agent. . G) E1 LEFO (Sa. 1 . Or. volume (year) 1–1+ . R) 9 n 2 (∆(X2 . R) ∆(X3 . Decentralized control in LEF methods is also obvious: in each step. An example of using an LEF for the minimum coloring problem shown in Figure 5.e. since it uses four colors). (2) they use local information of the system. . Therefore. X2 . or with small probability it will change to G and get to Sb . it will stay. but not always the optimal solution. B) ∆(X4 . S X1 . and get to an optimal solution Sc . R) E1 LEFO (Sa. G) ∆(X3 . we call this emergence and the system selforganizes towards a global goal. (c) shows that Sc is a feasible solution state because all agents are in nonconﬂicting states and it is also an optimal solution. not the whole system. B) Index(R) 1 4 2 Index(G) 2 4 0 n (∆(X2 . . if a4 moves ﬁrst..
So.14 J. i. Xi ). This ﬁts the concepts we mentioned in the Introduction. Now we know the following two aspects should be considered while constructing an evaluation function. the whole system is considered. S replace(S. So S belongs 1 to NS . and S An LEF is consistent with a GEF if it is true for S 1 NS (i. s to denote that S is achieved by replacing the state s of Xi in S to be s .1 we studied the local and global evaluation functions using the coloring problem. s ).1 Consistency and inconsistency In section 3. volume (year) 1–1+ . we use Di S replace(S. s. 1 when x > 0. How much information is used in the evaluation? The GEF considers the whole system state. Deﬁnition 3. 2. sometimes we just simply say sgn( EGEF ) for equation (5). all variables share the same state in S and S except Xi . An LEF is restrictconsistent with a GEF if (1) the LEF is consistent with the GEF and that (2) Α R such that it is true for all S n EGEF (S) Α i 1 DiLEF (S i . agent a1 uses LEF equation (3) to change Complex Systems.e.2. Han 3. But what if the evaluation functions consider the whole system and only use local information? What is the relationship between the local and global evaluation functions? Consistency between LEF and GEF. For example.2 Essential differences between local and global evaluation functions 3. while the LEF considers a single agent. s )) that EGEF (S)) sgn(EiLEF (S i . s ) EiLEF (S i . Deﬁnition 4. i. (6) So restrictconsistent is the subset concept of consistent. It is easy to prove that LEF equation (3) is restrictconsistent with GEF equation (1) in the kcoloring problem. while the LEF only considers the agent’s linked agents. s. 1.. sgn(x) For convenience. What object is being evaluated? With a GEF. and sgn(0) 0 when x 0. In the following. s. (5) 1 when sgn( ELEF ) i sgn(EGEF (S ) where sgn(x) is deﬁned as: sgn(x) x < 0. in the 3coloring problem shown in Figure 5. s)).
Deﬁnition 1 5. This makes it easier for us to understand why the LEF can guide agents to a global solution. equation (5) holds between them) then the following is true: ( S )S L(P. Optimal solutions of optimization problems and feasible solutions in decision problems belong to L(P) and have the smallest evaluation value. . . Theorem 1 (Consistency) Given a problem P. Then we need to calculate EGEFk (S ) EGEFk (S) to decide whether S is a better system state or not. . . EGEF ). This change is also a good change for the whole system. ELEF ). s. The current system state is S and we randomly generate a neighbor S replace(S.e. it is not necessary to consider all of the agents’ states and EGEFk j can be calculated by ELEFk : EGEFk (S ) EGEFk (S) 2 ELEFk . For a certain conﬁguration S. n. 2. such that s1 .. . s2 .Local and Global Evaluation Functions for Computational Evolution 15 to G which can decrease its valuation value from 1 to 0. . while all agents get to a state that is evaluated by equation (3) of zero. which means it is a solution. EGEF ) S N(P. sn . ELEF ). because it decreases the evaluation value of the whole system based on GEF equation (1) (from EGEFk (Sa ) 2 to EGEFk (Sc ) 0). EGEF ) N(P. and an LEF: if the LEF is consistent with the GEF (i. In the consistent case. if it is the local optimum of the GEF. Suppose we use simulated annealing. j. Complex Systems. If EGEF (S ) EGEF (S) is true for S and S NS . When the LEF is consistent with the GEF. S EiLEF (S i . they both indicate the same equilibrium points. and vice versa. s ). In the kcoloring problem. ELEF ). Deﬁnition as S 6. A Nash equilibrium of an LEF in a problem P is deﬁned . . si ) for si Di holds for all i 1. . volume (year) 1–1+ . then it is the Nash equilibrium [36] of the LEF. the evaluation value of equation (1) for the whole system is also zero. (7) It follows that L(P. 1 then S is called the local optimum of the GEF using a NS neighborhood structure in problem P. a GEF. si ) EiLEF (S i . The set of all local optima is denoted as L(P. The set of all Nash equilibria of the P and the LEF is denoted as N(P. j What does consistency mean? A good agent decision based on the j LEF that can decrease the value of ELEFk is also a good decision for the whole system based on the GEF that can decrease the value of EGEFk .
. and an LEF: if the LEF is consistent with the GEF. x) Di . If S L(P.16 J. EGEF ).2. S is a local optimum. . x) ELEF (S i . EiLEF (S i . si . . x) EiLEF (S i . ELEF ) and S is a Nash equilibrium. x Di . . i sgn(ELEF (S i . We discuss this in section 3. that is. but it can also be used to solve the kcoloring problem (the experimental result in section 3. s. . s) EiLEF (S i . x i Di . The main difference between a consistent pair of LEF and GEF lies in their exploration heuristics. x ) L(P. s ). and structure according to spin glass theory [37]. then we have i 1. x)) EGEF (S)) EGEF (S) 0. x)) EGEF (S) S is a local optimum. . .2. . x Di . n. x)) EGEF (S) 0 Because sgn(ELEF (S i . n. . Inference 1 helps us to understand why some LEF can solve problems that the GEF can solve: because their equilibrium points are the same! They have the same phase transition. s) 0 S is a Nash equilibrium. s)) i EiLEF (S i . Given a problem P. 2. EiLEF (S i . . i. . i. si . . N(P. x 1. i. volume (year) 1–1+ . x) EGEF (S)) i S S Inference 1.3. EGEF (replace(S. then n i E ELEFk Index(s ) Index(s). LEF equation (4) and GEF equation (1) are not consistent with each other. x)) i ELEF (S i . S is a solution and S is a local optimum of the GEF. Han Proof. (8) 2 GEFk Complex Systems. 2. 2. . s. si . 2. .1 shows that equation (4) works for the kcoloring problem). 2. S If S N(P. s). 1 NS . EGEF ). . EiLEF (S i . . sgn(EGEF (replace(S. . a GEF. So can we construct any other LEF or GEF that can make an essential difference? Is there any case for an LEF that is not consistent with a GEF? Inconsistency between local and global evaluation functions. n. . s)) i i 1. solution numbers. . x) Because sgn(EGEF (replace(S. n. n. EGEF (replace(S. i. such as dispatch methods and strategies of agents. ELEF ). Suppose the current conﬁguration is S and its neighboring conﬁguration S replace(S. The LEF equation (4) is designed for minimum coloring problems. EGEF (S ) 1 EGEF (S) is true for S NS i 1. i. then S is also a Nash equilibrium of the LEF. . Di . EGEF (S 1.
An LEF is weakinconsistent with a GEF if S and 1 NS . If this does not cause a change in conﬂict numbers (i. or smaller than 1 zero depending on Index(s ) Index(s). Inference 2. N means not allowed. We have L(P. Y means allowed in the weakinconsistency. EGEF ) N(P. EGEF ) if the LEF is weakinconsistent with the GEF.. We can see that the linear relationship between equations (2) and (4) only exists between part (b). i If EGEF 0: ELEFk can be larger than. There exists S and S NS such i that sgn( EGEF ) sgn( ELEFk ). volume (year) 1–1+ . Deﬁnition 7. So equation (5) is true only when EGEF 0. (a) LEF is weakinconsistent with GEF and (b) GEF is weakinconsistent with LEF. equal to. so n/2 EGEF > Index(s ) Index(s) holds for all s and s. equation (5) is not true only when EGEF 0. LEF equation (4) is weakinconsistent with GEF equation (1) in the kcoloring problem. Deﬁnition So weakinconsistency is an asymmetrical relation and is a subset concept of inconsistency. we will get sgn( EGEF ) i sgn( ELEFk ). equation (5) is i not true only when ELEF 0 (see Table 1). ELEF ) if the GEF is weakinconsistent with the LEF. If EGEF 0: Because Index(s) n for all s. and an LEF we have N(P. It is easy to prove the inconsistency: suppose a variable Xk changes from the lth color to the hth color. Given a problem P. Weakinconsistency relations between LEF and GEF. conﬂicts between Complex Systems. A GEF is S 1 weakinconsistent with an LEF if S and S NS . a GEF. not between part (a). The minimal number of colors Χ(G) is a global variable. GEF equation (2) and LEF equation (4) in the minimum coloring problem is an example of inconsistency but not weakinconsistency.Local and Global Evaluation Functions for Computational Evolution 17 ∆ELEF ∆EGEF >0 =0 <0 >0 Y Y N (a) =0 N Y N <0 N Y Y ∆ELEF ∆EGEF >0 =0 <0 >0 Y N N (b) =0 Y Y Y <0 N N Y Table 1. The tables and show all possible relation combinations of EGEF and ELEF for any S 1 any S Ns . ELEF ) L(P. An LEF is inconsistent with a GEF if the LEF is not consistent with the GEF. 8.e. LEF equation (4) is inconsistent with GEF equation (1) in the kcoloring problem.
In the LEF. so ELEFO (Sb . This occurs because part (a) of equation (2) favors the unbalanced distribution of colors. So when h < l and Ch has no fewer vertices than Cl . V3 G V2 V4 B G V1 B B V2 V4 V3 G V1 B B (a) Sa (b) Sb a1 a2 a3 a4 R 1 1 1 1 G 14 8 8 8 B 3 9 15 3 a1 a2 a3 a4 R 1 1 1 1 G 8 8 2 8 B 9 9 21 3 (c) LEF values for Sa (d) LEF values for Sb GEF for (a) and (b): EGEFO (Sa)=4 EGEFO (Sb)=2 ∆ EGEFO = 2 <0 (e) GEF values for Sa.18 J. B) 9 8 1 > 0. EGEFO (Sb ) EGEFO (Sb ) 2 4 2 < 0. it is more likely that h < l because the agent favors colors which have smaller index numbers according to equation (4). while if using a GEF it will. part (b) remains the same). we will get EGEFO < 0 and ELEFO > 0. then EGEFO 2( Ch Cl 1) and EiLEFO h l. Sb Figure 8. i If h > l and Cl > Ch 1. a good choice for an agent (from color l to color h) based on the LEF will be a bad one for the whole system based on the GEF. (c) and (d) separately show LEF values 2 2 for all states of each agent of Sa and Sb . If i h < l and Cl < Ch 1. a2 will not change from G to B. The minimum coloring problem example in Figure 8 shows that. Han Xk and its linked vertices keep the same number. we will get EGEFO > 0 and ELEFO < 0. Complex Systems. when using an LEF. So equations (4) and (2) are inconsistent with each other because the LEF prefers Sa while the GEF prefers Sb . G) ELEFO (Sa . This indicates that LEF equation (4) and GEF equation (2) will lead the system to different states. volume (year) 1–1+ . (e) shows the GEF values for Sa and Sb . Inconsistency between LEF equation (4) and GEF equation (2) in a minimum coloring problem. (a) and (b) show two possible conﬁgurations which are neighbors and differ only in agent a2 .
1. sgn( E1 ) sgn( E2 ). Complex Systems. Xj ). Does this mean that if an LEF is inconsistent with a GEF. Xj ) gji (Xj . ELEF is called a consistent LEF. Is any LEF ELEF always consistent with EGEF ELEF ? Are all LEFs consistent? Our answer to both is no. volume (year) 1–1+ . For example. Upper values are of gij (Xi .Local and Global Evaluation Functions for Computational Evolution 19 Xj Xi sl sh x sl sh r x z z y r y Table 2. We limit our discussion to binary constraint problems with the same domains for all variables. we can ﬁnd a consistent GEF for equation (4) by simply deﬁning it as the sum of the entire formula: n n EGEFO (S) i 1 Index(Xi ) n i 1 Xj S i ∆(Xi . It has the attribute of gij (Xi . 3 We can also say that GEF equation (9) is inconsistent with GEF equation (2) if we extend the consistency concept to be: evaluation function E1 is consistent with evaluation function E2 if for any two neighboring conﬁgurations S and S . Consistency between ELEF and EGEF ELEF . If we deﬁne the evaluation value of the LEF of each node in each conﬁguration as in Table 2. Xi ). and D1 D2 binaryconstraint problem and all domains are the same. Inconsistency means the individual’s proﬁt (deﬁned by an LEF) will (sometimes) conﬂict with the proﬁt (deﬁned by a GEF) of the whole system. and lower values are of gji (Xj . we have a total of four coloring conﬁgurations. This means all constraints are only between Dn D. The symmetrical penalty of two linked nodes i and j for two colors sl and sh . If an LEF ELEF is consistent with the GEF EGEF ELEF . We want to give some rules for constructing a consistent LEF under the following limitations. in the 2coloring problem for a graph that only consists of two linked nodes i and j. Xi ). we will ﬁnd an inconsistency between the LEF and GEF ELEF . Xj ). (9) It is easy to prove that LEF equation (4) is consistent with GEF equation (9). otherwise it is called an inconsistent LEF.3 Deﬁnition 9. the LEF is also inconsistent with other GEFs? Actually. The coloring problem is a two variables.
. For S sgn(EGEF (S ) sgn and S X1 . volume (year) 1–1+ . Xj ). gij (Xi . Theorem 2. . Han 2. The second part is about constraints between two linked nodes. We only discuss LEFs which have the form EiLEF (S i . Obviously. f (s) . Equation (11) has the same form as a Hamiltonian in spin glasses: the ﬁrst part relates to the external ﬁeld. gij (Xi . We call this gij (Xi . Xi ). . Β balances these two parts. . has the same form as equation (10) is consistent with EGEF That is. . Xj ). Therefore. so is the static penalty function of Xi incurred by the gij (Xi . Xj ) ∆(Xi . an LEF can be consistent. . as stated in Theorem 2. s) f (s) Β Xj S i gij (s.s So the form of the GEF EGEF EGEF (S) i ELEF is: gij (Xi . X2 . Xj ) metrical penalty (see Table 2): in equations (1) through (4) and (9). Xj ) which is a symmetrical penalty with x y 1 and r z 0. Xj ) Complex Systems.20 J. j e. Β k Xj S k gkj (Xk . Xj ) is considered here gji (Xj . Xj ) gji (Xj . equations (1) and (9) are of the form of equation (11). (10) Β > 0 and Β > max f (s ) s . . Xi ). and the second part relates to the interaction between two spins. Only one form of static penalty function gij (Xi . . the penalty function shown in Figure 9(a) is not a symmetrical penalty. 3. X1 . then an ELEF which ELEF . Xj ): Di Dj constraint between Xi and Xj . Given these limitations. ELEF is a consistent LEF. and the related GEFs have the same form as the Hamiltonian in spin glass models. Xj ) f (Xk ) k Β k Xj S k gkj (Xk . Xj ) a symwhich satisﬁes gij (Xi . Equations (3) and (4) are of the form given by equation (10). Xn 1 NS . the form we deﬁned here can cover a big class of LEFs for combinatorial problems. Xn EGEF (S)) f (Xk ) k Proof. X2 . If i. i Xj S i f (Xi ) Β (11) f (s) indicates the preference for color (value) s of the node (agent or variable).
sgn f (s ) f (s) Β Xj S i (gij (s . the LEF value of i is 1. the LEF and the GEF lead the system to evolve to different conﬁgurations. s ) gji (Xj . The LEF is inconsistent with the GEF ELEF in a 2nodelinked graph of a 2coloring problem. jRed Green i 1 Red 4 4 1 2 j 5 4 (c) iRed. Xj ) gij (s. and the LEF value of j is 4. and the LEF of each agent is the utility function of each player. So they are inconsistent even though the GEF is the sum of the LEFs of all agents. Xj ) gji (Xj . jRed 3 4 6 5 3 4 j 6 5 (e) iGreen. If we see “Green” as “Cooperative” and “Red” as “Defective” (a) is actually the payoff matrix of the prisoner dilemma problem [7] which is one of the most interesting problems in game theory. volume (year) 1–1+ . (d) is the Nash equilibrium of the LEF methods.Local and Global Evaluation Functions for Computational Evolution 21 j i Green Red 1 Green 3 3 Red 4 1 (a) Green i 1 Red i 4 2 2 Green 3 6 Red 2 5 4 3 4 j 6 4 (b) iGreen. (b) through (e) are LEF values (upper) and GEF values (lower) of each conﬁguration for nodes i and j. and (e) is the local optimum of the GEF methods. jGreen Figure 9. if i is Green and j is Red. In (b) and (c). For example. s)) Complex Systems. is the tendency to change color given the other node’s color according to the GEF judgment. is the tendency to change color given the other node’s color according to the LEF judgment. (a) LEF values for node i (upper) and j (lower). jGreen Green i Red 2 5 4 1 2 j 5 4 (d) iRed.
Han sgn f (s ) f (s) Β Xj S i (gij (s . volume (year) 1–1+ . s ) Theorem 2 only gives a sufﬁcient condition but not the necessary condition for a consistent LEF. Xj ) gij (s. It seems difﬁcult for a single Complex Systems. Finding theorems for the necessary condition will be our future study because some asymmetrical penalties might be good evaluation functions for efﬁcient search. s)) f (s) Β Xj S i (gij (s . s)). Xj ) gij (s. which means they will direct the system to evolve to different conﬁgurations. Xj ) f (s) . However. such as in minimum coloring problems. Since the LEF is not allowed to use global information. Xj )) EiLEF (S i . Xj ) gij (s . on the other hand. EiLEF (S i . Xj ) (12) where m is the current number of colors. we know we can always ﬁnd a GEF ELEF for any LEF. Xj ) gij (s. Xj ) gij (s. Because Β > maxs . not all GEFs can be decomposed to be n LEFs.22 J. s ) sgn f (s ) f (s) 2Β Xj S i (gij (s . Xj )) . sgn(EGEF (S ) EGEF (S)) sgn(EiLEF (S i .s f (s ) sgn f (s ) f (s) 2Β Xj S i (gij (s . so it cannot know how many colors have been used to color the graph. Xj )) sgn f (s ) f (s) Β Xj S i (gij (s . So far. If we construct a GEF as: n EGEFO (S) m n i 1 Xj S i ∆(Xi . Xj )) . we have gij (s. Xj ) gij (s. we get a naïve evaluation function that reﬂects the nature of the minimum coloring problem. Xj )) sgn f (s ) And sgn(EiLEF (S i . an agent only knows its linked agent and does not know the others. Not all GEFs constructed in this way are consistent with the LEF.
evaluation Function 1 is more efﬁcient than Evaluation Function 2 in Figure 10 because Evaluation Function 2 has many local optima that will trap the evolution. However.3.2. As we know. Except for solution states. One approximate solution here is LEF equation (4). the evaluation function is introduced to make approximate evaluations for all possible states and lead the system to evolve to a solution state. Therefore. we will see equation (12) is more “rational” (rationality is discussed in section 3.2. A rational evaluation function satisﬁes: (1) all solution states should have smaller EF values than all nonsolution states.2 show that the rational GEF equation (12) works much worse than the partial rational GEF equation (12). An important notion here is that the evaluation function is not a part of the problem.Local and Global Evaluation Functions for Computational Evolution 23 agent to make a good decision for the whole system. it is a part of the algorithm: GTSS Algorithms Evaluation Function Exploration Heuristics. since the system needs to decide whether it accepts a new state or not. We will discuss “rational” and “partial rational” evaluation functions in our next paper. Complex Systems. the evaluation function in backtracking has only two EF values). For example. different models can be built for a problem. A good evaluation function will lead the system to evolve to a solution state more directly. what is the main difference between them? Is there any picture that can help us understand how LEF and GEF explore the state space? To understand the behavior of the LEF and GEF. it has to compare the new and current states—which one might be better? Which one might be closer to the solution state? For this purpose. 3. there are several different GEFs for the coloring problem. Therefore. For example. If we compare its consistent GEF equation (9) with GEF equation (12). ﬁrst of all we should understand the role of an evaluation function in searching. models do not always exactly reﬂect everything of the reality. In the GT paradigm. the evaluation function values (EF values) are not builtin attributes for conﬁgurations of a given combinatorial problem. there is no evaluation function and only two possible judgments for current partial conﬁgurations: “no conﬂict” or “conﬂict” (i. (2) all solution states should have the same EF value. So the evaluation function is an internal “model” for the actual problem. it is always difﬁcult to compare which conﬁguration is better than the other.. In the backtracking algorithm.e. The experiments in section 3. it is still impossible to construct an evaluation function that has no local optima for many combinatorial problems. depending on their purpose.2) than equation (9) because equation (12) gives a “fair” evaluation on all solution states while equation (9) does not.2 Exploration of local and global evaluation functions If an LEF is consistent with a GEF. volume (year) 1–1+ .
The map is the model for the conﬁguration space.3).24 J. the ﬂashlight only highlights one point (conﬁguration) each time in the neighborhood. but with a small probability will go to a random place (GEFGR. or with a small probability (e E/T ) move to that point even though it is worse. Traditional GEF methods use only one map. In simulated annealing. and others. So an algorithm includes what map it uses and how it uses the map to explore.” we now illustrate the exploration of LEF and GEF. This is like using a ﬂashlight to look at a map on a dark night.Evaluation Function 2 Local optima Solution Configurations Figure 10. Heuristics are for exploration in the space based on the map. There can be many variations in types of ﬂashlight and strategies for selection on the highlighted part of the map.” Using the metaphors of “map” and “ﬂashlight. Otherwise. which is ndimensional for nvariable problems. most of the time selects the best. It is only possible to get evaluation values for a small part of the conﬁguration space at each time step. It is impossible to compute evaluation values for all conﬁgurations at a time. This is why they are so difﬁcult to solve and people have developed so many heuristics to explore the conﬁguration space. Different evaluation functions for a problem. only part of the map can be seen unless the ﬂashlight is very “powerful. In section 3. see section 3. volume (year) 1–1+ . that is. we show that both the evaluation function (map) and the exploration methods will affect performance. GEF: One ndimensional map and neighborhood ﬂashlight. Han EF value — Evaluation Function 1 . we can simply use the greedy heuristic. it uses a ﬂashlight to look at the neighborhood of the current position on the map and make a decision about where to go (see Figure 11). greedyrandom. for Evaluation Function 1. the system computes the evaluation values 1 of neighbors in NS . But since the map is very big. always selects the best one (Figure 11). Complex Systems. For each time step. totally random. Searching (as in the GTSS style) for solutions is just like walking in a dark night using a map.3. There are several strategies for making decisions based on the highlighted part of the map: greedy. so the evaluation function is the generator of the map. and will move to that point if it is better than the current one.
randommove. In every time step each agent uses its onedimensional slice of its di dimensional map to guide its evolution. and so on. Consistent LEF and inconsistent LEF are very different. So the system accepts 2. Each agent uses its own map. gets to 3. Since the system dispatches ndimensional GEF map of EGEF all agents sequentially. or others. The evolution path is 0 1 2 3 4. Using the distributive LEF methods. which is totally random selection. each agent uses a onedimensional slice of the ELEF .Local and Global Evaluation Functions for Computational Evolution 25 EF value 0 1 2 3 4 a2 a1 Figure 11. always select the best one (Figure 12). all LEF maps can make up a GEF map. For consistent LEF a map of ai is actually a di dimensional slice of the GEF map of EGEF ELEF . The GEF methods for a system consisting of two agents (a1 and a2 ). So here the LEF and GEF use the same map and the difference lies in how they use the map: what ﬂashlights and strategies are used for selecting from the highlighted part of the map. So here the differences between the LEF and GEF lie not only in how they use the map. volume (year) 1–1+ . For inconsistent LEF a map of ai is no longer a di dimensional slice of the GEF map of EGEF ELEF . we summarize the main differences between global and local evaluation functions in Table 3 from two layers: evalComplex Systems. bettermove. but also on which map they are using. di is the degree of ai ). The ﬂashlight illuminates a onedimensional slice each time and the agent makes a decision according to its strategies: leastmove. select a random one and accept it if it is better than the current assignment. 2 is the best neighbor inside this circle. At every time step. The light circle around 1 is the part highlighted by the ﬂashlight when the system is in state 1. As Figure 12 shows. In other words. agent ai has a di dimensional map if ai has di constrained agents (in the coloring problem. To conclude this section. its exploration is from slice to slice. LEF: n di dimensional maps and slice ﬂashlight. the twoagent system evolves as 0 1 2 using slices (b) and (c) of the GEF map (a). highlights its neighborhood.
SA. volume (year) 1–1+ . bettermove. So the exploration is from one slice (dimension) to another: 0 1 2.n) GEF n di (number of (degree of each node Vi ) nodes) Nash equilibria N(ELEF) Local optima L(EGEF) N(ELEF)=L( ELEF ) N(ELEF)≠L( ELEF) ≠ Flashlight Neighborhood onedimensional slice (search scope) Exploration Greedy. (i = 1. We now ask the following questions about Complex Systems. The system starts from “0”.26 J. … random. We believe that using a different evaluation function will result in different performance.. Random. The exploration heuristics also affect performance. then a2 gets a slice (b) through “0” and selects the best move “1”. 3. Suppose it is the turn of a2 to move. This is discussed in the next section. Then it is the turn of a1 to move. Han EF value 0 EF value 0 1 a2 (b) A map slice for agent a2 a2 1 2 2 EF value 1 a1 a1 (a) The GEF map of a system consisting of agents a1 and a2 (c) A map slice for agent a1 Figure 12. uation function (map) and exploration heuristics. LEF Consistent Generator Evaluation Function Dimension Equilibria EGEF ELEF Inconsistent i ELEF . … Table 3. Heuristics GreedyStrategies Leastmove. This paper focuses on the evaluation function layer.3 Experimental results In the previous sections we proposed LEF methods and compared them to traditional GEF methods. a1 gets a slice (c) through “1” and selects the best move “2” and then repeats. Consistent LEF: map slices of GEF and exploration. Differences between the GEF and consistent/inconsistent LEF. random.
6. One speciﬁed LEF/GEF includes several algorithms according to their exploration heuristics and different parameter settings.edu/COLOR/instances.2 separately show results on the kcoloring and minimum coloring problems.3. 3.3. First.html. GEF method: Greedyrandom (GEFGR) always looks for the best conﬁg1 uration in a NS neighborhood and has a small random walk probability. b) Else c S // perform random walk randomselect (current possible colors) replace(S. LEF Evolution(S) { 1. Note that comparisons between local and global evaluation functions are difﬁcult for two reasons.3. 7. 4. i. We now compare the performance of the following three algorithms using different evaluation functions and speciﬁed parameters. 13].4 Sections 3.1.1 and 3. volume (year) 1–1+ . 2. } 4 Available 1 to n do // perform leastmove strategy Choose a random number r in [0. the LEF and GEF are concepts for evaluation functions. Return S at: http://mat. c) If S is a feasible solution.cmu. For each agent i 2. 1] If r < Pleastmove Find one color c that for all other possible colors b. GEF method: Simulated annealing (GEFSA) [29]. is their performance similar? In this section we present some preliminary experimental results on a set of coloringproblem benchmarks from the Center for Discrete Mathematics and Theoretical Computer Science.Local and Global Evaluation Functions for Computational Evolution 27 performance: Can the LEF work for coloring problems? Can an LEF (GEF) solve problems that a GEF (LEF) can solve if they are inconsistent? If an LEF and a GEF are consistent. 3.3. Here are details of the Evolution(S(t)) of the three algorithms in the framework shown in section 3. call ProcessSolution(S ) 9. Xi . LEF method: Alife&AER [12. it is difﬁcult to avoid embodying some preliminary knowledge in the algorithm. So the following experimental comparisons are limited to the speciﬁed settings of algorithms and problem instances. Some observations are given in section 3.gsia. Complex Systems. i i ELEF (S i . Second. 8. c) ELEF (S i . 5. 1.
10. call ProcessSolution(S ) 7. S 5. If S is a feasible solution. Else If r S Else 1 randomselect(NS ) EGEF (Sr ) 0 Sr >0 e Sr /T EGEF (S) // perform downhill move Choose a random number r in [0. 9. 7. that for all other EGEF (S ) Find one neighbor S 1 NS . Return S } GEFSA Evolution(S) { 1. If S is a feasible solution. If r < Pgreedy 3. 6. Sr 2. Han GEFGR Evolution(S) { 1. // perform greedy strategy 1 NS . Choose a random number r in [0. // perform random walk 1 randomselect(NS ) 6. Else 5. 8. Return S } ProcessSolution(S) { #If it is a kcoloring problem: Output the result S #If it is a minimum coloring problem: If S uses m colors and m is the chromatic number of the graph Output the result S and m Else if S uses m colors and m is less than the current number of possible colors Current Number of Possible Colors m 1 // reduce the number of possible colors. EGEF (S ) S S 4. call ProcessSolution(S ) 12. If 4.28 J. so NS is reduced } Complex Systems. 1] 2. 3. volume (year) 1–1+ . 1] // perform uphill move // do not accept the new state return S 11.
ei is the number of edges from node Vi . Pgreedy in GEFGR is also simply set to 1. and minPercent 0. the better its performance will be. we give the CPU runtime on a PIII1G PC WinXP for ﬁnding a solution and then measure the runtime as operation counts [38]. where n is the problem size. If so. Table 4 lists the complexities of the LEF.3. GEFGR. The following experiments examine the performance of ﬁnding a solution. So from Table 4 we can get this relationship of complexity for an agent move: GEFSA < LEF < GEFGR.3. and a time step includes n agent moves. freezeLim 10. which is the number of agent moves used in ﬁnding a solution. and GEFSA algorithms. First.5n/(1.2 where some comparisons are given. tempFactor 0. The complexity mainly relates to calculating conﬂicts and comparing EF values (the complexity of picking up the smallest value from m values is O(m)). in a kcoloring problem m is constantly equal to k.5n/(1.1 and 3. Parameter settings in GEFSA are the same as in [29]: initialT 10. and GEFSA methods where e is the number of edges in the graph. A time step in both GEFGR and GEFSA only includes one agent move. volume (year) 1–1+ . It is easy to think that the more complex an agent’s move is. The setting of Pleastmove in the LEF method is equal to 1. Complexity of an agent move and a time step of the LEFAlife&AER. the number of agent moves for ﬁnding a solution would have the relation: GEFSA moves > LEF moves > GEFGR moves. The cost of an agent move in the LEF includes steps 2 through 8 in Evolution(s).95.5n 1).Local and Global Evaluation Functions for Computational Evolution 29 Measurement Algorithm LEFAlife&AER GEFGR GEFSA An agent move of ai O(2 ei O(2 ei m) nm n) O n i 1 A time step (2 ei m) O(4 e nm n) nm) O(2 ei O(2 ei ) O(2 ei ) Table 4. So. The cost of an agent move in GEFSA includes steps 1 through 11. sizeFactor 2. and m is the current number of colors being used. cutoff 0.1.5n 1).02. The cost of an agent move in GEFGR includes steps 1 through 6. Complex Systems. and in a minimum coloring problem m is a dynamic decreasing value but is always less than n. Is this true? Let us look at the experimental results in sections 3. GEFGR.
30 J. d. it still beats the Complex Systems. The GEF does not always lead the evolution more efﬁciently than the LEF although it uses more information to make decisions and has centralized control over the whole system. d. The following comparisons are included: a. Inconsistent case: LEF equation (4) versus GEF equation (2) versus GEF equation (12). and decreases it if a feasible mcolor solution is found. c. Consistent case: LEF equation (4) versus GEF equation (9). We test the CPU runtime (Figure 15) and number of agent moves (Figure 16) for ﬁnding a Χ(G)color solution when Χ(G) is unknown. The LEF of each agent can only use local information to guide itself and sometimes there is not enough information to make a good decision. b. although the LEF only knows local information and the purpose is global. volume (year) 1–1+ . Different GEFs: GEF equations (2) versus (9) versus (12). (2).3. b. and LEF equation (4) versus GEF equation (2). and (9)). LEF equation (3) versus GEF equation (2).3.2 Minimum coloring problem Finding the chromatic number of the graph in a minimum coloring problem is a global goal. The system starts with a large number of colors m. Different exploration heuristics for the same evaluation function: GEFGR equation (2) versus GEFSA equation (2). Consistent cases: LEF equation (3) versus GEF equation (1). 3.3. the LEF (especially LEF equation (4)) beats the GEFs in CPU runtime in almost all instances.1 kcoloring problem We compare the CPU runtime (Figure 13) and the number of moves (Figure 14) for ﬁnding a solution for a given k colors using different LEFs (equations (3) and (4)) and GEFs (equations (1). LEF equation (4) versus GEF equation (1). c. Inconsistent cases: LEF equation (3) versus GEF equation (9). 3. and LEF equation (4) versus GEF equation (9). GEF equations (1) versus (2) versus (9). The following comparisons are included: a. Different exploration heuristics for the same evaluation function: GEFGR equation (1) versus GEFSA equation (1) versus LEF equation (3). Different evaluation functions for LEF/GEF: LEF equations (3) versus (4). In the kcoloring problems. In the minimum coloring problems.3 Observations from the experiments (1) LEFs work for the kcoloring and minimum coloring problems. until m is equal to Χ(G). Han 3.
001 second) and the highest value denoting that the runtime is more than 6000 seconds. volume (year) 1–1+ . with the lowest value denoting zero (less than 0.Local and Global Evaluation Functions for Computational Evolution 31 >6000 Figure 13. Average CPU runtime (second) of 100 runs for the kcoloring problems. 0 Complex Systems. Note that the yaxis is a logplot.
32 J. Note that the yaxis is a logplot. Han Figure 14. with the highest value “unknown” denoting that we have not tested the number of agent moves because the runtime is more than 6000 seconds. volume (year) 1–1+ unknown . Average number of agent moves of 100 runs for the kcoloring problems. Complex Systems.
Average CPU runtime (second) of 100 runs for solving the minimum coloring problems. volume (year) 1–1+ 0 . Note that the yaxis is a logplot. with the lowest value denoting zero and the highest value denoting that the runtime is more than 6000 seconds. Complex Systems.Local and Global Evaluation Functions for Computational Evolution 33 >6000 Figure 15.
with the highest value “unknown” denoting that we have not tested the number for agent moves because the runtime is more than 6000 seconds. Note that the yaxis is a logplot. Average number of agent moves of 100 runs for solving the minimum coloring problems. Han Figure 16.34 J. Complex Systems. volume (year) 1–1+ unknown .
Although GEF equation (9) is consistent with LEF equation (4) and GEF equation (2) works better than GEF equation (9) in most instances. There are so many efforts on exploration heuristics that we know they will affect performance. queen8_8. equation (4) beats equation (3) strongly. inithx. and zeroin. mulsol. miles250. One important reason is that the speciﬁed local search heuristic embodies preliminary knowledge of the Nqueen problem. so performance between LEF equation (3) and GEF equation (1) is more alike than that of LEF equation (4) and GEF equation (9). fpsol2. such as distributions and deviations of runtime. For the kcoloring problem (a decision problem) the performance of LEF equations (3) and (4) is very different. while GEF equation (1) cannot solve it within 6000 seconds. and zeroin within 2 seconds. GEF equation (12) cannot solve more than half of those instances. GEF equation (2) can solve fpsol2. the GEF selection will affect performance when solving function optimization problems [39]. inithx. but easy for GEF equation (2). For more experimental results. which helps to break the symmetry [40] in the coloring problems [41]. inithx. In the minimum coloring problem. (4) Exploration heuristics affect performance. As we know. The LEF is much faster than the GEFs for homer. We guess that is because part (a) of equation (2) puts pressure on each node to select a small index number color. and zeroin. In addition to coloring problems. mulsol. while equation (3) slightly beats equation (4) in miles and queen. LEF equation (4) is consistent with GEF equation (9) while LEF equation (3) is consistent with GEF equation (1). the LEF also works for Nqueen problems [12–14]. Complex Systems. GEF equation (2) works better than GEF equation (9) on average. (3) Performance of an LEF and a GEF are more alike if they are consistent. queen7_7. they will behave more alike. see [12–14]. In fpsol2. and performance based on different parameter settings.Local and Global Evaluation Functions for Computational Evolution 35 GEFs for most instances except queen8_12 and queen9_9. Using the LEF for the Nqueen problem can beat a lot of algorithms. These results tell us that if the Nash equilibria of the LEF are identical to the local optima of the GEF. queen8_12 is difﬁcult for LEF equation (4) and GEF equation (9). mulsol. volume (year) 1–1+ . (2) Evaluation function affects performance. LEF equation (4) and GEF equation (9) can solve fpsol2. while the others can solve them. inithx. LEF equation (4) still beats GEF equation (2). for example. the performance difference is very clear. mulsol. This is also true for combinatorial problems. In the kcoloring problem. and zeroin quickly while LEF equation (3) and GEF equation (1) cannot. but it is not as fast as one speciﬁed local search heuristic [18]. LEF equation (3) and GEF equation (1) beat LEF equation (4) and GEF equation (9) in miles1000 and queen8_12. In the minimum coloring problem (an optimization problem).
Even though they have the same evaluation function (map). However. In both the kcoloring and minimum coloring problems. the algorithmic details will not change the performance for a problem instance very much. Even using the same evaluation function. So this suggests that in some instances using local information does not always mean decreasing the quality of the decision. Both using GEF equation (2). that GEFSA uses fewer moves than GEFGR when solving kcoloring problems.36 J. and zeroin. GEFGR looks at n neighbors and selects one. (5) Larger complexity does not always mean better performance of an agent movement. and queen9_9. the LEF might be good for some networks while the GEF might be good for other networks. they still work differently in some instances. The new outlook suggests several insights and lines of future work and applications. We guess the graph structure (constraint network) will have a big impact on the performance of the LEF and GEF. 4. LEF equation (4) and GEFSA equation (9) are consistent but the LEF works faster than GEFSA. volume (year) 1–1+ . GEFSA and GEFGR perform differently. queen8_8. Han This is also true for the consistent LEF and GEF. (6) Performance of the LEF and GEF is dependent on the problem instance. inithx. even in the consistent case. In the language of multiagent systems. some interaction structures are more suitable for selforganization to reach a global purpose. we still see in some problem instances. In other words. In the minimum coloring problems. mulsol. We have the ranking for complexities of an agent move in different algorithms: GEFGR > LEF > GEFGR. GEFGR does not always use the fewest agent moves to reach the (optimal) solution state in coloring problems. Conclusions This paper proposes a new look at evolving solutions for combinatorial problems. LEF equation (3) beats GEF equation (1) in miles and queens while GEFGR equation (1) beats LEF equation (3) and GEFSA equation (1) in mulsol and zeroin. In the kcoloring problem. GEF equation (1) and LEF equation (3) are not good for fpsol2. queen7_7. As in our other paper [14]. the LEF uses fewer moves than GEFGR in some instances. LEF equation (4) and GEF equation (9) are consistent but LEF equation (4) beats GEF equation (9) strongly. Complex Systems. especially in queen7_7. and queen9_9. such as queen5_5. while GEFSA only looks at one neighbor. while some interaction structures favor centralized control. such as queen5_5 and queen7_7. The LEF ﬁts for most instances except queen8_12 in the minimum coloring problem. using NPcomplete cases of the kcoloring problem (a decision problem) and the minimum coloring problem (an optimization problem). Finding the “good” networks for the LEF and GEF is a very important future topic.
Xj ) in an ELEF is a symmetrical penalty. the ELEF is consistent with ELEF . There are two ways to construct evaluation functions: the global evaluation function (GEF) uses global information to evaluate a whole system state. they still explore differently. so even though the LEF is consistent with a GEF. if the global goal can be expressed by a GEF that is consistent with the LEF. ELEF can be consistent or inconsistent with ELEF . this theorem means that the LEF and GEF have the same ground state structure and the same number of solutions. volume (year) 1–1+ . Some remaining questions are: What is the condition for the existence of a consistent GEF for any LEF? What is the condition for the existence of a consistent LEF for any GEF? How do network properties relate to consistency of LEF and GEF in the coloring Complex Systems. Even though an LEF is consistent with a GEF. Experimental results support this: performance of LEF and GEF are more alike if they are consistent. This helps us understand why the LEF can guide agents to evolve to a global goal state. In spin glass theory. it still behaves differently and works better in some instances. Table 3 summarizes the differences between GEF and consistent/inconsistent LEF.Local and Global Evaluation Functions for Computational Evolution 37 Finding solutions for a coloring problem by using generatetestsinglesolution (GTSS) algorithms is like the evolution of a multiagent system. EGEF This paper gives a preliminary theorem for constructing consistent LEF which have the form of the spin glass Hamiltonian (equation (11)): If the penalty function gij (Xi . In addition to the research reported. and the local evaluation function (LEF) uses local information to evaluate a single agent state. When an LEF is inconsistent with a GEF. there is still a lot of future work. the LEF evolution proceeds differently than the GEF evolution. A consistency theorem is proposed (Theorem 1) which shows that Nash equilibria (Deﬁnition 6) of an LEF are identical to the local optima (Deﬁnition 5) of a GEF if the LEF is consistent with the GEF. The evaluation function of the GTSS algorithms is the internal ranking of all conﬁgurations. 43]. The LEF and GEF concepts provide both a new view of algorithms for combinatorial problems. So the LEF can be partly explained by the current theoretical results on traditional GEF [16. 42. Obviously. and for the collective behavior of complex systems. The experimental results show that exploration heuristics affect performance. we can always construct a GEF by summing up all of the agents’ LEFs as ELEF . Using coloring problems clariﬁes the concepts of “local” and “global. Since the way to construct a GEF is not unique for a problem. We analyze the relationship between LEF and GEF in terms of consistency and inconsistency.” These concepts are important distinguishing characteristics of distributed and centralized systems. as well as the same phase transitions [1]. 37. More study on consistent and inconsistent LEFs should give more insights into computation and game theory. the LEF’s Nash equilibria are not identical to the GEF’s local optima.
Dengyong Zhou.red3d. [3] http://www. UK. Jose Lobo.” in New Ideas in Optimization. edited by D. Axelrod. Cambridge. [6] P. http://www. 1966). UrbanaChampaign.com/cwr/boids/applet. A demonstration of the Boids model of bird ﬂocking is provided. M. Glover (McGraw Hill. and K.bitstorm. Lidror Troyansky. Complex Systems. 1999). John Miller. Sam Bowles. 1984). and Tamas A. Wiesenfeld.38 J. 223 (October 1970) 120–123. The Evolution of Cooperation (Basic Books. Corne. “Selforganized Criticality. volume (year) 1–1+ . 38 (1988) 364–374. Scott Kirkpatrick. Jeffrey and M. 5. Dorigo and G. “The Ant Colony Optimization Metaheuristic. Thanks to Laura Ware for proofreading. Acknowledgments This paper is supported by the International Program of the Santa Fe Institute and the National Natural Science Foundation of China. “Determining Computational Complexity from Characteristic ‘Phase Transitions. Vicsek. Bart Selman. 1999). Jim Crutchﬁeld. 400 (1999) 133–137. IL. 407 (2000) 487–490. Bak. Theory of SelfReproducing Automata (University of Illinois Press. Illes J.’” Nature. and F. Bradshaw (AAAI Press/MIT Press. [7] R.” Nature. Cris Moore. [9] M. ChienFeng Huang and other people for helpful discussions.” Physics Review A. edited by A. Farkas. “Simulating Dynamical Features of Escape Panic. References [1] Remi Monasson. [2] John von Neumann. Dorigo. Thanks also go to Mark Newman.’” Scientiﬁc American. C. Li Ming. [5] Dirk Helbing. Eric Smith. Melanie Mitchell. Han problems? How can we construct good LEFs and GEFs for a problem? LEF is about individual and GEF is about the whole system. [8] David Wolpert and Kagan Tumer. The author wants to thank Lei Guo and John Holland for their strong support and discussions. Riccardo Zecchina. “An Introduction to Collective Intelligence.org/gameoﬂife. Supriya Krishnamurthy. Tang. Di Caro. “Mathematical Games: The Fantastic Combinations of John Conway’s New Solitaire Game ‘Life. how about the evaluation functions for compartments? Can we study consistent and inconsistent LEFs in the context of game theory? These will be our future study. New York. [4] Martin Gardner.” in Handbook of Agent Technology. Haitao Fang.
G. McAllester. Jiming Liu. and M. E. Inc. and H. 13(1) (1992) 32–44. Johnston. “Emergence from Local Evaluation Function. Selman. [16] J.Local and Global Evaluation Functions for Computational Evolution 39 [10] S. 1999). [18] Rok Sosic and Jun Gu. “The Structure and Function of Networks. [21] Bart Selman. Jr. [22] D. B. Laird. Minton. [14] Han Jing and Cai Qingsheng. 119 (2000) 275–286. P. “Multiagent Oriented Constraint Satisfaction. [11] S. Kumar.” in Encyclopaedia of Artiﬁcial Intelligence. 1987). Percus. Johnson. Veechi. A. 1993.” Journal of Systems Science and Complexity. A demonstration of Alife Model for solving Nqueen problems can be downloaded from http://www. 147 (2002) 40–45. Kirkpatrick.exe. edited by Jiming Liu and Ning Zhong (The World Scientiﬁc Publishing Co. [12] Han Jing. “From ALIFE Agents to a Kingdom of N Queens.” Artiﬁcial Intelligence. “Optimization by Simulated Annealing.” Physics Review E. Boettcher and A. Coloring. “Nature’s Way of Optimizing. van Mourik and D. RI.” AI Magazine. 66(5) (2002) 056120. “Evidence for Invariants in Local Search. and Satisﬁability..” ORSA Journal on Computing. Kautz. [23] F.” Artiﬁcial Intelligence. [13] Jiming Liu. Pte. [20] S. “Minimizing Conﬂicts: A Heuristic Repair Method for Constraint Satisfaction and Scheduling Problems. C. D. New York. [15] Vipin Kumar. 136(1) (2002) 101–144. “Random Graph Coloring: A Statistical Physics Approach. and Tools.” IEEE Transactions on Knowledge and Data Engineering.. J. Methodologies.” Science. and Yuan Y. Gellat. Philips. C. Shapiro (John Wiley and Sons. Trick and D. 220(4589) (1983) 671–681. Kautz. Ltd.” Computer Physics Communications. Santa Fe Institute working paper 200208036. Providence. Volume 2. and P. “Algorithm for Constraint Satisfaction Problem: A Survey. Newman. edited by S. D. Saad. volume (year) 1–1+ . 52 (1992) 161–205. “Tabu Search: Part I. [19] V.” in Intelligent Agent Technology: Systems. 16(3) (2003) 372–390. Complex Systems. 1997). Tang.” in Second DIMACS Challenge on Cliques. Henry A. S. B. “Efﬁcient Local Search with Conﬂict Minimization: A Case Study of the Nqueen Problem. and Cai Qingsheng. M.. USA.santafe. “Local Search Strategies for Satisﬁability Testing. and Bram Cohen. “DepthFirst Search. edited by M. Glover.” in Proceedings of AAAI’97 (AAAI Press/The MIT Press. [17] M. 6(5) (1994) 661–668. 1(3) (1989) 190–206. Han Jing.” Artiﬁcial Intelligence.edu/ hanjing/dl/nq.
1995). Spin Glass Theory and Beyond (World Scientiﬁc. [30] D. 123 (2002) 75–102. Kennedy and R. Cecilia R. [29] David S. Part II. 39(4) (1987) 345–351. “Combinatorial Optimization on a Boltzmann Machine. NJ. Henry A. Eberhart. 39(3) (1991) 378–406.” in Proceedings of AAAI’94 (MIT Press. [36] Eric van Damme. and A.” ORSA Journal on Computing. [35] E. Singapore and Teaneck. Lyle A. Adaptation in Natural and Artiﬁcial Systems (The University of Michigan Press. J. McGeoch. Ergun. WA. NJ. San Jose. Orlin. and D. Holland. 1994).40 J. H. de Werra. 1987). Glover. 1987). Orlin. and Miguel Angel Virasoro. [37] Marc Mezard. 22. Complex Systems. [33] R. 251–256. “A Survey of Very Largescale Neighborhood Search Techniques. “Using Tabu Search Techniques for Graph Coloring. [32] Jacques Ferber. “New Methods to Color Vertices of a Graph. [27] J. Mitchell. Multiagent Systems: An Introduction to Distributed Artiﬁcial Intelligence (AddisonWesley. Han [24] F. “Particle Swarm Optimization. and Bram Cohen. Berlin and New York. 1975). [26] B. [25] B. Levesque.” Computing. Kautz. CA. “Noise Strategies for Improving Local Search. “A New Method for Solving Hard Satisﬁability Problems. [31] A. Piscataway.” Operations Research. Giorgio Parisi. Seattle. 2(1) (1990) 4–32. Ahuja. [34] Jan HM Korst and Emile HL Aarts. New York. Hertz and D.” IEEE International Conference on Neural Networks IV (IEEE Service Center. and Catherine Schevon. [28] J. “Optimization by Simulated Annealing: An Experimental Evaluation. Graph Coloring and Number Partitioning.” Discrete Applied Mathematics. Selman. B. Ahuja and James B. Korst. Selman. “Tabu Search: Part II.” Journal of Parallel and Distributed Computing. 1989). O. 1992). Aragon.” Communications of the ACM. Johnson. Ann Arbor. 1999). H. Aarts and J. P.” INFORMS Journal on Computing. volume (year) 1–1+ . Punnen. Simulated Annealing and Boltzmann Machines: A Stochastic Approach to Combinatorial Optimization and Neural Computing (John Wiley & Sons. [38] Ravindra K. “Use of Representative Operation Counts in Computational Testing of Algorithms. Brelaz. 6(2) (1989) 331–357. 8(3) (1996) 318–330.” in Proceedings of AAAI’92 (MIT Press. Stability and Perfection of Nash Equilibria (SpringerVerlag. K.
Zecchina. “Constraint Handling Techniques: Penalty Functions. Damper.” in Proceedings Genetic and Evolutionary Computation Conference (GECCO2000). Fogel.” Evolutionary Computation. 1997). [43] S. A. Cocco. R. D. A. edited by T. condmat/0208460. Marino and R. 89 (2002) 268701. M. 2000). “Breaking the Symmetry of the Graph Colouring Problem with Genetic Algorithms. Weigt. [41] A. NV (Morgan Kaufmann Publishers. Late Breaking Papers. volume (year) 1–1+ . edited by W. I. [40] Ian P. [44] Wim Hordijk. Baeck. Mulet. [42] R. 2000). Semerjian.” condmat/0302003. Smith. “Coloring Random Graphs.” in Handbook of Evolutionary Computation. Gent and Barbara M.” in Proceedings ECAI’2000.Local and Global Evaluation Functions for Computational Evolution 41 [39] Alice Smith and David Coit. Complex Systems. Michalewicz (Oxford University Press. Pagnani. Horn (IOS Press. Monasson. San Francisco. 4(4) (1996) 335–360.” Physical Review Letters. “Symmetry Breaking in Constraint Programming. and R. and Z. “A Measure of Landscapes. and G. Montanari. “Approximate Analysis of Search Algorithms with ‘Physical’ Methods. Las Vegas.